Oh what shall we do with our new probabilistic toys?

Patterns, synthetic reasoning, human-readable formats… oh my.

Andrew Bowers
Scribblings on Slate

--

I attended an MIT alumni AI conference this past weekend in the Bay Area. It was a great opportunity to think and talk about AI across a broad range of domains. Throughout the day, I jotted down some notes, which I hope to expand on in the coming weeks. So much to think about. Sharing here, as usual, to clarify my thinking, but more importantly to hear what others are thinking. Let me know!

When it comes to foundational generative AI, some are drawing comparisons to the early days of the PC: it is good at a lot of things, but we won't know what it is really good at until it is applied to specialized domains.

I’ve heard Benedict Evans often use the home-computer-as-kitchen-recipe-organizer as an example of how off-the-mark early use cases can be. I love those quirky ads, though I don’t know how widespread this idea actually was versus how meme-ready the ads are. Nevertheless, what seems beyond debate is that new general-purpose technologies lead to experiences that just don’t fit with today’s mental models. In the 1980s, people mostly used a home PC to play games, write documents, and maybe track their finances. No one was thinking about a computer as a camera or as a communication device you carry around in your pocket.

Midjourney has become a key part of my workflow for posting. It lets me communicate visually in ways that previously weren’t feasible within my time or cost constraints. I think that’s indicative of generative AI more broadly.

Patterns, Reasoning, Human-Readable Output

To my thinking, generative AI can be described as having three main capabilities:

  1. Discovering patterns people can and, more profoundly, can’t see
  2. Reasoning synthetically about those patterns
  3. Outputting data in human-readable, understandable formats like text and visuals

The first pertains to machine learning in general, but the latter two appear to be emergent properties of large language models.

Probabilistic, not deterministic

There’s one other characteristic to layer on top of those capabilities. Unlike the computer programs we are used to, generative AI is probabilistic in nature. GenAI naturally slots into places where there is ‘no right answer’. It can also fit where there does need to be a right answer, provided the system is designed with a human in the loop to judge quality, verify, and edit outputs.
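To make that human-in-the-loop idea concrete, here is a minimal sketch in Python. It’s my own illustration, not anything from a specific product: generate_draft and human_review are hypothetical placeholders for a model call and a manual review step.

    # Hypothetical sketch: the model proposes, a person verifies and edits before anything ships.

    def generate_draft(prompt: str) -> str:
        # Placeholder for a call to a generative model (assumed, not a real API).
        return f"[model-generated draft for: {prompt}]"

    def human_review(draft: str) -> str:
        # Placeholder for the manual step: a person approves the draft or types an edited version.
        edited = input(f"Review this draft (press Enter to approve, or type an edit):\n{draft}\n> ")
        return edited or draft

    def produce_answer(prompt: str) -> str:
        draft = generate_draft(prompt)
        # The probabilistic draft only becomes the 'right answer' once a person signs off.
        return human_review(draft)

    if __name__ == "__main__":
        print(produce_answer("Summarize this policy document in plain language."))

The point of the sketch is simply that the generative step and the judgment step are separate: the model’s output is treated as a draft, and a person remains the gate between draft and answer.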

While time will ultimately tell what the killer use cases are, from where we stand today, here are a few areas that seem ripe for applied genAI.

1) Creation based on known patterns

Anywhere there are established patterns upon which to build, generative AI is going to have an application. Text and image creation are two obvious ones, but I think this extends much further, to essentially anywhere there are known and documented patterns that work. User interfaces and programming fit this. I mean, there are whole books and resources devoted to best-practice design patterns in both.

The wild part is how these models can extrapolate and identify ‘unknown’ patterns. Will we get a new Hero’s Journey out of generative AI? New user interfaces that work really well but that we never would have imagined? We saw novel Go moves come out of AlphaGo, so I don’t see why not. But I don’t think we have to fixate on the sci-fi part of this. The productivity boost from working off existing patterns will be a success in itself.

2) From search to generation

With ‘traditional’ search, the burden of processing and synthesizing the answer was largely on the human. You found resources that contained the information, read them, and synthesized the answer yourself.

Generative AI takes the burden of processing and synthesizing off the end user. Ironically, search engines like Google have been capable of synthesizing text answers for quite a while, but for various reasons chose to be selective about where they applied this ‘answer’ approach. I don’t think we fully understand the second-order effects of synthesizing an answer, though we are potentially seeing some signs (e.g. Stack Overflow usage dropping).

3) Inference assistants

An area that I think should be getting more attention is gen AI as an inference assistant. Given a set of inputs, it can offer, in human language, a set of potential causes. Notwithstanding risks like bias and hallucinations, this is a really powerful concept that seems fairly new in the world of computers.

4) Automated interactions

Lastly, because of gen AI’s second and third capabilities (reasoning and human-readable output), it can also partially automate interactions that require humans and are bottlenecks today. This may sound like a threat, but the people I hear in healthcare and customer service talk more about expanding access and serving more people than about replacing workers.

These are just some of my notes from the weekend. I’m curious to hear how others are thinking about the capabilities and uses of generative AI. Let me know.
