Generative AI Use Cases
"Field of Flowers" by OpenAI's DALL-E 2


Where we are (now)

ChatGPT was introduced on November 30, 2022, so it will be 8 months old at the end of July. Yes, it is true that GPT-3 was introduced in 2020 and was the first LLM to produce surprisingly human-level text. And BERT was introduced in 2018. And the "Attention Is All You Need" paper was published in 2017. And gee, if we want to get historical, the term "artificial intelligence" was coined in 1955... But let's be realistic -- real interest in and awareness of the possible uses of machine intelligence in business has only been a "thing" for about 8 months.

I mention this because it is a little unrealistic to expect, 8 months into any "new" technology, any level of precision about what the use cases are and what the ROI of those use cases might be. This is especially true for a general-purpose technology -- a widely discussed parallel is the electrification of factories. It took time to convert factory tools from belt-driven to electric motors, and then it took a breakthrough in thinking -- redesigning the factory building itself -- to unlock the real value of electricity, because at that point it changed the entire system of work.

Foundation models (generative AI) are going to change our work more quickly, since changing a digital process is easier and faster than changing a physical process (as happened with electrification). But there are still many human barriers to change -- this post is intended to free your mind and help make that change happen for yourself and your company...

"I'm trying to free your mind, Neo. But I can only show you the door. You're the one that has to walk through it. You have to let it all go." (Morpheus in The Matrix)

A starting point: let's anchor on what a foundation model can offer -- importantly, not what it is. This is where people often get caught in their thinking and then cannot progress to understanding how it should be applied. Elsewhere there are philosophical explorations of sentience, consciousness, self-awareness, and related topics, but for this discussion, focus only on the capability that is manifest in the technology: foundation models offer a quantum of human-level thinking, often oriented around a specific symbolic system (language, code, images) but increasingly multi-modal (more than one symbolic system simultaneously).

I use the word "quantum" because I want to emphasize that it is a discrete unit of the phenomenon of thought -- it does not (today) offer general human-level thinking. Again, there are lots of other places where you can have the debate about artificial general intelligence (AGI) from a technical, philosophical, or even metaphysical perspective -- but for this discussion let's stick with the concept of a capability that contributes a discrete unit of thought.

So to get value from this technology in a real-world use case, we can logically take the next step and recognize that this quantum of thought must be combined with human thought in order to be valuable. There are two primary ways it can be combined, which I will call steerage and correction.

Steerage -- to engage a foundation model in contributing its quantum of thought, a human being must set it on its way with direction and context. This can happen explicitly (such as through a prompt) or implicitly, with the LLM following along with what the person is doing (such as code or sentence completion while the person is typing).

Correction -- since the LLM is merely providing a discrete unit of thought, a human needs to examine the content of that unit to be sure it is a suitable response to the steerage provided and that it has not introduced inaccuracies.
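The steerage-and-correction loop can be sketched in a few lines of Python. Everything here is illustrative: `model` and `acceptable` are hypothetical stand-ins for a real LLM API call and a real human review, not any actual library interface.

```python
def model(prompt: str) -> str:
    """Stand-in for a foundation model: returns a draft given steerage.
    In practice this would be a call to an LLM API."""
    return f"DRAFT based on: {prompt}"

def acceptable(draft: str, prompt: str) -> bool:
    """Stand-in for human correction: does the draft answer the steerage?
    A real reviewer checks accuracy and suitability, not substrings."""
    return prompt in draft

def steer_and_correct(prompt: str, max_rounds: int = 3) -> str:
    """Steer the model, examine the output, and re-steer with feedback
    until the draft is acceptable (or rounds run out)."""
    steerage = prompt
    draft = ""
    for _ in range(max_rounds):
        draft = model(steerage)        # steerage: human sets direction/context
        if acceptable(draft, prompt):  # correction: human examines the output
            return draft
        steerage = f"{prompt}\n(Please revise: the previous draft was off.)"
    return draft  # a real workflow would escalate to a full human rewrite

result = steer_and_correct("Summarize the Q3 inspection findings")
```

The point of the shape, not the stubs: the model contributes the draft, but the human supplies both ends of the loop.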

It stands to reason that the quality of the steerage and correction done by the human will determine the value of the quantum of thought contributed by the foundation model. We can see this already playing out in the most advanced use case for foundation models: software development coding assistants. These tools are steered by a developer either by prompt or through active monitoring of the code-writing process, and the results are managed and improved by the programmer through testing and analysis. A recent GitHub survey on AI's impact on the developer experience found that:

  • 92% of US developers are already using AI coding tools
  • 57% of developers believe AI coding tools help them improve their coding language skills
  • In previous research GitHub conducted, 87% of developers reported that the AI coding tool GitHub Copilot helped them preserve mental effort while completing more repetitive tasks.
  • GitHub also measured a 55% reduction in time to complete a sample programming task when using Copilot.
  • AI coding tools also help developers upskill while they work.

While GitHub is a product vendor, independent studies are also examining this first mature use case for foundation models, including a study conducted in conjunction with MIT: The Impact of AI on Developer Productivity.

So here is the step where I free your mind. But let me first recount the three prior steps:

  1. Foundation models offer a quantum of human-level thinking, often oriented around a specific symbolic system (language, code, images) but increasingly multi-modal (more than one symbolic system simultaneously);
  2. The value of the contribution made by a foundation model will depend upon combining this quantum with steerage and correction by humans;
  3. An early proof point on the value in a complex human thinking use case can be found in software development coding assistants which are already broadly adopted and are changing how developers work while increasing their productivity...

Step 4 - define possible use cases: everything, everywhere, all at once.

I really mean this. Everything we do requires thought. Adding a quantum of thought from machine intelligence will therefore help accelerate everything we do. The challenge in implementing any given use case will be redesigning the work to provide for human steerage and correction.

Other purely office tasks might look like a coding assistant, but let's take a challenging example: what would be required for a building inspector to utilize a foundation model in the inspection process? It might require a wearable camera and microphone to actively monitor the physical environment the inspector is moving through. Image processing and voice-to-text processing would take these real-world inputs and allow the model to make suggestions in real time as well as help write up the inspection results.
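As a thought experiment, the inspector workflow might be wired together like this. Every name here (`Observation`, `suggest`, `write_up`) is a hypothetical stand-in; a real system would plug in actual camera capture, a speech-to-text service, and a multimodal foundation model.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """One moment of the walkthrough, already converted to text."""
    image_description: str  # output of image processing on the camera feed
    spoken_note: str        # output of voice-to-text on the microphone

def suggest(obs: Observation) -> str:
    """Stand-in for the model's real-time suggestion. Steerage here is
    implicit: the model follows what the inspector sees and says."""
    return f"Check: {obs.image_description}; inspector noted: {obs.spoken_note}"

def write_up(observations: list[Observation]) -> str:
    """Draft the inspection report from the day's observations; the
    inspector then corrects the draft before filing it."""
    lines = [suggest(o) for o in observations]
    return "INSPECTION REPORT (draft)\n" + "\n".join(lines)

report = write_up([
    Observation("hairline crack at foundation, NE corner", "monitor, not structural"),
    Observation("junction box missing cover plate", "code violation, must fix"),
])
```

Note that the human steerage and correction steps survive intact: the inspector steers by where they point the camera and what they say, and corrects by editing the drafted report.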

"You take the blue pill... the story ends, you wake up in your bed and believe whatever you want to believe. You take the red pill... you stay in Wonderland, and I show you how deep the rabbit hole goes." (Morpheus in The Matrix)

You have the opportunity, like Alice, to leave the old world behind and enter a strange new world where the usual physical laws don't apply -- Wonderland. In this rabbit hole the old laws don't hold because the nominal cost of a quantum of human thought is going to zero and the speed of that thought is effectively instantaneous.

Instantaneous thought at a price near zero changes everything. Then you just have to figure out the steerage and correction for a given use case and you can disrupt that use case... and eventually everything you (or your company) does.

Red pill anyone?

