Winning in the Generative AI Stack: Analyzing Layers and Players, Including the Rise of E2E AI Applications

Artificial intelligence (AI) is transforming every industry and creating new opportunities for businesses and consumers. But how do companies compete and differentiate themselves in this rapidly evolving field?

To understand how companies will compete in this space, we first need to understand the AI stack and analyze how companies can differentiate within, or disrupt, each layer. The generative AI stack consists of the following layers:


Layer 1: Connectivity Platforms - Most AI applications require high-speed, low-latency, and reliable connectivity to perform complex tasks such as model training and inference. This layer includes the wired and wireless systems that enable access to cloud-based AI services. The key to success here is to provide fast, secure, and affordable connectivity to a large customer base, while also identifying ways to leverage AI for value-added services. The winners will be those who optimize their networks for AI payloads by offering dedicated bandwidth, edge computing, or 5G services. The losers will be those who lag behind in network infrastructure or quality of service and miss out on the increased demand coming from the rest of the stack.

Today, no single operator or wireless network has emerged as a leader in AI-focused connectivity solutions, beyond the benefits of the underlying 5G networks. There is a lot of opportunity for incumbents to disrupt themselves with new competitive AI services at this layer.


Layer 2: Hardware Platforms - This layer consists of chipsets that accelerate AI workloads such as model training and inference. These include CPUs, GPUs, TPUs, FPGAs, ASICs, and neuromorphic chips that can process large amounts of data efficiently. Hardware manufacturers may compete on performance, power consumption, cost, or compatibility with various frameworks and platforms.

Examples of early leaders in this layer include NVIDIA, Intel, Google, and Qualcomm. The winners at this layer are those who deliver AI hardware that meets the diverse needs of AI developers in terms of performance, power consumption, cost, and compatibility with various frameworks and platforms.

There is still huge opportunity to further disrupt the AI space by getting more complex AI models to work at the edge without needing to go to the cloud. 


Layer 3: Cloud Platforms - This layer provides most of the compute power that runs AI workloads and services. The winners at this layer are those who can provide comprehensive and easy-to-use AI packages that cater to different use cases and industries. The losers are those with limited or outdated AI offerings that do not meet customer expectations.

Top examples here are Microsoft Azure, Google Cloud, Amazon Web Services, and IBM Cloud. These providers offer various models for different domains such as natural language processing, computer vision, speech synthesis, etc. They also provide tools and frameworks for building, training, deploying, and managing custom generative AI models. They differentiate themselves by offering high scalability, reliability, security, performance, and cost-effectiveness of their cloud services.


Layer 4: AI Model Layer - This is the heart of the AI stack where the algorithms sit that can analyze data, learn from it, and make predictions or decisions based on that data. 

The winners are those who can create state-of-the-art models trained on large enough datasets to deliver high accuracy outputs that solve real-world problems and deliver value to customers. The losers are those who produce suboptimal or irrelevant models that do not meet customer needs or expectations.  Model developers may compete on accuracy, speed, robustness, interpretability, or scalability of their AI models. 

The AI model layer can be further broken into three different types of AI models:

  1. General AI Models - Foundational models trained on massive amounts of heterogeneous data from various sources and modalities. These models can generate many kinds of content with minimal input and guidance. Examples include GPT-3, a language model developed by OpenAI that is one of the largest and most advanced to date; DALL-E, a generative AI system that creates images from text descriptions using multimodal understanding; and CLIP, an OpenAI model that relates images to text in a way similar to how humans describe what they see.
  2. Specific AI Models - Models trained on large datasets of a particular type of content, such as natural language, images, audio, or video. These models can generate diverse and realistic content within their modality. Examples include BERT, a language model that captures the context and meaning of text for natural language processing tasks, and FaceNet, a face recognition model that identifies and verifies faces in images and videos.
  3. Hyperlocal AI Models - Specialized models tailored for specific domains, use cases, or customers, such as face recognition for security, sentiment analysis for customer service, or product recommendation for e-commerce. These offer higher quality and performance than general or specific models by incorporating domain knowledge and user feedback (see the sketch after this list). Examples include Amperity, a customer data platform that uses hyperlocal AI models to analyze customer behavior and preferences on a city-by-city basis; Freenome, a healthcare startup whose hyperlocal models identify subtle patterns and anomalies in blood biomarkers to detect cancer at an early stage, when it is most treatable; and Farmers Edge, a precision agriculture platform that uses hyperlocal AI models to analyze weather and soil data for specific fields and crops.
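
To make the distinction between these model types concrete, here is a minimal sketch assuming the Hugging Face transformers library is installed. The checkpoints named below are publicly available examples chosen for illustration only (they are not referenced in this article): a generic BERT-style sentiment classifier alongside a finance-specialized one, the kind of narrow tuning described above.

```python
# Minimal sketch: comparing a generic ("specific") model with a
# domain-specialized ("hyperlocal") one. Assumes `pip install transformers`.
from transformers import pipeline

# Generic sentiment model: the pipeline's default checkpoint, a distilled
# BERT fine-tuned on general-purpose English sentiment data.
general_sentiment = pipeline("sentiment-analysis")

# Domain-specialized sentiment model: FinBERT, fine-tuned on financial text.
# (An example of a publicly available checkpoint; any domain-tuned model works.)
finance_sentiment = pipeline("sentiment-analysis", model="ProsusAI/finbert")

text = "The company cut its quarterly guidance, citing weak demand."

# The domain-tuned model typically labels in-domain text more accurately.
print(general_sentiment(text))
print(finance_sentiment(text))
```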


Layer 5: API (AI OS) Layer - This layer provides access to generative AI models through APIs so developers and users can easily leverage generative AI capabilities without having to build their own models or infrastructure. API providers may offer various features such as authentication, billing, monitoring, security, documentation, or support for their APIs. API providers may compete on availability, reliability, performance, pricing, or customer service of their APIs. The winners are those who can provide easy-to-use and reliable APIs that enable developers and users to access generative AI models with minimal effort and cost. The losers are those who provide complex or unreliable APIs that frustrate developers and users.

Some API providers that offer easy access to generative AI models are Hugging Face, OpenAI, Amazon SageMaker, and Microsoft Azure. These providers offer various APIs for different use cases such as text generation (e.g., GPT-3), image generation (e.g., StyleGAN), video generation (e.g., First Order Motion Model), etc. They also provide user-friendly interfaces and documentation for using their APIs. They differentiate themselves by offering high-quality, scalable, and affordable generative AI models that can be easily integrated into various applications and platforms. These providers aim to democratize access to generative AI and enable more developers and users to create novel and engaging content with minimal coding or technical skills.
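
As a rough illustration of how thin this layer makes the integration work, here is a minimal sketch of calling a hosted text-generation API. It assumes the OpenAI Python SDK as it looked in the GPT-3 era; exact client calls, model names, and authentication details vary by provider and SDK version.

```python
# Minimal sketch of calling a hosted generative AI API (GPT-3-era OpenAI SDK).
# Method names and models differ across providers and SDK versions.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # never hard-code credentials

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family completion model
    prompt="Write a two-sentence product description for a smart water bottle.",
    max_tokens=80,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```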


Layer 6: Application Layer - This layer consists of the end-user applications that leverage generative AI models through APIs to create engaging and personalized experiences for customers. Application developers may compete on usability, functionality, quality, creativity, or customer satisfaction of their applications.

The winners in the Application Layer are those who can create innovative and engaging applications that delight customers and solve real, meaningful problems. The losers are those who create boring or ineffective applications that fail to attract or retain customers.

Examples include Kore.ai, a generative AI startup that uses conversational AI to help companies deliver extraordinary customer and employee support experiences; Copy.ai, a generative AI startup that uses GPT-3 to create digital advertising and marketing content, helping businesses save time and increase conversion rates; and Jasper, which can generate content such as ads, blog posts, website copy, and social media posts based on user input such as keywords, tone, style, or audience.
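
Much of the value these applications add sits in how they translate structured user input into prompts and post-process the output. Here is a minimal, purely illustrative sketch of that pattern; the generate() helper is a hypothetical stand-in for whichever model API the application calls under the hood, not any vendor's actual function.

```python
# Illustrative sketch of an application-layer pattern: turn structured user
# input (keywords, tone, audience) into a prompt for a generative model.
# `generate` is a hypothetical placeholder, not a real vendor API.

def build_ad_prompt(keywords: list[str], tone: str, audience: str) -> str:
    return (
        f"Write a short ad targeted at {audience}. "
        f"Use a {tone} tone and mention: {', '.join(keywords)}."
    )

def generate(prompt: str) -> str:
    # Placeholder: a real application would call a hosted model API here
    # (see the Layer 5 example above) and apply brand and safety filters.
    raise NotImplementedError

prompt = build_ad_prompt(
    keywords=["reusable", "insulated", "leak-proof"],
    tone="playful",
    audience="weekend hikers",
)
print(prompt)
# ad_copy = generate(prompt)
```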

This is a simplified overview of the generative AI stack and some of today’s main players. Of course, there may be overlaps and collaborations between different layers and actors. For example, some cloud providers may also offer AI models or APIs; some model developers may also offer APIs or applications; some application developers may also build their own models or APIs; etc.  

The Rise of E2E Vertical Solutions

We are also witnessing an increasing number of end-to-end (E2E) vertical AI applications emerging in various industries, in which one company builds across several layers of the stack, from its own proprietary models and APIs to its own end-user applications. For instance, in healthcare, E2E AI applications are being developed to diagnose diseases, predict treatment outcomes, and automate medical procedures. In finance, E2E AI applications are being developed to detect fraud, predict market trends, and personalize investment advice. Companies such as Runway, UiPath, and DataRobot are examples of E2E AI app developers who build their own proprietary models, APIs, and end-user applications.

The key to success for E2E AI app developers is to have a deep understanding of the industry they are serving and the specific problems they are trying to solve. They need to have the technical expertise to build robust AI models, APIs, and end-user applications that can deliver accurate and reliable results. They also need to have a deep understanding of their customers' needs and preferences and provide personalized and engaging experiences that can differentiate them from their competitors.

Developing E2E AI applications requires a significant investment in resources and expertise, but it also offers significant opportunities for companies to differentiate themselves and create new revenue streams. By leveraging their own proprietary models, APIs, and end-user applications, companies can create unique value propositions that can help them win in highly competitive markets. Additionally, by having control over the entire AI stack, companies can ensure data privacy, security, and ethical considerations, which is becoming increasingly important for customers and regulators alike.

 

Bottom Line

The generative AI stack offers a diverse range of opportunities for companies to compete and differentiate themselves in the rapidly evolving AI landscape. Each layer of the stack presents its own set of challenges and opportunities, from providing reliable and fast connectivity to developing state-of-the-art AI models and end-user applications. As the AI industry continues to mature, we will see more startups innovate on top of existing models and large-scale cloud and API providers, as well as more industry-specific companies building end-to-end AI applications that leverage their own proprietary models, APIs, and end-user applications. While developing E2E AI applications requires significant investments in resources and expertise, it also offers significant opportunities for companies to differentiate themselves and create new revenue streams. Today such vertical development is an expensive endeavor, but innovations across the stack (especially at the hardware and model layers) will lower the barriers to entry over time. As AI continues to transform every industry, it is essential for companies to stay up-to-date with the latest AI trends and technologies to remain competitive and deliver value to their customers.

Beny Rubinstein, Eng., M.B.A., Ph.D. candidate 🎗
