The age of copilots

The following is an excerpt from my keynote today at Microsoft Ignite.

It’s hard to believe that it’s just been a year since ChatGPT first came out. We’ve done a lot since then and the pace of innovation has been astounding.

It’s exciting to be entering a new phase of AI where we’re not just talking about innovation in the abstract, but about product making, deployment, AI safety, and real productivity gains. We are at a tipping point. This is clearly the age of copilots.

And, today, we shared new data that shows the real productivity gains that Microsoft Copilot is already driving. We’re taking a very broad lens to deeply understand the impact of Copilot on both creativity and productivity. And the results are pretty striking: With Copilot, you are able to complete tasks much faster, and that’s having a cascading effect on work and workflow everywhere.

We believe Copilot will be the new UI that helps you gain access to the world’s knowledge and your organization’s knowledge, but most importantly, it will be your agent that helps you act on that knowledge.

So, this week at Ignite, we are introducing 100 new updates across every layer of the stack, to help us realize that vision.

Our end-to-end Copilot stack for every organization spans infrastructure, foundation models, toolchains, data, and, of course, Copilot itself:

Copilot stack

And, today, I highlighted five key themes across everything we’re announcing.

AI Infrastructure

It starts with the AI infrastructure layer and our approach to Azure as the world’s computer. We offer the most comprehensive global infrastructure with more than 60 data center regions, more than any other provider.

Being the world’s computer means that we also need to be the world’s best systems company across heterogeneous infrastructure.

We work closely with our partners across the industry to incorporate the best innovation from power, to the datacenter, to the rack, to the network, to core compute, as well as AI accelerators. And, in this new age of AI, we are redefining everything across our fleet.

As we build our datacenters, we’re working to source renewable power. In fact, today, we are one of the largest buyers of renewable energy around the globe. We’ve sourced over 19 gigawatts of renewable energy since 2013.

And we’re working with producers to bring on new energy from wind, solar, geothermal, and nuclear fusion as well. And I was excited to share that we are on track to meet our target of having 100 percent of the energy we use in our datacenters come from zero-carbon sources by 2025.

Today, I was also thrilled to announce the general availability of Azure Boost. It was fantastic to talk about this new system that offloads server virtualization processes onto purpose-built software and hardware. This enables massive improvements in networking, as well as remote storage and local storage throughput, making Azure the best cloud for high-performance workloads, while strengthening security as well.

We’re also tapping into the innovation across the industry, including from our partners AMD and Intel, and making that available to you.

As a hyperscaler, we see workloads, we learn from them, and then have an opportunity to optimize the entirety of the stack, from the energy draw to the silicon, to maximize performance and efficiency. It’s thanks to this feedback cycle that I was thrilled to introduce our very first custom in-house CPU series, Azure Cobalt, starting with Cobalt 100.

Azure Cobalt

Cobalt is the first CPU designed by us, specifically for the Microsoft Cloud. This 64-bit, 128-core ARM-based chip is the fastest of any cloud provider. It’s already powering parts of Microsoft Teams, Azure Communication Services, as well as Azure SQL.

When it comes to AI accelerators, we’re also partnering broadly across the industry to make Azure the best cloud, no questions asked, for both training and inference.

It starts with our very deep partnership with Nvidia. We have built the most powerful AI supercomputing infrastructure in the cloud using Nvidia GPUs.

In fact, last week, Azure made the largest submission to the MLPerf Benchmarking Consortium, using 10,000 H100 GPUs, three times more than the previous record, and delivered better performance than any other cloud. And, in the Top500 list of the world’s supercomputers, Azure was the most powerful supercomputer in the public cloud, and third overall.

As we build supercomputers to train these large models, InfiniBand gives us a unique advantage. And today, we went even further. We announced we would add Nvidia’s latest GPU AI accelerator, the H200, to our fleet to support even larger model inference with the same latency.

We also introduced the first preview of Azure Confidential GPU VMs, so you can run your AI models on sensitive datasets on our cloud. We co-designed this with Nvidia.

I was also excited to announce that AMD’s flagship MI300X AI accelerator is coming to Azure to give you even more choices for AI-optimized VMs. Again, this means we can serve large models faster. We’ve already got GPT-4 up and running on MI300X, and today we offered early access to select customers.

And we’re not stopping there. We are committed to taking the entirety of our know-how from across systems, and bringing you the best innovation from our partners and us.

It’s why today we also announced our first fully custom, in-house AI accelerator, Azure Maia, which is designed to run cloud AI workloads like LLM training and inference. This chip is manufactured on a 5-nanometer process and has 105 billion transistors, making it one of the largest chips that can be made with current technology:

Azure Maia

But it goes beyond the chip. We designed an end-to-end Maia rack for AI. The power demands of AI require infrastructure that is dramatically different from that of other cloud workloads: the compute requires far more cooling, as well as greater networking density. We designed the cooling unit to match the thermal profile of the chip, and we added rack-level, closed-loop liquid cooling for higher efficiency.

This architecture allows us to take this rack and put it into existing datacenter infrastructure and facilities rather than building new ones. We’re already testing it with many of our own AI services, including GitHub Copilot. And we will roll out Maia accelerators across our fleet, supporting our own workloads first, and then scale to third-party workloads.

This silicon diversity is what allows us to power the world’s most powerful foundation models, and all of our AI workloads.

Foundation models & AI toolchain

Now, let’s go to the next layer of the stack: the foundation models that are only possible because of these advanced systems I talked about.

Generative AI models span from LLMs with trillions of parameters, which require the most powerful GPUs in Azure, to task-specific small language models, or SLMs, with just a few billion parameters. And we offer the best selection of models, which you can use to build your own AI apps while meeting your specific cost, latency, and performance needs.

It starts with our deep partnership with OpenAI. They’re doing breakthrough work to advance the state of AI models, and we are thrilled to be all-in on this partnership together.

And our promise to our customers is simple: As OpenAI innovates, we will deliver all of that innovation as part of Azure AI. It's why we are bringing the very latest OpenAI models, including GPT-4 Turbo, and GPT-4 Turbo with Vision, to our Azure OpenAI service.

GPT-4 Turbo offers lower pricing, structured JSON formatting, and a longer context window. It will be available in Azure OpenAI service this week in preview, and the token pricing for these new models will be at parity with OpenAI.
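To make that concrete, here is a minimal sketch of calling GPT-4 Turbo’s JSON mode through the Azure OpenAI service with the openai Python SDK (v1). The deployment name, API version, and environment variable names are placeholders for your own resource’s values, not anything announced in the keynote:

```python
import os
from openai import AzureOpenAI  # pip install "openai>=1.0"

# Placeholder endpoint, key, API version, and deployment name; substitute
# the values from your own Azure OpenAI resource.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-12-01-preview",
)

completion = client.chat.completions.create(
    model="gpt-4-turbo",  # your deployment name, not the model family name
    response_format={"type": "json_object"},  # structured JSON output
    messages=[
        # JSON mode expects the prompt itself to mention JSON.
        {"role": "system", "content": "Reply in JSON with keys 'summary' and 'sentiment'."},
        {"role": "user", "content": "The new Teams client is twice as fast and uses half the memory."},
    ],
)
print(completion.choices[0].message.content)
```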

We are also all-in on open source, and we want to bring the best selection of open-source models to Azure and do so responsibly. Our model catalog has the broadest selection of models already, and we are adding even more to our catalog.

And, today, we took one more big step in support of open-source models, adding a Models as a Service offering in Azure. This lets you access these large models without having to provision GPUs, so you can focus on development, not backend operations.
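In practice, a hosted model behind Models as a Service is just an HTTPS endpoint you call with a key. The sketch below is illustrative only: the endpoint URL, payload shape, and environment variable are hypothetical stand-ins, and the real values come from the deployment details in the Azure AI model catalog:

```python
import os
import requests

# Hypothetical endpoint and payload shape, for illustration only; the real
# URL and request format come from your Models as a Service deployment.
ENDPOINT = "https://<your-deployment>.models.ai.azure.com/chat/completions"

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {os.environ['AZURE_AI_KEY']}"},
    json={
        "messages": [{"role": "user", "content": "Draft a two-line product blurb."}],
        "max_tokens": 128,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

There is no GPU quota, driver stack, or serving infrastructure to manage; provisioning, scaling, and inference all happen behind the endpoint.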

We are proud to be partnering with Meta on this. You can fine-tune Llama 2 with your data to help the model understand your domain better and generate more accurate predictions. We also want to support models in every language and in every country. And we are partnering with Mistral to bring their premium models as a service, as well as with Group 42 to bring Jais, the world’s highest quality Arabic language model, again, just as a service.
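In the hosted offering the fine-tuning flow is driven through Azure AI Studio rather than hand-written training loops, but the underlying technique can be sketched with the open-source stack. Here is a minimal LoRA fine-tuning sketch using Hugging Face transformers and peft, assuming you have accepted Meta’s Llama 2 license on Hugging Face and have a domain_corpus.jsonl file with a "text" field; it illustrates the idea, not the managed service:

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"  # gated model; requires an accepted license
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Llama 2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# Train small low-rank adapters instead of all 7B base weights.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# domain_corpus.jsonl: one {"text": "..."} record per line of your domain data.
data = load_dataset("json", data_files="domain_corpus.jsonl")["train"]
data = data.map(lambda b: tokenizer(b["text"], truncation=True, max_length=512),
                batched=True, remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama2-domain-lora", num_train_epochs=1,
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```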

Now, when we talk about open source, there is one more very exciting thing happening in this space, and that is SLMs. We love SLMs. In fact, one of the best is Phi, a model built by Microsoft Research on highly specialized datasets, which can rival models 50 times bigger.

In fact, Phi 1.5 has only 1.3 billion parameters, but nonetheless demonstrates state-of-the-art performance against benchmarks testing things like common sense, language understanding and logical reasoning.

And today, I was thrilled to announce Phi 2, a scaled-up version of Phi 1.5 that shows even better capabilities across all of these benchmarks, while staying relatively small at 2.7 billion parameters. Phi 2 is open source and will be coming to our catalog soon.
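Phi 2 will be deployable from the catalog once it lands there; in the meantime, here is a minimal local-inference sketch with Hugging Face transformers, assuming the model is published under the microsoft/phi-2 id and follows Phi’s Instruct/Output prompt format:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face model id; at 2.7B parameters it is small enough to run locally.
model_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto",
                                             trust_remote_code=True)

prompt = "Instruct: Explain in one sentence why small models are useful.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```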

Once you have these models, next up is tooling. With Azure AI Studio, we offer the full-lifecycle toolchain for you to build, customize, train, evaluate, and deploy the latest next-generation models. It also includes built-in safety tooling.

The other thing we’re doing with Azure AI Studio is extending it to any endpoint, starting with Windows. You can customize state-of-the-art SLMs and leverage our templates for common development scenarios, so that you can integrate these models right into your applications.

Earlier, I mentioned our partnership with Nvidia. Together, we are innovating to make Azure the best cloud for training and inference. Our collaboration extends across the entirety of the stack, including our best-in-class solutions for AI development. And, at Ignite, we announced an expansion of our partnership to bring Nvidia’s Generative AI Foundry Service to Azure.

Nvidia founder, president, and CEO Jensen Huang joined me to talk about our collaboration. Here’s our conversation:

Your data

Now, let’s talk about data, which is perhaps the most important consideration, because in some sense, there is no AI without data.

Microsoft Fabric brings all your data, as well as your analytics workloads, into one unified experience. Fabric has been our biggest data launch since SQL Server, and the reception to the preview has been just incredible.

And today, I was thrilled to announce the general availability of Microsoft Fabric:

We also announced a new capability that we call mirroring. It’s a frictionless way to add existing cloud data warehouses and databases to Fabric, whether from Cosmos DB and Azure SQL DB or from MongoDB and Snowflake, not only on our cloud but on any cloud.

Microsoft Teams

Now, let’s move up the stack and talk about how we’re reimagining all of our core applications in this era of AI, including Teams.

Our vision for Teams has always been to bring together everything you need in one place across collaboration, chat, meetings, and calling. More than 320 million people rely on Teams to stay productive and connected.

Just last month, we shipped new Teams, which we reimagined for this new era of AI. New Teams is up to two times faster, uses 50 percent fewer resources, and can save you time and help you collaborate a lot more efficiently. We’ve also streamlined the user experience. It’s easier to get more done, with fewer clicks. It’s also the foundation for the next generation of AI-powered experiences, transforming how we work. And it’s available on both Windows and Mac, and, of course on all the phone endpoints.

But Teams is more than a communications and collaboration tool. It’s also a multiplayer canvas that brings business processes directly into the flow of your work. Today, more than 2,000 apps are part of the Teams store. Apps from Adobe, Atlassian, ServiceNow, and Workday have more than 1 million monthly active users. And companies in every industry have built 145,000 custom line of business applications in Teams.

And when we think about Teams, it’s important to ground ourselves in the fact that presence is the ultimate killer application. That’s what motivates us to bring the power of Mesh to Teams, reimagining the way employees come together and connect using any device, whether it’s a PC, HoloLens, or Meta Quest. I was excited to share that Mesh will be generally available in January.

Microsoft Copilot

Now, let’s move up to the very top of the stack, which is Microsoft Copilot.

Our vision is straightforward: We are the copilot company. We believe in a future where there will be a copilot for everyone and everything you do. Microsoft Copilot is that one experience that runs across all our surfaces, understanding your context on the Web and on your device. And when you’re at work, it brings the right skills to you when you need them.

It starts with search, which is built into Copilot and brings the context of the Web to you. Search as we know it is changing, and we are all-in. Bing Chat is now Copilot. It’s a standalone destination, and it works wherever you are: on Microsoft Edge, on Google Chrome, on Safari, and, coming soon, on mobile.

Our Enterprise version, which adds commercial data protection, is also now Copilot. You simply log in with your Microsoft Entra ID to access it. It will be available at no additional cost to all eligible Entra ID users.

And, just two weeks ago, we announced the general availability of Copilot for Microsoft 365. It can reason across the entirety of the Microsoft Graph. It integrates Copilot into your favorite applications, including Teams, Outlook, and Excel, and it comes with plug-ins for all the enterprise knowledge and actions available in the Graph.

When it comes to extending Copilot, we support plug-ins today, and we are also very excited about what OpenAI announced last week with GPTs. GPTs are a new way for anyone to create a tailored version of ChatGPT that’s more helpful for very specific tasks at work or at home. Going forward, you will be able to use both plug-ins and GPTs in Copilot to tailor your experience.

You will, of course, need to tailor your Copilot for your very specific needs, your data, your workflows, as well as your security requirements. No two business processes, no two companies are going to be the same. That’s why today we announced Copilot Studio:

We are already using this pattern to extend Copilot for every role and function.

For developers, GitHub Copilot is turning natural language into programming language, helping them code 55 percent faster.

For SecOps teams, Copilot is helping them respond to threats at machine speed. In fact, this week we’re adding plug-ins for identity management, endpoint security, and for risk and compliance managers as well.

For sellers, Copilot is right there, helping you close more deals. Whether you’re responding in email or in a Teams meeting, you can enrich that customer interaction by grounding Copilot with your CRM data, whether it’s in Salesforce or Dynamics 365.

And, for customer service teams, today we announced Copilot for Service to help agents resolve cases faster. It provides agents with access to the right knowledge and context within the tools they use every day, and it can be embedded directly inside agent desktop applications too.

We’re already seeing a new Copilot ecosystem emerge as all of you extend Copilot. Dozens of ISVs, including Confluence, Jira, Mural, Ramp, and Trello, have all built Copilot plug-ins for their applications, and customers are building their own line of business plug-ins, too, to increase productivity and create deeper insights. Not only can you access these in Copilot, but you can surface them across our applications.

* * *

I want to close by talking about the arc of innovation going forward in two critical areas: AI and mixed reality, and AI and quantum.

First, AI is not just about natural language as an input. Of course, it starts with language, but it goes beyond that. Here’s a glimpse of what’s possible when the real world becomes your prompt and interface.

Pay attention to how not just your voice, but your gestures and even where you look, become the new input, and how transformative that can be for someone like a frontline worker using Dynamics 365:

The second area is the convergence of quantum computing and AI. Key to scientific discovery today is complex simulation of natural phenomena, whether in chemistry, biology, or physics, on high-performance computing.

You can think of AI as an emulation of those simulations, essentially reducing the search space. And that’s what we’re doing with Azure Quantum Elements. Just as large models can generate text, you will be able to generate entirely new chemical compounds.

Using Quantum Elements, any scientist can design novel molecules with unique properties for developing more sustainable chemicals, drugs, advanced materials, or batteries.

In parallel, we are also making progress on quantum computing, because quantum will ultimately be the real breakthrough for speeding up all these simulations. In fact, just last week, we announced a strategic collaboration with Photonic to expand our full-stack approach to quantum into quantum networking.

* * *

At the end of the day, all of this innovation will only be useful if it’s empowering all of us in our careers, in our communities, in our countries. That’s our mission. We want to empower every person and every organization across every role and business function with a Copilot.

Just imagine if 8 billion people always had access to a personalized tutor, a doctor providing medical guidance, or a mentor offering advice on anything they needed. I believe all of that is within reach. It’s about making the impossible possible.

I want to leave you with a video of Anton Mirhorodchenko, a developer from Ukraine, who shares his story of how Copilot has empowered him:

 
