IR Fireside Chat Series – Environmental Sustainability and Responsible AI
Who: Melanie Nakagawa, CVP, Chief Sustainability Officer
Natasha Crampton, VP, Chief Responsible AI Officer
Brett Iversen, VP, Investor Relations
Event: IR Fireside Chat: Environmental Sustainability and Responsible AI
Date: June 13, 2023
Brett Iversen: Welcome everyone. I'm Brett Iversen, Vice President of Investor Relations. This is the ninth in our series of quarterly videos, focusing on strategic areas that are top of mind for our investors. Today's discussion will focus on a couple of our environmental, social, and governance, or ESG, initiatives. We brought together two of our key leaders to answer your most frequently asked questions. We have Melanie Nakagawa, CVP and Chief Sustainability Officer, and Natasha Crampton, Vice President and Chief Responsible AI Officer. As always, we welcome any feedback on specific topics, the format, or other suggestions you might have after you view the video. Please reach out to our investor relations team directly with any feedback. With that, let's kick things off. Thank you both for being here. Excited to get into this. To start, you know, the areas of responsible AI and environmental sustainability, we get a lot of questions from our investors. It's very top of mind. You know, how would each of you summarize the core commitments in your areas?
Melanie Nakagawa: I'll get started.
So thanks so much. So in 2020, Microsoft made commitments to be carbon negative, water positive, and zero waste, all by 2030, while also protecting ecosystems. We're three years into that journey, and we remain steadfast in that commitment. 2022 was a reminder to all of us that to mitigate the most severe impacts of climate change, our impact needs to extend beyond our own four walls, and we must continue to accelerate investments that will enable progress for decades to come. We believe that Microsoft has an important role to play in developing and advancing new climate solutions, but we also recognize that the climate crisis can't be solved by any single company, organization, or government. The global community needs partnerships, policies, and global commitments to really ensure a healthy future for all of us.
Brett Iversen: Nice. Natasha?
Natasha Crampton: Yeah, this is a great question, Brett. And when it comes to responsible AI at Microsoft, it is both a practice and a culture. So in practice, we've been working to advance responsible AI for nearly seven years now. We've built an internal governance structure for developing and deploying AI, and it's really grounded in these six core principles that we have. So, fairness, privacy and security, reliability and safety, inclusiveness, accountability, and transparency.
We put these principles into practice across the company and at all levels of the company. And this approach is really helping us build a culture of responsible AI as well. As a company-wide effort, it engages teams working in policy and research and engineering. And as we've done for sustainability and accessibility and security, we really want to evolve from responsible AI being something that people must comply with to something that they actually really strive for and take pride in.
Brett Iversen: Love it. Thank you both. You know, for each of you, you could talk about, and you've kind of alluded to this even in the opening, but why this is so important to Microsoft's success. You know, how do they mitigate risk? What do they mean to our employees and customers? How do they create new opportunities? Whatever you want to hit within that basket of questions I asked you. Melanie, maybe you can start us again on this.
Melanie Nakagawa: Well, first, on this basket of issues.
We really see it in three ways. It's getting our own house in order, it's what we're doing with customers, and then, globally, how do we have an impact? So in May of this year, we released our annual sustainability report. It really shows our progress three years into the journey that I talked about toward our 2030 commitments to be carbon negative, water positive, zero waste, and to protect our ecosystems. We reported on carbon, and I'll get into some details here.
On carbon, while our business grew 18%, emissions actually declined by 0.5%. On water, we were able to contract for over 15 million cubic meters of water replenishment projects, measured over the volumetric lifetime of those projects. On waste, against our zero-waste target, we diverted around 12,000 metric tons of solid waste from landfills and incineration across our direct operational footprint. And then on ecosystems, we have protected over 12,000 acres of land.
So meeting our company's carbon negative commitment is, at the end of the day, intricately interdependent with transitioning the world to clean energy infrastructure. Our paths will be similar, and progress will appear slow until the foundations are set. Like training for a marathon, as some of us are, it'll take focus, planning, and perseverance to reach the finish line. Our Sustainability Connected Community here at Microsoft is over 9,000 strong. These members and employees are a vital component of how we're collaboratively driving progress on environmental change, both within the company and externally.
On that second basket, the customer sustainability journey, as a tech company, we have a role to play with the thousands of corporate customers who put their trust in Microsoft technology. Many of our customers have already made a climate pledge, and Microsoft is working to help move them from pledges to progress, much like we are. Companies can only manage what they can measure, and Microsoft is committed to helping our customers measure their environmental impact in a timely and accurate manner.
So in June of last year, 2022, we launched the Microsoft Cloud for Sustainability. This is a comprehensive environmental sustainability management platform that includes the Microsoft Sustainability Manager. Microsoft Azure customers also benefited from significant upgrades over the past year to the Emissions Impact Dashboard, which helps customers understand the emissions impact resulting from their use of the Microsoft Cloud. And then we also released a preview version of something we call the Microsoft Planetary Computer. This enables customers to measure, monitor, and manage ecosystems that may be affected by their operations and to make important decisions related to climate risk.
Brett Iversen: Yeah, a lot happening. I have three girls at home. And they perk up the most when they hear what we're doing on sustainability because it's so talked about amongst their peer group as well. So what would you add?
Natasha Crampton: So from a responsible AI point of view, we've always operated at Microsoft with the core belief that when we are designing and developing AI responsibly, we are not just developing responsible products, we're developing fundamentally better products. And we've known for a while that trust is really essential to the adoption of new products. And that's particularly true when you're talking about frontier tech like AI. So that's why we've been working so hard to operationalize our efforts across the company and build this culture of responsible AI, as I was mentioning.
Look, our employees are being trained on responsible AI practice. They're learning how to integrate our practices into their work, and we've been building tools and providing resources to help them on their journey. Look, as our customers are exploring the immense potential of AI, they're turning to Microsoft, not just for our AI products, but for insights into how to deploy it and use it responsibly. So we really want to share as much as we can. We are committed to making sure that our customers do have that knowledge necessary to use AI responsibly. And that's in part why we've published our own material. So we have a playbook called the Responsible AI Standard, and we published that externally to make sure it's available as a resource. We've also shared a template for an impact assessment, which is one of the early practices that we adopt for our responsible AI work. And we've shared some of our transparency notes as well.
Brett Iversen: You both are busy. I'm sure.
You know, the trust piece that you mentioned. I have a lot of friends on the sales side, and you know going kind of to the cloud portion of our business and you know, all the products and solutions we sell there, it's the biggest thing. Like you would think it's price and things like this when we're competing against some of the other great companies out there, but that trust piece is the biggest portion of the decision that they're hearing consistently. So, I'm glad you touched on it. Maybe another one for you. You've been involved in, you know, the responsible AI work through, you know, key points of the development that we've been on. Can you help us understand how it's evolved over time and moved from principles to really, to your point, kind of core in the operations on how we do it?
Natasha Crampton: Sure. So I think it's fair to say that a decade ago, this field of responsible AI that we've been pioneering at Microsoft, it just barely existed.
So for nearly seven years now, we've worked to advance responsible AI at the company, and we think of this as sort of building artificial intelligence products and services grounded in strong ethical principles. So let me give you two examples, concretely, of how we've progressed this practice over the last seven years.
So at the very beginning, we had those six principles that I mentioned earlier, and those principles continue to serve as the North Star, but principles alone are not enough. So one of the things that we've done is we've taken those principles and really double clicked down on them to provide concrete, actionable guidance for our engineering teams. So we've basically answered the question of how do you build an AI system fairly? What does it mean to be accountable? What does it mean to be transparent?
So we now have this playbook that our engineering teams use, and as I mentioned, we make it available publicly to really spell out the concrete actionable steps that need to be taken to live up to our commitments here.
Second, we've been working on tooling. You know, one way of making sure there's a consistent practice of responsible AI at Microsoft is to build tools to keep engineers in their day-to-day workflows. So whether it's tooling for fairness or interpretability, or making sure that we're really understanding if we are seeing errors, why are they clustering in a particular way? We've built tooling for teams to be able to understand that better. And we don't just build that for our teams internally, we actually bake it into Azure ML, our development platform, so that all of our customers can take advantage of it too.
So look, today we have almost 350 employees focused on responsible AI as part of their role. And we've got people integrated in our product teams, and they come from a huge range of disciplines. You know, next year we intend to grow this number of people working in this space even more.
Look, I have found over time that organizational structures do matter to our ability to meet the ambitious goals that we've set for ourselves here. And so one of the things that we have done over time, as well as building out the practice itself, is to make changes to our structure, to make sure that our needs are being met. So last year we made two key changes to our responsible AI ecosystem. First, we made critical new investments in the team responsible for the Azure OpenAI Service. This is a service that makes all of the cutting-edge models from OpenAI available within it. And second, we infused some of our user research and design teams with specialist expertise that had previously been centered in one particular team.
So, you know, the way I think about this is, you know, responsible AI has today been a journey and it will continue to be. So as we go forward, we know we need to learn and continue to evolve along the way, especially as governments now are starting to define more concrete policies and regulations that will govern this space. Now as we go, we're committed to sharing our learnings and making sure that we are learning from outside the company as well.
Brett Iversen: Yeah.
Melanie Nakagawa: Her governance point is just so important, because on sustainability, we share many of the same important issues. So on governance for sustainability, you know, we've had the privilege and opportunity to have our commitments announced by Satya and Amy Hood and Brad Smith in 2020, really demonstrating the executive sponsorship for our commitment. And then, from an internal governance perspective, Microsoft also charges an internal carbon fee to our business groups, which helps fund the work on carbon removal and carbon reductions and really helps business groups see the opportunities in their own mitigation and reduction plans. So I can't underscore enough that point about the importance of the internal governance that allows us to, frankly, move forward on both of our key issues.
Brett Iversen: Yeah, that resonated with me too. Thank you. Well, we'll stay here. I know you're newer to Microsoft. Maybe a little bit on, you know, what brought you here, how you think about helping the company reach its ambitious sustainability commitments, and how you see the work evolving over time.
Melanie Nakagawa: Yes, I am. I am still relatively new here.
Brett Iversen: We're recording this. So this, you know.
Melanie Nakagawa: Yes, I am still relatively new here, and it has, first, been just an incredible privilege and pleasure to join a company that has frankly been at the forefront of leading on ambitious commitments around sustainability and matching that with action. And that annual report we do each year is really just chock full of the specific projects, actions, and initiatives where we're investing. You know, how I got here: my background is that I've been a policymaker, a diplomat, and an investor in a private equity fund, working with, you know, tech companies around the world as well.
And part of what brought me here is really bringing the nexus of technology, policy, and finance all together from a sustainability perspective. And how do we leverage those experiences that I've had into the role of sustainability all up at Microsoft? And so that was part of the journey to get here and what led to it.
Also, in terms of kind of where we go from here, you know, these past few months have really been about learning, you know, about Microsoft's journey to get to its 2030 goals and making those investments. Every day, I wake up to our policy team in Washington, D.C., who helps inform us about what's happening in the US, in Europe, and around the world on policy actions that are helping to drive the sustainability agenda. You know, we are really benefiting from the fact that in 2015, the world's governments got together and said, "We are committed to, you know, no greater than a two-degree Celsius warming future for the world."
And that's really important. And, you know, since 2015, governments have been putting in place policies, initiatives, and actions, and companies have been stepping up. That's helping drive businesses for us to participate in on sustainability, but also helping drive these commitments toward actually meeting the goals that we need for our planet. So it's been quite a journey so far, and I'm excited to be here.
Brett Iversen: Yeah, good. Well, we're happy to have you. So we've already talked a bit about how Microsoft operationalizes its responsible AI commitments, but can we get a little deeper into the governance and management of these issues? We get a lot of questions on this, like, what's the board's involvement? How does your team work with Microsoft leadership, and even the broader org, to achieve our goals? So, you know, Natasha, can you help with this?
Natasha Crampton: Sure. So it's really not a cliche to say that for responsible AI to be meaningful, it really starts at the top. So at Microsoft, our Chairman and CEO, Satya Nadella, supported the creation of our Responsible AI Council to oversee our efforts across the company. Now the council is chaired by Vice Chair and President Brad Smith, to whom I report, and also Kevin Scott, who's our Chief Technology Officer. Kevin's in charge of setting the technology vision for the company, as well as overseeing Microsoft Research.
Now I think for me, this joint leadership is really core to our efforts because what it does is it sends a very clear message that Microsoft is not just going to be a leader in the technology of AI, but Microsoft is going to be a leader in responsible AI. Now the Responsible AI Council convenes regularly, and it brings together representatives of each of the policy, engineering, and research groups that are dedicated to responsible AI.
This includes the Office of Responsible AI that I lead, and our AETHER Committee, and we also bring together senior business partners. These are our partners who are responsible for the implementation of our program in their teams. For my part, I find those council meetings to be both challenging and refreshing. They're challenging because we are working on a hard set of problems, and progress is not always linear. Yet we know we've got to confront the difficult questions and make sure that we are driving accountability. The meetings are refreshing because there is just this collective energy and wisdom among the members of the Responsible AI Council. And I often come away with fresh ideas about how we can advance the state of the art.
And to your specific question, Brett, about the board, the Environmental, Social and Public Policy Committee of the board provides oversight of our responsible AI program. So we have regular engagements with the committee and that really makes sure that we are bringing the full rigor of Microsoft's enterprise risk management program to our responsible AI work.
Melanie Nakagawa: Actually, similarly, that ESPP Committee of the board also provides oversight for our sustainability work as well.
Brett Iversen: Love it. Our investors love hearing that. So thank you both. Melanie, there's so much discussion on the challenging topic of Scope 3 emissions and how we address that. And, you know, for our audience who may not be as familiar: these are greenhouse gas emissions that aren't from a company's own sources or its purchased electricity, but rather from across the entire value chain, going upstream and downstream to customers that are using our products. So, how do we think at Microsoft about this challenge, and what are we doing to help ourselves and others address it? Because, you know, we don't have direct control of that. I'm sure it's a challenging topic.
Melanie Nakagawa: Well, great. Your knowledge of Scope 3 is fantastic.
Brett Iversen: Yeah, that's right. I studied, I studied.
Melanie Nakagawa: So at Microsoft we are in the implementation stage of our sustainability journey all up, and you really highlighted a key part. So Scope 3: why are we fixated on it? Why is it something that you need to know about? Scope 3 for Microsoft is around 96% of our overall emissions.
Brett Iversen: Wow.
Melanie Nakagawa: Scopes 1 and 2 are around 4%. So Scope 3 is ultimately our decarbonization challenge, and our opportunity. It necessitates the co-evolution of best practices across business, technology, and policy among thousands of global stakeholders. When we made the carbon negative commitment in 2020, it wasn't just a challenge to support the sustainability of our business; it was also an invitation for the world to participate in this journey, translating commitment into action, and then action into impact. A major roadblock in our way is what we call the hard-to-abate sectors. This is like materials. Think concrete.
A prime example of this is one of our growing sources of Scope 3 emissions. Another example, in addition to concrete, is semiconductors, chips. The global manufacturing of semiconductors could, by some estimates, account for 3% of overall greenhouse gas emissions by 2040.
And one big challenge is that they are manufactured in countries such as South Korea, Japan, and Taiwan, where clean energy is just not abundant enough to serve those types of manufacturing facilities. So decarbonizing semiconductors and other hard-to-abate sectors such as concrete and materials really depends on transitioning the world to cleaner energy sources, cleaner grids, and cleaner infrastructure.
In the future, the countries that build and operate the cleanest grids, we really believe, are going to have a competitive advantage. Those economies will be serving customers like us who want to see our purchasing tied to suppliers aligned with our commitments. Those countries where Microsoft and others have ambitious climate commitments are the countries that we're going to want to invest in and procure from.
So another way we're addressing this challenge with semiconductors and other materials is through Microsoft's billion-dollar Climate Innovation Fund. This fund is investing in technologies and companies at all stages, from renewable electricity to low-carbon materials. We also invest in a fund called the Southeast Asia Clean Energy Facility, which helps build renewable energy projects throughout the region.
Another company that's really exciting to highlight is called Prometheus Materials. They produce a microalgae-based concrete for buildings that sequesters carbon during the manufacturing process. This technology is estimated to reduce carbon intensity by up to 90% compared to traditional units.
Another similar company is called CarbonCure. They produce low-carbon concrete by injecting carbon dioxide into the concrete during mixing, where it mineralizes, strengthening the product and reducing overall emissions. You can see these types of investments appearing in physical infrastructure and other kinds of infrastructure out there. And that just means that, compared to traditional concrete, you're actually choosing a more climate-friendly, lower-carbon product, potentially, you know, a better product at the end of the day. So these are the types of innovations we've been investing in and thinking about.
Brett Iversen: It's fascinating. Those are some cutting edge examples. It's interesting to hear about, you know, especially from somebody who spends his time on the finance side of the house. So thank you for that. You know, Natasha, similarly in your space, there's obviously huge interest. Every question since this calendar year started for our IR team has included AI as a topic. You know, given the recent advances in the generative AI that we are incorporating across all of our products and services, what responsible AI issues are unique to generative AI and how's the company working to address those?
Natasha Crampton: Well, first let me say that, you know, I'm very happy that we've come to this generative AI moment with the benefit of seven years of responsible AI work up until this point, because that has allowed us to mature our governance systems, our tools, our practices, to get ready for this moment. And there is much from our practice that we've been building up all of this time that's directly relevant to our generative AI work as well. So for our generative AI systems specifically, we do use state-of-the-art methods to make sure that we are identifying, measuring, and mitigating their risks. And as I said, this does build on this long experience that we've had of operating these large-scale enterprise and consumer services, and of course our deep partnership with OpenAI, as well. That's been really critical.
Perhaps it's easiest to illustrate what some of the unique considerations are here by just using our new Bing implementation as an example of the sorts of things that we do.
So as I mentioned, the first step in the process is to identify the risks of these new systems. For the new Bing, we did this through a process called red teaming. What red teaming does is it allows you to bring together a multi-disciplinary team to really push on the edges of the system and understand where the challenges are. Now the thing about red teaming is it's a bit like identifying an iceberg ahead. It's very important that you understand that that iceberg is there, but you don't really understand the size and the shape of what you're dealing with. So having engaged in red teaming on the core components of the system, especially the new model that came from OpenAI, we then moved to a systematic approach to measurement where we could start to measure what is the size and shape of those icebergs, which are really different types of risk surfaces that we can have for these new systems.
Then we move on to a mitigation phase. And the purpose of that phase is to really think about what are the layered approaches that we can adopt to mitigating the risk of these systems. And in many ways, we really use a defense in depth type of approach, similar to what you might think of in the security context. So we make sure that we not only bake safety right into the model at the core of the system, but we wrap safety around in different layers on top of that as well.
So some of the core issues that we addressed as part of this identify, measure, mitigate process included making sure that the responses that you would get from the new Bing would be grounded in Bing search results. Sometimes people refer to errors that generative AI systems make as hallucinations. For us, we think of those as being situations where we have not grounded the response in Bing's search results. It was a huge help to us here that we had the Bing infrastructure to leverage.
So we've taken a number of steps there. We've also made sure, drawing on some of our past fairness-related work, that, you know, Bing is not producing responses that might reinforce stereotypes or otherwise be unfair. So those are just a couple of examples of how we have both leveraged our past work and layered new mitigations and techniques on top of it. And I feel confident that we'll continue to build out those systems and harness those learnings. I think our incremental release strategies, where we've carefully controlled the release of these new technologies, have been extremely beneficial, and those will be a core part of our approach going forward.
Brett Iversen: Yeah, I love that in all of the different responsible AI sharing that you've given us, there's this cross-discipline element, you know, in what I've heard on most of these. You know, it's not something we're just doing on the side. It's embedded with all of the functions, and it's a very integrated and cohesive conversation, which I think is critical. So it's come out in all of the different topics. You know, given that, and I'm still just picturing us injecting concrete, like I need to learn all this stuff. Given that we have the two of you together, which again I appreciate, what can you tell me about how you see, you know, AI and sustainability interacting in the future? We get questions on both, but as you think about the concepts together: what are the potential harms of rapid AI transformation to our environmental commitments, or, flipping it, what are the promises of AI, maybe as an accelerant for global sustainability? So how would the two of you help me think about that? And maybe Melanie, you can start us off on this one.
Melanie Nakagawa: Thanks, great. We believe AI is an essential part of building a net zero future. AI is already playing a role in improving sustainability management, as well as measurement and accountability. When you think about it, there's this really kind of awesome potential in the opportunity set for using AI. So, to pick up on a theme from earlier: concrete and cement. We are seeing how the use of AI can really rapidly accelerate the discovery of alternatives to high-carbon materials such as cement.
So if you think about it, AI can reduce the design and validation process for new materials by an order of magnitude and can lead to faster innovation in materials such as low-carbon concrete, which can be used in building things such as our data centers, reducing the overall environmental and carbon footprint of those data centers. AI provides an opportunity to go from a trial-and-error process to really rapid iteration.
We've also partnered to use AI with satellite imagery to produce a first-of-its-kind atlas, a living atlas, if you will, to map the world and measure the utility-scale solar and wind installations on the planet, allowing users to evaluate clean energy transition progress in different parts of the world and different countries and track that over time. This tool is called the Global Renewables Watch, and it's a partnership between The Nature Conservancy, a nonprofit, and Planet. It really provides unique spatial data on land use trends to help achieve the dual aims of environmental protection and increasing renewable energy capacity around the world.
While the prospects for AI to advance sustainability are tremendous, the second thing we must also think about is the other side of the ledger. So on one side you have the amazing potential for AI to accelerate our clean energy transition and sustainability goals. And on the other side, we have to also recognize that these AI models use more energy for scaled compute. So we are committed to not only bringing additional energy onto grids around the world, but also working to ensure that that energy is coming from carbon-free sources of electricity. Microsoft today is already one of the largest corporate purchasers of renewable power, and we're helping to advance innovative commitments to carbon-free energy sources, like a contract we announced with a fusion energy company called Helion back in May.
We're also investing in research to measure the energy use and carbon impact of AI and working on ways to help make large AI systems more efficient through things such as the green software initiative, reducing the energy consumption through more efficient compute and network traffic.
Brett Iversen: Perfect. Natasha, can you add to this interesting fusion between these two concepts?
Natasha Crampton: Well, I think Melanie did capture the key points very, very well. I mean, I do think it's essential that we use AI to address the biggest challenges of our time, and of course that includes sustainability as well. And I'm also really grateful that Melanie's leading the charge for making sure that we are proactively addressing the energy needs of the compute demands for these new AI models.
That work to better define the questions and also undertake further research is also critically important. We do need to deeply understand the problem space here and make sure that we are working towards finding answers that are durable going forward.
One other dimension that I'll perhaps add is that in May we released our five-point blueprint for AI governance. Part of that blueprint was recognizing that as this technology moves forward, it's just as important to make sure that we have proper control over AI as it is to make sure that we are realizing its benefits. So one of the specific points we recommended was that AI systems that are controlling critical infrastructure should have safety brakes. We can think of these, a class of systems that governments would need to define, as systems that might be controlling our electricity grids or our water supply. And similar to the safety protocols that we've put in place for other very important core technologies, we need to put these brakes in for AI systems as well. So you might think of the safety brakes that we've put in for elevators or school buses or high-speed trains. Conceptually, we need the same AI safety brakes for critical infrastructure as well.
Brett Iversen: Makes sense. We've covered a lot of ground in a short period of time, so thank you both. You know, as we close out, can you each share a little bit about how you think about the future in each of your areas and, you know, some thoughts for the road for our group? And Melanie, if you can start us off.
Melanie Nakagawa: Sure. Well, on the future, we are optimistic. I'm an optimist by nature, but we really feel good about the businesses that we're investing in and purchasing from. We're confident in our strategies and remain committed to reaching our 2030 commitments, and we're going to continue to help build the foundation for 2030 by doing a few things. So first, we're going to be advancing public-private partnerships, advocating for favorable clean energy policy to bring more carbon-free energy to the grid, and investing in countries that build and operate the cleanest grids.
Second, we're going to leverage our role as one of the largest corporate purchasers of clean and renewable energy. We're going to build markets for the things we need and procure, find opportunities to diversify and scale up supply, and advance the development of sustainability markets for water replenishment and carbon removal.
And then third, we're going to be improving corporate governance and sustainability accounting to ensure that sustainability reporting readiness is at the top of the priority list for the sustainability team. We want to make sure we're developing effective carbon accounting standards and initiatives that really underpin that reporting readiness.
And then lastly, we're going to be helping to realize the potential of this next-gen AI to build the innovative climate solutions that we need. We believe that innovation is a critical component of solving the climate crisis and that capital investment plays an important role in accelerating the availability of new solutions that may not be ready for the market today, or not yet commercially at scale. So through the Climate Innovation Fund, which is a billion-dollar investment fund, we are really looking to help advance and create some of those markets. We're going to be investing in, and we have continued to invest in, innovative technologies and business models that have the potential to produce really meaningful, measurable climate impact by 2030. Since the founding of the CIF in 2020, Microsoft has allocated more than $600 million of that billion into around 50 investments. That includes sustainable solutions in energy, in industrial systems, and in natural systems. And from a global perspective, we're really looking forward to continuing to advocate for public policies that confront climate change and accelerate climate action, to invest in mitigation, adaptation, and resilience, and to ensure that this is a just transition for all involved.
We also want to make sure that the international climate reporting and disclosure requirements that are coming out align with sustainability measurements and are really implementable. It's really critical that companies, non-governmental organizations, and governments all work together to ensure consistent, robust, and interoperable standards, especially around greenhouse gases. Those reporting metrics are really critical, and what's exciting is that digital technologies play a really important role in calculating those emissions, tracking them, and ultimately reporting them in a digital ledger.
New sustainability reporting requirements are already on the way. Earlier this year, the European Commission passed the Corporate Sustainability Reporting Directive, the CSRD. And this directive is really significant for many companies, not just for Microsoft. In the case of the CSRD, over 50,000 companies are likely to be affected by this new reporting requirement. So we're really dedicated to managing our own reporting requirements, but also to using our technology and data platforms to help scale ESG reporting and performance for our customers. Through the Microsoft Cloud for Sustainability, for example, we're delivering new data management capabilities to improve regulatory reporting, governance, and accountability, and to drive sustainability progress, from data management solutions to AI-integrated reporting templates and automation going forward.
Brett Iversen: Nice. Natasha?
Natasha Crampton: For my part, I'm enormously optimistic about the future of AI and I'm confident that Microsoft is uniquely well placed to lead in this moment. We have spent significant time and resources to be ready to build and deploy AI responsibly, but we also recognize that we are just one company here.
We will do our part, but it will take a whole-of-society approach. We need to come together to define the right guardrails, to secure the beneficial uses, but also to guard against the misuse of AI technology. We're engaged with governments around the world, as well as civil society organizations, academics, industry partners, and our own customers and partners here at Microsoft.
So I'm confident that our commitment at Microsoft to responsible AI is steadfast. If we keep humans at the center of our efforts and we are willing to be humble and to learn, we as a company can continue to move forward, nimbly and responsibly to really advance this transformative new technology.
Brett Iversen: Yeah, thank you both so much. I know how many questions we get from investors, and I can only imagine how many you're getting across all of the things that you do. So I know you're both very busy, and I appreciate the time, and we thank everyone for watching. You know, ESG, as you've heard, is built into everything that we do as a company, and we're glad we had some time to talk to you about it today. Thank you.