Fireside chat with Satya Nadella, CEO of Microsoft
Video and transcript of our fireside chat at Stripe from Monday
Stripe hosted the below fireside chat between myself and Satya Nadella, Chairman and CEO of Microsoft. Topics covered include:

- Evolution of Microsoft from the 1990s to today
- AI & MSFT
- Societal impact of AI (healthcare, education, etc.)
- Growing Microsoft from ~$250B to $2.5 trillion & how Satya thinks about his legacy
Thanks, everybody, for joining today, and thanks again to Stripe for hosting. It's a huge pleasure and honor to have Satya Nadella with us here today. He is the GOAT of technology CEOs, so it feels a bit like having LeBron here with me.
Just for a little bit of history, you joined Microsoft, I believe, in 1992, ran a variety of different divisions, and took over as CEO in 2014. When you joined Microsoft, it was perceived as sort of the behemoth of tech. And then it went through a period in the 2000s where it was viewed as a company that was still very strong in enterprise and other areas but wasn't necessarily going to set the stage for the future.
And then I feel like under your tenure, it's transformed into one of the most exciting and interesting companies in all of technology. And that's through big moves around LinkedIn, OpenAI, gaming, just a variety of different areas.
It'd be great to just hear first, what was Microsoft like when you took over?
And how did you think about the culture, the directions, the strategy, and how you were going to impact that over time?
First of all, it's great to be with you, Elad. And it is true: I've now spent over three decades in one company, pretty much all of my professional career, and I have better language to describe it now. There are lots of folks here who are founders of companies, and I think companies also go through these refounding moments. I picked this up from Reid Hoffman, and I think it's a fantastic frame; it's not talked about as much. In 2014, when I became CEO, having grown up in the company as a consummate insider, I felt that we needed to essentially refound the company: ground ourselves back in the core sense of purpose and mission so that we could then pursue something new and bold.
Keeping that mission in mind. In fact, when I joined in '92, we used to talk about a PC in every home and on every desk as our mission. In retrospect, it was easy: you could put it on a spreadsheet and calculate it and what have you, except that by the late 90s we had more or less achieved it, at least in the developed world. And ever since then I felt a little like, okay, what's our mission? Is it mission accomplished? Is it time to return all the money to the shareholders?
And so that's why I went back, quite frankly, all the way to the very genesis of the company. After all, we were a tools company first. Bill built the BASIC interpreter for the Altair. And I said, God, there's more need for tools and platforms in 2014, and even more in 2023. I'm coming from OpenAI's Dev Day. That's who we are: right at the core, we build tech so that others can build more tech. Let's ground ourselves in that mission. And that has been very, very helpful.
The other one, though, is culture, the other piece you brought up. I distinctly remember walking around the Microsoft campus in the late 90s, and all the folks were sort of saying, God, we must be God's gift to mankind, because we are so good and now the market also recognizes it. Except that it was obviously the beginning of the end in some sense, right? The day hubris takes over, you're no longer grounded in what made you successful in the first place.
That customer feedback loop, that hunger to learn, to experiment, to do things, which startups do. And so I picked up this cultural frame from Carol Dweck at Stanford around growth mindset, and I said, God, we don't want to be know-it-alls, we want to be learn-it-alls. And that has been a godsend, right? Because, one, it is not seen as new dogma from a new CEO; it is something that I think spoke to people as humans. It helps us be better parents, better partners, better colleagues, neighbors, and leaders.
And so between these two things, that renewed sense of purpose and that culture. Founders have this innate capability because they've created something from nothing, and then there's the cult of personality that carries forward. But at some point, if your company is going to outlast you, that mission has to be a lot more than a cult of personality. And so that's why I think for CEOs in particular, or mere mortal CEOs like me, being much more focused on what is the mission and what is the culture, those are the two critical things. And of course, that gives you the permission to then make the right calls. You still have to make the right calls on strategy picking and execution, but at least it gives you better shots on goal.
How do you think about the ability to pursue strategy in today's climate?
So I know M&A is a little bit more challenging for a variety of reasons. There's a big wave of AI and other technologies sort of happening.
What advice would you give to founders in the audience or people who are running successful businesses in terms of how to think about their directionality?
But also, how to think about tools like M&A or organic growth?
We talk a lot about M&A, but I think all of us fundamentally bet on organic growth, whether you're a startup or a large company. Even at Microsoft, I think a lot of what gets written about is our inorganic moves.
But when I think about it, most of the big hits and most of the big revenue generators are, quite frankly, organic. And partnerships are another thing that people don't talk about. The OpenAI thing is a great partnership. In fact, the Gates-Grove model is something that I love: Intel and Microsoft created the PC ecosystem. It was one of the most wonderful partnerships.
It's also important in partnerships to know what happens if one of the partners becomes too greedy; then it's unstable. But if you can cultivate great partnerships, that's another fantastic source of growth for companies. And so yes: organic growth, partnerships, and then strategic M&A, all three matter for a company today.
I think we're going through a little bit of what I'd call a regulatory adjustment around the world, to work out whether we should allow M&A by big companies at all; the idea that, just because they're big, they shouldn't be acquiring, which I think will have a chilling effect. And it's not exactly good, quite frankly, for more startup creation and more vibrancy. But let's say that settles. I would always look at all three of them.
First, focus on having a great organic plan, because that's the one thing you can always control. Partnerships, I would say: don't think of them as a press release. Think of long-term, stable partnerships where you win; I mean, it's the three wins, right? The customer wins, the partner wins, and you win. In fact, the best partnerships are when you start caring about the partner, really making sure that they're getting economic surplus.
Sometimes in our Silicon Valley culture, we have a little bit of excessive zero-sum thinking. There are very few zero-sum battles, actually, when you think about it. But we are very obsessed with framing everything as zero-sum. And that's where I think a little bit of subtlety and nuance would help.
So one of the big waves happening right now, one that Microsoft has been central and seminal to, is this change in AI: transformer-based models and diffusion-based models have changed the trajectory of what we can do with machine-learning systems.
You've lived through some of the biggest technology waves of the last 30 years in a really central position. And that's everything from the Internet to the Cloud to the PC revolution, which you mentioned, and mobile.
How big of a deal is AI in your mind relative to all these other trends?
I think it's as big as any one of the things that you mentioned. The way I think about it is to come back to what exactly changed, and how we should relate to it in terms of any business building or product building. I think there are two big changes.
One is, I think we're going to think about application interfaces drastically differently. I mean, we've been talking about natural user interfaces forever; 70 years of computing history, from Engelbart on down, has always been about, hey, how do we have the ultimate human-computer symbiosis? And I think we now have some new tools to rethink that. It starts with chat, it starts with text, but it goes quickly beyond that. It's multimodal, and it's going to be very interesting to apply.
So in some sense it reminds me a little bit, at least in our history, of what happened when I joined Microsoft in '92. I mentioned this to you backstage: we had just released Word, Excel, PowerPoint, Publisher. Each day there was a new app, because Windows 3 was just happening. It feels like that: how can I rethink even existing categories with a new interface? But we also know that new platforms are not just about taking the old and building a new UI on it.
But there is also the question of what businesses this UI can create that didn't exist. That's the thing I'm most interested in. Mobile had a lot of things we did on the desktop, but it also created companies like Airbnb and Uber. And I think that's something we should think about on the UI side: what's the UI for the AI-first app? I don't think we've cracked it yet, but I think we're getting close.
The second thing, in fact, is the other technology that people don't talk about as much; I don't know why. When we talk about big paradigms, relational databases: God, what a thing that was. The other secular journey in digital technology has been digitizing people, places, and things, right? That's another 70 years. All we do each day is wake up, and there are more people, places, and things that are digitized, and relational algebra and relational databases helped us reason over them in interesting ways.
I feel like we now have a new pattern recognizer, a new reasoning engine, doing neural algebra on all of it. So these two things, Elad: if you take an existing category, I have a new way to think about the user interface, and I have a new way to reason about all the data I have and the knowledge in the world, and do the join, so to speak. If you start thinking of applications that way, and then building new-to-the-world businesses, then I think this would be as big as at least any one of the things that you mentioned.
One trend people are talking about right now that strikes me as a little early, but may be very interesting in the future, and that captures both of the things you mentioned, is moving to an agent world, an agent-driven world. People talk about how eventually we'll have agents that will represent us, and will represent the apps we interact with, companies, governments, etc.
How much of a believer are you in this sort of future world of agent-driven action or agent-driven interfaces?
And what do you think the timeline is for that if it does exist?
I'm a big believer in that. In fact, at least in our world, the design pattern I love a lot, which we picked for our own product building, which we evangelize as a pattern for anybody, is this copilot.
First of all, there's a human in the loop. The human has agency, and the judgment of the human matters. And so the first instances were things like GitHub Copilot, where the copilot is built into the app canvas, so to speak. There's a sidecar, and those things all work together to help you with your task at hand, which is to get your coding done.
And we have now propagated that into knowledge work with the Microsoft 365 Copilot across all of our surfaces. But ultimately, when I think about what we did with Bing Chat, I think of Bing Chat as basically the web copilot, and M365 as the work copilot. The web copilot and the work copilot together could become sort of the universal agent, so to speak. That's how at least I conceive of it.
But it needs one important capability: it needs to be able to talk to other agents, a customer service agent, a travel agent, to get work done. Some of it will be interrupt-driven, in the sense that it'll be autonomous, but it'll also come back to you for decision-making. So I think one of the key runtimes of our time will be that multi-agent runtime.
And we have a thing in open source called AutoGen that is getting some good traction. So we are building some stuff similar to that underneath our copilot. You know, OpenAI launched a bunch of very interesting stuff with GPTs, which are, I would say, early agents on top of ChatGPT itself. They even have an Assistants API. All of that we will put into our copilot ecosystem.
So, to your fundamental point, yes: I think people will have agents, these agents will interoperate with each other, there will be some type of super app for whoever cracks it, and there will be a few runtimes that people naturally gravitate to, which will be these multi-agent frameworks.
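As a rough illustration of the interrupt-driven, multi-agent pattern described here, a personal copilot might route a task to specialist agents and then come back to the human before committing to an action. The sketch below is a toy in plain Python; the agent names, keyword matching, and `approve` callback are invented for illustration and are not AutoGen's actual API.

```python
# Toy sketch of a multi-agent runtime: a personal copilot delegates a task
# to specialist agents, then interrupts back to the human for the decision.
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Agent:
    name: str
    # Maps a request keyword to a canned reply; a real agent would call a model.
    skills: dict = field(default_factory=dict)

    def handle(self, request: str) -> Optional[str]:
        for keyword, reply in self.skills.items():
            if keyword in request:
                return f"{self.name}: {reply}"
        return None  # this agent cannot help with the request

def run_copilot(task: str, specialists: list,
                approve: Callable[[str], bool]) -> str:
    """Route the task to the first specialist that can handle it, then
    interrupt back to the human (via `approve`) before acting."""
    for agent in specialists:
        proposal = agent.handle(task)
        if proposal is not None:
            # Interrupt-driven step: autonomous up to here, but the human
            # keeps agency over the final decision.
            return proposal if approve(proposal) else "declined by user"
    return "no agent could handle the task"

travel = Agent("travel-agent", {"flight": "booked SEA->SFO for Monday"})
support = Agent("support-agent", {"refund": "refund issued"})

result = run_copilot("book a flight to SFO",
                     [travel, support],
                     approve=lambda proposal: True)  # auto-approve for the demo
print(result)  # travel-agent: booked SEA->SFO for Monday
```

A real runtime would add model calls, tool use, and persistent state, but the routing-plus-human-approval loop is the core of the pattern.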
You mentioned as well that you're very excited about the business potential of a lot of these things and what's going to transform. And obviously, Microsoft has done some very innovative things through GitHub Copilot, through some of the other areas that you mentioned.
Are there any areas of the business world that Microsoft is not directly involved with that you think are most likely to transform via AI in the near term?
That's interesting. If I knew about it, I probably would be in it. But one of the things I am grounded on is that I think the time has come for us as a tech industry to directly parlay what we celebrate as tech advances into broad economic growth.
Because I think there's this real critique, some of which is, I think, well-founded: hey, you guys talk a lot about all this tech, but where is the economic growth? I mean, last time we checked, in the developed world, inflation-adjusted, we are probably at, what, zero or negative growth? So there is a real need for economic growth, and economic growth that comes while keeping the planet healthy and while being more equitable in the growth itself. There are many other things as well, which I think are core responsibilities. But that said, we have to drive economic growth.
And that's where I got excited with GitHub Copilot, right? We can take something like software development and, one, bring joy back to it. Man, what we had done to it: the fact that I've got to copy-paste from a variety of places and get distracted. Instead of that, let's stay in flow and focused, and then see how you can bring productivity back to software developers.
But the interesting thing, the more we study GitHub Copilot's effect on an organization, is that if you get software development to be faster, all the other functions around it change. The workflow. When a salesperson does a pull request, that's kind of my thing: wow, that's a different org. And so now you can imagine an entire organization that's moving at a different pace.
So I think we are at the frontier of figuring out what it means both to have these productivity boosters inside of every function, and then what the workflow looks like if you have, like you talked about, agents. I think we will have to discover that.
If you remember, in the mid-90s, when we first started putting in things like systems of record, CRM and ERP, we used to talk about something called business process reengineering. It was as much about a new methodology, or even a doctrine, of how you run businesses: you didn't have five finance departments, or manufacturing COGS sitting outside of financial accounting. So some of the business practices will have to change.
So that's why I think what is probably interesting is that how people think about a vertical industry or a business process is probably going to be very different.
One of the things I think most people think about is “Can there be a large business created that only focuses on one vertical or one business process?”
I think there can be. It all depends on how much economic surplus you can create using this technology.
You mentioned partnerships earlier and how they're a key linchpin to Microsoft's ongoing strategy and how it's worked with different partners over time. One thing that I think has been very exciting is how you've both worked with OpenAI very deeply for their closed-source models as well as Meta around Llama and the open-source world.
How do you think Microsoft's engagement with open-source and closed-source AI evolves over time, and the relationships with those partners in particular?
Yeah, I mean, for us it's not an either-or. GitHub exists primarily because of the open source ecosystem, so it's not something that we take lightly, and we will always support it as a company. In fact, I think most people don't recognize that we are one of the biggest contributors to open source. We are probably today the biggest contributor to Linux, as Microsoft. So it's now fundamentally ingrained.
But that doesn't mean we don't have a bunch of proprietary closed-source systems and revenue streams as well. So, to us, it's pretty much part of the business. Then the question is, what's the best way to be able to meet the developers, meet the companies, and meet the organizational needs that we serve well?
And so that's why, even when it comes to foundation models, obviously OpenAI is our lead partner when it comes to frontier models, but we have a lot of open source models, including our own. We're really excited about our own contributions, too. I love these SLMs, the small language models, with Phi. And so we're going to always contribute. We'll support and make sure that developers have a choice. And in our own products, you'll see a mix of models used as well.
One of the emerging critiques around both open source models as well as advanced LLMs is concerns around safety. And I think people often mix three types of safety.
There's safety that I'd almost view as textual safety: misinformation, bias, hate speech, etc. There's physical safety: will the AI be used to derail a train or develop a virus, things like that? And then there's existential safety.
Do you worry about at some point having some sort of confrontation with an AI species or intelligence that's gathering resources or whatever it may be?
How do you think about those critiques? And how do you think about AI safety over time?
Yeah, I remember talking to you a few months ago and you had laid out this taxonomy, which I like. I think one of the things that we don't have is a well-grounded way to talk about these three levels. Because when we say safety, it could mean, wow, we are talking about an existential issue or election interference, deep fakes or what have you. So I think let's unpack them that way.
So my feeling is that the first thing we've got to really focus on is any real-world harm today from an AI deployment that has not gone right. In fact, in democracies in particular, the thing I worry the most about is, quite frankly, elections and our democratic process somehow being unduly influenced by some use of AI. I think that's the place where, rightfully so, we'll be held to account if something goes wrong. Because after all, that was the critique of what happened in social media. Everybody was excited about social media because of the Arab Spring, but it kind of nearly broke democracy. And so now everybody says, look, we're not going to let that happen again.
So the question is, what do we do there? What does the government do? What does civic society do? What do companies do? And quite frankly, I think this is, at the end of the day, a societal choice. It's not that any one company can do everything here. It's a societal choice.
Like, for example, in the United States, you have to be able to come together and say: how do we think about free speech? And what turns out to be free speech that is now bordering on election interference, or disinformation? These are tough things, and so this is not a decision an AI can make, or a decision a company's moderators can make. These are societal norms and decisions. And so that's the most complicated process we have to engage in, quite frankly.
The other one, the existential risk one: there's time. And second, I think there are engineering solutions to it, in an interesting way. The fundamental concern is a self-improving program that we've lost control over, and the last time I checked, there are people in other engineering fields who know control theory and how to think about that. So there must be stuff we can learn from others and apply. But I'm glad the dialogue on existential risk is happening at the same time.
The other one, in the middle, which you talked about and which I like: before we deploy AI into the real world, for example if it's going to actuate things in the world, that might be a place where you want to think more carefully.
And that's where risk-based regulation is also a helpful thing. Cars do get deployed, but they have regulations that are different from a lot of other regulations. In healthcare, in financial services, in auto safety, we already have existing regulations, and we can subject AI to the same regulations. So maybe for the middle, a more risk-based approach; more research funding, quite frankly, for the existential stuff; and then, for the real-world harms today, whether it's bias or election interference: what are the societal norms? That's as much about us as a democratic society, especially in the United States. I hope we take that responsibility, and I really hope politicians win elections by being able to articulate a vision here. This is where it bugs me when tech and tech leaders are the ones talking about this, right? I didn't elect a tech leader to speak for me as an elected official. So I would rather have the representatives of the people win elections by having a real vision for what the norms are for how we engage.
Related to that, there's a lot of promise for AI in terms of global equity. When I look at the really big thrusts in terms of where it can impact things, one is healthcare, and you see things like the Med-PaLM 2 model from Google, which performs remarkably well on health-related benchmarks.
And then education. Microsoft owns Minecraft, which is one of the most popular tools that I've seen parents engage with. And when I think about the ability to impact education, it goes back to this agent-driven world: something that can customize to and interact with your child in a rich way, can tutor them, can teach them.
I'm curious about your thoughts on either those areas of global impact of AI in a positive sense or other areas that you're most excited about.
Yeah, I think education, health, and perhaps financial inclusion, would be the three things.
The first one is on the education side. I was in the UAE just last week, and it was fantastic to see: they have an AI minister in the UAE, and they launched for the country an AI tutor built on one of the GPT-family APIs. I also met the founder who was building it, using a variety of techniques to ground the model. The beauty is that having a personalized tutor for all the students of the world is absolutely, eminently doable, when you think about the government transfers and subsidies that exist in education all over the world, even in the developing world. And now, with GPT-4 Turbo pricing, I think it's easy to think about delivering personalized tutors that are very, very capable. So I'm very excited about that.
Healthcare is another one. Take even the US: what, 19% of our GDP is healthcare. We know we need health outcomes to get better and costs to come down. And a lot of the cost is, quite frankly, workflow cost, right? It's not like it always comes down to some magical drug; those are all fantastic and important. But the real cost is in care management. And that's why I'm excited about partnering with someone like Epic, who can make a massive difference. Epic is everywhere, they're ubiquitous, and they're building generative AI in deeply. They were telling me about all these use cases: when you finish a shift and you want to hand off, why don't I have a summary? Why don't I, when I am at the bedside, have a bot that I can interrogate to ask all the questions, so nothing gets lost in the inbox for the physician? With Nuance, we now have this fantastic way to transcribe the doctor-patient conversation and reduce physician burden. And that software, once built, can reach every hospital in the world.
And so those would be the two places I think that we can make a huge difference.
We talked a lot about AI. Microsoft obviously does an enormous amount of things in an enormous amount of important areas relative to B2B, consumer, etc.
What other areas are you excited about besides AI?
We just finished closing Activision Blizzard, and we are very excited about gaming. You brought up Minecraft. So gaming is another category for us. When I look back at Microsoft, there are three things that come naturally to us, that are, I would say, in our DNA.
One is platforms and tools. We will always be a platforms and developer-tools company; that's the core heritage. The second is productivity and communication software. I think that's the other one that we do. And the third is gaming. In fact, I think Flight Simulator was built before Windows was. And so we will always be in all three of these categories.
And in gaming, what is exciting to me is this: we love the console, and we love PC gaming, but with some of the changes in cloud gaming, we can get gaming everywhere. And that, I think, is going to be a place where AI as a platform for all of these would make a real difference as well.
So I think you've had one of the most successful CEO runs of all time in the history of capitalism. I don't say that lightly. You took a company that was worth a $250-300 billion market cap, and at the time people thought that was an insanely high market cap, and now you're at $2.5 trillion. So you've added $2.25 trillion in market cap over the last eight or nine years.
What do you want your legacy at Microsoft to be? And how do you think about that going forward?
Or how do you think about it years from now and you're looking back, what do you want to have accomplished?
Yeah, I mean, the way I think about legacy, perhaps: I'm very suspicious of anybody who comes in and says, oh, I showed up at the job, the person who was there before me was terrible, and I changed it all, and it's all me. I'm very suspicious of those people, because at some level the job is to build institutional strength so that the people who take over from you are more successful than you.
In fact, I always think about the next person, and by the way, I always thought about this growing up in the company: if the person who takes over from you in any role is able to succeed, then maybe you did something right. Quite honestly, in tech there's no franchise value. We know you have to reinvent yourself. So, as somebody said, the metaphor is that you've got to make sure you leave enough energy in the system so that it can continue to renew itself.
Measuring success by market cap alone is definitely not the way to measure things, because you've got to invest long before it's conventional wisdom. You can't expect markets to always reward you. In the long run, they will, but there will be periods of time when, as Bezos would say, you've got to be okay being misunderstood, which I think is right. So I look at all of those as long-run indicators that something went right, but not the only indicators.
And so I'm much more focused on making sure that Microsoft is doing relevant things in the future, and that we have created enough institutional strength and cultural strength. We're not doing things out of envy; we're doing things that are useful in the world, things that help the company succeed. I feel that if the world around us is doing well and we're doing well, that's a fantastic equilibrium to achieve.
Makes a lot of sense. And then the very last question because we're almost out of time. Thank you again for being so generous with your time.
When you look at other areas of technology that you consider very important, there are a lot of different things people are talking about now in terms of fusion or other forms of energy, self-driving cars, and how that's going to transform cities and transportation. There's a variety of these areas.
Are there any that you're watching most keenly or that you're most excited about the societal impact that they may bring?
I think energy. I mean, if I look at it, even for us it's pretty existential, right? If I look at our own energy needs, as we think about AI and our build-out, we definitely need it. At some level, the lucky break we have is that these are all fresh project starts. We're the largest purchaser of green energy in the world today. So that means we can stimulate a lot of these new projects, which could sometimes be risky.
But I feel the biggest transition we have to make is the energy transition, and the energy transition is a complex thing. I mean, I had never realized this: effectively, I think what we are talking about is taking 250 years of chemistry and the entire petrochemical value chain, and somehow compressing it into 25 years. I look at that and say, wow, that's the real challenge.
Even from an AI perspective, I'm excited about this AI stuff being super useful in synthesizing some new molecule that then helps with new batteries, which then help with taking abundant solar power and making it much more possible for us to tap into it, even in a data center.
So those are the kinds of real problems that need to be solved quickly. And so that's the one industry, more than anything, that entire chain, where I think we have both an adjacency to it and a real dependency on it.
Well, fantastic. Thank you again for joining today.
Thank you so much Elad. It's been a pleasure.