Fireside chat with Reid Hoffman on AI, Big Tech, & Society
Notion was kind enough to host a fireside chat between myself and Reid Hoffman on AI. Transcript and video link below.
YouTube link:
I did not edit the transcript, so there may be errors.
Elad Gil
First off, a huge thank you to Notion for hosting. They've been incredibly gracious to us in terms of having us in here for the last few talks. So thanks again to Notion, amazing company. If you're recruiting, go join them. And the real purpose of the event is sort of twofold. One is to continue to sort of build community in San Francisco and the Bay Area. AI has really been emerging as a very exciting sort of next-gen area that I think will transform a lot of what's happening in startups, in society and everything else. And then second, just to have conversations on really interesting topics and kind of benefit from those conversations. We'll have about an hour talk, and we'll open up for a few questions at the very end from the audience. And then unfortunately, Reid has to leave right after, but people are welcome to stick around for another hour to network and hang out and meet each other, and then you'll get booted out at around eight. So as a warning, that's when you'll be asked to leave. So, first off, it's a huge pleasure to have Reid here. He's done, I think, almost everything in Silicon Valley over the course of his career. He's co-founder of LinkedIn, he's a legendary investor at Greylock, he was the first angel check into Facebook, he funded Airbnb, he's on the board of Microsoft, on the board of OpenAI. He's one of the very early proponents and funders of OpenAI. I remember actually, I think it was five, six, seven years ago, he'd actually invited me to a small event that he hosted on AI and its implications for technology and society. And so he's been thinking about this area for a very long time, and I think he's one of the pioneers in terms of thinking about it very deeply. And so I think he's roughly done everything. And he's also starting a new company called Inflection, which is an AI company. So he's basically done everything. 
I imagine Reid sometimes on Christmas Eve getting into, like, a Santa outfit and flying into the air and distributing toys and stock certificates to children around the world or something. He's truly, truly done everything over time. So, first off, thanks so much for joining us today and for participating.
Reid Hoffman
Great pleasure. And you missed out investing in Mixer Labs too.
Elad Gil
But yes, Reid backed my now obscure first startup. So thanks for doing that. And that was a while ago. So first off, I'd just love to hear more about how you got interested in AI. When did you first get involved and interested in this area, and what are some of the ways that you've participated in the industry over time?
Reid Hoffman
So part of the funny thing is my major, a bunch of you probably know this if you went through Stanford, I was the 8th person to declare symbolic systems at Stanford. So the kind of artificial intelligence, the thinking about what are thinking beings, what is cognition, what is language, how do we learn, how do we discover truth, has been something that has been my interest for a long time. And then I concluded that we were a long ways off. Got a little bit like, maybe I should study philosophy for thinking, studied philosophers, didn't understand thought that much better than anyone else. And so then went into entrepreneurship. And then part of what was happening is people started talking to me about, oh, there's this resurgence in AI coming. I was like, oh yeah, I know about that. No, you don't. And so I got persuaded to go and start talking to a set of the folks who were doing major things, like DeepMind, which is part of how Inflection kind of came about, and very quickly saw, okay, this wave is coming. And obviously this year that wave is going to be picking up size, so a lot of people are going to be seeing it. But part of what we're seeing here is actually in fact the result of the fact that one of the major waves that is coming is scale compute. And when you look at the various kinds of things that are described under AI, they're all kind of versions of how do you apply scale compute in a way that you get this performance function: with one exaflop, what performance function do you get? With two exaflops? And if that's going up and you can go for a while with that, then you've got kind of some very interesting new kinds of capabilities and new things, and the whole discussion about is it a tool or is it a creature and so forth. It's all very interesting to do, but right now that's like the main line we're seeing, because the self-play stuff that AlphaZero was doing isn't so much like AI as much as it's a different way of defining a fitness function. 
I think we're going to see a whole bunch of those. And so that's the thing that got me back into it. When I started, back when we did that salon, I wasn't really sure there were going to be startup opportunities. Now I'm obviously very convinced there's a bunch of startup opportunities. It was like, well, is it only going to be massive scale compute that's only going to be available to a few, what are the things that are going to play out in this, and all of that. My initial focus was kind of the question of, well, how do you shape it for the benefit of humanity? What are the social implications? What are the ways that this IP and intelligence can be shared across the different efforts without doing any bad market effects or any other kinds of things? And then as I got close to it, I said, oh, actually, in fact, the wave is going to be so big, there's just no way that even the large compute players are going to be able to do more than a small fraction of this. And that gives a ton of room for startups, that gives a ton of room for even old tech companies doing something. And so then probably about four or five years ago, around when I was helping Sam and Elon with OpenAI, I got much more intense on this.
Elad Gil
And then could you tell us a little bit more about Inflection? So I think this is the first company you've started or co-founded since LinkedIn. And so how did it come about that you started a company again? Usually, as somebody who's started two companies, it's so painful that doing another one I always view with dread. But also, how did it get going, who are you working with, and what can you tell us about it today? I think it's in stealth, so I don't know how much you can tell us, but I'm super intrigued.
Reid Hoffman
Yeah, so that's definitely the, eventually you forget how hard it is and you get this little euphoria and you jump into it again. Because I started a company, SocialNet, started LinkedIn, was part of the founding team of PayPal, it just looks like, oh, I'm jumping into this again. So part of it is I had been kind of thinking about, like, okay, where are there going to be roles for companies where the applications are going to be building their own large models? Where are there going to be roles for other companies in this very murky future? Because it's all going to be changing very fast. Landscape, market, compute, competition, talent, available things, all these complex variables moving around. So it's very hard to see what the interesting fixed points are that you can build companies around, or build longer-stage projects around, but you start trying to do that. And so I was doing that. I was talking to a number of different folks. Mustafa, who from DeepMind had obviously a driver's seat in some of this stuff, and what are the kind of opportunities of what to do. And I can't say a lot about Inflection because we haven't been public about it yet. And so I was kind of, okay, look, I'll be chair of your board and I'll invest. And we were working out product and working on product stuff and working on go-to-market stuff and all this. And he said, well, I'd like you to co-found this with me. And I was like, you know, I do have a day job, right? And he's like, no, like a day a week, right? That's what I want to be doing. I was like, oh, yeah, I'll do that. And it's in part because, for the same reason all of you are here, the impact of AI across every industry and across society can be just stunningly magnificent. There are challenges we need to work out as well. But one of the things that frustrates me about the general dialogue around this stuff is they go, well, ChatGPT has a problem with factuality. 
You're like, okay, right, there's stuff we need to do there, but look at all the other stuff it does. And people say, well, it has hallucination problems. Like, okay, well, the hallucination problem is also, by the way, creativity superpowers. What can we do with those creativity superpowers and that sort of thing? And so when Mustafa and I were working through that and thinking about what the things to do are, we think we have a still fairly unique, interesting approach that will be coming out before too long. And then I look forward to being able to talk about it more.
Elad Gil
That sounds great. It sounds super intriguing. And I think, are you all hiring right now, or…?
Reid Hoffman
No, they are hiring, yes.
Elad Gil
So I guess there's a related question, which is, where does value in the industry aggregate? And a lot of people are wondering what proportion of the successes in the industry go to the platforms like OpenAI or other APIs, and what goes to the application layer. It'd be interesting to hear your viewpoint in terms of, like, what does this ecosystem look like in three to five years, both from the perspective of app versus platform, but also, what's the natural evolution path on the platform side?
Reid Hoffman
Well, one of the natural ways to look stupid in the future is to make really concrete predictions in the present. So, with that caveat and deep fear and hesitation. I think there's fundamentally kind of two trends that are going on. One trend is on the scale compute thing, which includes large language models. I don't think that will be the only application of scale compute. I think the scale compute and size of compute will be driving one trend of progress. And part of that trend of progress is, where does, for example, an extra 20% of performance matter? So if you think, well, okay, a virtual doctor, well, an extra 20% really does matter. Maybe lawyer, maybe engineer. Maybe you go, okay, so on that large scale, like, okay, well, yes, we're innovating, and we've accomplished so much more with this smaller model. Well, what if you could do that smaller model now at scale on the things that really matter, where that cost curve of the large model being super expensive is worth it, either directly to what the large model folks are doing or as provisioning to startups and so on. And then I think the other trend will be highly tuned, more compact scale compute, whether they're large language models, foundation models, other things. And maybe they'll be tuned with very specific data. Maybe they'll be tuned for a very specific thing within kind of an operational cost. And I think you'll see progress, and maybe some of those will be open sourced. I think we'll get into more of that later. Now, in terms of aggregation, of where there are interesting opportunities for building companies, and where the kind of projects will turn into great companies in three to five years, and how much of it will be the large language model or scale model providers and others, there's definitely a lot of uncertainty there. I'm pretty sure that there will be at least multiple large model providers, and I think that's good for the overall ecosystem in the industry. 
And I think that they will also, in different ways, like OpenAI's real thing is beneficial AI, that is their top thing, and so they don't have the thing to say, oh, come build on the OpenAI platform and then we're going to go build that app. They have negative interest in that, less than zero, in doing that kind of stuff. So that gives a lot of entrepreneurial freedom and ability to run at it, which I think is good, plus the multiple providers. So I think that's good. And I think the other area is to say, even like a 50 billion parameter, 100 billion parameter model, or a one-exaflop model kind of trained the right way, there'll be a bunch of these things that will be open source. That'll be good for developers, good for creativity and so forth. But we are going to have to be careful of this stuff, right? Because, for example, I was skeptical about the early releases of Stability because of various forms of exploitative material or revenge porn or other kinds of things. Obviously, misinformation within the ecosystem is one of the things we have to deal with, and we'll have to take responsibility for those kinds of open models in various ways when you get these kinds of releases. And so I think it's one of the things that's important to track, but I think you will have a huge amount of generativity. And then I think it actually is kind of like the old-school rules: like, well, does your business have network effects? If your business has network effects, then kind of like whatever you're provisioning in either of these two trends, that will be good. If you're integrated into a lot of enterprises, that integration is another form of persistence in business. If you kind of get to first to scale, and in that first-to-scale, all-out blitzscaling you're doing the aggregation of customers, the brand, the aggregation of talent, aggregation of capital and all the rest, that could be it. And so all those old rules still apply here. 
Now, the question of course is, given so much interest and so much going on, figuring out how to do them exactly well, that's very challenging. But that challenge ultimately works to the advantage of startups, because places where you can run experiments, and you can run them without worrying about damaging your current brand or position or customers, that's one thing, and you can run the experiments really quickly and change. You can try things in terms of, well, maybe there's a big market here, maybe there's not, I can try it, because different startups can go after different things. You can respond quickly, you can say, well, we tried this for two months and now we're doing something entirely different, which is one of the things that large companies can't do. I don't think there's any large company that can do that, anyway. So that's where I think part of it is looking at now. That being said, to finish out an answer to your question, one of the things that my partner Saam Motamedi at Greylock and I wrote in the fall, because I'm 100% certain of this, is that within five years there will be the equivalent of a copilot for every profession, and I define a profession as: I process information and generate things that also have to do with information. Like a doctor generates prescriptions and diagnoses, like graphic designers generate kind of graphic designs. And I think there will be something for everything. In five years, I think, is a generous timeline; I think it will be sooner than that, and I think that is nearly certain. And that range of impact is part of why there is such amazing startup opportunity, that even with the current startups all going for it, look, they're going to pick some of them, not all of them.
Elad Gil
Yeah, that makes a lot of sense. And it also seems like if you start thinking about the split between startup and incumbent value for those things, areas that don't have a clear incumbent are probably great startup opportunities, and ones that do are maybe more mixed. So, like, accounting software, maybe that's wide open, while in medical it's mixed, depending.
Reid Hoffman
On who plays, and regulation, and all the rest of it.
Elad Gil
Yeah, exactly. I guess going back to the platform question, because different people have different views of what the platform world will look like, and by platforms I mean foundation models and APIs like what OpenAI is doing. And one view of the world is effectively everything becomes Taiwan Semiconductor, where there's one player who iterates through capital and process engineering and other things. You mentioned you think it's more likely to be at least an oligopoly market where there are multiple players competing. And one of the arguments being made in the industry right now is that with each subsequent model, the capital and compute scale goes up dramatically. So if GPT-3, I'm making up a number, was ten or 20 million a few years ago to train, and then GPT-4, again I'm making up the number, is 100 million, and GPT-5 is half a billion, and GPT-6 is a billion, or equivalent models. A, do you believe that that's the future, at least for the foreseeable future? And B, does that effectively prevent new entrants at some point, because the cost of entering is so high that eventually the market just consolidates into a few players?
Reid Hoffman
I do think that's part of the reason why the large model world will be an oligopoly, and as I know you think too, along the lines of cloud compute, which is there aren't going to be that many, N, cloud compute providers, because you just have to be doing a huge amount of real estate and power and kind of provisioning and all the rest of the stuff. And so there's going to be a limited N size of cloud compute. Well, that cloud compute will parallel, and part of the reason I'm optimistic about the oligopoly is because all of the cloud compute providers will have a natural gravity toward saying, oh, I should be a provider here too. And so that's the reason why I tend to think it won't be only one, but it will be N. Now, that being said, capital, like the capital scaling that you're describing, I don't think that capital is at all a problem for building new technology in the modern globalized network world. Even with the anti-globalization trends, a billion dollars is really not that much money. That's not the issue. The issues really will be compute availability, intelligence and integrity, kind of like handling it smartly and safely around data, that kind of thing, and questions around the geopolitics. All the rest of that will actually be the things, much more than the capital.
Elad Gil
So you mentioned that you think the cloud providers will have a natural sort of path into competing as platforms in this area. Could you extrapolate a little bit more in terms of what you think the role of various big tech will be in this industry?
Reid Hoffman
So the good news, I think, for society, for entrepreneurs, for markets, is that I think it's very natural. Like, there's a set of players who are the giants in cloud compute now, and the market is somewhat divided across them. There's a leader and there are chasers, but there's heavy competition going on. There are already new entrants trying to come and add into the pack, all of them. Because when you get to what is going to be consuming a lot of scale compute, it's obviously a bunch of ML functions and the engagement, and the lens of seeing the future is ChatGPT, which is the reason why probably everybody in this room knows that I released the podcast interviewing ChatGPT already, and that already captures the imagination and sees all kinds of possibilities. Like, obviously it has impacts on search, obviously it has impacts on education. Obviously some people go, oh, this is terrible, we should stop it because it's having impacts on education. And always the answer to that is, how do we shape it so we can make education better? That's where the discourse should be, and I'm going to do a podcast with ChatGPT sometime in the next week on that. Anyway, I think the good news about that is that all of them go, well, this is going to be a huge demand for compute, so we all need to be providers here, and we're all competing with each other. Then for entrepreneurs and competition and society, look, sure, cloud provider one might be the best, but cloud provider two and cloud provider three and cloud provider four will be there, and they will be competing on price and offering and quality and so forth. And that's why I have a very high belief in an oligopoly around these things. That is, we make oligopolies work. We have oligopolies in cell phone providers. We have a whole bunch of different things, like we have oligopolies in tech platforms. We can make them work.
Elad Gil
What do you think? So if you look at the semiconductor industry as an analog, at least each subsequent microprocessor that was released really merited an upgrade of your entire system, right? And so you had fabs that cost billions of dollars to build, and they kept getting more and more expensive, and then with each generation of chip everybody would switch over, and then the prior generation of chip sold at a tenth of the cost. It was much cheaper, and you could use it in all sorts of other applications. And you could argue that in the LLM or foundation model world something similar may happen, where when GPT-6 equivalents cost a billion dollars, GPT-4 is much cheaper, and you can train a model like that for a fraction of the cost, and then suddenly it's accessible for everything. Do you think that's the path that open source will take, or do you think open source foundation models will be roughly equivalent in the near future to the cutting-edge models?
Reid Hoffman
No, actually, I think your first one is the right thing. And that was what I meant by those two trends: there's the scale one, and then there's the other one. And I think the other one will include some open source and some not open source, safety, other kinds of things as parts of that provision, protection of data, that kind of stuff. And then for the large ones, I don't think those will be open source, for a number of different reasons. For example, it didn't surprise me that the image diffusion stuff was open source, because if you look at it, it's like, okay, it's a one to two billion parameter model. It's just a question. The only thing I didn't do when I wrote the essays around looking at how DALL-E would affect the world of work and how to think about this and so on, was I should have put in there a prediction of open source models within a month, because it's doable that way. Whereas when you're on these large ones, these are super expensive, very compute intensive, safety considerations are real, and so I think it's much less likely that open source will play on that trend.
Elad Gil
Do you have any extrapolation of the rate? Just like with Moore's Law, you have some eventual asymptote that's just guided by physical reality in terms of line widths on a chip. Is there, like, Reid's Law or something? Hoffman's Law of asymptotes?
Reid Hoffman
Well, boy, I wish I had one, that would be fun. And so what I would say is, a little bit of the real thing you're looking at is measuring compute, and it's a little bit less size of model these days. I think it's more compute. The Chinchilla paper and other work are kind of ways of looking at this, and I think that the fact is it doesn't even have to be linear increases, with linear compute and linear performance. That was the reason I was gesturing at it: sometimes you say, okay, I had a two x increase in compute but only a 20% increase in performance. But sometimes that increase in performance is hugely valuable, right? Like if you say, well, that's a 20% increase in productivity for every programmer, that's worth the two x. At some point it becomes not worth it, because you two x, four x, and you get to physical laws. Now, even Moore's Law, which as a law was kind of like, well, this was a prediction of a network of innovation. To do Moore's Law they had to do lots and lots of different innovations, and I do think we will continue to see lots and lots of innovations, like, how do you get the density of compute sufficient that you can continue to do larger-scale models? Because already we've got, okay, power and cooling and network density and all these other things, and we know that, okay, when you get past two-, three-nanometer chips, like, what are you doing exactly? How does that all play? Then of course people gesture to quantum and do a whole bunch of stuff. Maybe there's a bunch of interesting stuff going on in quantum, and for particular problems, very interesting. But as for the density of compute, I don't think we are near the asymptote for that, because there are all these different parameters that you can invent that aren't just going down nanometers of chips in order to make it work. And, for example, I think everybody in this room probably knows that network interconnect is one of the areas I think there's still a huge amount of upside in.
Elad Gil
Yeah, that makes a lot of sense. I guess it took AMD something like 30 years to catch up to Intel just in terms of microprocessor generations and stuff. Is there something similar here, where you can just keep iterating up a curve that could last a very long time? And then if you look at the players in big tech, and this will be my last big tech question: Google obviously had the team that invented transformers, they have the data, they have the compute, they have the capital, they have the products, they have the human feedback. What role do you see them playing in the future of this market segment? And similarly, what role do you think Facebook is likely to play?
Reid Hoffman
So let's see, Google obviously did a whole bunch of the work that helped generate all this, which is great. And they have some innovator's dilemma around search stuff. They have some worries around, because there is in general society such a starting place of skepticism, they are very worried about safety and dialogue and other things, and that causes them to adopt one path in doing this, and that gives room for startups, which is great. But I do think that they are working hard and have a bunch of very smart people, and they have scaled compute infrastructure. I think we'll see some interesting things out of them. I think PaLM was interesting, but they still haven't really released Imagen very much, and I think there's stuff there. So I think there's kind of this question about Google figuring out what its identity is and balancing these things, because as probably a lot of people know, there's a lot of dialogue about how to be responsible, and Google takes that very seriously. And then for Facebook, they obviously were very early with creating FAIR. Yann LeCun, super smart, one of the people I talked to when I started digging into AI at the very beginning. And there are a bunch of the human feedback loops that Facebook is a very natural place for. And while it's obviously very fashionable to just be critical of Facebook, when you have a billion-plus people using it, they use it for a lot of things they like. It doesn't mean I don't have criticism to offer; I've even gone on television and done that, and then had a conversation with Mark about why I did that. And I think they will also then play a serious role here. I think they will start kind of refocusing, probably some from the metaverse stuff, or including this stuff in the metaverse stuff. Because the thing is not just recommendation engines for your newsfeed, the thing is not just ads, the thing is how do these technologies now help amplify human beings, and what should be the role within a social network? 
I would predict that within a relatively short number of months, Facebook will start taking that more seriously.
Elad Gil
One of the things you brought up in a few different instances, both in the context of Google as well as open source, is AI safety. And it seems like there's a variety of different definitions of safety: everything from alignment, to AI someday subsuming us, to bad content or bad outcomes that can occur through the use of AI, to defense tech, and then there's almost like political orientation or discourse. What does AI safety mean to you, and what do you think is the right way to substantiate it?
Reid Hoffman
So it's too easy for a lot of people, including journalists, to go, oh, I'm doing AI safety, and what I'm essentially trying to do is say, until you're nearly perfect, you shouldn't release and you shouldn't be iterating. And that's a disastrous mistake on a lot of different fronts, on almost all the fronts, even some of the ones that are concerning, like potentially weapons and other kinds of things. And so you have to balance the fact that we have to get engagement, we have to learn, and that means that you have to then index to where are the major harms and where are the minor harms. And by the way, we have minor harms, you know, all the time. Like, for example, we have 40,000 deaths per year in car driving, right? And it's like, well, that's because we need the car driving. It's an important thing and an important part of what we do. And I think there's a bunch of stuff that's actually extremely positive about the future that we can shape with AI, and how do we get there across all these fronts. So what I tend to look at safety as is, like, okay, for example, cybersecurity with AI: it's like, oh, a worm gets released that takes the grid down. Well, that would be really bad, right? And so you kind of go, okay, what are the things that have societal-level impact on lots and lots of people? Then you say, okay, well, what if you're institutionalizing racism in parole decisioning? It was like, well, that would be bad. But by the way, one of the benefits of AI is we can study it, we can improve it, and we can actually, in fact, drive it out in a way that we can't drive it out of the system today as easily, right? And so getting on that path and figuring that out is actually, in fact, good, because we should be driving racism out of parole decisioning, penalties, what the sentencing is, and other kinds of things, because it's shameful to have that sort of thing. 
So, as another example, that's like: get out there, be learning and doing it, but do be asking these questions and do be fixing it and do be figuring out how to monitor it. And part of what AI folks should be doing is saying, look, we heard these questions and here's what we're doing on them. Here's where we are today, and we're planning on being better here, because that's where the dialogue should be, versus we're perfect. For example, anyone can more or less figure out how to get a large language model to say something stupid, because you say, okay, please pretend you're a Nazi, now answer this question. You're like, okay, well, it's pretending you're a Nazi. If you go to search today, you can find this stuff, too. The bar is not, is it nearly perfect. The bar is, what happens when you just type in and say, well, what's the history of Judaism? And if it gives you Nazi crap back, that's a problem. Versus if you say, what's the history of Judaism, and it gives you, like, a really interesting, thoughtful equivalent of scholarly work and so forth, that's great, and that's the kind of thing it's doing. So you have to index: how is it that we are moving with all due speed towards creating the good futures? How do we make sure none of the catastrophic impacts happen? How do we make sure that the other impacts stay minor, and we're improving and fixing? That's what we should be doing. And too often, most of the people who describe themselves as AI safety people are like, well, I'm here to stop you from releasing. And you're like, well, but then you can't learn. You can't engage. You can't learn. That's not it. It's like, how do we make the right release, and how do we make the release not so it's perfect, but so it's not catastrophic, has no real bad bads, and is learning and improving and ultimately can get to a better place than we are at as a society today.
Elad Gil
One of the things that was very striking to me during COVID, and we talked about this a little bit earlier, is the degree to which scientific discourse was suppressed or censored on the major social platforms. And if you look at the platforms that are building the main foundation models now, to your point, they may end up overlapping in terms of the people who own the social products. How should we think about political bias in models, scientific bias in models, or the lack of scientific accuracy? Because a lot of the training is going to be done by people, and people ultimately are going to have their own viewpoints on this stuff. So how do we think about that, or how do we approach that?
Reid Hoffman
Well, I think it's a really important thing. Look, we have some unfortunate forms of erosion and politicization of science from both the left and the right, and I think we should resist both, because part of the thing is, how do you get to truth? And truth is sometimes difficult. I'm not a believer that truth is something that only white men can speak about, truth for white men; I think lots of people can speak about truth. I think you have to be careful about it, because if I were to say, well, I understand deeply what the experience of being a person of color in modern American society is, people would be like, okay, well, you're an idiot. And it's like, no, but I try to learn, I ask questions, and I can say, look, I think there's a horrible problem in the criminal justice system, and I will comment on that. Right. And so you want to go to, what are the ways that we can be getting to truth, being cautious about it and careful about it? That's what's made all the scientific progress; that's what's made a lot of the technical progress. And how do we keep the discourse at that level, even as we understand some things might be complex? So, for example, you say, well, is IQ a good measure of intelligence or not? Well, it's at least a measure. I'm not saying it's a good measure or even the best measure. And on the question of, might there be some correlations between some genetics and intelligence? Probably. Now, if you start saying, well, but now we know it's this gene, you're like, okay, let's not be idiots and let's not politicize it. Let's try to figure out how to understand it, so that we try to make a society that's better for all of humanity.
And so I think it's super important when you're getting to these, because you get it in search engines: how is that reflecting scientific truth? I think you're going to want large language models and other kinds of scale compute to be auditable in various ways. You want to have the auditability available in a way that we can have discourse about, you know, is it doing the right thing? And hopefully that discourse will be truth-seeking.
Elad Gil
The other thing that people talk about a little bit, from almost like a safety-style consideration that's a little bit different, or a societal consideration, is job displacement. And I remember, I don't know if it was six, seven, eight years ago, I used to be invited to these forums because a lot of people were investing in self-driving car and self-driving truck companies. And there was this meme that AI was going to displace all the truck drivers by now, right, roughly this time frame. And I remember there'd be senators who'd come and poll Silicon Valley: what should we do about all the truck drivers who are about to be displaced? And none of that happened. Right. Self-driving has been a much harder problem than we thought. Where do you think is the biggest risk of job displacement, and what's the time horizon? Is it five years away, 20 years away?
Reid Hoffman
Well, I think McKinsey did a pretty good analysis, and one of the mistakes is going, this job gets displaced, this job doesn't. What happens is the tasks and the capabilities and the tools of each job change, and some change a lot and some change some. And that's a little bit of the point of the copilot thing. So I think there's a whole bunch of jobs, and it isn't just the care jobs like doctors and nurses and teachers; there's a whole bunch of jobs that we have nearly infinite demand for at a certain price point. I think engineering is like that. I think graphic design is like that, or design generally. Both my parents are lawyers, so I'm allowed to say this; I regret that I think lawyers are in that category. I think there's a whole bunch of stuff there. And even though we have the amplification, even though we have the job transferral, the changing status of the job, maybe, by the way, your former million-dollar-a-year job is now, in today's dollars, a $200,000-a-year job or a $150,000-a-year job. But fine, those are high-class problems, and we've had some of that: 30 years ago, doctors' jobs were much more economically beneficial than they are today. They're still not bad overall, but not in comparison to other things. And obviously we'd all like doctors to be paid a little bit more, and maybe they could be amplified. I tend to think the displacement tends to be overstated. Now, that being said, it doesn't mean that there's zero. I think ultimately we will want to get rid of the 40,000 deaths, address climate change impacts, gain efficiency, and redesign the city; I think we want to have autonomous vehicles. That's one of the reasons why I've invested in Aurora and in a variety of other companies in this space. I think that will happen. But then the question becomes, how do we transition well? Right. And that doesn't mean that it isn't painful, or that there aren't people that we need to be focused on.
But specifically, right now we have a huge shortage of truck drivers. With trucks, and with human nursing care and so forth, the question may not be, oh my God, are the robots coming for the jobs? The question may be, oh my God, can the robots get here soon enough?
Elad Gil
I always thought that was very ironic, because if you looked at the average age of a truck driver, even back then, it was in the mid to high fifties, and a lot of people were retiring. And you basically saw the participation in that market dropping. And some people talked about all that displacement, and you're like, well, that's good, because we don't have enough drivers right now. And it seems, to your point, that's true in medicine and a few other areas. Let's look at the areas you're most excited to invest in. I mean, again, you've been a legendary investor between Facebook, Airbnb, and a variety of other companies. You've invested in AI for a long time now; you're sort of one of the first in it. What are you excited about now? Are there specific startup ideas you're looking at?
Reid Hoffman
So, by the way, all of the major copilot areas are very interesting. And part of the thing is, too often technologists kind of go, well, I just have a really cool new technological idea. You really have to blend it with the typical kind of i-banker/VC advice: what's your business model? I'm interested in business models; if you have a great one, I'm very interested. What's your go-to-market? What's your differentiation? How are you going to establish an ecosystem around what you're doing? Is it going to have network effects, or is it going to have a compounding loop, such that when you succeed at the hard test and the hypothesis of what you're doing, you'll suddenly be on a roll where you could potentially create something that's industry-transforming, that brings what you're doing for customers and the ecosystem to an entirely new level? And that's part of the reason why, in addition to Inflection and Adept, we've also done Cresta and Snorkel and a bunch of other things. And it's part of the reason why I'm now responding to emails, and I think this will be true for another month, saying, if it isn't genuinely interesting scale compute, call it AI, right now I don't have time for it. And it doesn't mean you need to be doing your own large language model, because I think there are some very interesting things that are beginning to happen on top of OpenAI and others. But it's a question of, do you have an interesting conception of your products and services and where you're going with it? And this is all looking through a fog at night: move fast, figure it out, adjust. Do you have the capabilities for that? That is also very important.
Elad Gil
One of the big questions that I get from founders who are building in this area is the degree to which a wrapper on OpenAI is interesting or the degree to which you need to build your own model. And so how important do you think that is?
Reid Hoffman
I think a little bit of it depends on what the component is as you're building the business. So if you think that the providers of the models won't be providing a model that helps you, because of their own interests, then you have to build your own model. That's an obvious one. Another one is, okay, if I'm going to be using a large model or another model, do I have a dependency where that dependency will be a rug pull for me? And will they likely do that? Now, for example, will they be building larger models that have greater language capabilities? Yes, you should plan on that, because they have a natural vector; they're not going to stop at GPT-3 and say, oh, we're done. So you should be planning on that, and you should build your business and strategy around that. But I think you can still use the APIs for that. And then the question is, well, if you're going to do your own: one of the mistakes that's usually made around technology, even with open source, is thinking you build it once and stop. Every piece of technology, you are constantly reinvesting in it. So is this the thing that, for your market, for your product, for your company, you should be reinvesting in? We don't all build our own web servers; we use a variety of open source stuff, because it's like, no, that's not where you should be investing. We have a similar question right now, and because we're looking through a fog at night, running over uneven ground, there will be a temptation to build your own models, and sometimes that temptation will be right. So it's kind of a question. And that's part of the reason why the demand for open source, and iterating on open source, will be there. And so there's an answer that is clear as mud.
Elad Gil
I guess when I bucket the world of AI right now, I very unfairly kind of bucket it as image gen and the diffusion models, the large language models, and then kind of everything else. And everything else, obviously, is a mix of multiple markets and multiple technologies: everything from protein folding, which has largely moved to transformer-based models, on through to robotics and manipulating atoms in the real world. When do you think that latter piece will happen?
Reid Hoffman
One of the reasons why I've done very little robotics other than, like, Nuro, and it's also part of self-driving, is because the benefit of being in pure software, like, for example, being a copilot to professionals, is that the bits world is a lot easier than the atoms world. And even as you blend the two, when you blend in atoms, it gets a lot harder on a lot of dimensions. It's not just harder in one additional thing; it's harder in lots of things, with many more easy ways to break. So, for example, "be embarrassed by your first product release": generally good for software, generally good for internet software, generally not so good for hardware, kind of as a spectrum. And so I think that the atoms part of it will start fitting in in areas with constrained circumstances. For example, one of the reasons why I'm a believer in and a fan of the autonomous vehicle side is because you go, well, there are various places where this is actually a constrained space. Obviously, manufacturing robots would be a constrained space. Where are the constrained spaces? Because getting it to the unconstrained space will be solved, but with a huge amount of difficulty. And part of the thing about being an entrepreneur is, what's the simplest problem I can solve that's hugely valuable, right? It's not, oh, I'm going to be a gold medalist solving this super hard problem. Because, by the way, that occasionally happens, and we can all go, oh my God, that's really amazing, SpaceX, really amazing, but there are lots of ways to die getting there, and you want to actually, in fact, build. So it's the simplest problem that's really valuable, and that's why, for me, I'm very careful when I move off software. I'm like, what's the way that you're constraining the problem within the physical world?
Now, the other thing I would say is, okay, I already made my disclaimer about predictions looking foolish in the future, but I think that nearly for certain, within the next three years, we will have another mechanism. It may still be heavily driven by transformers and all of the instrumentation being built around that, but I really think the real driver is scale compute, and I think we will see other mechanisms that will really do the scale compute. I gestured at self-play games as an instance of one that's been there, and I think we will see those also coming.
Elad Gil
Do you think that's just going to be evolutionary systems against some utility function or something else?
Reid Hoffman
Well, the macro view of it is, how do you have a performance fitness function? You run it at, call it, one exaflop; when you run it at two exaflops, what does the performance function look like, and is it sufficiently better? And then you go, okay, well, how about two, how about five? And then, what are the mechanisms to do that?
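The loop Reid is describing can be sketched in a few lines: measure a performance metric at one compute budget, double the budget, and keep going only while the gain clears some threshold. This is a toy illustration only; the `perf` curve, the threshold, and the numbers are made-up assumptions, not anything from the talk.

```python
# Toy sketch of the "fitness function vs. compute" loop: keep doubling
# compute while the performance gain from doubling is still worth it.
# The perf() curve below is an invented saturating power law, not real data.

def perf(exaflops: float) -> float:
    """Stand-in performance function with diminishing returns in compute."""
    return 1.0 - 0.5 * exaflops ** -0.3

def worth_scaling(budget: float, min_gain: float = 0.01) -> bool:
    """Is doubling compute from `budget` worth it, given a gain threshold?"""
    return perf(2 * budget) - perf(budget) >= min_gain

budget = 1.0  # start at one exaflop (arbitrary units)
while worth_scaling(budget):
    budget *= 2
print(f"Under this toy curve, doubling stops paying off near {budget} exaflops")
```

The interesting empirical question, which the toy glosses over, is what the real `perf` curve looks like for a given mechanism, and whether some other mechanism has a steeper one at the same compute.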
Elad Gil
Are there other areas of AI research that you're most excited about right now?
Reid Hoffman
Well, I've been paying attention to some versions of Bayesian and other kinds of probabilistic programming, because I think there might be something there. I think one of the things that's interesting is the loop with the creation of synthetic data; I think that's interesting. I'm picking almost random examples, because what a time this is: we can create magic, and there's just a ton of really interesting stuff going on.
Elad Gil
What's the direction where, if AI became 100x more performant, you'd be disappointed in the outcome for the world?
Reid Hoffman
Well, technologists generally feel that a lot of technology tends to be, call it, non-centralizing, decentralizing, and actually, in fact, most technology tends to have centralizing elements. So you create the Internet with a decentralized protocol, and then there's a set of companies that become the central anchors around that, and then nation-states have certain things. And I think almost all technology, even cryptocurrencies as they get more broadly adopted, even though it's supposed to be a fringe thing, will find that there will be pockets of centralization, even if your protocol is totally decentralized and all the rest. Because there are reasons why human beings gathered in cities and said, okay, well, you're the blacksmith and you're the soldier; it's a similar kind of centralization, and technology enables that. And so it isn't that we can fight the fact that AI will have some centralizing elements. It's, what are those centralizing elements, and how do they work? So, for example, democratic Western societies have centralized powers and functions, police forces, militaries, but they're accountable to the people in various ways. That's what you want for the new kinds of power. And there are various ways to do that. And by the way, it's not having the government go build AI; like, good luck. I mean, if it could do it well, great. But when you look at moonshots from government-driven projects, they're almost always an adjunct to some kind of war effort, right? And that's a little too broad, but the major ones: Apollo, the Cold War, right? And look at where the space industry went after that arced down and required revitalization. My disappointment would be if it wasn't playing that kind of elevation-of-humanity side. There are a lot of ways you could use it.
Like, for example, when people say, well, okay, with all these social networks you can have these rebellions, but also the governments can study them and go and oppress specific individuals that they see in the pictures. Well, I want it to be used in the pro-humanity way and not the anti-humanity way. And that's one of the reasons why I'm such a strong advocate that we need to be building these technologies to reflect the kind of values that we hold dear, and not slowing down and having them built in other places which may have values that we would have some challenges with.
Elad Gil
If you extrapolate some of these intelligence curves in terms of what's coming over the coming decade or two, when do you think AGI will happen or do you think it will happen?
Reid Hoffman
Well, I think it's almost certain that AGI will happen at some level. But, for example, right now I look at this work with scale compute as a progression of savants, more and more amazing savants. Now, you could say the thesis is that that progression of savants gets you to AGI. That's a credible, intelligent thesis. But it's a thesis, not a QED logical proof. And I think it's these savants, and that's part of the reason for the human amplification and the human in the loop and all that; that's what I see in the visible future. That's why I wrote the DALL-E essay the way that I did back in June, July, whenever that was. Now, with AGI: we ourselves are physical entities. There are all kinds of questions around how biological intelligence evolves, all kinds of questions about how biological intelligence works together with various kinds of silicon intelligence, whether it's Neuralink or other kinds of things, and all kinds of questions about how the AIs work. There's no reason to think that AI cannot be fully general, that a silicon intelligence can't be as fully generally intelligent as a biological intelligence. But then the question becomes, okay, well, brass tacks: do you see line of sight right now? And the answer is, low probability, right?
Elad Gil
What do you think it's missing?
Reid Hoffman
Well, with due respect to all the people who think that this progression is en route, I tend to think that we have a kind of generalist flexibility that I just haven't seen in these systems yet. Is it just scale that gets it there? That's why I said savant progression, because the difference between a savant and a generalist in a human being is that kind of flexibility. It's like, oh, Elad's so much better than me at chess; how am I going to beat him at the chess game? Well, what I do is I put some really good wine on the table and I hope he gets drunk.
Elad Gil
I hope so too.
Reid Hoffman
Yes, but we change the game. When we go, oh, we can't win that game, we try to change the game. And that's the flexibility, that kind of non-savant-ness. Because we've had savant human beings before; we've had human beings who are incredible at math and sciences and then can't really navigate their way around the street, that kind of thing. And so the question is, on that general side, I tend to think, in the current systems, I still haven't seen it. Now, maybe that's because I'm not one of the geniuses who's going through the iterations. But part of the reason I leaned all the way into this is, even if you said all we have is a progression of savants, we are going to have magic, right? We already have magic, and it's going to be really stunningly good for industries and society and humanity.
Elad Gil
I think at this point we'll open it up to questions from the audience. We'll probably take about three questions, and then, as mentioned, unfortunately Reid has an engagement right after this, so he'll have to take off, but everybody else should feel free to stick around. So maybe we can take three questions from the audience.
Reid Hoffman
So I will also repeat the question, because I suspect people in the back can't hear it. The question was roughly, copilot for different businesses: say someone's thinking about a copilot for business X, what would be the advice? So you have to kind of study what the dynamics of the business are. One of the things I think is going to be still broadly true of most AI things is something I started kind of preaching almost 20 years ago, which is: co-invent your go-to-market with your product. Don't build your product and then say, okay, now I'm going to go to market. What's the combination of the go-to-market with the product? Now, in consumer internet days, that was frequently virality or SEO or something like that; fine, but be doing that as part of what you're doing. And maybe it's just sales, but part of what I think is interesting, even in a transformation of the enterprise, is the way that even enterprise models, with Slack or other kinds of things, are changing the nature of how sales models work. So be thinking about that in what you're doing as well. And then, I wouldn't overly sweat TAM and so forth. TAM frequently looks small now and gets larger later. Uber is kind of the canonical current example of that: it's like, oh, it's just black cars, and it's like, no, it's a redefinition of the transport network. Okay, much more interesting. So don't overly sweat that. But do be thinking about, okay, what does that adoption speed and curve look like? You'd want, as part of the go-to-market, something that would have a fast adoption curve. You'd want to have something where, if you've taken all the risk and done all the innovation, you have some natural moats in the business. Network effects has been bandied about; most people say it and don't really fully understand what it is, because there are nuances to network effects, so really look into that, but that kind of thing.
So those are the kinds of attributes you'd be looking at. It's a little bit kind of like don't forget some of the key lessons from the consumer internet startups and even the modern enterprise startups over the last couple of decades.
Question from audience
Generative AI has clearly captured the attention of the early adopter crowd, and, kind of building on the GTM piece, what kind of excitement is it eliciting in enterprise or business buyers today? If we're thinking about a B2B sales motion, what are the things that are really resonating with that audience, based on the conversations you may have had with leaders of businesses and whatnot?
Reid Hoffman
Well, the business leaders read all the same news and hear about ChatGPT just like all of us do. And so I've been getting tons of questions. They don't want to be left behind; they understand that part of how your business dies is you miss the new wave of the market, whether it's the technological underpinnings, the way you engage with customers, whether it's marketing, sales, customer service, whatever else, what your supply chain looks like, how that all works. So they're like, okay, we get it, it's a new wave. Now, they tend to want to know whether they should really flip over and invest, and they're not as good at experimenting. So that's part of the thing, and that's part of the reason why things end up, literally: I was on the phone earlier, on WebEx, with all of the leadership of Ford Motor Company, because they're kind of thinking about what the transformation is. And they're asking these questions, right? And they understand that when they think about, for example, ChatGPT, they don't just think about, okay, how does that affect how our company works, but also, well, what would the engagement with a car look like, and should we think about that? Those were some of the kinds of questions that were coming out. I would say that everybody is super curious and looking at it, and again, looking through the fog at night, they're kind of like, oh, gosh, okay, I can't do everything, I can only do a few things, what should I be doing? That's the dilemma they're in. And, amplifying the answer to the earlier question about go-to-market: part of it is, choose an area. If you're doing a copilot, choose an area where you think you will get good adoption, where people won't just be very slow. Because even if you have an awesome product in the right transformational industry, that could still kill you.
Elad Gil
You've been through two big transformations before, or at least two, right? There's the first wave of the Internet, and maybe even the first two waves of the Internet in some sense, and then mobile. How does this compare in your mind to the interest in those areas versus today, from these larger enterprises?
Reid Hoffman
At least as big. And the thing that's interesting is that there are natural kinds of hype cycles: this one will be the biggest, this one is bigger. And by the way, it might be true, but it might be true for a relatively banal reason, which is that it's building upon the Internet and mobile and cloud and so forth. It's like, okay, so it's the biggest, but for the same reason that, minus the pandemic, each new year there's a new box office record for a movie: well, more people are watching movies and going to movies, so you have a new record. It's not necessarily that big of a deal. But I think it probably is the biggest, because it builds on all those, and those continue.
Elad Gil
We have time for a few more questions. Maybe we go to the back for a question or two.
Question from audience / Reid Hoffman
Yeah. A lot of the best data sets for fine-tuning models or working on top of them are private rather than public. I'm curious what you think the runway is for parameters and publicly available data sets. And then, kind of in the vein of, if you're Ford Motor Company and you have this massive data set that has a lot of applications, how do you think about partnering with people to use that?

So obviously we have a bunch of things as a society to work out around data, where I think most of the general discourse is somewhat broken, because it's like, well, who owns the data? And it's like, okay, it's really complicated. One of you takes a picture of me on the stage: well, do you own the picture? Do I own the picture? Does Notion, hosting the event, own the picture? It's a complicated thing. And it's much less about ownership and much more about what positive things can be done and what negative things should be prevented, right, as part of this. And so there's a whole set of things on that. But I do think, and this has been said, it's not original to me, that data is a new form of oil. And that oil will be important, and organizations will realize that they have a bunch of oil. They also need to maintain trust, because one of the things this problem revolves around is trust, and part of that is, if you're using data that comes from some, call it, constituency, are you delivering some value for that data? People get a lot calmer when it's like, oh, you're giving me something for it, right? Like, I get something for it. I'm not really buying into the idea that there should be a data commons where everyone's getting a tenth of a penny or whatever, because I think that's just too hard to do. And by the way, part of the whole thing about social networks and a lot of games is actually, in fact, that giving me some joy is much more worth it than the penny that you might give me, right?
So the trade of, we give you some services and you give us data, is a very good trade. So I think it's, how do you do the trust maintenance, and how do you give something valuable back as part of that trust maintenance, and solve that problem as part of what you're doing? And I think, as the world goes forward, it's going to be not only, here's how we're using the data, but here's how we're using the data to try to benefit the constituencies who are participating in and generating the data, and here's why it works that way. Some part of that, I think, will be necessary to navigate the modern world using this data. This is obviously a very 50,000-foot principle, and there are weeks and weeks of things to say about this.
Elad Gil
One framework that I think is kind of helpful for that, too, is a two-by-two matrix of cost of generating the data versus scale of the data. You can actually start identifying pockets that are interesting: valuable at a certain scale but not very costly to generate, and vice versa. So you can segment the world that way and think about data in a deeper way, because I feel like it's not all one thing; people often conflate it, and people often think there are data moats when there aren't. And so you kind of have to ask, how expensive would it be to generate this data from scratch, and how valuable is it to use, and then let that drive how you think about your business model or what to build.
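Elad's two-by-two can be sketched as a toy classifier. Everything here is illustrative: the quadrant labels, the 0 to 10 scores, and the example data sets are assumptions I've made up to show the shape of the framework, not anything stated in the talk.

```python
# Toy sketch of the two-by-two data framework: cost to regenerate a data
# set from scratch vs. its value at scale. Thresholds and examples are
# invented for illustration.

def quadrant(cost_to_regenerate: float, value_at_scale: float) -> str:
    """Classify a data set; both inputs are arbitrary 0-10 scores."""
    costly = cost_to_regenerate >= 5
    valuable = value_at_scale >= 5
    if costly and valuable:
        return "potential data moat"
    if costly and not valuable:
        return "expensive but low-leverage"
    if not costly and valuable:
        return "valuable but easily replicated (no moat)"
    return "commodity"

# Hypothetical examples of plugging data sets into the matrix
print(quadrant(9, 9))  # e.g. years of proprietary sensor logs
print(quadrant(2, 8))  # e.g. scrapeable public text
```

The point of the exercise is the diagonal: only data that is both expensive to regenerate and valuable at scale behaves like a moat; the rest is either a cost center or a commodity input.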
Reid Hoffman
And connecting to an earlier thing, when you asked about my research interests: that's one of the reasons why I actually think the whole field of synthetic data generation will be super interesting.
Elad Gil
Applied Intuition is right here in the front. The person in the back, just...
Reid Hoffman
Because Qasar was sitting here in the back.
Elad Gil
Yeah, I had nothing to do with that. In the back, in the middle there, please.
Reid Hoffman
So, regarding AI safety: we obviously need a point between too safe and not safe enough. And right now in the debate, the major players are commercial AI companies, which benefit from keeping models closed. So do you think that might cause this point to be unfairly too closed?

Well, one of the benefits of the fact that there will be a whole bunch of competition, and an oligopoly does have real competition, is that I think the market will to some degree sort that out, although it may even sort it out to being a little bit too unsafe, maybe. I do think it's a natural conflation to say, well, it's great for my business to keep it closed, and I'm going to claim safety. And of course you have to pay attention to both of those. It's not necessarily an evil thing to say, look, it's great for my business to keep this closed, because I'm reinvesting in what I'm doing and all the rest. It's kind of like saying, well, my business has moats and I'm getting some good margins from that, and I'm reinvesting and creating something that's really valuable, that's actually good for society. And so I think the real question, where I've become much more active, is when you're actually making startups much more difficult. And that was part of the reason why I was saying, even if it's through APIs, which have an increased safety coefficient and a bunch of other things, I think there'll be room for a lot of startups, and that will be fine. So this area doesn't bother me yet. I'm not saying it couldn't bother me, because, for example, there was huge generativity because of the internet, with open systems; great. Mobile is much more challenging, because you have two major mobile OSes, both of which quell certain kinds of innovation in a similar kind of way. And that bothers me, because that does quell startups, whereas I don't think this is doing that yet.
Elad Gil
And one last question, maybe from the front. Do you want to go?
Question from audience
Yeah. So you touched a little bit about robotics. I'm wondering, why do you think that we haven't seen mass adoption of consumer robotics if it's a function of there not being one single very definable pain point or just a function of the fact that the cost is still too high and with this new wave of AI platform shift, do you see that changing? And if so, what is the catalyst?
Reid Hoffman
So I think the basic challenge is that it's a little bit like I was saying earlier about the bits and the atoms, which is when you get to atoms, it's a whole bunch more expensive. Not only do you have to do all the work to develop it and make it super safe, you have a supply chain, you have inventory, you have a high burn rate per month that you have to clear, and you have to clear it reasonably. You have to have investors that believe that can happen and that it will be a sufficiently valuable thing that they should be putting money into it instead of a software thing. So all of that kind of conspires. And nevertheless, of course, people have tried consumer robotics, and I tend to be one of those people who goes out and buys the new thing each time it comes out, like, oh, an AIBO, I'll have a robotic dog. But it's kind of like, okay, I played with it for a couple of hours and then went, that was so cool, I'm done. And so you have to get into something that's more than that, and it's hard. And generally speaking, it was one of the things I told the Ford folks, and I really respect what they and other folks in their industry have done, like Ford has done a great job of anticipating climate need and doing all that stuff, that working with hardware is frankly hard. And we want more of that, because as a society we want more of this. And so we don't want capital only flowing to software, but it becomes much harder. And when the capital markets are free, they tend to go to crypto because they go, oh, that's really easy, not today, but it was really easy, to make a bunch of money. And so you're like, okay, we want medical stuff, we want hardware stuff, we want this kind of stuff. So we have to kind of push in that direction some, because it's a lot harder.
Elad Gil
Well, if you could please join me in thanking Notion for hosting, Cynthia Gildea for all the hard work on this event, Reid for attending, and then all of you for coming. And feel free to hang out for another hour or so. And then again, you'll be booted out at around eight. And thanks, everybody, for coming.