The $1 Billion Talent War: Why Enterprise AI Is Failing
Why are companies paying $100 million for 25-year-old AI experts if progress is driven by macro variables?
Key Takeaways
84% of workers are eager to use AI agents for productivity, creating a massive gap with executives who are overconfident in their non-existent AI strategies.
The primary blocker is technical: data fragmentation and quality, leading to organizational paralysis among 'Monks' and 'Mountaineers.'
Sequoia Partner David Cahn calls the $100 million pay packages for young AI talent a symbol of 'desperation in the ecosystem.'
The Paradox: Eager Workers, Overwhelmed Managers
An EY study reveals a profound disconnect: 84% of employees are eager for AI agents, expecting improved productivity and work-life balance. Yet, most organizations lack clear strategy, training, and management structures, leaving 63% of non-managers less inclined to pursue management in the new hybrid human-agentic era.
“84% of workers said that they were eager to embrace agentic AI in their role.”
For this weekend's Long Read slash Big Think episode, we have a new study from EY.
AI is so fast-moving that I'm always interested to see the latest data about how people's feelings about the technology are changing and evolving.
And this new study from EY deals with two themes that have been really pertinent across the scope of 2025 —
The first is, of course, the rise of agent deployments.
As we've seen from studies from another consulting firm, KPMG, between Q1 of this year and Q3 of this year, agent deployment nearly quadrupled from 11% back in the first quarter all the way up to 42% last quarter.
Agents are very quickly moving out of the realm of the theoretical into the realm of the real.
At the same time, throughout this year, companies have been dealing with a pretty serious communication overhang.
Last December, Writer.ai did a study where they surveyed 800 employees and 800 executives and found just a wild gap between how those two different groups thought about AI with regard to their companies.
In a huge percentage of cases, executives were much more bullish and much more convinced about the quality of their AI initiatives than were those employees.
On the question of whether their company's approach to AI was well-controlled and strategic, 73% of executives said that it was, as opposed to just 47% of employees.
When it came to the question of whether their company had a high level of AI literacy, 64% of executives said yes, versus just one-third (33%) of employees.
And on the very basic question of whether their company has an AI strategy at all, a full 89% of executives said yes, while just 57% of employees did.
So that's the setting into which this study comes.
And the central paradox that they found was on the one hand, excitement and a readiness to embrace agentic AI.
And on the other hand, a very meaningful set of concerns about what it meant for their own future.
Now, obviously, this paradox is going to take a lot more than better technology to resolve, but let's take a step back and get into the details of this survey.
From a methodology perspective, the survey involved over 1,100 US employees across six industries: banking and capital markets; wealth and asset management; consumer products; manufacturing, chemicals, and industrial products; oil and gas; and technology.
A pretty cross-cutting range, with all of the people surveyed working for companies with annual revenue of a billion dollars or more.
It was pretty evenly split between managers and supervisors and non-managers, 586 managers versus 562 non-managers.
There was a wide generational range.
It was fairly evenly split between Gen Z, Millennials, Gen X, and Baby Boomers.
And most importantly, this was pretty recent.
The survey was in the field between August 8th and September 3rd.
So between six and 10 weeks ago, which as these surveys go is pretty contemporary.
For example, all of this happened after the release of GPT-5.
So let's talk about some of the findings.
The first is that there is a lot of enthusiasm when it comes to agents.
84% of workers said that they were eager to embrace agentic AI in their role.
Now, if you weren't paying attention closely, that might be surprising given how much there is a constant media drumbeat around agent-led disruption and worker displacement.
When you dig into what they expect to have a positive impact on, it starts to become a little bit clearer.
86% said that they expected it to have a positive impact on their productivity.
83% said that they expected it to have a positive impact on their work experience.
And 82% said that they expected agents to have a positive impact on their work-life balance.
So these are very clearly a set of folks who understand that these technologies could unlock greater efficiencies, not just for their company, but for the way that they interface with their own work.
By being more productive and cutting out some of the drudgery, they expect to have a better time at work, getting more done more efficiently, and, because of that, to translate some of those gains into a better balance between their personal and work lives.
Also, for those who have started to dig into AI, they're getting more confident in their usage.
90% of those who said that they're already using agentic AI are confident in their abilities to use AI agents today.
So this is some of the good news.
And honestly, it is very foundational good news.
It provides a base that everything else can be built around.
But as EY puts it, workers are definitely feeling lots of contrary and frankly, sometimes contradictory emotions around AI and agents.
So let's talk about their concerns.
The first is, of course, job security.
Despite the fact that 84% are excited and ready to embrace agentic AI in their roles, 56%, with huge overlap between the two groups, are also concerned about job security.
And to me, this just reads like rational people.
People understand that these tools are too powerful and too useful to be ignored, and they understand in specific ways how the tools can benefit them.
But they also understand that the tools are powerful now and only getting more so, and that this could change what their job prospects look like in the future.
Concern about job security is higher among non-managers than among managers: 48% of managers had job security concerns because of agents, while a full 65% of non-managers, nearly two-thirds, said agents made them concerned for their own job security.
The next concern, one that will be very understandable to many of you who have sought out a daily podcast to help keep you informed, is the feeling of overwhelm around AI.
61% said that they are overwhelmed by the constant influx of new agentic information.
Basically, there's always something new happening and it's very, very hard to keep up.
And what's interesting is that even among those who use tools, they are still overwhelmed by how much is happening in the field.
64% of those workers who are using agentic AI are also overwhelmed by the amount of new agentic tools that are being introduced in the workplace.
What's more, in addition to just being overwhelmed by the sheer amount of information that's happening, people also feel like they're falling behind their peers.
54% of respondents overall said that they felt like they're falling behind their peers on using agentic AI.
That included 48% of managers and 61% of non-managers, meaning that the vague and general overwhelm of so much information is translating into a more specific concern around falling behind relative to everyone else around you.
The next category of concern is around management.
And the short of it here is that people recognize that we are going into a fundamentally different era of management based on a fundamentally different era of how work happens.
As we move from purely human workforces, where software is simply a tool for assisting humans in getting work done, to hybrid workforces in which software in many cases does the work itself, we now have a new type of hybrid human-agentic workforce, and that brings up whole new categories of questions around how to manage it.
How do you navigate between what an agent should do and what a human should do?
How do you handle questions of handoff and complex workflows that involve moving from human to agent and back to human and back to agent and so on and so forth?
There are also other questions, like role bleed: when a much wider array of people have access to more capacity because of agentic tools, how do you keep people in their lanes rather than spilling over into one another's?
Are you even supposed to keep people in their lanes?
Or should product people just become designers?
Should designers just become coders as well?
These are questions that are going to take lots of time on task to figure out in practice; they can't just be theorized about.
And in any case, it's showing up in the numbers.
53% of managers that were surveyed were concerned that they're not good at integrating and managing a hybrid human and agentic workforce.
And 63% of non-managers said that they're less inclined to pursue management because of their concerns around managing agent human teams.
One of the things that I felt reading a lot of this was just sympathy and a desire to remind people that no one is an expert yet in these things.
There is literally no such thing as someone who is an expert in managing hybrid agent workforces.
There are only people who have spent more time on it than others who are doing their best to distill that experience into insight to help others get caught up to speed even faster.
For those managers who are concerned that they're not good at integrating and managing a hybrid human-agentic workforce: even if they're right and they're not good at it, the gap between their skills in that area and the world's best is radically smaller than just about any other skills gap they've ever faced, because this is so new for everyone.
The Three Archetypes of AI Failure and the 'Opportunism' Fix
Head of Research Nufar details why technical readiness is the lowest score in enterprise audits, identifying three failure archetypes: Magpies (pilot hell), Monks (analysis paralysis), and Mountaineers (endless foundational projects). The solution is 'intentional opportunism': start with high-value use cases while gradually building foundational data blocks.
“Motivation and ideas and desire for AI outpace the willingness to fix the underlying infrastructure, meaning that the technical readiness scores are frequently the lowest.”
This week, we're digging into the single biggest barrier that we see, which is around data and technology.
Once again, we're trying to make this extremely practical, useful, and applicable, and I hope it's super useful for you.
If you want to dig in deeper and figure out which parts of this are relevant for your organization and how you might think about your AI plans going forward, shoot me a note at nlw@besuper.ai and I will get you in touch with the right people.
For now, let's dig into part two of the agent readiness series.
Nufar, welcome back to the show for part two of this agent readiness series.
Thanks for having me.
You know, in the previous session of this series, we discussed cultural readiness for agent adoption, and we emphasized that it's one of the most important and often under-addressed aspects of agent readiness.
This time, we will focus on the data and the technology readiness for agent adoption.
Our data shows that there is one clear universal truth, and that is that motivation and ideas and desire for AI outpace the willingness to fix the underlying infrastructure, meaning that the technical readiness scores are frequently the lowest across all the dimensions of our audit.
The question is why most companies get stuck.
Often it's motivation, ideas, and FOMO that keep them from taking the right infrastructure actions, from both a data and a technology perspective.
And we try to categorize the main archetypes of companies or executives getting these decisions wrong.
And the first archetype that we identify is the magpie.
These are companies that are after all the fun and games of building agents: they want to brag about it in marketing and on social media, but they just can't be bothered with the drudge work of sorting through data and systems.
And this is where companies get stuck in pilot hell.
And unfortunately, there are many, many companies that fall into this category.
The other archetype that we sometimes see is the monk.
Those are the ones that look at all the issues in a typical company, especially one with many legacy processes and systems, and realize how much they would need to do to put the infrastructure for full-blown agent adoption in place.
They become overwhelmed, sometimes to the point of getting stuck in analysis paralysis.
And then the last one, the mountaineer: like the monk, they look at everything they need to do.
And they go about it in a very, very structured manner.
And this leads them to initiate long and sometimes very comprehensive kind of foundational projects in order to sort out the data and the processes as a gateway to even start the very first agent use case.
And the reality is that in today's market, there is no time to wait for data or infra foundational projects before you start working on a use case.
And by the way, these foundational projects didn't really work even prior to the days of gen AI.
So I didn't believe them then and I don't believe them now.
And it's very clear that these three archetypes will not get you far with the desired results for data and technology readiness.
And the question is, what will?
I think the answer is an approach that does work, one I call intentional opportunism.
From what I've seen working here, I recommend taking a very pragmatic approach that blends opportunistic, low-hanging-fruit ROI projects with a more structured vision for the future.
So it's about starting now, but starting smart.
And the reason I recommend this approach is that, on the one hand, I want you to get going with agents sooner rather than later, because there is so much to learn just by doing it.
Also, if a use case is chosen well, the value realized fairly quickly will generate way more motivation, and that's the best way to get all of your stakeholders on board and help everyone understand what the technology can and cannot do.
So for these first couple of use cases, I can allow you a free pass, a waiver on serious foundational work: just get the use cases out the door and start working.
This is why I want you to choose a use case that is a good combination of enough visibility with enough value.
But in parallel, as you start to accumulate more understanding of agents and of your shortcomings, you should identify areas where there is good justification and a clear line of sight to start building foundations and reusable building blocks, to get ready for full-blown agent adoption.
So it's not either-or, it's both: being very, very opportunistic at the beginning, then intentionally and gradually getting your infrastructure in order.
So let's walk through the key challenges and the solutions that are related to the data and the technology readiness.
And here I'm going to draw a lot on the audit findings, as well as my experience with multiple companies and many, many years working in this field.
And we're gonna start here with the data readiness.
The big issue here is that data fragmentation and quality and access are typically the most common and severe limiters to scaling AI and agents.
And this is where many initiatives basically die.
In literally all the companies that we've audited, there is some version of this statement: the internal systems rarely connect well to one another.
And it goes way beyond the almost-cliche of garbage in, garbage out.
If your sales data appears in three different systems and there is no easy way to unify it automatically, not just because the systems are separate but because the same entity has three different names, this can become a showstopper.
So there are two major challenges that I see in most companies.
The first is compliance and data privacy concerns.
A new technology being adopted by the masses, often, in our case, as a grassroots movement, presents multiple risks, and data security and privacy are among the most significant.
But on the flip side, many companies are so afraid of data leakage that they just won't approve anything.
And that also creates a problematic stagnation.
And moreover, in many cases the customer contract will include a "do not use with AI" clause, making it legally risky and very hard to navigate as well.
Another challenge that we often see is that business process know-how is tribal knowledge maintained by a select few, leaving everyone else puzzled and dependent on them.
And with no documentation, and the knowledgeable people often too busy to share with others, we're left with a process we cannot teach an agent to execute.
But don't despair.
I have a playbook exactly for you.
So these are my top five recommendations on what you should do as intentional opportunists to accelerate the data readiness.
And pay attention because these are all very important.
First and foremost, and fortunately for us, we can now use AI for many data challenges.
We can use AI to map data entities to natural language.
We can refine and improve the data.
And we can also use or build retrieval-augmented generation (RAG) systems and semantic similarity systems for fetching related pieces of data.
This is something that we didn't have prior to the days of Gen AI, and it's an extremely powerful way to overcome many of these challenges that I just mentioned.
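To make the entity-matching point concrete, here is a minimal sketch of the kind of semantic-similarity lookup described above, using the sentence-transformers library; the model name and the record strings are illustrative assumptions, not tools named in the episode, and real entity resolution would need more than a cosine score.

```python
# A minimal sketch of semantic-similarity retrieval over messy entity names.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

# The same customer entity named three different ways across three systems:
records = ["Acme Corp.", "ACME Corporation (US)", "acme corp - north america"]
query = "Acme"

# Embed the records and the query, then rank records by cosine similarity.
record_vecs = model.encode(records, convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_vec, record_vecs)[0]

for rec, score in sorted(zip(records, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {rec}")
```

In practice you would pair a ranking like this with human review or business rules before merging records, but it shows why embeddings help where exact string matching fails.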
Second, to help you tackle undocumented processes: the best thing you can do for your group or your company is to have the subject-matter expert work in front of a recording setup, with whichever device or method you want, and just work normally while narrating what they do.
Then you take that video recording of their screen and their narration into your LLM of choice and get it to output a standard operating procedure, an SOP.
And then you give it back to the expert just to review.
By doing that, you can save a significant amount of process-documentation time.
And I've seen multiple companies use this process very well: with a handful of hours spent by the experts, they now have fully documented processes and a great starting point for any agent to work from.
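As one hypothetical shape for that recording-to-SOP pipeline, here is a minimal sketch using the OpenAI Python client; the model names, file path, and prompt are assumptions for illustration, not tools named in the episode.

```python
# A minimal sketch of the narrated-recording -> SOP workflow described above.
from openai import OpenAI

client = OpenAI()

# 1) Transcribe the expert's narration from the recording's audio track.
with open("expert_walkthrough.mp3", "rb") as audio:  # illustrative path
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# 2) Ask an LLM to turn the narration into a draft standard operating procedure.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You turn narrated process walkthroughs "
         "into numbered standard operating procedures (SOPs)."},
        {"role": "user", "content": transcript.text},
    ],
)

# 3) Hand the draft SOP back to the subject-matter expert for review.
print(response.choices[0].message.content)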
Next, I want you to identify the common foundational data sources and focus on creating a dedicated solution and easy access specifically for them.
So don't go for the niche ones or the very complicated ones; identify the ones where there will be the highest ROI if you go in and make them accessible, sometimes even via a third party.
I'm seeing many companies move these extremely important data sources into a unified third-party vendor and provide access through that.
But focus on them.
You don't need to connect everything because it's just going to be a never-ending story.
And then I want you to invest in data cleanup or radical changes only where the ROI is extremely large.
From what I've seen historically, and as I mentioned, most data-lake projects failed because they tried to do everything, which is way too expensive relative to the return.
But if you're very opportunistic here, selecting the ones that will yield a return on your cleanup and processing investment, it becomes much more manageable.
And lastly, for the newer systems or companies, you can just build it right from the get-go.
Like, you know that agents are coming.
We're talking about it all the time.
If your company is new, or your data source is new, or you just have a clean-slate option, build it right so that agents will have access.
This means data that is accessible and logically organized, with a lot of metadata everywhere you can put it.
And ideally, with as many structured standard operating procedures as possible, so that agents can work directly from them.
So these are the main things that you should do in order to be data ready.
Let's talk a little bit about the security and governance we just mentioned.
And there are a few kind of non-negotiables when it comes to security and privacy.
The main one will be you have to be very, very clear and strict about the data access roles.
And in many cases, you have no option but to anonymize the data and use it as best you can.
That's unfortunate, but that's the reality in many cases.
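As a toy illustration of what "strict about access roles, anonymize where you must" can look like in code, here is a minimal sketch; the roles, fields, and pseudonymization scheme are hypothetical, not something prescribed in the episode.

```python
# A minimal sketch of role-gated field access with anonymization as the fallback.
import hashlib

FIELD_ROLES = {  # which role may see each field in the clear (illustrative)
    "customer_name": "sales_admin",
    "revenue": "finance",
}

def anonymize(value: str) -> str:
    # Deterministic pseudonym, so records can still be joined across systems.
    return "anon_" + hashlib.sha256(value.encode()).hexdigest()[:8]

def read_field(record: dict, field: str, role: str) -> str:
    # Strict default: unless the role is explicitly allowed, return a pseudonym.
    if FIELD_ROLES.get(field) == role:
        return str(record[field])
    return anonymize(str(record[field]))

record = {"customer_name": "Acme Corp.", "revenue": "1,200,000"}
print(read_field(record, "customer_name", "sales_admin"))  # clear text
print(read_field(record, "customer_name", "intern"))       # pseudonymized
```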
And finally, as we mentioned before, the true unlock is often to create a secure sandbox where employees know they can experiment within well-defined boundaries without any risk to company data.
And that's the best way for you to make sure that whatever you're building does not create unexpected data leakage.
So that's the security part.
The $100 Million Pay Package: Desperation in the Ecosystem
Sequoia Partner David Cahn discusses his biggest miss: failing to predict the astronomical pay packages—up to $100 million for recent grads and $1 billion for brand names—offered to perceived AI experts. He argues this stems from desperation and a psychological bias to wildly overestimate the individual's 1% contribution to a potential trillion-dollar outcome.
“If you are a brand name that everyone recognizes your name, you can get a billion dollar pay package right now for a single individual. I totally did not see that coming.”
I think there were two big misses last year.
I think the first big miss was these like big talent acquisitions.
If you had asked me the probability a year ago that, you know, if you're a 25-year-old recent grad from an elite university who is perceived to be an AI expert, you can get a $50, $100 million pay package right now.
And if you are a brand name that everyone recognizes your name, you can get a billion dollar pay package right now for a single individual.
I totally did not see that coming.
And if you had asked me a year ago to predict that, I would have said you were crazy.
So sometimes I do think the beauty of AI is like reality is stranger than fiction.
And a lot of crazy things happen.
Do you think those scaled pay packages are justified?
I think they're symbolic of this sort of desperation in the ecosystem where it's like, we need to eke out progress.
We need to prove that all these investments are worth it.
And I think there's this logic that gets really abused in the venture world and in the tech world, which is like, hey, if I increase the probability of making a trillion dollars by 1%, that's worth a ton of money, right?
That's worth $10 billion.
Sure, that's true, but it's very easy to overestimate the 1%.
Is it 1%?
Is it a hundredth of 1%?
Is it a thousandth of 1%?
Is it a 10,000th of 1%?
Our brains are very bad at reasoning about numbers at that scale.
I think to the extent that you believe that hiring this very impressive researcher increases the probability you win by 1%, I totally can see why you would justify a billion dollar pay package for an individual.
That said, I think we are psychologically biased to overestimate what that percent contribution is.
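To see how sensitive that logic is to the probability estimate, here is the expected-value arithmetic Cahn is walking through, run across the probabilities he lists; the trillion-dollar outcome is his hypothetical.

```python
# Expected value added by a hire who shifts the win probability by delta_p.
outcome = 1e12  # a potential trillion-dollar outcome (Cahn's hypothetical)

# 1%, a hundredth of 1%, a thousandth of 1%, a ten-thousandth of 1%:
for delta_p in (1e-2, 1e-4, 1e-5, 1e-6):
    ev = delta_p * outcome
    print(f"delta_p = {delta_p:.4%} -> hire 'worth' ${ev:,.0f}")
# 1% of $1T is $10B, which is the justification; a ten-thousandth of 1% is $1M.
```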
It may be the case that there are broader macro variables, which I'm sure we'll talk about later in this discussion, that are actually driving progress in AI and that no single individual can change.
Looking at these pay packages, I'm very upset that my mother didn't push me towards a more engineering-heavy path.
Doesn't everyone feel that way?
I think that's probably the universal reaction to seeing these packages.
I'm like, Mom, you should have done better.
Bad parenting.
You encourage me to do English?
Really?
Come on.
War and Peace doesn't quite make it, does it, when you're getting paid $3.5 billion by Zuck?
What was the second?
I think the second one, one thing we talked about on the podcast last year: I predicted that Meta was going to do really well.
And I think that prediction was clearly false in a 12-month time horizon.
I thought that the vertical integration that Meta had was going to be an advantage.
And I think that Meta, these 100 million packages are coming in large part from Meta because they haven't performed as well as they thought they were going to.
The reason I thought Meta would do well is that it was vertically integrated and founder-run.
I sort of continue to believe that in the fullness of time, it is possible.
And I think the dramatic actions that Zuck is taking represent this.
It is possible that I will be proven right on a longer time horizon, which is to say that Zuck's going to fix the problem.
It's amazing what founders can do.
He's so focused on this.
He's spending all of his time on it.
But I think if you look back a year ago at the prediction that Meta would do well, I think you would say wrong.
Have you changed from a buy to a sell on Meta?
I think the dramatic action that Zuck's taking represents just how deeply invested in this he is.
And I think it also shows us what founder CEOs can do and why founder CEOs are different than non-founder CEOs.
I mean, there's all these studies of like, if you just invest in the basket of founder CEOs, you will outperform the basket of non-founder CEOs.
And I think what Zuck is doing represents that.
And so I remain optimistic about Meta long-term.
You mentioned vertical integration being part of your thesis.
I totally agree with you and was probably shaped by hearing you, to be quite honest, David.
You said to me data center and model teams need to be coupled, kind of going to the vertical integration elements.
Do you stand by that?
How do you think about that when hearing that today?
And does OpenAI and Anthropic not having that vertical integration challenge that?
Well, I think the simple version would be: OpenAI and Anthropic are now steel, servers, and power companies.
And that's like a big change that's happened in the last 12 months.
And so in many ways, OpenAI and Anthropic are becoming more and more vertically integrated every day.
You're seeing a lot of announcements around them developing their own chips.
Every day you hear Sam Altman talking about gigawatts of power and procuring his own power.
And so I think you will continue to see the big labs moving vertically down the supply chain.
And that's been one of the biggest trends of the last 12 months.
Do you think we'll continue to see that?
We saw Poolside recently announce a two-gigawatt data center that they're building out in conjunction with CoreWeave.
Do we think all model providers will need to be vertically integrated in this way?
I think that competitive pressures will push all of the model providers to spend more time on this and to have teams focused on this.
So I think the answer is yes.
I do think that this is a trend that is going to be durable.
When we think about where we are today, everyone says bubble.
You've heard it.
I've heard it.
Do you think we're in an AI bubble?
I do think we're in an AI bubble.
I also think, to your point, a year ago when we had our last conversation, it was a very contrarian thing to believe that we're in an AI bubble.
From Bits to Atoms: Why Data Centers Are the New Moat
David Cahn confirms the AI bubble but focuses on who will survive: consumers of compute. He reiterates his prediction that the physicality of AI—'steel, servers, and power'—is the true constraint. He notes that major model providers like OpenAI and Anthropic are rapidly becoming vertically integrated infrastructure companies, viewing construction ability as a competitive moat.
“I sort of had this sense that people were thinking very abstractly, sort of in a bits perspective about AI, but they should be thinking in an atoms perspective about AI.”
I do think we're in an AI bubble.
You can see the fragility.
Everybody can see the fragility.
The thing that I think is more interesting is who's going to survive the bubble?
Consumers of compute benefit from a bubble.
Because if we overproduce compute, prices go down, your COGS goes down, and your gross margin goes up.
The lesson that punches you in the stomach in venture is you can't make a company succeed.
How would you respond to "Sequoia were asleep at the wheel when it came to defense," not being in Helsing and Anduril, the two clear market leaders in the category?
This is 20VC with me, Harry Stebbings, and one of the most downloaded episodes of last year was David Cahn at Sequoia.
So much has changed in the last year.
I wanted to have David back for a refresh.
I wanted to understand how he thought about where we were today.
For those that missed the last show, first, it's a must-listen.
David is a partner at Sequoia Capital and one of the world's leading AI investors, and before Sequoia, he was a general partner at Coatue.
I loved this conversation today.
Let me know your thoughts, harry@20vc.com.
But before we dive into the show today, I love seeing the team come together to make this show happen.
What I don't love is trying to keep track of all the information, the data, and the projects that we're working on across dozens of platforms, products, and tools.
That's why we use Coda, the all-in-one collaborative workspace that's helped 50,000 teams all over the world get on the same page.
Offering the flexibility of docs with the structure of spreadsheets, Coda facilitates deeper teamwork and quicker creativity.
And their turnkey AI solution, the intelligence of Coda Brain, is a game changer.
Powered by Grammarly, Coda is entering a new phase of innovation and expansion, aiming to redefine productivity for the AI era.
Whether you're a startup looking to organize the chaos while staying nimble, or an enterprise organization looking for better alignment, Coda matches your working style.
Its seamless workspace connects to hundreds of your favorite tools, including Salesforce, Jira, Asana, and Figma, helping your teams transform their rituals and do more faster.
Head over to Coda.io slash 20VC right now and get six months of the team plan for free.
Coda.io slash 20VC.
And speaking of tools that give you an edge, that's exactly what AlphaSense does for decision making.
As an investor, I'm always on the lookout for tools that really transform how I work, tools that don't just save time but fundamentally change how I uncover insights.
That's exactly what AlphaSense does.
With the acquisition of Tegus, AlphaSense is now the ultimate research platform built for professionals who need insights they can trust, fast.
I've used Tegus before for company deep dives right here on the podcast.
It's been an incredible resource for expert insights.
But now with AlphaSense leading the way, it combines those insights with premium content, top broker research, and cutting-edge generative AI.
The result?
A platform that works like a supercharged junior analyst delivering trusted insights and analysis on demand.
AlphaSense has completely reimagined fundamental research, helping you uncover opportunities from perspectives you didn't even know existed.
It's faster, it's smarter, and it's built to give you the edge in every decision you make.
To any VC listeners, don't miss your chance to try AlphaSense for free.
Visit alphasense.com forward slash 20 to unlock your trial.
That's alphasense.com forward slash 20.
And if AlphaSense helps you spot the winners, AngelList helps you hire them.
If you're listening to 20VC, you know we have a really freaking high bar.
Well, AngelList is the modern platform used by the best-in-class venture funds, where over 40% of top endowments and banks are LPs.
Their customers include a top five venture firm, 20VC, and they now have, check this out, $171 billion of assets on the platform.
They combine an all-in-one software platform with a dedicated service team that moves as fast as you do.
One manager said this awesome quote, AngelList feels like an extension of my fund.
Another said AngelList gives me total peace of mind, the attention to detail, lightning-fast response time, and just real sense of ownership from the team are exactly what I need to stop worrying about back-office ops.
So if you're starting a new fund, don't be a moron.
They're incredible.
Head over to AngelList.com forward slash 20VC to learn more.
You have now arrived at your destination.
David, I love your writing.
Our episode last year was one of the most downloaded shows.
I had, like, the CMO of Meta tell me that it's the single show he has forwarded to more people and cited more often than any other.
Not to make you nervous or set the pressure for this episode.
But thank you so much for joining me again, dude.
Thanks for having me, Harry.
You're always very kind.
Now, the year of the data center sounds wonderful.
We had an amazing discussion last year.
What did you predict last year, David, that happened and we are seeing in action now?
So we talked last year about this concept of steel, servers, and power.
And I think if you remember, you know, rewind to summer 2024: the big conversation at that time was compute, models, and data.
That's what everybody was talking about.
And I sort of had this view that everyone was underestimating the physicality of these data centers.
I'm on the front lines.
I'm talking to people every day.
You know, you talk to people, they're flying electricians to Texas and they're trying to buy out generator capacity and generators are sold out until 2030.
And so how do you get in line and how do you do that?
And so I sort of had this sense that people were thinking very abstractly, sort of in a bits perspective about AI, but they should be thinking in an atoms perspective about AI.
And I think that prediction came true in two ways.
The first way is the best trade of 2025 was the AI power trade.
A lot of Wall Street people made a lot of money betting on the fact that power was going to be the constraint.
You know, you hear Sam Altman now talking about gigawatts every day.
He's not talking about dollars anymore, right?
So we're moving away from dollars and we're moving toward gigawatts.
And I think that transition has fully happened in the last year.
The second way I think it was right, and it's funny now, like a year and a half later, you see this on the cover of The Economist and the cover of The Wall Street Journal and the cover of The Atlantic.
The mainstream media has now really picked up on this narrative of the physicality of AI is what translates to GDP.
I mean, GDP is an imperfect metric, and it generally captures physical things more than virtual things.
And so GDP now is picking up all of this construction boom that's happening, all the steel that's getting created, all of the physical stuff that's happening in the AI data centers.
And you're seeing these stories, which I think are true, that AI is now one of the biggest contributors to GDP growth in the United States.
And so I think that's the second way in which that prediction has played out.
Does this contribution to GDP growth run contrary to your $600 billion question in terms of where the revenue will come from?
Well, the $600 billion question, and maybe just to remind folks what that is: it's basically a very simple equation that says if you invest $150 billion in NVIDIA chips (and this was 2024 when I wrote this), that's about $300 billion of data center investment, and to pay that back, the person using the compute needs to earn a 50% gross margin.
So there's about $600 billion of revenue that needs to get generated.
If you redo that analysis in the summer of 2025, it's about $840 billion.
So it's grown, but it hasn't grown dramatically.
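For readers who want the equation spelled out, here is a minimal sketch of that back-of-the-envelope math; the two-times chips-to-data-center multiplier is implied by the figures Cahn quotes rather than stated as a formula.

```python
# Back-of-the-envelope arithmetic behind the "$600 billion question".
chip_spend = 150e9                  # NVIDIA chip purchases, 2024 estimate
datacenter_capex = 2 * chip_spend   # chips ~half of data center cost -> ~$300B
gross_margin = 0.50                 # margin the compute consumer needs to earn

# Revenue the end users of that compute must generate to pay back the capex:
required_revenue = datacenter_capex / gross_margin
print(f"Required AI revenue: ${required_revenue / 1e9:.0f}B")  # -> $600B
# Cahn says rerunning the same equation in summer 2025 lands around $840B.
```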
And so the question behind the question was, is the customer's customer healthy?
We know that the customer is healthy.
We know that people are buying all these data centers.
We know that people are building these data centers.
We know that those stocks have all gone up.
We can see that.
But is the customer's customer healthy?
Is there actually an end user for this compute?
I don't think that's been answered.
The question last year, which was a valid question, was: if everyone's spending all this money, why hasn't it shown up yet?
Because people hadn't put the shovel in the ground yet.
I literally wrote a piece last summer called AI is Shovel Ready.
You know, the shovel is going to start hitting the ground.
And so now the shovel is hitting the ground.
We're mid construction on a lot of these projects.
One of the predictions I made last year, in addition to saying it was going to be the year of the data center, I said, hey, we're going to have these construction delays.
We're going to have issues now in building out these data centers.
And The Information has done a very good job of reporting on this.
But I think we're at the beginning now of seeing some of that play out as well.
Are we going to see a mass proliferation of delays on data center construction, do you think?
I think we're going to see variability.
One thing I'm always interested in as an investor is like there's winners and there's losers and there's variability.
And I'm very skeptical.
Whenever anyone tells me like everybody is going to win or everybody is going to lose or everyone is going to do anything, like there's always variability.
Imagine a race, you have a track race, like there's somebody in the front and there's somebody behind and someone's faster than the other person.
And so I think with data center construction, one of my core perspectives that I've been developing over the last 18 months of writing about this is that construction itself is going to be a moat.
The ability to build things is hard.
I think we underestimate that.
And I think we continue to underestimate that because we sort of say, oh, well, it's fine.
Like, everyone's going to do it.
The timeline is two years.
Okay.
But like, there's a lot of complexity that goes into that.
And by the way, the complexity compounds when everybody is doing the exact same thing at the exact same time and everyone is trying to buy from the same vendors.
And I've written a lot about the supply chain for that reason.
Because you really need to care about not only, OK, Meta and Google are both building a data center, but who's the guy that they're calling and who's the guy that he's calling?
And you've got to follow it all the way down the supply chain to get to the core of really what's going on.
There's so many things I want to unpack within those.
About this digest
Release notes
We remix the strongest podcast storytelling into a tight, twice-weekly digest. These notes highlight when this edition shipped and how to reference it.
- Published: 10/28/2025
- Last updated: 10/28/2025
- Category: tech
- Chapters: 4
- Total listening time: 38 minutes
- Keywords: the enterprise AI readiness crisis: data, talent, and strategy
