
AI Exchanges: Will falling costs drive new opportunities?

2025/2/4

Goldman Sachs Exchanges

@Allison Nathan: I raised the question of whether the low-cost AI tool released by the Chinese company DeepSeek will affect the massive AI spending of the big tech companies, and whether it means Jim Covello was right to be skeptical of the huge capital expenditure. The market has raised fundamental questions about the AI build-out.
@George Lee: I think DeepSeek's emergence actually answers some of Jim Covello's principal concerns. He had worried about the enormous capital cost of pre-training infrastructure and about whether the technology could get cheap enough to be used broadly inside the enterprise. DeepSeek's pricing suggests the cost of intelligent tokens will keep falling steeply, trending toward marginal zero. That does not mean the earlier spending was useless; on the contrary, it may mark the beginning of more people participating in pre-training at lower cost. I am optimistic about the technology and believe existing and planned infrastructure will be well used. The longer-term trajectory of capital costs may need to be reassessed, but the history of technology suggests that price declines are offset by increased usage. I am excited about AI applications across domains and believe they will drive innovation and efficiency gains.


Welcome to Goldman Sachs Exchanges. I'm Allison Nathan. This year, we've decided to look closer at the rise of AI and everything it could mean for companies, investors, and economies. So we're bringing you this series of special podcast episodes we're calling AI Exchanges, which I'll be hosting alongside my colleague, George Lee. George is the co-head of the Goldman Sachs Global Institute. He's the former CIO of Goldman Sachs.

And before that, he was co-chairman of the Global Technology, Media, and Telecommunications Group in our investment banking business. George, thanks for joining me for this series. Thank you, Allison. It's great to be doing this with you. Look forward to it. I am super excited about this. The goal here is to help listeners understand the impacts that AI is having today and the ways it could change in the years ahead, and the implications, as I just said, for businesses, investors, and economies.

And George, there's really no better time to be having this conversation because, of course, we've recently seen some really fundamental questions about the AI build-out ripple throughout the markets quite powerfully. We all have observed the volatility we've seen in the markets this week. It's been well reported that a China-based AI company called DeepSeek has rolled out a low-cost AI tool, which is raising questions about AI infrastructure and prompting doubts

about the massive spending on AI by America's biggest tech companies. George, this is right up your alley. You've been having a pretty active debate with Jim Covello, Global Head of Equity Research at Goldman Sachs, about whether the benefits of AI will justify what was expected to be an enormous cost

from these companies in terms of developing and supporting the technology. And that spending is well underway, as we all know. And you've been on the bullish side of that debate. How does the emergence of this low-cost competitor shift that discussion? I'm just going to ask you, does it mean that Jim was right to be skeptical about the huge CapEx

spend all along? First of all, Allison, it has been great to have this running discussion with Jim. And it's such a momentous and dynamic time, as you discussed. So it makes for really fruitful and interesting dialogue. I would say, though, that to your point precisely, I would say much to the contrary. I think this answers some of Jim's principal concerns, not all of them.

but some of them. Two of his understandable and well-founded concerns were the eye-watering capital costs of building this pre-training infrastructure, which you referenced.

And then his belief that the technology would never really get cheap enough to have broad utility inside the enterprise. And so this development, which may, I would underscore may, promise much more efficient ways of pre-training, may indeed deep into the future reduce the amount of capital we have to allocate to at least that part of this ecosystem. And further, DeepSeek's pricing measures suggest that we will continue to see very steep declines in per token costs.

Those declines make the incremental cost of an intelligent token trend toward marginal zero, which is a very powerful concept.

But the question is, lots of companies have been spending lots of money. Do you think that spending is essentially not going to be useful spending? Oh, no. Again, to the contrary. First of all, this doesn't mark the end. Perhaps it marks the beginning of even more pre-training activity by more people who can afford to embark on this at lower capital cost. And so people talk about Jevons paradox: as the price of something declines, abundance tends to increase.

Moreover, I think one important part of this debate is that DeepSeek has performed some really interesting engineering hacks to address pre-training costs.

These models have already shifted towards much more dense and abundant computing at inference time, at test time. And so if the price of intelligent tokens declines, that in itself breeds abundant new use cases. And a lot of the computation is at inference. I think all the infrastructure that we've sunk capital into now, and what's planned over the next few years, will be well used.

One could ask questions as you look farther out the horizon, years 3, 5, 7, 10, whether we'll need the same trajectory of capital cost. But that just comes down to a judgment on this idea of, hey, you've got a less expensive commodity. Will that breed more volumes that offset that price decline? And so the history of technology would tell you that indeed that will be the dynamic.
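To make that price-versus-volume trade-off concrete, here is a minimal sketch of the Jevons dynamic George describes; the per-token prices and usage multipliers below are purely illustrative assumptions, not figures from the episode.

```python
# Illustrative sketch of the Jevons-paradox dynamic discussed above: if usage
# grows faster than the per-token price falls, total spend rises even though
# each token is far cheaper. All numbers are made-up assumptions.

def total_spend(price_per_million_tokens: float, tokens_millions: float) -> float:
    """Total spend = unit price x volume."""
    return price_per_million_tokens * tokens_millions

# Hypothetical baseline: $10 per million tokens, 1,000 million tokens consumed.
baseline = total_spend(10.0, 1_000)      # $10,000

# Price falls 20x, usage grows 10x: spend falls even as adoption expands.
price_led = total_spend(0.5, 10_000)     # $5,000

# Price falls 20x, usage grows 50x: spend rises -- the Jevons outcome.
jevons = total_spend(0.5, 50_000)        # $25,000

print(baseline, price_led, jevons)
```

Whether total spend falls or rises depends entirely on whether usage growth outpaces the price decline, which is exactly the judgment being debated here.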

But again, I think this marks a really interesting advance in the technology. It addresses many of the concerns market observers like Jim have had and promises the abundance of these tokens that for places like Goldman Sachs will open up new horizons of use cases. Right. And more cost effective use cases. And essentially, could we see the applications and the adoption of them speed up?

Absolutely. And Jim cites in our dialogue a few applications where the technology is useful for us, and yet it remains prohibitively expensive relative to human capital. And this actually, I think, changes that equation. And again, this will embrace new use cases we can't even really imagine today.

Which will be fun and interesting. And again, this is part of a repeated history of the way that technology happens. This feels like a very discontinuous moment because of the sharpness of the decline in potential capital costs and token costs. But if you scope back out and you view it in the history of Moore's law over the last 120 years, even spanning outside the Silicon Age, or this phenomenon in general,

I think it's a measurable but small blip in a steeply declining overall curve. That's really interesting. Let's bring Kim Posnett into this conversation. Kim is the global co-head of investment banking in our global banking and markets business and the former global head of the technology, media, and telecommunications group.

Kim, welcome to the discussion. I can't think of a better guest. Thank you for having me, guys. Do you think about this phenomenon the same way? Allison and I just had a fun discussion about the trade-off between price and volume and Jevons paradox. Do you see it the same way and are your clients thinking about it the same way? I do, not surprisingly, George.

If you put aside the global race for AI supremacy, that's a separate conversation. I think this is, and you and I have talked about this over the past few days and weeks, unambiguously good news. The cost of compute is coming down dramatically. The price per token is coming down dramatically. That means these models are becoming more cost efficient. It is great for the world that this will be cheaper for all of us.

And Jevons Paradox, you mentioned, I think that is absolutely in play where you see increased efficiency that will lead to increased adoption and consumption. And there's so many examples across the business landscape where you can see expanded use cases. We've all been talking about automating repetitive tasks, but imagine automating complex

processes. So legal assistance, financial services, scientific research. I was just with an AI researcher last week talking about modeling the immune system and modeling the brain. Think about the implications for the healthcare industry if we're able to achieve that. My favorite example, and I want to know what yours is too, is ubiquitous conversational AI. So sort of like personal assistants for everyone in every context.

Personally, professionally, I do think you'll see that there's so many more use cases we could go into, but those are some examples. I agree. I think the conversational interface is very powerful and requires a shift in some ways in how you use the technology. And I find myself with my AirPods walking down the street talking to voice assistants and looking a little peculiar in the process. But it's a very powerful modality to get access to that intelligence technology.

for sure. Also, a parallel phenomenon that people are talking about, this got drowned out for just a minute by this whole DeepSeek episode, is the rise of agents and some of the new approaches there. What do you think about that? Are we early in that? Is that a whole new vector of improvement for these models? I think we are early days in AI agents. I believe also that they will be ubiquitous over time. Who knows what the timeframe is? You tell me, I think you agree.

I do. And I've spent the past weekend playing with Claude computer use and the new Operator product from OpenAI. And it's very early. It's sort of a proto experience. Yep.

but it hints at something that's very powerful. Sorry, can I just ask, when we say AI agents, for people who are not that close to AI, what are we talking about? First of all, I'm so glad you asked that question because there's a very broad set of definitions around agents. I'll give you two. One broader, which is a system of models, computation, and resources that complete linked tasks

and allow you to complete more multi-step, complex tasks in business or personal life. So the canonical example is: I want to take a trip to Phoenix, help me book the flight, help me book a hotel, help me book a rental car, and it's autonomously executed. The applications that we're talking about are, for now, more consumer oriented. You basically give the application instructions, say, to resupply something that you need for your house from Amazon, and it brings up a webpage.

It takes a picture of the webpage. It's able to discern those pixels, identify text entry boxes and buttons. It takes hold of your cursor and begins to execute on your behalf. And it's really extraordinary to watch. It is early, by the admission of the people who are developing these capabilities, and they want to enroll people in refining it. One market observer offered a very funny characterization of it, which is...

You ask it for a task, it brings up a browser, and it starts executing on your behalf in websites. And it's so slow and deliberate and herky-jerky while it's doing it that it reminds you of teaching your grandparents how to use the web in 1997. But nonetheless, it's an inspiring direction of travel for the technology.
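As a rough sketch of the computer-use loop described here (screenshot the page, have a model read the pixels and pick an action, then act with the cursor), the following is a minimal, self-contained toy; every function in it is an illustrative stand-in, not any vendor's actual API.

```python
# Toy sketch of a computer-use agent loop as described above. All helpers are
# illustrative stubs, not real library calls or any vendor's product.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str           # "click", "type", or "done"
    detail: str = ""    # e.g. which button, or what text to type

def capture_screen() -> bytes:
    # Stand-in for taking a screenshot of the current webpage.
    return b"<pixels>"

def propose_action(task: str, screenshot: bytes, step: int) -> Action:
    # Stand-in for the model reading the pixels (buttons, text boxes)
    # and deciding the next step; here it simply finishes after two steps.
    return Action("click", "search box") if step < 2 else Action("done")

def execute_action(action: Action) -> None:
    # Stand-in for moving the cursor / typing on the user's behalf.
    print(f"executing: {action.kind} {action.detail}")

def run_agent(task: str, max_steps: int = 20) -> None:
    for step in range(max_steps):
        shot = capture_screen()
        action = propose_action(task, shot, step)
        if action.kind == "done":
            break
        execute_action(action)

run_agent("Reorder paper towels from Amazon")
```

A real product adds perception, safety checks, and user confirmations, but the screenshot-decide-act loop is the structure the conversation is pointing at.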

But I think it's an important question because you've got this sort of near vertical advancement of these AI models. This is an example of that, which is driving this increased demand for, and you mentioned this earlier, Allison, scalability, efficiency, and sustainability. And the increased scale and complexity of these AI models today

require huge amounts of capital. Equally, they require huge amounts of energy, power, land, and data centers. And that leads to your question around CapEx spend. It's why I agree with George's answer on the CapEx spend today. Was that for naught? I don't think so. I'll give you a data point, which is fascinating. If you look at the CapEx spend of -- and I'll just pick four companies --

Amazon, Alphabet, Meta, and Microsoft. Across the four of them, they spent over $116 billion of CapEx in 2022. That was the year that ChatGPT was launched to the public. Roll forward two years later, to 2024, and they spent just under $200 billion. That's almost double in a two-year time frame. And you've seen the recent announcements. Meta has announced that they'll spend $60 to $65 billion on AI-related

CapEx spend this year, Microsoft $80 billion. You saw the announcement, the Stargate AI infrastructure JV announcement just last week. So there's, I think,

a huge amount of CapEx spend that is appropriate right now, given these near vertical advancements. And I agree with George. The question is, in three, four, or five years, as these models become more efficient, it's unclear what the CapEx requirements will be in the medium term. But we are hearing of very low CapEx spend from this low-cost, China-based competitor. So are companies actually rethinking the amount of dollars that will need to go towards this technology?

Over the medium and long term, perhaps. And I think it relates directly to what we see on the efficiency curve of these models. And I don't think we know the answer yet.
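As a quick back-of-envelope on the CapEx figures cited above, using the round numbers from the conversation rather than precise company disclosures:

```python
# Back-of-envelope on the CapEx figures mentioned above (round numbers from
# the conversation, not exact company disclosures).
capex_2022 = 116e9   # combined Amazon, Alphabet, Meta, Microsoft CapEx, 2022
capex_2024 = 200e9   # "just under $200 billion" two years later

growth = capex_2024 / capex_2022 - 1          # total growth over two years
cagr = (capex_2024 / capex_2022) ** 0.5 - 1   # implied compound annual growth

print(f"two-year growth: {growth:.0%}")       # ~72%, i.e. approaching double
print(f"implied annual growth: {cagr:.0%}")   # ~31% per year
```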

I agree with that. So Kim, one of the other predicates, one of the ingredients you pour into the top of these models to create this intelligence in addition to power and data center capacity that you cited is data itself. And in your career as a banker, you've done a lot of very data centric transactions. You know that ecosystem well. Observers say that we're running out of human generated broadly available data very quickly, if not already. So that sets the mind, I think, of model makers towards

synthetic data generation or unlocking data that's behind firewalls or protected and proprietary.

Is there any chance that a sort of economy of data emerges around that? Yes, I think so. And my view on this has evolved over the past few years. What is the single greatest bottleneck to AI? Is it data or is it power? I think maybe a year ago I would have said data. I think today I would say power. So anyway, we can debate that. But I do think that the landscape of data economies is emerging and evolving. So there's new data marketplaces, as George alludes to, but there's also the reshaping of existing markets.

And so last year, you started to see new creative partnerships form and data licensing deals. So that was publisher partnerships. That was social media partnerships. That was stock photography partnerships. And I think you'll continue to see that because data has become so valuable. And then...

On whether we've run out of data, I think you'll start to see things like synthetic data marketplaces. So that's where AI-generated data mimics real data. Think of something like your medical records, and you use that synthetic data to train models. Or personal data marketplaces, where you can opt in and sell your own personal data to a business to train a model.
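As a toy illustration of the synthetic-data idea described here (learn the statistics of real records, then sample artificial ones to train on), the following is a minimal sketch; the field names and values are invented for illustration and bear no relation to any real dataset.

```python
# Toy sketch of synthetic data generation: fit simple statistics to a handful
# of "real" records, then sample artificial records with similar statistics
# for model training. All values below are invented.
import random
import statistics

real_records = [  # pretend these are real (and private) patient measurements
    {"age": 34, "resting_hr": 62},
    {"age": 51, "resting_hr": 71},
    {"age": 47, "resting_hr": 66},
    {"age": 29, "resting_hr": 58},
]

def fit(field: str) -> tuple[float, float]:
    values = [r[field] for r in real_records]
    return statistics.mean(values), statistics.stdev(values)

def synthesize(n: int) -> list[dict]:
    (age_mu, age_sd), (hr_mu, hr_sd) = fit("age"), fit("resting_hr")
    return [
        {"age": round(random.gauss(age_mu, age_sd)),
         "resting_hr": round(random.gauss(hr_mu, hr_sd))}
        for _ in range(n)
    ]

print(synthesize(3))  # synthetic records that mimic the real data's statistics
```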

It's a great segue to deal making and what you're seeing in Silicon Valley and around the world in capital formation and new companies and potential for IPOs and sales. There's a whole set of fellow travelers alongside these model providers and infrastructure providers

that give infrastructure and tooling and security. Are you seeing the emergence of a lot of those companies and are they growing faster than prior generations that you and I might have worked with over the years? Yeah, so if you talk to CEOs and I'll just focus on the U.S. across the U.S. corporate landscape, I think many would say over the past few years they felt headwinds to growth

from a monetary policy standpoint, from a regulatory standpoint. And if you ask them today what their perspectives are, I think there's a general tone of optimism and a belief that the monetary policy and regulatory environment will ease, which will allow them to be more forward-leaning on growth, on investment, on M&A, on IPOs, et cetera. So I think that the backdrop to dealmaking, especially in tech,

is quite constructive today. In the early days of this year, it's only been three, four weeks, you've seen a lot of strategic activity. I expect that strategic activity to continue and accelerate throughout the year. As it relates to AI specifically, I think of AI deal-making through sort of

two lenses. One is capital markets and financing, and the other one is M&A. And on capital markets and financing, we've already touched on a lot of the thematics that investors are focused on, which are CapEx spend, ROI, global supremacy, who will win, who will lose. I do think generally investors are still bullish on AI. There are emerging questions, as last week showed.

And then on M&A, I actually think you've seen already AI-driven strategic M&A. There's a bunch of examples from last year. So I think you'll see more strategic M&A specifically related to AI this coming year. And Kim, you briefly mentioned power as a constraint as well. So talk to us about what you've been learning about that and why you're more concerned.

As I said, I've debated in my mind what's the bigger constraint, data or power? I now think it's power. Do you agree with me that it's power? My answer to whether it's data or power would be yes. And I'm not sure how to balance them. That's quite clarifying. Thank you. But historically, we've seen decades, literally decades of sub-3% annual growth in baseload power demand in the U.S. as an example. And now you're seeing this unprecedented tectonic shift

in demand for power related to AI. And just to dimensionalize that a little bit, AI servers require, I don't know, 10x the amount of power as a traditional server, order of magnitude. And you're seeing these companies, the hyperscalers, build these AI data center campuses that are multi-gigawatt centers, OK?

to put it in context, that's what it takes to power entire cities. And so I just think we can't underestimate the amount of power needed to run these highly complicated and complex AI systems today. And it will likely be a source. I mean, one of the interesting parts about that is that it in and of itself will be part of a demand function for innovation in power delivery. And so scaling green sources, battery technology that allows you to store and use that power in a less intermittent way, small modular nuclear reactors, fusion.
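To dimensionalize that power comparison, here is a rough back-of-envelope; the per-server and per-household wattages are ballpark assumptions added for illustration, not figures from the episode.

```python
# Rough back-of-envelope on the power comparison above. The wattages are
# ballpark assumptions for illustration, not figures from the conversation.
traditional_server_kw = 1.0      # assumed draw of a conventional server
ai_server_kw = 10.0              # roughly 10x, per the order of magnitude cited
campus_gw = 2.0                  # an assumed "multi-gigawatt" data center campus
avg_us_household_kw = 1.2        # assumed average continuous US household draw

ai_servers_supported = campus_gw * 1e6 / ai_server_kw          # ~200,000 AI servers
households_equivalent = campus_gw * 1e6 / avg_us_household_kw  # ~1.7 million homes

print(f"AI servers a {campus_gw:.0f} GW campus could power: {ai_servers_supported:,.0f}")
print(f"equivalent households: {households_equivalent:,.0f}")  # city-scale demand
```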

I think it's, again, innovation in an industry that's been relatively static, spawned by this demand. Kim, maybe let's end with something that you and I talk a lot about, which is, we've focused on the steep improvements in this technology and the potential for it to be more broadly useful at lower cost in the world.

And yet we see a little bit of a lag of enterprise adoption. I am hopeful this is a year where we're going to see that inflect upwards. What are you hearing from clients? What are you observing? And then maybe last, do you have any interesting personal use cases of it that you'd like to share? Oh my gosh, I use it all the time.

Yeah, I think that last year people were still testing and learning and trying to understand applications to their own businesses. And I think this year is the year of true enterprise adoption and scaling. And so I think this will be an important year to see how much enterprise adoption there is across AI and love to hear your views. And yeah, I just think that there's so many examples I could use both personally and professionally about how I integrate AI into my life. I don't even know where to begin.

But I, like you, am walking down the street talking to my AI like my imaginary friend. It's funny because we laugh about it. And yet you have to recognize it was only two years ago that this capability was loosed upon the earth in the form of the initial launch of ChatGPT.

And we worry about the lag in enterprise adoption. We wrestle with the amount of capital and the costs associated with this. And yet you look around and there's a generational dimension to this too. I totally agree. You look around at people joining the workforce or in schools, and the way they fluidly use this technology, perhaps in small quanta to begin with, is a wedge opening up: they're becoming

more productive, a little bit smarter, a little bit more responsive. And so I think it's, again, just a glimmer of where this will take us, hopefully, in the enterprise and in our personal lives. And can you telegraph anything to come around Goldman and enterprise adoption of AI? Sure, yeah. Well, I mean, it's in the news. We launched our GS AI Assistant, which allows more people across the firm to get access to leading-edge models and to be able to use them in a safer, more reliable, and more compliant way, which,

befitting our role as a regulated financial institution, is important. And again, early days, AI is in many ways prolific throughout the firm, but this is the broadest, most general-purpose offering we've made. And I think it will be really interesting to see in the coming months what use cases emerge, what innovations, what inventions, what creativity is brought to bear, particularly by our junior people.

George, Kim, this has been a fascinating conversation. Thanks so much for joining us. Thank you for having me, guys. Kim, great to kick this series off with you. Couldn't imagine a better guest. Super fun. Thank you. Thank you. And George, if I take away anything from this conversation, it's that there is a lot of good news in these recent developments, even if the market's been very volatile around them. And we've said this for a long time now. We are in the early stages of this. So there will be many more evolutions to come.

I agree with you. Obviously, as you noted at the beginning, I do have a bullish take on this, but lest I be accused of being a perpetual bull, I think this is in some ways a source of optimism for the future trajectory of the technology. It also poses some questions about the fundamental economics for various participants.

As Kim said, it raises questions about longer-term capital spend and how companies and infrastructure builders think about it. But I think for the near term, this is pretty much good news; it's part and parcel of the fast scaling of an innovative technology, and it will be incredibly fun to watch. Well, George, this has been fun. I'm looking forward to continuing the conversation in future episodes. Great. Thank you. This episode of Goldman Sachs Exchanges was recorded on Wednesday, January 29th. Thanks for listening.


Each name of a third-party organization mentioned in this program is the property of the company to which it relates, is used here strictly for informational and identification purposes only, and is not used to imply any ownership or license rights between any such company and Goldman Sachs. The content of this program does not constitute a recommendation from any Goldman Sachs entity to the recipient.


Past performance does not guarantee future results, which may vary. Neither Goldman Sachs nor any of its affiliates makes any representation or warranty, express or implied, as to the accuracy or completeness of the statements or any information contained in this program, and any liability therefor, including in respect of direct, indirect, or consequential loss or damage, is expressly disclaimed.