The AI Bubble Question: ARK Invest Responds to the Sceptics
Every adviser has had this conversation by now.
Client sees their portfolio statement. Notices the AI-heavy tech names have moved 30%, 50%, maybe 100% in a year. They ask the inevitable question: “Isn’t this a bubble?”
And honestly? It’s the right question.
We’ve seen this movie before. The late 90s. Everyone was convinced this time was different. Companies with no revenue were trading at astronomical valuations. Then reality hit.
So when we hosted Thomas Hartmann-Boyce from ARK Invest for our latest webinar, we didn’t want the usual marketing pitch. We wanted the hard questions answered. The sceptical case examined. The uncomfortable economics dissected.
What we got was perhaps the most balanced discussion of AI investing we’ve heard—acknowledging both the massive opportunity and the legitimate concerns.
The Question Everyone’s Asking (But Few Are Answering)
Let’s start with the elephant in the room: economics.
One analysis that’s been circulating focuses solely on 2025 data centre spending—roughly $400 billion. Assume a 10-year depreciation schedule. That’s $40 billion in annual depreciation.
Current AI revenue across the industry? Somewhere between $15-20 billion.
Do the maths. Depreciation is at least double the revenue.
Now play this forward. To cover depreciation with a modest 25% margin, you’d need to generate $160 billion in revenue. That’s a 10-fold increase from current levels.
Factor in a 20% return on invested capital (what investors actually expect), and you need $480 billion in AI revenue.
To put that in perspective: Netflix generates $39 billion in revenue. Microsoft $95 billion. Both have plateaued in user growth.
And that $480 billion isn’t enough to cover all the world’s AI needs. It’s just to justify the 2025 capex spend alone.
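The sceptic’s arithmetic can be reproduced in a few lines. A minimal sketch, using only the figures quoted above (the 25% margin and 20% ROIC are the circulating analysis’s assumptions, not ours):

```python
# Bear-case arithmetic on 2025 AI data centre capex (figures as quoted above).
capex = 400e9                # 2025 data centre spend, USD
dep_years = 10               # straight-line depreciation schedule
annual_dep = capex / dep_years           # $40bn of depreciation per year

margin = 0.25                # assumed operating margin
rev_to_cover_dep = annual_dep / margin   # revenue just to cover depreciation

roic = 0.20                  # return investors expect on invested capital
required_profit = annual_dep + roic * capex   # $40bn + $80bn
rev_to_cover_roic = required_profit / margin  # revenue to also earn the ROIC

print(f"Annual depreciation: ${annual_dep / 1e9:.0f}bn")
print(f"Revenue to cover depreciation at a 25% margin: ${rev_to_cover_dep / 1e9:.0f}bn")
print(f"Revenue to also earn a 20% ROIC: ${rev_to_cover_roic / 1e9:.0f}bn")
```

Running the numbers gives the $40bn, $160bn, and $480bn figures quoted in the analysis.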
So the question is stark: can there ever be enough revenue?
ARK’s Response: You’re Looking at the Wrong Numbers
Thomas’s first point was direct: “I’d probably question that $20 billion revenue figure.”
Just the top 2-3 AI labs are running at around $20 billion in current run rate. That doesn’t include the acceleration in cloud revenue from Azure, AWS, and Google Cloud—each showing double-digit percentage-point growth.
More importantly, it doesn’t account for the massive ecosystem building on top of these platforms.
Platform-as-a-service companies. Software-as-a-service integration. E-commerce transformation. Autonomous coding tools. The entire agentic AI layer that’s emerging.
If you’re only counting revenue from OpenAI, Anthropic, and a handful of model providers, you’re missing the forest for the trees.
But here’s the more interesting argument: revenue lags are normal. Expected, even.
Every Platform Technology Goes Through This Phase
Railroads. Electricity. The internet.
When transformative technologies are built, massive capital expenditures always precede productivity and monetisation.
We’re in the deployment and fine-tuning phase right now. The monetisation models are just starting to catch up.
Thomas shared a concrete example: Amazon built an internal coding agent called Amazon Q. In one year, it saved them $260 million in costs by automating repetitive development work.
That’s one company. One application. One year.
Scale that across thousands of enterprises, across dozens of use cases, and suddenly those revenue projections start looking less absurd.
The Innovation Platform Framework
Here’s where ARK’s approach gets interesting.
They don’t just look at AI as a single technology. They analyse it through what they call “innovation platforms”—technologies that meet three specific criteria:
1. Steep cost curve declines (the most important factor)
2. Impact across multiple sectors (not niche applications)
3. Ability to enable additional technologies (platforms feeding platforms)
The cost decline data is remarkable.
AI training costs are dropping 75% per year. Inference costs are down 85-90% annually.
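Those annual rates compound quickly. A small sketch of what they imply over a few years, assuming the decline rates stay constant (a simplification; real cost curves will not be perfectly smooth):

```python
def cost_after(years, annual_decline):
    """Fraction of the original cost remaining after compounding an annual decline."""
    return (1 - annual_decline) ** years

# Training costs falling 75% per year:
print(f"Training cost after 3 years: {cost_after(3, 0.75):.2%} of today's")
# Inference costs falling ~90% per year (the high end of the quoted range):
print(f"Inference cost after 3 years: {cost_after(3, 0.90):.2%} of today's")
```

At those rates, a training run costs about 1.6% of today’s price after three years, and inference about 0.1%.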
When DeepSeek launched earlier this year, and the market panicked (remember the trillion dollars in value lost in a few hours?), ARK’s response was the opposite: this is precisely what you want to see.
Massively lower costs mean dramatically wider adoption. More players can participate. More use cases become economically viable.
This isn’t a threat to the AI ecosystem. It’s an accelerant.
The Value Chain Most Portfolios Miss
Here’s where the conversation got practical for portfolio construction.
Most “AI funds” or indices are just the Magnificent 7 with maybe a few other mega-caps sprinkled in. You already own Nvidia, Microsoft, Google, and Amazon ten times over in your portfolios.
ARK maps the entire AI value chain differently:
Hardware → The chip designers, manufacturers, and infrastructure providers
AI Platforms → The infrastructure-as-a-service layer (CoreWeave, for example)
Software/Applications → Tools like Palantir that help enterprises actually implement AI
Embodied AI → Autonomous vehicles, robotics, drones—AI in the physical world
Thomas’s point: if you’re only looking backwards at indices, you’re going to miss the names that become winners in each of these categories.
ARK’s active share vs the NASDAQ 100 index? 77%.
That’s meaningful diversification from what you already own, not just a concentrated bet on the same handful of names at stretched valuations.
The Embodied AI Opportunity (That No One’s Talking About)
One section of the webinar particularly caught our attention: robotics as “embodied AI.”
Most investors think of robotics as separate from AI. ARK views it as AI applied to the physical realm.
The biggest prize? Autonomous mobility. A $7-10 trillion revenue opportunity.
Why so massive? Cost declines.
Today’s Uber charges $2-4 per mile (higher with surge pricing). Future autonomous ride-hailing could drop to 25 cents per mile, driven by lower vehicle costs, elimination of driver costs, improved utilisation, and, critically, better safety.
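The implied fare compression is easy to sanity-check from the per-mile figures above (a rough sketch; the 25-cent figure is ARK’s projection, not a current price):

```python
# Implied fare compression for autonomous ride-hailing (figures quoted above).
uber_low, uber_high = 2.00, 4.00  # today's ride-hailing cost per mile, USD
autonomous = 0.25                  # projected autonomous cost per mile, USD

for today in (uber_low, uber_high):
    cut = 1 - autonomous / today
    print(f"${today:.2f}/mile -> ${autonomous:.2f}/mile: {cut:.1%} cheaper")
```

That is an 87-94% reduction in the cost per mile, which is the scale of decline the $7-10 trillion opportunity rests on.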
There’s a virtuous circle: as autonomous systems train on more data, they get safer. As they get safer, more people use them. More usage generates more data. Better models emerge.
Tesla’s Full Self-Driving rolled out commercially in Austin in June 2025. This isn’t theoretical anymore.
And it’s not just ride-hailing. It’s autonomous trucking. Drone delivery. An entire ecosystem of applications where AI enters the physical world.
For most portfolios, this exposure is essentially zero.
The Sceptical Questions (The Good Stuff)
The second half of the webinar featured our portfolio manager, James, playing devil’s advocate. Hard.
Question 1: Are LLMs Actually Useful?
The critique: Large language models are great at regurgitating information but fall apart when asked to do anything genuinely novel.
The famous example: You can train an LLM on every chess book ever written, and it’ll still make illegal moves or invent new pieces.
ChatGPT usage dropped 85% outside US school term times—suggesting it’s mainly used for homework.
AI agents can complete only 1.5% to 34% of actual business tasks, according to a Stanford study.
MIT reports 95% of corporate AI projects are failing.
Thomas’s response was measured: “If AI wasn’t working, you wouldn’t see 70% of enterprises embedding or seeking to embed AI into their workflows.”
The real examples are there:
Fannie Mae reduced fraud detection time from two months to seconds using Palantir.
Citi cut new client onboarding from 9 days to seconds.
One hospital increased discharge efficiency by 2,000%.
His comparison: The MIT study showing 95% failure rates is more about pilot fatigue and poor change management than AI viability. You could have said the same about CRM or ERP implementations at one point—yet now we’re entirely dependent on them.
We’re in 1995 for AI, the equivalent of about 16% penetration in the global smartphone market. Early stages with massive room to run.
Question 2: Have We Hit a Scaling Wall?
ChatGPT cost $50 million to produce. GPT-4 cost $500 million. GPT-5 was supposed to cost $5 billion, and OpenAI delayed the release because it wasn’t sufficiently better to justify the expense.
Have we hit the limits of scaling?
Thomas’s counter: You’re only looking at one side of the equation.
Yes, these models are getting more expensive as they grow to trillion+ parameters. But the efficiency gains are staggering.
An equivalent model to GPT-4 that cost $500 million to train two years ago would cost less than $5 million to create today.
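A back-of-envelope check of what that claim implies as an annual rate, assuming a constant decline over the two years from $500 million to $5 million:

```python
# Implied annual cost decline if training an equivalent model falls
# from $500m to $5m over two years (figures quoted above).
start, end, years = 500e6, 5e6, 2
annual_decline = 1 - (end / start) ** (1 / years)
print(f"Implied annual cost decline: {annual_decline:.0%}")  # 90%
```

A 90% annual decline sits right at the top of the 85-90% inference cost range quoted earlier.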
Incremental improvements in accuracy—going from 99.5% to 99.9%—are meaningful when you’re deploying in critical applications like defence or healthcare.
More importantly, we’re not running out of data. We’re entering an era of unprecedented data generation.
Example: We can now sequence individual cells in the human body. Each person has 37 trillion cells. Healthcare alone will generate orders of magnitude more high-quality training data.
The scaling wall argument assumes static efficiency. That’s not what’s happening.
Question 3: Is This a Bubble?
Even the Bank of England is warning about an AI bubble.
Nvidia’s round-tripping deals with industry partners raise red flags.
Should advisers be worried?
Thomas made two points worth noting here.
First, the macro backdrop is more supportive than people realise. The “One Big Beautiful Bill” in the US includes provisions that allow immediate expensing of R&D, software, equipment, and manufacturing facilities. This effectively drops corporate tax rates from 21% to 10-12% for growth companies.
That’s a massive tailwind that’s being underappreciated.
Second, the “round-tripping” concerns are overblown.
In every industry, vertical specialisation is normal. Nvidia designs GPUs. TSMC manufactures them. Oracle builds AI-native clouds. Palantir creates enterprise platforms. OpenAI builds models.
These companies working together and coordinating capex isn’t nefarious—it’s essential to prevent infrastructure bottlenecks.
Comparison: Boeing and GE align on jet engines for fleet expansion. That’s not a red flag.
That’s how complex industries function.
As long as you can measure how AI is actually being incorporated into businesses (which ARK does continuously), the interconnectivity is healthy, not suspicious.
How We’re Using This at Fusion
Our approach has been deliberate.
We’ve added the ARK Artificial Intelligence & Robotics ETF to our Optima MPS range—portfolios constructed using ETFs rather than mutual funds.
The allocation sits in our alternatives bucket at a small satellite weight. Not core holdings, but meaningful exposure to capture the momentum in AI themes.
Why alternatives? Because ARK’s portfolio shows genuinely low correlation to traditional strategies. Negative correlation to value, in fact. Low correlation even to growth.
That 77% active share means you’re getting exposure you don’t already have ten times over.
But here’s the protection piece: we’ve paired it with a VIX position.
It’s a straddle approach. If AI continues winning, we capture outsized returns through ARK.
If the bubble bursts and markets sell off, the VIX position provides downside protection.
We’re not making a binary bet. We’re positioning for both scenarios.
The Honest Assessment
Look, we don’t know if we’re in a bubble.
Anyone who tells you with certainty either way is lying or delusional.
What we do know:
The cost declines are real and accelerating.
The technology is being deployed at scale across enterprises.
The revenue is starting to follow, though with the expected lag.
The opportunity set spans hardware, platforms, applications, and physical robotics—not just a handful of mega-caps.
The risks are substantial: high valuations, massive capex requirements, execution challenges, and the ever-present danger that adoption doesn’t scale as quickly as hoped.
For IFAs building client portfolios, the question isn’t whether to ignore AI entirely (that ship has sailed). It’s about getting exposure in a way that’s proportional to the risk, diversified from existing holdings, and protected if things go sideways.
ARK’s approach—mapping the whole value chain, maintaining high active share, focusing on cost curve dynamics—offers one framework.
Our approach—small satellite positions paired with downside protection—offers another layer of risk management.
But the conversation needs to move beyond “is it a bubble?” to “how do we position appropriately for both outcomes?”
That’s the discussion worth having with clients.
Watch the Full Webinar
This post covers the highlights, but the whole 58-minute discussion goes deeper on:
- Detailed cost curve analysis across AI technologies
- The convergence thesis (how AI, robotics, EVs, and genomics feed each other)
- Specific examples of AI driving productivity today
- Valuation discussions on names like Palantir
- The agentic commerce revolution
- Round-tripping economics explained in detail
For advisers who want to understand AI investing beyond the headlines, it’s worth the hour.
Fusion Optima MPS range uses ETFs to capture thematic opportunities while maintaining risk management through protective positioning. To discuss how we’re navigating the AI opportunity in portfolios, contact our Investment Team.
Contact our investment team here
For more updates and future episode announcements, follow us on LinkedIn