
Equity
AI's inflection point
While markets sold off, AI fundamentals strengthened meaningfully. Investing is ultimately a process of Bayesian updating, where each new data point shifts the probability distribution of outcomes. In Q1, several high-signal data points emerged that materially increased the probability of a much larger AI opportunity. The magnitude and consistency of recent progress suggest we are earlier in the cycle than previously assumed, and that the outcome is likely to be bigger.
AI coding reaches escape velocity
AI coding has crossed an important threshold. Tools like Anthropic's Claude Code are now fully agentic and capable of building production-grade software with minimal human input. Andrej Karpathy, OpenAI co-founder and historically sceptical about AI coding capabilities, noted that coding flipped from "80% manual and 20% machine" to "80% agent and 20% manual edits", and that he is now "mostly programming in English".
This is the first large-scale example of AI directly performing high-value cognitive tasks that were previously human-dominated. However, the implication is better understood as a productivity shock rather than a simple labour substitution story. Historically, productivity shocks lower marginal costs, expand output, and increase real incomes, ultimately leading to more economic activity rather than less labour demand. Early data already points in this direction, with software engineering demand remaining robust even as AI tools proliferate. With 20–30 million developers globally at an average fully loaded cost of approximately USD 100k, this represents a USD 2–3 trillion labour pool to be augmented.
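The labour-pool sizing above follows directly from the two stated inputs; a minimal sanity check of that arithmetic:

```python
# Back-of-the-envelope sizing of the developer labour pool described
# above; the input ranges (20-30M developers, ~USD 100k fully loaded
# cost) come from the text.
def labour_pool_usd_trn(developers_m: float, cost_usd_k: float) -> float:
    """Total fully loaded cost in USD trillions."""
    return developers_m * 1e6 * cost_usd_k * 1e3 / 1e12

low = labour_pool_usd_trn(20, 100)   # 20M developers at USD 100k each
high = labour_pool_usd_trn(30, 100)  # 30M developers at USD 100k each
print(f"Addressable labour pool: USD {low:.0f}-{high:.0f} trillion")
```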
There is also a natural economic boundary to full automation. Training and inference require substantial semiconductor capacity, data centres, and energy. Scaling AI to fully displace large pools of white-collar labour would require orders of magnitude more compute than exists today. As automation expands, demand for compute rises, pushing up its marginal cost. If the marginal cost of compute exceeds that of human labour for certain tasks, substitution will not occur. In that sense, compute becomes the governing constraint, anchoring the pace and extent of automation.
The rise of agents
The emergence of agents, exemplified by OpenClaw, marks an important inflection point. Unlike chat interfaces, which are request-response driven, agents continuously execute tasks such as writing code, managing workflows, and interacting with APIs. This represents a shift from AI as a cognitive tool to AI as an execution layer embedded in the economy. Chat compresses information; agents generate output. Instead of being consulted intermittently, agents persist, monitor, and act.
The implication is a step-function increase in compute demand. Agents are iterative and recursive. They plan, execute, verify, and refine across multiple steps, so a single task can involve dozens of model calls. Where chat scaled compute consumption roughly linearly with user requests, agents multiply it by the number of steps and calls per task. Equally important, agents expand the number of "users" of software. As Jensen Huang noted, both human and agentic usage will grow, effectively multiplying demand across tools, APIs, and compute. Compute demand is no longer tied to seat count, but to the number of active processes running in parallel.
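The fan-out described above can be sketched in a few lines. The step and call counts below are illustrative assumptions, not figures from the text; the point is the multiplicative structure, not the specific numbers.

```python
# Illustrative only: step counts and calls-per-step are assumptions.
def chat_calls(requests: int) -> int:
    # A chat interface makes roughly one model call per user request.
    return requests

def agent_calls(tasks: int, steps_per_task: int, calls_per_step: int) -> int:
    # An agent plans, executes, verifies and refines: several model
    # calls per step, several steps per task.
    return tasks * steps_per_task * calls_per_step

print(chat_calls(100))         # 100 calls for 100 chat turns
print(agent_calls(100, 8, 5))  # 4000 calls for 100 agent tasks
```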
Monetisation
The consumption model is evolving in a way that dramatically increases monetisation. Historically, AI was sold via "all-you-can-eat" subscriptions. That is now shifting towards usage-based pricing, effectively "pay by the drink". This is a critical change. It enables price discrimination across users based on intensity, willingness to pay, and latency sensitivity. Power users, particularly those running agents continuously, consume orders of magnitude more tokens and are far less price sensitive.
Early evidence suggests developers are already spending tens of thousands of US dollars annually on tokens, with companies providing engineers with annual AI token budgets equivalent to roughly 50% of their base salary, both to boost productivity and as a recruiting tool.
This shift has two implications. First, it materially expands revenue capture for model providers, as pricing now scales with value delivered rather than seat count. Second, it addresses the "circular financing" concern often raised by bears. If usage is metered and demand is elastic to productivity gains, then revenue growth is not artificial, as it is directly tied to economic output. The result is a clearer and more durable monetisation loop.
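The revenue-capture point can be illustrated with a toy comparison of seat-based and usage-based pricing. All figures below (user counts, token volumes, prices) are hypothetical assumptions for illustration only.

```python
# Hypothetical user base: (number of users, tokens consumed per year).
users = {
    "light": (900, 1_000_000),            # occasional chat usage
    "power_agentic": (100, 1_000_000_000) # agents running continuously
}

flat_seat_price = 240.0  # hypothetical USD/year, all-you-can-eat
usage_price = 5e-6       # hypothetical USD per token, pay by the drink

# Seat-based revenue scales with headcount only.
seat_revenue = flat_seat_price * sum(n for n, _ in users.values())
# Usage-based revenue scales with tokens consumed, i.e. value delivered.
usage_revenue = sum(n * tokens * usage_price for n, tokens in users.values())

print(f"Seat-based:  USD {seat_revenue:,.0f}")
print(f"Usage-based: USD {usage_revenue:,.0f}")
```

Under these assumptions, a small cohort of agentic power users dominates usage-based revenue, which is precisely why metered pricing expands capture relative to flat subscriptions.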
Inference disaggregation improves economics and extends asset lives
A less appreciated but equally important shift is occurring at the infrastructure level. Inference is being disaggregated into two phases: prefill and decode. Prefill (ingesting the prompt and its surrounding context) is compute-dense, while decode (generating output tokens) is memory-bandwidth intensive and latency-sensitive. This separation allows workloads to be optimised across heterogeneous compute, with older GPUs repurposed for prefill while newer or specialised systems handle decode.
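The scheduling logic this implies can be sketched as a simple phase-aware router. The pool names and the routing rule are illustrative assumptions, not a description of any specific provider's system.

```python
# Phase-aware routing sketch across a heterogeneous GPU fleet.
# Pool names are hypothetical.
def route(phase: str) -> str:
    if phase == "prefill":
        # Compute-dense, latency-tolerant: older compute-rich GPUs suffice.
        return "legacy_gpu_pool"
    elif phase == "decode":
        # Memory-bandwidth-bound and latency-sensitive: newest hardware.
        return "latest_gpu_pool"
    raise ValueError(f"unknown inference phase: {phase}")

print(route("prefill"))  # legacy_gpu_pool
print(route("decode"))   # latest_gpu_pool
```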
The consequence is a significant extension of GPU useful life. Rather than becoming obsolete in 4–5 years, GPUs may remain economically productive for closer to a decade. This has two direct financial implications: a lower depreciation burden and reduced capital intensity per unit of output. For hyperscalers, this translates into higher returns on invested capital over time, particularly as infrastructure shifts from training, a non-revenue-generating phase, to inference, a revenue-generating phase. In the end, the system is becoming more efficient exactly as demand is accelerating.
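The depreciation effect is straightforward straight-line arithmetic: doubling useful life halves the annual charge. The USD 1bn fleet cost below is a hypothetical figure; the 5- versus 10-year lives come from the text.

```python
# Straight-line depreciation sketch; fleet cost is hypothetical.
def annual_depreciation(cost_usd: float, useful_life_years: int) -> float:
    return cost_usd / useful_life_years

fleet_cost = 1_000_000_000  # hypothetical USD 1bn of GPUs
d5 = annual_depreciation(fleet_cost, 5)    # written off as obsolete in 5 years
d10 = annual_depreciation(fleet_cost, 10)  # repurposed for prefill, ~a decade

print(f"Annual charge falls from USD {d5/1e6:.0f}m to USD {d10/1e6:.0f}m")
```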
More broadly, AI is increasingly about the transformation of energy into economically valuable tokens. Efficiency gains, both at the hardware and algorithmic level, are expanding effective supply even as demand accelerates.
Frontier models prove monetisation at unprecedented scale and efficiency
Anthropic reported a USD 30 billion annualised revenue run rate as of 7 April, up from USD 9 billion just three months earlier, adding USD 21 billion of ARR in a single quarter. OpenAI is seeing similarly strong momentum, growing from roughly USD 20 billion ARR at the end of 2025 to over USD 25 billion by early March. To put this in perspective, Anthropic's quarterly ARR expansion alone exceeds the combined ARR growth of the top 10 software companies, excluding Microsoft, over the same period.
What is equally striking is the efficiency of these businesses. Both Anthropic and OpenAI operate with roughly 3,000 employees each, compared with around 30,000 employees at Alphabet Inc. when it was generating a similar revenue scale. These are not just fast-growing companies; they represent a fundamentally different operating model, where software-like scalability is paired with infrastructure-like monetisation, resulting in an unprecedented level of revenue intensity per employee.
This is the clearest evidence to date that large language models can monetise at scale. Importantly, this growth coincides with the pricing transition described above. As usage-based pricing proliferates, revenue growth is expected to remain tightly coupled with token consumption, which is driven by agentic workloads and enterprise adoption.
We are also now firmly in the inference phase of AI. Training dominated the early cycle, a capital-intensive phase with delayed monetisation. Today, inference is driving revenue, and the economics are improving as utilisation increases and infrastructure efficiency rises.
AI for the physical world
A further development worth watching is the extension of AI beyond the digital domain and into the physical world. What is changing now is the convergence of three elements: large-scale models, massive data flywheels, and full-stack infrastructure. In domains such as autonomous driving, systems are evolving from rule-based pipelines towards end-to-end models that interpret entire scenes, anticipate behaviour, and directly map perception into action.
This is increasingly visible in real-world deployment. Waymo has scaled weekly paid trips by approximately 10x in under two years, from around 50k in 2024 to roughly 500k today, targeting approximately 1 million by year-end.
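The implied pace of that scaling can be checked by compounding the trip figures from the text over roughly two years (taken here as about 104 weeks, an assumption for the calculation).

```python
# Implied compound weekly growth from ~50k to ~500k weekly paid trips;
# trip figures come from the text, the 104-week horizon is an assumption.
start_trips, end_trips, weeks = 50_000, 500_000, 104
weekly_growth = (end_trips / start_trips) ** (1 / weeks) - 1
print(f"Implied compound weekly growth: {weekly_growth:.1%}")  # roughly 2.2%
```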
This is the equivalent of a "ChatGPT moment" for the physical world. Models no longer just recognise objects; they reason about dynamic environments. The implication extends beyond better humanoids or autonomous vehicles to the emergence of a general intelligence layer for physical systems, adaptable across industries from logistics to manufacturing to mobility.
This is a very steep S-curve
In prior compute cycles, infrastructure preceded application adoption. That pattern is holding, but the slope is steeper than expected. The infrastructure build-out has been rapid, but the application layer, particularly enterprise AI, is now inflecting sharply. We estimate the enterprise AI market could reach USD 3–5 trillion, and we are likely still below 10% penetration. The adoption pathway is becoming clearer, moving from chat to AI integrated into workflows, then to task-specific tools, followed by fully autonomous agents, and ultimately orchestrated networks of agents.
Crucially, macro uncertainty may accelerate rather than hinder this process. In a weaker economic environment, the incentive to reduce labour costs and improve productivity intensifies. AI becomes a deflationary tool, and companies facing pressure will adopt it faster. This is exactly what was observed during the 2008–09 Global Financial Crisis, when digital advertising took off at the expense of traditional media. It is during difficult periods that enterprises are most willing to break the inertia of old paradigms. Early signs are already visible in hiring trends and cost optimisation initiatives.
Conclusion
AI remains one of the most compelling areas for investment today because we are still early in a market that is simultaneously improving its product, expanding its monetisation, and deepening its economic value. Usage is expanding rapidly, frontier labs are shifting from flat subscriptions towards usage-based pricing that better captures return on investment, and inference economics are improving as the stack matures. At the same time, AI is no longer just a technology story; it is becoming a productivity, labour, and capital cycle story, with adoption spreading from chat to coding to agents that can perform real work. That combination of stronger demand, improved monetisation, longer-lived compute assets, and a clearer path to durable returns is highly compelling. While many transformative technologies eventually become over-owned, AI still appears underpenetrated relative to the scale of the opportunity. In our view, this remains one of the rare moments where both the fundamental growth and the investment case are unusually attractive.
Disclaimer
Marketing communication. Investing involves risks.
The views and opinions contained in this document are those of the persons to whom they are attributed and do not necessarily represent the views expressed or reflected in other communications, strategies or funds of DPAM.
The information provided in this document should be considered general in nature and does not in any way claim to be tailored to your personal situation. Its content does not constitute investment advice, nor an offer, solicitation, recommendation or invitation to buy, sell, subscribe to or carry out any other transaction in financial instruments. Nor does this document constitute independent or objective investment research or financial analysis or any other form of general recommendation on transactions in financial instruments as referred to in Article 2, 2°, 5 of the Law of 25 October 2016 on access to the provision of investment services and on the status and supervision of portfolio management companies and investment advisers. The information contained in this document should therefore not be regarded as independent or objective investment research.
Investing involves risks. Past performance does not guarantee future results. All opinions and financial estimates reflect the situation at the time of issue and are subject to change without notice. Changing market circumstances may render the opinions and statements incorrect.