- Michael Allison, CFA
- Jun 29
Updated: Jun 30

The Monetization Dilemma: What's the Business Model for AI?
The Chart of the Week shows something quite astounding. In just 26 months, ChatGPT has amassed over 1 billion monthly active users—a growth curve that has outpaced Facebook, Instagram, TikTok, and Twitter by a wide margin. And yet, despite this historic adoption, a fundamental question remains unanswered: what exactly is the business model?
This is no trivial matter. Alphabet (Google), Meta (Facebook), TikTok, and Twitter all leaned heavily into advertising from day one. Their business models, at least initially, were clear: attract users, harvest attention, sell ads.
AI, and ChatGPT in particular, is different. It’s not a social platform or a marketplace. It’s a productivity tool, a research assistant, a tutor, a co-pilot for coding and writing, and perhaps someday, a personal agent embedded in everything from your smartphone to your refrigerator.
But “everything” is a difficult thing to price.
So what business model can support the extraordinary capital requirements of AI development?
Let’s start with the obvious: subscriptions. ChatGPT Plus is currently $20/month. At one billion users, even a modest conversion rate would be meaningful. If just 5% of users subscribe, that’s $12 billion a year. But that’s still a fraction of the tens of billions being invested annually by OpenAI, Microsoft, Google, Anthropic, Meta, Amazon, and others. We’re not talking about SaaS-level margins here. We’re talking about training and running models that cost hundreds of millions per iteration.
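The back-of-the-envelope math above can be checked in a few lines. This is just a sketch of the scenario in the text; the conversion rate is the article's illustrative 5% assumption, not a reported figure.

```python
# Subscription-revenue scenario from the text: 1B users, 5% conversion, $20/month.
users = 1_000_000_000        # monthly active users (figure cited in the article)
conversion_rate = 0.05       # assumed share of users who pay (illustrative)
price_per_month = 20         # ChatGPT Plus price, USD

subscribers = users * conversion_rate
annual_revenue = subscribers * price_per_month * 12

print(f"{subscribers / 1e6:.0f}M subscribers -> ${annual_revenue / 1e9:.0f}B per year")
```

Fifty million paying users at $240 a year gets you to the $12 billion figure; meaningful, but small next to the capital being deployed.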
Another possibility is API-based monetization: sell access to the model for app developers and enterprises. This is akin to cloud services—selling compute and intelligence on demand. It’s a solid path, and it’s already happening. Microsoft has integrated GPT into Office and Azure. Salesforce, Notion, and Adobe are embedding LLMs into their enterprise suites. But here again, the margins are constrained by GPU availability and inference (operating) costs.
Advertising could creep in, but it’s a poor fit for most current AI use cases. You don’t want banner ads in the middle of your code assistant or search hallucinations influenced by ad revenue. Trust is the product, and ads very likely erode that.
But…
What if we’re thinking about this the wrong way?
Maybe AI isn’t a product. Maybe it’s infrastructure—a foundational technology like electricity, Bluetooth, or WiFi. We don’t pay directly for TCP/IP or GPS. We pay for the services built on top of them.
AI might follow that path: an enabler of massive productivity gains, new consumer interfaces, and a layer of intelligence embedded in every app, device, and system. If so, the value accrues indirectly.
That would suggest that returns will not accrue to “AI” per se, but to those who own the workflows.
Microsoft may benefit not because it owns the best model, but because it controls Outlook, Word, Excel, and Teams.
Amazon might benefit via drone-based deliveries and AWS.
Apple via on-device AI that makes the iPhone even more indispensable.
Verticalized AI—think legal tech, pharma, financial advisors—may yield more defensible business models than generic horizontal AI.
The challenge, then, is matching exponential user growth with a durable monetization engine. History says attention is monetizable. But AI is utility, not entertainment. And utilities require scale, regulation, and pricing power. We’re only beginning to grapple with what that means for investors.
Sources: Coatue Management, Sam Altman, Microsoft