Data, play, love
What comes after efficiency gains
At some point last year, one of my advisors asked me: “What does business intelligence look like in the age of AI?”
You know the scene. A room full of smart people, drowning in dashboards, reports stacked to the ceiling, precogs on speed dial. And just reams of data. But the moment someone needs to make an actual decision, the whole room goes quiet. “What does this actually mean?” No one is quite sure, or worse, no one is willing to bet their career on an innovative choice. That’s what business intelligence looks like in 2026.
The question stuck with me. It crystallized something I’d been thinking about for a while: the real bottleneck in modern organizations isn’t access to data. It’s the ability to make sense of it at the speed decisions require.
The timing feels significant.
Just this week, Valve amended its AI disclosure policy on Steam, signaling a shift in how the industry’s largest PC platform views the technology. First spotted by the relentless Simon Carless, the change means Valve now requires developers to disclose AI use only if it generates content that ships with the game, such as assets, art, sound, narrative, and localization.
But for “efficiency gains” in the development process, disclosure is no longer mandatory. As Valve puts it: “We are aware that many modern games development environments have AI powered tools built into them.”
It’s a pivotal moment.
For one, it follows a broader pattern. Valve has allowed the vast majority of AI-assisted games on Steam since January 2024. Research from Totally Human Media shows that 7 percent of games on the platform now disclose the use of generative AI, up from just 1.1 percent the year before. Major publishers like Nexon and Krafton have fully embraced the technology. And Epic Games CEO Tim Sweeney has argued that platforms shouldn’t be labeling AI-created projects at all, predicting that “AI will be involved in nearly all future production.”
The underlying suggestion is that rather than contaminating interactive entertainment with slop—as we’ve seen everywhere else (including in the writing of industry professionals and my students, oh god)—AI may yet afford creatives the efficiencies they need to safeguard their independence.
With Valve making this critical distinction, the industry and its ravenous userbase are entering an era in which AI is more common and accepted. It shifts the conversation past the question of whether AI will reshape the games industry and focuses it on how and where the leverage will be greatest.
But let’s not confuse adoption with progress. The generative AI wave didn’t solve the data quality crisis in gaming. Rather, it amplified it. Now, instead of just bad research, we have bad research at scale. ChatGPT can hallucinate market sizing with remarkable confidence. Consultants armed with LLMs produce reports that sound authoritative but rest on the same shaky foundations I criticized years ago: estimates based on estimates based on estimates. The tools got faster. The thinking didn’t.
It explains the current fashionable pessimism about AI. Some of it is warranted. But the conversation has become weirdly binary: either AI replaces everything, or it’s all hype. Both positions miss what’s actually happening.
Business Intelligence in the Age of AI
You’ll forgive me if I look at this development through my own lens: data.
Traditional business intelligence was built for a different era. It assumed that data was scarce, that analysts were the interpreters, and that dashboards were the delivery mechanism. But in 2026, data is abundant, analysts are overwhelmed, and dashboards sit unread. The infrastructure we built to support decision-making has become a bottleneck to it.
As consumers and creatives grow more comfortable with AI in game development, the data-heavy nature of contemporary publishing faces a crossroads. Small language models (SLMs), compact alternatives to the massive foundation models that dominate headlines, are proving increasingly capable and offer a competitive edge to those willing to use them.
SLMs don’t try to know everything. They try to know your domain: deeply, accurately, and fast. They are efficient, cheap to run, and trainable on proprietary data. When fine-tuned on curated datasets, models with fewer than 10 billion parameters now match or exceed larger models on domain-specific tasks, at a fraction of the cost.
This isn’t about chatbots or “BI via chat.” It’s about embedding expert-level strategic insight directly into decision workflows, supporting planning cycles, competitive benchmarking, and scenario analysis at the point of action. In effect, it gives decision-makers a reasoning partner, not a search bar. The advantage isn’t having the biggest model; it’s controlling the decision context.
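To make that a bit more concrete, here is a minimal sketch of what a domain-tuned SLM could look like inside a planning workflow, using the Hugging Face transformers library. Everything specific in it is an assumption for illustration: the model checkpoint, the internal metrics, and the prompt format are placeholders, not a reference to any real product or dataset.

```python
# A minimal sketch (not a real product): a small, domain-tuned language model used
# as a reasoning partner inside a planning workflow. The model path, metrics, and
# prompt format below are all illustrative assumptions.

from transformers import pipeline

MODEL_PATH = "our-org/games-market-slm-7b"  # hypothetical fine-tuned checkpoint

# Load the model locally; no proprietary context leaves the building.
analyst = pipeline("text-generation", model=MODEL_PATH, device_map="auto")

# Decision context assembled from internal sources that general-purpose models
# never see: wishlist velocity, community sentiment, the competitive release slate.
decision_context = {
    "title": "Project Alpha (working title)",
    "genre": "extraction shooter",
    "wishlists_last_30_days": 42_000,
    "discord_sentiment_score": 0.61,      # illustrative internal metric
    "competing_q3_releases": ["Title A", "Title B"],
}

question = "Should we hold the Q3 launch window or slip to Q1 next year?"

prompt = (
    "You are a market analyst for a game publisher.\n"
    f"Context: {decision_context}\n"
    f"Question: {question}\n"
    "Give a recommendation and the two strongest signals behind it."
)

# Deterministic generation keeps the output reproducible across planning meetings.
recommendation = analyst(prompt, max_new_tokens=300, do_sample=False)
print(recommendation[0]["generated_text"])
```

The interesting part isn’t the prompt. It’s that the model is small enough to run on your own hardware and the context is your own data, which is exactly the kind of control the paragraph above describes.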
I expect 2026 to surface a new generation of intelligence tools across entertainment. The ones I'm most excited about are those building deep, predictive capability at the title level, forecasting whether a game will connect with its audience before it ships. That’s different from measuring how brands and franchises perform culturally across platforms. But rather than being at odds, the two approaches are complementary. And I predict the most interesting outcomes will come from partnerships that combine both.
Interactive entertainment presents an ideal environment where data is abundant, the stakes are high, and current solutions fall short.
In 2025, the global games industry generated $250 billion and catered to more than 3.3 billion consumers. And yet, decision-making remains constrained by incomplete insight, delayed reporting, and siloed data sources. The consolidation of mobile data into a single provider only made things worse. Publishers subscribe to a dozen vendors and still can’t calculate basic metrics. Critical markets like China remain opaque. The real competitive signals (e.g., Discord sentiment, TikTok virality, Twitch viewership patterns) live in spaces invisible to conventional analytics.
Gaming is also where the cost of failure is highest. AAA development budgets have grown from $47 million in the early 2000s to over $440 million today. Grand Theft Auto 6 reportedly cost $2 billion. I have no doubt that it will be successful, but what about every other release? The margin for error has collapsed, but the tools for avoiding error haven’t improved.
It offers an environment where domain-specific AI can prove its value. If it works here, in a market defined by fragmentation, velocity, and high stakes, it can work anywhere.
It’s also something I’ve been chewing on since we sold SuperData to Nielsen in 2018: what comes next? Over the holiday break, I finally put it all in one place.
The white paper I’m releasing today lays out this thesis in detail. It covers the structural limitations of traditional BI, the technical case for SLMs, and the specific dynamics of the gaming industry that make it an ideal proving ground. It includes data on vendor fragmentation, cost escalation, and the gap between media time and ad spend that represents billions in unrealized value.
It proposes a shift in how we think about intelligence infrastructure, from centralized analyst teams to distributed, AI-native decision support.
I’m sharing it because I believe this shift is coming, and I want to be part of the conversation about how it should happen. If you’re working on similar problems or think I’m wrong, I’d welcome the dialogue.
The full white paper is available for download below.