radical Insights.

Weekly Research and Commentary on the Future of Business and Technology.

Speculative Platform Risks & the AI Future.

Dec 5, 2023

Most of the ample attention paid to last month’s borderline-surreal crisis of leadership at OpenAI seemed to focus on two big questions: (1) Just WTF was actually going on over there? and (2) What might the fallout mean for the larger industry shepherding the development of this transformative technology?

A third question – one concerned with what the potential self-immolation of OpenAI might mean for all of the companies that had built services, products, or entirely new businesses dependent on OpenAI’s powerful tools & API – received less attention in the moment, but it might be of the most lasting interest now that the coup has been put down and Sam Altman restored to (even more) power.

Leaders of such companies – particularly the executives of startups designed to leverage LLM-empowered capabilities at the core of their businesses & value propositions – spoke candidly and with considerable alarm in interviews during and after the OpenAI crisis about the risks they faced. This made sense, but the more I read, the more curious I became about just how deeply such leaders were or weren’t thinking about the uniquely fraught situation of Big AI dependency today.

Most of the leaders speaking on the record (and most of the writers doing the recording & reporting) addressed the concern as a pretty standard sort of platform risk – a minor variation on a theme we’ve known well for the past 30 years, be it dependency on an e-commerce platform, a social media app, an app store itself, or an operating system. And of course there IS a pretty clear platform risk of that sort for companies that have warmly embraced ChatGPT in the last year, but these easy comparisons to the platform risks of yesterday don’t come close to capturing the more complex risk of today – and tomorrow. This thing is a horse of an entirely different color.

For starters – and as the events of last month surely demonstrated – OpenAI is an incredibly peculiar company, and a genuinely opaque one at that, despite the gesture toward openness in the name. No one outside the company knows the precise volume of API traffic it serves, but we can safely assume it’s considerable. And the very idea of a significant number of firms building a dependence on an organization with a genuinely novel governance structure and a vague mission to serve “humanity” first should strike you as… a little bizarre when you take a step back. I’ve been thinking about this for weeks and can’t come up with a real historical analogy.

Setting aside the weirdness of OpenAI (and/or assuming that it’s perhaps settling into something more familiar, which is also to say more predictably profit-driven), there’s an even weirder, larger, and more unpredictable platform risk that’s not unique to OpenAI. It may be characteristic – in a feature-not-a-bug way – of reliance on Big AI in general and of building around LLMs in particular. The technology is developing at a remarkable rate, and with seemingly every major release it gains functionality that might wipe out the value of companies built around the previous generation of AI tools. The situation is made even more challenging by the fact that those functionality gains aren’t really predictable (even to the accelerationists and Kurzweilians for whom the eventual development of AGI is an article of faith); some of the new capabilities appear to be emergent, or at least incidental to targeted advances and sheer growth in model scale.
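To make the shape of that dependency concrete, here’s a minimal sketch – every name in it is hypothetical, and both providers are stubs rather than real API clients – of the standard defensive move: coding the product against a narrow interface of your own rather than against any one vendor’s SDK, so a hosted model can be swapped for an open-source one without a rewrite.

    from abc import ABC, abstractmethod


    class CompletionProvider(ABC):
        """The narrow contract the product depends on -- not a vendor SDK."""

        @abstractmethod
        def complete(self, prompt: str) -> str: ...


    class HostedModel(CompletionProvider):
        """Stand-in for a proprietary API client (stubbed, not a real call)."""

        def complete(self, prompt: str) -> str:
            return f"[hosted model reply to: {prompt!r}]"


    class LocalOpenModel(CompletionProvider):
        """Stand-in for a self-hosted open-source model (also stubbed)."""

        def complete(self, prompt: str) -> str:
            return f"[local model reply to: {prompt!r}]"


    def summarize(provider: CompletionProvider, text: str) -> str:
        # Product logic names the interface, never the vendor, so swapping
        # providers is a one-line change at the call site, not a rewrite.
        return provider.complete(f"Summarize in one sentence: {text}")


    if __name__ == "__main__":
        for provider in (HostedModel(), LocalOpenModel()):
            print(summarize(provider, "Platform risk cuts both ways."))

A seam like this only blunts the ordinary, yesterday’s-platform variety of risk, though; no abstraction layer saves you if the next model release simply does your product’s job.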

This, friends, is something new and deeply strange – companies making critical bets on radically evolving, poorly understood third-party tools that might spontaneously develop an instantly scalable new capability that more or less obviates the company’s value proposition overnight. And all of this is to say absolutely nothing about the eventuality of meaningful regulation of AI systems… at some point.

If you’re not enthralled, a bit nervous, and perhaps baffled at times, you’re not paying attention. And if you’re not watching the development of open source AI with interest, you’re probably not doing your due diligence – and you’re definitely not doing your future self (or firm) a favor either. More on this, surely, to follow. (via Jeffrey)