A couple of months ago, we suggested here that the major LLM players were rushing us into a “move fast & break things” era of generative AI. Not the boldest of prognostications, but one that seems to be holding up pretty well. You can find your scorching hot takes on the near future of AI elsewhere this week; instead, I’ll try to offer a few more commonsensical positions that feel likely to prove durably correct.
Working out from that starting point, it’s quite clear that the field is moving fast (and accelerating) and likely to break some important things; what remains remarkably, and somewhat fascinatingly, unclear is exactly what will be broken. Web search? The larger information ecosystem? Democracy? The global economy?
How about the established cost structure of creating new software? As analysts at SK Ventures argued last month, there’s a distinct possibility that systems increasingly capable of generating viable, explainable code in response to natural language prompts will radically reduce the cost of (initially, low-level) software engineering work. That work is highly structured, grammatical by definition, and predictable when dealing with well-understood problems – precisely the kind of work that LLMs are poised to do most effectively and, eventually, reliably.
Two interesting upshots here: The first is that we might then see the cost of software engineering collapse in a way reminiscent of what we’ve already seen with computing, storage, and networking. The second, as Pascal pointed out in one of our community exchanges recently, is that systems that might (relatively) soon generate the code for most any app would leave present and future startups building new digital products and services on a slippery foundation.
Taken together, all of that argues further for the central importance of holding unique, high-value data as the core asset of the business as the other costs of digital innovation continue to collapse, taking technical barriers to entry down with them. It will be interesting to watch the Bloomberg example for lessons.
One more while we’re here. I heard my friend Alexandre Nascimento (who, unlike me, actually IS an AI expert) tell a roomful of executives in Brazil last week that while AI systems won’t take their jobs, executives who effectively use AI systems just might. That struck me as a nice formulation and another prognostication likely to hold up pretty well. (via Jeffrey)