radical Insights.

Weekly Research and Commentary on the Future of Business and Technology.

Prompt Injections in Generative AI: The New SQL Injection, Only Worse.

Oct 17, 2023

Ah, the wonders of technology! Just when you think you’ve seen it all—along comes GPT-4V, sprinkling the multi-modal fairy dust and making digital pumpkins appear grander than ever. But, oh wait, there’s a catch. There’s always a catch, isn’t there? This remarkable feat of ingenuity seems to have overlooked a little thing we like to call security, specifically in the realm of prompt injections. We have a term for this in the biz: “Technology’s Law of Unintended Consequences.” It’s sort of like Murphy’s Law, but with more silicon and less common sense.

But before we jump into the nitty-gritty of why multi-modal prompt injections are basically the SQL injections of yesteryear—only worse—let’s take a step back. The multi-modal aspect of GPT-4V allows it to accept both text and images as input. This is spectacular. The possibilities are as endless as a Peter Thiel quest for eternal youth. However, as the article from Simon Willison lays bare, it opens up multiple new vectors for prompt injection attacks. Prompt injection attacks? Yes, it’s precisely as it sounds: injecting malicious instructions into the conversation that the AI then executes, obediently like a well-trained lapdog.

Let’s relate this to SQL injections. In the good old days, all you had to worry about were some script kiddies trying to dump your database by exploiting vulnerable query strings. It was laughably easy to thwart. Sanitize your inputs, and you’re practically immune. Try doing that with Generative AI! The complexity of natural language understanding and generation makes sanitizing inputs a labyrinthine task. Not to mention, these models are designed to be gullible. They have to be; otherwise, they’d be as useful as a steering wheel on a mule.

SQL injections were, let’s face it, child’s play to mitigate. You patch your code, update your systems, and Bob’s your uncle. But prompt injections in Generative AI? Good luck. The issue is intrinsically tied to the model’s fundamental design. It’s as if we’ve coded a digital Pandora’s box—exceptionally brilliant, but gullible enough to unleash havoc if given the wrong set of instructions.

The increasing deployment of these AI systems in public-facing roles—think customer support, emergency hotlines, and the like—only exacerbates the risk. Just imagine the catastrophic implications if a prompt injection were to leak confidential data, or worse. Our dependency on AI is growing, and the security holes in these technologies are no longer just tech’s dirty little secret; they’re everyone’s problem.

Johann Rehberger’s experiment in the article is particularly unsettling. He was able to get the model to assemble an encoded version of a private conversation and ship it off to a server he controlled. Data exfiltration, friends, at its AI-assisted finest. Remember the days when you needed complex malware to pull off such stunts? Ah, nostalgia.

So what’s the takeaway? As we continue to expose more surface area to these AI systems, we’re also expanding the opportunities for prompt injections. We’re trading convenience for security, a Faustian bargain in an era where data is more valuable than gold. For those developing products atop these LLMs, it’s not just a ‘be aware’ note; it’s a five-alarm fire. Stay aware? More like, stay on your bloody toes.

And there you have it. Another day, another groundbreaking technology that is as incredible as it is flawed. Welcome to the future—just don’t forget to bring your bug spray. (via Pascal)