As we stare down the end of 2024 and begin to look ahead to 2025, now is surely not a moment when you need to be convinced that we’re living in a time of deep (and deepening) uncertainty. I think the case of late has pretty well made itself.
So let’s do this – as we’re beginning to feel our way into the new year and all it might hold: Let’s unpack a few simple practices for sitting with uncertainty and still managing to get sh*t done. I have a sneaking suspicion that these will only become more useful in the years ahead.
1) Don’t get stuck at the extremes. Our brains are spectacularly good at catastrophizing – envisioning worst-case scenarios and fixating on them. Don’t do that. The worst-case scenario is a possible outcome, but it’s not necessarily a particularly probable one.
So, instead, when you find yourself circling a worst-case scenario, take a breath, and force yourself to envision the opposite extreme – the best possible, rosiest of all scenarios. But don’t get stuck there either: With the extremes defined, now push yourself to envision three possible outcomes that would exist somewhere in between those extremes.
Now, take a look at this spectrum of possible futures. Which ones actually feel most probable? Which ones are worth investing in or preparing for, and which ones might feel a little less likely with a fuller set of outcomes on the table?
2) Spend your time, energy, and resources on the right kind of uncertainty. Both statisticians and philosophers differentiate between irreducible uncertainty and reducible uncertainty. You should do the same.
Irreducible uncertainties include all of the things that we simply cannot know. There’s no number of sleepless nights you can spend or hackathons you can convene or white papers you can read that will give you a definitive answer on what the AI future will hold several years from now. This, effectively, is an irreducible uncertainty for the vast majority of us.
By contrast, reducible uncertainties include all of the things that we could know – or at the very least approximate knowing – with sufficient research, experimentation, and exploration. You could very well reduce the uncertainty around what your work would look like as an AI-augmented executive decision maker. You could develop a set of contingency plans addressing the possible courses an escalating US–China trade war could take.
Focus your time, creative energy, and resources on identifying and exploring those reducible uncertainties.
3) Seek (and build around) relative certainties in an uncertain world. For all we don’t know, there’s much that we do know. There are things we know about the world in the second half of this unfolding decade: The world is warming. Extreme weather events are becoming more common. Insuring assets exposed to those events is becoming more costly. The costs of some key technologies (lithium-ion batteries, compute, utility-scale solar, etc.) will continue to collapse due to powerful learning curves. Information and narratives are increasingly de-localized and ambient. Demographic cohorts will continue to age. All of that ain’t nothing.
Add to it the certainty that even if AI tools were not to improve at all beyond today’s state of the art, the slow-rolling impact of 2024’s innovations would still be pretty transformative when realized at scale across the global economy.
And add to all of this that there are things you can (and should) know about your customers and your business. What durable need do you fulfill? What do your customers depend on you for now that they will need / want / require – in some form or fashion, from some product or service – for the foreseeable future? Build around that. And if that’s an uncertainty, reduce it! And then build around it.
A certainty on our end: Your friends at be Radical will surely be going deeper on practices for navigating this uncertain world in 2025. In the meantime, this starter kit can get you rolling on building resilience and a kind of purposeful efficacy regardless of what the future holds.
– Jeffrey