Strategy Ages in Dog Years

ChatGPT was released on 30 November 2022, less than three years ago. People used it to write poems about cats.

AI is now inside your email, your social networks, your recruitment system, your customer service, your marketing tools, your newspaper, your BI dashboards, and your kid’s homework. You might also be using it as a therapist (1). Some 800 million people use it every week (2).

It drafts, summarises, analyses, recommends, and rewrites. Before long it will also decide and act.

If you wrote a strategy three years ago today, on 26 November 2022, four days before you'd heard of ChatGPT, how much of this did it assume?

Did it treat AI as:
a central organising force that would reshape roles, skills, culture and power?
Or as:
“a promising emerging technology we will explore over the next decade”?

Sadly, if it's the latter and you didn't actually update your strategy, I'm sorry to say that AI is no longer a rainforest you can go off and 'explore' with a clipboard before coming back to share your findings. It's the environment we're living in, whether we like it or not. AI is here (and everywhere). (3)

A lot of “strategy” is not actually strategy.
It’s nostalgia.
It’s a glossy document whose core message is:
“We would like the future to be an upgraded version of the past we were good at.”
It rarely gets updated when the context changes dramatically.

It contains bold words like “transformation” and “innovation”, but structurally it assumes that:
change will be linear,
roles will remain recognisable,
skills will have a decent shelf life,
workers will continue to tolerate corporate BS like ‘people are our greatest assets’ despite all evidence to the contrary,
and technology will politely wait for your governance framework to be ready.

It’s tempting to plan for a future that looks suspiciously like the past you were good at, but nostalgia is a terrible innovation strategy.

Instead, we can write strategy that faces the plausible futures we may be heading into and still build something humane, ethical and worth living in. We can design work that is better, not just faster. We can choose vision over nostalgia, adaptability over rigidity, and courage over comfort.

This is a dizzying and occasionally terrifying moment: we don't know for certain what the future will look like. It's also the moment when thoughtful strategy can genuinely shape that outcome for the better. But only if it's informed by the reality we're living in and the futures we're heading towards, rather than the nostalgia we've laminated.

1: Based on this research: https://lnkd.in/gSWGRzus
2: https://lnkd.in/gcsu8f7K
3: I've noticed that any time I write anything about AI nowadays, I inevitably get at least one comment that is essentially "but AI is bad so we shouldn't use it". I genuinely do share concerns about AI, from the environmental to the ethical to the existential. Those aside, I still think you need a position on how you're going to approach AI, even if that position is to thoughtfully differentiate yourself from it. There's a big difference between "this is bad so it's not going to happen" and "this is bad so here's what I'm going to do instead".