Sir Keir Starmer wants AI to boost productivity. Here’s how it can work.
Today the Prime Minister announced the government’s long-awaited plan to turn the UK into an AI superpower. The AI Opportunities Action Plan, among other things, aims to transform UK productivity, especially in the public sector. That includes, the government hopes, using AI to drastically cut admin costs, speed up planning, and deliver better, more personalised services.
Making AI work matters for the whole UK economy. As I’m sure you all know, we’ve been stuck in a low-productivity rut for over a decade. Everyone seems to agree that adopting new technology is the way out. (The NHS’s very survival may even depend on it.) One major new study reckons AI could unlock around £120 billion in productivity gains a year – for large UK firms alone.
We agree – but only if we do it right. Like a lot of technological change, it could also go very wrong.
Following this new Action Plan, businesses and public services across the UK will be trying to figure out how they can make the most of this opportunity. For at least two years now, the question I get asked most often is: ‘how can my firm make the most of AI?’
All organisations realise that AI is coming, and that it’s going to be transformative. And there’s definitely an appetite to learn too – the number of learners booked onto our AI courses has increased by 372% over just the past six months.
We’ve seen where it works and where it fails. The answer, as always, isn’t just about the tech itself. It’s no good dumping AI on staff – yes, that includes providing QA training! – and assuming the benefits will naturally follow. If AI is going to transform how we do business in the way we all hope, we need to change how we think about technology: how we teach it, who uses it, and even how an organisation feels about it.
This is because AI is different to nearly all other technologies that have come before it.
First off, it isn’t even an individual technology. It’s a family of capabilities that share similar characteristics. Machine learning is brilliant at spotting patterns. Natural language processing parses and analyses text. Generative AI – the current vogue – generates human-standard content.
I think of AI as a human force-multiplier. It does things we can already do, but at a scale, speed, and precision previously unimaginable. If, as Steve Jobs said, the personal computer was the ‘bicycle of the mind’, then surely AI is the motorbike.
It was obvious what the Xerox machine could do: speed up the task of copying paper documents. This was extremely useful for those members of staff who needed lots of paper copies of things. (Which probably wasn’t that many, until the Xerox machine came along). People could be easily trained on which buttons to press to complete that very precise job. It was task-specific and confined.
AI forces us to ditch that mindset. Its potential uses are not confined like a Xerox machine.
It could be the chief data scientist who uses AWS SageMaker to build a model that can parse customer spending habits. Or the head of IT using Copilot for Security to quickly assess vulnerabilities in the company’s attack surface.
But here’s what’s different.
It could also be the ‘non-techie’ HR assistant who realises they can use Google Gemini to prepare briefing documents for new recruits.
Or a new marketing exec who uses free generative video software to produce cheaper content for campaigns.
Or the head of sales who asks Microsoft Copilot to provide salient data points about her industry, and to prepare a briefing about potential new markets.
I’ve worked in technology ever since the arrival of the web. I’ve seen how easily ‘tech-ghettos’ form: companies where half the staff ‘do tech’, and zealously embrace all the latest systems and machines. Meanwhile the other half are sick of the constant carousel of new systems, new IT training, new certificates. And the two sides barely speak.
That must not happen here. You do not need to be a ‘machine learning’ engineer. Anyone can work out ways to make AI work in the context of their own job.
It’s early days, but already a handful of recent academic papers show that not only can generative AI dramatically increase productivity if done well – it can also make staff happier and more engaged. Especially junior and less ‘technical’ staff. The layman’s explanation? AI can let staff learn, and do, interesting new stuff.
This simple but profound change has encouraged us at QA to re-imagine how we think about training. AI as a multi-purpose force-multiplier means learning is about more than teaching people how to use kit: it’s about creating an organisation-wide culture of learning-by-doing which encourages trying, testing, and experimenting from the CEO to the new apprentice.
The organisation that will thrive is not the one with the fastest machines or the ‘best’ AI engineers, but the one where silos – especially between ‘tech teams’ and ‘the rest’ – no longer exist. Where anyone and everyone within an organisation feels like AI is something they could and should use. It’s a mindset.
How can an organisation develop this AI-ready mindset? It doesn’t help that so much public discussion about AI is dystopian: the robots are coming for my job! It’s the end of life as we know it! Little wonder people come to AI distrustful and worried. And there are plenty of practical issues relating to privacy, data use, and copyright, which my colleague Vicky writes about here. It can all feel a little overwhelming.
Too often organisations don’t quite know how to make it work for them. Many are, in IBM’s words, ‘stuck in the AI sandbox’. They see the potential, the theory. But the route to practical action remains unclear.
So my advice is simple: make a start on something that is in your sphere of influence, so that it’s applied and you see the results. When you start modestly with an eye on application, it gets traction. People learn a skill, let’s say it’s using a ‘large language model’. They give it a go, and see it slash the time it takes to write an annual report. They become confident rather than fearful.
This becomes self-perpetuating. A culture of constant experimentation takes hold. The staff actually enjoy it. And they want to stick around too, embedding those skills.
Microsoft recently surveyed people using its AI assistant Copilot. The second most common use of the time freed up by Copilot? Attending more meetings. That is not the AI mindset – it’s the old mindset applied to new technology. But when there is a culture of constant experimentation, staff don’t attend more meetings: they start to look for other uses for AI. They come to learn its weaknesses (like the fact that it is only as good as the data used to train it), stop seeing AI as a rival, and reimagine it as an assistant. They start talking positively about it. There is no AI ghetto.
It’s not easy. In the past, ‘technology transformation’ has too often over-promised and under-delivered. I think this time will be different. This revolution isn’t really about technology – but about the people who use it. And that’s always where real change starts.
Want to learn more about upskilling your organisation in AI and Machine Learning?