
AI Skills Gap 2026: What L&D Leaders Need to Know

By Jay Johnson · AI Skills Gap UK · L&D AI Upskilling 2026 · AI Training Needs Assessment

The AI skills gap is widening faster than most organisations are moving. WEF, McKinsey, and CIPD data on what's at stake in 2026, and a 3-step needs assessment to help L&D leaders close it.

The numbers are no longer projections. The AI skills gap is here, and it is widening faster than most organisations are moving.

According to the World Economic Forum's Future of Jobs Report 2025, 39 per cent of workers' existing skills will be disrupted or made obsolete by 2030, with AI and machine learning cited as the primary driver. The Chartered Institute of Personnel and Development (CIPD) found in its 2024 People and Technology survey that fewer than one in three UK organisations had any structured AI training in place. McKinsey's most recent global survey on AI adoption reports that while AI usage has doubled across industries in the past two years, the majority of employees using AI tools have received no formal training on how to use them effectively.

For L&D leaders, this gap is both the challenge and the opportunity. It is the challenge because it is arriving faster than most training departments were prepared for. It is the opportunity because the organisations that close this gap now will compound an advantage over competitors who are still waiting for the technology to stabilise before investing in capability.

Get the free AI Training Roadmap

5 steps to launch AI skills training in your organisation. No spam, just practical guidance.

What "AI Skills" Actually Means for Non-Technical Professionals

One of the reasons the AI skills gap is so persistent is that most organisations do not have a clear definition of what AI skills they are actually trying to build. They know they want their teams to be more AI-capable. They are less clear on what that looks like in practice.

This matters because the training market is full of programmes that teach the wrong things: vendor demos, prompt tip collections, generic awareness sessions that generate high satisfaction scores and almost no behaviour change.

The CORE framework, developed through years of training non-technical professionals at organisations including the World Bank Group, Bloomberg Media, and Adobe, identifies four skill categories that consistently drive measurable productivity improvement.

These are not technical skills. They do not require coding knowledge or a background in data science. But they are learnable, and they are the skills that separate professionals who get genuine leverage from AI from those who use it occasionally and feel underwhelmed.

For L&D leaders designing training programmes, this framework provides a useful structure: stop asking "how do we teach people to use AI?" and start asking "how do we build these four specific capabilities in our workforce?"

How Leading Organisations Are Closing the Gap

The organisations making the most progress on AI upskilling are not the ones running the most training. They are the ones running the most targeted training.

At Bloomberg Media, AI training for editorial teams was built entirely around their actual content pipeline: research aggregation, source synthesis, and first-draft production. Rather than teaching abstract AI principles, sessions focused on the specific workflows where AI could save time without compromising editorial quality. The result was a 25 to 35 per cent reduction in time spent on research aggregation, with no reported drop in output quality.

At the World Bank Group, training for analytical teams concentrated on research synthesis and policy brief writing: the two highest-volume, most cognitively demanding tasks in the workflow. The training was designed around real documents, real briefs, and real constraints, not fabricated practice scenarios. Adoption rates were significantly higher than the organisation's previous generic AI awareness programme.

At Adobe, the focus was on the creative brief workflow: how AI could accelerate ideation, sharpen brief interpretation, and reduce the number of revision cycles on initial drafts. This translated into measurable margin improvement on a per-project basis, straightforward enough to demonstrate to senior leadership.

The pattern across all three cases is consistent: specificity drives adoption, and adoption drives ROI. Generic training produces generic results. Role-specific, workflow-embedded training produces the kind of behaviour change that L&D leaders can actually measure and report back to the business.

A 3-Step AI Training Needs Assessment

Before designing any AI training programme, L&D leaders need to understand where their organisation currently sits. The following three-step process provides a structured starting point.

Step 1: Map high-volume workflows, not job titles

AI capability is workflow-specific, not role-specific. Two analysts with the same job title can have completely different AI training needs depending on what they actually spend their time on. Begin by identifying the ten to fifteen highest-volume, most cognitively demanding workflows across the organisation: the tasks that consume the most time and that do not hinge on judgment AI cannot replicate.

For each workflow, ask three questions: Is this structured enough for AI to assist? Where would AI involvement save the most time? Where does human judgment remain essential? This analysis typically takes two to three days of structured conversations with team leads, and it is the single most valuable input into programme design.

Step 2: Assess current capability honestly

Most organisations overestimate how much their workforce already knows about AI. A brief anonymous survey asking people to rate their confidence with AI tools and describe how frequently they use them typically reveals a wider spread than leadership expects: a small group of enthusiastic early adopters, a large middle group with occasional use, and a significant portion who have barely engaged.

This distribution determines training architecture. A single programme cannot effectively serve both the enthusiasts and those still forming their first impressions. Segmenting by capability level, or at minimum designing for the middle group rather than the outliers, significantly improves outcomes and adoption rates.
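For teams that run this survey at any scale, the segmentation step is simple to automate. The sketch below is illustrative only: it assumes a 1-to-5 self-rated confidence score per respondent, and the thresholds and cohort labels are hypothetical, not prescriptive.

```python
# Sketch: bucketing self-rated AI confidence scores (1-5) into three
# capability cohorts. Thresholds are assumptions to be tuned per organisation.
from collections import Counter

def segment(scores, low=2, high=4):
    """Count respondents per cohort from a list of 1-5 confidence scores."""
    def label(score):
        if score <= low:
            return "minimal engagement"   # barely engaged with AI tools
        if score >= high:
            return "early adopter"        # enthusiastic, frequent use
        return "occasional user"          # the large middle group
    return Counter(label(s) for s in scores)

# Illustrative survey responses showing the typical spread
ratings = [1, 2, 2, 3, 3, 3, 3, 4, 5, 2, 3, 1]
cohorts = segment(ratings)
for name, count in cohorts.items():
    print(f"{name}: {count}")
```

Even a rough breakdown like this makes the design decision concrete: if the "occasional user" cohort dominates, that is the group the core programme should be built around.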

Step 3: Align stakeholders before training begins

AI training initiatives fail most often not because of poor content, but because senior stakeholders were not aligned before the programme launched. L&D leaders who secure explicit commitment from relevant leaders early, including protected time for training, a clear definition of what success looks like, and clarity on who owns adoption follow-through, consistently report higher programme impact than those who treat stakeholder management as a post-design step.

This is not political manoeuvring. It is the minimum infrastructure required for behaviour change to take hold. Training without this context produces events, not capabilities.


The Window for Acting Is Now

CIPD data shows UK organisations lagging behind international counterparts on AI upskilling investment. McKinsey's research suggests the productivity gap between organisations with structured AI training and those without is already measurable and growing. WEF modelling indicates the skills disruption curve is steepening through 2026 and beyond.

L&D leaders who have been waiting for clarity before committing to AI training programmes now have enough data to make the business case confidently. The gap is real. The training that closes it is well-understood. The organisations that move now will be reporting results when their competitors are still debating whether the time is right.

Ready to build your organisation's AI training programme?

I work with L&D leaders and senior teams to design AI training that connects to real workflows and delivers measurable behaviour change. If you're planning an AI upskilling initiative, let's talk about what will actually work for your organisation.

Book a 15-Minute Call →

Found this useful? Get the free AI Training Roadmap

5 actionable steps to build AI literacy across your team or organisation. Used by teams at the World Bank, Bloomberg, and Adobe.


Jay Johnson

Enterprise AI training consultant. Jay has delivered AI workshops for teams at the World Bank Group, Bloomberg Media, and Adobe. He helps organisations build genuine AI capability, not just hype.