
How to Build an AI Training Programme for Your Team (Step-by-Step Guide)

By Jay Johnson · Team AI Upskilling · Corporate AI Workshop Planning

A practical step-by-step guide to planning and running corporate AI training: from needs assessment and pilot groups to measuring what actually changes. Based on experience with teams at the World Bank Group, Bloomberg Media, and Adobe.

Most organisations decide to run AI training and immediately ask the wrong question: "Which vendor should we use?" The right question is: "What does our organisation actually need to change?" Get that second question right and the rest follows. Get it wrong and you'll spend a significant budget on a workshop that changes nothing.

This is a practical guide to building an AI training programme from scratch. It covers the three phases that actually matter: understanding where you are, running a focused pilot, and measuring whether anything has changed. It's based on what I've seen work across teams at the World Bank Group, Bloomberg Media, and Adobe, and what I've seen fail everywhere else.

Phase 1: Needs Assessment

Before you design a single training session, you need to understand your organisation's specific AI readiness. This is not a generic "survey your staff" exercise. It's a structured analysis of where AI can create the most value in your specific workflows.

Map your workflows, not your job titles

AI capability is workflow-specific, not role-specific. Two people with the same job title can have completely different AI training needs depending on what they actually spend their time on. Start by identifying the 10 to 15 highest-volume workflows across the organisation: the tasks people do repeatedly, the outputs that consume the most cognitive effort, the bottlenecks that slow down everything downstream.

For each workflow, ask three questions: Is this task repetitive and structured enough for AI to assist? Does it require judgment that AI cannot reliably replicate? Where would AI assistance create the most time or quality improvement? This analysis typically takes a few days of structured conversations with team leads, but it's what separates a useful training programme from a generic one.
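If it helps to make the analysis more concrete than a spreadsheet, here's a minimal sketch of the rubric in Python. Everything in it is hypothetical: the workflow names, the 1-to-5 scales, and the weighting are assumptions for illustration, not a method I'm prescribing.

```python
from dataclasses import dataclass

# Hypothetical rubric: rate each workflow 1-5 on the three assessment
# questions. Higher totals suggest stronger candidates for AI training.
@dataclass
class Workflow:
    name: str
    repetitive: int   # 1-5: how repetitive and structured is the task?
    judgment: int     # 1-5: how much irreplaceable human judgment is needed?
    upside: int       # 1-5: expected time/quality gain from AI assistance

    def priority(self) -> int:
        # Judgment-heavy work scores lower as a training target,
        # so that dimension is inverted.
        return self.repetitive + (6 - self.judgment) + self.upside

workflows = [
    Workflow("Weekly status reporting", repetitive=5, judgment=2, upside=4),
    Workflow("Research synthesis", repetitive=4, judgment=3, upside=5),
    Workflow("Client negotiation prep", repetitive=2, judgment=5, upside=2),
]

# Print workflows from strongest to weakest training candidate.
for wf in sorted(workflows, key=Workflow.priority, reverse=True):
    print(f"{wf.priority():>2}  {wf.name}")
```

The point is not the arithmetic; it's that scoring every workflow against the same three questions forces the prioritisation conversation that generic training skips.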

Assess current capability honestly

Most organisations overestimate how much their teams already know about AI. A brief anonymous survey asking people to rate their confidence with AI tools usually reveals a wide spread: a small group of enthusiasts, a large middle group with occasional experience, and a significant portion who have barely started.

Knowing this distribution matters because it determines your training architecture. One programme cannot effectively serve both the enthusiasts and the beginners. You need to design for different capability levels, or at minimum, segment your pilot group to reflect the majority rather than the outliers.
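If you want to see the shape of that distribution quickly, a few lines of analysis on the survey results is enough. A minimal sketch, assuming a single 1-to-5 confidence question; the responses and segment thresholds below are invented for illustration.

```python
from collections import Counter

# Hypothetical responses to "Rate your confidence with AI tools, 1-5"
# (1 = never used them, 5 = use them daily). Invented for illustration.
responses = [1, 2, 2, 3, 1, 4, 5, 2, 3, 1, 2, 5, 3, 2, 1]

def segment(score: int) -> str:
    # Illustrative thresholds; tune them to your own survey scale.
    if score >= 4:
        return "enthusiast"
    if score >= 2:
        return "occasional"
    return "beginner"

distribution = Counter(segment(s) for s in responses)
total = len(responses)
for group, count in distribution.most_common():
    print(f"{group:<11} {count:>2}  ({count / total:.0%})")
```

On real data, the large "occasional" middle group is almost always where the core programme should be pitched.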

Identify the decision-makers who need to be aligned

AI training most often fails not because of bad content, but because leadership isn't involved. If senior leaders haven't bought into why this matters, they will not protect time for training, they will not model AI use themselves, and they will not hold teams accountable for adoption.

Before you run a single session, get explicit commitment from the relevant leaders: a protected block of time for the pilot, a clear statement of what success looks like, and agreement on who owns the follow-through. Without these three things, training remains a one-off event rather than a capability shift.

Phase 2: The Pilot Group

Do not roll out AI training to your entire organisation at once. Run a pilot first. This is not timidity: it's efficiency. A well-designed pilot tells you what works before you spend the full budget, and it creates internal champions who make the wider rollout easier.

Select the right pilot group

The ideal pilot group is 15 to 30 people who share enough in common (similar workflows, similar seniority levels) that you can tailor the content meaningfully, but who are diverse enough to surface different use cases and adoption challenges.

Avoid two common mistakes. First, do not stack the pilot with the most enthusiastic people. You need to understand how AI training lands with the sceptics, the time-pressured, and the people who are not intrinsically motivated to change. If your pilot only includes converts, your results will be misleadingly positive. Second, do not make the pilot voluntary. Voluntary pilots attract self-selection bias. Assign the group deliberately.
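One way to make the assignment deliberate is to sample each capability segment in proportion to its size, so the pilot mirrors the real distribution rather than the enthusiasts. A minimal sketch, assuming people are already tagged with the segments from your Phase 1 survey; the roster and quotas here are entirely hypothetical.

```python
import random

# Hypothetical roster mapping each capability segment (from the
# Phase 1 survey) to the people in it. Names are illustrative.
roster = {
    "beginner":   ["ana", "ben", "carl", "dee", "eli", "fay", "gus", "hal"],
    "occasional": ["ida", "jo", "kai", "lena", "mo", "nia", "omar", "pia",
                   "quinn", "raj"],
    "enthusiast": ["sam", "tess", "uma", "vic"],
}

PILOT_SIZE = 15
total = sum(len(people) for people in roster.values())

random.seed(42)  # reproducible assignment
pilot = []
for segment_name, people in roster.items():
    # Take a share of each segment proportional to its size, so the
    # pilot reflects the majority rather than the outliers.
    quota = round(PILOT_SIZE * len(people) / total)
    pilot.extend(random.sample(people, min(quota, len(people))))

print(sorted(pilot))
```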

Design for behaviour change, not just knowledge transfer

A good pilot session has three components: conceptual framing (what AI can and cannot do, where judgment is still required), hands-on practice with workflows that actually resemble the pilot group's real work, and a structured commitment to try one specific behaviour in the following week.

That final component is the one most training programmes skip. Without a specific, low-friction commitment, the insights from training evaporate within a few days. With it, you create a behaviour-change trigger that the facilitator can follow up on.

When I ran a training session with a Bloomberg editorial team, the follow-up commitment was simple: use AI for research synthesis on one story before the end of the week, and report back on whether it saved time or not. Simple, measurable, low-stakes. The follow-up conversation was more valuable than most of the training session.

Build in structured follow-up

Run a follow-up session two to three weeks after the initial training. This is where the real learning happens. People will have tried things, hit barriers, found workarounds, and discovered use cases you hadn't anticipated. That feedback is essential for refining the programme before wider rollout.

The follow-up does not need to be long. A 60-minute structured debrief covering what worked, what didn't, and what support people need is sufficient. But it needs to happen: without it, training is a one-time event rather than the start of a capability journey.

Phase 3: Measurement Framework

If you cannot measure whether your AI training programme has changed anything, you cannot justify continued investment and you cannot improve it over time. Most organisations skip measurement entirely, or track the wrong things.

Measure behaviour, not knowledge

Post-training surveys that ask "Did you find this useful?" or "Would you recommend this to a colleague?" measure satisfaction, not impact. They feel like measurement, but they tell you almost nothing about whether behaviour has changed.

The metrics that actually matter: How often are people using AI tools in their workflows, compared to before training? On which specific tasks? Has output quality changed? Has time-to-completion on target workflows changed? These are harder to measure, but they're the ones that justify the investment.
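Even a crude before/after comparison beats a satisfaction survey. Here's a minimal sketch of the arithmetic for one target workflow; the metric names and figures are invented for illustration, not real client data.

```python
# Hypothetical before/after metrics for one target workflow.
# Figures are illustrative, not real results.
baseline = {"ai_uses_per_week": 1.2, "minutes_per_task": 95.0}
post_training = {"ai_uses_per_week": 4.5, "minutes_per_task": 70.0}

for metric, before in baseline.items():
    after = post_training[metric]
    change = (after - before) / before
    print(f"{metric}: {before} -> {after} ({change:+.0%})")
```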

Define your baseline before you start

You cannot measure change if you don't know where you started. Before your pilot, collect baseline data on the workflows you're targeting: current AI tool usage frequency, self-reported confidence with AI, and if possible, time-on-task for two or three representative workflows.

This does not need to be a rigorous research study. A short pre-survey and a brief interview with three or four team members will give you enough of a baseline to assess meaningful change after training.

Track 30-day adoption, not just immediate response

Behaviour change takes time to embed. The true test of a training programme is not what people do in the week after the session; it is what they are doing 30 days later. Build a 30-day check-in into your measurement framework as standard. If adoption has dropped off at the 30-day mark, that tells you something important about friction in the environment, not just the quality of the training itself.
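The 30-day check-in can be equally simple: count how many pilot participants are still using AI on the target workflows each week. A minimal sketch with invented numbers, just to show the calculation.

```python
# Hypothetical weekly adoption counts for a 30-person pilot group:
# people who used AI on a target workflow in each week after training.
weekly_active = {"week_1": 24, "week_2": 19, "week_3": 14, "week_4": 11}
pilot_size = 30

week1_rate = weekly_active["week_1"] / pilot_size
day30_rate = weekly_active["week_4"] / pilot_size
retention = day30_rate / week1_rate

print(f"Week-1 adoption: {week1_rate:.0%}, 30-day adoption: {day30_rate:.0%}")
print(f"Retention of initial adopters: {retention:.0%}")
# A steep drop-off here points to friction in the environment
# (access, time, incentives), not necessarily weak training content.
```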


Putting It Together

A well-designed AI training programme is not a workshop. It is a process: assess where you are, run a focused pilot with structured follow-up, measure what actually changes, and use that learning to scale intelligently.

The organisations that get this right do not spend more money than the ones that get it wrong. They spend the same money more deliberately. They start with questions before they start with solutions. And they treat training as a capability investment, not a box to tick.

Need help designing the programme?

I work with enterprise teams to run needs assessments, design role-specific training, and build measurement frameworks that demonstrate real impact. If you're planning an AI training programme, let's talk about what will actually work for your organisation.

Book a Call →

Found this useful? Get the free AI Training Roadmap

5 actionable steps to build AI literacy across your team or organisation. Used by teams at the World Bank, Bloomberg, and Adobe.

Jay Johnson

Enterprise AI training consultant. Jay has delivered AI workshops for teams at the World Bank Group, Bloomberg Media, and Adobe. He helps organisations build genuine AI capability, not just hype.