Two AI training programmes. Six hundred employees. Less than one in five completing either of them.
That was the starting point when a mid-sized asset management firm first approached me about a third attempt. They had executive sponsorship, adequate budget, and access to widely used learning platforms. By most conventional L&D measures, they had done the right things. The results disagreed.
What followed was a complete redesign. Twelve months later, completion stood at 71%, and 58% of participants had demonstrated measurable workflow change at the three-month mark. This post explains what changed and what the pattern means for L&D leaders navigating similar pressures.
Two Programmes, One Structural Flaw
The firm's first programme was a vendor-led rollout built around a flagship AI productivity platform. The second used a generic AI literacy course available through their existing learning system. Both were professionally produced. Both had senior visibility. Neither reached 20% completion.
The failure was not motivational. Employees at this firm were not resistant to learning. The failure was structural.
Both programmes made the same fundamental error: they were designed around the tool, not the person using it. Modules walked participants through software features. Completion meant finishing a course. There was no meaningful connection to the actual daily workflows of an analyst, a portfolio manager, an operations lead, or a client services associate. The content was largely identical whether you worked in fixed income or middle-office functions.
This approach assumes that if you show people what a tool can do, they will work out how it applies to their job. In a high-stakes regulated environment like asset management, that assumption consistently fails. People disengage not because the technology seems irrelevant in principle, but because nothing in the training connects it to the specific decisions they make on a given Tuesday morning. Generic AI training consistently delivers the weakest ROI of any upskilling investment, and the asset management case illustrates precisely why.
Rethinking the Design From First Principles
The brief for the third programme was straightforward: make it actually work this time.
The redesign was built around four core principles drawn from the framework I use across AI training engagements with organisations including the World Bank, Bloomberg, Adobe, Amazon, and Meta. I call this the CORE framework. It does not start with software.
Role-specificity over broad coverage. Every cohort was built around a functional role cluster rather than mixed audiences. Analysts worked through use cases mapped to data interpretation and report preparation. Operations staff practised AI-assisted document review and workflow triage. Client services teams focused on synthesis and communication tasks. This sounds straightforward. In practice, it is the step most training programmes skip because it requires more design time upfront.
Thinking before tools. The programme opened not with software tutorials but with a structured introduction to evaluating AI outputs critically. How do you know when to trust a generated summary? How do you interrogate a model's assumptions? Participants needed to develop judgment before they practised capability. The sequence matters far more than most training designers account for.
Horizontal accountability. Rather than individual e-learning paths with manager-reported completions, the programme ran in small functional cohorts with structured peer checkpoints. Accountability sat between colleagues rather than between employee and line manager. In a culture where performance visibility carries real professional stakes, removing the evaluative pressure from early-stage learning changed participation dramatically.
Visible application, not self-reported confidence. Each module ended with a documented workflow application: a specific task the participant had modified or improved using AI. These were shared within the cohort, not submitted upwards. Making application visible without making it evaluative shifted the dynamic from compliance to genuine practice.
None of this required new technology or a larger budget. It required a different design logic, applied consistently from the first module to the last.
The Numbers at Three Months
Completion reached 71%. That is the headline figure, and it matters: it represents a programme that employees actually finished, which the previous two did not.
But the more meaningful outcome is the 58% workflow adoption rate at three months. Not self-reported confidence levels. Not satisfaction scores. Observable, documented changes to how work was being done, with processes that now routinely incorporated AI where they had not before.
For an asset management firm where consistency and precision directly affect client outcomes, that is a substantive result. It is also the figure that ultimately shaped how leadership discussed the investment. Completion tells you the training was finished. Workflow adoption tells you it transferred.
What This Means for L&D Leaders
Three patterns stand out from this engagement that apply well beyond financial services.
Diagnose before you design. Both of the failed programmes skipped a proper diagnostic stage. Understanding which roles are most exposed, which workflows are most tractable, and where the genuine friction points are: that work should precede any content or platform decisions. Building an AI programme without that foundation is one of the most predictable ways to waste the budget.
Completion is a proxy metric, not a goal. When completion is the primary success measure, training gets designed for completion. Clicking through modules is a very low bar, and it is entirely possible to clear it without any behaviour change following. Design for what you want to see at three months, and completion tends to follow. The reverse is rarely true.
The tool-first instinct is almost always the wrong starting point. Enterprise AI training programmes most commonly break down at precisely this juncture: when the programme is structured around demonstrating product features rather than developing applied judgment. Tools change. Vendors update their interfaces. Thinking capabilities compound.
What Comes Next
If your organisation is planning an AI training rollout, the most useful question to ask before selecting a platform or scoping a programme is: what does success look like at three months? If the answer centres on completion rates or awareness scores, it is worth going back to the design brief.
The principles behind this redesign — role-specificity, thinking-first sequencing, peer accountability, and visible application — form the basis of the CORE framework. For teams and individual professionals who want to apply the same approach to their own AI development, the AI Thinking for Non-Technical Professionals course covers these principles in depth.
Ready to build a programme that actually delivers?
I work with L&D leaders and senior teams to design AI programmes built around real workflows and measurable outcomes.
Book a Call →