29 March 2026 · 8 min read

Most people think they're better at AI than they are. Here's what actual proficiency looks like.

Only 10% of the workforce is genuinely AI-proficient. Most leaders are using AI at the surface — and calling it strategy. Here's how to tell the difference, and what to do about it.

Sarah Pirie-Nally

AI Strategist · Keynote Speaker · Author

There's a question I ask at the start of almost every keynote. I ask the room to raise their hand if they use AI regularly. Most hands go up. Then I ask them to keep their hand raised if they believe they're using it well. Most hands stay up.

Then I share the data.

According to Section's AI Proficiency Report — one of the most rigorous assessments of real-world AI skill conducted to date — only 10% of the workforce is genuinely AI-proficient. The other 90% are, in the researchers' own words, "essentially beginners": people with poor prompting skills, low output quality, and a significant gap between what they think they're producing and what they're actually producing.

Most hands go down.

This isn't a technology problem. It's a self-awareness problem. And it has enormous consequences for leaders who are making strategic decisions about AI — in their organisations, their careers, and their own working lives — based on a proficiency level they've significantly overestimated.

The gap between using AI and being proficient at it

Here's the distinction that changes everything: using AI is not the same as being proficient at it.

Nearly 90% of organisations now use AI in their operations. But according to analysis of McKinsey's 2025 State of AI data, only 9% have achieved genuine AI maturity. EY's 2025 survey found that while 64% of employees report increased workloads, only 5% are maximising AI to actually transform their work.

Think about that gap. The vast majority of people are using AI — and most of them are using it to do the same things they were already doing, just slightly faster. They're asking it to summarise emails. To draft a first paragraph. To clean up a document. They're treating a concert conductor like a metronome.

Real AI proficiency looks completely different. It's not about how often you open ChatGPT. It's about whether you can construct a prompt that draws on genuine contextual knowledge — not just a task description. Whether you can evaluate AI output critically, knowing when it's right, when it's plausible-but-wrong, and when it's confidently hallucinating. Whether you can integrate AI into a workflow that produces outcomes you couldn't have reached alone, and direct it toward your strategic goals rather than just your immediate tasks. And critically: whether you know when not to use AI — and have the judgment to make that call.

This is the difference between AI literacy and AI fluency. Literacy is knowing how to use AI when given direction. Fluency is confidently applying AI to solve unique, complex, high-stakes problems. Most training programmes build literacy. Almost none build fluency.

"AI literacy is knowing the words. AI fluency is knowing what to say — and when to stay silent."

Why most leaders overestimate their proficiency

There's a specific reason this gap is so persistent, and it's worth naming directly: AI is very good at making you feel like you're doing well.

The outputs look polished. The responses are confident. The prose is smooth. If you're not an expert in the domain you're asking about, you have no reliable way to evaluate whether what you've received is genuinely excellent or merely impressive-sounding.

Harvard research found that 58% of AI interactions are sycophantic — meaning the model is agreeing with you, validating your framing, and reinforcing your existing assumptions rather than challenging them. This isn't a bug. It's a feature of how these models are trained. But it means that the feedback loop most people are using to assess their AI proficiency — "does this output seem good?" — is fundamentally unreliable.

There's also the benchmarking problem. Most people compare their AI use to the people around them. If your colleagues are using AI to draft emails and you're using it to restructure your strategy documents, you feel advanced. But the relevant benchmark isn't your immediate peer group. It's what's actually possible — and what your competitors are building toward.

What the research says the gap is actually costing

This isn't an abstract concern. The proficiency gap has measurable consequences.

BCG's 2025 AI at Work report found that frontline employees have hit what researchers are calling a "silicon ceiling": regular AI use has plateaued at roughly 50%, with no clear path to deeper integration. The barrier isn't access to tools. It's the absence of genuine proficiency.

EY's analysis found that companies are missing out on up to 40% of potential AI productivity gains specifically because of gaps in talent strategy — not technology strategy. The tools are there. The capability to use them well is not.

And the gap is widening. McKinsey's data shows that demand for AI fluency jumped nearly sevenfold in the two years through mid-2025. The people building genuine proficiency now aren't just getting ahead. They're creating a structural advantage that compounds over time.

"The question isn't whether you use AI. It's whether you're using it at the level your role — and your ambitions — actually require."

The five levels of AI proficiency (and how to locate yourself honestly)

One of the most useful frameworks for thinking about this identifies a clear progression of AI capability. Here's how I see it through the lens of the leaders and organisations I work with.

Level 1 — AI Curious. You're aware of AI, you've tried a few tools, and you have a general sense of what's possible. You use AI occasionally, usually for low-stakes tasks. Your prompts are conversational and vague. You accept most outputs without critical evaluation.

Level 2 — AI Functional. You use AI regularly for specific, defined tasks. You've developed some prompting instincts through trial and error. You can get useful outputs in your areas of expertise. But you're still largely reactive — responding to what AI produces rather than directing it toward a clear outcome.

Level 3 — AI Fluent. You have a systematic approach to AI. You understand how to structure prompts for complex tasks, how to chain outputs across multiple steps, and how to evaluate quality critically. You're starting to integrate AI into workflows rather than just individual tasks. This is where genuine productivity gains begin.

Level 4 — AI Strategic. You're designing AI-augmented systems, not just using AI tools. You understand where AI creates leverage in your specific context, where it introduces risk, and how to build human oversight into the process. You're making decisions about AI adoption — for yourself and for others — based on a clear strategic framework.

Level 5 — Wonder Conductor. You're operating at the intersection of deep contextual expertise, genuine AI fluency, and human-centred judgment. You're not just using AI well. You're directing it — bringing your full accumulated intelligence to bear on what AI produces, elevating it beyond what either human or machine could achieve alone.

Most leaders I work with arrive somewhere between Level 1 and Level 2. A small number are genuinely at Level 3. Levels 4 and 5 are rare — and they are where the real competitive advantage lives.


Level | Name | What it looks like
1 | AI Curious | Occasional use, vague prompts, uncritical acceptance of outputs
2 | AI Functional | Regular use for defined tasks, reactive rather than directive
3 | AI Fluent | Systematic prompting, workflow integration, critical evaluation
4 | AI Strategic | Designing AI systems, managing risk, leading adoption decisions
5 | Wonder Conductor | Deep contextual direction, human-AI collaboration at full depth

The honest question to ask yourself

Here's what I want you to sit with.

Not "do I use AI?" — you almost certainly do. Not "am I comfortable with AI?" — comfort is not the same as capability. The honest question is: at what level am I actually operating?

Not the level you aspire to. Not the level you'd describe in a performance review. The level that's reflected in the quality of your outputs, the sophistication of your workflows, and the strategic decisions you're making about where AI does and doesn't belong in your work.

If you're not sure — and most people aren't, because the self-assessment problem is real — the Wonder Audit exists precisely for this. It's an 8-minute assessment that gives you a personalised, honest read on where your AI proficiency actually sits, what's holding you back, and what your clearest path forward looks like.

It's not a quiz. It's a diagnostic. And the leaders who've found it most valuable are the ones who came in assuming they were at Level 3 and discovered they were at Level 1 — not because they were failing, but because nobody had ever given them an honest benchmark before.

"Knowing where you actually are is not a defeat. It's the only starting point that leads anywhere useful."

What building genuine proficiency actually requires

I want to be direct about something, because a lot of AI training programmes are not.

You cannot build genuine AI proficiency by watching a series of videos. You cannot build it by attending a one-day workshop. You cannot build it by reading articles — including this one.

Proficiency is built through deliberate practice, honest feedback, and progressive challenge. It requires working with AI on real problems in your actual domain, not hypothetical exercises. It requires someone who can evaluate your outputs critically and tell you where your thinking is shallow. And it requires a framework — not just a collection of prompts, but a coherent way of thinking about what AI is for, where it belongs, and how to direct it toward outcomes that matter.

This is what Wonder Conductor is designed to do. Twelve weeks of structured, progressive practice with a cohort of leaders working on real AI challenges in their own contexts. Not AI theory. Not prompt libraries. The actual work of building fluency — and the judgment to use it well.

The May cohort has limited spots. If you're reading this and recognising yourself somewhere in the proficiency gap, that recognition is worth acting on.


Not sure where your AI proficiency sits right now? The Wonder Audit gives you a personalised score in 8 minutes — free, no email required to start. Get your Wonder Score →




Continue Reading: The AI Proficiency Series

This article is part of a three-part series on what it really means to work with AI as a midlife leader.


Sarah Pirie-Nally

AI Strategist · Keynote Speaker · Author · Founder, Wonder & Wander

Sarah helps leaders and organisations harness the power of AI without losing what makes them irreplaceable — their humanity. She has spoken on 6 continents, built the Wonder Conductor program, and runs fortnightly Practical AI masterclasses attended by 550+ leaders.

AI Proficiency · Leadership · Wonder Audit · AI Strategy · Future of Work
