29 March 2026 · 7 min read

The 6 questions that separate AI users from AI conductors

It's not the tools you use. It's not how many hours you spend prompting. What separates genuine AI conductors from everyone else is the quality of the questions they ask — of themselves.

Sarah Pirie-Nally

AI Strategist · Keynote Speaker · Author

I've spent a lot of time thinking about what actually separates the people who are genuinely good at working with AI from the people who are merely using it.

It's not the tools they choose. It's not how many hours a week they spend prompting. It's not even their technical background — some of the most capable AI conductors I know came from design, psychology, and the humanities.

What separates them is the quality of the questions they ask. Not of AI. Of themselves.

Here are the six questions I use as a diagnostic. They're not a test. They're a mirror. And if you answer them honestly, they'll tell you more about your real AI proficiency than any tool assessment will.

Question 1: Can you tell when AI is wrong — and explain why?

This is the foundational question, and it's the one most people skip.

It's easy to know when an AI output feels off. It's much harder to articulate precisely why it's wrong — which facts are incorrect, which reasoning is flawed, which assumptions are being smuggled in. That distinction matters enormously, because if you can't explain the error, you can't correct it. You can only discard the output and start again.

Genuine AI proficiency requires domain knowledge deep enough to evaluate what you're receiving. This is why the "AI will replace experts" narrative has it backwards. The more sophisticated AI outputs become, the more important it is to have a human in the loop who knows enough to catch the sophisticated errors — the ones that look right, read fluently, and are subtly, consequentially wrong.

Ask yourself: in the domains where I use AI most, could I catch a confident mistake? Could I explain to someone else exactly where the reasoning broke down?

If the answer is no, you're not conducting. You're transcribing.

"The most dangerous AI output isn't the one that's obviously wrong. It's the one that's 90% right and confidently presented."

Question 2: Do you know what to keep human — and why?

Every AI conductor I respect has a clear, considered answer to this question. Not a vague instinct that "some things should stay human." A specific, reasoned position on which decisions, relationships, and judgments they will not route through AI — and the thinking behind it.

This isn't technophobia. It's strategy. The leaders who use AI most effectively are also the ones who are most deliberate about its limits. They've thought carefully about where AI introduces risk they're not willing to accept, where the human relationship is the product, and where the judgment call requires a kind of contextual wisdom that no model can replicate.

The leaders who use AI least effectively are often the ones who've never asked this question at all. They're either avoiding AI entirely or using it indiscriminately — and both positions are a failure of strategic thinking.

Where is your line? Can you articulate it? Does it reflect genuine reasoning, or is it just habit?

Question 3: Can you describe your AI workflow — not just your AI tools?

There's a meaningful difference between having a list of AI tools you use and having an AI workflow. Tools are inputs. A workflow is a system — a structured, repeatable process that takes a goal and produces an outcome, with AI integrated at specific points for specific reasons.

Most people can tell you which tools they use. Far fewer can describe, step by step, how they use AI to move from a problem to a solution. Fewer still have a workflow that they've deliberately designed, tested, and refined over time.

This matters because the value of AI doesn't live in any individual output. It lives in the compounding effect of a well-designed system. A single good prompt is a tactic. A workflow is a strategy. And strategy is what scales.

If I asked you to walk me through your AI workflow for your most important recurring task — not the tools, but the actual process — how specific could you be?

Question 4: When did AI last change your mind?

This one is deceptively simple, and the answer is revealing.

If AI is only ever confirming what you already thought, you're not using it as a thinking partner. You're using it as a very expensive autocomplete. The sycophancy problem is real — models are trained to be agreeable, and if you're not actively prompting for challenge, critique, and alternative perspectives, you're unlikely to get them.

The leaders I see using AI at the highest level are the ones who deliberately use it to stress-test their thinking. They ask AI to argue the opposite position. To find the flaw in their reasoning. To generate the strongest possible objection to their plan. And they take those outputs seriously — not as definitive answers, but as genuine inputs into their thinking.

When did AI last genuinely surprise you? When did it surface something you hadn't considered, and when did you actually change course because of it?

If you can't remember, it's worth asking whether you're using AI to think — or to confirm.

"If AI always agrees with you, you haven't found a thinking partner. You've built an echo chamber with better grammar."

Question 5: Could you explain your AI use to a sceptic — and to a regulator?

This question has two parts, and both matter.

The sceptic test: could you make a coherent, evidence-based case for why AI belongs in your workflow? Not "everyone's using it" or "it saves time" — but a specific argument about what AI does better than the alternative, what risks you've considered, and what oversight you've built in. If you couldn't make that case, you probably haven't thought carefully enough about what you're actually doing.

The regulator test: if someone with authority over your industry asked you to account for every AI-assisted decision you've made in the last month, could you? Do you know which outputs influenced which decisions? Do you have a record of where AI was in the loop and where it wasn't?

This isn't about compliance for its own sake. It's about the kind of intentionality that distinguishes genuine proficiency from casual use. The people who can answer both parts of this question are the ones who are using AI with the kind of deliberate awareness that will matter increasingly as AI governance frameworks mature.

Question 6: Are you getting better — or just getting faster?

This is the question I come back to most often, because it's the one that most clearly separates AI users from AI conductors.

Getting faster is real. AI can compress the time it takes to do things you already know how to do. That's valuable. But it's not the same as getting better — developing new capabilities, building new mental models, expanding what you're able to think and produce.

The risk of using AI primarily for speed is that it can hollow out the learning process. If AI is always doing the first draft, you may stop developing the instinct for how to start. If AI is always generating the options, you may stop developing the judgment to generate them yourself. If AI is always summarising the research, you may stop developing the ability to read deeply and synthesise independently.

The best AI conductors I know use AI in ways that make them more capable, not just more efficient. They're building skills, not just outsourcing tasks. They're using AI to go further than they could alone — not just to arrive at the same place faster.

Are you growing? Or are you just accelerating?


What your answers tell you

These six questions map roughly onto the five proficiency levels I described in my previous post. If you're struggling to answer questions 1 and 2, you're likely operating at Level 1 or 2 — using AI functionally but without the critical foundation that makes it genuinely powerful. If you can answer questions 1 through 4 with specificity and confidence, you're probably at Level 3 or approaching Level 4. If all six feel natural and your answers are detailed and considered, you're operating at the level of a genuine AI conductor.

The honest version of this exercise is uncomfortable. Most people discover that their answers are thinner than they expected — not because they're not intelligent or capable, but because nobody has ever asked them to think about their AI use at this level of depth before.

That's exactly what the Wonder Audit is designed to surface. It's an 8-minute diagnostic that gives you a personalised, honest read on where your AI proficiency actually sits — not based on how you feel about AI, but based on how you actually use it. The results are specific, the recommendations are actionable, and the score is a genuine starting point rather than a vanity metric.

If these six questions have made you curious about where you actually land, the Wonder Audit is the next logical step.


Take the free Wonder Audit and get your personalised AI proficiency score in 8 minutes. Get your Wonder Score →


Made by Sarah Pirie-Nally and Manus AI


Continue Reading: The AI Proficiency Series

This article is part of a three-part series on what it really means to work with AI as a midlife leader.

Ready to find out where your AI proficiency sits? Take the free Wonder Audit →


Free · 8 Minutes

Discover Your AI Readiness Score

The Wonder Audit gives you a personalised score across 5 dimensions of AI leadership — so you know exactly where you stand and what to do next.

Take the Free Wonder Audit
Sarah Pirie-Nally

AI Strategist · Keynote Speaker · Author · Founder, Wonder & Wander

Sarah helps leaders and organisations harness the power of AI without losing what makes them irreplaceable — their humanity. She has spoken on 6 continents, built the Wonder Conductor program, and runs fortnightly Practical AI masterclasses attended by 550+ leaders.

AI Proficiency · Wonder Conducting · Leadership · AI Strategy · Self-Assessment
