On Sunday, OpenAI announced multiyear partnerships with McKinsey, BCG, Accenture, and Capgemini. Not to advise on AI strategy. To deploy AI agents across enterprises. The consulting giants aren’t debating anymore. They’re building certified teams, embedding engineers, and restructuring their practices around agentic AI.

The same week, Salesforce reported 22,000 Agentforce deals and 11 trillion tokens processed in a single quarter. Microsoft’s AI chief predicted human-level performance on all professional tasks within 18 months.

I’m not sharing these headlines to scare you. I’m sharing them because there’s a gap forming — between where the industry is heading and where most individual professionals are standing. And that gap is growing every week.

Last week, I wrote about the fees falling and said: start training now. Chase your own gains. Stop expecting magic from a single prompt.

But I realize that advice, without a method, is just noise. “Start training” is like telling someone to “get fit.” Fine. But what does Monday morning actually look like?

This week, I want to give you something concrete. A workflow. Not a course. Not a certification. A practical method for your first week of real AI experimentation.

Step One: Pick Your Problem, Open Everything

Forget “learning AI.” That’s too vague. You’ll watch a webinar, read three articles, feel informed, and change nothing.

Instead: pick one real problem from your actual work. Something that slows you down, bores you, or eats time. The reconciliation you dread. The report you assemble from five sources. The policy document you need to digest before an engagement. The finding you’ve rewritten four times.

Now open everything. Not one tool. All of them.

Check what your employer gives you — you might be surprised. Many organizations already have enterprise licenses for Microsoft Copilot, ChatGPT Enterprise, Claude, Google Gemini, or some combination. If your company provides access: use it. Don’t wait for training. Don’t wait for a workshop. Log in.

If your employer provides nothing? Get your own subscriptions. Claude Pro, ChatGPT Plus, Gemini Advanced — they cost about as much as a streaming service, and the paid tiers are dramatically more capable than the free versions. Better reasoning. Longer context. File uploads. Tool use. If you’ve only tried the free tier of ChatGPT and concluded AI isn’t impressive, you’ve been forming opinions about a Ferrari based on a test drive in a parking lot.

Now feed the same problem to every tool you have access to. The same prompt, the same documents, across ChatGPT, Claude, Copilot, Gemini — whatever you’ve got. See how they respond differently. One might structure the output better. Another might catch nuances. A third might ask clarifying questions that sharpen your thinking.

This is day one. Not “explore gently.” Go hard. Push the boundaries. See where each tool excels and where it falls flat. You’ll learn more in one intense hour than in a week of cautious experimentation.

Step Two: Stop Being Polite

Here’s where most people stall. They type a two-sentence prompt, get a mediocre answer, and conclude: See, AI isn’t good enough.

That’s not a fair test. That’s laziness.

These models live on context. The more you give them, the better they perform. A two-sentence prompt gets you a two-sentence-quality answer. But when you give the model your full situation — your role, your objective, the background, the constraints, what good looks like, what you’ve tried before — the output transforms.

Don’t write “summarize this document.” Write “I’m an internal auditor reviewing this policy ahead of a control effectiveness assessment. Summarize the key control requirements, flag areas where the policy is vague or contradicts standard practice, and note anything I should probe during stakeholder interviews. Format it as a table with columns for control objective, key requirement, and potential gap.”

That’s not a prompt. That’s a briefing. And the difference in output will shock you.

Here’s a hack I use constantly: talk to it.

Open your phone’s voice recorder, or use the voice feature built into ChatGPT or Claude. Just speak. Explain the problem the way you’d explain it to a colleague over coffee. What are you working on? What’s the context? What are you trying to achieve? What’s tricky about it?

Then feed that transcript to the model. (If you used the built-in voice feature, the transcription is already done for you.)

We’re naturally inclined to give another human the context they need to understand what we’re after. We add background. We explain why something matters. We flag the parts that are tricky. We do this instinctively in conversation — and then we sit down at a keyboard and strip all of that out in favor of a terse two-line prompt.

Stop doing that. Your AI coworker needs the same context a human coworker would. Give it.

Step Three: Don’t Know What to Ask? Ask It

This is the part that trips up most professionals. You sit in front of the tool, cursor blinking, and think: I don’t even know what to prompt.

So ask the model.

“I’m an internal auditor about to start fieldwork on a procurement process review. I have access to policy documents, a process flowchart, prior audit reports, and a sample of purchase orders. What questions should I be asking you to get the most value out of this engagement?”

The model will give you a list of prompts. Some will be obvious. Some will surprise you. Some will make you think about your own process in a way you haven’t before.

This isn’t cheating. This is using the tool to learn the tool. And it works across every profession — auditors, lawyers, consultants, analysts. The model knows what it’s good at. Let it tell you.

Step Four: Explore the Boundaries

One hour a day. That’s the commitment. But make it an intense hour.

Don’t just try one thing and move on. Push. If the first output is decent, ask yourself: can it be better? Can I give it more context? Can I ask it to critique its own work? Can I paste in a previous deliverable and ask it to match the format and tone?

Try reasoning mode for complex problems. Try the fast model for quick tasks. Upload a document and ask it to analyze it. Give it a spreadsheet and ask it to find anomalies. Paste in a framework and ask it to apply it to your specific situation.

The professionals who dismiss AI as “not good enough” are almost always the ones who tested it once, gently, with a vague prompt, and walked away. The ones who are building real fluency are the ones who spent hours pushing the tools to their limits — and their own assumptions along with them.

After a few days, you’ll start to develop instincts. This type of task works well with Claude. That one is better in ChatGPT. Copilot is useful when you’re already in a document. Gemini shines when you need to pull from your email or calendar. You stop asking “which AI is best” and start asking “which AI is best for this.”

That’s the toolbox mindset. And it only comes from using every tool in the box.

Step Five: Assess What Happened

At the end of the week, step back and ask three questions:

Did it save me time? Not “was it perfect.” Did the AI-assisted version take less time than doing it entirely by hand? Even if you edited heavily — was the starting point better than a blank page?

Did it improve quality? Did it catch something you might have missed? Did the structured output force a more complete analysis? Was the draft more consistent than what you’d typically produce at 4 PM on a Friday?

What would I do differently? What prompt worked? What context was missing? Which tool surprised you? What should you prepare before the next attempt?

Write it down. Not for anyone else. For you. This is the beginning of your personal playbook.

The Compound Effect

Most professionals are at zero. They’ve either never tried, or they tried once with a lazy prompt, got a mediocre result, and walked away convinced AI is overhyped.

One week of genuine effort puts you ahead of nearly all of them.

And then you do it again. Different problem. Same intensity. Each week, your prompts get sharper. Your instinct for which tool to reach for gets stronger. You start seeing your own workflow differently — this part is pattern matching that AI handles well, that part requires my judgment, this other part I didn’t even realize was automatable until I tried.

After a month, you’ve attacked four real work problems. After a quarter, twelve. After six months, you have a personal methodology that no certification can teach you, because it’s built on your specific work, your specific domain, your specific sense of what good looks like.

That’s the compound effect. And it’s the same effect the big firms are chasing with their multiyear contracts and thousand-person practice groups. You’re doing it with a subscription and one hour a day.

My Own Trial and Error

I want to be honest about how I got here.

The orchestrator I built didn’t start with a plan. It started with frustration. I was staring at a stack of background documents for an upcoming audit and thought: there has to be a faster way to synthesize this.

I opened Claude. Pasted the documents. Wrote a bad prompt. Got a bad result.

Then I wrote a better prompt. Got a better result. Then I asked it to structure the output differently. Then I gave it the audit methodology as context. Then I asked it to identify risks against a framework.

Each iteration taught me something. Not about AI — about my own process. I started seeing which parts of my work were pattern matching and which parts required genuine judgment. I started understanding what context the AI needed to produce useful output, which forced me to articulate things I’d been doing on autopilot for years.

The orchestrator didn’t emerge from a strategy session. It emerged from a hundred small experiments, most of which failed, all of which taught me something.

That’s the method. There is no shortcut. The only way to understand what AI can do for your work is to try it on your work.

Why This Week

McKinsey didn’t sign a multiyear deal with OpenAI because they think AI might be useful someday. Salesforce didn’t process 11 trillion tokens because customers are experimenting. These are production decisions by organizations that have done the math.

The math is coming for every profession. Not as a single dramatic moment — as a steady compression of what the old way of working is worth.

You can’t control the market. You can’t slow down the technology. But you can control what you do this week. One problem. Every tool you can get your hands on. One hour a day of going hard. That’s the starting point.

Not because one week will transform your career. But because one week breaks the inertia. And inertia, right now, is the one expense you can’t afford.

The Series So Far

Five weeks ago, I had my Oh Fuck moment — the personal realization.

Four weeks ago, I showed you the orchestrator — and connected it to the market’s $285 billion reckoning.

Three weeks ago, I argued that your auditee doesn’t need you anymore — the knowledge advantage is collapsing.

Two weeks ago, I showed that the fees are falling — and told you to start training.

This week: here’s how. One problem. Every tool. One hour a day. Go hard.


If you tried this and want to share what worked — or what didn’t — I want to hear from you. Get in touch.

Sources and further reading: