
A few weeks ago, the market wiped $285 billion off software stocks because Anthropic released a folder of prompts that could do legal work.
Thomson Reuters, down 18%. Wolters Kluwer, down 13%. LegalZoom, nearly 20%.
These weren’t random casualties. They were middlemen.
Thomson Reuters monetized access to legal knowledge. Wolters Kluwer monetized access to regulatory knowledge. They organized information, packaged it into tools, and charged professionals for access.
Then knowledge became abundant. And the market asked the obvious question: if the end user can go straight to the source, why pay the middleman?
That question isn’t just about legal tech. It’s about every profession built on knowing things that used to be hard to know.
Including mine.
The Abundance Shift
Matt Shumer, an AI researcher and entrepreneur, recently published a piece called “Something Big Is Happening”—a note to his non-tech friends and family about what AI is starting to change.
One data point he shared stopped me: there’s an organization called METR that tracks the length of real-world tasks that AI models can complete end-to-end without human help. A year ago, the answer was about ten minutes. Then an hour. Then several hours. The most recent measurement: tasks that take a human expert nearly five hours.
That number is doubling roughly every seven months. And it may be accelerating.
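To make that doubling concrete, here is a back-of-the-envelope projection. The five-hour starting point and the seven-month doubling time are taken from the METR figures cited above; the constant-growth assumption and the function itself are illustrative, not METR's methodology.

```python
# Illustrative projection of AI task-horizon growth, assuming
# ~5-hour tasks today and a constant doubling time of 7 months
# (figures as summarized in the article; the extrapolation is mine).

def task_horizon_hours(months_from_now, start_hours=5.0, doubling_months=7.0):
    """Task length (in human-expert hours) that AI can complete
    end-to-end, projected forward under a constant doubling time."""
    return start_hours * 2 ** (months_from_now / doubling_months)

for months in (0, 7, 14, 21, 28):
    print(f"in {months:2d} months: ~{task_horizon_hours(months):.0f} hours")
# → 5, 10, 20, 40, 80 hours
```

Under that simple assumption, a five-hour task horizon becomes a two-week one (80 hours) in just over two years. Whether the curve actually holds is an open question, which is exactly why the "it may be accelerating" caveat matters.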
Shumer made a point that I haven’t been able to shake: AI isn’t replacing one specific skill. It’s a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics. But AI doesn’t leave a convenient gap to move into—whatever you retrain for, it’s improving at that too.
We’re not facing a skills gap. We’re facing an abundance of capability that makes the entire concept of “I know something you don’t” increasingly worthless as a value proposition.
The 80/20 Reckoning
I keep coming back to this ratio in my own profession.
As an internal auditor, roughly 80% of what I do is knowledge work. Reading policies. Mapping processes. Testing conditions against criteria. Documenting findings. Applying frameworks like COSO and IIA standards. This is the work that requires training, sure—but it’s fundamentally pattern matching. Compare what is against what should be. Document the gap.
The other 20% is what I’d call intelligence work. Judgment under ambiguity. Reading the room during an interview and knowing someone isn’t telling you the full story. Sensing that a control environment looks fine on paper but something is off. Asking the question that makes a CFO uncomfortable. Seeing the risk that isn’t in the data yet.
For thirteen years, I was paid for the full 100%. But the value was always concentrated in that 20%. The 80% was just the vehicle that got me there—the hours of documentation and testing that earned me the credibility and context to exercise judgment.
Now the vehicle is being automated. And the question is: does the 20% justify the role on its own?
I think it does. But only if we’re honest about what’s changing.
Your Auditee Is About to Become Their Own Auditor
Here’s the part most auditors aren’t ready to hear.
Your auditees—the departments you audit, the management teams you report to, the business units whose controls you test—they’re about to gain the same capabilities you have. Not in five years. Soon.
They will prompt their own tools. They’ll build their own agents hooked up to their own directories and data sources. They’ll have all the information at hand that you currently spend weeks requesting, chasing, and scavenging. They’ll run analysis—quick and dirty for their daily needs, or deep and structured when the stakes are high.
And they’ll get insights. Good ones. The kind that used to come only from an audit engagement.
I’m not speculating. I built an AI orchestrator that walks through the entire audit methodology in a fraction of the time. If I can do that as an auditor with zero coding experience, what happens when a COO with a budget and a technical team decides they want continuous visibility into their own control environment?
They won’t need to wait for the annual audit. They won’t need to request a special engagement. They won’t need us to tell them what they can increasingly see for themselves.
The need for external validation—for external assurance within the organization—will diminish. Not disappear. But diminish. The auditee becomes their own first line of assurance.
The Cocky Bunch
If we stay stuck in the old reality—where we believe we are the only ones who can provide assurance—we become a cocky, irrelevant bunch.
I’ve seen it. The audit teams that carry themselves as if their reports are the final word. The committees that treat internal audit findings as the only credible source of insight on control effectiveness. The professional bodies that define assurance in ways that assume information asymmetry will last forever.
That asymmetry is collapsing. The auditor’s traditional edge—“I see what you can’t see”—is eroding because the tools that let us see are now available to everyone.
If our identity is built on being the gatekeepers of knowledge, we’re in trouble. Because the gates are gone.
But if we remember what the profession is actually about—adding value to the organization, helping it operate within acceptable risk boundaries, strengthening governance—then the path forward is clear. It’s just different from the one we trained for.
From Knowledge Work to Intelligence Work
The value won’t come from the knowledge work anymore. It can’t. Not when your auditee has the same knowledge at their fingertips.
The value comes from the intelligence work:
Judgment under ambiguity. The AI flags an anomaly. Is it a control failure, an isolated exception, or a sign of something systemic? That determination requires context, experience, and the kind of professional skepticism that doesn’t fit in a prompt.
Organizational sense-making. The numbers say one thing. The culture says another. The politics say something else entirely. Navigating these layers—understanding why people circumvent processes, why certain risks get ignored, why the tone at the top doesn’t match the behavior in the middle—that’s intelligence work.
Question design. The most valuable thing an auditor does isn’t answering questions. It’s asking them. The right question, asked to the right person, at the right time, can surface risks that no amount of data analysis would reveal.
Forward-looking risk detection. Traditional audit is backward-looking: did we comply? Intelligence work is forward-looking: what are we not seeing? Where will the next failure come from? What assumptions are we making that won’t hold?
This is the 20% that justifies the profession. And when the 80% gets automated, it becomes 100% of what you're paid for.

The Auditor as Translator
There’s another role emerging that I think becomes central to what we do: translation.
As organizations adopt AI—not just for audit but across every function—someone needs to be able to explain what’s happening inside the machine. Not the engineering. The methodology.
Think of it as three layers:
Input translation. What data went into the model? What assumptions were made? What context was provided and what was missing? The auditor understands what should be fed in and can identify what’s been left out—because we know the business, the risks, and the control environment.
Process translation. What happened inside the black box? How did the AI orchestrate the analysis? What logic did it follow? A regulator won’t know. Management might not know. But someone needs to be able to explain and validate the methodology—not the code, but the reasoning.
Output translation. What came out, and what does it mean? An AI system produces results. But results without interpretation are just data. The auditor translates outputs into business language, into risk implications, into actionable recommendations. And critically, the auditor can validate whether the outputs make sense given what went in.
This translation role becomes more important as AI adoption increases, not less. Every AI-driven process in an organization needs someone who can explain the full chain: what went in, what happened, what came out, and whether the organization can demonstrate it’s in control.
A regulator will not always know what was processed in the black box. Management may not always understand the methodology their AI tools are following. But they need to demonstrate they’re effectively managing their risks.
That’s the auditor’s future. Not checking boxes. Translating between the human world and the machine world. Demonstrating that the organization is in control—even when the systems doing the work are autonomous.
Where This Leads
I want to be careful here. I’m not saying audit disappears. I’m saying it transforms so fundamentally that the people doing it in 2030 may barely recognize the profession we practice today.
The assurance model changes. Traditional assurance was built on scarcity—the auditor sees what others can’t. When that scarcity collapses, assurance doesn’t vanish. It shifts. From “I verified your controls” to “I can explain why your AI-driven processes are trustworthy.” From backward-looking compliance to forward-looking resilience.
The staffing model changes. You don’t need twenty auditors executing test procedures when AI agents can run them continuously. You need a smaller number of senior professionals who can design the systems, interpret the outputs, and exercise judgment where the machines hit their limits.
The skill model changes. The certifications that test your ability to recall COSO components become less relevant. The ability to design AI-assisted audit workflows, interpret ambiguous findings, and communicate risk to non-technical stakeholders becomes essential.
All of this is already in motion. The SaaSpocalypse was the market pricing it in for software and legal tech. The same repricing is coming for professional services. The question is whether you’re ahead of it or behind it.
The Series So Far
Three weeks ago, I had my Oh Fuck moment—the realization that I could build something that automates a significant portion of my own job.
Two weeks ago, I showed you what it does—and connected it to the $285 billion market reckoning that validated the same conclusion.
This week, I’m telling you where I think this leads: the middleman gets squeezed, the 80% gets automated, and the profession reconcentrates around intelligence work and translation.
I don’t have all the answers. But I’m not waiting for permission to figure them out.
If you’re a knowledge worker watching the same shift unfold—auditor or otherwise—I want to hear from you. Get in touch.
Sources and further reading:
- Matt Shumer: “Something Big Is Happening”
- What the AI Audit Orchestrator Actually Does — the SaaSpocalypse and what it means
- Surviving the Oh-Fuck Moment — where this series started