Enterprises are about to redesign every knowledge workflow for autonomous agents. The people who will run those workflows don't need a CS degree. They need a specific set of skills that no university curriculum covers — yet.
“To get the most out of the tools that have become available now, you have to remove yourself as the bottleneck. You cannot be there to prompt the next thing. You need to take yourself outside the loop. You have to arrange things such that they are completely autonomous. The more you can maximize your token throughput and not be in the loop, the better.”
— Andrej Karpathy, co-founder of OpenAI and former Director of AI at Tesla
Karpathy is not describing a hypothetical future. He is describing how the most productive people in the industry already work. They are not in the loop. They are not prompting one message at a time and waiting for an answer. They write instructions once, deploy autonomous agents, and measure output in tokens-per-hour rather than responses-per-conversation.
The person who does this professionally — who configures, deploys, monitors, and improves autonomous AI agents inside real organisations — is the agent operator. It is the most important new role in enterprise technology. And almost no one is formally trained for it yet.
Most AI roles in enterprise today are either data scientist positions (build models) or software engineer positions (integrate APIs). The agent operator is neither. It is an operational role — closer to a process improvement manager or a systems administrator than to an engineer.
The agent operator does not write the model. They do not fine-tune it. They configure it, connect it to the organisation's tools and data, design the workflows it operates within, set the safety guardrails, and are accountable when it fails. They are, in the truest sense, the operator of a sophisticated autonomous system — the way a pilot operates an autopilot rather than designing the avionics.
This means the agent operator role is accessible to domain experts who are not software engineers. A marketing manager who understands the marketing process deeply is a better candidate for the marketing agent operator role than a software engineer who has never run a campaign. A paralegal who understands contract review is a better candidate for the legal agent operator role than a developer who has never read a contract. Domain knowledge is the competitive advantage. Technical fluency is a skill you can acquire.
None of the skills below requires a computer science degree. All of them can be learned in months, not years.
What it is: Model Context Protocol (MCP) servers are how agents connect to tools — databases, APIs, file systems, web browsers. An agent operator understands which MCP servers exist, when to use them, how to configure them safely, and what can go wrong.
Why it matters: Without MCPs, an agent is a chatbot. With them, it can query your CRM, draft a contract, book a flight, and file the expenses — autonomously.
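To make this concrete, here is a minimal sketch of the kind of client configuration many MCP-aware tools read. The server name, connection string, and credentials are placeholders, not a recommendation for any specific product — the point is that the operator, not an engineer, decides what the agent can reach and with which permissions:

```json
{
  "mcpServers": {
    "crm-database": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://readonly_user@crm-db:5432/crm"]
    }
  }
}
```

Note the read-only database user: scoping credentials like this is exactly the kind of safe-configuration decision the role demands.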
What it is: Command-line interfaces are where agent configuration lives. Not writing shell scripts from scratch — reading them, running them, and knowing what the output means. Enough to troubleshoot when an agent fails at 2am.
Why it matters: Most enterprise AI failures happen at the deployment layer, not the model layer. An operator who can read a terminal output can fix most of them without escalating to engineering.
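The habit being described — run the command, read the exit code, read the last few lines of stderr — can be sketched in a few lines of Python. The helper name and triage logic here are ours, for illustration only, not part of any particular agent CLI:

```python
import subprocess
import sys

def run_and_triage(cmd: list[str]) -> tuple[bool, str]:
    """Run a command and report what an operator needs first:
    did it succeed, and if not, what did it print to stderr?"""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode == 0:
        return True, result.stdout.strip()
    # Non-zero exit: the last stderr lines usually name the failure.
    tail = "\n".join(result.stderr.strip().splitlines()[-5:])
    return False, f"exit code {result.returncode}: {tail}"

# Example: interrogate the local Python interpreter instead of guessing.
ok, detail = run_and_triage([sys.executable, "--version"])
```

An operator who internalises this loop — exit code first, then the tail of stderr — can resolve most 2am deployment failures without paging engineering.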
What it is: Agents that work autonomously need written instructions — not prompts you type in a chat box, but persistent, structured files that describe how the agent should behave across every session. AGENTS.md, CLAUDE.md, and similar files are the operator's primary tool.
Why it matters: Verbal instructions disappear at the end of a conversation. Written instructions persist across sessions, systems, and operators. The ability to write them well is the difference between an agent that works once and one that works reliably.
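What does a persistent instruction file look like in practice? A sketch, using an invented invoice-triage workflow — the headings and rules are illustrative, not a prescribed schema:

```markdown
# Agent instructions: invoice triage

## Scope
- Process invoices arriving in the shared inbox; nothing else.

## Rules
- Match each invoice against an open purchase order before approving.
- Flag any invoice over EUR 10,000 for human review; never auto-approve it.
- Log every decision, including skipped items, to the audit sheet.

## When unsure
- Stop and leave a comment for the operator instead of guessing.
```

Every rule survives the session. A new operator, or a new agent, inherits the same behaviour without a single prompt being retyped.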
What it is: The agent operator sits at the intersection of AI capability and business process. They must understand what the business actually needs — not just what it asks for — and know when a proposed automation will create more risk than it saves.
Why it matters: Technical teams build what they're told to build. Business teams ask for what they think they want. The agent operator translates — and pushes back when the translation is wrong.
What it is: Before an agent can replace or augment a process, someone has to document that process precisely. What are the inputs? What decisions get made, and by whom, and on what basis? What are the exceptions? What happens when it goes wrong?
Why it matters: Agents automate the workflow you give them. If the workflow you give them is broken, the agent will execute the broken version faster and at scale. Garbage in, garbage out — at 10,000 instances per minute.
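One hypothetical but concrete way to force those questions into writing is to treat the process document as data and refuse to deploy until every field is filled in. The workflow, field names, and check below are our illustration, not a standard:

```python
# A documented workflow: every decision names who makes it and on what
# basis; every known exception names what happens next. Blank answers
# are exactly the gaps that become failures at scale.
WORKFLOW = {
    "name": "expense-approval",
    "inputs": ["receipt", "expense report", "policy document"],
    "decisions": [
        {"what": "approve or reject", "who": "finance agent", "basis": "policy section 4"},
        {"what": "escalate amounts over limit", "who": "", "basis": "policy section 7"},
    ],
    "exceptions": [
        {"case": "missing receipt", "handling": "return to submitter"},
        {"case": "duplicate claim", "handling": ""},
    ],
}

def undocumented_gaps(workflow: dict) -> list[str]:
    """List every decision or exception that is not fully documented."""
    gaps = []
    for d in workflow["decisions"]:
        if not d["who"] or not d["basis"]:
            gaps.append(f"decision '{d['what']}' lacks a documented owner or basis")
    for e in workflow["exceptions"]:
        if not e["handling"]:
            gaps.append(f"exception '{e['case']}' has no documented handling")
    return gaps
```

Run against the sample above, the check flags the unowned escalation decision and the unhandled duplicate-claim exception — the two places an agent would otherwise improvise, at scale.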
What it is: The agent operator needs to know what questions to ask before deployment. Does this agent have access to data it shouldn't? What happens if it hallucinates? Who reviews its outputs? What is the override procedure?
Why it matters: Governance failures in AI are almost never technical. They are operational. Someone gave an agent too much permission, or didn't build the review checkpoint, or didn't document the failure mode. That's operator territory.
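Those pre-deployment questions can be turned into a gate that blocks shipping until each one has an answer. A sketch under our own invented config fields — real governance checklists will be longer and organisation-specific:

```python
# A hypothetical pre-deployment gate: the agent's config must answer
# the governance questions before anyone is allowed to deploy it.
def governance_gate(config: dict) -> list[str]:
    """Return unanswered governance questions; an empty list means clear to deploy."""
    failures = []
    if config.get("data_access") == "all":
        failures.append("data access is unscoped; grant only what the workflow needs")
    if not config.get("output_reviewer"):
        failures.append("no named reviewer for the agent's outputs")
    if not config.get("override_procedure"):
        failures.append("no documented override procedure")
    if not config.get("failure_modes"):
        failures.append("no documented failure modes")
    return failures

risky = {"data_access": "all", "output_reviewer": "", "override_procedure": "", "failure_modes": []}
```

The risky config above fails all four checks. None of the fixes is technical: scoping access, naming a reviewer, writing the runbook — all operator work.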
The first wave of agent operator hiring is already happening, largely under other job titles. "AI Automation Specialist." "Prompt Engineer" (a title that will age as well as "Webmaster"). "AI Workflow Lead." The job description varies but the actual work is consistent: take a business function and make it run autonomously.
As enterprises formalize this discipline, agent operators will sit in dedicated AI Operations teams — analogous to DevOps teams, but for autonomous AI systems rather than software deployments. The demand for this role will be structural, not cyclical. Every enterprise function that involves repetitive knowledge work will need at least one.
There is no official agent operator curriculum yet. The people who are doing this work today built their skills by doing. Here is the structured path: