agent autonomy spectrum
quick read
agent definitions across law, psychology, economics, hci, and ai diverge most sharply in how much freedom the agent has to choose its own goals and tactics. this page isolates that single autonomy axis: the left end holds delegated proxies, while the right end holds self-directed, tool-using systems that manage their own loops.
think about three plain-english checks when placing a definition:
- who sets the goal?
- who decides the next step?
- who carries the responsibility if things go wrong?
Selected definitions arranged from delegated proxies to self-directed systems.
quick reference table
| axis position | definition (discipline, year) | autonomy highlight | 
|---|---|---|
| delegated proxy | Restatement (Second) of Agency (law, 1958) | Agent must follow principal instructions; goals and liability stay with the principal. | 
| instrumental obedience | Milgram agentic state (psychology, 1974) | Individual suppresses personal goals and accountability, acting purely as an authority’s instrument. | 
| contract-bound delegate | Jensen & Meckling principal–agent model (economics, 1976) | Agent chooses tactics within incentive contracts but is evaluated solely on the principal’s payoff. | 
| self-regulating actor | Bandura human agency (psychology, 1989) | Person sets sub-goals, monitors progress, and self-adjusts inside social structures. | 
| adaptive assistant | Pattie Maes software agent (HCI, 1994) | Program learns preferences and initiates actions; user provides high-level intent only. | 
| perception–action planner | Russell & Norvig AI agent (AI, 1995) | Agent senses environment and selects actions via a performance-driven policy instead of waiting for commands. | 
| agenda-setting system | Franklin & Graesser autonomous agent (AI, 1996) | Agent maintains its own agenda over time and acts to influence future perceptions. | 
| learning loop owner | Sutton & Barto reinforcement-learning agent (AI, 1998) | Agent experiments and updates its policy from rewards without human step-by-step guidance. | 
| tool orchestrator | Anthropic Claude tool use (AI, 2025) | LLM chooses when and how to invoke tools, integrating outputs into its own reasoning loop. | 
| independent task finisher | OpenAI agents & ChatGPT agent release (AI, 2025) | Runtime plans, sequences, and completes multi-step tasks before reporting back to the user. | 
reading the axis
the axis treats autonomy as the compound ability to set one's own agenda, select tactics, and own accountability. a definition slides rightward as the agent makes more of those calls without waiting for moment-to-moment direction; the toy rubric after this checklist makes the scoring concrete.
- goal ownership – if another party decides what success looks like, the agent sits toward the delegated end.
- action discretion – agents that monitor context and choose their own next actions move toward the center.
- feedback responsibility – when the agent evaluates outcomes and adapts future steps on its own, it reaches the self-directed extreme.
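the sketch below turns the three checks into a toy rubric. the scores, thresholds, and labels are illustrative assumptions, not part of the source dataset:

```python
from dataclasses import dataclass

@dataclass
class AutonomyScore:
    """score each check 0 (another party decides) to 2 (the agent decides)."""
    goal_ownership: int       # who sets the goal?
    action_discretion: int    # who decides the next step?
    feedback_ownership: int   # who evaluates outcomes and adapts?

    def axis_position(self) -> str:
        total = self.goal_ownership + self.action_discretion + self.feedback_ownership
        if total <= 1:
            return "delegated proxy"
        if total <= 4:
            return "self-regulating middle"
        return "self-directed system"

# milgram's agentic state: authority owns every call
print(AutonomyScore(0, 0, 0).axis_position())   # delegated proxy
# sutton & barto's rl agent: reward is external, tactics and adaptation are internal
print(AutonomyScore(1, 2, 2).axis_position())   # self-directed system
```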
notable examples from the dataset
the entries below are drawn from the chronological catalogue in agents-definitions.mdx, reorganized solely by autonomy. each summary keeps the original discipline and publication context.
1958 — restatement (second) of agency (law)
- autonomy level: low. the american law institute defines an agent as one who consents to act on a principal’s behalf and subject to the principal’s control, with decision authority delegated and revocable.
- why it matters: the principal sets objectives, the agent’s judgment is bounded by fiduciary duties, and responsibility flows back to the principal—classic delegated proxy behavior.
1974 — milgram’s agentic state (psychology)
- autonomy level: very low. milgram’s obedience research describes people entering an “agentic state” where they view themselves as instruments of authority.
- why it matters: the agent suppresses personal judgment, relies on external commands, and disclaims responsibility, anchoring the left edge of the spectrum.
1989 — bandura’s human agency (psychology)
- autonomy level: moderate. bandura highlights intentionality, forethought, self-reactiveness, and self-reflectiveness as core human capabilities.
- why it matters: individuals plan, monitor, and adjust their behavior even within social constraints, pushing agency into self-regulating territory.
1994 — pattie maes’ software agents (hci)
- autonomy level: moderate-to-high. maes envisions software that learns user preferences, initiates actions, and reduces information overload on the user’s behalf.
- why it matters: goals are user-driven, yet the agent decides when to intervene, what to prioritize, and how to execute tasks—clear discretionary control.
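maes’ learning interface agents graded that discretion with confidence thresholds: suggest an action above a “tell-me” level, act alone above a “do-it” level. a minimal sketch of that idea, with invented predictions, scores, and threshold values:

```python
# toy assistant in the spirit of maes' interface agents; the two-threshold
# scheme mirrors her "tell-me"/"do-it" idea, everything else is invented.
TELL_ME = 0.5   # confident enough to suggest an action to the user
DO_IT = 0.9     # confident enough to act without asking

def handle(prediction: str, confidence: float) -> str:
    """decide how assertive the agent should be about one predicted action."""
    if confidence >= DO_IT:
        return f"autonomously doing: {prediction}"
    if confidence >= TELL_ME:
        return f"suggesting to user: {prediction}"
    return "observing silently and learning"

print(handle("archive this mailing-list thread", 0.95))  # acts on its own
print(handle("schedule the meeting at 2pm", 0.70))       # asks first
```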
1995 — russell & norvig’s ai agent (ai)
- autonomy level: high. the first-edition aima textbook frames an agent as anything that can be viewed as perceiving its environment through sensors and acting on that environment through effectors.
- why it matters: success is judged by a performance measure, and the agent’s policy—not external instructions—chooses the next move, cementing autonomy as default.
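a minimal sketch of that perceive–act loop, using the book’s two-square vacuum world as the environment; the scoring and step count here are simplified, not aima code:

```python
class VacuumWorld:
    """two-square vacuum world, the running example in aima chapter 2."""
    def __init__(self) -> None:
        self.dirty = {"A": True, "B": True}
        self.location = "A"

    def sense(self) -> tuple[str, bool]:
        return self.location, self.dirty[self.location]   # sensors

    def act(self, action: str) -> int:
        if action == "suck" and self.dirty[self.location]:
            self.dirty[self.location] = False
            return 1                                      # performance measure rewards cleaning
        if action in ("A", "B"):
            self.location = action                        # effectors move the agent
        return 0

def reflex_policy(percept: tuple[str, bool]) -> str:
    """simple reflex agent: suck if the square is dirty, else move to the other one."""
    location, dirty = percept
    return "suck" if dirty else ("B" if location == "A" else "A")

world, score = VacuumWorld(), 0
for _ in range(4):                                        # the perceive-act loop
    score += world.act(reflex_policy(world.sense()))
print(score)  # 2: both squares cleaned
```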
1996 — franklin & graesser’s autonomous agent (ai)
- autonomy level: very high. franklin and graesser require agents to pursue their own agenda over time, sensing and acting so as to affect future perceptions.
- why it matters: that definition centers self-maintained goals and ongoing feedback loops, hallmarks of rightmost self-directed systems.
1998 — sutton & barto’s reinforcement learning agent (ai)
- autonomy level: high. the rl framework depicts an agent interacting with an environment to maximize cumulative reward via trial, error, and adaptation.
- why it matters: reward functions supply direction, but the agent selects actions, updates strategies, and internalizes feedback, exhibiting sustained autonomy.
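tabular q-learning, one of the book’s core algorithms, shows that loop ownership concretely: the reward signal is fixed externally, but every action choice and policy update below is the agent’s own. the chain environment and hyperparameters are illustrative:

```python
import random

# toy 5-state chain: moving right reaches the goal state, which pays reward 1
N_STATES, ACTIONS = 5, (-1, 1)
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(500):                                     # episodes of trial and error
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: the agent picks its own actions, no human steps in
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # q-learning update: the policy improves from reward feedback alone
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])  # [1, 1, 1, 1]
```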
2025 — anthropic’s claude tool use (ai)
- autonomy level: very high. claude decides when to invoke tools, executes them, and incorporates results into iterative reasoning loops.
- why it matters: the system owns the perception-action cycle during a task session, coordinating tools without step-by-step human prompts.
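a minimal sketch of that loop with the anthropic python sdk; the model id and the weather tool are placeholders, and a real integration would dispatch each block’s input to an actual function:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
tools = [{
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "input_schema": {"type": "object",
                     "properties": {"city": {"type": "string"}},
                     "required": ["city"]},
}]
messages = [{"role": "user", "content": "What's the weather in Paris?"}]

while True:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",    # placeholder model id
        max_tokens=1024, tools=tools, messages=messages)
    if response.stop_reason != "tool_use":
        break                                # the model answered without needing a tool
    # the model chose to call a tool; feed the result back into its loop
    messages.append({"role": "assistant", "content": response.content})
    results = [{"type": "tool_result", "tool_use_id": block.id,
                "content": "14°C, overcast"}             # stub: call a real weather api here
               for block in response.content if block.type == "tool_use"]
    messages.append({"role": "user", "content": results})

print(response.content[0].text)
```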
2025 — openai’s “new tools for building agents” (ai)
- autonomy level: very high. openai characterizes agents as systems that independently accomplish tasks on behalf of users.
- why it matters: tool calling, planning, and execution are bundled into a self-managed workflow that only requires high-level objectives.
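openai’s companion agents sdk packages that self-managed workflow; a minimal sketch along the lines of its quickstart, with an illustrative agent name and instructions:

```python
from agents import Agent, Runner  # openai agents sdk: pip install openai-agents

# the runtime plans tool calls and turns on its own; we only state the objective
agent = Agent(
    name="research-assistant",
    instructions="Answer concisely and cite any tool output you rely on.",
)

result = Runner.run_sync(agent, "Summarize the principal-agent problem in two sentences.")
print(result.final_output)
```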
2025 — chatgpt agent release notes (ai)
- autonomy level: very high. the chatgpt agent mode handles multi-step online tasks, switching between reasoning and action while retaining oversight hooks for the user.
- why it matters: it exemplifies a self-directed loop that plans, executes, and hands back control, marking the far right of today’s spectrum.