agent entity frames
quick read
agent definitions often talk past each other because they quietly assume different kinds of entities. this dimension groups them by the substrate they privilege:
- human-centered: agency is about people’s intentions, duties, or moral responsibility.
- institutional/hybrid: agency belongs to socio-technical collectives—humans plus rules, contracts, or organizations.
- machine-centered: agency is assigned to computational systems that sense, plan, and act.
[diagram] Entity frames behind agent definitions: three clusters of definitions based on the presumed agent entity.
comparison table
| frame | entity focus | sample definitions | hallmark questions |
|---|---|---|---|
| human-centered | individual humans and their cognition or morality | Anscombe (1957), Milgram (1974), Bandura (1989) | What intentions animate the person? How is responsibility assigned? |
| institutional / hybrid | humans embedded in contracts, firms, or socio-technical systems | Restatement (1958), Jensen & Meckling (1976), Epstein & Axtell (1996) | What structure channels decisions? How are incentives and authority shared? |
| machine-centered | autonomous software or machines managing perception–action loops | Russell & Norvig (1995), Franklin & Graesser (1996), OpenAI (2025) | How does the system sense, plan, and act? What loop keeps it going? |
notable examples
human-centered anchors
- Anscombe (1957, philosophy): explores intentional action as practical knowledge held by a person; agency is inseparable from human mind and moral evaluation.
- Milgram (1974, psychology): the “agentic state” captures how people respond to authority; the entity is still the human subject, even when they disclaim responsibility.
- Bandura (1989, psychology): human agency expresses self-regulation and forethought; emphasizes internal cognitive machinery.
institutional / hybrid anchors
- Restatement (Second) of Agency (1958, law): defines agency as a fiduciary relationship; the operative entity is the principal–agent pair bound by legal duties.
- Jensen & Meckling (1976, economics): formalizes principal–agent contracts in which incentives steer agents; the effective actor is “manager-with-contract” (see the incentive sketch after this list).
- Epstein & Axtell (1996, complex systems): models agents as rule-following individuals whose behavior emerges from local interactions; the entity is a simulated population of many bounded actors (see the grid sketch after this list).
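
to make “incentives steer agents” concrete, here is a deliberately toy sketch of the Jensen & Meckling intuition, not their model: the linear production function, the quadratic effort cost, the effort grid, and every name in it are illustrative assumptions. the agent privately picks effort to maximize pay minus cost, so the principal’s choice of output share steers behavior.

```python
# toy principal–agent model (illustrative, not Jensen & Meckling's formalism):
# the principal pays the agent a fixed share of output; the agent privately
# chooses effort to maximize pay minus a convex effort cost.

def output(effort: float) -> float:
    return 10.0 * effort  # assumed linear production technology

def agent_utility(effort: float, share: float) -> float:
    return share * output(effort) - effort ** 2  # pay minus quadratic cost

def best_effort(share: float) -> float:
    # grid search over effort in [0, 5]; the first-order condition
    # 10*share - 2*effort = 0 gives the analytic optimum effort = 5*share
    return max((step * 0.05 for step in range(101)),
               key=lambda e: agent_utility(e, share))

for share in (0.1, 0.5, 0.9):
    e = best_effort(share)
    print(f"share={share}: effort={e:.2f}, output={output(e):.1f}")
# a larger share elicits more effort; the shortfall from full effort is the
# residual loss that agency-cost arguments worry about
```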
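
and to make the rule-following framing concrete, a minimal grid sketch in the spirit of Epstein & Axtell’s movement rule, assuming a toy wrap-around grid, a fixed vision radius, and a greedy “move to the richest visible cell” rule; the class and parameter names are illustrative, not the book’s code.

```python
import random

GRID_SIZE = 10  # toy sugarscape-like world: each cell holds some "sugar"
grid = [[random.randint(0, 4) for _ in range(GRID_SIZE)] for _ in range(GRID_SIZE)]

class SugarAgent:
    """a bounded, rule-following actor: it only sees cells within its vision."""

    def __init__(self, x: int, y: int, vision: int = 2, metabolism: int = 1):
        self.x, self.y = x, y
        self.vision = vision          # how far the agent sees along each axis
        self.metabolism = metabolism  # sugar burned per step
        self.wealth = 0

    def visible_cells(self):
        # the current cell plus cells along the four axes, wrapping at the edges
        cells = [(self.x, self.y)]
        for d in range(1, self.vision + 1):
            cells += [((self.x + d) % GRID_SIZE, self.y),
                      ((self.x - d) % GRID_SIZE, self.y),
                      (self.x, (self.y + d) % GRID_SIZE),
                      (self.x, (self.y - d) % GRID_SIZE)]
        return cells

    def step(self):
        # local rule: move to the visible cell with the most sugar, harvest it
        self.x, self.y = max(self.visible_cells(), key=lambda c: grid[c[0]][c[1]])
        self.wealth += grid[self.x][self.y] - self.metabolism
        grid[self.x][self.y] = 0

agents = [SugarAgent(random.randrange(GRID_SIZE), random.randrange(GRID_SIZE))
          for _ in range(5)]
for _ in range(10):
    for agent in agents:
        agent.step()
print([a.wealth for a in agents])  # unequal outcomes emerge from local rules
```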
machine-centered anchors
- Russell & Norvig (1995, AI): any system perceiving through sensors and acting via actuators counts; agency lives in the machine’s perception–action loop (see the loop sketch after this list).
- Franklin & Graesser (1996, AI): insists on autonomous agents that pursue agendas over time, making the entity a persistent computational system.
- OpenAI agents & ChatGPT agent (2025, AI): modern LLM runtimes that plan, execute, and hand back results; the entity is explicitly software coordinating tools.
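
the machine-centered frame can likewise be reduced to a bare perception–action loop. the sketch below follows the textbook idea of an agent function (percept history in, action out) against a hypothetical thermostat-style Environment; none of these names come from any real framework’s API.

```python
from typing import Callable, List

class Environment:
    """hypothetical stand-in: a thermostat-style world with one sensed value."""

    def __init__(self, temperature: float = 15.0):
        self.temperature = temperature

    def percept(self) -> float:
        return self.temperature

    def apply(self, action: str) -> None:
        # crude dynamics: heating warms the room, idling lets it cool
        self.temperature += 1.0 if action == "heat" else -0.5

def thermostat_policy(history: List[float]) -> str:
    # the "agent function": map the percept history to an action
    return "heat" if history[-1] < 20.0 else "idle"

def run_agent(env: Environment,
              policy: Callable[[List[float]], str],
              steps: int = 10) -> None:
    history: List[float] = []
    for _ in range(steps):
        history.append(env.percept())  # sense
        action = policy(history)       # decide
        env.apply(action)              # act
        print(f"sensed {history[-1]:.1f}, did {action}")

run_agent(Environment(), thermostat_policy)
```

everything the definition needs is in the loop: sensors (percept), a policy, and actuators (apply); the entity is whatever closes that loop.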
self-critique
- blurry boundaries: Bandura’s humans and Jensen & Meckling’s managers both rely on cognition plus structure; some entries could live in two columns.
- context bias: the machine-centered samples emphasize LLM ecosystems from 2025, which may underplay robotics or biological autonomy perspectives.
- ordinal drift: arranging the frames left-to-right hints at developmental progress, but history swings between them rather than advancing in one direction.
questions for you
- does a three-bucket layout (human → institutional/hybrid → machine) clarify or oversimplify how disciplines talk about agency?
- which definitions in your dataset feel like genuine hybrids that deserve a fourth category?
- how might legal or policy debates change if we insisted on naming the entity frame before defining “agentic ai”?
- where should emergent multi-agent collectives—e.g., swarms of LLMs plus humans—sit on this dimension?