agent goal dynamics
quick read
some definitions treat agents as executors of fixed goals, while others assume agents debate, renegotiate, or even generate objectives. this dimension tracks that spectrum (sketched in code after the list):
- goal acceptance: the agent receives a mandate and optimizes for it.
- goal adaptation: the agent can reinterpret or refine a mandate to fit context.
- goal negotiation: the agent (or multi-agent group) bargains over objectives or shared payoffs.
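the three stances can be made concrete as an interface contract. a minimal sketch, assuming nothing beyond the taxonomy above; every class and method name here is hypothetical:

```python
from typing import Protocol


class Goal:
    """Placeholder objective: in practice a reward function, a legal
    mandate, or a utility specification."""
    def __init__(self, description: str) -> None:
        self.description = description


class AcceptingAgent(Protocol):
    def execute(self, goal: Goal) -> None:
        """Optimize the goal exactly as handed down; success equals compliance."""


class AdaptingAgent(Protocol):
    def reinterpret(self, goal: Goal, context: dict) -> Goal:
        """Refine or reprioritize the mandate to fit context, within constraints."""


class NegotiatingAgent(Protocol):
    def propose(self, counterparty_goal: Goal) -> Goal:
        """Answer a peer's objective with a counter-offer; goals emerge from dialogue."""
```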
[diagram] Goal dynamics across agent definitions: representative definitions mapped from fixed goal acceptance to negotiation.
comparison table
| goal stance | defining move | example definitions | key signals |
|---|---|---|---|
| goal acceptance | agent follows externally supplied objective without debate | Restatement (Second) of Agency (1958), Sutton & Barto RL agent (1998), OpenAI agents (2025) | Mandate stated up front; success equals compliance or reward maximization. |
| goal adaptation | agent refines, reprioritizes, or balances goals within constraints | Pattie Maes software agents (1994), Russell & Norvig AI agent (1995), Anthropic Claude tool use (2025) | Agent interprets intent, chooses tactics, and may trade off sub-goals autonomously. |
| goal negotiation | agent co-determines objectives with others, often through communication | Weiss multi-agent systems (1999), Jennings/Sycara/Wooldridge AOSE (2001), Microsoft AutoGen multi-agent apps (2025) | Goals emerge from dialogue, contracts, or multi-party optimization. |
notable examples
goal acceptance
- Restatement (Second) of Agency (1958, law): the agent must act in accordance with the principal’s objectives; departing from them breaches the agent’s duty of obedience.
- Sutton & Barto reinforcement learning agent (1998, AI): the reward function is fixed; the agent’s job is to maximize expected return (see the sketch after this list).
- OpenAI “New tools for building agents” (2025, AI): systems “independently accomplish tasks on behalf of users,” but the user goal remains the north star.
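to show what pure goal acceptance looks like in the Sutton & Barto framing, here is a minimal bandit sketch: the reward signal is given and never questioned, and the agent’s whole job is to estimate and maximize it. the arm payouts are hypothetical:

```python
import random

# Fixed reward function: a 3-armed bandit whose payouts the agent never questions.
TRUE_MEANS = [0.2, 0.5, 0.8]  # hypothetical; unknown to the agent

def pull(arm: int) -> float:
    """The environment hands back a reward; the agent accepts it as the objective."""
    return random.gauss(TRUE_MEANS[arm], 0.1)

# Incremental sample-average action values (Sutton & Barto, ch. 2 style).
q = [0.0, 0.0, 0.0]
n = [0, 0, 0]
for t in range(1000):
    # epsilon-greedy: mostly exploit the current estimate of the fixed goal
    arm = random.randrange(3) if random.random() < 0.1 else max(range(3), key=q.__getitem__)
    r = pull(arm)
    n[arm] += 1
    q[arm] += (r - q[arm]) / n[arm]  # Q_{n+1} = Q_n + (R_n - Q_n) / n

print(max(range(3), key=q.__getitem__))  # converges toward arm 2
```

the point of the sketch is that nothing in the loop can revise TRUE_MEANS; the objective sits entirely outside the agent’s reach.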
goal adaptation
- Pattie Maes software agents (1994, HCI): assistants learn user preferences, reprioritize information, and decide when to intervene.
- Russell & Norvig AI agent (1995, AI): the agent maximizes a performance measure, trading off actions based on context.
- Anthropic Claude tool use (2025, AI): Claude interprets instructions, breaks them down, and decides which tools serve the user’s intent.
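a minimal sketch of goal adaptation, not Anthropic’s actual tool-use API: the agent reinterprets a loose mandate into sub-goals and picks a tool for each. the tool registry, the keyword routing, and all names are hypothetical stand-ins for what an LLM would decide:

```python
from typing import Callable

# Hypothetical tool registry; real LLM agents select tools via model output,
# not keyword rules.
TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda q: f"results for {q!r}",
    "calculator": lambda expr: str(eval(expr)),  # demo only; never eval untrusted input
}

def adapt_and_act(mandate: str) -> str:
    """Reinterpret a loose mandate into sub-goals and pick a tool for each."""
    subgoals = [s.strip() for s in mandate.split("then")]  # crude intent parsing
    trace = []
    for sub in subgoals:
        tool = "calculator" if any(c.isdigit() for c in sub) else "search"
        trace.append(TOOLS[tool](sub))
    return " | ".join(trace)

print(adapt_and_act("find the GDP of France then 2+2"))
```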
goal negotiation
- Weiss multi-agent systems (1999, AI): agents coordinate, cooperate, and negotiate to reach joint solutions, especially when goals conflict.
- Jennings, Sycara, Wooldridge AOSE roadmap (2001, AI): agents are social entities that negotiate commitments and allocate tasks.
- Microsoft AutoGen multi-agent applications (2025, AI): agents message each other to propose plans, critique, and converge on solutions collaboratively.
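a minimal sketch of goal negotiation, not AutoGen’s messaging API: two agents with different ideal values for a shared objective concede toward each other until their offers meet. the utilities and step size are hypothetical:

```python
# Two agents bargain over a single shared objective (say, a deadline in days).
# Concession-style protocol: each round, both sides move their standing offer
# one step toward the other until the offers converge.

def negotiate(ideal_a: float, ideal_b: float, step: float = 1.0) -> float:
    offer_a, offer_b = ideal_a, ideal_b
    while abs(offer_a - offer_b) > step:
        offer_a += step if offer_a < offer_b else -step
        offer_b += step if offer_b < offer_a else -step
    return (offer_a + offer_b) / 2  # the goal both sides now accept

print(negotiate(ideal_a=5.0, ideal_b=15.0))  # converges near 10.0
```

unlike the acceptance and adaptation sketches, the final objective here exists in neither agent’s initial mandate; it is produced by the exchange itself.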
self-critique
- reward vs. mandate ambiguity: reinforcement learning agents “accept” a fixed reward, but reward shaping blurs acceptance into negotiation once humans iteratively adjust the signal.
- communication proxies: documentation for modern LLM agents often highlights tool use more than explicit goal bargaining; evidence of negotiation may be indirect.
- missing non-cooperative frames: game-theoretic agents that strategically misreport or defect could complicate the tidy acceptance/adaptation/negotiation buckets.
questions for you
- should we split “goal adaptation” into reactive reframing versus proactive goal generation?
- how many of the machine-centered definitions actually support negotiation today, versus marketing language that gestures at collaboration?
- would a legal or policy lens benefit from distinguishing between agents that may renegotiate objectives and those that must obey?
- which dataset entries demonstrate multi-agent goal conflict (e.g., contract law, political science) that we have not highlighted yet?