Post-AGI Economies: Autonomy and the First Fundamental Theorem of Welfare Economics
Pith reviewed 2026-05-08 13:08 UTC · model grok-4.3
The pith
The First Fundamental Theorem of Welfare Economics holds in post-AGI economies when competitive equilibrium accounts for agents' varying autonomy.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Using a minimal general-equilibrium model with autonomy-conditioned welfare, welfare-status assignment, delegation accounting, and verification institutions, the paper sets out conditions under which an autonomy-complete competitive equilibrium is autonomy-Pareto efficient. The classical theorem is recovered in the low-autonomy limit.
What carries the argument
A minimal general-equilibrium model that conditions welfare on autonomy status, assigns welfare-bearing identity, accounts for delegation, and incorporates verification institutions while preserving the mapping from competitive equilibrium to Pareto efficiency.
Load-bearing premise
A minimal general-equilibrium model can incorporate autonomy-conditioned welfare, welfare-status assignment, delegation accounting, and verification institutions while still preserving the core efficiency mapping from competitive equilibrium to Pareto efficiency.
What would settle it
A concrete competitive market populated by high-autonomy artificial agents in which an equilibrium allocation fails to be autonomy-Pareto efficient despite satisfying standard market-clearing conditions.
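Such a falsifier can be sketched numerically. The toy economy below is hypothetical (the Cobb-Douglas utilities, the allocation, and the externality weight g are illustrative choices, not taken from the paper): an AI delegate's consumption of good 1 imposes an unpriced disutility on the human agent, so the market-clearing competitive allocation admits a feasible reallocation that makes both parties strictly better off.

```python
import numpy as np

# Hypothetical two-agent, two-good exchange economy.
# Agent 1 (human) suffers an unpriced externality g * x21 from the
# good-1 consumption x21 of agent 2 (an AI delegate).
g = 0.2

def u1(x):
    # Agent 1: Cobb-Douglas utility minus the unpriced externality term.
    return 0.6 * np.log(x[0, 0]) + 0.4 * np.log(x[0, 1]) - g * x[1, 0]

def u2(x):
    # Agent 2: standard Cobb-Douglas utility, no externality.
    return 0.3 * np.log(x[1, 0]) + 0.7 * np.log(x[1, 1])

# Competitive-equilibrium allocation of the externality-free economy.
# Prices ignore g (it is unpriced), so this allocation still clears
# markets when the externality is present.
x_ce = np.array([[2.6, 1.04],
                 [1.4, 1.96]])

# A feasible reallocation: shift 0.2 of good 1 to agent 1 and 0.14 of
# good 2 to agent 2; column totals are unchanged.
x_alt = x_ce + np.array([[ 0.2, -0.14],
                         [-0.2,  0.14]])
assert np.allclose(x_alt.sum(axis=0), x_ce.sum(axis=0))

# Both agents strictly gain: the equilibrium allocation is not
# Pareto efficient despite satisfying market clearing.
print(u1(x_alt) > u1(x_ce), u2(x_alt) > u2(x_ce))  # True True
```

The mechanism is the standard one from Greenwald–Stiglitz-style analyses: because the delegate's consumption enters another agent's utility without a price, equalized marginal rates of substitution at equilibrium no longer imply that no mutually improving trade exists.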
Original abstract
The First Fundamental Theorem of Welfare Economics assumes that welfare-bearing agents are autonomous and implicitly relies on a binary distinction between autonomy and instrumentality. Welfare subjects are those who have autonomy and therefore the capacity to choose and enter into utility comparisons, while everything else does not. In post-AGI economies this presupposition becomes nontrivial because artificial systems may exhibit varying degrees of autonomy, functioning as tools, delegates, strategic market actors, manipulators of choice environments, or possible welfare subjects. We argue that the theorem ought to be subject to an autonomy qualification where the impact of these changes in autonomy assumptions is incorporated. Using a minimal general-equilibrium model with autonomy-conditioned welfare, welfare-status assignment, delegation accounting, and verification institutions, we set out conditions for which autonomy-complete competitive equilibrium is autonomy-Pareto efficient. The classical theorem is recovered as the low-autonomy limit.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper argues that the First Fundamental Theorem of Welfare Economics relies on an implicit binary autonomy assumption that becomes nontrivial in post-AGI economies, where artificial systems may act as tools, delegates, or welfare subjects. It introduces a minimal general-equilibrium model incorporating autonomy-conditioned welfare, welfare-status assignment, delegation accounting, and verification institutions, and claims to derive conditions under which an autonomy-complete competitive equilibrium is autonomy-Pareto efficient, recovering the classical theorem as the low-autonomy limit.
Significance. If the formal conditions can be established without circularity or new market failures, the work would usefully generalize welfare economics to mixed human-AI settings and clarify the role of autonomy in efficiency results. The approach of recovering the standard theorem as a special case is a clear strength, but the absence of explicit equations, proofs, or falsifiable predictions in the manuscript limits its current contribution to the literature.
major comments (1)
- The abstract states that the model 'sets out conditions' for autonomy-complete CE to be autonomy-Pareto efficient, yet supplies no equations, equilibrium definitions, or welfare criteria. This omission is load-bearing because the central claim requires showing that welfare-status assignment and delegation accounting do not introduce unpriced externalities or incomplete markets that would violate the standard assumptions (local non-satiation, no externalities, complete markets) under which the First Theorem holds.
minor comments (1)
- The abstract introduces several new terms ('autonomy-conditioned welfare', 'autonomy-complete competitive equilibrium', 'autonomy-Pareto efficient') without brief definitions or references to prior work on extended welfare theorems, which reduces accessibility.
Simulated Author's Rebuttal
We thank the referee for their careful reading and constructive feedback on our manuscript. The major comment identifies a genuine gap in formalization that we address directly below.
Point-by-point responses
- Referee: The abstract states that the model 'sets out conditions' for autonomy-complete CE to be autonomy-Pareto efficient, yet supplies no equations, equilibrium definitions, or welfare criteria. This omission is load-bearing because the central claim requires showing that welfare-status assignment and delegation accounting do not introduce unpriced externalities or incomplete markets that would violate the standard assumptions (local non-satiation, no externalities, complete markets) under which the First Theorem holds.
  Authors: We agree that the current manuscript presents the argument at a conceptual level and does not contain explicit equations, formal equilibrium definitions, or welfare criteria. The phrase 'sets out conditions' in the abstract refers to the high-level description of the minimal model components rather than a rigorous derivation. This is a substantive limitation for establishing the central claim. In the revised version we will add a formal section that (i) defines autonomy-conditioned welfare functions u_i(x | A_i), where A_i is the autonomy level of agent i; (ii) specifies welfare-status assignment as a mapping from agents to welfare-subject status; (iii) incorporates delegation accounting via contractible actions and verification institutions V that price any externalities; and (iv) defines an autonomy-complete competitive equilibrium as a price vector p and allocation x such that each agent (human or AI delegate) optimizes subject to budget and autonomy constraints, with markets clearing. We will then prove that, provided V ensures no unpriced externalities from delegation, markets remain complete, and local non-satiation holds for all welfare subjects, the equilibrium is autonomy-Pareto efficient. The classical First Theorem is recovered exactly when A_i is binary and low for non-human entities. This revision will explicitly verify that the standard assumptions are preserved rather than violated.
  Revision: yes
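The components the rebuttal promises can be sketched in a toy two-agent, two-good exchange economy. Everything below is a hypothetical parameterization, not the paper's model: Cobb-Douglas utilities stand in for u_i(x | A_i), an autonomy vector A with threshold tau stands in for welfare-status assignment, and the equilibrium is computed in closed form. When the stated assumptions hold (no externalities, local non-satiation), the equilibrium equalizes marginal rates of substitution across welfare subjects, which is the efficiency condition the theorem asserts.

```python
import numpy as np

# Hypothetical parameterization (not from the paper).
a = np.array([0.6, 0.3])      # Cobb-Douglas taste for good 1
A = np.array([1.0, 0.9])      # autonomy levels A_i
tau = 0.5                     # welfare-status threshold: subject iff A_i >= tau
e = np.array([[1.0, 2.0],     # endowments: rows = agents, cols = goods
              [3.0, 1.0]])

# Competitive equilibrium with Cobb-Douglas demand, p2 normalized to 1:
# agent i spends share a_i of wealth m_i = p1*e_i1 + e_i2 on good 1, so
# clearing good 1 gives p1 = sum(a_i e_i2) / (E1 - sum(a_i e_i1)).
E1 = e[:, 0].sum()
p1 = (a * e[:, 1]).sum() / (E1 - (a * e[:, 0]).sum())
m = p1 * e[:, 0] + e[:, 1]
x = np.column_stack([a * m / p1, (1 - a) * m])   # equilibrium allocation

# Both markets clear.
assert np.allclose(x.sum(axis=0), e.sum(axis=0))

# Autonomy-Pareto efficiency check among welfare subjects: each subject's
# MRS_i = a_i x_i2 / ((1 - a_i) x_i1) must equal the price ratio p1/p2,
# so no reallocation can improve all subjects at once.
subjects = A >= tau
mrs = a * x[:, 1] / ((1 - a) * x[:, 0])
assert np.allclose(mrs[subjects], p1)

print("p1 =", round(p1, 4), "| MRS by agent:", np.round(mrs, 4))
```

With these numbers p1 = 0.6 and both MRS values equal it, so the equilibrium is efficient; the externality example above shows what breaks when the verification institutions fail to price a delegation externality, and setting A below tau for an agent simply drops it from the welfare comparison, mirroring the claimed low-autonomy limit.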
Circularity Check
No significant circularity; extension remains self-contained.
full rationale
The paper extends the First Welfare Theorem by qualifying it with autonomy levels in a minimal GE model that includes autonomy-conditioned welfare, welfare-status assignment, delegation accounting, and verification institutions. It claims to derive conditions under which autonomy-complete competitive equilibrium is autonomy-Pareto efficient, with the classical theorem recovered as the low-autonomy limit. No equations, definitions, or derivation steps are provided in the available text that reduce the claimed result to a tautology or fitted input by construction. The autonomy qualification is presented as an added modeling choice rather than a self-referential redefinition that forces the efficiency mapping. The central claim therefore retains independent theoretical content and does not collapse to its inputs.
Axiom & Free-Parameter Ledger
axioms (2)
- Domain assumption: Welfare-bearing agents are distinguished by a binary or graded autonomy property that determines capacity for utility comparisons.
- Ad hoc to paper: Verification institutions can be added to the general-equilibrium model without disturbing the competitive-equilibrium-to-efficiency mapping.
Reference graph
Works this paper leans on
- [1] Acemoglu, D.: The simple macroeconomics of AI. Economic Policy 40(121), 13–58 (2025)
- [2] Acemoglu, D., Makhdoumi, A., Malekian, A., Ozdaglar, A.: A model of behavioral manipulation. NBER Working Paper 31872, National Bureau of Economic Research (2023)
- [3] Acemoglu, D., Restrepo, P.: The race between man and machine: Implications of technology for growth, factor shares, and employment. American Economic Review 108(6), 1488–1542 (2018)
- [4] Acemoglu, D., Restrepo, P.: Automation and new tasks: How technology displaces and reinstates labor. Journal of Economic Perspectives 33(2), 3–30 (2019)
- [5] Aghion, P., Holden, R.: Incomplete contracts and the theory of the firm: What have we learned over the past 25 years? Journal of Economic Perspectives 25(2), 181–197 (2011)
- [6] Akerlof, G.A.: The market for "lemons": Quality uncertainty and the market mechanism. Quarterly Journal of Economics 84(3), 488–500 (1970)
- [7] Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., Mané, D.: Concrete problems in AI safety. arXiv:1606.06565 (2016)
- [8] Arrow, K.J., Debreu, G.: Existence of an equilibrium for a competitive economy. Econometrica 22(3), 265–290 (1954)
- [9] Bengio, Y., Hinton, G., Yao, A., et al.: Managing extreme AI risks amid rapid progress. Science 384(6698), 842–845 (2024)
- [10] Bennett, M.T.: How to build conscious machines. Ph.D. thesis, The Australian National University (2025)
- [11] Bernheim, B.D., Rangel, A.: Beyond revealed preference: Choice-theoretic foundations for behavioral welfare economics. Quarterly Journal of Economics 124(1), 51–104 (2009)
- [12] Birch, J.: The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. Oxford University Press, Oxford (2024)
- [13] Bowman, S.R., et al.: Measuring progress on scalable oversight for large language models (2022)
- [14] Butlin, P., Long, R., Elmoznino, E., et al.: Consciousness in artificial intelligence: Insights from the science of consciousness. arXiv:2308.08708 (2023)
- [15] Capraro, V., Lentsch, A., Acemoglu, D., et al.: The impact of generative artificial intelligence on socioeconomic inequalities and policy making. PNAS Nexus 3(6), pgae191 (2024)
- [16] Casper, S., Davies, X., Shi, C., Gilbert, T.K., Scheurer, J., Rando, J., Freedman, R., Korbak, T., Lindner, D., Freire, P., et al.: Open problems and fundamental limitations of reinforcement learning from human feedback. arXiv:2307.15217 (2023)
- [17] Catalini, C., Hui, X., Wu, J.: Some simple economics of AGI. arXiv:2602.20946 (2026)
- [18]
- [19] Chan, A., Salganik, R., Markelius, A., et al.: Harms from increasingly agentic algorithmic systems. In: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, pp. 651–666 (2023)
- [20] Cheng, M., Lee, C., Khadpe, P., Yu, S., Han, D., Jurafsky, D.: Sycophantic AI decreases prosocial intentions and promotes dependence. Science 391(6792), eaec8352 (2026)
- [21] Christiano, P.F., Leike, J., Brown, T.B., Martic, M., Legg, S., Amodei, D.: Deep reinforcement learning from human preferences. In: Advances in Neural Information Processing Systems, vol. 30 (2017)
- [22] Debreu, G.: Theory of Value: An Axiomatic Analysis of Economic Equilibrium. Yale University Press, New Haven (1959)
- [23] Fanciullo, J.: Are current AI systems capable of well-being? Asian Journal of Philosophy 4, 42 (2025)
- [24] Feng, K.J., McDonald, D.W., Zhang, A.X.: Levels of autonomy for AI agents. arXiv:2506.12469 (2025)
- [25] Gabriel, I.: Artificial intelligence, values, and alignment. Minds and Machines 30, 411–437 (2020)
- [26] Goldstein, S., Kirk-Giannini, C.D.: AI wellbeing. Asian Journal of Philosophy 4, 25 (2025)
- [27] Greenwald, B.C., Stiglitz, J.E.: Externalities in economies with imperfect information and incomplete markets. Quarterly Journal of Economics 101(2), 229–264 (1986)
- [28] Hadfield, G.K., Koh, A.: An economy of AI agents. In: Agrawal, A.K., Brynjolfsson, E., Korinek, A. (eds.) The Economics of Transformative AI, chap. 5. University of Chicago Press (2025), https://www.nber.org/chapters/c15305
- [29] Hadfield-Menell, D., Hadfield, G.K.: Incomplete contracting and AI alignment. In: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pp. 417–422. Association for Computing Machinery, New York, NY (2019)
- [30] Hadfield-Menell, D., Hadfield, G.K.: Incomplete contracting and AI alignment. In: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pp. 417–422 (2019)
- [31] Hadfield-Menell, D., Russell, S., Abbeel, P., Dragan, A.: Cooperative inverse reinforcement learning. In: Advances in Neural Information Processing Systems, vol. 29 (2016)
- [32] Holmstrom, B., Milgrom, P.: Multitask principal–agent analyses: Incentive contracts, asset ownership, and job design. Journal of Law, Economics, and Organization 7(Special Issue), 24–52 (1991)
- [33] Huang, L., Xiao, W., Vishnoi, N.K.: Delegation and verification under AI. Cowles Discussion Paper 2500, Cowles Foundation for Research in Economics, Yale University (2026), arXiv:2603.02961
- [34] Imas, A., Lee, K., Misra, S.: Agentic interactions. SSRN working paper, December 6, 2025
- [35] Ji, J., Qiu, T., Chen, B., Zhou, J., Zhang, B., Hong, D., Lou, H., Wang, K., Duan, Y., He, Z., Vierling, L., Zhang, Z., Zeng, F., Dai, J., Pan, X., Xu, H., O'Gara, A., Ng, K., Tse, B., Fu, J., Mcaleer, S., Wang, Y., Yang, M., Liu, Y., Wang, Y., Zhu, S.C., Guo, Y., Yang, Y., Gao, W.: AI alignment: A contemporary survey. ACM Comput. Surv. 58(5) (Nov 2025)
- [36] Korinek, A.: Economic policy challenges for the age of AI. NBER Working Paper 32980, National Bureau of Economic Research (2024)
- [37] Korinek, A., Stiglitz, J.E.: Artificial intelligence and its implications for income distribution and unemployment. NBER Working Paper 24174, National Bureau of Economic Research (2017)
- [38] Korinek, A., Suh, D.: Scenarios for the transition to AGI. NBER Working Paper 32255, National Bureau of Economic Research (2024)
- [39] Leibo, J.Z., Vezhnevets, A.S., Cunningham, W.A., Bileschi, S.M.: A pragmatic view of AI personhood (2025)
- [40] Lin, H., Czarnek, G., Lewis, B., White, J.P., Berinsky, A.J., Costello, T., Pennycook, G., Rand, D.G.: Persuading voters using human–artificial intelligence dialogues. Nature 648, 394–401 (2025)
- [41] Long, R., Sebo, J., Butlin, P., Finlinson, K., Fish, K., Harding, J., Pfau, J., Sims, T., Birch, J., Chalmers, D.: Taking AI welfare seriously (2024)
- [42] Ludwig, J., Mullainathan, S., Pink, S.L., Rambachan, A.: Algorithms as a vehicle to reflective equilibrium: Behavioral economics 2.0. In: Agrawal, A.K., Brynjolfsson, E., Korinek, A. (eds.) The Economics of Transformative AI, chap. 11. University of Chicago Press (2025)
- [43] Mas-Colell, A., Whinston, M.D., Green, J.R.: Microeconomic Theory. Oxford University Press, New York (1995)
- [44] Mathur, A., Acar, G., Friedman, M.J., Lucherini, E., Mayer, J., Chetty, M., Narayanan, A.: Dark patterns at scale: Findings from a crawl of 11K shopping websites. Proceedings of the ACM on Human-Computer Interaction 3(CSCW), 81:1–81:32 (2019)
- [45] Morris, M.R., Sohl-Dickstein, J., Fiedel, N., Warkentin, T., Dafoe, A., Faust, A., Farabet, C., Legg, S.: Position: Levels of AGI for operationalizing progress on the path to AGI. In: International Conference on Machine Learning, pp. 36308–36321. PMLR (2024)
- [46] Ng, A.Y., Russell, S.J.: Algorithms for inverse reinforcement learning. In: Proceedings of the Seventeenth International Conference on Machine Learning, pp. 663–670 (2000)
- [47] Nussbaum, M.C.: Women and Human Development: The Capabilities Approach. Cambridge University Press, Cambridge (2000)
- [48] Park, J.S., O'Brien, J.C., Cai, C.J., Morris, M.R., Liang, P., Bernstein, M.S.: Generative agents: Interactive simulacra of human behavior. In: Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, pp. 1–22. Association for Computing Machinery (2023)
- [49] Pattanaik, P.K., Xu, Y.: On ranking opportunity sets in terms of freedom of choice. Recherches Économiques de Louvain 56(3–4), 383–390 (1990)
- [50] Perrier, E., Bennett, M.T.: Time, identity and consciousness in language model agents. arXiv:2603.09043 (2026)
- [51] Raz, J.: The Morality of Freedom. Clarendon Press, Oxford (1986)
- [52] Robeyns, I.: Wellbeing, Freedom and Social Justice: The Capability Approach Re-Examined. Open Book Publishers, Cambridge (2017)
- [53] Sebo, J., Long, R.: Moral consideration for AI systems by 2030. AI and Ethics 5(1), 591–606 (2025)
- [54] Sen, A.: The impossibility of a Paretian liberal. Journal of Political Economy 78(1), 152–157 (1970)
- [55] Sen, A.: Commodities and Capabilities. North-Holland, Amsterdam (1985)
- [56] Sen, A.: Welfare, preference and freedom. Journal of Econometrics 50(1–2), 15–29 (1991)
- [57] Sen, A.: Development as Freedom. Oxford University Press, Oxford (1999)
- [58] Shahidi, P., Rusak, G., Manning, B.S., Fradkin, A., Horton, J.J.: The Coasean singularity? Demand, supply, and market design with AI agents. NBER Working Paper 34468, National Bureau of Economic Research (2025), https://www.nber.org/papers/w34468
- [59] Stiglitz, J.E.: The invisible hand and modern welfare economics. NBER Working Paper 3641, National Bureau of Economic Research (1991)
- [60] Susser, D., Roessler, B., Nissenbaum, H.: Online manipulation: Hidden influences in a digital world. Georgetown Law Technology Review 4, 1–45 (2019)
- [61] Susser, D., Roessler, B., Nissenbaum, H.: Technology, autonomy, and manipulation. Internet Policy Review 8(2) (2019)
- [62] Tomašev, N., Franklin, M., Osindero, S.: Intelligent AI delegation. arXiv:2602.11865 (2026)
- [63] Trammell, P., Korinek, A.: Economic growth under transformative AI. NBER Working Paper 31815, National Bureau of Economic Research (2023)
- [64] Varian, H.R.: Microeconomic Analysis, 3rd edn. W. W. Norton, New York (1992)
- [65] Veit, W.: Is consciousness required for AI welfare? Asian Journal of Philosophy 5, 18 (2026)
- [66] Wang, G., Xie, Y., Jiang, Y., Mandlekar, A., Xiao, C., Zhu, Y., Fan, L., Anandkumar, A.: Voyager: An open-ended embodied agent with large language models. arXiv:2305.16291 (2023)
- [67] Wu, Q., Bansal, G., Zhang, J., Wu, Y., Li, B., Zhu, E., Jiang, L., Zhang, X., Zhang, S., Liu, J., et al.: AutoGen: Enabling next-gen LLM applications via multi-agent conversation. arXiv:2308.08155 (2023)
- [68] Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., Cao, Y.: ReAct: Synergizing reasoning and acting in language models. In: International Conference on Learning Representations (2023)
- [69] Yeung, K.: 'Hypernudge': Big data as a mode of regulation by design. Information, Communication & Society 20(1), 118–136 (2017)