Do AI agents trump human agency?
Metadata
Author
Monasterio Astobiza, Aníbal
Publisher
Springer Nature
Subject
Artificial intelligence; Collective behavior; Decision-making processes
Date
2025-11-21
Bibliographic reference
Astobiza, A.M. Do AI agents trump human agency?. Discov Artif Intell 5, 348 (2025). https://doi.org/10.1007/s44163-025-00608-y
Sponsorship
MICIU/AEI and European Social Fund, AutAI PID PID2022-137953OB-I00; MICIU/AEI and European Social Fund Plus, PID2024-156166OA-I00; Consejería de Universidad, Investigación e Innovación, Grant EMEC_2023_00442
Abstract
Artificial agents are examined within simulated environments to elucidate the emergence of collective behaviors and decision-making processes under diverse environmental pressures and population structures. Using an agent-based modeling (ABM) framework, the simulation tracked 157,097 iterations across four agent types (cooperators, defectors, super-reciprocators, and free riders) while analyzing 12 core metrics, including alignment indices, coherence, and environmental stress. The results revealed distinct phase transitions in behavior: low-density populations (d < 0.4) supported strong consensus formation, while higher densities (d > 0.8) led to fragmentation and increased competition. Agent alignment consistently ranged between 0.28 and 0.37, reflecting partial but stable consensus across conditions. Cooperative behaviors emerged and persisted only when resource availability exceeded a critical threshold (RG ≥ 6), underscoring the role of resource abundance in sustaining collective intelligence. Through interventions such as network topology changes and cognitive plasticity adjustments, agents demonstrated emergent behavioral patterns that arose from their rule-based interactions. These patterns, while complex, do not constitute "sophisticated decision-making" in the sense of genuine intelligence or understanding. Rather, they are emergent properties of the system: collective behaviors that cannot be reduced to individual agent rules but arise from their interactions under specific environmental conditions. This distinction is crucial for avoiding misattribution of intelligence to rule-following systems, even when those systems produce complex outputs. These findings provide insights into the mechanisms driving emergent intelligence in artificial systems and their implications for the governance and ethical design of future AI agents.
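The abstract does not specify the model's update rules, so the skeleton of such an ABM can only be sketched. The following minimal Python sketch uses hypothetical behavior rules for the four agent types named above (the 10% cooperation rate for free riders, the majority-following rule for super-reciprocators, and the "alignment" definition as agreement with the modal action are all illustrative assumptions, not the paper's method; the density and resource-threshold dynamics are omitted):

```python
import random

# Hypothetical update rules for the four agent types named in the
# abstract; the paper's actual ABM rules and metrics are not given there.
TYPES = ("cooperator", "defector", "super_reciprocator", "free_rider")

def act(agent_type, prev_coop_frac, rng):
    """Return 1 (cooperate) or 0 (defect) for one round."""
    if agent_type == "cooperator":
        return 1
    if agent_type == "defector":
        return 0
    if agent_type == "super_reciprocator":
        # Assumed rule: reciprocate the previous round's majority behavior.
        return 1 if prev_coop_frac >= 0.5 else 0
    # Assumed rule: free riders cooperate only occasionally.
    return 1 if rng.random() < 0.1 else 0

def simulate(n_agents=100, rounds=200, seed=0):
    """Run the toy ABM and record (cooperation fraction, alignment) per round."""
    rng = random.Random(seed)
    agents = [rng.choice(TYPES) for _ in range(n_agents)]
    coop_frac = 0.5  # initial assumption about the "previous" round
    history = []
    for _ in range(rounds):
        actions = [act(t, coop_frac, rng) for t in agents]
        coop_frac = sum(actions) / n_agents
        # Toy "alignment index": share of agents matching the modal action.
        alignment = max(coop_frac, 1.0 - coop_frac)
        history.append((coop_frac, alignment))
    return history

history = simulate()
final_coop, final_alignment = history[-1]
```

The loop structure (per-round actions, an aggregate state fed back to the agents, and per-round metrics) is the generic ABM pattern; interventions such as the topology changes mentioned in the abstract would enter by restricting which agents each agent observes instead of the global `coop_frac`.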





