TY - GEN
T1 - Emergent Cooperation under Uncertain Incentive Alignment
AU - Orzan, Nicole
AU - Acar, Erman
AU - Grossi, Davide
AU - Rădulescu, Roxana
N1 - Publisher Copyright: © 2024 International Foundation for Autonomous Agents and Multiagent Systems.
PY - 2024
Y1 - 2024
AB - Understanding the emergence of cooperation in systems of computational agents is crucial for the development of effective cooperative AI. Interactions among individuals in real-world settings are often sparse and occur within a broad spectrum of incentives, which are often only partially known. In this work, we explore how cooperation can arise among reinforcement learning agents in scenarios characterised by infrequent encounters, where agents face uncertainty about the alignment of their incentives with those of others. To do so, we train the agents in a wide spectrum of environments, ranging from fully competitive, through mixed-motive, to fully cooperative. Under this type of uncertainty, we study the effects of mechanisms, such as reputation and intrinsic rewards, that have been proposed in the literature to foster cooperation in mixed-motive environments. Our findings show that uncertainty substantially lowers the agents' ability to engage in cooperative behaviour when that would be the best course of action. In this scenario, the use of effective reputation mechanisms and intrinsic rewards boosts the agents' capability to act nearly optimally in cooperative environments, while also greatly enhancing cooperation in mixed-motive environments.
KW - Intrinsic Rewards
KW - Multi-Agent Reinforcement Learning
KW - Public Goods Game
KW - Social Dilemmas
UR - http://www.scopus.com/inward/record.url?scp=85196410971&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85196410971
T3 - Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
SP - 1521
EP - 1530
BT - 23rd International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2024
PB - International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
T2 - 23rd International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2024
Y2 - 6 May 2024 through 10 May 2024
ER -