Stochastic collusion and the power law of learning: a general reinforcement learning model of cooperation

Research output: Contribution to journal · Article · Academic · peer-review

31 Citations (Scopus)

Abstract

Concerns about models of cultural adaptation as analogs of genetic selection have led cognitive game theorists to explore learning-theoretic specifications. Two prominent examples, the Bush-Mosteller stochastic learning model and the Roth-Erev payoff-matching model, are aligned and integrated as special cases of a general reinforcement learning model. Both models predict stochastic collusion as a backward-looking solution to the problem of cooperation in social dilemmas, based on a random walk into a self-reinforcing cooperative equilibrium. The integration uncovers hidden assumptions that constrain the generality of the theoretical derivations. Specifically, Roth and Erev assume a "power law of learning": the curious but plausible tendency for learning to diminish with success and intensify with failure. Computer simulation is used to explore the effects of this assumption on stochastic collusion in three social dilemma games. The analysis shows how the integration of alternative models can uncover underlying principles and lead to a more general theory.
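To make the mechanism concrete, the Bush-Mosteller update described above can be sketched as follows. This is a minimal illustration, not the paper's specification: the Prisoner's Dilemma payoffs, aspiration level, and learning rate below are illustrative assumptions, and the stimulus scaling follows the common convention of normalizing payoff-minus-aspiration into [-1, 1].

```python
import random

# Illustrative Prisoner's Dilemma payoffs (assumed, not the paper's calibration)
R, S, T, P = 3, 0, 4, 1          # reward, sucker, temptation, punishment
ASPIRATION = 2.0                 # payoffs above this reinforce the chosen action
LEARNING_RATE = 0.5
MAX_DIFF = max(abs(x - ASPIRATION) for x in (R, S, T, P))

def update(p_cooperate, cooperated, payoff):
    """Bush-Mosteller update of one agent's probability of cooperating."""
    stimulus = (payoff - ASPIRATION) / MAX_DIFF   # normalized to [-1, 1]
    if cooperated:
        if stimulus >= 0:   # rewarding cooperation: move p toward 1
            return p_cooperate + (1 - p_cooperate) * LEARNING_RATE * stimulus
        return p_cooperate + p_cooperate * LEARNING_RATE * stimulus
    # agent defected: a rewarding defection lowers p, a punishing one raises it
    if stimulus >= 0:
        return p_cooperate - p_cooperate * LEARNING_RATE * stimulus
    return p_cooperate - (1 - p_cooperate) * LEARNING_RATE * stimulus

def play(rounds=500, seed=1):
    """Repeated PD between two Bush-Mosteller learners; returns final probabilities."""
    rng = random.Random(seed)
    p1 = p2 = 0.5
    for _ in range(rounds):
        c1, c2 = rng.random() < p1, rng.random() < p2
        pay1 = R if (c1 and c2) else S if c1 else T if c2 else P
        pay2 = R if (c1 and c2) else S if c2 else T if c1 else P
        p1, p2 = update(p1, c1, pay1), update(p2, c2, pay2)
    return p1, p2
```

Because mutual cooperation pays above aspiration for both agents, each lucky run of joint cooperation raises both cooperation probabilities, which is the self-reinforcing random walk ("stochastic collusion") the abstract refers to; whether and how fast a given run locks in depends on the seed and parameters.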

Original language: English
Pages (from-to): 629-653
Number of pages: 25
Journal: Journal of Conflict Resolution
Volume: 46
Issue number: 5
Publication status: Published - Oct-2002
