Learning from Monte Carlo Rollouts with Opponent Models for Playing Tron

Stefan Knegt, Madalina M. Drugan, Marco Wiering*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic › peer-review

2 Citations (Scopus)
27 Downloads (Pure)


This paper describes a novel reinforcement learning system for learning to play the game of Tron. The system combines Q-learning, multi-layer perceptrons, vision grids, opponent modelling, and Monte Carlo rollouts in a novel way. By learning an opponent model, Monte Carlo rollouts can be used effectively to generate state trajectories for all possible actions, from which improved action-value estimates can be computed. This makes it possible to extend experience replay so that the state-action values of all actions in a given game state are updated simultaneously. The results show that experience replay that updates the Q-values of all actions simultaneously strongly outperforms conventional experience replay, which only updates the Q-value of the performed action. The results also show that using short or long rollout horizons during training leads to similarly good performance against two fixed opponents.
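The core idea — using a learned opponent model to roll out every possible action and update all Q-values of a state at once — can be sketched in minimal tabular form. The stubs `opponent_model` and `simulate_step` below are hypothetical toy stand-ins (the paper uses multi-layer perceptrons over vision grids, not tables); the sketch only illustrates the all-actions update, not the authors' implementation:

```python
import random

ACTIONS = [0, 1, 2]  # e.g. Tron's relative moves: turn left, go straight, turn right

def opponent_model(state):
    """Hypothetical learned opponent model: predicts the opponent's next action."""
    return 0

def simulate_step(state, action, opp_action):
    """Toy transition model: action 0 gets reward 1, then the episode ends."""
    reward = 1.0 if action == 0 else 0.0
    return state, reward, True

def rollout_return(state, first_action, horizon, gamma=0.95):
    """Monte Carlo return of taking `first_action`, then following a random rollout policy."""
    total, discount, action = 0.0, 1.0, first_action
    for _ in range(horizon):
        opp_action = opponent_model(state)
        state, reward, done = simulate_step(state, action, opp_action)
        total += discount * reward
        discount *= gamma
        if done:
            break
        action = random.choice(ACTIONS)
    return total

def update_all_actions(Q, state, horizon=5, alpha=0.1):
    """Q-update for *every* action in `state` at once, the extension to experience replay."""
    for a in ACTIONS:
        target = rollout_return(state, a, horizon)
        Q.setdefault((state, a), 0.0)
        Q[(state, a)] += alpha * (target - Q[(state, a)])

Q = {}
for _ in range(100):
    update_all_actions(Q, state="s0")
```

Because each replayed state yields rollout targets for all actions, a single stored experience updates the whole action-value vector rather than only the action that was actually performed.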
Original language: English
Title of host publication: ICAART 2018
Subtitle of host publication: Agents and Artificial Intelligence
Editors: J. van den Herik, A. Rocha
Place of publication: Cham
ISBN (Electronic): 978-3-030-05453-3
ISBN (Print): 978-3-030-05452-6
Publication status: Published - 30-Dec-2018
Event: ICAART 2018: International Conference on Agents and Artificial Intelligence - Funchal, Portugal
Duration: 16-Jan-2018 – 18-Jan-2018

Publication series

Name: Lecture Notes in Computer Science
ISSN (Print): 0302-9743


Conference: ICAART 2018


Keywords

  • Reinforcement Learning
  • Neural Networks
  • Computer Games
