An Investigation Into the Effect of the Learning Rate on Overestimation Bias of Connectionist Q-learning

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

10 Citations (Scopus)

Abstract

In reinforcement learning, Q-learning is the best-known algorithm, but it suffers from overestimation bias, which may lead to poor performance or unstable learning. In this paper, we present a novel analysis of this problem using various control tasks. For solving these tasks, Q-learning is combined with a multilayer perceptron (MLP), experience replay, and a target network. We focus our analysis on the effect of the learning rate used when training the MLP, and furthermore examine whether decaying the learning rate over time has advantages over a static one. Experiments were performed on several maze-solving problems, involving deterministic or stochastic transition functions and 2D or 3D grids, and on two OpenAI Gym control problems. We conducted the same experiments with Double Q-learning, using two MLPs with the same parameter settings but without target networks. The results on the maze problems show that for Q-learning combined with the MLP, overestimation occurs with higher learning rates but not with lower ones. The Double Q-learning variant becomes much less stable with higher learning rates, and with low learning rates the overestimation bias may still occur. Overall, decaying learning rates clearly improve the performance of both Q-learning and Double Q-learning.
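The core difference the abstract describes can be sketched in a few lines: standard Q-learning bootstraps from a max over its own (noisy) estimates, which inflates the target, while Double Q-learning lets one estimator select the action and the other evaluate it. The snippet below is a minimal illustrative sketch, not the paper's implementation; the function names and the inverse-time decay schedule are assumptions for illustration only.

```python
import numpy as np

def q_target(q_next, reward, gamma=0.99):
    # Standard Q-learning target: max over the same estimator's Q-values.
    # Under estimation noise, E[max(Q)] >= max(E[Q]), the overestimation bias.
    return reward + gamma * np.max(q_next)

def double_q_target(q_next_select, q_next_eval, reward, gamma=0.99):
    # Double Q-learning: one estimator picks the greedy action,
    # a second estimator evaluates it, reducing the maximization bias.
    action = np.argmax(q_next_select)
    return reward + gamma * q_next_eval[action]

def decayed_lr(lr0, step, decay=1e-4):
    # Illustrative inverse-time decay schedule (the paper compares decaying
    # vs. static learning rates; the exact schedule here is an assumption).
    return lr0 / (1.0 + decay * step)
```

For example, if the true next-state values are all zero but two independent noisy estimates disagree, `q_target` bootstraps from the largest (most optimistic) estimate, whereas `double_q_target` evaluates the selected action with the other estimator, so optimistic selection errors are not systematically rewarded.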
Original language: English
Title of host publication: Proceedings of the 13th International Conference on Agents and Artificial Intelligence
Editors: Ana Paula Rocha, Luc Steels, Jaap van den Herik
Publisher: SciTePress
Pages: 107-118
Number of pages: 12
Volume: 2
ISBN (Print): 978-989-758-484-8
Publication status: Published - 10-Feb-2021
Event: 13th International Conference on Agents and Artificial Intelligence - Vienna, Austria
Duration: 4-Feb-2021 to 6-Feb-2021

Conference

Conference: 13th International Conference on Agents and Artificial Intelligence
Country/Territory: Austria
City: Vienna
Period: 04/02/2021 to 06/02/2021
