Reinforcement Learning with Potential Functions Trained to Discriminate Good and Bad States

Yifei Chen*, Hamidreza Kasaei, Lambert Schomaker, Marco Wiering

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review



Reward shaping is an efficient way to incorporate domain knowledge into a reinforcement learning agent. Nevertheless, it is often impractical to require the prior knowledge needed to design shaping rewards by hand. Therefore, it can be more effective for the agent to learn the shaping reward function itself during training. In this paper, building on the potential-based reward shaping framework, which guarantees policy invariance, we propose to learn a potential function concurrently with training an agent using a reinforcement learning algorithm. In the proposed method, the potential function is trained by examining states that occur in good and in bad episodes. We apply the proposed adaptive potential function (APF) while training an agent with Q-learning and develop two novel algorithms. The first is APF-QMLP, which combines the good/bad-state potential function with Q-learning and multi-layer perceptrons (MLPs) to estimate the Q-function. The second is APF-Dueling-DQN, which combines the novel potential function with Dueling DQN. In particular, an autoencoder is adopted in APF-Dueling-DQN to map image states from Atari games to hash codes. We evaluated the proposed algorithms empirically in four environments with low- or high-dimensional state spaces: a six-room maze, CartPole, Acrobot, and Ms. Pacman. The experimental results showed that the adaptive potential function improved the performance of the selected reinforcement learning algorithms.
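The core idea described above can be illustrated with a minimal sketch. This is not the paper's exact APF algorithm; it is a simplified tabular illustration, assuming a hypothetical potential estimate that is the running average of ±1 labels assigned to states visited in good or bad episodes, combined with the standard potential-based shaping term F(s, s') = γΦ(s') − Φ(s), which is the form known to preserve the optimal policy:

```python
from collections import defaultdict

GAMMA = 0.99  # discount factor
ALPHA = 0.1   # Q-learning step size

# Hypothetical learned potential Phi: running average of episode labels
# (+1 for states seen in good episodes, -1 for states seen in bad ones).
potential = defaultdict(float)
visits = defaultdict(int)


def update_potential(episode_states, episode_was_good):
    """Train Phi from one finished episode, labeled good (+1) or bad (-1)."""
    label = 1.0 if episode_was_good else -1.0
    for s in episode_states:
        visits[s] += 1
        # Incremental running average of the labels seen for this state.
        potential[s] += (label - potential[s]) / visits[s]


def shaped_reward(r, s, s_next):
    """Potential-based shaping: r + F(s, s') with F = gamma*Phi(s') - Phi(s).
    This additive form guarantees policy invariance."""
    return r + GAMMA * potential[s_next] - potential[s]


def q_update(Q, s, a, r, s_next, actions):
    """One Q-learning step that uses the shaped reward instead of r."""
    target = shaped_reward(r, s, s_next) + GAMMA * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])
```

In the full method, the tabular running average would be replaced by a trained function approximator (an MLP, or an autoencoder-plus-hash-code pipeline for image states), but the shaping term applied to the agent's reward has the same form.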
Original language: English
Title of host publication: 2021 International Joint Conference on Neural Networks (IJCNN)
Number of pages: 7
ISBN (Print): 978-1-6654-4597-9
Publication status: Published - 22-Jul-2021
Event: 2021 International Joint Conference on Neural Networks (IJCNN) - Shenzhen, China
Duration: 18-Jul-2021 – 22-Jul-2021


Conference: 2021 International Joint Conference on Neural Networks (IJCNN)


Keywords:
  • Training
  • Codes
  • Neural networks
  • Reinforcement learning
  • Games
