
Extending Environments to Measure Self-reflection in Reinforcement Learning

Open Access | Nov 2022

Abstract

We consider an extended notion of reinforcement learning in which the environment can simulate the agent and base its outputs on the agent's hypothetical behavior. Since good performance usually requires paying attention to whatever the environment's outputs are based on, we argue that for an agent to achieve good performance on average across many such extended environments, the agent must self-reflect. Thus weighted-average performance over the space of all suitably well-behaved extended environments could be considered a way of measuring how self-reflective an agent is. We give examples of extended environments and introduce a simple transformation which experimentally seems to increase some standard RL agents' performance in a certain type of extended environment.
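The core mechanism described in the abstract, an environment that can simulate the agent on hypothetical histories and base its reward on the result, can be sketched in a few lines. The names, the history representation, and the toy reward rule below are illustrative assumptions for this sketch, not the paper's actual environments or API.

```python
from typing import Callable, Tuple

# Illustrative sketch (not the paper's API): an agent is modeled as a
# function from its observation/action/reward history to its next action.
Action = int
History = tuple  # tuple of (observation, action, reward) triples

def constant_agent(history: History) -> Action:
    """A non-self-reflective agent that always takes action 0."""
    return 0

def alternating_agent(history: History) -> Action:
    """An agent whose action depends on how long its history is."""
    return len(history) % 2

def extended_env_step(agent: Callable[[History], Action],
                      history: History,
                      action: Action) -> Tuple[int, float]:
    """One step of a toy *extended* environment.

    Unlike a standard RL environment, this one receives the agent itself
    and simulates it on a hypothetical history (here, the empty history).
    The real action is rewarded only if it differs from what the agent
    would hypothetically do from scratch.
    """
    hypothetical_action = agent(())          # simulate the agent's behavior
    reward = 1.0 if action != hypothetical_action else -1.0
    observation = 0                          # trivial observation in this toy
    return observation, reward
```

Under this toy rule, the constant agent can never score well (its real action always matches its hypothetical one), while the alternating agent earns positive reward on odd-length histories; doing well in general requires the agent to account for its own hypothetical behavior, which is the intuition behind using such environments to measure self-reflection.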

Language: English
Page range: 1 - 24
Submitted on: Jul 21, 2022
Accepted on: Oct 28, 2022
Published on: Nov 3, 2022
Published by: Artificial General Intelligence Society
In partnership with: Paradigm Publishing Services
Publication frequency: 2 issues per year

© 2022 Samuel Allen Alexander, Michael Castaneda, Kevin Compher, Oscar Martinez, published by Artificial General Intelligence Society
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.

Volume 13 (2022): Issue 1 (October 2022)