Control Meets Learning Seminar
Simple Agent, Complex Environment: Efficient Reinforcement Learning with Agent State
I will describe a reinforcement learning agent that, given only a specification of agent-state dynamics and a reward function, can operate with some degree of competence in any environment. The agent applies an optimistic version of Q-learning to update value predictions based on its actions and agent states. We establish a regret bound demonstrating convergence to near-optimal per-period performance, where the time required is polynomial in the number of actions and agent states, as well as the reward averaging time of the best policy among those whose actions depend on history only through the agent state. Notably, there is no further dependence on the number of environment states, or on averaging times associated with other policies or with statistics of history.
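As a rough illustration of the kind of agent the abstract describes, the sketch below implements optimistic Q-learning over agent states: value estimates and visitation counts are kept per agent-state-action pair, and an optimistic bonus that decays with visitation count encourages exploration. The environment interface (`reset`, `step`), the bonus schedule, and all parameter values are illustrative assumptions, not details from the talk.

```python
from collections import defaultdict


def optimistic_q_learning(env, n_actions, steps=10_000,
                          step_size=0.1, bonus=1.0, discount=0.99):
    """Sketch of optimistic Q-learning over agent states.

    `env` is a hypothetical interface with `reset() -> agent_state` and
    `step(action) -> (reward, next_agent_state)`. The bonus term
    `bonus / sqrt(count + 1)` optimistically boosts rarely tried
    (agent_state, action) pairs, driving exploration.
    """
    q = defaultdict(float)     # value estimate per (agent_state, action)
    counts = defaultdict(int)  # visitation count per (agent_state, action)
    s = env.reset()
    for _ in range(steps):
        # Act greedily with respect to optimistically boosted values.
        a = max(range(n_actions),
                key=lambda a: q[s, a] + bonus / (counts[s, a] + 1) ** 0.5)
        counts[s, a] += 1
        r, s_next = env.step(a)
        # Temporal-difference update toward the bootstrapped target.
        target = r + discount * max(q[s_next, b] for b in range(n_actions))
        q[s, a] += step_size * (target - q[s, a])
        s = s_next
    return q
```

The key point this is meant to convey is that the agent's memory scales with the number of agent states and actions only, regardless of how large the underlying environment state space is.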
Contact: Jolene Brink email@example.com
For more information visit: https://sites.google.com/view/control-meets-learning