arxiv:1505.04497
A Definition of Happiness for Reinforcement Learning Agents
Published on May 18, 2015
AI-generated summary
The temporal difference error is proposed as a formal definition of happiness for reinforcement learning agents, representing the discrepancy between actual and expected rewards.
Abstract
What is happiness for reinforcement learning agents? We seek a formal definition satisfying a list of desiderata. Our proposed definition of happiness is the temporal difference error, i.e. the difference between the value of the obtained reward and observation and the agent's expectation of this value. This definition satisfies most of our desiderata and is compatible with empirical research on humans. We state several implications and discuss examples.
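The proposed definition can be sketched concretely. In standard value-learning notation, the temporal difference error is the obtained reward plus the discounted value of the next state, minus the value the agent previously expected. The following minimal Python sketch (not the authors' code; the function name, discount factor, and numbers are illustrative assumptions) shows how "happiness" in this sense is positive exactly when outcomes exceed expectations:

```python
def td_error(reward, value_next, value_current, gamma=0.9):
    """TD error: value of the obtained outcome minus the agent's
    prior expectation of that value (illustrative sketch)."""
    return reward + gamma * value_next - value_current

# An agent expecting little (low current value estimate) that receives
# a sizable reward registers a positive TD error ("happiness"):
surprise_gain = td_error(reward=1.0, value_next=0.0, value_current=0.2)

# An agent that expected a lot but receives nothing registers a
# negative TD error ("unhappiness"):
surprise_loss = td_error(reward=0.0, value_next=0.0, value_current=1.0)
```

Note that under this definition an agent receiving a large but fully anticipated reward has a TD error near zero, matching the paper's point that happiness tracks the discrepancy between actual and expected rewards, not the reward itself.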