Focus of Attention in Reinforcement Learning

Author
Lihong Li, Vadim Bulitko, Russell Greiner

Source
Journal of Universal Computer Science, 2007, Volume 13, Issue 9, Pages 1246-1269

Abstract
Classification-based reinforcement learning (RL) methods have recently been proposed as an alternative to the traditional value-function-based methods. These methods use a classifier to represent a policy, where the input (features) to the classifier is the state and the output (class label) for that state is the desired action. The reinforcement-learning community knows that focusing on more important states can lead to improved performance. In this paper, we investigate the idea of focused learning in the context of classification-based RL. Specifically, we define a useful notion of state importance, which we use to prove rigorous bounds on policy loss. Furthermore, we show that a classification-based RL agent may behave arbitrarily poorly if it treats all states as equally important.
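To illustrate the idea described in the abstract, the following is a minimal sketch (not taken from the paper) of a classification-based policy trained with and without per-state importance weights. The toy data, the importance proxy, and the use of scikit-learn's LogisticRegression are assumptions made for illustration only; the paper's own notion of state importance is defined rigorously and tied to bounds on policy loss.

```python
# Illustrative sketch (not from the paper): a classification-based policy
# trained with per-state importance weights. The toy data and the
# importance proxy below are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy training set: each row is a state feature vector, and the label is
# the desired action for that state.
states = rng.normal(size=(200, 4))                         # state features
actions = (states[:, 0] + states[:, 1] > 0).astype(int)    # desired actions

# State importance: a made-up proxy for how costly a wrong action is in
# each state; the paper defines a rigorous notion and proves policy-loss
# bounds in terms of it.
importance = np.abs(states[:, 0] + states[:, 1])

# Treat all states as equally important ...
uniform_policy = LogisticRegression().fit(states, actions)

# ... versus focused learning: weight training examples by state importance.
focused_policy = LogisticRegression().fit(states, actions,
                                          sample_weight=importance)

# The learned policy maps a state (features) to an action (class label).
new_state = rng.normal(size=(1, 4))
print("uniform action:", uniform_policy.predict(new_state)[0])
print("focused action:", focused_policy.predict(new_state)[0])
```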
Keywords
reinforcement learning, function approximation, generalization, attention

Address
Rutgers University, USA; University of Alberta, Canada; University of Alberta, Canada

Email
greiner@cs.ualberta.ca