Reasoning about RoboCup soccer narratives
Hannaneh Hajishirzi, Julia Hockenmaier, et al.
UAI 2011
In this study, we discuss a baseline function for reducing the variance of natural policy gradient estimates and derive a condition under which the variance-minimizing optimal baseline coincides with the state value function. Outside this condition, however, the state value can differ considerably from the optimal baseline. For such cases, an extended version of the NTD algorithm is proposed, in which an auxiliary function is estimated to adjust the baseline (the state value estimate in the original NTD algorithm) toward the optimal baseline. The proposed algorithm is applied to simple MDPs and a challenging pendulum swing-up problem. © International Symposium on Artificial Life and Robotics (ISAROB), 2008.
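For context, a minimal sketch of the standard variance-reduction argument for policy-gradient baselines is given below; the notation ($\pi_\theta$, $Q^\pi$, $V^\pi$, $b$) is assumed here rather than taken from the paper, and the result is stated for the plain likelihood-ratio gradient, not the natural-gradient NTD estimator the abstract analyzes.

\[
\nabla_\theta J(\theta) = \mathbb{E}\!\left[\nabla_\theta \log \pi_\theta(a \mid s)\,\bigl(Q^{\pi}(s,a) - b(s)\bigr)\right],
\qquad
b^{*}(s) = \frac{\mathbb{E}_a\!\left[\lVert \nabla_\theta \log \pi_\theta(a \mid s) \rVert^{2}\, Q^{\pi}(s,a)\right]}
                {\mathbb{E}_a\!\left[\lVert \nabla_\theta \log \pi_\theta(a \mid s) \rVert^{2}\right]}.
\]

Subtracting $b(s)$ leaves the gradient unbiased because $\mathbb{E}_a[\nabla_\theta \log \pi_\theta(a \mid s)] = 0$, and minimizing the estimator's variance state by state yields $b^{*}(s)$ above; it reduces to the state value $V^{\pi}(s)$ only when the weight $\lVert \nabla_\theta \log \pi_\theta \rVert^{2}$ is uncorrelated with $Q^{\pi}(s,a)$, which parallels the condition discussed in the abstract.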
S. Winograd
Journal of the ACM
Paula Harder, Venkatesh Ramesh, et al.
EGU 2023
David W. Jacobs, Daphna Weinshall, et al.
IEEE Transactions on Pattern Analysis and Machine Intelligence