Bertrand games between multi-class queues
Parijat Dube, Rahul Jain
CDC/CCC 2009
We generalize the PAC learning framework for Markov decision processes developed in [18] by allowing the reward function to depend on both the state and the action; both the state and action spaces may be countably infinite. The value function of a Markov decision process assigns to each policy its expected discounted reward, and this expected reward can be estimated as the empirical average of the reward over many independent simulation runs. We derive bounds on the number of runs needed for the empirical average to converge to the expected reward uniformly over a class of policies, in terms of the VC or pseudo-dimension of the policy class. We then propose a framework for obtaining an ε-optimal policy from simulation and give the sample complexity of this approach.
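A minimal Python sketch of the simulation-based scheme the abstract describes: each policy's value is estimated by the empirical average of its discounted reward over independent runs, and the empirically best policy in the class is returned. The simulator interface simulate_episode, the discount factor gamma, and the finite truncation of trajectories are illustrative assumptions, not details from the paper; the required n_runs would come from the paper's uniform-convergence bounds.

def discounted_return(rewards, gamma):
    """Discounted sum of rewards along one simulated trajectory.
    Trajectories are assumed finite (truncated), so this approximates
    the infinite-horizon discounted reward."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

def estimate_value(policy, simulate_episode, gamma, n_runs):
    """Empirical average of the discounted reward over n_runs
    independent simulation runs."""
    total = 0.0
    for _ in range(n_runs):
        rewards = simulate_episode(policy)  # one independent run
        total += discounted_return(rewards, gamma)
    return total / n_runs

def select_policy(policies, simulate_episode, gamma, n_runs):
    """Return the policy with the largest empirical value estimate.
    If n_runs satisfies a uniform-convergence bound over the policy
    class (as in the paper, via its VC or pseudo-dimension), the
    selected policy is epsilon-optimal with high probability."""
    return max(policies,
               key=lambda p: estimate_value(p, simulate_episode, gamma, n_runs))

Uniform convergence over the whole policy class, rather than pointwise concentration for a single policy, is what justifies the final maximization: the selected policy is data-dependent, so per-policy bounds alone do not control its estimation error.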