CRAAM
2.0.0
Robust and Approximate Markov Decision Processes
A robust solution to a robust or regular MDP.
#include <robust_values.hpp>
Public Member Functions
SolutionRobust ()
Empty SolutionRobust.
SolutionRobust (size_t statecount)
Empty SolutionRobust for a problem with statecount states.
SolutionRobust (numvec valuefunction, indvec policy)
SolutionRobust initialized with the given value function and policy.
SolutionRobust (numvec valuefunction, indvec policy, vector< numvec > natpolicy, prec_t residual=-1, long iterations=-1)
SolutionRobust initialized with the given value function, policy, and nature's policy, plus an optional Bellman residual and iteration count.
Inherited from Solution:
Solution (size_t statecount)
Empty solution for a problem with statecount states.
Solution (numvec valuefunction, indvec policy, prec_t residual=-1, long iterations=-1)
Solution initialized with the given value function and policy.
prec_t total_return (const Transition &initial) const
Computes the total return of the solution given the initial distribution.
Public Attributes
vector< numvec > natpolicy
Randomized policy of nature; probabilities are stored only for states (or outcomes) that have non-zero probability in the MDP model.
Inherited from Solution:
numvec valuefunction
Value function.
indvec policy
Index of the action to take for each state.
prec_t residual
Bellman residual of the computation.
long iterations
Number of iterations taken.
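To make the attribute layout concrete, here is a minimal stand-in struct mirroring the public attributes listed above. The struct itself is illustrative, not the library's definition; only the member names and types follow the listing. The -1 defaults for residual and iterations match the constructor defaults and mark values that were not computed.

```cpp
#include <vector>

using prec_t = double;
using numvec = std::vector<prec_t>;
using indvec = std::vector<long>;

// Illustrative sketch of the public attributes of SolutionRobust.
struct SolutionRobustSketch {
    numvec valuefunction;           // value for each state
    indvec policy;                  // index of the action to take per state
    std::vector<numvec> natpolicy;  // nature's randomized policy per state
    prec_t residual = -1;           // Bellman residual; -1 = not computed
    long iterations = -1;           // iterations taken; -1 = not tracked
};
```

A default-constructed instance corresponds to the empty SolutionRobust above: no states, and sentinel values for the residual and iteration count.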