CRAAM 2.0.0: Robust and Approximate Markov Decision Processes
Samples<State, Action> Class Template Reference
General representation of samples:
\[ \Sigma = (s_i, a_i, s_i', r_i, w_i)_{i=0}^{m-1} \]
See Sample for definitions of individual values.
#include <Samples.hpp>
Public Member Functions
void add_initial(const State &decstate)
    Adds an initial state.
void add_initial(State &&decstate)
    Adds an initial state.
void add_sample(const Sample<State, Action> &sample)
    Adds a sample starting in a decision state.
void add_sample(State state_from, Action action, State state_to, prec_t reward, prec_t weight, long step, long run)
    Adds a sample starting in a decision state.
prec_t mean_return(prec_t discount)
    Computes the discounted mean return over all the samples.
size_t size() const
    Number of samples.
Sample<State, Action> get_sample(long i) const
    Access to samples.
Sample<State, Action> operator[](long i) const
    Access to samples.
const vector<State> &get_initial() const
    List of initial states.
const vector<State> &get_states_from() const
    List of origin states.
const vector<Action> &get_actions() const
    List of actions taken.
const vector<State> &get_states_to() const
    List of destination states.
const vector<prec_t> &get_rewards() const
    List of rewards.
const vector<prec_t> &get_weights() const
    List of sample weights.
const vector<long> &get_runs() const
    List of run indices.
const vector<long> &get_steps() const
    List of step indices within runs.
Protected Attributes
vector<State> states_from
vector<Action> actions
vector<State> states_to
vector<prec_t> rewards
vector<prec_t> weights
vector<long> runs
vector<long> steps
vector<State> initial
Detailed Description

General representation of samples:
\[ \Sigma = (s_i, a_i, s_i', r_i, w_i)_{i=0}^{m-1} \]
See Sample for definitions of individual values.

Template Parameters
    State     Type defining states
    Action    Type defining actions
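A minimal usage sketch, assuming integer state and action types and that prec_t is a floating-point alias such as double; the concrete typedefs, header path, and namespace qualification are not shown on this page and may need to be adjusted for your build.

#include <Samples.hpp>

int main() {
    // Container over integer states and actions; the template
    // arguments here are illustrative assumptions only.
    Samples<long, long> samples;

    // Record an initial decision state.
    samples.add_initial(0);

    // Two transitions of a single run, using the long overload:
    // (state_from, action, state_to, reward, weight, step, run).
    samples.add_sample(0, 1, 2, 1.5, 1.0, 0, 0);
    samples.add_sample(2, 0, 3, 0.5, 1.0, 1, 0);

    // Number of stored samples and access to an individual sample.
    size_t n = samples.size();
    Sample<long, long> first = samples[0];

    // Discounted mean return over the recorded samples.
    double ret = samples.mean_return(0.95);

    (void)n; (void)first; (void)ret;
    return 0;
}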
Member Function Documentation

mean_return()

prec_t mean_return(prec_t discount)    [inline]

Computes the discounted mean return over all the samples.

Parameters
    discount    Discount factor
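The exact formula is not given on this page. A plausible reading, stated here as an assumption: with the samples grouped into N runs by their run index, r_{j,t} the reward recorded at step t of run j, and \( \gamma \) the discount argument, the returned value would be
\[ \bar{\rho} = \frac{1}{N} \sum_{j=1}^{N} \sum_{t \ge 0} \gamma^{t}\, r_{j,t}. \]
Whether and how the sample weights w_i enter this average is not stated here.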