OneStepQLearningWorker< EnvironmentType, NetworkType, UpdaterType, PolicyType > Class Template Reference

Forward declaration of OneStepQLearningWorker. More...

Public Types

using ActionType = typename EnvironmentType::Action
 
using StateType = typename EnvironmentType::State
 
using TransitionType = std::tuple< StateType, ActionType, double, StateType >
 

Public Member Functions

 OneStepQLearningWorker (const UpdaterType &updater, const EnvironmentType &environment, const TrainingConfig &config, bool deterministic)
 Construct one step Q-Learning worker with the given parameters and environment. More...

 
 OneStepQLearningWorker (const OneStepQLearningWorker &other)
 Copy another OneStepQLearningWorker. More...

 
 OneStepQLearningWorker (OneStepQLearningWorker &&other)
 Take ownership of another OneStepQLearningWorker. More...

 
 ~OneStepQLearningWorker ()
 Clean memory. More...

 
void Initialize (NetworkType &learningNetwork)
 Initialize the worker. More...

 
OneStepQLearningWorker & operator= (const OneStepQLearningWorker &other)
 Copy another OneStepQLearningWorker. More...

 
OneStepQLearningWorker & operator= (OneStepQLearningWorker &&other)
 Take ownership of another OneStepQLearningWorker. More...

 
bool Step (NetworkType &learningNetwork, NetworkType &targetNetwork, size_t &totalSteps, PolicyType &policy, double &totalReward)
 The agent will execute one step. More...

 

Detailed Description


template<typename EnvironmentType, typename NetworkType, typename UpdaterType, typename PolicyType>

class mlpack::rl::OneStepQLearningWorker< EnvironmentType, NetworkType, UpdaterType, PolicyType >

Forward declaration of OneStepQLearningWorker.

One step Q-Learning worker.

Template Parameters
EnvironmentType: The type of the reinforcement learning task.
NetworkType: The type of the network model.
UpdaterType: The type of the optimizer.
PolicyType: The type of the behavior policy.

Definition at line 147 of file async_learning.hpp.

Member Typedef Documentation

◆ ActionType

using ActionType = typename EnvironmentType::Action

Definition at line 39 of file one_step_q_learning_worker.hpp.

◆ StateType

using StateType = typename EnvironmentType::State

Definition at line 38 of file one_step_q_learning_worker.hpp.

◆ TransitionType

using TransitionType = std::tuple<StateType, ActionType, double, StateType>

Definition at line 40 of file one_step_q_learning_worker.hpp.

Constructor & Destructor Documentation

◆ OneStepQLearningWorker() [1/3]

OneStepQLearningWorker ( const UpdaterType &  updater,
const EnvironmentType &  environment,
const TrainingConfig &  config,
bool  deterministic 
)
inline

Construct one step Q-Learning worker with the given parameters and environment.

Parameters
updater: The optimizer.
environment: The reinforcement learning task.
config: Hyper-parameters.
deterministic: Whether it should be deterministic.

Definition at line 51 of file one_step_q_learning_worker.hpp.

◆ OneStepQLearningWorker() [2/3]

OneStepQLearningWorker ( const OneStepQLearningWorker< EnvironmentType, NetworkType, UpdaterType, PolicyType > &  other)
inline

Copy another OneStepQLearningWorker.

Parameters
other: OneStepQLearningWorker to copy.

Definition at line 71 of file one_step_q_learning_worker.hpp.

◆ OneStepQLearningWorker() [3/3]

OneStepQLearningWorker ( OneStepQLearningWorker< EnvironmentType, NetworkType, UpdaterType, PolicyType > &&  other)
inline

Take ownership of another OneStepQLearningWorker.

Parameters
other: OneStepQLearningWorker to take ownership of.

Definition at line 101 of file one_step_q_learning_worker.hpp.

◆ ~OneStepQLearningWorker()

~OneStepQLearningWorker ()
inline

Clean memory.

Definition at line 203 of file one_step_q_learning_worker.hpp.

Member Function Documentation

◆ Initialize()

void Initialize ( NetworkType &  learningNetwork)
inline

Initialize the worker.

Parameters
learningNetwork: The shared network.

Definition at line 214 of file one_step_q_learning_worker.hpp.

◆ operator=() [1/2]

OneStepQLearningWorker& operator= ( const OneStepQLearningWorker< EnvironmentType, NetworkType, UpdaterType, PolicyType > &  other)
inline

Copy another OneStepQLearningWorker.

Parameters
other: OneStepQLearningWorker to copy.

Definition at line 131 of file one_step_q_learning_worker.hpp.

◆ operator=() [2/2]

OneStepQLearningWorker& operator= ( OneStepQLearningWorker< EnvironmentType, NetworkType, UpdaterType, PolicyType > &&  other)
inline

Take ownership of another OneStepQLearningWorker.

Parameters
other: OneStepQLearningWorker to take ownership of.

Definition at line 168 of file one_step_q_learning_worker.hpp.

◆ Step()

bool Step ( NetworkType &  learningNetwork,
NetworkType &  targetNetwork,
size_t &  totalSteps,
PolicyType &  policy,
double &  totalReward 
)
inline

The agent will execute one step.

Parameters
learningNetwork: The shared learning network.
targetNetwork: The shared target network.
totalSteps: The shared counter for total steps.
policy: The shared behavior policy.
totalReward: This will be the episode return if the episode ends after this step; otherwise it is invalid.
Returns
Whether the current episode ends after this step.

Definition at line 243 of file one_step_q_learning_worker.hpp.

References TrainingConfig::Discount(), TrainingConfig::GradientLimit(), TrainingConfig::StepLimit(), TrainingConfig::StepSize(), TrainingConfig::TargetNetworkSyncInterval(), and TrainingConfig::UpdateInterval().


The documentation for this class was generated from the following files:
  • /home/jenkins-mlpack/mlpack.org/_src/mlpack-3.2.1/src/mlpack/methods/reinforcement_learning/async_learning.hpp
  • /home/jenkins-mlpack/mlpack.org/_src/mlpack-3.2.1/src/mlpack/methods/reinforcement_learning/worker/one_step_q_learning_worker.hpp