NStepQLearningWorker< EnvironmentType, NetworkType, UpdaterType, PolicyType > Class Template Reference

Forward declaration of NStepQLearningWorker.

Public Types

using ActionType = typename EnvironmentType::Action
 
using StateType = typename EnvironmentType::State
 
using TransitionType = std::tuple< StateType, ActionType, double, StateType >
 

Public Member Functions

 NStepQLearningWorker (const UpdaterType &updater, const EnvironmentType &environment, const TrainingConfig &config, bool deterministic)
 Construct N-step Q-Learning worker with the given parameters and environment.

 
 NStepQLearningWorker (const NStepQLearningWorker &other)
 Copy another NStepQLearningWorker.

 
 NStepQLearningWorker (NStepQLearningWorker &&other)
 Take ownership of another NStepQLearningWorker.

 
 ~NStepQLearningWorker ()
 Clean memory.

 
void Initialize (NetworkType &learningNetwork)
 Initialize the worker.

 
NStepQLearningWorker& operator= (const NStepQLearningWorker &other)
 Copy another NStepQLearningWorker.

 
NStepQLearningWorker& operator= (NStepQLearningWorker &&other)
 Take ownership of another NStepQLearningWorker.

 
bool Step (NetworkType &learningNetwork, NetworkType &targetNetwork, size_t &totalSteps, PolicyType &policy, double &totalReward)
 The agent will execute one step.

 

Detailed Description


template<typename EnvironmentType, typename NetworkType, typename UpdaterType, typename PolicyType>

class mlpack::rl::NStepQLearningWorker< EnvironmentType, NetworkType, UpdaterType, PolicyType >

Forward declaration of NStepQLearningWorker.

N-step Q-Learning worker.

Template Parameters
  EnvironmentType    The type of the reinforcement learning task.
  NetworkType        The type of the network model.
  UpdaterType        The type of the optimizer.
  PolicyType         The type of the behavior policy.

Definition at line 179 of file async_learning.hpp.

Member Typedef Documentation

◆ ActionType

using ActionType = typename EnvironmentType::Action

Definition at line 39 of file n_step_q_learning_worker.hpp.

◆ StateType

using StateType = typename EnvironmentType::State

Definition at line 38 of file n_step_q_learning_worker.hpp.

◆ TransitionType

using TransitionType = std::tuple<StateType, ActionType, double, StateType>

Definition at line 40 of file n_step_q_learning_worker.hpp.

Constructor & Destructor Documentation

◆ NStepQLearningWorker() [1/3]

NStepQLearningWorker ( const UpdaterType &  updater,
const EnvironmentType &  environment,
const TrainingConfig &  config,
bool  deterministic 
)
inline

Construct N-step Q-Learning worker with the given parameters and environment.

Parameters
  updater          The optimizer.
  environment      The reinforcement learning task.
  config           Hyper-parameters.
  deterministic    Whether the worker should act deterministically.

Definition at line 51 of file n_step_q_learning_worker.hpp.

◆ NStepQLearningWorker() [2/3]

NStepQLearningWorker ( const NStepQLearningWorker< EnvironmentType, NetworkType, UpdaterType, PolicyType > &  other)
inline

Copy another NStepQLearningWorker.

Parameters
  other    The NStepQLearningWorker to copy.

Definition at line 71 of file n_step_q_learning_worker.hpp.

◆ NStepQLearningWorker() [3/3]

NStepQLearningWorker ( NStepQLearningWorker< EnvironmentType, NetworkType, UpdaterType, PolicyType > &&  other)
inline

Take ownership of another NStepQLearningWorker.

Parameters
  other    The NStepQLearningWorker to take ownership of.

Definition at line 101 of file n_step_q_learning_worker.hpp.

◆ ~NStepQLearningWorker()

~NStepQLearningWorker ( )
inline

Clean memory.

Definition at line 203 of file n_step_q_learning_worker.hpp.

Member Function Documentation

◆ Initialize()

void Initialize ( NetworkType &  learningNetwork)
inline

Initialize the worker.

Parameters
  learningNetwork    The shared network.

Definition at line 214 of file n_step_q_learning_worker.hpp.

◆ operator=() [1/2]

NStepQLearningWorker& operator= ( const NStepQLearningWorker< EnvironmentType, NetworkType, UpdaterType, PolicyType > &  other)
inline

Copy another NStepQLearningWorker.

Parameters
  other    The NStepQLearningWorker to copy.

Definition at line 131 of file n_step_q_learning_worker.hpp.

◆ operator=() [2/2]

NStepQLearningWorker& operator= ( NStepQLearningWorker< EnvironmentType, NetworkType, UpdaterType, PolicyType > &&  other)
inline

Take ownership of another NStepQLearningWorker.

Parameters
  other    The NStepQLearningWorker to take ownership of.

Definition at line 168 of file n_step_q_learning_worker.hpp.

◆ Step()

bool Step ( NetworkType &  learningNetwork,
NetworkType &  targetNetwork,
size_t &  totalSteps,
PolicyType &  policy,
double &  totalReward 
)
inline

The agent will execute one step.

Parameters
  learningNetwork    The shared learning network.
  targetNetwork      The shared target network.
  totalSteps         The shared counter for total steps.
  policy             The shared behavior policy.
  totalReward        Set to the episode return if the episode ends after this step; otherwise invalid.
Returns
Whether the current episode ends after this step.

Definition at line 243 of file n_step_q_learning_worker.hpp.

References TrainingConfig::Discount(), TrainingConfig::GradientLimit(), TrainingConfig::StepLimit(), TrainingConfig::StepSize(), TrainingConfig::TargetNetworkSyncInterval(), and TrainingConfig::UpdateInterval().


The documentation for this class was generated from the following files:
  • /home/jenkins-mlpack/mlpack.org/_src/mlpack-3.2.1/src/mlpack/methods/reinforcement_learning/async_learning.hpp
  • /home/jenkins-mlpack/mlpack.org/_src/mlpack-3.2.1/src/mlpack/methods/reinforcement_learning/worker/n_step_q_learning_worker.hpp