mlpack  git-master
AdaGradUpdate Class Reference

Implementation of the AdaGrad update policy. More...

Public Member Functions

 AdaGradUpdate (const double epsilon=1e-8)
 Construct the AdaGrad update policy with given epsilon parameter. More...

 
double Epsilon () const
 Get the value used to initialise the squared gradient parameter. More...

 
double & Epsilon ()
 Modify the value used to initialise the squared gradient parameter. More...

 
void Initialize (const size_t rows, const size_t cols)
 The Initialize method is called by the SGD optimizer before the start of the iteration update process. More...

 
void Update (arma::mat &iterate, const double stepSize, const arma::mat &gradient)
 Update step for SGD. More...

 

Detailed Description

Implementation of the AdaGrad update policy.

The AdaGrad update policy chooses the learning rate dynamically by adapting to the data, which eliminates the need to tune the learning rate manually.

For more information, see the following.

@article{duchi2011adaptive,
  author  = {Duchi, John and Hazan, Elad and Singer, Yoram},
  title   = {Adaptive subgradient methods for online learning and
             stochastic optimization},
  journal = {Journal of Machine Learning Research},
  volume  = {12},
  number  = {Jul},
  pages   = {2121--2159},
  year    = {2011}
}

Definition at line 41 of file ada_grad_update.hpp.

Constructor & Destructor Documentation

◆ AdaGradUpdate()

AdaGradUpdate ( const double  epsilon = 1e-8)
inline

Construct the AdaGrad update policy with given epsilon parameter.

Parameters
epsilon: The epsilon value used to initialise the squared gradient parameter.

Definition at line 50 of file ada_grad_update.hpp.

Member Function Documentation

◆ Epsilon() [1/2]

double Epsilon ( ) const
inline

Get the value used to initialise the squared gradient parameter.

Definition at line 88 of file ada_grad_update.hpp.

◆ Epsilon() [2/2]

double& Epsilon ( )
inline

Modify the value used to initialise the squared gradient parameter.

Definition at line 90 of file ada_grad_update.hpp.

◆ Initialize()

void Initialize ( const size_t  rows,
const size_t  cols 
)
inline

The Initialize method is called by the SGD optimizer before the start of the iteration update process.

In the AdaGrad update policy, the squared gradient matrix is initialized to a zero matrix of the same size as the gradient matrix (see mlpack::optimization::SGD::Optimizer).

Parameters
rows: Number of rows in the gradient matrix.
cols: Number of columns in the gradient matrix.

Definition at line 64 of file ada_grad_update.hpp.

◆ Update()

void Update ( arma::mat &  iterate,
const double  stepSize,
const arma::mat &  gradient 
)
inline

Update step for SGD.

The AdaGrad update adapts the learning rate per parameter, performing larger updates for sparse (infrequently updated) parameters and smaller updates for frequently updated ones.

Parameters
iterate: Parameters that minimize the function.
stepSize: Step size to be used for the given iteration.
gradient: The gradient matrix.

Definition at line 79 of file ada_grad_update.hpp.


The documentation for this class was generated from the following file: ada_grad_update.hpp