mlpack
git-master

Exponential backoff stepsize reduction policy for parallel SGD.

Public Member Functions

ExponentialBackoff (const size_t firstBackoffEpoch, const double step, const double beta)
    Member initializer constructor to construct the exponential backoff policy with the required parameters.

double StepSize (const size_t numEpoch)
    Get the step size for the current gradient update.
Detailed Description
Exponential backoff stepsize reduction policy for parallel SGD.
For more information, see the following.
@misc{1106.5730,
  Author = {Feng Niu and Benjamin Recht and Christopher Re and Stephen J. Wright},
  Title  = {HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent},
  Year   = {2011},
  Eprint = {arXiv:1106.5730},
}
This stepsize update scheme gives a robust O(1/k) convergence rate for the parallel SGD implementation.
Definition at line 38 of file exponential_backoff.hpp.
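To make the schedule concrete, here is a minimal standalone sketch of such a policy. It assumes the backoff rule described in the HOGWILD! paper: the step size shrinks by a factor of beta at each backoff, and the number of epochs between successive backoffs grows by 1/beta, which together produce the O(1/k) decay. The class name `BackoffSketch` and the exact bookkeeping are illustrative assumptions, not mlpack's implementation.

```cpp
#include <cstddef>

// Illustrative exponential-backoff decay policy (assumption: matches the
// documented behaviour; not mlpack's exact code).
class BackoffSketch
{
 public:
  // firstBackoffEpoch: updates to run before the first backoff.
  // step: the initial stepsize (gamma).
  // beta: the reduction factor, in (0, 1).
  BackoffSketch(const size_t firstBackoffEpoch,
                const double step,
                const double beta) :
      cutoffEpoch((double) firstBackoffEpoch),
      interval((double) firstBackoffEpoch),
      step(step),
      beta(beta)
  { }

  // Return the step size for the update in epoch numEpoch.
  // Epochs are assumed to be queried in increasing order.
  double StepSize(const size_t numEpoch)
  {
    if ((double) numEpoch >= cutoffEpoch)
    {
      step *= beta;            // Back off: shrink the step size.
      interval /= beta;        // The gap until the next backoff grows...
      cutoffEpoch += interval; // ...so the schedule decays like O(1/k).
    }
    return step;
  }

 private:
  double cutoffEpoch; // Epoch at which the next backoff occurs.
  double interval;    // Current gap between backoffs.
  double step;        // Current step size.
  double beta;        // Reduction factor in (0, 1).
};
```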
Constructor & Destructor Documentation
◆ ExponentialBackoff()
ExponentialBackoff (const size_t firstBackoffEpoch, const double step, const double beta)    [inline]
Member initializer constructor to construct the exponential backoff policy with the required parameters.
Parameters

    firstBackoffEpoch    The number of updates to run before the first stepsize backoff.
    step                 The initial stepsize (gamma).
    beta                 The reduction factor. This should be a value in the range (0, 1).
Definition at line 50 of file exponential_backoff.hpp.
Member Function Documentation
◆ StepSize()
double StepSize (const size_t numEpoch)    [inline]
Get the step size for the current gradient update.
Parameters

    numEpoch    The iteration number of the current update.
 Returns
 The stepsize for the current iteration.
Definition at line 65 of file exponential_backoff.hpp.
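As a usage sketch, the function below drives a `StepSize`-style schedule for many epochs to show the asymptotic decay. The update rule is the same assumption as above (shrink by beta, backoff interval grows by 1/beta), chosen to match the O(1/k) rate quoted in the description; the function name and signature are hypothetical.

```cpp
#include <cstddef>

// Run the assumed backoff schedule for `epochs` updates and return the
// final step size.  Illustrative only; not mlpack's implementation.
double StepAfter(const size_t epochs,
                 const size_t firstBackoffEpoch,
                 double step,
                 const double beta)
{
  double cutoff = (double) firstBackoffEpoch;   // Next backoff epoch.
  double interval = (double) firstBackoffEpoch; // Gap between backoffs.
  for (size_t epoch = 1; epoch <= epochs; ++epoch)
  {
    if ((double) epoch >= cutoff)
    {
      step *= beta;      // Shrink the step size by beta.
      interval /= beta;  // Wait geometrically longer each time.
      cutoff += interval;
    }
  }
  return step;
}
```

Because the step shrinks by beta exactly when the elapsed epochs have grown by roughly 1/beta, the product of step size and epoch count stays bounded, which is the O(1/k) behaviour the description refers to.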
The documentation for this class was generated from the following file:
 /var/www/www.mlpack.org/mlpackgit/src/mlpack/core/optimizers/parallel_sgd/decay_policies/exponential_backoff.hpp
Generated by Doxygen 1.8.13