ReparametrizationType< InputType, OutputType > Class Template Reference

Implementation of the Reparametrization layer class. More...

Inheritance diagram for ReparametrizationType< InputType, OutputType >:

Public Member Functions

 ReparametrizationType (const bool stochastic=true, const bool includeKl=true, const double beta=1)
 Create the Reparametrization layer object. More...

 
 ReparametrizationType (const ReparametrizationType &layer)
 Copy Constructor. More...

 
 ReparametrizationType (ReparametrizationType &&layer)
 Move Constructor. More...

 
virtual ~ReparametrizationType ()
 
void Backward (const InputType &input, const OutputType &gy, OutputType &g)
 Ordinary feed backward pass of a neural network, calculating the function f(x) by propagating x backwards through f. More...

 
double Beta () const
 Get the value of the beta hyperparameter. More...

 
double & Beta ()
 Modify the value of the beta hyperparameter. More...

 
ReparametrizationType * Clone () const
 Clone the ReparametrizationType object. More...

 
void ComputeOutputDimensions ()
 Compute the output dimensions. More...

 
void Forward (const InputType &input, OutputType &output)
 Ordinary feed forward pass of a neural network, evaluating the function f(x) by propagating the activity forward through f. More...

 
bool IncludeKL () const
 Get the value of the includeKl parameter. More...

 
bool & IncludeKL ()
 Modify the value of the includeKl parameter. More...

 
double Loss ()
 Get the KL divergence with standard normal. More...

 
ReparametrizationType & operator= (const ReparametrizationType &layer)
 Copy assignment operator. More...

 
ReparametrizationType & operator= (ReparametrizationType &&layer)
 Move assignment operator. More...

 
template<typename Archive>
void serialize (Archive &ar, const uint32_t)
 Serialize the layer. More...

 
bool Stochastic () const
 Get the value of the stochastic parameter. More...

 
bool & Stochastic ()
 Modify the value of the stochastic parameter. More...

 
- Public Member Functions inherited from Layer< InputType, OutputType >
 Layer ()
 Default constructor. More...

 
 Layer (const Layer &layer)
 Copy constructor. This is not responsible for copying weights! More...

 
 Layer (Layer &&layer)
 Move constructor. This is not responsible for moving weights! More...

 
virtual ~Layer ()
 Default destructor. More...

 
virtual void Backward (const InputType &, const InputType &, InputType &)
 Performs a backpropagation step through the layer, with respect to the given input. More...

 
virtual void Forward (const InputType &, InputType &)
 Takes an input object, and computes the corresponding output of the layer. More...

 
virtual void Forward (const InputType &, const InputType &)
 Takes an input and output object, and computes the corresponding loss of the layer. More...

 
virtual void Gradient (const InputType &, const InputType &, InputType &)
 Computing the gradient of the layer with respect to its own input. More...

 
const std::vector< size_t > & InputDimensions () const
 Get the input dimensions. More...

 
std::vector< size_t > & InputDimensions ()
 Modify the input dimensions. More...

 
virtual Layer & operator= (const Layer &layer)
 Copy assignment operator. This is not responsible for copying weights! More...

 
virtual Layer & operator= (Layer &&layer)
 Move assignment operator. This is not responsible for moving weights! More...

 
const std::vector< size_t > & OutputDimensions ()
 Get the output dimensions. More...

 
virtual size_t OutputSize () final
 Get the number of elements in the output from this layer. More...

 
virtual const InputType & Parameters () const
 Get the parameters. More...

 
virtual InputType & Parameters ()
 Set the parameters. More...

 
void serialize (Archive &ar, const uint32_t)
 Serialize the layer. More...

 
virtual void SetWeights (typename InputType::elem_type *)
 Reset the layer parameter. More...

 
virtual bool const & Training () const
 Get whether the layer is currently in training mode. More...

 
virtual bool & Training ()
 Modify whether the layer is currently in training mode. More...

 
virtual size_t WeightSize () const
 Get the total number of trainable weights in the layer. More...

 

Additional Inherited Members

- Protected Attributes inherited from Layer< InputType, OutputType >
std::vector< size_t > inputDimensions
 Logical input dimensions of each point. More...

 
std::vector< size_t > outputDimensions
 Logical output dimensions of each point. More...

 
bool training
 If true, the layer is in training mode; otherwise, it is in testing mode. More...

 
bool validOutputDimensions
 This is true if ComputeOutputDimensions() has been called, and outputDimensions can be considered to be up-to-date. More...

 

Detailed Description


template<typename InputType = arma::mat, typename OutputType = arma::mat>

class mlpack::ann::ReparametrizationType< InputType, OutputType >

Implementation of the Reparametrization layer class.

This layer samples from the given parameters of a normal distribution.

This class also supports beta-VAE, a state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner.

For more information, refer to the following paper.

@article{ICLR2017,
  title   = {beta-VAE: Learning basic visual concepts with a constrained
             variational framework},
  author  = {Irina Higgins and Loic Matthey and Arka Pal and Christopher
             Burgess and Xavier Glorot and Matthew Botvinick and Shakir
             Mohamed and Alexander Lerchner},
  journal = {2017 International Conference on Learning Representations
             (ICLR)},
  year    = {2017},
  url     = {https://deepmind.com/research/publications/beta-VAE-Learning-Basic-Visual-Concepts-with-a-Constrained-Variational-Framework}
}
Template Parameters
    InputType   Type of the input data (arma::colvec, arma::mat, arma::sp_mat or arma::cube).
    OutputType  Type of the output data (arma::colvec, arma::mat, arma::sp_mat or arma::cube).

Definition at line 57 of file reparametrization.hpp.

Constructor & Destructor Documentation

◆ ReparametrizationType() [1/3]

ReparametrizationType ( const bool  stochastic = true,
const bool  includeKl = true,
const double  beta = 1 
)

Create the Reparametrization layer object.

Note that the inputs are expected to be the parameters of the normal distribution; see the documentation for Forward().

Parameters
    stochastic  Whether we want a random sample or a constant.
    includeKl   Whether to include the KL loss in the backward function.
    beta        The beta (hyper)parameter for beta-VAE mentioned above.

Referenced by ReparametrizationType< InputType, OutputType >::Clone(), and ReparametrizationType< InputType, OutputType >::~ReparametrizationType().
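The role of the beta hyperparameter can be illustrated with a small self-contained sketch (this is not mlpack code; the function name is hypothetical). In the beta-VAE objective, beta simply scales the KL term that this layer contributes to the network's loss, with beta = 1 recovering the standard VAE objective:

```cpp
#include <cmath>

// Hypothetical sketch, not part of mlpack: the beta hyperparameter weights
// the KL divergence term relative to the reconstruction term, as in
// beta-VAE. beta > 1 penalizes the KL term more strongly, which the
// beta-VAE paper reports encourages disentangled latent representations.
double TotalLoss(const double reconstructionLoss,
                 const double klLoss,
                 const double beta)
{
  return reconstructionLoss + beta * klLoss;
}
```

For example, with a reconstruction loss of 0.75 and a KL loss of 0.20, `TotalLoss(0.75, 0.20, 1.0)` gives 0.95, while beta = 4 gives 1.55.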

◆ ~ReparametrizationType()

◆ ReparametrizationType() [2/3]

ReparametrizationType ( const ReparametrizationType< InputType, OutputType > &  layer)

Copy Constructor.

◆ ReparametrizationType() [3/3]

ReparametrizationType ( ReparametrizationType< InputType, OutputType > &&  layer)

Move Constructor.

Member Function Documentation

◆ Backward()

void Backward ( const InputType &  input,
const OutputType &  gy,
OutputType &  g 
)

Ordinary feed backward pass of a neural network, calculating the function f(x) by propagating x backwards through f.

Using the results from the feed forward pass.

Parameters
    input  The propagated input activation.
    gy     The backpropagated error.
    g      The calculated gradient.

Referenced by ReparametrizationType< InputType, OutputType >::~ReparametrizationType().

◆ Beta() [1/2]

double Beta ( ) const
inline

Get the value of the beta hyperparameter.

Definition at line 140 of file reparametrization.hpp.

◆ Beta() [2/2]

double& Beta ( )
inline

Modify the value of the beta hyperparameter.

Definition at line 142 of file reparametrization.hpp.

◆ Clone()

ReparametrizationType* Clone ( ) const
inlinevirtual

Clone the ReparametrizationType object.

This handles polymorphism correctly.

Implements Layer< InputType, OutputType >.

Definition at line 77 of file reparametrization.hpp.

References ReparametrizationType< InputType, OutputType >::ReparametrizationType().

◆ ComputeOutputDimensions()

void ComputeOutputDimensions ( )
inlinevirtual

Compute the output dimensions.

This should be overloaded if the layer is meant to work on higher-dimensional objects. When this is called, it is a safe assumption that InputDimensions() is correct.

Reimplemented from Layer< InputType, OutputType >.

Definition at line 144 of file reparametrization.hpp.

References Layer< InputType, OutputType >::inputDimensions, Layer< InputType, OutputType >::outputDimensions, and ReparametrizationType< InputType, OutputType >::serialize().

◆ Forward()

void Forward ( const InputType &  input,
OutputType &  output 
)

Ordinary feed forward pass of a neural network, evaluating the function f(x) by propagating the activity forward through f.

Note that input is expected to be the parameters of the distribution. The first input.n_rows / 2 elements correspond to the pre-standard-deviation values for each output element, and the second input.n_rows / 2 elements correspond to the means for each element. Thus, the output size of the layer is the number of input elements divided by 2.

Parameters
    input   Input data used for evaluating the specified function.
    output  Resulting output activation.

Referenced by ReparametrizationType< InputType, OutputType >::~ReparametrizationType().
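The input split described above can be sketched with a self-contained illustration of the reparameterization trick (an assumption-laden sketch, not mlpack's implementation; in particular, the softplus mapping from pre-standard-deviation values to positive standard deviations is one common choice and is assumed here):

```cpp
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Illustrative sketch of the reparameterization trick. The input holds 2n
// values: the first n are pre-standard-deviation values, the second n are
// means. The output has n elements:
//   output[i] = mean[i] + stddev[i] * eps,   eps ~ N(0, 1).
std::vector<double> Reparameterize(const std::vector<double>& input,
                                   const bool stochastic,
                                   std::mt19937& rng)
{
  const std::size_t n = input.size() / 2;
  std::normal_distribution<double> gaussian(0.0, 1.0);
  std::vector<double> output(n);
  for (std::size_t i = 0; i < n; ++i)
  {
    // Softplus maps the unconstrained pre-stddev value to a positive one.
    const double stddev = std::log1p(std::exp(input[i]));
    const double mean = input[n + i];
    // When not stochastic, eps = 0 and the output is simply the mean.
    const double eps = stochastic ? gaussian(rng) : 0.0;
    output[i] = mean + stddev * eps;
  }
  return output;
}
```

With `stochastic == false`, the output reduces to the mean half of the input; with `stochastic == true`, each element is a sample from the corresponding Gaussian.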

◆ IncludeKL() [1/2]

bool IncludeKL ( ) const
inline

Get the value of the includeKl parameter.

Definition at line 135 of file reparametrization.hpp.

◆ IncludeKL() [2/2]

bool& IncludeKL ( )
inline

Modify the value of the includeKl parameter.

Definition at line 137 of file reparametrization.hpp.

◆ Loss()

double Loss ( )
virtual

Get the KL divergence with standard normal.

Reimplemented from Layer< InputType, OutputType >.

Referenced by ReparametrizationType< InputType, OutputType >::~ReparametrizationType().
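For a diagonal Gaussian N(mean, stddev^2), the KL divergence against the standard normal has the usual closed form, which can be sketched as follows (an illustrative stand-alone computation; mlpack's Loss() derives its value from the most recent Forward() pass):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Closed-form KL divergence between a diagonal Gaussian N(mean, stddev^2)
// and the standard normal N(0, I), summed over all latent dimensions:
//   KL = 0.5 * sum_i (stddev_i^2 + mean_i^2 - 1 - log(stddev_i^2))
double KlToStandardNormal(const std::vector<double>& means,
                          const std::vector<double>& stddevs)
{
  double kl = 0.0;
  for (std::size_t i = 0; i < means.size(); ++i)
  {
    const double var = stddevs[i] * stddevs[i];
    kl += 0.5 * (var + means[i] * means[i] - 1.0 - std::log(var));
  }
  return kl;
}
```

The divergence is zero exactly when the distribution already is the standard normal (all means 0, all standard deviations 1), and positive otherwise.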

◆ operator=() [1/2]

ReparametrizationType& operator= ( const ReparametrizationType< InputType, OutputType > &  layer)

Copy assignment operator.

◆ operator=() [2/2]

ReparametrizationType& operator= ( ReparametrizationType< InputType, OutputType > &&  layer)

Move assignment operator.

◆ serialize()

void serialize ( Archive &  ar,
const uint32_t   
)

Serialize the layer.

◆ Stochastic() [1/2]

bool Stochastic ( ) const
inline

Get the value of the stochastic parameter.

Definition at line 130 of file reparametrization.hpp.

◆ Stochastic() [2/2]

bool& Stochastic ( )
inline

Modify the value of the stochastic parameter.

Definition at line 132 of file reparametrization.hpp.


The documentation for this class was generated from the following file:
  • /home/jenkins-mlpack/mlpack.org/_src/mlpack-git/src/mlpack/methods/ann/layer/not_adapted/reparametrization.hpp