VirtualBatchNormType< InputType, OutputType > Class Template Reference

Declaration of the VirtualBatchNorm layer class. More...

Inheritance diagram for VirtualBatchNormType< InputType, OutputType >:

Public Member Functions

 VirtualBatchNormType ()
 Create the VirtualBatchNorm object. More...

 
 VirtualBatchNormType (const InputType &referenceBatch, const size_t size, const double eps=1e-8)
 Create the VirtualBatchNorm layer object for a specified number of input units. More...

 
void Backward (const InputType &, const OutputType &gy, OutputType &g)
 Backward pass through the layer. More...

 
VirtualBatchNormType * Clone () const
 Clone the VirtualBatchNormType object. More...

 
double Epsilon () const
 Get the epsilon value. More...

 
void Forward (const InputType &input, OutputType &output)
 Forward pass of the Virtual Batch Normalization layer. More...

 
void Gradient (const InputType &, const OutputType &error, OutputType &gradient)
 Calculate the gradient using the output delta and the input activations. More...

 
size_t InSize () const
 Get the number of input units. More...

 
OutputType const & Parameters () const
 Get the parameters. More...

 
OutputType & Parameters ()
 Modify the parameters. More...

 
template<typename Archive>
void serialize (Archive &ar, const uint32_t)
 Serialize the layer. More...

 
void SetWeights (typename OutputType::elem_type *weightsPtr)
 Reset the layer parameters. More...

 
const size_t WeightSize () const
 Get the total number of trainable weights in the layer. More...

 
- Public Member Functions inherited from Layer< InputType, OutputType >
 Layer ()
 Default constructor. More...

 
 Layer (const Layer &layer)
 Copy constructor. This is not responsible for copying weights! More...

 
 Layer (Layer &&layer)
 Move constructor. This is not responsible for moving weights! More...

 
virtual ~Layer ()
 Default destructor. More...

 
virtual void Backward (const InputType &, const InputType &, InputType &)
 Performs a backpropagation step through the layer, with respect to the given input. More...

 
virtual void ComputeOutputDimensions ()
 Compute the output dimensions. More...

 
virtual void CustomInitialize (InputType &, const size_t)
 Override the weight matrix of the layer. More...

 
virtual void Forward (const InputType &, InputType &)
 Takes an input object, and computes the corresponding output of the layer. More...

 
virtual void Forward (const InputType &, const InputType &)
 Takes an input and output object, and computes the corresponding loss of the layer. More...

 
virtual void Gradient (const InputType &, const InputType &, InputType &)
 Computing the gradient of the layer with respect to its own input. More...

 
const std::vector< size_t > & InputDimensions () const
 Get the input dimensions. More...

 
std::vector< size_t > & InputDimensions ()
 Modify the input dimensions. More...

 
virtual double Loss ()
 Get the layer loss. More...

 
virtual Layer & operator= (const Layer &layer)
 Copy assignment operator. This is not responsible for copying weights! More...

 
virtual Layer & operator= (Layer &&layer)
 Move assignment operator. This is not responsible for moving weights! More...

 
const std::vector< size_t > & OutputDimensions ()
 Get the output dimensions. More...

 
virtual size_t OutputSize () final
 Get the number of elements in the output from this layer. More...

 
void serialize (Archive &ar, const uint32_t)
 Serialize the layer. More...

 
virtual void SetWeights (typename InputType::elem_type *)
 Reset the layer parameter. More...

 
virtual bool const & Training () const
 Get whether the layer is currently in training mode. More...

 
virtual bool & Training ()
 Modify whether the layer is currently in training mode. More...

 

Additional Inherited Members

- Protected Attributes inherited from Layer< InputType, OutputType >
std::vector< size_t > inputDimensions
 Logical input dimensions of each point. More...

 
std::vector< size_t > outputDimensions
 Logical output dimensions of each point. More...

 
bool training
 If true, the layer is in training mode; otherwise, it is in testing mode. More...

 
bool validOutputDimensions
 This is true if ComputeOutputDimensions() has been called, and outputDimensions can be considered to be up-to-date. More...

 

Detailed Description


template<typename InputType = arma::mat, typename OutputType = arma::mat>

class mlpack::ann::VirtualBatchNormType< InputType, OutputType >

Declaration of the VirtualBatchNorm layer class.

Instead of using the statistics of the current mini-batch for normalization, it uses a reference subset of the data to compute the normalization statistics.

For more information, refer to the following paper:

@article{Salimans2016,
  author = {Tim Salimans and Ian Goodfellow and Wojciech Zaremba and
            Vicki Cheung and Alec Radford and Xi Chen},
  title  = {Improved Techniques for Training GANs},
  year   = {2016},
  url    = {https://arxiv.org/abs/1606.03498},
}
Template Parameters
InputType  Type of the input data (arma::colvec, arma::mat, arma::sp_mat or arma::cube).
OutputType  Type of the output data (arma::colvec, arma::mat, arma::sp_mat or arma::cube).

Definition at line 48 of file virtual_batch_norm.hpp.
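To illustrate the idea, the following is a minimal self-contained sketch (not mlpack's implementation, which operates on Armadillo matrices and also folds the current input into the statistics): the mean and variance used for normalization are computed once from a fixed reference batch rather than from each incoming mini-batch.

```cpp
#include <cmath>
#include <vector>

// Conceptual sketch of virtual batch normalization for a single channel.
// The normalization statistics come from a fixed reference batch, so every
// input is normalized consistently regardless of its mini-batch.
std::vector<double> VirtualBatchNormSketch(
    const std::vector<double>& referenceBatch,
    const std::vector<double>& input,
    const double eps = 1e-8)
{
  // Mean and variance of the reference batch, computed once.
  double mean = 0.0, var = 0.0;
  for (double x : referenceBatch)
    mean += x;
  mean /= referenceBatch.size();
  for (double x : referenceBatch)
    var += (x - mean) * (x - mean);
  var /= referenceBatch.size();

  // Normalize the input with the reference statistics.
  std::vector<double> out;
  for (double x : input)
    out.push_back((x - mean) / std::sqrt(var + eps));
  return out;
}
```

Note that the reference batch's own elements normalize to zero mean and unit variance under these statistics; an arbitrary input is simply measured against them.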

Constructor & Destructor Documentation

◆ VirtualBatchNormType() [1/2]

Create the VirtualBatchNorm object.

Referenced by VirtualBatchNormType< InputType, OutputType >::Clone().

◆ VirtualBatchNormType() [2/2]

VirtualBatchNormType ( const InputType &  referenceBatch,
const size_t  size,
const double  eps = 1e-8 
)

Create the VirtualBatchNorm layer object for a specified number of input units.

Parameters
referenceBatch  The data from which the normalization statistics are computed.
size  The number of input units / channels.
eps  The epsilon added to variance to ensure numerical stability.

Member Function Documentation

◆ Backward()

void Backward ( const InputType &  ,
const OutputType &  gy,
OutputType &  g 
)

Backward pass through the layer.

Parameters
input  The input activations (unused).
gy  The backpropagated error.
g  The calculated gradient.

Referenced by VirtualBatchNormType< InputType, OutputType >::Clone().

◆ Clone()

◆ Epsilon()

double Epsilon ( ) const
inline

Get the epsilon value.

Definition at line 120 of file virtual_batch_norm.hpp.

◆ Forward()

void Forward ( const InputType &  input,
OutputType &  output 
)

Forward pass of the Virtual Batch Normalization layer.

Transforms the input data into zero mean and unit variance, scales the data by a factor gamma and shifts it by beta.

Parameters
input  Input data for the layer.
output  Resulting output activations.

Referenced by VirtualBatchNormType< InputType, OutputType >::Clone().
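The transform described above can be sketched as follows; this is a simplified single-channel illustration, assuming statistics are taken from the reference batch alone (the actual layer also blends in the current input's statistics), with learnable scale gamma and shift beta:

```cpp
#include <cmath>
#include <vector>

// Simplified forward pass of a virtual batch norm layer: normalize to zero
// mean and unit variance using reference-batch statistics, then scale by
// gamma and shift by beta.
std::vector<double> VbnForward(const std::vector<double>& referenceBatch,
                               const std::vector<double>& input,
                               const double gamma = 1.0,
                               const double beta = 0.0,
                               const double eps = 1e-8)
{
  double mean = 0.0, var = 0.0;
  for (double x : referenceBatch)
    mean += x;
  mean /= referenceBatch.size();
  for (double x : referenceBatch)
    var += (x - mean) * (x - mean);
  var /= referenceBatch.size();

  std::vector<double> out;
  for (double x : input)
    out.push_back(gamma * (x - mean) / std::sqrt(var + eps) + beta);
  return out;
}
```

With gamma = 1 and beta = 0 this is pure normalization; training adjusts gamma and beta so the layer can undo the normalization where that helps.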

◆ Gradient()

void Gradient ( const InputType &  ,
const OutputType &  error,
OutputType &  gradient 
)

Calculate the gradient using the output delta and the input activations.

Parameters
input  The input activations (unused).
error  The calculated error.
gradient  The calculated gradient.

Referenced by VirtualBatchNormType< InputType, OutputType >::Clone().

◆ InSize()

size_t InSize ( ) const
inline

Get the number of input units.

Definition at line 117 of file virtual_batch_norm.hpp.

◆ Parameters() [1/2]

OutputType const& Parameters ( ) const
inlinevirtual

Get the parameters.

Reimplemented from Layer< InputType, OutputType >.

Definition at line 112 of file virtual_batch_norm.hpp.

◆ Parameters() [2/2]

OutputType& Parameters ( )
inlinevirtual

Modify the parameters.

Reimplemented from Layer< InputType, OutputType >.

Definition at line 114 of file virtual_batch_norm.hpp.

◆ serialize()

void serialize ( Archive &  ar,
const uint32_t   
)

◆ SetWeights()

void SetWeights ( typename OutputType::elem_type *  weightsPtr)

Reset the layer parameters.

Referenced by VirtualBatchNormType< InputType, OutputType >::Clone().

◆ WeightSize()

const size_t WeightSize ( ) const
inlinevirtual

Get the total number of trainable weights in the layer.

Reimplemented from Layer< InputType, OutputType >.

Definition at line 122 of file virtual_batch_norm.hpp.

References VirtualBatchNormType< InputType, OutputType >::serialize().


The documentation for this class was generated from the following file:
  • /home/jenkins-mlpack/mlpack.org/_src/mlpack-git/src/mlpack/methods/ann/layer/not_adapted/virtual_batch_norm.hpp