PReLUType< InputType, OutputType > Class Template Reference

The PReLU activation function, defined by f(x) = max(x, alpha*x), where alpha is trainable. More...

Inheritance diagram for PReLUType< InputType, OutputType >:

Public Member Functions

 PReLUType (const double userAlpha=0.03)
 Create the PReLU object using the specified parameters. More...

 
double const & Alpha () const
 Get the non-zero gradient. More...

 
double & Alpha ()
 Modify the non-zero gradient. More...

 
void Backward (const InputType &input, const OutputType &gy, OutputType &g)
 Ordinary feed backward pass of a neural network, calculating the function f(x) by propagating x backwards through f. More...

 
PReLUType * Clone () const
 Clone the PReLUType object. This handles polymorphism correctly. More...

 
void Forward (const InputType &input, OutputType &output)
 Ordinary feed forward pass of a neural network, evaluating the function f(x) by propagating the activity forward through f. More...

 
void Gradient (const InputType &input, const OutputType &error, OutputType &gradient)
 Calculate the gradient using the output delta and the input activation. More...

 
OutputType const & Parameters () const
 Get the parameters. More...

 
OutputType & Parameters ()
 Modify the parameters. More...

 
template<typename Archive >
void serialize (Archive &ar, const uint32_t)
 Serialize the layer. More...

 
void SetWeights (typename OutputType::elem_type *weightsPtr)
 Reset the layer parameter. More...

 
size_t WeightSize () const
 Get size of weights. More...

 
- Public Member Functions inherited from Layer< InputType, OutputType >
 Layer ()
 Default constructor. More...

 
 Layer (const Layer &layer)
 Copy constructor. This is not responsible for copying weights! More...

 
 Layer (Layer &&layer)
 Move constructor. This is not responsible for moving weights! More...

 
virtual ~Layer ()
 Default destructor. More...

 
virtual void Backward (const InputType &, const InputType &, InputType &)
 Performs a backpropagation step through the layer, with respect to the given input. More...

 
virtual void ComputeOutputDimensions ()
 Compute the output dimensions. More...

 
virtual void CustomInitialize (InputType &, const size_t)
 Override the weight matrix of the layer. More...

 
virtual void Forward (const InputType &, InputType &)
 Takes an input object, and computes the corresponding output of the layer. More...

 
virtual void Forward (const InputType &, const InputType &)
 Takes an input and output object, and computes the corresponding loss of the layer. More...

 
virtual void Gradient (const InputType &, const InputType &, InputType &)
 Computing the gradient of the layer with respect to its own input. More...

 
const std::vector< size_t > & InputDimensions () const
 Get the input dimensions. More...

 
std::vector< size_t > & InputDimensions ()
 Modify the input dimensions. More...

 
virtual double Loss ()
 Get the layer loss. More...

 
virtual Layer & operator= (const Layer &layer)
 Copy assignment operator. This is not responsible for copying weights! More...

 
virtual Layer & operator= (Layer &&layer)
 Move assignment operator. This is not responsible for moving weights! More...

 
const std::vector< size_t > & OutputDimensions ()
 Get the output dimensions. More...

 
virtual size_t OutputSize () final
 Get the number of elements in the output from this layer. More...

 
void serialize (Archive &ar, const uint32_t)
 Serialize the layer. More...

 
virtual void SetWeights (typename InputType::elem_type *)
 Reset the layer parameter. More...

 
virtual bool const & Training () const
 Get whether the layer is currently in training mode. More...

 
virtual bool & Training ()
 Modify whether the layer is currently in training mode. More...

 

Additional Inherited Members

- Protected Attributes inherited from Layer< InputType, OutputType >
std::vector< size_t > inputDimensions
 Logical input dimensions of each point. More...

 
std::vector< size_t > outputDimensions
 Logical output dimensions of each point. More...

 
bool training
 If true, the layer is in training mode; otherwise, it is in testing mode. More...

 
bool validOutputDimensions
 This is true if ComputeOutputDimensions() has been called, and outputDimensions can be considered to be up-to-date. More...

 

Detailed Description


template<typename InputType = arma::mat, typename OutputType = arma::mat>

class mlpack::ann::PReLUType< InputType, OutputType >

The PReLU activation function, defined by the following (where alpha is trainable):

\begin{eqnarray*}
f(x) &=& \max(x, \alpha x) \\
f'(x) &=& \left\{ \begin{array}{lr} 1 & : x > 0 \\ \alpha & : x \le 0 \end{array} \right.
\end{eqnarray*}

Template Parameters
InputType    The type of the layer's inputs. The layer automatically casts inputs to this type (default: arma::mat).
OutputType   The type used for the layer's computation, which also determines the type of the output. This allows the computation and weight type to differ from the input type (default: arma::mat).

Definition at line 45 of file parametric_relu.hpp.
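The activation and its derivative above can be written as simple scalar functions. The following is a minimal standalone sketch in plain C++ (independent of mlpack and Armadillo; the names `PReLU` and `PReLUDeriv` are illustrative, not part of the mlpack API):

```cpp
#include <algorithm>

// f(x) = max(x, alpha * x); for 0 < alpha < 1 this leaves positive
// inputs unchanged and scales negative inputs by alpha.
double PReLU(double x, double alpha = 0.03)
{
  return std::max(x, alpha * x);
}

// f'(x) = 1 for x > 0, alpha otherwise (the "non-zero gradient").
double PReLUDeriv(double x, double alpha = 0.03)
{
  return (x > 0) ? 1.0 : alpha;
}
```

Because alpha is small and positive, negative inputs retain a small gradient instead of dying out, which is the motivation for the leaky/parametric ReLU family.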

Constructor & Destructor Documentation

◆ PReLUType()

PReLUType ( const double  userAlpha = 0.03)

Create the PReLU object using the specified parameters.

The non-zero gradient can be adjusted by specifying the parameter alpha in the range 0 to 1 (default: alpha = 0.03). This parameter is trainable.

Parameters
userAlpha    The non-zero gradient.

Referenced by PReLUType< InputType, OutputType >::Clone().

Member Function Documentation

◆ Alpha() [1/2]

double const& Alpha ( ) const
inline

Get the non-zero gradient.

Definition at line 101 of file parametric_relu.hpp.

◆ Alpha() [2/2]

double& Alpha ( )
inline

Modify the non-zero gradient.

Definition at line 103 of file parametric_relu.hpp.

◆ Backward()

void Backward ( const InputType &  input,
const OutputType &  gy,
OutputType &  g 
)

Ordinary feed backward pass of a neural network, calculating the function f(x) by propagating x backwards through f.

Using the results from the feed forward pass.

Parameters
input    The propagated input activation.
gy       The backpropagated error.
g        The calculated gradient.

Referenced by PReLUType< InputType, OutputType >::Clone().
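A sketch of the semantics of Backward(): the output gradient g is the backpropagated error gy scaled elementwise by f'(input). This uses std::vector in place of the Armadillo matrix types and is an assumed reading of the formula above, not mlpack's literal implementation:

```cpp
#include <cstddef>
#include <vector>

// Backward pass sketch: g[i] = gy[i] * f'(input[i]), where
// f'(x) = 1 for x > 0 and alpha otherwise.
std::vector<double> PReLUBackward(const std::vector<double>& input,
                                  const std::vector<double>& gy,
                                  double alpha = 0.03)
{
  std::vector<double> g(input.size());
  for (std::size_t i = 0; i < input.size(); ++i)
    g[i] = gy[i] * ((input[i] > 0) ? 1.0 : alpha);
  return g;
}
```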

◆ Clone()

PReLUType * Clone ( ) const

Clone the PReLUType object. This handles polymorphism correctly.

◆ Forward()

void Forward ( const InputType &  input,
OutputType &  output 
)

Ordinary feed forward pass of a neural network, evaluating the function f(x) by propagating the activity forward through f.

Parameters
input     Input data used for evaluating the specified function.
output    Resulting output activation.

Referenced by PReLUType< InputType, OutputType >::Clone().
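The forward pass applies f(x) = max(x, alpha*x) elementwise. A minimal sketch with std::vector standing in for the Armadillo types (illustrative only):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Forward pass sketch: output[i] = max(input[i], alpha * input[i]).
std::vector<double> PReLUForward(const std::vector<double>& input,
                                 double alpha = 0.03)
{
  std::vector<double> output(input.size());
  for (std::size_t i = 0; i < input.size(); ++i)
    output[i] = std::max(input[i], alpha * input[i]);
  return output;
}
```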

◆ Gradient()

void Gradient ( const InputType &  input,
const OutputType &  error,
OutputType &  gradient 
)

Calculate the gradient using the output delta and the input activation.

Parameters
input       The input parameter used for calculating the gradient.
error       The calculated error.
gradient    The calculated gradient.

Referenced by PReLUType< InputType, OutputType >::Clone().
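Since df(x)/dalpha = x for x <= 0 and 0 for x > 0, the gradient of the loss with respect to the single trainable alpha accumulates error * input over the non-positive inputs. The sketch below illustrates that derivation in plain C++; it is an assumed reading of the formula in the detailed description, not mlpack's literal implementation:

```cpp
#include <cstddef>
#include <vector>

// d f(x) / d alpha = x for x <= 0, and 0 for x > 0, so the gradient
// w.r.t. the single trainable alpha sums error * input over the
// non-positive inputs.
double PReLUAlphaGradient(const std::vector<double>& input,
                          const std::vector<double>& error)
{
  double grad = 0.0;
  for (std::size_t i = 0; i < input.size(); ++i)
    if (input[i] <= 0)
      grad += error[i] * input[i];
  return grad;
}
```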

◆ Parameters() [1/2]

OutputType const& Parameters ( ) const
inlinevirtual

Get the parameters.

Reimplemented from Layer< InputType, OutputType >.

Definition at line 96 of file parametric_relu.hpp.

◆ Parameters() [2/2]

OutputType& Parameters ( )
inlinevirtual

Modify the parameters.

Reimplemented from Layer< InputType, OutputType >.

Definition at line 98 of file parametric_relu.hpp.

◆ serialize()

void serialize ( Archive &  ar,
const uint32_t   
)

Serialize the layer.

Referenced by PReLUType< InputType, OutputType >::WeightSize().

◆ SetWeights()

void SetWeights ( typename OutputType::elem_type *  weightsPtr)

Reset the layer parameter.

Referenced by PReLUType< InputType, OutputType >::Clone().

◆ WeightSize()

size_t WeightSize ( ) const
inlinevirtual

Get size of weights.

Reimplemented from Layer< InputType, OutputType >.

Definition at line 106 of file parametric_relu.hpp.

References PReLUType< InputType, OutputType >::serialize().


The documentation for this class was generated from the following file:
  • /home/jenkins-mlpack/mlpack.org/_src/mlpack-git/src/mlpack/methods/ann/layer/not_adapted/parametric_relu.hpp