Artificial Neural Network.

Namespaces

    augmented
Classes

class Add
    Implementation of the Add module class.
class AddMerge
    Implementation of the AddMerge module class.
class AddVisitor
    AddVisitor exposes the Add() method of the given module.
class AlphaDropout
    The alpha-dropout layer is a regularizer that, with probability 'ratio', randomly sets input values to alphaDash.
class AtrousConvolution
    Implementation of the Atrous Convolution class.
class BackwardVisitor
    BackwardVisitor executes the Backward() function given the input, error, and delta parameters.
class BaseLayer
    Implementation of the base layer.
class BatchNorm
    Declaration of the Batch Normalization layer class.
class BernoulliDistribution
    Multiple independent Bernoulli distributions.
class BiasSetVisitor
    BiasSetVisitor updates the module bias parameters given the parameter set.
class BilinearInterpolation
    Definition and implementation of the Bilinear Interpolation layer.
class BinaryRBM
    For more information, see the following paper.
class BRNN
    Implementation of a standard bidirectional recurrent neural network container.
class Concat
    Implementation of the Concat class.
class Concatenate
    Implementation of the Concatenate module class.
class ConcatPerformance
    Implementation of the concat performance class.
class Constant
    Implementation of the constant layer.
class ConstInitialization
    This class is used to initialize the weight matrix with constant values.
class Convolution
    Implementation of the Convolution class.
class CopyVisitor
    This visitor supports the copy constructor for neural network modules.
class CReLU
    A concatenated ReLU has two outputs, one ReLU and one negative ReLU, concatenated together.
class CrossEntropyError
    The cross-entropy performance function measures the network's performance according to the cross-entropy between the input and target distributions.
class DCGAN
    For more information, see the following paper.
class DeleteVisitor
    DeleteVisitor executes the destructor of the instantiated object.
class DeltaVisitor
    DeltaVisitor exposes the delta parameter of the given module.
class DeterministicSetVisitor
    DeterministicSetVisitor sets the deterministic parameter given the deterministic value.
class DiceLoss
    The dice loss performance function measures the network's performance according to the dice coefficient between the input and target distributions.
class DropConnect
    The DropConnect layer is a regularizer that, with probability 'ratio', randomly sets connection values to zero and scales the remaining elements by a factor of 1 / (1 - ratio).
class Dropout
    The dropout layer is a regularizer that, with probability 'ratio', randomly sets input values to zero during training and scales the remaining elements by a factor of 1 / (1 - ratio), so that the expected sum stays the same without rescaling at test time.
class EarthMoverDistance
    The earth mover distance function measures the network's performance according to the Kantorovich-Rubinstein duality approximation.
class ELU
    The ELU activation function, defined by f(x) = x for x > 0 and alpha * (exp(x) - 1) otherwise.
class FastLSTM
    An implementation of a faster version of the LSTM network layer.
class FFN
    Implementation of a standard feed-forward network.
class FFTConvolution
    Computes the two-dimensional convolution using the FFT.
class FlexibleReLU
    The FlexibleReLU activation function, defined by f(x) = max(0, x) + alpha.
class ForwardVisitor
    ForwardVisitor executes the Forward() function given the input and output parameters.
class FullConvolution
class GAN
    The implementation of the standard GAN module.
class GaussianInitialization
    This class is used to initialize the weight matrix with a Gaussian distribution.
class Glimpse
    The glimpse layer returns a retina-like representation (down-scaled cropped images) of increasing scale around a given location in a given image.
class GlorotInitializationType
    This class is used to initialize the weight matrix with the Glorot initialization method.
class GradientSetVisitor
    GradientSetVisitor updates the gradient parameter given the gradient set.
class GradientUpdateVisitor
    GradientUpdateVisitor updates the gradient parameter given the gradient set.
class GradientVisitor
    GradientVisitor executes the Gradient() method of the given module using the input and delta parameters.
class GradientZeroVisitor
class GRU
    An implementation of a GRU network layer.
class HardSigmoidFunction
    The hard sigmoid function, defined by f(x) = min(1, max(0, 0.2 * x + 0.5)).
class HardTanH
    The Hard Tanh activation function, defined by f(x) = max(-1, min(1, x)).
class HeInitialization
    This class is used to initialize the weight matrix with the He initialization rule given by He et al.
class Highway
    Implementation of the Highway layer.
class IdentityFunction
    The identity function, defined by f(x) = x.
class InitTraits
    This is a template class that can provide information about various initialization methods.
class InitTraits< KathirvalavakumarSubavathiInitialization >
    Initialization traits of the Kathirvalavakumar-Subavathi initialization rule.
class InitTraits< NguyenWidrowInitialization >
    Initialization traits of the Nguyen-Widrow initialization rule.
class Join
    Implementation of the Join module class.
class KathirvalavakumarSubavathiInitialization
    This class is used to initialize the weight matrix with the method proposed by T. Kathirvalavakumar and S. Subavathi.
class KLDivergence
    The Kullback–Leibler divergence is often used for continuous distributions (direct regression).
class LayerNorm
    Declaration of the Layer Normalization class.
class LayerTraits
    This is a template class that can provide information about various layers.
class LeakyReLU
    The LeakyReLU activation function, defined by f(x) = max(x, alpha * x) for a small, fixed alpha.
class LecunNormalInitialization
    This class is used to initialize the weight matrix with the Lecun normal initialization rule.
class Linear
    Implementation of the Linear layer class.
class LinearNoBias
    Implementation of the LinearNoBias class.
class LoadOutputParameterVisitor
    LoadOutputParameterVisitor restores the output parameter using the given parameter set.
class LogisticFunction
    The logistic function, defined by f(x) = 1 / (1 + exp(-x)).
class LogSoftMax
    Implementation of the log softmax layer.
class Lookup
    Implementation of the Lookup class.
class LossVisitor
    LossVisitor exposes the Loss() method of the given module.
class LRegularizer
    The L_p regularizer for arbitrary integer p.
class LSTM
    Implementation of the LSTM module class.
class MaxPooling
    Implementation of the MaxPooling layer.
class MaxPoolingRule
class MeanPooling
    Implementation of the MeanPooling layer.
class MeanPoolingRule
class MeanSquaredError
    The mean squared error performance function measures the network's performance according to the mean of squared errors.
class MiniBatchDiscrimination
    Implementation of the MiniBatchDiscrimination layer.
class MultiplyConstant
    Implementation of the multiply constant layer.
class MultiplyMerge
    Implementation of the MultiplyMerge module class.
class NaiveConvolution
    Computes the two-dimensional convolution.
class NegativeLogLikelihood
    Implementation of the negative log likelihood layer.
class NetworkInitialization
    This class is used to initialize the network with the given initialization rule.
class NguyenWidrowInitialization
    This class is used to initialize the weight matrix with the Nguyen-Widrow method.
class NoRegularizer
    Implementation of the NoRegularizer.
class OivsInitialization
    This class is used to initialize the weight matrix with the OIVS method.
class OrthogonalInitialization
    This class is used to initialize the weight matrix with orthogonal matrix initialization.
class OrthogonalRegularizer
    Implementation of the OrthogonalRegularizer.
class OutputHeightVisitor
    OutputHeightVisitor exposes the OutputHeight() method of the given module.
class OutputParameterVisitor
    OutputParameterVisitor exposes the output parameter of the given module.
class OutputWidthVisitor
    OutputWidthVisitor exposes the OutputWidth() method of the given module.
class Padding
    Implementation of the Padding module class.
class ParametersSetVisitor
    ParametersSetVisitor updates the parameter set using the given matrix.
class ParametersVisitor
    ParametersVisitor exposes the parameter set of the given module and stores it into the given matrix.
class PReLU
    The PReLU activation function, defined by f(x) = max(x, alpha * x), where alpha is trainable.
class RandomInitialization
    This class is used to randomly initialize the weight matrix.
class RBM
    The implementation of the RBM module.
class ReconstructionLoss
    The reconstruction loss performance function measures the network's performance as the negative log probability of the target under the input distribution.
class RectifierFunction
    The rectifier function, defined by f(x) = max(0, x).
class Recurrent
    Implementation of the RecurrentLayer class.
class RecurrentAttention
    This class implements the Recurrent Model for Visual Attention, using a variety of possible layer implementations.
class ReinforceNormal
    Implementation of the reinforce normal layer.
class Reparametrization
    Implementation of the Reparametrization layer class.
class ResetCellVisitor
    ResetCellVisitor executes the ResetCell() function.
class ResetVisitor
    ResetVisitor executes the Reset() function.
class RewardSetVisitor
    RewardSetVisitor sets the reward parameter given the reward value.
class RNN
    Implementation of a standard recurrent neural network container.
class RunSetVisitor
    RunSetVisitor sets the run parameter given the run value.
class SaveOutputParameterVisitor
    SaveOutputParameterVisitor saves the output parameter into the given parameter set.
class Select
    The select module selects the specified column from a given input matrix.
class Sequential
    Implementation of the Sequential class.
class SetInputHeightVisitor
    SetInputHeightVisitor updates the input height parameter with the given input height.
class SetInputWidthVisitor
    SetInputWidthVisitor updates the input width parameter with the given input width.
class SigmoidCrossEntropyError
    The SigmoidCrossEntropyError performance function measures the network's performance according to the cross-entropy function between the input and target distributions.
class SoftplusFunction
    The softplus function, defined by f(x) = log(1 + exp(x)).
class SoftsignFunction
    The softsign function, defined by f(x) = x / (1 + |x|).
class SpikeSlabRBM
    For more information, see the following paper.
class StandardGAN
    For more information, see the following paper.
class Subview
    Implementation of the subview layer.
class SVDConvolution
    Computes the two-dimensional convolution using singular value decomposition.
class SwishFunction
    The swish function, defined by f(x) = x * sigmoid(x).
class TanhFunction
    The tanh function, defined by f(x) = tanh(x).
class TransposedConvolution
    Implementation of the Transposed Convolution class.
class ValidConvolution
class VirtualBatchNorm
    Declaration of the VirtualBatchNorm layer class.
class VRClassReward
    Implementation of the variance-reduced classification reinforcement layer.
class WeightNorm
    Declaration of the WeightNorm layer class.
class WeightSetVisitor
    WeightSetVisitor updates the module parameters given the parameter set.
class WeightSizeVisitor
    WeightSizeVisitor returns the number of weights of the given module.
class WGAN
    For more information, see the following paper.
class WGANGP
    For more information, see the following paper.
Typedefs

template<class ActivationFunction = LogisticFunction, typename InputDataType = arma::mat, typename OutputDataType = arma::mat>
using CustomLayer = BaseLayer< ActivationFunction, InputDataType, OutputDataType >
    Standard Sigmoid layer.
template<typename MatType = arma::mat>
using Embedding = Lookup< MatType, MatType >
using GlorotInitialization = GlorotInitializationType< false >
    GlorotInitialization uses a uniform distribution.
template<class ActivationFunction = HardSigmoidFunction, typename InputDataType = arma::mat, typename OutputDataType = arma::mat>
using HardSigmoidLayer = BaseLayer< ActivationFunction, InputDataType, OutputDataType >
    Standard HardSigmoid layer using the hard sigmoid activation function.
template<class ActivationFunction = IdentityFunction, typename InputDataType = arma::mat, typename OutputDataType = arma::mat>
using IdentityLayer = BaseLayer< ActivationFunction, InputDataType, OutputDataType >
    Standard Identity layer using the identity activation function.
typedef LRegularizer< 1 > L1Regularizer
    The L1 Regularizer.
typedef LRegularizer< 2 > L2Regularizer
    The L2 Regularizer.
template<typename... CustomLayers>
using LayerTypes = boost::variant< Add<arma::mat, arma::mat>*, AddMerge<arma::mat, arma::mat>*, AtrousConvolution<NaiveConvolution<ValidConvolution>, NaiveConvolution<FullConvolution>, NaiveConvolution<ValidConvolution>, arma::mat, arma::mat>*, BaseLayer<LogisticFunction, arma::mat, arma::mat>*, BaseLayer<IdentityFunction, arma::mat, arma::mat>*, BaseLayer<TanhFunction, arma::mat, arma::mat>*, BaseLayer<RectifierFunction, arma::mat, arma::mat>*, BaseLayer<SoftplusFunction, arma::mat, arma::mat>*, BatchNorm<arma::mat, arma::mat>*, BilinearInterpolation<arma::mat, arma::mat>*, Concat<arma::mat, arma::mat>*, Concatenate<arma::mat, arma::mat>*, ConcatPerformance<NegativeLogLikelihood<arma::mat, arma::mat>, arma::mat, arma::mat>*, Constant<arma::mat, arma::mat>*, Convolution<NaiveConvolution<ValidConvolution>, NaiveConvolution<FullConvolution>, NaiveConvolution<ValidConvolution>, arma::mat, arma::mat>*, TransposedConvolution<NaiveConvolution<ValidConvolution>, NaiveConvolution<FullConvolution>, NaiveConvolution<ValidConvolution>, arma::mat, arma::mat>*, DropConnect<arma::mat, arma::mat>*, Dropout<arma::mat, arma::mat>*, AlphaDropout<arma::mat, arma::mat>*, ELU<arma::mat, arma::mat>*, FlexibleReLU<arma::mat, arma::mat>*, Glimpse<arma::mat, arma::mat>*, HardTanH<arma::mat, arma::mat>*, Highway<arma::mat, arma::mat>*, Join<arma::mat, arma::mat>*, LayerNorm<arma::mat, arma::mat>*, LeakyReLU<arma::mat, arma::mat>*, CReLU<arma::mat, arma::mat>*, Linear<arma::mat, arma::mat, NoRegularizer>*, LinearNoBias<arma::mat, arma::mat, NoRegularizer>*, LogSoftMax<arma::mat, arma::mat>*, Lookup<arma::mat, arma::mat>*, LSTM<arma::mat, arma::mat>*, GRU<arma::mat, arma::mat>*, FastLSTM<arma::mat, arma::mat>*, MaxPooling<arma::mat, arma::mat>*, MeanPooling<arma::mat, arma::mat>*, MiniBatchDiscrimination<arma::mat, arma::mat>*, MultiplyConstant<arma::mat, arma::mat>*, MultiplyMerge<arma::mat, arma::mat>*, NegativeLogLikelihood<arma::mat, arma::mat>*, Padding<arma::mat, arma::mat>*, PReLU<arma::mat, arma::mat>*, MoreTypes, CustomLayers*... >
using MoreTypes = boost::variant< Recurrent<arma::mat, arma::mat>*, RecurrentAttention<arma::mat, arma::mat>*, ReinforceNormal<arma::mat, arma::mat>*, Reparametrization<arma::mat, arma::mat>*, Select<arma::mat, arma::mat>*, Sequential<arma::mat, arma::mat, false>*, Sequential<arma::mat, arma::mat, true>*, Subview<arma::mat, arma::mat>*, VRClassReward<arma::mat, arma::mat>*, VirtualBatchNorm<arma::mat, arma::mat>*, WeightNorm<arma::mat, arma::mat>* >
template<class ActivationFunction = RectifierFunction, typename InputDataType = arma::mat, typename OutputDataType = arma::mat>
using ReLULayer = BaseLayer< ActivationFunction, InputDataType, OutputDataType >
    Standard rectified linear unit non-linearity layer.
template<typename InputDataType = arma::mat, typename OutputDataType = arma::mat, typename... CustomLayers>
using Residual = Sequential< InputDataType, OutputDataType, true, CustomLayers... >
using SELU = ELU< arma::mat, arma::mat >
template<class ActivationFunction = LogisticFunction, typename InputDataType = arma::mat, typename OutputDataType = arma::mat>
using SigmoidLayer = BaseLayer< ActivationFunction, InputDataType, OutputDataType >
    Standard Sigmoid layer using the logistic activation function.
template<class ActivationFunction = SoftplusFunction, typename InputDataType = arma::mat, typename OutputDataType = arma::mat>
using SoftPlusLayer = BaseLayer< ActivationFunction, InputDataType, OutputDataType >
    Standard Softplus layer using the softplus activation function.
template<class ActivationFunction = TanhFunction, typename InputDataType = arma::mat, typename OutputDataType = arma::mat>
using TanHLayer = BaseLayer< ActivationFunction, InputDataType, OutputDataType >
    Standard hyperbolic tangent layer.
using XavierInitialization = GlorotInitializationType< true >
    XavierInitialization is the popular name for this method.
Functions

HAS_ANY_METHOD_FORM(Model, HasModelCheck)
HAS_MEM_FUNC(Gradient, HasGradientCheck)
HAS_MEM_FUNC(Deterministic, HasDeterministicCheck)
HAS_MEM_FUNC(Parameters, HasParametersCheck)
HAS_MEM_FUNC(Add, HasAddCheck)
HAS_MEM_FUNC(Location, HasLocationCheck)
HAS_MEM_FUNC(Reset, HasResetCheck)
HAS_MEM_FUNC(ResetCell, HasResetCellCheck)
HAS_MEM_FUNC(Reward, HasRewardCheck)
HAS_MEM_FUNC(InputWidth, HasInputWidth)
HAS_MEM_FUNC(InputHeight, HasInputHeight)
HAS_MEM_FUNC(Rho, HasRho)
HAS_MEM_FUNC(Loss, HasLoss)
HAS_MEM_FUNC(Run, HasRunCheck)
HAS_MEM_FUNC(Bias, HasBiasCheck)
template<typename ModelType>
double InceptionScore(ModelType Model, arma::mat images, size_t splits = 1)
    Function that computes the Inception Score for a set of images produced by a GAN.
Artificial Neural Network.
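The classes above are typically assembled inside a network container such as FFN or RNN. Below is a minimal sketch of building and training a small feed-forward classifier; the sizes and data are illustrative placeholders, and the mlpack 3.x API shown on this page is assumed.

    #include <mlpack/core.hpp>
    #include <mlpack/methods/ann/ffn.hpp>
    #include <mlpack/methods/ann/layer/layer.hpp>

    using namespace mlpack::ann;

    int main()
    {
      // Illustrative sizes; replace with values matching your data.
      const size_t inputSize = 784, hiddenSize = 128, numClasses = 10;

      // Linear -> ReLU -> Linear -> LogSoftMax, trained with the negative
      // log likelihood loss and random weight initialization.
      FFN<NegativeLogLikelihood<>, RandomInitialization> model;
      model.Add<Linear<>>(inputSize, hiddenSize);
      model.Add<ReLULayer<>>();
      model.Add<Linear<>>(hiddenSize, numClasses);
      model.Add<LogSoftMax<>>();

      // One data point per column; NegativeLogLikelihood expects labels in
      // the range [1, numClasses].
      arma::mat trainData(inputSize, 100, arma::fill::randu);
      arma::mat trainLabels =
          arma::floor(arma::randu<arma::mat>(1, 100) * numClasses) + 1;

      model.Train(trainData, trainLabels);
    }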
using CustomLayer = BaseLayer< ActivationFunction, InputDataType, OutputDataType >

Standard Sigmoid layer.

Definition at line 31 of file custom_layer.hpp.
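CustomLayer illustrates the pattern behind every BaseLayer alias on this page: supply an activation-function class with static Fn()/Deriv() members. A rough sketch follows, using a hypothetical ExpFunction; the convention that Deriv() receives the layer output f(x) is assumed from the built-in functions such as LogisticFunction.

    #include <cmath>
    #include <mlpack/methods/ann/layer/base_layer.hpp>

    // Hypothetical activation: f(x) = exp(x), so the derivative equals the
    // output y itself.
    class ExpFunction
    {
     public:
      static double Fn(const double x) { return std::exp(x); }

      template<typename InputVecType, typename OutputVecType>
      static void Fn(const InputVecType& x, OutputVecType& y)
      { y = arma::exp(x); }

      // Deriv() is given y = f(x); for exp, f'(x) = y.
      static double Deriv(const double y) { return y; }

      template<typename InputVecType, typename OutputVecType>
      static void Deriv(const InputVecType& y, OutputVecType& x) { x = y; }
    };

    // Alias defined exactly like SigmoidLayer / ReLULayer above.
    template<typename InputDataType = arma::mat,
             typename OutputDataType = arma::mat>
    using ExpLayer =
        mlpack::ann::BaseLayer<ExpFunction, InputDataType, OutputDataType>;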
using Embedding = Lookup< MatType, MatType >

Definition at line 131 of file lookup.hpp.
using GlorotInitialization = GlorotInitializationType< false >

GlorotInitialization uses a uniform distribution.

Definition at line 148 of file glorot_init.hpp.
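The initialization rule is selected through the network's second template parameter. A brief sketch (layer sizes are illustrative):

    // A Glorot/Xavier-initialized network; only the second template
    // argument differs from the default RandomInitialization.
    FFN<MeanSquaredError<>, GlorotInitialization> net;
    net.Add<Linear<>>(10, 1);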
using HardSigmoidLayer = BaseLayer< ActivationFunction, InputDataType, OutputDataType >

Standard HardSigmoid layer using the hard sigmoid activation function.

Definition at line 185 of file base_layer.hpp.
using IdentityLayer = BaseLayer< ActivationFunction, InputDataType, OutputDataType >

Standard Identity layer using the identity activation function.

Definition at line 141 of file base_layer.hpp.
typedef LRegularizer< 1 > L1Regularizer

The L1 Regularizer.

Definition at line 62 of file lregularizer.hpp.
typedef LRegularizer< 2 > L2Regularizer

The L2 Regularizer.

Definition at line 67 of file lregularizer.hpp.
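Regularizers plug into layers that take a RegularizerType template parameter, such as Linear and LinearNoBias. A sketch, assuming the LRegularizer constructor accepts the regularization factor:

    // Linear layer with L2 (weight-decay) regularization; the factor 0.01
    // and the 64 -> 32 sizes are illustrative.
    Linear<arma::mat, arma::mat, L2Regularizer> layer(64, 32,
        L2Regularizer(0.01));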
using LayerTypes = boost::variant< Add<arma::mat, arma::mat>*, AddMerge<arma::mat, arma::mat>*, AtrousConvolution<NaiveConvolution<ValidConvolution>, NaiveConvolution<FullConvolution>, NaiveConvolution<ValidConvolution>, arma::mat, arma::mat>*, BaseLayer<LogisticFunction, arma::mat, arma::mat>*, BaseLayer<IdentityFunction, arma::mat, arma::mat>*, BaseLayer<TanhFunction, arma::mat, arma::mat>*, BaseLayer<RectifierFunction, arma::mat, arma::mat>*, BaseLayer<SoftplusFunction, arma::mat, arma::mat>*, BatchNorm<arma::mat, arma::mat>*, BilinearInterpolation<arma::mat, arma::mat>*, Concat<arma::mat, arma::mat>*, Concatenate<arma::mat, arma::mat>*, ConcatPerformance<NegativeLogLikelihood<arma::mat, arma::mat>, arma::mat, arma::mat>*, Constant<arma::mat, arma::mat>*, Convolution<NaiveConvolution<ValidConvolution>, NaiveConvolution<FullConvolution>, NaiveConvolution<ValidConvolution>, arma::mat, arma::mat>*, TransposedConvolution<NaiveConvolution<ValidConvolution>, NaiveConvolution<FullConvolution>, NaiveConvolution<ValidConvolution>, arma::mat, arma::mat>*, DropConnect<arma::mat, arma::mat>*, Dropout<arma::mat, arma::mat>*, AlphaDropout<arma::mat, arma::mat>*, ELU<arma::mat, arma::mat>*, FlexibleReLU<arma::mat, arma::mat>*, Glimpse<arma::mat, arma::mat>*, HardTanH<arma::mat, arma::mat>*, Highway<arma::mat, arma::mat>*, Join<arma::mat, arma::mat>*, LayerNorm<arma::mat, arma::mat>*, LeakyReLU<arma::mat, arma::mat>*, CReLU<arma::mat, arma::mat>*, Linear<arma::mat, arma::mat, NoRegularizer>*, LinearNoBias<arma::mat, arma::mat, NoRegularizer>*, LogSoftMax<arma::mat, arma::mat>*, Lookup<arma::mat, arma::mat>*, LSTM<arma::mat, arma::mat>*, GRU<arma::mat, arma::mat>*, FastLSTM<arma::mat, arma::mat>*, MaxPooling<arma::mat, arma::mat>*, MeanPooling<arma::mat, arma::mat>*, MiniBatchDiscrimination<arma::mat, arma::mat>*, MultiplyConstant<arma::mat, arma::mat>*, MultiplyMerge<arma::mat, arma::mat>*, NegativeLogLikelihood<arma::mat, arma::mat>*, Padding<arma::mat, arma::mat>*, PReLU<arma::mat, arma::mat>*, MoreTypes, CustomLayers*... >

Definition at line 247 of file layer_types.hpp.
using MoreTypes = boost::variant< Recurrent<arma::mat, arma::mat>*, RecurrentAttention<arma::mat, arma::mat>*, ReinforceNormal<arma::mat, arma::mat>*, Reparametrization<arma::mat, arma::mat>*, Select<arma::mat, arma::mat>*, Sequential<arma::mat, arma::mat, false>*, Sequential<arma::mat, arma::mat, true>*, Subview<arma::mat, arma::mat>*, VRClassReward<arma::mat, arma::mat>*, VirtualBatchNorm<arma::mat, arma::mat>*, WeightNorm<arma::mat, arma::mat>* > |
Definition at line 190 of file layer_types.hpp.
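LayerTypes is the boost::variant over which the visitor classes on this page operate, via boost::apply_visitor. A minimal sketch:

    // Store a module in the variant, query it with a visitor, then free it.
    LayerTypes<> layer = new Linear<>(10, 5);
    const size_t weights = boost::apply_visitor(WeightSizeVisitor(), layer);
    boost::apply_visitor(DeleteVisitor(), layer);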
using ReLULayer = BaseLayer< ActivationFunction, InputDataType, OutputDataType >

Standard rectified linear unit non-linearity layer.

Definition at line 152 of file base_layer.hpp.
using Residual = Sequential< InputDataType, OutputDataType, true, CustomLayers... >

Definition at line 241 of file sequential.hpp.
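Residual is a Sequential block whose input is added element-wise to its output, so the wrapped layers must preserve dimensionality. A sketch:

    // A two-layer residual block; input and output sizes match so the
    // element-wise addition is valid.
    Residual<>* block = new Residual<>();
    block->Add<Linear<>>(32, 32);
    block->Add<ReLULayer<>>();
    model.Add(block);  // "model" is an FFN as sketched earlier.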
using SigmoidLayer = BaseLayer< ActivationFunction, InputDataType, OutputDataType >

Standard Sigmoid layer using the logistic activation function.

Definition at line 130 of file base_layer.hpp.
using SoftPlusLayer = BaseLayer< ActivationFunction, InputDataType, OutputDataType >

Standard Softplus layer using the softplus activation function.

Definition at line 174 of file base_layer.hpp.
using TanHLayer = BaseLayer< ActivationFunction, InputDataType, OutputDataType >

Standard hyperbolic tangent layer.

Definition at line 163 of file base_layer.hpp.
using XavierInitialization = GlorotInitializationType< true >

XavierInitialization is the popular name for this method.

Definition at line 143 of file glorot_init.hpp.
mlpack::ann::HAS_ANY_METHOD_FORM(Model, HasModelCheck)

mlpack::ann::HAS_MEM_FUNC(Gradient, HasGradientCheck)

mlpack::ann::HAS_MEM_FUNC(Deterministic, HasDeterministicCheck)

mlpack::ann::HAS_MEM_FUNC(Parameters, HasParametersCheck)

mlpack::ann::HAS_MEM_FUNC(Add, HasAddCheck)

mlpack::ann::HAS_MEM_FUNC(Location, HasLocationCheck)

mlpack::ann::HAS_MEM_FUNC(Reset, HasResetCheck)

mlpack::ann::HAS_MEM_FUNC(ResetCell, HasResetCellCheck)

mlpack::ann::HAS_MEM_FUNC(Reward, HasRewardCheck)

mlpack::ann::HAS_MEM_FUNC(InputWidth, HasInputWidth)

mlpack::ann::HAS_MEM_FUNC(InputHeight, HasInputHeight)

mlpack::ann::HAS_MEM_FUNC(Rho, HasRho)

mlpack::ann::HAS_MEM_FUNC(Loss, HasLoss)

mlpack::ann::HAS_MEM_FUNC(Run, HasRunCheck)

mlpack::ann::HAS_MEM_FUNC(Bias, HasBiasCheck)
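Each of these macros expands to a compile-time member-detection trait (classic SFINAE). Roughly, a generated trait such as HasGradientCheck is queried with a type and a member-pointer signature; the signature below is illustrative only and depends on the module being inspected:

    // Illustrative query (the signature is an assumption): does Linear<>
    // expose a Gradient() accessor returning arma::mat&?
    static const bool hasGradient =
        HasGradientCheck<Linear<>, arma::mat&(Linear<>::*)()>::value;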
template<typename ModelType>
double mlpack::ann::InceptionScore(ModelType Model, arma::mat images, size_t splits = 1)
Function that computes the Inception Score for a set of images produced by a GAN.

For more information, see the following.

Parameters
    Model     Model for evaluating the quality of images.
    images    Images generated by the GAN.
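A usage sketch, assuming classifier is a trained FFN used as the scoring model and generated holds one image per column (mlpack's column-major data convention):

    // Higher scores indicate higher-quality, more diverse samples;
    // splits = 10 is a common choice.
    const double score = mlpack::ann::InceptionScore(classifier, generated, 10);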