[mlpack] disadvantages of using tuple as framework for neural network

Ryan Curtin ryan at ratml.org
Mon Feb 9 11:36:40 EST 2015


On Mon, Feb 09, 2015 at 10:17:39PM +0800, Shangtong Zhang wrote:
> Hi,
> 
> 
> I implemented a CNN based on the current NN framework, but I ran
> into a problem while testing it on the MNIST dataset.
> To do classification on MNIST, I am trying to implement LeNet-5,
> but LeNet-5 has a lot of connections.
> One layer has 6 feature maps, and the next layer has 16 feature maps.
> A Theano implementation connects these two layers with a full
> connection, which means there are 6 * 16 = 96 connections,
> so I need to instantiate 96 ConnectionType objects and store them in
> a tuple. That means writing out a tuple like c1, c2, c3, ..., c96,
> which is too large to write by hand.
> But it seems I can't generate it automatically, or at least doing so
> requires some very complicated C++ techniques:
> http://stackoverflow.com/questions/28410697/c-convert-vector-to-tuple
> 
> So I think a tuple isn't appropriate for this work.
> I suggest we replace the tuple with a vector and add a base class for
> all ConnectionType classes.
> That way, ConnectionTraits might also become unnecessary.

Using a base class for all ConnectionType classes would mean virtual
functions, which in this context would incur a non-negligible slowdown,
since the ConnectionType functions are called so frequently.

Although I agree that making a std::tuple<> with 96 objects in it is
unwieldy, we should carefully consider the drawbacks of inheritance in a
library that focuses on speed.

I've barely looked at the code in methods/ann/, so I certainly don't
have any better ideas, but I at least wanted to point out this
perspective.

-- 
Ryan Curtin    | "You got to stick with your principles."
ryan at ratml.org |   - Harry Waters
