[mlpack] Talking about Neural Networks

Marcus Edel marcus.edel at fu-berlin.de
Sun Jan 11 10:46:34 EST 2015


Hello,

There is currently a lot of interest in the capabilities of neural networks,
so it's great to see that you are interested in this field. I'm happy to
discuss new ideas with you.

I've implemented the structure to cover a bunch of different architectures and
techniques, which results in a modular design, something like a Lego kit. Right
now you can choose:

* how the different layers/neurons should work (e.g. use a custom activation
function, or implement a memory cell as in an LSTM layer). Basically, a layer
returns the output for a given input (see the sketch after this list).

* how the different layers are connected (e.g. add a recurrent connection to a
neuron layer, or connect the input layer with hidden layers A and B and the
output layer). So the design is not limited to the standard layout, in which
the input layer is connected to the hidden layer and the hidden layer is
connected to the output layer.

* how the different weights should be updated. You can train the weights
between the input layer and the hidden layer with SGD and the weights between
the hidden and output layer with RPROP, if that is what you want. A connection
can also hold a single weight or multiple weights.
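
To make the layer idea above concrete, here is a rough sketch of what such a
custom activation layer could look like. This is just an illustration of the
design, not actual mlpack code; the class name LeakyReLULayer and the
Forward()/Backward() signatures are made up for this example, and the only
dependency is Armadillo, which mlpack builds on:

// Hypothetical example layer; not an actual mlpack class.
#include <armadillo>

// A layer is anything that returns an output for a given input. Here the
// behavior is a custom activation function, a "leaky" rectifier.
class LeakyReLULayer
{
 public:
  explicit LeakyReLULayer(const double alpha = 0.03) : alpha(alpha) { }

  // Forward pass: f(x) = x for x > 0, alpha * x otherwise.
  void Forward(const arma::mat& input, arma::mat& output) const
  {
    output = input;
    output.elem(arma::find(input < 0.0)) *= alpha;
  }

  // Backward pass: scale the incoming error by f'(x).
  void Backward(const arma::mat& input, const arma::mat& error,
                arma::mat& delta) const
  {
    arma::mat derivative(input.n_rows, input.n_cols, arma::fill::ones);
    derivative.elem(arma::find(input < 0.0)).fill(alpha);
    delta = error % derivative;
  }

 private:
  double alpha;
};

int main()
{
  LeakyReLULayer layer;

  arma::mat input = arma::randn<arma::mat>(4, 1);
  arma::mat output, delta;
  layer.Forward(input, output);

  // Pretend the error coming back from the next layer is all ones.
  layer.Backward(input, arma::ones<arma::mat>(4, 1), delta);

  input.print("input");
  output.print("output");
  return 0;
}

Connecting layers and choosing how weights are updated work the same way: each
piece only has to implement a small interface, so you can mix and match them.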

I have several ideas I'd like to see in the future:

- Bidirectional networks
- Deep belief networks
- RMSProp (sketched below)
- Dropout
- DropConnect
- Maxout
- Convolutional neural networks
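
To illustrate how one of these would slot into the modular design: RMSProp only
changes how the weights are updated, so it would plug into the third choice
described above. Here is the update rule as a standalone sketch; again, this is
not actual mlpack code, the function name RmsPropStep and the default parameter
values are made up for the example:

// Hedged sketch of the RMSProp update rule; not actual mlpack code.
#include <armadillo>

// One RMSProp step: keep a running average of squared gradients and scale
// each weight's step size by the root of that average.
void RmsPropStep(arma::mat& weights, const arma::mat& gradient,
                 arma::mat& meanSquared, const double stepSize = 0.01,
                 const double decay = 0.9, const double eps = 1e-8)
{
  meanSquared = decay * meanSquared + (1.0 - decay) * arma::square(gradient);
  weights -= stepSize * gradient / (arma::sqrt(meanSquared) + eps);
}

int main()
{
  arma::mat weights = arma::randn<arma::mat>(3, 3);
  arma::mat meanSquared(3, 3, arma::fill::zeros);

  // Toy gradient; in practice this comes from backpropagation.
  arma::mat gradient = arma::randn<arma::mat>(3, 3);

  RmsPropStep(weights, gradient, meanSquared);
  weights.print("updated weights");
  return 0;
}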

This is just off the top of my head. I'm happy to discuss any of these or
other methods with you.

Right now I'm working on a solid test base before moving forward and
implementing new methods.
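
For reference, mlpack's tests use the Boost unit test framework, so a test for
a new layer would look something like the following sketch. The Logistic
function here is just a stand-in for whatever activation a layer implements:

#define BOOST_TEST_MODULE ActivationFunctionTest
#include <boost/test/unit_test.hpp>

#include <cmath>

// The function under test; in the real code this would be a layer's
// activation function.
static double Logistic(const double x)
{
  return 1.0 / (1.0 + std::exp(-x));
}

BOOST_AUTO_TEST_CASE(LogisticFunctionTest)
{
  // Check against known values (the tolerance is in percent).
  BOOST_CHECK_CLOSE(Logistic(0.0), 0.5, 1e-5);
  BOOST_CHECK_CLOSE(Logistic(2.0), 0.88079707797788, 1e-5);

  // The logistic function satisfies f(-x) = 1 - f(x).
  BOOST_CHECK_CLOSE(Logistic(-2.0), 1.0 - Logistic(2.0), 1e-5);
}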

I hope this is helpful...

Thanks and happy new year!

Marcus

> On 11 Jan 2015, at 04:27, Siddharth Agrawal <siddharth.950 at gmail.com> wrote:
> 
> Hi Udit,
> 
> I would just like to pitch in with something. I have actually already written a module called sparse_autoencoder. However, you could make it compatible with the NeuronClass structure that Marcus is in the process of writing. Or maybe pick up some other neural network. :)
> 
> Regards,
> Siddharth
> 
> On Sun, Jan 11, 2015 at 1:26 AM, Udit Saxena <saxena.udit at gmail.com> wrote:
> Hi Marcus, 
> 
> How's it going? A very happy new year to you!
> 
> I saw your mail to Shangtong on covering Neural Networks and your enthusiasm about such projects. 
> 
> I am quite interested in this, and was planning on helping cover a Sparse Autoencoder for MLPACK, as part of learning what Deep Learning is all about, but lost the opportunity to do that. :)
> 
> I'm quite interested in working on such projects myself and was hoping we could talk about any plans you might have that I might be able to contribute to or help with.
> What ideas do you have about this? Anything you're working on that you'd like to talk about with me or get me involved with?
> 
> I'll probably start work at my internship a few weeks from now but am still hoping to work on this on my own time. 
> 
> Thanks.
> 
> -- 
> ---------------------------------------
> Udit Saxena
> 
> _______________________________________________
> mlpack mailing list
> mlpack at cc.gatech.edu
> https://mailman.cc.gatech.edu/mailman/listinfo/mlpack
