[mlpack] GSOC 2018 [Essential Deep Learning Modules]

bansa031 University of Minnesota bansa031 at umn.edu
Mon Feb 19 21:34:32 EST 2018


Hey Marcus,

As I was looking through the Stacked GAN papers, I came across a few
versions of the idea. The core idea behind all of them is the same, but a
couple of things differ, mainly the way the whole model is trained.
Can you let me know which version of Stacked GAN we need, or should it be
a feature of the design where one can choose one method over the other?

Also, the papers stop at just two GANs stacked together. Are we
implementing only that, or should it be the user's choice how many
networks to combine? If it is the latter, are there any limits on how
many one can add?
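
To make those two questions concrete, here is roughly the kind of
interface I was picturing. Every name below is a placeholder I made up
for the sake of discussion (StackedGAN, TrainingPolicy, and so on do not
exist in mlpack); it is only a sketch, not a proposed implementation.

  // Placeholder sketch only -- none of these names exist in mlpack yet.
  #include <mlpack/core.hpp>

  #include <utility>
  #include <vector>

  // The paper-specific training scheme would be a compile-time policy,
  // so "which version of Stacked GAN" becomes a template argument
  // instead of a separate class per paper.
  template<typename ModelType, typename TrainingPolicy>
  class StackedGAN
  {
   public:
    // One generator/discriminator pair per stage; the number of stacked
    // networks is the user's choice rather than being fixed at two (we
    // could still document or enforce a practical upper limit).
    StackedGAN(std::vector<ModelType> generators,
               std::vector<ModelType> discriminators) :
        generators(std::move(generators)),
        discriminators(std::move(discriminators))
    { }

    // Number of stacked generator/discriminator stages.
    size_t NumStages() const { return generators.size(); }

   private:
    std::vector<ModelType> generators;
    std::vector<ModelType> discriminators;
  };

The user would then pick the variant through the policy, e.g. something
like StackedGAN<FFN<>, JointTraining> versus
StackedGAN<FFN<>, StagewiseTraining>, where the policy names are again
just placeholders for whichever training schemes we decide to support.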

For the Capsule Networks, from what I have seen so far, I think I would
have to implement another class and could not use the FFN class directly,
since the basic building block of a capsule network is quite different
and the training also works quite differently. That is just my initial
impression, though; if you have some thoughts on it, I would love to hear
them.
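
For reference, this is the rough shape I would expect a capsule building
block to take if we tried to fit it into the existing layer interface,
mirroring the Forward()/Backward()/Gradient() methods the current layers
expose (the signatures below are simplified and the class itself is just
a placeholder I made up, not existing code). The routing iterations in
the forward pass are the part that I think does not map cleanly onto a
standard layer.

  // Placeholder sketch, not real mlpack code; signatures are simplified
  // relative to the actual layer API.
  #include <mlpack/core.hpp>

  template<typename MatType = arma::mat>
  class CapsuleLayer
  {
   public:
    CapsuleLayer(const size_t inCapsules,
                 const size_t outCapsules,
                 const size_t capsuleDim,
                 const size_t routingIterations = 3) :
        inCapsules(inCapsules),
        outCapsules(outCapsules),
        capsuleDim(capsuleDim),
        routingIterations(routingIterations)
    { }

    // The forward pass is where capsules differ from ordinary layers:
    // prediction vectors are combined by iterative routing-by-agreement
    // and squashed, instead of a fixed weighted sum plus activation.
    void Forward(const MatType& input, MatType& output);

    // Backward()/Gradient() would have to propagate through the routing
    // iterations as well, which is why I suspect the plain FFN machinery
    // is not enough on its own.
    void Backward(const MatType& input, const MatType& gy, MatType& g);
    void Gradient(const MatType& input, const MatType& error,
                  MatType& gradient);

   private:
    size_t inCapsules, outCapsules, capsuleDim, routingIterations;
    MatType weights;
  };

If something like this does fit the existing layer API after all, then
reusing the FFN class as you suggested would obviously be the cleaner
route.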

Thank You

On Fri, Feb 16, 2018 at 6:00 AM, Marcus Edel <marcus.edel at fu-berlin.de>
wrote:

> Hello Ayush,
>
> I had a talk with Shikhar yesterday and he told me that he is almost done
> with
> all the GAN implementation and would not require any help from my side.
> So, I
> would like to start working on Stacked GAN and Capsule Network for now if
> possible. Can you please let me know what should be my steps regarding
> this.
>
>
> We should start by thinking about how the interface should look; that
> depends on the idea. So in the case of Capsule Networks: what code could
> be reused, and what do we have to extend or add? Also, do we write
> another class that looks like the existing FFN class, or can we
> implement the Capsule Network as a layer and use the FFN class, just
> adding the Capsule Network as a layer?
>
> For the Stacked GAN project, can we reuse parts of the GAN code Shikhar
> is working on, or should we write a new class? If the second option is
> the better way, how should this class look?
>
> So, before we start with the actual implementation, we should start a
> discussion either here on the mailing list or on GitHub.
>
> I hope this is helpful, let me know if I should clarify anything.
>
> Thanks,
> Marcus
>
> On 15. Feb 2018, at 23:40, bansa031 University of Minnesota <
> bansa031 at umn.edu> wrote:
>
> Hey Ryan,
>
> I had a talk with Shikhar yesterday and he told me that he is almost done
> with all the GAN implementation and would not require any help from my
> side. So, I would like to start working on Stacked GAN and Capsule Network
> for now if possible. Can you please let me know what should be my steps
> regarding this.
>
> For now I have got a local copy of the mlpack library code and have
> started going through it.
>
>
> Thank You
>
> On Wed, Feb 14, 2018 at 9:23 AM, Ryan Curtin <ryan at ratml.org> wrote:
>
>> On Wed, Feb 14, 2018 at 05:29:39AM -0600, bansa031 University of
>> Minnesota wrote:
>> > Hi,
>> >
>> > I am Ayush Bansal, a second-year master's student at University of
>> > Minnesota, USA. I have a keen interest in the field of Deep Learning and
>> > Machine Learning. I have been using and exploring various deep learning
>> > models and techniques and am very interested in working on Generative
>> > Adversarial Networks. I have worked on them for my project last semester
>> > especially in the framework of Capsule Networks. If possible I would
>> like
>> > to work on the proposed idea of either Stacked Generative Adversarial
>> > Networks or Improved Techniques for Training GANs. I would love to
>> increase
>> > my knowledge in the field of GANs and possibly other models in Deep
>> > Learning.
>> >
>> > If allowed I would also like to implement Capsule Networks in MLPack.
>> All
>> > in all, I would like you to consider my ideas and give me guidance on
>> this
>> > project.
>>
>> Hi Ayush,
>>
>> Thanks for getting in touch.  I think capsule networks could be a nice
>> addition to mlpack, but a thing to keep in mind is that your proposal
>> should be pretty clear on the API that will be provided to users and how
>> users will use it.  For capsule networks, I know that training can also
>> be very slow, so we should focus on a good implementation that is as
>> fast as we can make it.
>>
>> For the GANs, we had a student (Kris Singh) work on implementing them
>> last year but the project was not fully completed.  I would suggest that
>> you take a look at the PRs that are currently open.  Shikhar Jaiswal has
>> been working on finishing the various parts of the components, so it
>> might be worthwhile to talk to him to see where he is and if there is
>> anything you can help with.  Stacked GANs or improved GAN training
>> techniques are a nice idea; a proposal for this should definitely take
>> account of the current GAN implementation status and figure out what
>> needs to be done there.
>>
>> I hope you are enjoying Minneapolis... I was there a while ago and liked
>> the city.  I wanted to jump into the Mississippi River but it was cold
>> and probably that is not a good idea in general... :)
>>
>> Thanks,
>>
>> Ryan
>>
>> --
>> Ryan Curtin    | "It is very cold... in space."
>> ryan at ratml.org |   - Khan
>>
>
>
>
> --
> Ayush Bansal
> University of Minnesota - Twin Cities - Class of 2018
> Financial Mathematics Major
> Mobile : 612-245-7925
> bansa031 at umn.edu | ayushb666 at gmail.com
>


-- 
Ayush Bansal
University of Minnesota - Twin Cities - Class of 2018
Financial Mathematics Major
Mobile : 612-245-7925
bansa031 at umn.edu | ayushb666 at gmail.com