[mlpack] Fwd: Variational Autoencoders and Reinforcement Learning

arjun k arjun.k018 at gmail.com
Mon Mar 12 21:45:25 EDT 2018


---------- Forwarded message ----------
From: arjun k <arjun.k018 at gmail.com>
Date: Mon, Mar 12, 2018 at 6:45 PM
Subject: Re: [mlpack] Variational Autoencoders and Reinforcement Learning
To: Marcus Edel <marcus.edel at fu-berlin.de>


Hi,

Thank you, Marcus, for the quick reply. That clarifies the doubts I had. I
am interested in both projects, reinforcement learning and variational
autoencoders, with almost equal importance to each. Is there any way I can
get involved in both projects, perhaps by focusing on one and having some
involvement in the other? In that case, how should I write a proposal to
this effect (as two separate proposals, or by mentioning my interest in
both under one proposal)?

On Mon, Mar 12, 2018 at 9:41 AM, Marcus Edel <marcus.edel at fu-berlin.de>
wrote:

> Hello Arjun,
>
> welcome and thanks for getting in touch.
>
> I am Arjun, currently pursuing my Master's in Computer Science at the
> University of Massachusetts, Amherst. I came across the variational
> autoencoders and reinforcement learning projects and they look very
> interesting. I hope I am not too late.
>
>
> The application phase opens today, so you are not too late.
>
> I am more interested in the reinforcement learning project, as it involves
> some research in a field that I am working on, and I would like to get
> involved. As I understand it, coding up an algorithm and implementing it
> in a single game would not be much of an issue. How many algorithms are
> proposed to be benchmarked against each other? Is there any new idea being
> tested, or is the research component the benchmark alone?
>
>
> Keep in mind that each method has to be tested and documented, which takes
> time, so my recommendation is to focus on one or two (depending on the
> method). The research component is two-fold: you could compare different
> algorithms, or improve/extend the method you are working on, e.g. by
> integrating another search strategy. But this isn't a must; the focus is
> to extend the existing framework.
>
> Regarding variational autoencoders, I am quite familiar with generative
> modeling, having worked on some research projects myself
> (https://arxiv.org/abs/1802.07401). Since a variational autoencoder is
> essentially just a training procedure, how abstracted do you intend the
> implementation to be? Should the framework allow the user to customize the
> underlying neural network and add additional features, or is it highly
> abstracted, with no control over the underlying architecture and the VAE
> usable only as a black box?
>
>
> Ideally, a user can modify the model structure based on the existing
> infrastructure; providing a black box is something that naturally follows
> from the first idea, and it could be realized in the form of a specific
> model, something like:
> https://github.com/mlpack/models/tree/master/Kaggle/DigitRecognizer
>
> I hope what I said was helpful; let me know if I should clarify anything.
>
> Thanks,
> Marcus
>
> On 11. Mar 2018, at 22:23, arjun k <arjun.k018 at gmail.com> wrote:
>
> Hi,
>
> I am Arjun, currently pursuing my Master's in Computer Science at the
> University of Massachusetts, Amherst. I came across the variational
> autoencoders and reinforcement learning projects and they look very
> interesting. I hope I am not too late.
>
> I am more interested in the reinforcement learning project, as it involves
> some research in a field that I am working on, and I would like to get
> involved. As I understand it, coding up an algorithm and implementing it
> in a single game would not be much of an issue. How many algorithms are
> proposed to be benchmarked against each other? Is there any new idea being
> tested, or is the research component the benchmark alone?
>
> Regarding variational autoencoders, I am quite familiar with generative
> modeling, having worked on some research projects myself
> (https://arxiv.org/abs/1802.07401). Since a variational autoencoder is
> essentially just a training procedure, how abstracted do you intend the
> implementation to be? Should the framework allow the user to customize the
> underlying neural network and add additional features, or is it highly
> abstracted, with no control over the underlying architecture and the VAE
> usable only as a black box?
>
> Thank you,
> Arjun Karuvally,
> College of Information and Computer Science,
> University of Massachusetts, Amherst.
> _______________________________________________
> mlpack mailing list
> mlpack at lists.mlpack.org
> http://knife.lugatgt.org/cgi-bin/mailman/listinfo/mlpack
>
>
>