[mlpack] Ideas for GSOC

Marcus Edel marcus.edel at fu-berlin.de
Sun Feb 3 08:19:14 EST 2019


Hello Xiaohong,

we've updated the RL idea; you are welcome to propose different methods that
aren't listed.

Thanks,
Marcus

> On 2. Feb 2019, at 14:46, problemset <problemset at 163.com> wrote:
> 
> 
> Hi, Manish, 
> 
> Great to know that more ideas will be added to the idea list. I will definitely try to add more cutting-edge RL techniques, and I plan to write a proposal for them; it will be completed soon. 
> 
> Regards,
> Xiaohong 
> 
> 
> 
> At 2019-02-01 11:29:06, "Manish Kumar" <manish887kr at gmail.com> wrote:
> Hi Xiaohong,
> 
> It's good to see some novel techniques being added to mlpack's RL framework. Thanks for working on PER; I am confident that it will be ready to merge soon enough.
> 
> Regarding the GSoC idea list, we are working on adding some recent RL ideas and will update the list very soon. You may also propose any idea you believe could be a good addition. 
> 
> Regards,
> Manish Kumar
> 
> 
> 
> On Fri, 1 Feb 2019, 07:33 problemset, <problemset at 163.com <mailto:problemset at 163.com>> wrote:
> Hello, everyone, 
> 
> I am Xiaohong Ji, an undergraduate student from Wuhan University. My research interests are machine learning and deep reinforcement learning, so I am interested in the reinforcement learning project. My first greeting email received a warm response from this great community. I am writing because I wish to apply for GSoC 2019 and am looking for a potential mentor.  :)
> 
> Currently, I am implementing the PER project <https://github.com/mlpack/mlpack/pull/1614>. I have finished all the functionality and addressed Manish Kumar's code review, and I want to go further. I saw the updated idea list and found many new interesting projects. I am wondering: if we want to apply for GSoC 2019 with the mlpack community, can we pick a reinforcement learning project as in the GSoC 2018 idea list, or should we focus on the projects provided in the current idea list? Other deep learning methods are also attractive to me, but I wish to go further in the reinforcement learning part. 
> 
> Thanks,
> Xiaohong
> 
> _______________________________________________
> mlpack mailing list
> mlpack at lists.mlpack.org
> http://knife.lugatgt.org/cgi-bin/mailman/listinfo/mlpack
