[mlpack] GSoC-2021

Marcus Edel marcus.edel at fu-berlin.de
Tue Mar 9 19:07:54 EST 2021


Hello Gopi M. Tatiraju,

thanks for reaching out; I like both ideas. I can see the first one
integrating nicely into the preprocessing pipeline; that said, it would be
useful to discuss the project's scope in more detail. Specifically, what
functionality would you like to add? You already implemented some features
in #2727, so I'm curious to hear what other features you have in mind.

The RL idea sounds interesting as well, and I think it could fit nicely into
the RL codebase that is already there. I'm curious what you mean by
"reward schemes"?

Thanks,
Marcus

> On 9. Mar 2021, at 14:55, Gopi Manohar Tatiraju <deathcoderx at gmail.com> wrote:
> 
> Hello mlpack,
> 
> I am Gopi Manohar Tatiraju, currently in my final year of engineering in India.
> 
> I've been working with mlpack for quite some time now, trying to contribute and learn from the community. I've received ample support along the way, which made learning really fun.
> 
> Now, as GSoC is back with its 2021 edition, I want to take this opportunity to learn from the mentors and contribute to the community.
> 
> I am planning to contribute to mlpack under GSoC 2021. Currently, I am working on a pandas-DataFrame-like class that makes it easier to inspect and analyze datasets.
> 
> Having a class like this would help when working with datasets, since ML is not only about the model but about the data as well.
> 
> I already have a PR open for this: https://github.com/mlpack/mlpack/pull/2727
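> 
> To make the idea concrete, here is a rough sketch of the kind of interface I have in mind (class and member names are illustrative only, not the actual API of #2727); the core is just an arma::mat plus column labels:
> 
> // Illustrative sketch only; not the actual API of #2727.
> #include <mlpack/core.hpp>
> #include <stdexcept>
> #include <string>
> #include <vector>
> #include <iostream>
> 
> class DataFrame
> {
>  public:
>   DataFrame(arma::mat data, std::vector<std::string> names) :
>       data(std::move(data)), names(std::move(names)) { }
> 
>   // Access a column by label.  mlpack stores observations as matrix
>   // columns, so a dataset "column" is a row of the underlying matrix.
>   arma::rowvec operator[](const std::string& name) const
>   {
>     for (size_t i = 0; i < names.size(); ++i)
>       if (names[i] == name)
>         return data.row(i);
>     throw std::invalid_argument("unknown column: " + name);
>   }
> 
>   // Pandas-style Describe(): per-column summary statistics.
>   void Describe(std::ostream& os = std::cout) const
>   {
>     for (size_t i = 0; i < names.size(); ++i)
>     {
>       os << names[i] << ": mean=" << arma::mean(data.row(i))
>          << " min=" << data.row(i).min()
>          << " max=" << data.row(i).max() << '\n';
>     }
>   }
> 
>  private:
>   arma::mat data;
>   std::vector<std::string> names;
> };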
> 
> I wanted to know if I could work on this for GSoC. It was not listed on the ideas page, but I think it would be the start of something useful and big.
> 
> If this idea doesn't seem workable right now, I'd like to implement RL environments for trading, with working examples for each env.
>  
> Concretely, I am planning to implement the building blocks of any RL system (a rough sketch of how these pieces could fit together follows this list):
> reward schemes
> action schemes
> environments
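> 
> Something like this (everything below is a hypothetical sketch, not existing mlpack code; it mirrors the State/Action/Sample() shape of mlpack's existing environments such as CartPole, with the action and reward schemes as policy template parameters):
> 
> // Hypothetical sketch, not existing mlpack code.
> #include <mlpack/core.hpp>
> 
> template<typename ActionScheme, typename RewardScheme>
> class TradingEnv
> {
>  public:
>   struct State
>   {
>     arma::vec features;  // Recent prices / indicators.
>     int position;        // -1 short, 0 flat, +1 long.
>   };
> 
>   using Action = typename ActionScheme::Action;
> 
>   // Apply the action, produce the next state, and return the reward,
>   // like Sample() in the environments mlpack already has.
>   double Sample(const State& state, const Action& action, State& nextState)
>   {
>     nextState = state;
>     nextState.position = actionScheme.Apply(state.position, action);
>     // Advancing nextState.features from market data is omitted here.
>     return rewardScheme.Reward(state, nextState);
>   }
> 
>  private:
>   ActionScheme actionScheme;
>   RewardScheme rewardScheme;
> };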
> 
> FinTech is a growing field, and deep Q-learning has many applications there.
> 
> I am planning to implement different strategies like Buy-Sell-Hold, long only, and short only.
> This would make the examples repo much richer in terms of DRL examples.
> We could even build a small backtesting module that runs backtests on our predictions, as sketched below.
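> 
> A minimal backtest could be as simple as this (names and signatures are illustrative, not an existing mlpack API): given a price series and a predicted position for each step, accumulate the PnL.
> 
> // Illustrative sketch; not an existing mlpack API.
> #include <mlpack/core.hpp>
> #include <iostream>
> 
> // Positions: -1 short, 0 flat, +1 long, one per step.
> double Backtest(const arma::vec& prices, const arma::ivec& positions)
> {
>   double pnl = 0.0;
>   for (size_t t = 1; t < prices.n_elem; ++t)
>     pnl += positions[t - 1] * (prices[t] - prices[t - 1]);
>   return pnl;
> }
> 
> int main()
> {
>   const arma::vec prices = {100, 101, 99, 102};
>   const arma::ivec positions = {1, 1, -1, 0};  // Long, long, short, flat.
>   std::cout << "PnL: " << Backtest(prices, positions) << std::endl;
> }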
> 
> There are some Python libraries already working on such models that we can use as references going forward, for example FinRL: https://github.com/AI4Finance-LLC/FinRL-Library
> 
> Planning to implement: 
> 
> Different types of envs for different kinds of financial tasks:
> single stock trading env
> multi stock trading env
> portfolio selection env
> Some example envs in Python: https://github.com/AI4Finance-LLC/FinRL-Library/tree/master/finrl/env
> 
> Different types of action_schemes:
> make only long trades
> make only short trades
> make both long and short
> BHS (Buy-Hold-Sell)
> Example action_schemes: https://github.com/tensortrade-org/tensortrade/blob/master/tensortrade/env/default/actions.py
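> 
> For example, a BHS scheme could look roughly like this (illustrative sketch only):
> 
> // Illustrative sketch only.  Maps discrete agent actions to positions,
> // so the env itself can stay agnostic of trading semantics.
> enum class BHSAction { Buy, Hold, Sell };
> 
> struct BHSScheme
> {
>   using Action = BHSAction;
> 
>   // Position after the action: +1 after Buy, 0 after Sell,
>   // unchanged on Hold.
>   int Apply(const int position, const Action action) const
>   {
>     switch (action)
>     {
>       case Action::Buy:  return 1;
>       case Action::Sell: return 0;
>       default:           return position;  // Hold.
>     }
>   }
> };
> 
> A long/short scheme would have the same shape, with Sell mapping to -1 instead of 0.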
> 
> There we can see classes like BHS, SimpleOrder, etc.
> 
> Different types of reward_schemes:
> simple reward
> risk-adjusted reward
> position-based reward
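> 
> The first two might look like this (again just a sketch, not existing mlpack code; a position-based reward would additionally look at how long a position has been held):
> 
> // Illustrative sketches only.
> #include <mlpack/core.hpp>
> 
> // Simple reward: raw PnL of one step, given the held position.
> struct SimpleReward
> {
>   double Reward(const int position, const double prevPrice,
>                 const double price) const
>   {
>     return position * (price - prevPrice);
>   }
> };
> 
> // Risk-adjusted reward: mean step return divided by its standard
> // deviation over a trailing window (a Sharpe-like ratio).
> struct RiskAdjustedReward
> {
>   double Reward(const arma::vec& windowReturns) const
>   {
>     const double sd = arma::stddev(windowReturns);
>     return (sd > 0.0) ? arma::mean(windowReturns) / sd : 0.0;
>   }
> };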
> 
> For the past three months, I've been working as an ML researcher at a FinTech startup, focusing on exactly this.
>  
> I would love to hear your feedback and suggestions.
> 
> Regards,
> Gopi M. Tatiraju
