[mlpack] Structuring mlpack RL

Ryan Curtin ryan at ratml.org
Thu Mar 12 19:30:35 EDT 2020


On Fri, Feb 28, 2020 at 04:40:30PM +0000, Adithya Praveen wrote:
> Hey there! Adi here.
> From what I understand, right now DQNs and other methods in mlpack
> are implemented only in the test file for "q_learning", and are
> limited to the environments implemented in mlpack. If I'm not
> mistaken, these environments return state spaces / observations that
> are limited to vectors.

Hey Adi,

Thanks for getting in touch.  I'm not 100% familiar with the
reinforcement learning code, but I believe you're right that the only
implementation and usage are in the test file.
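For reference, here is a rough, untested sketch (written from memory)
of how that test wires up a DQN agent on CartPole.  The exact names,
constructor arguments, and hyperparameters should be checked against
q_learning_test.cpp:

    #include <mlpack/core.hpp>
    #include <ensmallen.hpp>
    #include <mlpack/methods/ann/ffn.hpp>
    #include <mlpack/methods/reinforcement_learning/q_learning.hpp>
    #include <mlpack/methods/reinforcement_learning/environment/cart_pole.hpp>
    #include <mlpack/methods/reinforcement_learning/policy/greedy_policy.hpp>
    #include <mlpack/methods/reinforcement_learning/replay/random_replay.hpp>
    #include <mlpack/methods/reinforcement_learning/training_config.hpp>

    using namespace mlpack::ann;
    using namespace mlpack::rl;

    int main()
    {
      // Q-network: CartPole's 4-dimensional state in, one estimated
      // value per action (2 actions) out.
      FFN<MeanSquaredError<>, GaussianInitialization> model(
          MeanSquaredError<>(), GaussianInitialization(0, 0.001));
      model.Add<Linear<>>(4, 128);
      model.Add<ReLULayer<>>();
      model.Add<Linear<>>(128, 2);

      // Epsilon-greedy exploration (annealed from 1.0 down to 0.1)
      // and a uniform replay buffer (batch size 10, capacity 10000).
      GreedyPolicy<CartPole> policy(1.0, 1000, 0.1);
      RandomReplay<CartPole> replayMethod(10, 10000);

      TrainingConfig config;
      config.StepSize() = 0.01;
      config.Discount() = 0.99;
      config.TargetNetworkSyncInterval() = 100;
      config.ExplorationSteps() = 100;

      // QLearning ties the environment, network, policy, and replay
      // buffer together; Episode() runs one episode and trains on it.
      QLearning<CartPole, decltype(model), ens::AdamUpdate,
          decltype(policy)> agent(std::move(config), std::move(model),
          std::move(policy), std::move(replayMethod));

      for (size_t i = 0; i < 200; ++i)
        agent.Episode();
    }

Note that the environment is a compile-time template parameter there,
which is probably part of why everything currently lives in the test
file and only works with mlpack's built-in environments.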

I actually can't really answer the first question, but maybe I can
provide an opinion on the second one.

> 2. I would also like to know if adding an "agents" folder with
> different RL algos to the RL section makes sense? You know, so that
> mlpack users could just create, say, a "DQN Agent" for an environment
> and try running it.

Personally, I think that this would be cool.  Take a look at the issue
opened up for discussion in the models/ repository:

https://github.com/mlpack/models/issues/61

The way that discussion is going, it seems like some implementations of
RL tasks might fit nicely into that repository.  But perhaps something
more general can be made---I'm not sure exactly what you have in mind.
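Just to make the idea concrete, maybe what you have in mind is
something like the sketch below.  To be clear, this is entirely
hypothetical: a DQNAgent class like this doesn't exist anywhere in
mlpack, and it's really just the existing QLearning class wrapped up
with default settings:

    #include <memory>
    #include <mlpack/core.hpp>
    #include <ensmallen.hpp>
    #include <mlpack/methods/ann/ffn.hpp>
    #include <mlpack/methods/reinforcement_learning/q_learning.hpp>
    #include <mlpack/methods/reinforcement_learning/environment/cart_pole.hpp>
    #include <mlpack/methods/reinforcement_learning/policy/greedy_policy.hpp>
    #include <mlpack/methods/reinforcement_learning/replay/random_replay.hpp>
    #include <mlpack/methods/reinforcement_learning/training_config.hpp>

    using namespace mlpack::ann;
    using namespace mlpack::rl;

    // Hypothetical convenience wrapper: DQN with default settings for
    // any built-in environment type.  (This class does not exist in
    // mlpack; it is only a sketch of what an agents/ folder could hold.)
    template<typename EnvironmentType>
    class DQNAgent
    {
     public:
      using NetworkType = FFN<MeanSquaredError<>, GaussianInitialization>;
      using AgentType = QLearning<EnvironmentType, NetworkType,
          ens::AdamUpdate, GreedyPolicy<EnvironmentType>>;

      DQNAgent()
      {
        // Default Q-network sized from the environment's state and
        // action spaces (compile-time constants in mlpack).
        NetworkType model(MeanSquaredError<>(),
            GaussianInitialization(0, 0.001));
        model.Add<Linear<>>(EnvironmentType::State::dimension, 128);
        model.Add<ReLULayer<>>();
        model.Add<Linear<>>(128, EnvironmentType::Action::size);

        GreedyPolicy<EnvironmentType> policy(1.0, 1000, 0.1);
        RandomReplay<EnvironmentType> replay(10, 10000);

        TrainingConfig config;
        config.StepSize() = 0.01;
        config.Discount() = 0.99;

        agent.reset(new AgentType(std::move(config), std::move(model),
            std::move(policy), std::move(replay)));
      }

      // Run one training episode; returns the episode's total reward.
      double Episode() { return agent->Episode(); }

     private:
      std::unique_ptr<AgentType> agent;
    };

    int main()
    {
      // The two lines a user would actually have to write.
      DQNAgent<CartPole> agent;
      for (size_t i = 0; i < 100; ++i)
        agent.Episode();
    }

If that's roughly the shape of what you're after, the models/ issue
above is probably the right place to figure out where it should live.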

As a side note, your email didn't directly address this, but I think it
would be great if we could come up with a tutorial or some kind of
demonstration/example of how mlpack's reinforcement learning code could
actually be used for real-world tasks.  Perhaps that might be worth
thinking about too?  I'm not sure whether any efforts toward that are
already underway.

Thanks!

Ryan

-- 
Ryan Curtin    | "Are those... live rounds?"
ryan at ratml.org | "Seven-six-two millimeter.  Full metal jacket."

