[mlpack] Stop gradient in mlpack ANN module
problemset
problemset at 163.com
Fri Feb 15 21:33:32 EST 2019
Hi, Ryan,
Yes, that solution may work for some cases, but I am not sure it can handle more complicated ones. For example, suppose there are three models, Q1, Q2, and V, and two loss functions, Loss1 and Loss2. Loss1 is computed from the outputs of V and Q1 (RMSE or some other loss), but we backpropagate the gradient only to model V's parameters, not to model Q1's parameters.
This is a simplified abstraction of the SAC model, and I am sure there are even more diverse demands on the ANN module. I still need more time to figure out how to achieve this; I hope our discussion will spark some ideas about it.
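To make the selective backpropagation concrete, here is a hand-rolled scalar sketch: a squared-error loss between the outputs of V and Q1 where Q1's output is treated as a constant (detached), so the gradient reaches only V. The struct and function names are hypothetical, purely for illustration:

```cpp
// Hypothetical sketch: loss = (v - q1)^2, where q1 is "detached",
// i.e. treated as a constant during backpropagation.
struct LossGrads
{
  double dV;  // gradient of the loss w.r.t. V's output.
  double dQ1; // gradient of the loss w.r.t. Q1's output.
};

LossGrads SquaredErrorDetachedQ1(double v, double q1)
{
  LossGrads g;
  g.dV  = 2.0 * (v - q1); // flows back into V's parameters.
  g.dQ1 = 0.0;            // detached: nothing reaches Q1's parameters.
  return g;
}
```

With v = 3 and q1 = 1, dV is 4 and dQ1 is 0, which is exactly the behavior PyTorch's detach() or TensorFlow's stop_gradient would give here.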
I am not sure I have explained the case clearly, so please let me know if you need more information.
Regards,
Xiaohong
At 2019-02-15 10:15:27, "Ryan Curtin" <ryan at ratml.org> wrote:
>On Fri, Feb 15, 2019 at 09:35:03AM +0800, problemset wrote:
>> Hi, all,
>>
>> Nowadays, as ML/DL/RL develop quickly, there is more demand for
>> flexibility in the ANN module. I am wondering whether there is a way
>> to stop gradient backpropagation through a particular layer in
>> mlpack, like PyTorch's detach() or TensorFlow's stop_gradient.
>
>Hey there Xiaohong,
>
>Could we perhaps create a layer that simply doesn't pass a gradient
>through?
>
>That may not be the best solution (in fact I am sure it is not) but it
>could at least be a start.
>
>--
>Ryan Curtin | "I know... but I really liked those ones."
>ryan at ratml.org | - Vincent
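The pass-through layer Ryan suggests could be sketched roughly like this (a hypothetical layer, not mlpack's actual layer API; mlpack layers operate on Armadillo matrices, but std::vector is used here to keep the sketch self-contained):

```cpp
#include <vector>

// Hypothetical "StopGradient" layer sketch: the forward pass is the
// identity, and the backward pass replaces the incoming gradient with
// zeros, so no gradient propagates to earlier layers.
struct StopGradient
{
  // Forward: identity, just copy the input through.
  void Forward(const std::vector<double>& input, std::vector<double>& output)
  {
    output = input;
  }

  // Backward: zero out the gradient passed to the previous layer.
  void Backward(const std::vector<double>& gy, std::vector<double>& g)
  {
    g.assign(gy.size(), 0.0);
  }
};
```

Dropping such a layer between Q1 and Loss1 would let the loss be computed from Q1's output while leaving Q1's parameters untouched by that loss's gradient.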