mlpack IRC logs, 2018-03-19

Logs for the day 2018-03-19 (starts at 0:00 UTC) are shown below.

March 2018
--- Log opened Mon Mar 19 00:00:58 2018
00:04 < MystikNinja> There are a number of compilation errors. It seems that there has been some change in the API since the mvu code was written; for example, the mvuSolver object is called with the wrong number of parameters in the constructor (2 instead of 4). Is there something simple I'm missing, or will the mvu code have to be significantly re-written?
00:13 < zoq> MystikNinja: Not sure if a complete re-implementation is necessary at this point, but you definitely have to update some line here and there.
00:14 < MystikNinja> zoq: It would probably just involve re-writing the offending calls to match the current APIs. I'll try doing it and see what happens.
00:17 < zoq> MystikNinja: You are probably right, let me know if you need any help.
00:24 -!- MystikNinja [980e8ebd@gateway/web/freenode/ip.] has quit [Quit: Page closed]
01:56 -!- travis-ci [] has joined #mlpack
01:56 < travis-ci> ShikharJ/mlpack#111 (RBM - fe8ecee : Shikhar Jaiswal): The build has errored.
01:56 < travis-ci> Change view :
01:56 < travis-ci> Build details :
01:56 -!- travis-ci [] has left #mlpack []
02:30 -!- csoni2 [~csoni@] has joined #mlpack
02:32 -!- csoni [~csoni@] has quit [Ping timeout: 264 seconds]
02:45 -!- sooham [49fcce4b@gateway/web/freenode/ip.] has joined #mlpack
02:48 -!- travis-ci [] has joined #mlpack
02:48 < travis-ci> ShikharJ/mlpack#112 (GAN - 0428a92 : Shikhar Jaiswal): The build has errored.
02:48 < travis-ci> Change view :
02:48 < travis-ci> Build details :
02:48 -!- travis-ci [] has left #mlpack []
02:49 -!- csoni2 [~csoni@] has quit [Read error: Connection reset by peer]
02:59 -!- csoni [~csoni@] has joined #mlpack
03:04 -!- csoni [~csoni@] has quit [Read error: Connection reset by peer]
03:27 -!- csoni [~csoni@] has joined #mlpack
03:43 -!- csoni [~csoni@] has quit [Read error: Connection reset by peer]
04:51 -!- moksh [daf82e6b@gateway/web/freenode/ip.] has joined #mlpack
04:55 < moksh> Hey @zoq, @rcurtin. The mlpack/models repo currently has just one model, for digit recognition, and an open PR for an LSTM model. Can you suggest a model that would be a good addition to the repository? I would like to work on that.
05:26 -!- moksh [daf82e6b@gateway/web/freenode/ip.] has quit [Quit: Page closed]
05:58 -!- sooham [49fcce4b@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
06:09 -!- Nisha_ [82f5c01a@gateway/web/freenode/ip.] has joined #mlpack
06:09 -!- Nisha_ [82f5c01a@gateway/web/freenode/ip.] has quit [Client Quit]
06:27 -!- rf_sust2018 [~flyingsau@] has joined #mlpack
06:30 -!- Nisha_ [82f5c01a@gateway/web/freenode/ip.] has joined #mlpack
06:56 -!- Nisha_ [82f5c01a@gateway/web/freenode/ip.] has quit [Quit: Page closed]
06:57 -!- nishagandhi [82f5c01a@gateway/web/freenode/ip.] has joined #mlpack
06:57 -!- nishagandhi [82f5c01a@gateway/web/freenode/ip.] has quit [Client Quit]
06:58 -!- Nisha_ [82f5c01a@gateway/web/freenode/ip.] has joined #mlpack
06:58 -!- Nisha_ [82f5c01a@gateway/web/freenode/ip.] has quit [Client Quit]
06:59 -!- nishagandhi [82f5c01a@gateway/web/freenode/ip.] has joined #mlpack
06:59 -!- nishagandhi [82f5c01a@gateway/web/freenode/ip.] has quit [Client Quit]
07:02 -!- Nisha_ [82f5c01a@gateway/web/freenode/ip.] has joined #mlpack
07:03 -!- Nisha_ [82f5c01a@gateway/web/freenode/ip.] has quit [Client Quit]
07:22 -!- Nisha_ [82f5c01a@gateway/web/freenode/ip.] has joined #mlpack
08:09 < Nisha_> Hi @zoq, @rcurtin, can SVMs be implemented in mlpack? I was thinking along the lines of optimizing the hinge loss by stochastic batch gradient descent. Also, I have experience with LSTM recurrent neural network models. I was wondering if you could point me in the right direction for working on SVMs / LSTM models. Thank you :)
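[Editor's note: as an illustration of the approach Nisha_ describes, here is a minimal sketch of a linear SVM trained by stochastic subgradient descent on the regularized hinge loss. All names (`LinearSVM`, `Epoch`, `Classify`) are hypothetical; this is not mlpack's API, just a self-contained toy in the spirit of Pegasos-style solvers.]

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal linear SVM trained by stochastic subgradient descent on the
// regularized hinge loss: L(w) = lambda/2 * |w|^2 + max(0, 1 - y * <w, x>).
// Illustrative sketch only; a real implementation would use Armadillo
// matrices and mini-batches.
struct LinearSVM {
  std::vector<double> w;   // weight vector
  double b = 0.0;          // bias term (not regularized)
  double lambda;           // regularization strength

  LinearSVM(std::size_t dim, double lambda) : w(dim, 0.0), lambda(lambda) {}

  double Decision(const std::vector<double>& x) const {
    double s = b;
    for (std::size_t i = 0; i < w.size(); ++i) s += w[i] * x[i];
    return s;
  }

  // One pass of SGD over the data; labels are +1 / -1.
  void Epoch(const std::vector<std::vector<double>>& X,
             const std::vector<int>& y, double stepSize) {
    for (std::size_t n = 0; n < X.size(); ++n) {
      const double margin = y[n] * Decision(X[n]);
      for (std::size_t i = 0; i < w.size(); ++i) {
        double grad = lambda * w[i];                 // regularizer gradient
        if (margin < 1.0) grad -= y[n] * X[n][i];    // hinge subgradient
        w[i] -= stepSize * grad;
      }
      if (margin < 1.0) b += stepSize * y[n];
    }
  }

  int Classify(const std::vector<double>& x) const {
    return Decision(x) >= 0.0 ? 1 : -1;
  }
};
```

On a linearly separable toy set, a few epochs are enough for the decision function to separate the two classes.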
08:11 -!- Nisha_ [82f5c01a@gateway/web/freenode/ip.] has quit [Quit: Page closed]
08:48 -!- csoni [~csoni@] has joined #mlpack
09:08 -!- ketuls [0e8b7a78@gateway/web/freenode/ip.] has joined #mlpack
09:13 -!- rf_sust2018 [~flyingsau@] has quit [Quit: Leaving.]
09:14 -!- mohaxxpop [~xvdqosac@2400:6180:0:d0::ce7:7001] has joined #mlpack
09:14 -!- mohaxxpop [~xvdqosac@2400:6180:0:d0::ce7:7001] has quit [Client Quit]
09:17 -!- ketuls [0e8b7a78@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
10:44 -!- csoni [~csoni@] has quit [Ping timeout: 240 seconds]
11:05 < zoq> moksh: Yes, please feel free :)
11:08 < zoq> Nisha_: Hello there, if you are going to write an SVM implementation, keep in mind that it should outperform libsvm, which might be pretty difficult, but if you're up for a challenge, then that might be a good one :)
11:08 < zoq> Nisha_: Since you are interested in LSTMs, you might find Quasi-Recurrent Neural Networks interesting as well.
12:34 -!- csoni [~csoni@] has joined #mlpack
12:38 -!- csoni [~csoni@] has quit [Ping timeout: 240 seconds]
12:56 < Atharva> Should I put the entire API in the proposal or should I give a link to the file?
12:59 -!- travis-ci [] has joined #mlpack
12:59 < travis-ci> ShikharJ/mlpack#113 (GAN - ecd920b : Shikhar Jaiswal): The build has errored.
12:59 < travis-ci> Change view :
12:59 < travis-ci> Build details :
12:59 -!- travis-ci [] has left #mlpack []
13:00 < zoq> Atharva: This is up to you.
13:04 < Atharva> zoq: I will probably give a link to the cpp file and explain in the proposal how I will implement each function.
13:47 -!- csoni [~csoni@] has joined #mlpack
13:58 -!- csoni [~csoni@] has quit [Ping timeout: 240 seconds]
14:00 -!- csoni [~csoni@] has joined #mlpack
14:17 -!- travis-ci [] has joined #mlpack
14:17 < travis-ci> mlpack/mlpack#4452 (master - 1ee8268 : Ryan Curtin): The build has errored.
14:17 < travis-ci> Change view :
14:17 < travis-ci> Build details :
14:17 -!- travis-ci [] has left #mlpack []
14:25 -!- rajiv_ [cb81c382@gateway/web/freenode/ip.] has joined #mlpack
14:25 < rajiv_> In the proposal timeline, how much time should I allocate for the 2nd and the final evaluations?
14:27 < Atharva> zoq: there is an FFN constructor that allows us to set the predictors and responses when creating the FFN object, but all the Train() methods defined ask for predictors and responses.
14:27 < Atharva> Shouldn’t there be a definition of the Train() method that uses the predictors and responses stored by the constructor?
14:28 < rcurtin> rajiv_: no need to allocate time for the evaluations themselves
14:28 < rcurtin> that's the job of the mentor
14:34 -!- rajiv_ [cb81c382@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
14:55 -!- donjin_master [9d25b6b8@gateway/web/freenode/ip.] has joined #mlpack
14:59 < donjin_master> Hello everyone, I want to draft my proposal on the reinforcement learning project for GSoC '18. Should I implement deep Q-learning with experience replay over the summer?
15:00 < donjin_master> I am a little bit confused about how many algorithms we would have to implement over the summer.
15:07 -!- csoni [~csoni@] has quit [Ping timeout: 276 seconds]
15:15 < zoq> Atharva: If you think that is reasonable, sure.
15:16 < zoq> donjin_master: Hello there, the number depends on the complexity of the method you are interested in.
15:17 < Atharva> zoq: I think for that definition, if someone calls Train() even when they have not set the predictors and responses, we can just throw an error.
15:17 < Atharva> But otherwise, calling Train() is very intuitive when someone has already set the training data during construction of the object.
15:19 < Atharva> Should I open a PR for this?
15:20 -!- donjin_master [9d25b6b8@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
15:23 < rcurtin> Atharva: no, I disagree on this one---if you construct the object with given predictors and responses, then Train() should be directly called by that constructor (like the other mlpack algorithms)
15:23 < rcurtin> but it doesn't really make sense to call Train() again after that
15:31 < Atharva> Okay, but in this case, the constructor is not training the network even when data is provided.
15:32 < Atharva> We could change that and Train it in the constructor as you mentioned.
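[Editor's note: the convention rcurtin describes, where a data-taking constructor calls Train() directly and every Train() overload takes the data explicitly, can be sketched as below. The `MeanModel` class is a made-up stand-in for FFN, used only to keep the example self-contained.]

```cpp
#include <cassert>
#include <vector>

// Sketch of the constructor/Train() convention discussed above, using a
// trivial "model" (a mean predictor) in place of FFN. If data is passed at
// construction time, the constructor trains immediately; there is no
// argument-free Train() that relies on stored data.
class MeanModel {
 public:
  MeanModel() = default;

  // Convenience constructor: train immediately on the given responses.
  explicit MeanModel(const std::vector<double>& responses) { Train(responses); }

  // Every Train() overload takes the data explicitly.
  void Train(const std::vector<double>& responses) {
    double sum = 0.0;
    for (double r : responses) sum += r;
    mean = responses.empty() ? 0.0 : sum / responses.size();
    trained = true;
  }

  double Predict() const { return mean; }
  bool Trained() const { return trained; }

 private:
  double mean = 0.0;
  bool trained = false;
};
```

This avoids the ambiguous state where a model holds training data but has not been trained, which is the situation Atharva points out.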
15:35 -!- arsh [9d27d83c@gateway/web/freenode/ip.] has joined #mlpack
15:43 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has joined #mlpack
15:50 -!- rajiv_ [0e8ba202@gateway/web/freenode/ip.] has joined #mlpack
15:51 -!- rajiv_ [0e8ba202@gateway/web/freenode/ip.] has quit [Client Quit]
16:00 < rcurtin> Atharva: that would be my suggestion; let's see what Marcus thinks
16:00 < Atharva> Yeah
16:02 -!- arsh [9d27d83c@gateway/web/freenode/ip.] has quit [Quit: Page closed]
16:10 -!- Nisha_ [82f5c01a@gateway/web/freenode/ip.] has joined #mlpack
16:34 < Nisha_> Hi @zoq, thank you for your suggestion. I will look into quasi-recurrent neural networks. I am currently reading papers on QRNN and will think about how to go about implementing this in mlpack. Am I thinking in the right direction? And could you give me details about what would be expected in an implementation of QRNN? Thanks, Nisha Gandhi
16:40 -!- sourabhvarshney1 [0e8b7937@gateway/web/freenode/ip.] has joined #mlpack
16:42 < zoq> Atharva: The FFN/RNN class is kinda special at this point, since currently you can't pass layer information at construction time; you have to use Add(..).
16:42 < sourabhvarshney1> @zoq I was going through the rnn code base. I found that in the first constructor, there are no predictors and responses set. Does the comment imply implicit predictors and responses?
16:44 < Atharva> zoq: oh, yeah, that didn’t cross my mind. Then, what do you think we should do? Define another Train() function?
16:50 < zoq> Atharva: I think we could remove the extra constructor.
16:52 < Atharva> zoq: yeah, that works too, but is there some reason you don’t want a Train() function?
16:52 < sourabhvarshney1> @zoq maybe I went in the wrong direction.
16:53 -!- csoni [~csoni@] has joined #mlpack
16:57 -!- sourabhvarshney1 [0e8b7937@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
16:57 -!- sourabhvarshney [75efcce2@gateway/web/freenode/ip.] has joined #mlpack
17:00 -!- sourabhvarshney1 [73f840a9@gateway/web/freenode/ip.] has joined #mlpack
17:01 -!- sourabhvarshney [75efcce2@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
17:03 < sourabhvarshney1> zoq: I just found the same thing as @Atharva did. I think that constructor needs to be either modified or removed, because every Train() method requires predictors and responses. Either way could work. Also, some comments need to be modified. Should I open a PR to do that?
17:10 -!- sourabhvarshney1 [73f840a9@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
17:12 -!- sourabhvarshney1 [75efd264@gateway/web/freenode/ip.] has joined #mlpack
17:14 < zoq> sourabhvarshney1: You are right, personally I would remove the constructor.
17:15 < zoq> I have to check the hpt module to see if it requires the constructor.
17:17 < sourabhvarshney1> zoq: Also, the comment on the above constructor says it creates the RNN object with the given predictor and response set, but the constructor does not take these. Should I modify the comment?
17:24 -!- csoni [~csoni@] has quit [Read error: Connection reset by peer]
17:26 < Atharva> sourabhvarshney1: Are you going to remove the constructor from ANN as well?
17:27 < Nisha_> zoq: thank you for your suggestion. I will look into quasi-recurrent neural networks. I am currently reading papers on QRNN and will think about how to go about implementing this in mlpack. Could you give me details about what would be expected in an implementation of QRNN?
17:31 -!- yashsharan [6741c40a@gateway/web/freenode/ip.] has joined #mlpack
17:32 -!- csoni [~csoni@] has joined #mlpack
17:39 < sourabhvarshney1> Atharva: Yes, I can. But I think Marcus is doing it.
17:40 < yashsharan> @zoq: I have submitted the draft of my proposal. Kindly review it and suggest any changes that would be required. Thank you.
17:51 < zoq> Nisha_: In case of QRNN, we have to write a separate class similar to the existing FFN/RNN class, which enables us to add a layer and train the model.
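[Editor's note: a bare-bones illustration of the container pattern zoq describes, where layers cannot be passed at construction time and are attached with Add(...). `NetworkSkeleton` is a hypothetical name, and a "layer" here is just a scalar function so the sketch stays self-contained; real mlpack layers are templated classes.]

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

// Skeleton of a network container in the style of the existing FFN/RNN
// classes: construct empty, add layers with Add(...), then run the model.
class NetworkSkeleton {
 public:
  using Layer = std::function<double(double)>;

  // Layers cannot be passed at construction time; use Add(...).
  void Add(Layer layer) { layers.push_back(std::move(layer)); }

  // Forward pass: feed the input through all layers in order.
  double Forward(double input) const {
    double x = input;
    for (const auto& layer : layers) x = layer(x);
    return x;
  }

  std::size_t NumLayers() const { return layers.size(); }

 private:
  std::vector<Layer> layers;
};
```

A QRNN class would follow the same shape, with Train()/Predict() methods on top of the layer list.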
17:52 < zoq> Atharva: sourabhvars: If either one of you likes to open a PR with the changes, please feel free.
17:53 < zoq> yashsharan: Okay, I'll take a look once I have a chance.
17:57 < rcurtin> zoq: it looks like the GradientBatchNormLayerTest from #1275 is failing; I have been working with it locally and it looks like it fails about 50% of the time with different random seeds
17:57 < rcurtin> the CheckGradient() difference is often between 0.001 and 0.002, so to fix it I would have to adjust the tolerance
17:58 < Nisha_> Okay, thanks @zoq. I will look into it.
17:58 < rcurtin> but it seems to me like the tolerances are already very large, so much so that I wonder if anything is wrong---in the BatchNormTest where we compare with another implementation, the tolerance is 0.1%, which seems a little bit high to me
17:58 < rcurtin> I wanted to see what you thought before I dig further... does this seem reasonable to you? or do you think it's likely that there is a bug in the implementation?
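[Editor's note: for context on the tolerance discussion, here is a sketch of what a gradient test typically checks; the names (`GradientError`) and the scalar setting are hypothetical, not mlpack's actual CheckGradient(). The analytic derivative is compared against a central finite difference, and the relative error is then held against a tolerance.]

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <functional>

// Relative error between an analytic derivative and a central finite
// difference of f at x. A gradient test would assert that this error is
// below some tolerance; how tight that tolerance can be is exactly the
// question discussed above.
double GradientError(const std::function<double(double)>& f,
                     const std::function<double(double)>& analyticGrad,
                     double x, double h = 1e-5) {
  const double numeric = (f(x + h) - f(x - h)) / (2.0 * h);
  const double analytic = analyticGrad(x);
  const double denom = std::max(std::abs(numeric) + std::abs(analytic), 1e-12);
  return std::abs(numeric - analytic) / denom;
}
```

For a correct gradient the error is tiny (on the order of h squared), so a tolerance of 0.001 to 0.002 is indeed large enough that it can mask a real bug.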
18:00 < zoq> rcurtin: hm, sounds like a bug to me, I'll recheck the gradient pass later today. If you like we can comment out the test for now.
18:01 < rcurtin> there's no hurry, let me know what you find when you take a look
18:01 < rcurtin> I'm still working on the PR for the random test fixes and this one was new so I looked into it quickly :)
18:02 < zoq> rcurtin: Sure, currently working on the memory issues Eugene pointed out.
18:07 < rcurtin> sounds good
18:09 -!- Nisha_ [82f5c01a@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
18:21 -!- sourabhvarshney1 [75efd264@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
18:25 -!- sourabhvarshney1 [dce3cb82@gateway/web/freenode/ip.] has joined #mlpack
18:26 < sourabhvarshney1> zoq: Atharva: I would like to work on the issue if you have no problem
18:26 < sourabhvarshney1> guys
18:41 -!- haritha1313 [0e8bf0fb@gateway/web/freenode/ip.] has joined #mlpack
18:44 < haritha1313> @rcurtin: @zoq: I am working on my proposal for GSoC, and I plan to focus it on neural collaborative filtering. What would you suggest for benchmarking it?
18:44 < haritha1313> There is an existing Python implementation of NCF which reports hit ratio and NDCG metrics, whereas the mlpack implementation and Python implementations focus on RMSE.
18:46 < haritha1313> The NCF paper has its own performance comparisons with other existing methods, so I would like to know your opinion on benchmarking it myself, and if so, which metric would be preferable?
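[Editor's note: the two ranking metrics haritha1313 mentions can be sketched as below for the common leave-one-out evaluation, where each user has one held-out relevant item and the model produces a top-k ranked list. Function names are hypothetical; this is a generic sketch, not code from any NCF implementation.]

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hit ratio@k for one user: 1 if the held-out item appears anywhere in the
// top-k ranked list, else 0.
double HitRatioAtK(const std::vector<int>& rankedItems, int heldOutItem) {
  for (int item : rankedItems)
    if (item == heldOutItem) return 1.0;
  return 0.0;
}

// NDCG@k for one user with a single relevant item: rewards ranking the item
// near the top via 1 / log2(rank + 2), where rank is 0-based. The ideal DCG
// is 1, so no extra normalization is needed.
double NDCGAtK(const std::vector<int>& rankedItems, int heldOutItem) {
  for (std::size_t rank = 0; rank < rankedItems.size(); ++rank)
    if (rankedItems[rank] == heldOutItem)
      return 1.0 / std::log2(rank + 2.0);
  return 0.0;
}
```

Benchmark numbers are then the averages of these per-user scores over all test users; unlike RMSE, both metrics evaluate ranking quality rather than rating reconstruction.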
19:01 < Atharva> sourabhvarshney1: Okay, you can remove the constructor from FFN as well.
19:02 < sourabhvarshney1> Thanks
19:11 -!- nikhilweee [~nikhilwee@] has joined #mlpack
19:32 -!- yashsharan [6741c40a@gateway/web/freenode/ip.] has quit [Quit: Page closed]
19:53 -!- csoni [~csoni@] has quit [Read error: Connection reset by peer]
20:20 -!- __amir__ [uid287022@gateway/web/] has quit [Quit: Connection closed for inactivity]
20:26 -!- sourabhvarshney1 [dce3cb82@gateway/web/freenode/ip.] has quit [Quit: Page closed]
21:02 -!- ImQ009_ [~ImQ009@unaffiliated/imq009] has joined #mlpack
21:05 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has quit [Ping timeout: 256 seconds]
21:13 -!- ImQ009_ [~ImQ009@unaffiliated/imq009] has quit [Quit: Leaving]
21:57 -!- kgytfd [] has joined #mlpack
21:57 -!- kgytfd [] has quit [Client Quit]
22:14 -!- haritha1313 [0e8bf0fb@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
--- Log closed Tue Mar 20 00:00:00 2018