mlpack IRC logs, 2018-07-24

Logs for the day 2018-07-24 (starts at 0:00 UTC) are shown below.

--- Log opened Tue Jul 24 00:00:00 2018
02:19 -!- Samir [~AhmedSami@156.222.124.179] has joined #mlpack
02:54 < Samir> Hello, I want to contribute to mlpack during my summer and learn more about reinforcement learning. Is anyone currently working on the reinforcement learning project (one of the GSoC 18 ideas)? I am interested in gaining experience and knowledge.
02:54 < Samir> I am currently building the repository
03:03 -!- Samir|Phone [~Mutter@156.222.124.179] has joined #mlpack
03:05 -!- Samir [~AhmedSami@156.222.124.179] has quit [Killed (cherryh.freenode.net (Nickname regained by services))]
03:05 -!- Samir [~Samir__@156.222.124.179] has joined #mlpack
03:18 -!- Samir [~Samir__@156.222.124.179] has quit [Quit: Leaving]
03:19 -!- Samir [~Samir__@156.222.124.179] has joined #mlpack
03:22 -!- Samir|Phone [~Mutter@156.222.124.179] has quit [Quit: Mutter: www.mutterirc.com]
04:00 -!- Samir_ [~Samir__@156.222.113.25] has joined #mlpack
04:00 -!- Samir [~Samir__@156.222.124.179] has quit [Read error: Connection reset by peer]
04:00 -!- Samir_ is now known as Samir
04:34 -!- yaswagner [4283a544@gateway/web/freenode/ip.66.131.165.68] has quit [Ping timeout: 252 seconds]
05:39 < Atharva> zoq: I trained it using RMSProp and the results are much better,
05:40 < Atharva> but I think I trained it for much longer. When I used Adam, I stopped after about 3-4 hours; last night, I trained for about 7.
05:41 < Atharva> To make sure, I will train it again tonight using Adam for 7 hours and see the results.
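Swapping optimizers like this is a small change in mlpack's API of that era. The sketch below is illustrative only: it assumes an already-built FFN named `model` and training data `train`, and the hyperparameters shown are placeholder values, not the settings used in the experiment above.

    #include <mlpack/core/optimizers/rmsprop/rmsprop.hpp>
    #include <mlpack/core/optimizers/adam/adam.hpp>

    using namespace mlpack::optimization;

    // RMSProp(stepSize, batchSize, alpha, epsilon, maxIterations, ...).
    RMSProp rmsProp(0.001, 100, 0.99, 1e-8, 100 * train.n_cols);
    // Adam(stepSize, batchSize, beta1, beta2, epsilon, maxIterations, ...).
    Adam adam(0.001, 100, 0.9, 0.999, 1e-8, 100 * train.n_cols);

    // For an autoencoder-style model the inputs double as the targets;
    // maxIterations of 100 * n_cols corresponds to roughly 100 epochs.
    model.Train(train, train, rmsProp);  // or: model.Train(train, train, adam);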
09:11 < zoq> Atharva: Thanks for the update, do you have any numbers?
09:15 < Atharva> zoq: Yes, with RMSProp, after about 120 epochs, the loss went down to ~107 (including KL divergence). With Adam, after 50 epochs, it went down to ~120 (including KL divergence)
09:15 < zoq> Atharva: Okay, thanks!
09:18 < Atharva> zoq: Tonight, I will train with Adam for more than 100 epochs and see how much the loss goes down.
09:19 < zoq> Atharva: Good, if you need a machine to run some experiments, let us know.
09:21 < Atharva> zoq: Thanks! Maybe I will need it when I train on the CelebA dataset. Right now, my machine suffices :)
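The "KL divergence" included in these loss numbers is the standard VAE regularizer between the diagonal-Gaussian approximate posterior and the standard-normal prior; it has the well-known closed form (Kingma & Welling, 2013):

    D_{\mathrm{KL}}\bigl(\mathcal{N}(\mu, \sigma^2 I) \,\|\, \mathcal{N}(0, I)\bigr)
        = \frac{1}{2} \sum_{j=1}^{d} \bigl( \mu_j^2 + \sigma_j^2 - \log \sigma_j^2 - 1 \bigr)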
11:56 < sumedhghaisas> Atharva: Hi Atharva
11:57 < sumedhghaisas> I was just looking at the PR. Did you design the convolutional architecture yourself, or take it from some paper?
12:00 < sumedhghaisas> Also, could we open a separate PR for Bernoulli? That way the current ReconstructionLoss PR can just be put on hold while we merge the new PR.
12:00 < sumedhghaisas> The current ReconstructionLoss PR is, I think, fully reviewed; it's better to just hold it for now.
12:27 < Atharva> sumedhghaisas: Yeah, I will open a separate PR.
12:28 < Atharva> About the convolutional architecture, I just saw it in some implementation on GitHub, and even then I didn't copy it completely. It's just the first experiment, so I didn't focus much on the architecture.
12:29 < Atharva> Do you have any suggestions on how I should choose the architecture?
12:32 < sumedhghaisas> Atharva: The architecture is fine. :)
12:33 < sumedhghaisas> there are a couple of good ones which are used for state of the art on binary MNIST, but for MNIST this will do
12:33 < sumedhghaisas> it's just that we should mention it here if we have taken the architecture from somewhere.
12:34 < Atharva> I haven't really taken it from anywhere, so I guess it's ok
12:34 < sumedhghaisas> Also, amazing work on the models; we finally did it. I just have a couple of small comments on the PR.
12:34 < sumedhghaisas> Atharva: Then it's fine. :)
12:35 < Atharva> Thanks for all the help! It wouldn't have been possible without it :)
12:37 < sumedhghaisas> Binary MNIST is more exciting; we can actually compare the results with the state of the art and also merge in ReconstructionLoss
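The ReconstructionLoss under discussion computes the negative log-likelihood of the data under the decoder's output distribution; for binary MNIST with a Bernoulli output this reduces to binary cross-entropy. A minimal Armadillo sketch of that quantity (the names `x` and `p` are assumptions, not mlpack's actual members):

    // x: binary targets in {0, 1}; p: predicted Bernoulli means in (0, 1).
    // The small epsilon keeps the logarithms finite when p hits 0 or 1.
    const double eps = 1e-10;
    const double loss = -arma::accu(x % arma::log(p + eps)
        + (1.0 - x) % arma::log(1.0 - p + eps));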
12:37 < Samir> Hello guys, can I work on this project (https://github.com/mlpack/mlpack/wiki/SummerOfCodeIdeas#reinforcement-learning)? Is it available, or is someone already working on it?
12:37 < sumedhghaisas> I think the PR looks good; I just have documentation changes from my side.
12:38 < Atharva> Yeah, I tried training a model on binary MNIST with ReconstructionLoss, and the loss goes down to about ~120, of which ~10 is KL
12:38 < Atharva> Haven't generated any images though, I will do that soon
12:38 < sumedhghaisas> great...
12:38 < sumedhghaisas> this is feed forward network?
12:38 < Atharva> Yeah, not conv
12:38 < sumedhghaisas> hmm... it should go down to 100 though.
12:39 < sumedhghaisas> what is the network?
12:39 < sumedhghaisas> Samir: Hi Samir.
12:39 < Atharva> The same network as in vae.cpp
12:39 < Atharva> I didn't train it long enough
12:39 < Samir> Hi sumedhghaisas :)
12:39 < Atharva> Will do it again tonight
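vae.cpp lives in the mlpack/models repository, and the log does not quote its layers; purely as a hypothetical sketch, a feedforward VAE encoder in mlpack's ANN API of that time could look like this (the layer sizes, loss type, and `latentSize` are all assumptions):

    using namespace mlpack::ann;

    const size_t latentSize = 10;  // assumed latent dimensionality

    // Encoder half: 784 MNIST pixels down to the means and log-variances
    // of the latent Gaussian; a decoder would mirror this back to 784.
    FFN<MeanSquaredError<>, RandomInitialization> encoder;
    encoder.Add<Linear<>>(784, 512);
    encoder.Add<ReLULayer<>>();
    encoder.Add<Linear<>>(512, 2 * latentSize);  // [means; log-variances]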
12:40 < sumedhghaisas> Samir: Marcus (zoq) is mentoring that specific project. You can ask him any specific questions. :)
12:40 < sumedhghaisas> I don't have much information about it, sorry. :(
12:41 < sumedhghaisas> Atharva: can I see the binarization code that you used?
12:41 < Samir> sumedhghaisas, Thanks :) I want to contribute to mlpack during my summer; I will ask them when they are around.
12:45 < sumedhghaisas> Samir: Happy to help in any way I can. :) Let me know if you have any other questions regarding the contribution.
12:46 < Atharva> sumedhghaisas: Do you mean the code I used to prepare binary MNIST from normal MNIST?
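The binarization code itself never makes it into this log; a common way to prepare binary MNIST, shown here only as a hypothetical sketch, is to scale the pixels to [0, 1] and threshold at 0.5 (the filenames are placeholders):

    #include <mlpack/core.hpp>

    int main()
    {
      arma::mat mnist;
      mlpack::data::Load("mnist.csv", mnist, true);  // one point per column
      mnist /= 255.0;                                // scale pixels to [0, 1]
      // Static binarization: a pixel becomes 1 if above 0.5, else 0.
      const arma::mat binary = arma::conv_to<arma::mat>::from(mnist > 0.5);
      mlpack::data::Save("binary_mnist.csv", binary, true);
    }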
12:47 < Atharva> Another thing: with the convolutional model, although the total loss goes down, the KL loss is higher than in feedforward models
12:48 < Atharva> Last night, I trained using RMSProp and the error went down to 107; the results look much better than the conv model's
12:49 < Atharva> I think it's because the distribution is not fitting the standard normal well
12:49 < Atharva> So, sampling from the prior doesn't give very good results
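"Sampling from the prior" here means drawing latent vectors from N(0, I) and pushing them through the decoder; if the aggregate posterior drifts from the standard normal, those samples decode poorly. A minimal sketch, assuming a trained decoder network named `decoder` whose input dimension is `latentSize`:

    // Draw 100 latent vectors z ~ N(0, I), one per column, and decode them.
    arma::mat z = arma::randn<arma::mat>(latentSize, 100);
    arma::mat samples;
    decoder.Predict(z, samples);  // each column is one generated image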
13:08 < sumedhghaisas> KL discrepancy is expected.
13:08 < sumedhghaisas> Got a meeting in 2 mins.
13:08 < sumedhghaisas> Can I get back to you in 2 hours?
13:12 < Atharva> It’s okay. Actually, I will be a little busy tonight
13:12 < Atharva> Maybe we will talk tomorrow
13:47 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has joined #mlpack
13:47 < zoq> Samir: Hello, it would be great to push the RL project forward, so if you are interested, let me know what you have in mind and we can go from there.
14:10 -!- killer_bee[m] [killerbeem@gateway/shell/matrix.org/x-ufmsryfthuyqihge] has left #mlpack ["Kicked by @appservice-irc:matrix.org : removing from IRC because user idle on matrix for 30+ days"]
17:09 < sumedhghaisas> Atharva: Sure
17:31 -!- travis-ci [~travis-ci@ec2-54-211-92-252.compute-1.amazonaws.com] has joined #mlpack
17:31 < travis-ci> mlpack/mlpack#5343 (master - 455b009 : Ryan Curtin): The build was fixed.
17:31 < travis-ci> Change view : https://github.com/mlpack/mlpack/compare/3b7bbf0f1417...455b00973168
17:31 < travis-ci> Build details : https://travis-ci.org/mlpack/mlpack/builds/407688072
17:31 -!- travis-ci [~travis-ci@ec2-54-211-92-252.compute-1.amazonaws.com] has left #mlpack []
20:00 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has quit [Quit: Leaving]
--- Log closed Wed Jul 25 00:00:01 2018