mlpack IRC logs, 2018-06-16
Logs for the day 2018-06-16 (starts at 0:00 UTC) are shown below.
--- Log opened Sat Jun 16 00:00:05 2018
01:24 < zoq> manish7294: results - https://gist.github.com/zoq/0d47a9b503a1bd0dce863e10df3870ef
05:45 -!- vivekp [~vivek@unaffiliated/vivekp] has joined #mlpack
07:23 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has joined #mlpack
07:29 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has quit [Ping timeout: 276 seconds]
08:10 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has joined #mlpack
08:45 -!- travis-ci [~email@example.com] has joined #mlpack
08:45 < travis-ci> mlpack/mlpack#5086 (master - 86219b1 : Mikhail Lozhnikov): The build passed.
08:45 < travis-ci> Change view : https://github.com/mlpack/mlpack/compare/e08e76105072...86219b18b5af
08:45 < travis-ci> Build details : https://travis-ci.org/mlpack/mlpack/builds/393005814
08:45 -!- travis-ci [~firstname.lastname@example.org] has left #mlpack 
11:20 < ShikharJ> zoq: Could you mention a resource where I can read up on the visitor pattern related structure of mlpack (specifically within FFN class)?
11:34 -!- manish7294 [8ba79d06@gateway/web/freenode/ip.188.8.131.52] has joined #mlpack
11:34 < manish7294> zoq: Thanks for helping out :)
11:41 < manish7294> rcurtin: zoq: As of now I can use the benchmark metrics with mlpack's lmnn script by adding a prediction option in lmnn_main.cpp (here I am using my own weighted kNN predictor), but that can't be done for shogun (there we can use shogun's kNN classifier to get predictions instead). So, I was wondering: instead of using two different predictors, why don't we just use shogun's, so that we have the same baseline for accuracy comparisons?
12:09 -!- manish7294 [8ba79d06@gateway/web/freenode/ip.184.108.40.206] has quit [Ping timeout: 260 seconds]
12:15 < zoq> ShikharJ: Not much but for me https://www.boost.org/doc/libs/1_55_0/doc/html/variant.html was helpful.
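As context for the Boost.Variant link above: the same dispatch idea can be sketched with C++17's std::variant and std::visit. This is a minimal illustrative analogue, not mlpack's actual layer code; the struct and visitor names below are hypothetical stand-ins.

```cpp
#include <cassert>
#include <string>
#include <variant>

// Hypothetical stand-ins for two layer types; mlpack's FFN stores its layers
// in a boost::variant and dispatches with boost::apply_visitor, which
// std::variant/std::visit mirror closely.
struct Linear { };
struct Sigmoid { };

using LayerType = std::variant<Linear, Sigmoid>;

// One operator() per alternative, like a boost::static_visitor subclass.
struct NameVisitor
{
  std::string operator()(const Linear&) const { return "Linear"; }
  std::string operator()(const Sigmoid&) const { return "Sigmoid"; }
};

std::string LayerName(const LayerType& layer)
{
  // std::visit picks the overload matching the alternative currently held.
  return std::visit(NameVisitor(), layer);
}
```

The benefit is that a container of `LayerType` can hold heterogeneous layers without virtual functions, with each operation expressed as a visitor.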
12:15 < zoq> ShikharJ: I'll take a look at the issue later today.
12:16 < zoq> manish7294: Sounds like a reasonable option to me, if it's just used to get the accuracy.
12:17 < ShikharJ> zoq: Ah okay, I'll get to implementing the GANOptimizer class then. I'll begin with formulating a structure for that.
12:17 < ShikharJ> For the time being.
12:18 < zoq> ShikharJ: Yeah, let's start with a basic structure.
13:05 -!- vivekp [~vivek@unaffiliated/vivekp] has quit [Read error: Connection reset by peer]
13:10 -!- vivekp [~vivek@unaffiliated/vivekp] has joined #mlpack
13:29 -!- manish7294 [8ba79d06@gateway/web/freenode/ip.220.127.116.11] has joined #mlpack
13:29 < rcurtin> manish7294: I think using shogun's knn classifier is reasonable
13:30 < rcurtin> you could do that for mlpack too
13:30 < rcurtin> that seems like a reasonable solution to me
13:30 < rcurtin> I think inside the mlpack script we should not assume that shogun is available
13:35 -!- manish7294 [8ba79d06@gateway/web/freenode/ip.18.104.22.168] has quit [Ping timeout: 260 seconds]
13:49 -!- manish7294 [8ba79d06@gateway/web/freenode/ip.22.214.171.124] has joined #mlpack
13:50 < manish7294> rcurtin: Right, I pushed the changes regarding the classifier.
13:52 -!- manish72942 [~email@example.com] has joined #mlpack
13:53 < manish7294> But I don't know why I am stuck setting up benchmarks; this time mlpack's lmnn script is continually throwing "can't execute command". Everything was working last night, so I don't know what happened :(
13:54 < manish72942> I have done several rebuilds of mlpack too but can't figure out what's happening here
13:54 -!- manish7294 [8ba79d06@gateway/web/freenode/ip.126.96.36.199] has quit [Client Quit]
14:09 < rcurtin> hmmm, can you tell me more about what you've done?
14:09 < rcurtin> if the execute failed, the error message does show the command so you could try running that command by hand
14:12 < manish72942> the command works
14:13 < manish72942> even mlpack_lmnn is there in bin
14:13 < rcurtin> can you provide any more detail? maybe print stderr and stdout from the subprocess call?
14:14 < manish72942> Will you be available an hour from now? Currently I am not at my PC.
14:14 < manish72942> sorry for that
14:22 < rcurtin> unfortunately no, I will be racing :)
14:22 < rcurtin> but if you can print the errors from stderr or stdout I think it will make it clear what is going wrong
14:34 < manish72942> no problem, I will post them as soon as I reach home
14:34 < manish72942> today is practice, right?
14:51 < rcurtin> no, that was yesterday, today is racing
15:04 -!- manish7294 [8ba79d06@gateway/web/freenode/ip.188.8.131.52] has joined #mlpack
15:05 < manish7294> rcurtin: great, have fun :)
15:07 < manish7294> I am posting error here, please don't take the trouble to reply asap, it can be done after the race :)
15:10 < manish7294> all the executions are giving results similar to this: [FATAL] Could not execute command: ['/home/manish/benchmarks/libraries/bin/mlpack_lmnn', '-i', 'datasets/iris.csv', '-v', '-o', 'distance.csv', '-R', '100', '-p', '3', '--seed', '42']
15:27 -!- manish7294 [8ba79d06@gateway/web/freenode/ip.184.108.40.206] has quit [Quit: Page closed]
15:30 < zoq> manish7294: And if you run /home/manish/benchmarks/libraries/bin/mlpack_lmnn -i datasets/iris.csv -v -o distance.csv -R 100 -p 3 --seed 42 by hand it works just fine?
15:32 < ShikharJ> zoq: The DCGAN PR is now completely debugged. However, I have been unable to get the CelebA dataset into HDF5 format, because the conversion runs out of space on my system for some reason. Do you think we can merge the PR?
15:33 < zoq> ShikharJ: hm, do you think we could create a CelebA subset just to see if we could get some results?
15:34 < ShikharJ> zoq: It'd be possible if I could get the dataset; the rest of the work should be easy.
15:35 < zoq> ShikharJ: Ah, I'll see if I can create a subset.
15:35 < ShikharJ> zoq: We need to have one dataset in the mlpack repository as well to give people an incentive.
15:36 < zoq> ShikharJ: But since it works on the MNIST dataset I see no problem merging the code on that basis.
15:36 < zoq> ShikharJ: Right
15:36 < ShikharJ> zoq: We must also keep in mind that the images are 178x218 in CelebA, so we'll have to crop them to 64x64 before setting them up for training.
15:37 < ShikharJ> zoq: I'm not sure how I could do the same without removing a part of the face, in some cases.
15:39 < zoq> ShikharJ: One solution would be to pad the image with zeros.
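A center crop is the other common option alongside the zero-padding zoq suggests. A minimal sketch for a single-channel, row-major image buffer; the function name and layout here are assumptions for illustration, not benchmark or mlpack code.

```cpp
#include <cassert>
#include <vector>

// Extract a cropW x cropH window from the center of a width x height image
// stored row-major as one byte per pixel. For CelebA (178x218) a 64x64
// center crop keeps the middle of the face; zero-padding to a square first
// is the alternative that avoids discarding pixels.
std::vector<unsigned char> CenterCrop(const std::vector<unsigned char>& src,
                                      int width, int height,
                                      int cropW, int cropH)
{
  const int x0 = (width - cropW) / 2;   // left edge of the crop window
  const int y0 = (height - cropH) / 2;  // top edge of the crop window
  std::vector<unsigned char> out(cropW * cropH);
  for (int y = 0; y < cropH; ++y)
    for (int x = 0; x < cropW; ++x)
      out[y * cropW + x] = src[(y0 + y) * width + (x0 + x)];
  return out;
}
```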
15:40 < ShikharJ> zoq: I'd say it's your call on the merging. As I said, when I run the script for the conversion to HDF5, it crashes my system.
15:43 -!- manish7294 [8ba79d06@gateway/web/freenode/ip.220.127.116.11] has joined #mlpack
15:43 < manish7294> zoq: I got: [FATAL] Cannot open file 'datasets/iris.csv'. terminate called after throwing an instance of 'std::runtime_error'  what(): fatal error; see Log::Fatal output  Aborted
15:43 < manish7294> It seems the error is in the dataset path
15:44 < manish7294> I tried replacing it with /home/manish/benchmarks/datasets/iris.csv and it works just fine
15:46 < zoq> Can you set the library path and try again?
15:46 < zoq> export LD_LIBRARY_PATH=/home/manish/benchmarks/libraries/lib/
15:48 < manish7294> zoq: not working
15:50 < zoq> ShikharJ: I don't mind merging the code, perhaps after the BatchSupport PR?
15:50 < manish7294> zoq: I think the command should be /libraries/bin/mlpack_lmnn -i datasets/iris.csv -v -o distance.csv -R 100 -p 3 --seed 42
15:50 < manish7294> as we are running from inside the benchmarks directory
15:51 < manish7294> and the above works too
15:51 < zoq> Using the full path should be fine.
15:52 < zoq> Can you add print(e) to each except block?
15:53 < manish7294> zoq: right, it's working
15:54 < zoq> manish7294: the benchmark script?
15:55 < manish7294> no, just cmd
15:55 < manish7294> I have started a check for script
15:57 < zoq> The actual error message (print(e)) should be helpful.
15:58 < zoq> actually I get: https://gist.github.com/zoq/8883c671447316b3fab74f47541ced7a
15:59 < zoq> perhaps you already fixed that part?
16:01 < manish7294> Right, this is because I am doing all the debugging and work on slake.
16:02 < manish7294> Now it seems to be working, as I just got past that error; earlier I was directly getting -2
16:05 < zoq> manish7294: Not sure I can help you at this stage; if you get stuck at some point, please push the code and I'll take a look.
16:06 < manish7294> zoq: no problem, if I get stuck again I will let you know.
16:06 < zoq> manish7294: Okay, sounds good :)
16:34 < Atharva> zoq: I think the reason negative_log_likelihood wasn't moved to loss_functions is that it's the default for many objects. Other files use NegativeLogLikelihood by just importing layer_types.hpp.
16:34 < Atharva> Should I add loss_functions/negative_log_likelihood.hpp to all of them, or just keep it in the layer folder?
16:35 < zoq> Atharva: hm, what about including negative_log_likelihood.hpp inside layer_types.hpp?
16:36 < Atharva> Yeah, that's the easiest solution, should I add the other loss_functions as well?
16:37 < zoq> Atharva: hm, as long as the build time stays almost the same.
16:38 < Atharva> Okay, I will add just negativelog right now
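For reference, the arrangement zoq suggests would look roughly like the fragment below inside layer_types.hpp; the exact include path is an assumption based on the loss_functions directory mentioned above.

```cpp
// Sketch (not the actual mlpack source): including the default loss from
// layer_types.hpp keeps files that only include layer_types.hpp compiling
// unchanged after negative_log_likelihood moves to loss_functions/.
#include <mlpack/methods/ann/loss_functions/negative_log_likelihood.hpp>
```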
16:47 -!- manish7294 [8ba79d06@gateway/web/freenode/ip.18.104.22.168] has quit [Quit: Page closed]
17:26 < ShikharJ> zoq: Sure.
19:01 -!- manish72942 [~firstname.lastname@example.org] has quit [Ping timeout: 240 seconds]
19:20 -!- sumedhghaisas_ [5c082148@gateway/web/freenode/ip.22.214.171.124] has joined #mlpack
19:25 < sumedhghaisas_> Atharva: Hey Atharva
19:25 < sumedhghaisas_> I saw your PR. Good work :)
19:27 < Atharva> Thanks Sumedh!
19:27 < Atharva> Any comments on it?
19:37 < sumedhghaisas_> Yeah. I didn't quite understand the Reconstruction loss function
19:38 < Atharva> forward?
19:38 < sumedhghaisas_> Shouldn't you be templatizing it with distribution?
19:39 < Atharva> Okay, yes, I don't know why I forgot that.
19:39 < Atharva> I will push a commit
19:39 < sumedhghaisas_> Sure :)
19:39 < Atharva> default will be normal, right?
19:39 < sumedhghaisas_> No rush
19:40 < sumedhghaisas_> I was just a little confused while reading the code
19:41 < Atharva> Also, we don't have gradient check now, so we are stuck with simple tests
19:41 < Atharva> Do you think those are enough?
19:45 < sumedhghaisas_> Atharva: Sorry didn't get that. Why don't we have gradient checks?
19:46 < Atharva> Hmm, no other loss functions have employed gradient checks in their tests; even I was wondering why.
19:47 < sumedhghaisas_> Ohh... that's weird
19:47 < sumedhghaisas_> Maybe they are part of networks tested in ann_layer_test?
19:47 < sumedhghaisas_> but I think only one loss function is used there, everywhere
19:47 < Atharva> Yeah, not all of them have been tested with gradient check
19:48 < sumedhghaisas_> that's not good
19:48 < sumedhghaisas_> okay we should definitely test Reconstruction loss though
19:48 < sumedhghaisas_> just use a tested network of ann_layer_test and replace the loss with reconstruction loss
19:48 < Atharva> Okay, with or without a repar layer? I don't think it should matter
19:48 < Atharva> Yeah
19:48 < sumedhghaisas_> you are right
19:49 < sumedhghaisas_> it won't matter
19:49 < sumedhghaisas_> actually I would prefer if you test it with just a linear layer
19:49 < sumedhghaisas_> that's much cleaner
19:49 < sumedhghaisas_> what do you think?
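The finite-difference idea behind such a gradient check can be sketched self-contained. This uses a one-output linear layer with squared error as a stand-in loss; it is an illustration of the technique, not mlpack's actual CheckGradient test helper or the ReconstructionLoss under discussion.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Stand-in loss of a one-output linear layer: L(w) = 0.5 * (w.x - t)^2.
double Loss(const std::vector<double>& w, const std::vector<double>& x,
            double t)
{
  double y = 0.0;
  for (std::size_t i = 0; i < w.size(); ++i) y += w[i] * x[i];
  return 0.5 * (y - t) * (y - t);
}

// Analytic gradient: dL/dw_i = (w.x - t) * x_i.
std::vector<double> Gradient(const std::vector<double>& w,
                             const std::vector<double>& x, double t)
{
  double y = 0.0;
  for (std::size_t i = 0; i < w.size(); ++i) y += w[i] * x[i];
  std::vector<double> g(w.size());
  for (std::size_t i = 0; i < w.size(); ++i) g[i] = (y - t) * x[i];
  return g;
}

// Central-difference check: perturb each weight by +/- eps and compare the
// numeric slope with the analytic gradient; returns the worst discrepancy.
double GradientCheck(std::vector<double> w, const std::vector<double>& x,
                     double t, double eps = 1e-6)
{
  const std::vector<double> g = Gradient(w, x, t);
  double maxDiff = 0.0;
  for (std::size_t i = 0; i < w.size(); ++i)
  {
    const double orig = w[i];
    w[i] = orig + eps; const double lp = Loss(w, x, t);
    w[i] = orig - eps; const double lm = Loss(w, x, t);
    w[i] = orig;
    maxDiff = std::max(maxDiff, std::abs(g[i] - (lp - lm) / (2.0 * eps)));
  }
  return maxDiff;
}
```

Swapping the squared error for a new loss (with its Forward/Backward pair) and asserting the discrepancy stays tiny is exactly the kind of test being proposed for the reconstruction loss.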
19:50 < Atharva> Yeah, it's better. Any other suggestions for testing?
19:53 < sumedhghaisas_> umm. Haven't gone through the whole PR yet :)
19:53 < sumedhghaisas_> I will try to go through it today and will comment on it.
19:58 < Atharva> Okay, whenever you are free :)
19:59 < Atharva> Btw, our next task is to support generation through the predict function, right?
19:59 < sumedhghaisas_> umm... The primary task is to merge Repar layer :)
20:00 < Atharva> Oh, sorry I haven't rebased it yet. I will do it first thing tomorrow.
20:01 < sumedhghaisas_> Sure thing :)
20:01 < sumedhghaisas_> Let's get all this code ready, then we can move on to MNIST
20:02 < sumedhghaisas_> Generation we can support as soon as we define the distribution over it
20:02 < sumedhghaisas_> so that's easy
20:37 < Atharva> I pushed a commit templatizing the distribution, but I just realized that in the forward function we cannot construct an arbitrary distribution by passing the upper half as the standard deviation and the lower half as the mean
20:38 < Atharva> for example, Bernoulli will have a different constructor
20:38 < Atharva> I will look into it
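A minimal sketch of what a distribution-templated reconstruction loss could look like, assuming a Gaussian with a (mean, stddev) constructor. All class and method names here are illustrative, not the PR's actual code; the Bernoulli constructor mismatch Atharva raises is exactly why the construction step below would need per-distribution handling.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical Gaussian with a (mean, stddev) constructor. A Bernoulli
// distribution would take a single probability parameter instead, so this
// construction interface does not generalize as-is.
struct NormalDistribution
{
  double mean, stddev;
  NormalDistribution(double mean, double stddev) : mean(mean), stddev(stddev) { }

  // Log-density of a univariate Gaussian at x.
  double LogProbability(double x) const
  {
    const double pi = 3.14159265358979323846;
    const double z = (x - mean) / stddev;
    return -0.5 * z * z - std::log(stddev) - 0.5 * std::log(2.0 * pi);
  }
};

// Distribution-templated reconstruction loss: negative mean log-likelihood
// of the targets under per-element predicted distributions.
template<typename Distribution = NormalDistribution>
class ReconstructionLoss
{
 public:
  // `params` holds one (mean, stddev) pair per target element, as predicted
  // by the decoder.
  double Forward(const std::vector<std::pair<double, double>>& params,
                 const std::vector<double>& target) const
  {
    double loss = 0.0;
    for (std::size_t i = 0; i < target.size(); ++i)
    {
      Distribution dist(params[i].first, params[i].second);
      loss -= dist.LogProbability(target[i]);
    }
    return loss / target.size();
  }
};
```

The loss is minimized when each target sits at the mean of its predicted Gaussian, which is the behavior a gradient-checked test could verify.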
21:22 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has quit [Quit: Leaving]
--- Log closed Sun Jun 17 00:00:07 2018