mlpack IRC logs, 2018-06-21

Logs for the day 2018-06-21 (starts at 0:00 UTC) are shown below.

--- Log opened Thu Jun 21 00:00:13 2018
02:24 -!- seeni [dce18d19@gateway/web/freenode/ip.220.225.141.25] has joined #mlpack
02:26 < seeni> can you say why this happens while building mlpack: "from Cython.Distutils import build_ext ModuleNotFoundError: No module named 'Cython'"? But I have Cython installed on my machine
02:26 -!- seeni_ [~seeni@220.225.141.25] has joined #mlpack
02:29 -!- seeni [dce18d19@gateway/web/freenode/ip.220.225.141.25] has quit [Quit: Page closed]
02:29 -!- seeni_ is now known as seeni
02:53 -!- seeni [~seeni@220.225.141.25] has quit [Quit: seeni]
03:21 < rcurtin> seeni: do you have Cython installed for the correct version of python?
03:21 < rcurtin> and which version is installed?
04:07 -!- manish7294 [8ba7a9fb@gateway/web/freenode/ip.139.167.169.251] has joined #mlpack
04:07 < manish7294> rcurtin: It's probably late, but are you there?
04:13 < rcurtin> yeah, I am about to go to bed though, but I can stay up for a few more minutes :)
04:14 -!- manish7294_ [8ba7a9fb@gateway/web/freenode/ip.139.167.169.251] has joined #mlpack
04:14 < manish7294_> rcurtin: It's regarding distance caching in impostors.
04:15 < manish7294_> Do you mean the distance matrix we pass to knn search?
04:15 < rcurtin> right, there are a couple little complexities there
04:15 < manish7294_> this one, right? knn.Search(k, neighbors, distances);
04:15 < rcurtin> but yes, when we do knn.Search(), it returns the distances between the point and its nearest neighbors in that matrix
04:15 < manish7294_> ?
04:15 < rcurtin> right, exactly
04:16 < rcurtin> if we cache the distance results, we can avoid the recalculation, does that make sense?
04:16 < manish7294_> But I saw the knn search code and it reinitializes the distance matrix every time.
04:16 < manish7294_> If I got it right, here it is:
04:16 < manish7294_> arma::Mat<size_t>* neighborPtr = &neighbors; arma::mat* distancePtr = &distances; if (!oldFromNewReferences.empty() && tree::TreeTraits<Tree>::RearrangesDataset) { // We will always need to rearrange in this case. distancePtr = new arma::mat; neighborPtr = new arma::Mat<size_t>; } // Initialize results. neighborPtr->set_size(k, referenceSet->n_cols); distancePtr->set_size(k, referenceSet->n_col
04:16 < rcurtin> right, and the same with the neighbors matrix
04:17 < manish7294_> Ah! indentation!
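(For readability, the pasted snippet re-indented; the final line was cut off in the paste, but it matches the distancePtr->set_size(...) call quoted a few lines below:)

    arma::Mat<size_t>* neighborPtr = &neighbors;
    arma::mat* distancePtr = &distances;

    if (!oldFromNewReferences.empty() &&
        tree::TreeTraits<Tree>::RearrangesDataset)
    {
      // We will always need to rearrange in this case.
      distancePtr = new arma::mat;
      neighborPtr = new arma::Mat<size_t>;
    }

    // Initialize results.
    neighborPtr->set_size(k, referenceSet->n_cols);
    distancePtr->set_size(k, referenceSet->n_cols);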
04:17 -!- manish7294 [8ba7a9fb@gateway/web/freenode/ip.139.167.169.251] has quit [Ping timeout: 260 seconds]
04:17 < rcurtin> but in Impostors() you are extracting the results of that neighbors matrix into the outputMatrix object
04:17 < manish7294_> Right
04:17 < rcurtin> no worries, I know the code you are talking about :)
04:18 < rcurtin> so the idea would be, also extract the distances into some other output matrix
04:18 < manish7294_> but if knn reinitializes the distance matrix every time, how would it help?
04:18 < rcurtin> and then they can be used by the other parts of EvaluateWithGradient()
04:18 < manish7294_> Right, I got that idea, but I was worrying about the knn search code
04:19 < rcurtin> yeah, I am not sure I understand why that is a problem though
04:19 < manish7294_> distancePtr = new arma::mat;
04:19 < manish7294_> distancePtr->set_size(k, referenceSet->n_cols);
04:20 < manish7294_> These are the two lines at the start of the search code
04:20 < rcurtin> right, but what I'm saying is the exact same thing is done for the neighbors matrix
04:20 < rcurtin> yet you use the neighbors matrix just fine
04:21 < manish7294_> So, basically we can use the previous distance matrix to relieve knn search from some calculation, right?
04:21 < rcurtin> ah, sorry I think I see the confusion now
04:21 < rcurtin> the idea is not to give the KNN object something that will help the search
04:22 < rcurtin> the idea is to store the distances output from the KNN object so that we can avoid some metric.Evaluate() calls later in the EvaluateWithGradient() function
04:22 < manish7294_> Right, thanks got the point
04:22 < rcurtin> sure, hope that clarified it
04:22 < rcurtin> let me know if not
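(A minimal sketch of the caching idea discussed above; the function signature and names here are hypothetical illustrations, not the actual LMNN code:)

    #include <mlpack/core.hpp>
    #include <mlpack/methods/neighbor_search/neighbor_search.hpp>

    // Hypothetical Impostors(): also return the distances that knn.Search()
    // already computes, so EvaluateWithGradient() can reuse them instead of
    // calling metric.Evaluate() again for each (point, impostor) pair.
    void Impostors(arma::Mat<size_t>& neighbors,
                   arma::mat& distances,        // cached distances (new output)
                   const arma::mat& transformedDataset,
                   const size_t k)
    {
      mlpack::neighbor::KNN knn(transformedDataset);
      // Search() fills both matrices; previously only neighbors was kept.
      knn.Search(k, neighbors, distances);
    }

    // Later, inside EvaluateWithGradient() (sketch):
    //   const double dist = distances(j, i);    // reuse the cached value
    // instead of:
    //   const double dist = metric.Evaluate(transformedDataset.col(i),
    //                                       transformedDataset.col(neighbors(j, i)));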
04:22 < manish7294_> Thanks for keeping up this late :)
04:23 < rcurtin> sure, it's no problem :)
04:23 < rcurtin> I will head to bed now if there's nothing else for now
04:23 < manish7294_> Ya, I got these two ideas while reading that comment but just got too deep into the one I was talking about. :)
04:23 < rcurtin> it's ok, I know how it goes :)
04:24 < manish7294_> good night :)
04:24 < rcurtin> I would say 'good night' but it is morning for you, so good morning :)
04:24 < manish7294_> :)
04:24 -!- manish7294_ [8ba7a9fb@gateway/web/freenode/ip.139.167.169.251] has quit [Quit: Page closed]
05:15 -!- vivekp [~vivek@unaffiliated/vivekp] has left #mlpack ["Leaving"]
06:28 < ShikharJ> zoq: Sorry for troubling you again, but can we merge the two PRs now? That would also help us in our code for GAN Optimizer and WGAN.
06:53 < zoq> ShikharJ: Sure, what do you think about adding a simple test?
06:54 < ShikharJ> zoq: Test for the GANs?
06:54 < zoq> ShikharJ: Batch support.
06:55 < zoq> ShikharJ: Ahh, I see we already test GAN with batchSize > 1
06:55 < ShikharJ> zoq: What I was thinking of doing was to uncomment the GANMNISTTest that we have, and set some low hyperparameters.
06:56 < zoq> ShikharJ: Agreed, that sounds reasonable.
06:56 < ShikharJ> zoq: Now with the batch support PR, it takes much less time to compute something like a batch size of 10, for one epoch, 20 pre-training steps, and 50 maximum inputs.
06:57 < zoq> ShikharJ: Okay, the batch support is merged; would you like to incorporate the test in the DCGAN PR?
06:58 < zoq> ShikharJ: We can also open a new PR.
06:58 < ShikharJ> zoq: Sure, I'll uncomment all the tests and change the test documentation a bit there. I'm guessing some merge conflicts would also arise in the DCGAN PR after batch support is merged.
06:59 < zoq> ShikharJ: yes
06:59 < zoq> ShikharJ: okay, modifying the test is a good idea, let's do that :)
07:00 < ShikharJ> zoq: Really happy with the work we've achieved. I'll also tmux a session to see how we currently fare against other libraries!
07:03 < zoq> ShikharJ: Yeah, all these really nice additions and improvements.
07:29 < Atharva> zoq: I have been facing a strange issue since yesterday.
07:29 < Atharva> Certain gradient check tests in ANNLayerTest fail or pass based on their position in the file among other tests.
07:29 < Atharva> With no code changed
07:31 < Atharva> Also, a similar issue: if in GradientLinearLayerTest I change the loss to mean squared, then the Atrous Convolution test fails
07:33 < Atharva> What I found out was that the model.Gradient() call from these tests returns all zeros when they fail, but I can't figure out why; nothing else is changing.
07:36 < ShikharJ> Atharva: I also found an issue like that sometime back, though it wasn't showing up on Travis so I ignored it.
07:37 < Atharva> ShikharJ: So the tests don't give any problems on Travis?
07:38 < Atharva> I might as well ignore it then.
07:38 < ShikharJ> They didn't for me. But keep in mind this was some time back. The codebase has changed considerably since then.
07:38 < Atharva> I will try and push a commit once and see if they fail.
07:40 < ShikharJ> zoq: Could we have access to the Appveyor builds? They don't seem to have an auto branch cancellation feature, and I pushed a couple of unnecessary builds that I wish to cancel.
09:01 -!- seeni [~seeni@220.225.141.25] has joined #mlpack
09:08 -!- seeni [~seeni@220.225.141.25] has quit [Quit: seeni]
09:28 < zoq> ShikharJ: hm, I thought every mlpack member should be able to start/stop the job, did you use the same github login?
09:29 < zoq> Atharva: What version (last commit) do you use?
09:29 < ShikharJ> zoq: Yes.
09:30 < zoq> ShikharJ: hm, let me disable/enable the setting.
09:30 < zoq> ShikharJ: Okay, can you test again?
09:31 < ShikharJ> zoq: I'll need a running job for that.
09:34 < Atharva> commit 86219b18b5afd23800e72661ab72d0bde0fd7a99
09:34 < ShikharJ> zoq: I still can't cancel the build https://ci.appveyor.com/project/mlpack/mlpack/build/%235195
09:34 < Atharva> merge e08e761 2554f60
09:36 < zoq> ShikharJ: strange, perhaps Atharva could test it as well?
09:36 < zoq> ShikharJ: I can also cancel the build
09:37 < Atharva> zoq: Sorry, what should I test?
09:38 < ShikharJ> zoq: Please cancel all the Implement DCGAN Test builds apart from the latest one https://ci.appveyor.com/project/mlpack/mlpack/history
09:38 < ShikharJ> zoq: There should be two builds
09:40 -!- seeni [~seeni@220.225.141.25] has joined #mlpack
09:47 < zoq> Atharva: Can you test if you are able to cancel the build: https://ci.appveyor.com/project/mlpack/mlpack/build/%235195
09:53 < Atharva> I don't see any options to cancel the build.
09:53 < Atharva> I logged in with the mlpack account
10:04 < Atharva> zoq: Is there a way to do this from the terminal?
10:05 < jenkins-mlpack> Project docker mlpack nightly build build #356: STILL UNSTABLE in 2 hr 51 min: http://masterblaster.mlpack.org/job/docker%20mlpack%20nightly%20build/356/
10:18 < seeni> I got this error while building: "from Cython.Distutils import build_ext
10:18 < seeni> ModuleNotFoundError: No module named 'Cython'
10:18 < seeni> ". But I have Cython installed. How do I fix this?
11:00 -!- seeni [~seeni@220.225.141.25] has quit [Quit: seeni]
11:26 < zoq> Atharva: thanks for testing, perhaps there is a way to stop the build from the terminal.
11:33 < zoq> ShikharJ, Atharva: Pretty sure it works now.
11:46 < ShikharJ> zoq: Yeah it does, thanks zoq!
12:31 -!- manish7294 [8ba7a9fb@gateway/web/freenode/ip.139.167.169.251] has joined #mlpack
12:32 < manish7294> rcurtin: I have added matlab benchmarking scripts and have updated the comment accordingly: https://github.com/mlpack/mlpack/pull/1407#issuecomment-398772089
12:32 < manish7294> It seems we can't use a custom k value with the MATLAB LMNN implementation, though I have not dug into the reason behind it.
12:34 < manish7294> And the MATLAB run is taking a huge amount of memory.
12:39 < manish7294> rcurtin: It's regarding the tree building optimization: I have noticed that the total tree building time is always very low (merely half a second on the letters dataset). So, do you think this optimization will be worthwhile?
12:50 < manish7294> And regarding the distance caching --- we need to calculate the distances after every iteration, as metric.Evaluate() is called on the transformed dataset (which changes after every iteration), but taking from your idea, we can avoid this calculation at least in the iterations (decided by the range parameter) when we call Impostors() (here we will need to cache the distances every time Impostors() is called) and then use them instead of metric.Evaluate(). Does it make sense?
13:10 < ShikharJ> zoq: I have tmux'd a session, let's see if it shows any improvement over the 3 day runtime that we saw earlier.
13:31 < Atharva> sumedhghaisas: I know we decided on Thursdays 8pm IST, but is it possible for you at 10pm IST?
13:32 < Atharva> or about 9:30?
13:32 < manish7294> rcurtin: Just a bumpy thought. It may sound weird, but I am writing it anyway :) ---- Regarding your bounds idea, we are facing the problem of deciding a particular value for it, right? Is it possible to have an adaptive bounding value, just like the adaptive step size?
13:37 < ShikharJ> zoq: As expected, the smaller GAN tests pass within the time bound, can we also merge the DCGAN PR now?
13:50 < zoq> ShikharJ: Okay, left some comments regarding the test.
13:51 < ShikharJ> zoq: Cool.
14:12 -!- manish7294 [8ba7a9fb@gateway/web/freenode/ip.139.167.169.251] has quit [Ping timeout: 260 seconds]
14:29 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has joined #mlpack
14:36 < sumedhghaisas> Atharva: Hi Atharva
14:36 < sumedhghaisas> Sure. 10pm works for me as well.
14:36 < sumedhghaisas> If you get free earlier let me know
15:36 -!- travis-ci [~travis-ci@ec2-54-227-123-80.compute-1.amazonaws.com] has joined #mlpack
15:36 < travis-ci> manish7294/mlpack#29 (lmnn - d05cfd3 : Manish): The build has errored.
15:36 < travis-ci> Change view : https://github.com/manish7294/mlpack/compare/8a6709f089b7...d05cfd31cc5e
15:36 < travis-ci> Build details : https://travis-ci.com/manish7294/mlpack/builds/76957776
15:36 -!- travis-ci [~travis-ci@ec2-54-227-123-80.compute-1.amazonaws.com] has left #mlpack []
15:50 < rcurtin> manish7294: a couple comments, sorry that I was not able to respond until now
15:51 < rcurtin> don't worry about a lack of custom k---if the MATLAB script doesn't support it, it's not a huge deal
15:51 < rcurtin> and I am not surprised it takes a huge amount of memory
15:52 < rcurtin> for the tree building optimization, you are right, in some cases tree building can be fast (depends on the dataset)
15:53 < rcurtin> at the same time, unless you've modified the code, it isn't counting the time taken to build the query trees
15:54 < rcurtin> on, e.g., MNIST, tree building takes a much longer time
15:54 < rcurtin> so I think it will be a worthwhile optimization on larger datasets
15:54 < rcurtin> for the distance caching, you are right---we can only avoid the calculation exactly when Impostors() is called
15:55 < rcurtin> for the bumpy thought, I'm not sure I fully understand---for bounding values, the bound will depend on | L_t - L_{t + 1} |_F^2, which is fast to calculate
16:05 -!- manish7294 [~yaaic@2405:205:2480:faee:b8e3:2ab7:1231:84bf] has joined #mlpack
16:07 < manish7294> rcurtin: Regarding the bumpy thought - if I am right, we need to bound that expression under some value like "exp < b"; as per my understanding, this b varies a lot from one dataset to another. So my earlier comment was about this b
16:09 < rcurtin> it may vary, but I think it may not be all that much
16:09 < rcurtin> basically the quantity I am talking about bounding is 'eval' in 'eval < -1'
16:10 < rcurtin> I think it will not be hard to adapt the bounds from the notes that I wrote to show that the last iteration's 'eval' can be used to make a lower bound on this iteration's eval
16:11 < rcurtin> I think it will look something like 'eval_t < eval_{t - 1} + \| L_t - L_{t + 1} \| * (some function of \| x_i \| and \| x_l \| or something like this)'
16:12 < rcurtin> but I need to compute the exact value, unless you'd like to do that theory part :)
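(For reference, a rough sketch of the kind of bound involved, using only the triangle inequality and the induced norm; the exact form rcurtin mentions would still need to be derived:)

    \| L_{t+1} (x_i - x_l) \| >= \| L_t (x_i - x_l) \| - \| (L_{t+1} - L_t)(x_i - x_l) \|
                              >= \| L_t (x_i - x_l) \| - \| L_{t+1} - L_t \| \| x_i - x_l \|

so between iterations each pairwise distance can change by at most \| L_{t+1} - L_t \| \| x_i - x_l \|, which is the ingredient that would let the previous iteration's 'eval' serve as a bound on the current one.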
16:13 < manish7294> Ah! I mixed it up with another thing, so you can see what will happen if I go and do that part, right ;)
16:15 < rcurtin> I think there are so many optimizations that we confuse ourselves a little bit talking about them... do you think that it would be easier if we went ahead, merged the LMNN code, then opened issues for each possible optimization?
16:15 < rcurtin> then each optimization could be handled separately in its own PR, making discussion a lot easier (I think)
16:15 < rcurtin> at least from my end I am always getting mixed up which part we are talking about :)
16:15 < manish7294> It would be a lot better than the current situation ;)
16:16 < rcurtin> right, ok, so then let me know when you've got the current code pushed, and I'll review it and we can do the merge in the next handful of days
16:16 < manish7294> I will push the modified distance caching; then, after the build passes, you can go ahead
16:17 < manish7294> sure
16:17 < rcurtin> right, that sounds good. you should have a review in a handful of hours; I think any remaining issues will be little ones, like documentation or options for lmnn_main.cpp
16:18 < rcurtin> and then I'll also open issues for each of the possible optimizations and we can discuss there which are good ideas, which are bad ideas, and which are worth implementing :)
16:18 < manish7294> ya there may be many of those :)
16:18 < manish7294> great
16:18 < rcurtin> I'm not too worried about the timeline, since the actual LMNN implementation you did was super fast, I think it will be similar for BoostMetric
16:19 < rcurtin> most of the time we've spent has been optimization, but I just need to glance at the timeline again and make sure I don't suggest 1000 optimizations that there's not time for :)
16:19 < rcurtin> as you know it's already quite fast by comparison, I just know that there is more there :)
16:19 < manish7294> though the optimization took time, I think it's worth it
16:21 < manish7294> Do you think we have achieved at least the bare minimum for a workshop? :)
16:24 < rcurtin> almost---it's hard to publish a paper if it is just a fast implementation, but when we can start adding clever bounds and computation reductions, we have something much better
16:24 < rcurtin> so if we can start caching the query trees, and computing when no impostors will change (even if that only happens for some datasets), that plus what we already have is definitely something novel and publishable, I think
16:24 < manish7294> I hope we achieve that :)
16:25 < rcurtin> and given the speedups we are already showing, the experiments section will look great almost regardless
16:25 < rcurtin> in some cases it looks like 50x-100x with comparable resulting kNN accuracies
16:26 < manish7294> ya there are some datasets
16:27 < manish7294> I think balance is the most visible one
16:30 < Atharva> sumedhghaisas: Hey Sumedh
16:35 < sumedhghaisas> Atharva: Hi Atharva
16:35 < sumedhghaisas> How are things going?
16:36 < Atharva> I took a lot of time to debug the gradient check of ReconstructionLoss, but it's done now.
16:36 < Atharva> What's next?
16:37 < sumedhghaisas> haha... that's great. :) gradient errors are the worst
16:37 < Atharva> yeah they are
16:37 < Atharva> but the most important :P
16:37 < sumedhghaisas> Did you send the CL for NormalDistribution layer?
16:37 < sumedhghaisas> sorry no layer...
16:37 < sumedhghaisas> just NormalDistribution
16:38 < Atharva> what do you mean by CL?
16:38 < Atharva> sorry
16:38 < sumedhghaisas> Also we can perform JacobianTest for NormalDistribution log prob
16:38 < sumedhghaisas> ahh... PR
16:38 < sumedhghaisas> at work we have CLs
16:38 < sumedhghaisas> so usually I get confused
16:38 < Atharva> oh, okayy
16:39 < Atharva> Yeah, I opened a new PR
16:39 < Atharva> please take a look at it when you get time, I had to change a lot of things because the input can also be negative
16:41 < sumedhghaisas> hmm... I see
16:42 < sumedhghaisas> I think this class is becoming too specific to ANN module
16:42 < sumedhghaisas> maybe we should move it inside the module
16:42 < sumedhghaisas> for now
16:42 < sumedhghaisas> until we figure out how to generalize it for outer dists
16:42 < Atharva> It would have to be because we have to keep the ReconstructionLoss layer generic for other distributions
16:42 < Atharva> yeah
16:43 < sumedhghaisas> We could keep the ReconstructionLayer generic for ANN dists
16:43 < sumedhghaisas> for now
16:44 < Atharva> I didn't get it, do you mean that we create a separate folder for ANN dists?
16:45 < sumedhghaisas> yes... 'dists' in ANN folder
16:45 < Atharva> okay
16:46 < Atharva> So, next we will do a Jacobian test, after that?
16:46 < sumedhghaisas> And regarding the PR, adding softplus depending on the input is wrong; the input can sometimes be positive and sometimes negative
16:47 < sumedhghaisas> Add a boolean, defaulting to true, which determines whether softplus is applied or not
16:47 < Atharva> okayy
16:47 < Atharva> got it
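(A minimal sketch of that boolean, assuming the class sits inside the ann module as discussed, so the vectorized SoftplusFunction::Fn(x, y) overload from mlpack's activation functions is available; the names preStdDev and applySoftplus are made up for illustration:)

    #include <mlpack/prereqs.hpp>
    #include <mlpack/methods/ann/activation_functions/softplus_function.hpp>

    class NormalDistribution
    {
     public:
      // applySoftplus defaults to true: the raw standard deviation parameter is
      // passed through softplus so it is always positive, whatever its sign,
      // instead of deciding based on the sign of the input.
      NormalDistribution(const arma::vec& mean,
                         const arma::vec& preStdDev,
                         const bool applySoftplus = true) :
          mean(mean)
      {
        if (applySoftplus)
          mlpack::ann::SoftplusFunction::Fn(preStdDev, stdDev);
        else
          stdDev = preStdDev;
      }

     private:
      arma::vec mean;
      arma::vec stdDev;
    };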
16:49 < sumedhghaisas> also is there a specific reason you are not using the implemented SoftplusFunction?
16:49 < Atharva> Yes, I tried that first but the build kept erroring. I guess it's because the dists are core files
16:50 < Atharva> It said Softplus wasn't defined
16:52 < sumedhghaisas> that's weird
16:52 < sumedhghaisas> could you send me the line you used to import the file?
16:52 < sumedhghaisas> ahh I see
16:52 < Atharva> Just to make sure I wasn't making any mistakes, I will try that again.
16:52 < Atharva> okay
16:52 < sumedhghaisas> that's due to the circular dependency, maybe
16:53 < sumedhghaisas> that should go away when you will move it inside the ANN module
16:53 < sumedhghaisas> if it doesn't let me know
16:53 < Atharva> <mlpack/methods/ann/activation_functions/softplus_function.hpp>
16:53 < Atharva> Yeah
16:53 < Atharva> It will probably go away in the ann folder
17:44 < sumedhghaisas> zoq: is there some way we could bypass a line for static code check?
17:47 < manish7294> rcurtin: I have pushed the cached distance related changes.
17:54 -!- travis-ci [~travis-ci@ec2-54-234-115-112.compute-1.amazonaws.com] has joined #mlpack
17:54 < travis-ci> manish7294/mlpack#30 (lmnn - 620ee59 : Manish): The build has errored.
17:54 < travis-ci> Change view : https://github.com/manish7294/mlpack/compare/d05cfd31cc5e...620ee5987fa2
17:54 < travis-ci> Build details : https://travis-ci.com/manish7294/mlpack/builds/76976477
17:54 -!- travis-ci [~travis-ci@ec2-54-234-115-112.compute-1.amazonaws.com] has left #mlpack []
18:15 < rcurtin> manish7294: sounds good, I'll review them soon when I have a chance
18:46 < zoq> sumedhghais: We can ignore files, but not lines, at least not now. But we don't have to wait for a green build; we know that we can ignore some issues.
18:54 < ShikharJ> zoq: Great news, with the new BatchSupport changes, I'm able to train on the full dataset within 10 hours. This is almost twice as fast as the expected time with Tensorflow on a desktop CPU (though with a server-grade CPU we can still expect around a 30~40% relative speedup)!
18:55 < ShikharJ> zoq: I'll post the results on the BatchSupport PR.
18:55 < zoq> ShikharJ: Great news indeed :)
19:08 -!- manish7294 [~yaaic@2405:205:2480:faee:b8e3:2ab7:1231:84bf] has quit [Ping timeout: 265 seconds]
19:16 < Atharva> zoq: Is it okay if in a PR, I manually add some changes I need from another PR and then later remove them when the other PR is merged?
19:20 < zoq> Atharva: Don't see a problem.
19:21 < Atharva> zoq: Okay, thanks.
20:08 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has quit [Quit: Leaving]
--- Log closed Fri Jun 22 00:00:14 2018