mlpack IRC logs, 2018-02-02

Logs for the day 2018-02-02 (starts at 0:00 UTC) are shown below.

--- Log opened Fri Feb 02 00:00:54 2018
02:19 -!- govg [~govg@unaffiliated/govg] has quit [Ping timeout: 240 seconds]
02:48 -!- vivekp [~vivek@unaffiliated/vivekp] has quit [Ping timeout: 240 seconds]
02:53 -!- vivekp [~vivek@unaffiliated/vivekp] has joined #mlpack
02:57 -!- vivekp [~vivek@unaffiliated/vivekp] has quit [Ping timeout: 268 seconds]
02:59 -!- vivekp [~vivek@unaffiliated/vivekp] has joined #mlpack
03:33 -!- govg [~govg@unaffiliated/govg] has joined #mlpack
03:39 -!- govg [~govg@unaffiliated/govg] has quit [Ping timeout: 256 seconds]
03:40 -!- govg [~govg@unaffiliated/govg] has joined #mlpack
05:30 -!- vivekp [~vivek@unaffiliated/vivekp] has quit [Ping timeout: 264 seconds]
05:38 -!- vivekp [~vivek@unaffiliated/vivekp] has joined #mlpack
07:10 -!- travis-ci [~travis-ci@ec2-54-234-205-100.compute-1.amazonaws.com] has joined #mlpack
07:10 < travis-ci> ShikharJ/mlpack#52 (GAN - b3c3da2 : Shikhar Jaiswal): The build has errored.
07:10 < travis-ci> Change view : https://github.com/ShikharJ/mlpack/compare/8548f786b80c...b3c3da25787a
07:10 < travis-ci> Build details : https://travis-ci.org/ShikharJ/mlpack/builds/336429755
07:10 -!- travis-ci [~travis-ci@ec2-54-234-205-100.compute-1.amazonaws.com] has left #mlpack []
07:25 -!- travis-ci [~travis-ci@ec2-54-198-205-61.compute-1.amazonaws.com] has joined #mlpack
07:25 < travis-ci> ShikharJ/mlpack#50 (master - a16363a : Marcus Edel): The build has errored.
07:25 < travis-ci> Change view : https://github.com/ShikharJ/mlpack/compare/629ca69116f9...a16363a9c9cf
07:25 < travis-ci> Build details : https://travis-ci.org/ShikharJ/mlpack/builds/336426247
07:25 -!- travis-ci [~travis-ci@ec2-54-198-205-61.compute-1.amazonaws.com] has left #mlpack []
07:58 -!- vivekp [~vivek@unaffiliated/vivekp] has quit [Ping timeout: 276 seconds]
08:01 -!- vivekp [~vivek@unaffiliated/vivekp] has joined #mlpack
08:54 -!- pvskand [~skand@14.139.9.9] has joined #mlpack
09:43 -!- pvskand [~skand@14.139.9.9] has quit [Ping timeout: 252 seconds]
10:13 -!- brni [cb6ef212@gateway/web/freenode/ip.203.110.242.18] has joined #mlpack
11:06 -!- brni [cb6ef212@gateway/web/freenode/ip.203.110.242.18] has quit [Ping timeout: 260 seconds]
11:15 -!- travis-ci [~travis-ci@ec2-54-234-205-100.compute-1.amazonaws.com] has joined #mlpack
11:15 < travis-ci> ShikharJ/mlpack#53 (RBM - ebf8187 : Shikhar Jaiswal): The build has errored.
11:15 < travis-ci> Change view : https://github.com/ShikharJ/mlpack/compare/df8ebc26eb27...ebf81878a76e
11:15 < travis-ci> Build details : https://travis-ci.org/ShikharJ/mlpack/builds/336489277
11:15 -!- travis-ci [~travis-ci@ec2-54-234-205-100.compute-1.amazonaws.com] has left #mlpack []
12:03 -!- pvskand [~skand@117.252.3.34] has joined #mlpack
13:42 -!- pvskand [~skand@117.252.3.34] has quit [Ping timeout: 248 seconds]
14:38 -!- alsc [~alsc@host172-21-dynamic.244-95-r.retail.telecomitalia.it] has joined #mlpack
14:38 < alsc> hi there, zoq are you there?
14:39 < alsc> I am trying to get the types and weights out of an FFN model, with OutputVisitor.... messy
15:03 < rcurtin> alsc: you could use FFN::Parameters() but I am not sure that gets you exactly what it is you need
15:06 < rcurtin> it would be possible to add some kind of FFN::Get<LayerType>(size_t) function that returns a layer, then you could do FFN::Get<LayerType>(size_t).Parameters(), but you would need to know the layer type itself when you called that
15:07 < rcurtin> that could be done via boost::variant::get(), which would throw an exception if the wrong LayerType was specified for a layer
15:16 < alsc> rcurtin: thanks yeah I ended up using variant::get
15:17 < alsc> it's kind of clumsy as I am just interested in linear layers' weights and so I am relying on the auto& linearLayer = get<Linear<arma::mat, arma::mat>*>(layers[li]); not throwing an exception
15:18 < alsc> but it seems to work. now the only thing that's kind of unexpected is that n_rows and n_cols of .Parameters() isn't what I thought
15:18 < rcurtin> I guess I am not sure how we could make it less clumsy though, I think the best we could give would basically be "auto& linearLayer = network.Get<Linear<>>(li)"
15:18 < alsc> like: I constructed a network with the following lines
15:18 < alsc> hold on
15:19 < rcurtin> yeah, the linear layer looks to store memory all in one column: "weights.set_size(outSize * inSize + outSize, 1);"
15:19 < alsc> ahhh
15:19 < alsc> so is that the last one the biases?
15:20 < rcurtin> hmm, I see that internally it has a 'weight' and 'bias' member that would be much more suited for what you want
15:20 < rcurtin> but those aren't made accessible through a function or anything
15:20 < alsc> in fact it wasn't a multiple of the layer size, let me check
15:20 < rcurtin> but yeah, the last outSize parameters are the biases
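(To make that layout concrete: a sketch of unpacking the flat parameter column with Armadillo, assuming the storage rcurtin describes; the function names here are illustrative, not mlpack API.)

    #include <armadillo>

    // params: the flat (outSize * inSize + outSize) x 1 column returned by
    // Parameters() on a Linear<> layer.
    arma::mat UnpackWeight(const arma::mat& params,
                           const arma::uword inSize,
                           const arma::uword outSize)
    {
      // The first outSize * inSize elements hold the weight matrix,
      // stored column-major as outSize x inSize.
      return arma::reshape(params.rows(0, outSize * inSize - 1),
                           outSize, inSize);
    }

    arma::vec UnpackBias(const arma::mat& params,
                         const arma::uword inSize,
                         const arma::uword outSize)
    {
      // The last outSize elements hold the biases.
      return params.rows(outSize * inSize,
                         outSize * inSize + outSize - 1);
    }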
15:21 < alsc> uhmm super weird. where's that weights.set_size(outSize * inSize + outSize, 1) ?
15:21 < rcurtin> linear_impl.hpp:35
15:22 < rcurtin> I think that's the constructor though, maybe one of the visitors is doing something else weird to it
15:22 < alsc> ah yes
15:22 < alsc> I get back to you with a pastebin, one sec
15:32 < alsc> yeah that's it!
15:32 < alsc> this made me spot a bug in my code actually, hehe
15:34 < alsc> ah there's the assignment of n_rows and n_elem in the 2nd constructor
15:34 < alsc> weight = arma::mat(weights.memptr(), outSize, inSize, false, false);
15:35 < alsc> kind of stiff because Reset is called at training time.. in this case I am loading with boost::archive and I get all the dimensions squashed
15:35 < alsc> (I am coding a vanilla decoder network in pure C that can use these coefficients)
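(For reference, that constructor creates an alias into the flat 'weights' memory rather than a copy: the two trailing false arguments are copy_aux_mem and strict. A minimal, self-contained illustration of the same pattern:)

    #include <armadillo>

    int main()
    {
      const arma::uword inSize = 3, outSize = 2;
      // Flat storage as in Linear<>: outSize * inSize weights, then outSize biases.
      arma::mat weights(outSize * inSize + outSize, 1, arma::fill::randn);

      // View the first outSize * inSize elements as an outSize x inSize matrix;
      // copy_aux_mem = false shares the memory, strict = false allows resizing.
      arma::mat weight(weights.memptr(), outSize, inSize, false, false);
      // View the trailing outSize elements as the bias column.
      arma::mat bias(weights.memptr() + weight.n_elem, outSize, 1, false, false);

      weight(0, 0) = 42.0;  // Writes through to weights(0, 0).
      return 0;
    }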
15:37 < alsc> shall I just call Reset(); on line 36?
15:42 < rcurtin> no, I think that when FFN::Reset() is called, one of the visitors calls Reset()
15:42 < rcurtin> I guess I am a little confused about how the linear layer you are getting has the wrong size, can you tell me more of what the issue is?
16:02 < alsc> ah, well it's squashed in 1-dim
16:03 < rcurtin> yeah; if there was public access to the 'weight' and 'bias' members I think that those would be in the format you expect
16:03 < alsc> ahh I see!
16:03 < alsc> ok, I'll add it
16:03 < alsc> I already had to expose FFN::network btw
16:04 < rcurtin> yeah, I guess you could add a wrapper around variant::get<> if you wanted
16:04 < rcurtin> and we could merge that also
16:05 < alsc> Weights() and Biases() ?
16:05 < rcurtin> nah, I'd just go with the capitalized version of what the internal member is called, so Weight() and Bias() (that would match the rest of the mlpack code)
16:05 < rcurtin> if that works for you :)
16:06 < rcurtin> (the other option is to change the internal names, which I guess is just fine, Linear<> is super simple anyway)
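(In mlpack's usual getter/modifier style, the accessors under discussion would look roughly like this; a sketch assuming Linear<>'s internal 'weight' and 'bias' members:)

    // Inside the Linear<InputDataType, OutputDataType> layer class:
    //! Get the weight matrix.
    const OutputDataType& Weight() const { return weight; }
    //! Modify the weight matrix.
    OutputDataType& Weight() { return weight; }

    //! Get the bias vector.
    const OutputDataType& Bias() const { return bias; }
    //! Modify the bias vector.
    OutputDataType& Bias() { return bias; }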
16:06 < alsc> hehe yeah
16:08 < alsc> ok testing it
16:24 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has joined #mlpack
16:30 < alsc> rcurtin: ok works
16:30 < alsc> the wrapper to variant::get<> you mean as a template method of FFN?
16:30 < rcurtin> yeah, like
16:30 < rcurtin> template<typename LayerType>
16:31 < rcurtin> FFN::GetLayer(const size_t i) { return get<LayerType>(...) }
16:31 < rcurtin> or something like that
16:31 < alsc> returning the vector<> is handy though, because one can know how many layers there are
16:31 < alsc> + getNumberOfLayers then?
16:34 < rcurtin> yeah, I would agree with that
16:34 < rcurtin> I might avoid returning the vector<> though because someday the underlying implementation may change and then the API would have to change
16:35 < alsc> yeah sure
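(A fleshed-out version of the sketch above; hypothetical names, assuming FFN stores its layers as a std::vector of boost::variant layer pointers, so boost::get throws boost::bad_get on a type mismatch:)

    // Hypothetical additions to FFN; 'network' is the internal
    // std::vector of boost::variant layer pointers.
    template<typename LayerType>
    LayerType& GetLayer(const size_t i)
    {
      // Throws boost::bad_get if layer i is not a LayerType.
      return *boost::get<LayerType*>(network[i]);
    }

    //! Return the number of layers, without exposing the vector itself.
    size_t NetworkSize() const { return network.size(); }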
16:52 -!- pvskand [~skand@117.220.161.114] has joined #mlpack
17:19 -!- vmg27 [31cf30e1@gateway/web/freenode/ip.49.207.48.225] has joined #mlpack
17:22 < vmg27> I am trying to run tests on mac and getting the following error
17:22 < vmg27> "AdaBoostTest/PerceptronSerializationTest": signal: SIGABRT (application abort requested) /mlpack/src/mlpack/tests/serialization.hpp:215: last checkpoint
17:23 < alsc> rcurtin: here https://github.com/mogees/mlpack/commit/7ec84b6b915d46e20ffa17213b5c2a7bb02df5a8
17:23 < alsc> and here https://github.com/mogees/mlpack/commit/5886db7d27e53ec51e88cf818d2db6e4840e0393
17:23 < alsc> I have no idea how to cherry pick a PR
17:24 < vmg27> error : libc++abi.dylib: terminating with uncaught exception of type boost::archive::archive_exception: input stream error-Undefined error: 0 unknown location:0: fatal error: in "AdaBoostTest/PerceptronSerializationTest": signal: SIGABRT (application abort requested) /mlpack/src/mlpack/tests/serialization.hpp:215: last checkpoint
17:24 < vmg27> any help?
17:27 < alsc> sorry, no idea, but unknown location looks like it has to do with the paths
17:38 -!- sameeran [3123dd99@gateway/web/freenode/ip.49.35.221.153] has joined #mlpack
17:38 -!- sameeran [3123dd99@gateway/web/freenode/ip.49.35.221.153] has left #mlpack []
17:48 < zoq> vmg27: Do all Serialization tests fail or just the Adaboost? e.g. SparseCodingTest
17:51 < zoq> alsc: Looks good, not sure about the getNumberOfLayers name, do you think NetworkSize works as well?
17:52 -!- alsc [~alsc@host172-21-dynamic.244-95-r.retail.telecomitalia.it] has quit [Quit: alsc]
17:53 < vmg27> Yeah.. SparseCodingTest is failing too
17:55 < zoq> How did you install mlpack and boost?
17:57 -!- alsc [~alsc@host172-21-dynamic.244-95-r.retail.telecomitalia.it] has joined #mlpack
17:57 < alsc> yeah sounds good, shall I?
17:58 < zoq> alsc: Would you like to open a PR or should I cherry pick the changes from your repo?
17:59 < alsc> I don't know how to do a PR with non-consecutive commits
17:59 < alsc> what's the difference?
18:00 < alsc> ok last commit d0c44f6b17314991e84ed11de1f10aa24e9682ff renames it
18:00 < zoq> Cherry pick can be used to pull a single commit. What about creating another branch and redoing the changes over there?
18:01 < alsc> branching from?
18:01 < vmg27> I installed boost via brew and followed the steps on the website to build mlpack, except for installing dependencies
18:01 < alsc> my master is not up to date with mlpack's
18:03 < zoq> alsc: I see, and can you update the master branch: git remote add upstream https://github.com/mlpack/mlpack.git && git fetch upstream && git checkout master && git rebase upstream/master?
18:03 < zoq> alsc: If not I guess it's easier to cherry pick the commit
18:03 < alsc> it will have lots of conflicts....
18:03 < alsc> yes please
18:03 < zoq> alsc: Okay, let me do this later today, does this sound good?
18:03 < alsc> yup sure
18:04 < alsc> I have all that part we talked about last time already committed on master
18:04 < alsc> termination policies
18:04 < zoq> ah nice
18:04 < alsc> so it has diverged quite a lot from mlpack's master
18:05 < alsc> in fact I am using it quite a lot... I am passing a lambda as termination policy so it's handy, local to the calling code... computing validation accuracy, saving models, and plotting from there
18:05 < zoq> I see, actually we were talking about the feature here: https://github.com/mlpack/models/pull/5
18:06 < zoq> vmg27: Yeah no need to run make install.
18:07 < zoq> vmg27: I'll see if I can reproduce the issue on my mac later, can you tell me the boost version you are using?
18:07 < alsc> in the digit recognizer issue? where?
18:08 < alsc> anyway, it's here https://github.com/mogees/mlpack/commit/4adbc90454c8ec2057b9dcbd32ed29c788815423
18:08 < zoq> https://github.com/mlpack/models/pull/5#discussion_r163564015
18:08 < zoq> okay, thanks
18:09 < vmg27> boost version : 1.66
18:09 < alsc> https://github.com/mogees/mlpack/blob/43e68eba94354f43136119f016a3e0a5430394aa/src/mlpack/core/optimizers/sgd/termination_policies/default_termination.hpp
18:10 < zoq> vmg27: And I guess macOS 10.13.2
18:10 < vmg27> yes
18:10 < alsc> zoq: I have refactored some of the default parameters of SGD into the default termination policy... you'll follow easily. let me know if I can be of help
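(A generic, self-contained illustration of the lambda-as-termination-policy idea being described; this is not mlpack's API, just the pattern of an optimizer loop asking a caller-supplied policy whether to stop:)

    #include <cstddef>
    #include <iostream>

    // The optimizer loop consults the policy after every step.
    template<typename TerminationPolicy>
    void OptimizeLoop(TerminationPolicy terminate)
    {
      double objective = 1.0;
      for (std::size_t epoch = 0; ; ++epoch)
      {
        objective *= 0.9;  // Stand-in for one optimization step.
        if (terminate(epoch, objective))
          break;
      }
    }

    int main()
    {
      // The caller supplies local logic: validation, checkpointing, plotting.
      OptimizeLoop([](const std::size_t epoch, const double objective)
      {
        std::cout << "epoch " << epoch << " objective " << objective << "\n";
        return objective < 1e-3 || epoch >= 100;  // Stop criteria.
      });
      return 0;
    }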
18:11 < zoq> alsc: I guess, for the termination feature your plan is to open a PR?
18:11 < alsc> probably yes, I should have branched though
18:12 < alsc> what do you think could be the best way?
18:13 < alsc> maybe I could branch to something where I revert to what's in mlpack/master
18:13 < alsc> then cherry pick into there
18:13 < zoq> as long as you only work on a single feature you can easily use the master branch; after the feature is merged, updating is easy. In the future you should probably open a new branch :)
18:14 < alsc> fact is that I have trimmed a lot of CMake and testing from my master, the stuff I don't need
18:14 < alsc> maybe I'll just fork a new one into my personal account, change stuff in there
18:14 < alsc> and send PRs from there
18:14 < zoq> that's also an option
18:15 < alsc> next week
18:15 < zoq> sounds good
18:30 -!- ImQ009_ [~ImQ009@unaffiliated/imq009] has joined #mlpack
18:32 -!- alsc [~alsc@host172-21-dynamic.244-95-r.retail.telecomitalia.it] has quit [Quit: alsc]
18:33 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has quit [Ping timeout: 256 seconds]
19:18 -!- pvskand [~skand@117.220.161.114] has left #mlpack ["Ex-Chat"]
19:21 -!- vmg27 [31cf30e1@gateway/web/freenode/ip.49.207.48.225] has quit [Quit: Page closed]
21:05 -!- witness [uid10044@gateway/web/irccloud.com/x-lkeswescjdxpchwz] has joined #mlpack
21:19 -!- addy [3d0c4bd1@gateway/web/freenode/ip.61.12.75.209] has joined #mlpack
22:06 -!- addy [3d0c4bd1@gateway/web/freenode/ip.61.12.75.209] has quit [Ping timeout: 260 seconds]
23:50 -!- ImQ009_ [~ImQ009@unaffiliated/imq009] has quit [Quit: Leaving]
--- Log closed Sat Feb 03 00:00:55 2018