mlpack IRC logs, 2017-03-09

Logs for the day 2017-03-09 (starts at 0:00 UTC) are shown below.

--- Log opened Thu Mar 09 00:00:26 2017
--- Day changed Thu Mar 09 2017
00:00 < zoq> Never heard of D.A.F, one band I really like is Moderat, kinda different and probably more modern.
00:01 -!- delfo_ [~bruno@] has joined #mlpack
00:02 < zoq> "A New Error"
00:02 < zoq> just a cool name for a song
00:05 < rcurtin> Moderat, I will add that to my list
00:05 < rcurtin> but I don't have any headphones on this trip, so it will have to wait until I am home...
00:46 -!- delfo [~delfo@] has joined #mlpack
00:47 -!- delfo [~delfo@] has quit [Client Quit]
00:52 -!- thanhdng [80b88449@gateway/web/freenode/ip.] has joined #mlpack
01:05 -!- delfo_ [~bruno@] has quit [Quit: WeeChat 1.7]
01:06 -!- delfo_ [~bruno@] has joined #mlpack
01:06 -!- delfo_ [~bruno@] has quit [Client Quit]
01:06 -!- delfo_ [~bruno@] has joined #mlpack
01:11 -!- thanhdng [80b88449@gateway/web/freenode/ip.] has quit [Quit: Page closed]
01:31 -!- chvsp [cb6ef207@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
01:37 -!- arunreddy [~arunreddy@] has joined #mlpack
01:41 -!- arunreddy [~arunreddy@] has quit [Client Quit]
01:53 -!- mikeling [uid89706@gateway/web/] has joined #mlpack
02:04 < mikeling> rcurtin: ping
02:06 -!- delfo_ [~bruno@] has quit [Quit: WeeChat 1.7]
02:06 < rcurtin> mikeling: I am here but not able to help with your compilation error, I have a lot of work I need to do and a paper deadline on Monday
02:06 < rcurtin> I would advise experimenting with different syntax possibilities and seeing how this changes the error, which could possibly help guide you to a solution
02:06 < mikeling> rcurtin: oh, I see.
02:06 < rcurtin> but I am assuming you were going to ask about that :)
02:06 < mikeling> yep, you are absolutely right :D No worries, I will keep working on it
02:08 < rcurtin> yeah, sorry that I can't help more right now
02:09 < mikeling> it's ok ;)
02:09 < rcurtin> I think, based on glancing at it, that the fix will probably be a simple syntax change, but figuring out the right change to make with gcc's errors can be very hard sometimes...
02:59 -!- kaamos [c000fc3b@gateway/web/freenode/ip.] has joined #mlpack
03:00 < kaamos> hi
03:00 < zoq> kaamos: Hello there!
03:03 < kaamos> I'm an MSc student from Canada and I'm looking forward to doing GSoC this summer. This will be my first time doing GSoC and I'm really interested in some of the project ideas that you have posted. Could you please help me out on the application and the process?
03:05 < zoq> kaamos: Have you seen: The Student Manual is also quite helpful:
03:06 < kaamos> I understand that your primary codebase is C++, and I don't have much background in C++. However, I did most of my undergrad in C, and I have used Java, Python, and JS throughout my internships, employment, and research. Is it still okay that I apply?
03:08 < kaamos> Thanks for the links! No I hadn't seen that page before, but I did review your github profile (that's what was linked from your GSoC profile).
03:11 < zoq> kaamos: Depending on the project, basic knowledge is sufficient; in either case you should be willing to dive into various aspects of C++.
03:22 < kaamos> Happy to learn!
03:22 < kaamos> One more thing: I don't have much planned for the summer, however, I am considering a two week vacation. Is that okay?
03:23 < kaamos> Unfortunately, it falls within the 3-month work period and not May
03:24 < kaamos> I'm sure I can put in some extra hours over the weekends in other weeks and make up for it somehow :)
03:31 < zoq> kaamos: As long as you discuss that with your mentor upfront, show progress, and can make up the time, I think this is fine. Also, it's a good idea to note that in your proposal.
03:31 < zoq> mikeling: I just glanced over the patch and will probably take a closer look at it tomorrow, but since you changed the template parameters of SplitIfBetter, don't you have to specify the value of UseWeights when you call e.g. "double dimGain = NumericSplitType<FitnessFunction>::SplitIfBetter("
03:33 < kaamos> Cheers mate! I look forward to it!
03:34 -!- kaamos [c000fc3b@gateway/web/freenode/ip.] has quit [Quit: Page closed]
03:37 < mikeling> zoq: yep, sorry, I should call it like "NumericSplitType<FitnessFunction>::SplitIfBetter<UseWeights>". I guess I just paid too much attention to the Evaluate functions :)
03:37 < mikeling> thank you!
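The fix discussed above boils down to passing an explicit template argument to a static member function template. A minimal, self-contained sketch of that pattern (all names below are hypothetical stand-ins that only mirror the shape of mlpack's NumericSplitType and SplitIfBetter, not the real interface):

```cpp
#include <cassert>

// Hypothetical stand-in for a numeric split class.
template<typename FitnessFunction>
struct NumericSplit
{
  // Static member function template, mirroring SplitIfBetter<UseWeights>.
  template<bool UseWeights>
  static double SplitIfBetter(const double gain)
  {
    // Toy behavior: weighted calls halve the gain.
    return UseWeights ? gain / 2.0 : gain;
  }
};

struct DummyFitness { };

double CallSplit(const double gain)
{
  // In a non-dependent context the explicit template argument is written
  // directly after the function name, as mikeling notes above.
  return NumericSplit<DummyFitness>::SplitIfBetter<true>(gain);
}
```

If the enclosing scope were itself a template and `NumericSplit<FitnessFunction>` a dependent name, the call would additionally need the `template` keyword before `SplitIfBetter`.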
04:07 -!- govg [~govg@unaffiliated/govg] has quit [Ping timeout: 260 seconds]
04:23 -!- thyrix [2d4c4a21@gateway/web/freenode/ip.] has joined #mlpack
04:52 -!- govg [~govg@unaffiliated/govg] has joined #mlpack
05:11 -!- doublegamer26 [0e8b2669@gateway/web/freenode/ip.] has joined #mlpack
05:12 < doublegamer26> Hello. I would like to know how I can contribute to the community. All help is appreciated. :)
05:13 < rcurtin> doublegamer26: hi there, we get this question a lot, so we made a page for it:
05:13 < rcurtin> maybe that will be helpful to you :)
05:16 < doublegamer26> Thank you.
05:16 -!- vinayakvivek [uid121616@gateway/web/] has joined #mlpack
05:22 -!- doublegamer26 [0e8b2669@gateway/web/freenode/ip.] has quit [Quit: Page closed]
05:52 -!- govg [~govg@unaffiliated/govg] has quit [Ping timeout: 240 seconds]
06:38 -!- govg [~govg@unaffiliated/govg] has joined #mlpack
07:23 -!- diehumblex [uid209517@gateway/web/] has joined #mlpack
08:02 -!- witness_ [uid10044@gateway/web/] has joined #mlpack
09:34 -!- thyrix [2d4c4a21@gateway/web/freenode/ip.] has quit [Quit: Page closed]
10:06 -!- adi_ [cb737672@gateway/web/freenode/ip.] has joined #mlpack
10:13 -!- madhudeep [af652162@gateway/web/freenode/ip.] has joined #mlpack
10:41 -!- thyrix [2d4c4a21@gateway/web/freenode/ip.] has joined #mlpack
10:45 -!- witness_ [uid10044@gateway/web/] has quit [Quit: Connection closed for inactivity]
10:49 -!- madhudeep [af652162@gateway/web/freenode/ip.] has quit [Quit: Page closed]
10:50 -!- tejank10 [3b5f0447@gateway/web/freenode/ip.] has joined #mlpack
10:55 -!- vinayakvivek [uid121616@gateway/web/] has quit [Quit: Connection closed for inactivity]
11:02 -!- adi_ [cb737672@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
11:40 -!- tejank10 [3b5f0447@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
11:54 -!- frankbozar [8c7188da@gateway/web/freenode/ip.] has joined #mlpack
11:54 -!- frankbozar [8c7188da@gateway/web/freenode/ip.] has left #mlpack []
12:51 -!- chvsp [cb6ef217@gateway/web/freenode/ip.] has joined #mlpack
13:03 -!- thyrix [2d4c4a21@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
13:52 -!- sicko [~sicko@] has joined #mlpack
14:15 -!- chvsp [cb6ef217@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
14:28 -!- govg [~govg@unaffiliated/govg] has quit [Ping timeout: 258 seconds]
14:50 -!- thyrix [2d4c4a21@gateway/web/freenode/ip.] has joined #mlpack
14:55 -!- govg [~govg@unaffiliated/govg] has joined #mlpack
15:00 -!- ironstark [75d35a9b@gateway/web/freenode/ip.] has joined #mlpack
15:10 -!- ironstark [75d35a9b@gateway/web/freenode/ip.] has quit [Quit: Page closed]
15:15 -!- tejank10 [3b5f0447@gateway/web/freenode/ip.] has joined #mlpack
15:18 -!- ironstark [~ironstark@] has joined #mlpack
15:20 -!- ironstark [~ironstark@] has quit [Quit: Leaving]
15:21 -!- ironstark [~ironstark@] has joined #mlpack
15:23 -!- ironstark [~ironstark@] has quit [Client Quit]
15:46 < tejank10> Hello, I was trying to compile and run the tests provided, but I am getting errors pertaining to the Boost library, specifically from its variant directory. Can anybody please help me?
15:57 -!- mikeling [uid89706@gateway/web/] has quit [Quit: Connection closed for inactivity]
16:04 < rcurtin> tejank10: I can try to help, but you'll need to provide more information like the error message, etc. :)
16:05 -!- K4k [~K4k@unaffiliated/k4k] has left #mlpack ["WeeChat 1.6"]
16:09 < zoq> mikeling: Two more issues; every time you use 'FitnessFunction::Evaluate<UseWeights>(...)' it should be 'FitnessFunction::template Evaluate<UseWeights>(...)'
16:09 < zoq> mikeling: and I think you missed the weights parameter in one of the SplitIfBetter function calls. Let us know if that solves the errors you see.
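The `template` disambiguator zoq mentions is required whenever a member template is called through a dependent name. A toy illustration, with hypothetical names standing in for the real FitnessFunction::Evaluate:

```cpp
#include <cassert>

// Hypothetical fitness function with a static member function template,
// standing in for FitnessFunction::Evaluate<UseWeights>.
struct ToyFitness
{
  template<bool UseWeights>
  static double Evaluate(const double x)
  {
    return UseWeights ? x * 2.0 : x;
  }
};

// Inside this function template, FitnessFunction is a dependent name, so
// the compiler must be told that Evaluate names a template; without the
// 'template' keyword, 'Evaluate<true>' would be parsed as a comparison
// expression and the call would fail to compile.
template<typename FitnessFunction>
double CallEvaluate(const double x)
{
  return FitnessFunction::template Evaluate<true>(x);
}
```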
16:11 < zoq> let's see if he checks the logs
16:17 < tejank10> Thanks @rcurtin. Following are some errors which I am encountering while compiling ann_layer_test.cpp
16:18 < tejank10> boost/variant/detail/make_variant_list.hpp:40:46: error: wrong number of template arguments (33, should be at least 0) typedef typename mpl::list< T... >::type type; ^
16:18 < tejank10> boost/variant/variant.hpp:2332:43: error: using invalid field ‘boost::variant<T0, TN>::storage_’ return internal_apply_visitor_impl( ^
16:19 < tejank10> boost/variant/variant.hpp:2334:13: error: return-statement with a value, in function returning 'void' [-fpermissive] );
16:19 < tejank10> I am running boost 1.58
16:20 < zoq> tejank10: And you used 'make' or 'make test' to build the tests?
16:22 < tejank10> no
16:24 < zoq> some g++ command line?
16:28 < tejank10> yes
16:33 < zoq> tejank10: I wouldn't say you can't build the test cases with g++, but it's somewhat involved. It's easier to use make:
16:34 < rcurtin> tejank10: I agree with zoq (sorry for the slow response, I had to drive to work)
16:35 < rcurtin> it's always easier to just 'make mlpack_test' to build the tests
16:35 < rcurtin> if you want to see what kind of command-line arguments to g++ are necessary, you can run 'VERBOSE=1 make' but even then be aware that the invocation of g++ there depends on lots of .o files that get built in different calls to g++
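The build flow rcurtin and zoq describe might look roughly like this (a sketch assuming an mlpack source checkout with dependencies installed; the bin/ location of the test binary is an assumption):

```shell
# Configure an out-of-source CMake build.
mkdir build && cd build
cmake ..

# Build only the test target, as recommended above.
make mlpack_test

# Same, but print the full g++ invocations CMake generates.
VERBOSE=1 make mlpack_test

# Run the test suite (output directory may differ per setup).
bin/mlpack_test
```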
16:37 -!- vinayakvivek [uid121616@gateway/web/] has joined #mlpack
16:48 < tejank10> Thanks @zoq and @rcurtin! I shall try this now :)
17:02 -!- thyrix [2d4c4a21@gateway/web/freenode/ip.] has quit [Quit: Page closed]
17:13 -!- ironstark [~ironstark@] has joined #mlpack
17:13 < ironstark> hi, i wanted to contribute to mlpack to develop essential deep learning modules, can someone give me pointers on where to start?
17:14 < ironstark> thanks in advance
17:14 < ironstark> i have already built mlpack and tried out some programs with it
17:14 < ironstark> are there any bugs/ideas i can work on to begin with?
17:23 < zoq> ironstark: Hello, have you searched through the list archives ( for other messages about the deep learning modules project? There is a bunch of information that has been written about this project in the past.
17:23 < zoq> ironstark: Check the issues on github, maybe you'll find something interesting; it's kinda difficult to keep enough issues open, so another idea is to dig around in the codebase and see if you can find something that can be improved. Also, we are always open to interesting new algorithms.
17:26 < ironstark> cool, i'll get back in some time
17:26 < zoq> ironstark: Sounds good :)
17:55 -!- tejank10 [3b5f0447@gateway/web/freenode/ip.] has quit [Quit: Page closed]
17:57 -!- shihao [80b49718@gateway/web/freenode/ip.] has joined #mlpack
18:05 -!- deepanshu_ [uid212608@gateway/web/] has joined #mlpack
18:06 -!- Nax [6f44656c@gateway/web/freenode/ip.] has joined #mlpack
18:13 -!- Nax__ [6f44656c@gateway/web/freenode/ip.] has joined #mlpack
18:16 -!- Nax [6f44656c@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
18:17 -!- Nax__ [6f44656c@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
18:18 < shihao> I have a question about issue#921:
18:18 < rcurtin> shihao: sorry, I had not responded to that
18:18 < rcurtin> let me do that now...
18:19 < shihao> rcurtin: Hi!
18:19 < shihao> rcurtin: I think I figured that out
18:20 < rcurtin> oh? ok, I will still comment anyway, because the way to fix it is actually somewhat complex...
18:20 < rcurtin> or maybe there is a quick workaround you found out?
18:20 < shihao> rcurtin: I see that in the load_impl.hpp file, there is only one function to load data
18:20 < shihao> But it can only load into matrix.
18:20 < shihao> rcurtin: Is that right?
18:21 < rcurtin> I just merged another PR that Lakshya worked on that can load column and row vectors too
18:21 < rcurtin> make sure you're looking at the up-to-date git master branch
18:21 < shihao> That's great!
18:22 < shihao> I guess there are a lot of situations like this in the test programs.
18:26 < rcurtin> I added a comment, I hope it is helpful
18:27 < rcurtin> sorry if you read it and it feels like this is a much harder task than originally thought... :)
18:35 < shihao> rcurtin: No worries, it's a meaningful improvement and I can learn a lot of things :)
18:42 -!- kris2 [~kris@] has joined #mlpack
18:43 < kris2> I am trying to get the gradients of the last layer or the loss function. I am doing this: arma::vec grad = model.Model()[model.Model().size()-1].Gradient();
18:43 < kris2> but it shows an error saying the layer doesn't have the Gradient method
18:44 < zoq> kris2: Are you sure the last layer has a Gradient function? What does your model look like?
18:45 < kris2> The last layer is also model.Add<LogSoftMax<>>();
18:46 < zoq> kris2: There is no Gradient function for LogSoftMax check src/ann/layer/log_softmax.hpp.
18:48 < kris2> FFN<MeanSquaredError<>,RandomInitialization> model; but the last layer should be mean squared error
18:51 < kris2> or am i wrong
18:52 < kris2> check here
18:52 < kris2>
18:53 < rcurtin> ok, it looks like the time has come to move the build server masterblaster from its home in LA to a new home in Springfield, Oregon
18:53 < rcurtin> I think that I can minimize the downtime to be ~3 days max; I'll see if I can get it to be lower than that
18:58 -!- chvsp [cb6ef215@gateway/web/freenode/ip.] has joined #mlpack
18:59 < kris2> zoq: can you have a look at the gist
19:10 < kris2> I can confirm that even Backward() gives the same error
19:15 < zoq> rcurtin: perfect timing :)
19:16 < zoq> kris2: MeanSquaredError isn't stored in std::vector<LayerTypes> network.
19:17 < zoq> kris2: Just the ones you added with Add(...).
19:18 < kris2> so if i wanted to get the gradients of the last layer how should i go about it
19:18 < kris2> i thought of using BackwardVisitor
19:19 < zoq> That depends on the model, but arma::vec grad = model.Model()[model.Model().size() - 2].Gradient(); should work in your case.
19:19 < kris2> with input from apply_visitor(OutputParameter(), model.Model()[model.Model().size()-1]
19:20 < kris2> i can confirm that gives the same error
19:21 < kris2> has no member gradient error
19:22 -!- shihao [80b49718@gateway/web/freenode/ip.] has quit [Quit: Page closed]
19:24 < zoq> kris2: Yeah, I see there is no visitor class, should I write one for you?
19:27 < kris2> zoq: basically i need the gradient of the loss layer. also we have the gradient_visitor. implemented.
19:27 < kris2> i think we could use that for gradients of any layer.
19:28 < kris2> that implements the gradients function
19:28 < kris2> we just have to provide the input. I am not sure what the inputs mean there.
19:28 < zoq> The gradient_visitor executes the Gradient function (Gradient(...)), but what you'd like is the gradient (Gradient()), right?
19:30 < kris2> Yes, but we could provide the input from the previous layer to the Gradient(...) visitor and essentially it would work the same, right?
19:34 < zoq> The problem is, even then the gradient visitor does not expose the gradient, because it's just used internally. I have an idea, let me write it down.
19:37 < zoq> see my comment
19:39 < zoq> I will update the gradient visitor, so that it also works, but maybe this works for you at the moment?
20:07 < kris2> Would that give the gradients for the output layer (mean squared error)?
20:08 < kris2> no right
20:08 < kris2> how could we get the gradients for them
20:22 < zoq> kris2: it does
20:23 < kris2> ohh, but they don't have the gradient() function ??
20:26 < zoq> The gradient of layer x can be accessed via the previous layer.
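The visitor mechanics discussed above can be sketched with std::variant (mlpack itself uses boost::variant and operates on arma::mat rather than double); all layer types below are hypothetical stand-ins, not mlpack's classes:

```cpp
#include <cassert>
#include <variant>
#include <vector>

// Toy layer that exposes a Gradient() accessor.
struct LinearLike
{
  double gradient = 1.5;
  double Gradient() const { return gradient; }
};

// No Gradient() member, like mlpack's LogSoftMax layer -- calling
// .Gradient() on it directly would not compile, which is the error
// kris2 hit above.
struct LogSoftMaxLike { };

using LayerVariant = std::variant<LinearLike, LogSoftMaxLike>;

// A visitor dispatches per layer type and can supply a fallback where
// the member is missing, instead of failing to compile.
struct GradientVisitor
{
  double operator()(const LinearLike& layer) const { return layer.Gradient(); }
  double operator()(const LogSoftMaxLike& /* layer */) const { return 0.0; }
};

double LastLayerGradient(const std::vector<LayerVariant>& network)
{
  return std::visit(GradientVisitor(), network.back());
}
```

This is why accessing the gradient "via the previous layer" works: the visitor can pick a layer that actually has the member, while layers without it get a well-defined fallback.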
20:27 -!- govg [~govg@unaffiliated/govg] has quit [Ping timeout: 260 seconds]
20:28 -!- govg [~govg@unaffiliated/govg] has joined #mlpack
21:11 -!- travis-ci [] has joined #mlpack
21:11 < travis-ci> mlpack/mlpack#1992 (master - 07b3707 : Marcus Edel): The build is still failing.
21:11 < travis-ci> Change view :
21:11 < travis-ci> Build details :
21:11 -!- travis-ci [] has left #mlpack []
21:12 -!- chvsp [cb6ef215@gateway/web/freenode/ip.] has quit [Quit: Page closed]
21:14 < kris2> zoq: if we want to get the weights of the layer connecting hidden layer to output layer then can i do something like this weights = boost::apply_visitor(OutputParameterVisitor(), model.Model()[model.Model().size - 2]);
21:15 < kris2> because the OutputParameterVisitor gives the trainable parameters, which here would be the weights?
21:18 < zoq> kris2: OutputParameterVisitor returns the output of layer x (e.g. input*w -> OutputParameterVisitor()) and ParametersVisitor returns the weights/trainable parameter (e.g. w) of layer x.
21:19 < kris2> so i can use weights = boost::apply_visitor(ParameterVisitor(), model.Model()[model.Model().size - 2]);
21:19 < zoq> kris2: ParametersVisitor not ParameterVisitor and model.Model().size() not model.Model().size
21:20 < kris2> yes.
21:20 < kris2> thanks
21:20 < zoq> kris2: here to help :)
21:22 -!- deepanshu_ [uid212608@gateway/web/] has quit [Quit: Connection closed for inactivity]
21:22 < kris2> zoq: but i think we discussed earlier the ParametersVisitor does not always give a matrix for every layer type.
21:23 < zoq> kris2: If layer x does not implement the Parameters() function the return value of ParametersVisitor is an empty matrix.
21:26 < kris2> layer1->layer2->layer3. ParametersVisitor for layer2 would give the forward layer weights
21:26 < kris2> also is there any workaround if a layer does not implement the Parameters function?
21:27 < zoq> there is: ParametersVisitor returns an empty matrix if a layer does not implement the Parameters function
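The distinction between the two visitors, including the empty-result behavior zoq describes, can be illustrated with a small std::variant sketch (hypothetical layer types; real mlpack visitors return arma::mat, not std::vector<double>):

```cpp
#include <cassert>
#include <variant>
#include <vector>

// Toy layer with both trainable parameters and a forward-pass output.
struct WeightedLayer
{
  std::vector<double> weights{0.5, 0.25};  // trainable parameters
  std::vector<double> output{1.0};         // forward-pass output
};

// Toy activation-style layer: has an output, but no parameters.
struct ActivationLayer
{
  std::vector<double> output{2.0};
};

using Layer = std::variant<WeightedLayer, ActivationLayer>;

// ParametersVisitor-like: returns the trainable parameters, or an empty
// container when the layer has none (mirroring the empty-matrix case).
struct ParamsVisitor
{
  std::vector<double> operator()(const WeightedLayer& l) const { return l.weights; }
  std::vector<double> operator()(const ActivationLayer&) const { return {}; }
};

// OutputParameterVisitor-like: returns the layer's output (e.g. input*w),
// which every layer has.
struct OutputVisitor
{
  template<typename LayerType>
  std::vector<double> operator()(const LayerType& l) const { return l.output; }
};
```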
21:43 -!- benchmark [] has joined #mlpack
21:43 -benchmark:#mlpack- MLP_BACKWARD (--input_size=50000 --hidden_size=5000 --output_size=100) | None 2.83 (old) => 2.67 (new) => -0.16 (diff) |
21:43 -benchmark:#mlpack- MLP_FORWARD (--input_size=50000 --hidden_size=5000 --output_size=100) | None 0.43 (old) => 0.43 (new) => -0.00 (diff) |
21:43 -benchmark:#mlpack- Benchmarks 2 of 2 passed.
21:43 -!- benchmark [] has quit [Client Quit]
22:12 -!- travis-ci [] has joined #mlpack
22:12 < travis-ci> mlpack/mlpack#1993 (master - 847b5ac : Marcus Edel): The build is still failing.
22:12 < travis-ci> Change view :
22:12 < travis-ci> Build details :
22:12 -!- travis-ci [] has left #mlpack []
22:28 -!- travis-ci [] has joined #mlpack
22:28 < travis-ci> mlpack/mlpack#1994 (master - e153611 : Marcus Edel): The build is still failing.
22:28 < travis-ci> Change view :
22:28 < travis-ci> Build details :
22:28 -!- travis-ci [] has left #mlpack []
22:33 -!- diehumblex [uid209517@gateway/web/] has quit [Quit: Connection closed for inactivity]
22:55 < zoq> hm, just noticed that the RecurrentNetworkTest takes 20 minutes on masterblaster and 30 seconds on savannah; it has to be correlated with the number of cores.
22:56 < rcurtin> hm, it could be that masterblaster is very heavily loaded when it runs the test?
22:57 < rcurtin> I know the sun systems were swapping and I had to reduce the number of executors
22:58 < zoq> no, I checked the jenkins build history and also tested it manually
22:59 < zoq> hm,
22:59 < zoq> 29sec
23:04 < rcurtin> I see what you mean, #2636 took 20 minutes and #2630 took 29 seconds, both on masterblaster, both when the load average on masterblaster should have been low
23:04 < rcurtin> I wonder if this is the result of high variance in, e.g., the number of iterations before the SGD convergence criterion is reached
23:05 -!- vinayakvivek [uid121616@gateway/web/] has quit [Quit: Connection closed for inactivity]
23:05 -!- kris2 [~kris@] has left #mlpack []
23:06 < zoq> In this case the number of iterations is fixed.
23:09 < rcurtin> hmmm, very strange then
23:09 < zoq>
23:09 < zoq> Can we see the load at build times?
23:09 < rcurtin> if you like, add it to #922 :)
23:10 < rcurtin> hmm, maybe you can add to the build instructions, 'cat /proc/loadavg' right before mlpack_test is run
23:11 < zoq> Maybe the load at 2630 or 2631 was high
23:15 < zoq> We could test this out; can we stop the matrix build? It's stuck right now. And then start the matrix and commit build at the same time?
23:16 < rcurtin> yeah, sure
23:16 < rcurtin> the sun builds are hanging for some reason, I haven't isolated the failure
23:17 < zoq> We just need some load at the same time we run the commit job
23:17 < rcurtin> yeah, easy to test, do you want to do that or should I?
23:17 < zoq> I can do it
23:18 < zoq> here we go :)
23:19 < rcurtin> hehe, load average 35
23:47 < rcurtin> seems like there are some issues still with dealgood where the build fails, I'll see if I can fix those
23:50 < zoq> Maybe someone will take up the "Build testing" idea; Docker could make things easier.
23:54 < rcurtin> yes, I sure hope so! (I hope someone reads these logs and sees that I am really interested in that project getting done too!)
--- Log closed Fri Mar 10 00:00:36 2017