mlpack IRC logs, 2017-05-16

Logs for the day 2017-05-16 (starts at 0:00 UTC) are shown below.

--- Log opened Tue May 16 00:00:08 2017
01:23 -!- mikeling [uid89706@gateway/web/] has joined #mlpack
01:49 -!- sumedhghaisas [81d70333@gateway/web/cgi-irc/] has quit [Quit: - A hand crafted IRC client]
02:11 -!- chenzhe [~Thunderbi@2620:101:c040:7f7:6912:554c:3703:8da9] has joined #mlpack
02:26 -!- chenzhe [~Thunderbi@2620:101:c040:7f7:6912:554c:3703:8da9] has quit [Ping timeout: 255 seconds]
02:44 -!- chenzhe [~Thunderbi@2620:101:c040:7f7:6953:243d:4b6c:65c9] has joined #mlpack
03:07 -!- chenzhe [~Thunderbi@2620:101:c040:7f7:6953:243d:4b6c:65c9] has quit [Ping timeout: 240 seconds]
05:53 -!- vivekp [~vivek@unaffiliated/vivekp] has joined #mlpack
11:59 -!- mikeling is now known as mikeling|brb
14:28 < rcurtin> zoq: I see you are working with the benchmark checkout job, it looks like you got it to unstable last night
14:29 < rcurtin> I tried to fix the XML read failure problem by stripping invalid characters from the reports with sed, but it looks like that was not successful
14:29 < rcurtin> I think the \\x1b may need to be changed to \x1b, I am not sure
14:36 < zoq> rcurtin: If I remember right I have seen the problem before, but I could resolve the issue by using the latest build from the master branch:
14:38 < rcurtin> oh, ok---are you installing that as part of the build script, or should I use pip on each of the benchmark systems to bring it up to date?
14:38 < rcurtin> although I guess if the master branch is required I would need to manually check out and install, pypi only has releases I think
14:40 < zoq> I did it as part of the build process
14:41 < zoq> but I guess if you like to use pip that could also work, not sure if the master branch is necessary, that's just what I used
14:42 < rcurtin> I think xmlrunner is already installed through the master branch, so apparently that version is not new enough
14:42 < rcurtin> I don't see where xmlrunner is getting installed... I don't see anything in the Makefile or in the Jenkins build configuration
14:42 < rcurtin> maybe I am looking in the wrong place? :)
14:43 < zoq> it's the last block in checkout - all nodes '# xmlrunner'
14:44 < rcurtin> oh! I commented all of that out
14:44 < rcurtin> I should uncomment it I suppose :)
14:44 < rcurtin> previously it built all of the libraries, so I commented those out, I guess I commented xmlrunner out too
14:45 < rcurtin> so when this build fails, I'll uncomment it and try again
14:45 < zoq> I think the build already failed because mrpt_install isn't executable ...
14:46 < rcurtin> ah, oops :)
14:48 < zoq> okay, fixed, if you like restart the job
14:51 < rcurtin> ok, let's try that...
14:51 < rcurtin> I adapted the xmlrunner build a little bit to build into libraries/xmlrunner/ not libs/xmlrunner/, and install to the same place as the other Python packages
14:52 < zoq> sounds good
14:54 < zoq> hm, we should put the build before the checks
14:55 < zoq> let me change that real quick
14:56 < rcurtin> oh! right :)
14:59 -!- shikhar [~shikhar@] has joined #mlpack
15:15 -!- mikeling|brb is now known as mikeling
16:37 < zoq> rcurtin: Looks like there is some problem with matlab on some nodes.
16:37 < zoq> rcurtin: I can take a closer look unless you have an idea.
16:51 -!- shikhar [~shikhar@] has quit [Quit: WeeChat 1.4]
17:04 < rcurtin> I bet I just have to restart the tunnels
17:05 < rcurtin> getting back from lunch now... will take care of it in a few minutes
17:30 < rcurtin> ok... we will see if it works this time :)
17:42 -!- gtank [sid147973@gateway/web/] has quit [Ping timeout: 260 seconds]
17:43 -!- gtank [sid147973@gateway/web/] has joined #mlpack
17:51 -!- chenzhe [] has joined #mlpack
17:52 -!- benchmark [] has joined #mlpack
17:52 -benchmark:#mlpack- MLP_BACKWARD (--input_size=50000 --hidden_size=5000 --output_size=100) | None -2.00 (old) => -2.00 (new) => 0.00 (diff) |
17:52 -benchmark:#mlpack- MLP_FORWARD (--input_size=50000 --hidden_size=5000 --output_size=100) | None -2.00 (old) => -2.00 (new) => 0.00 (diff) |
17:52 -benchmark:#mlpack- PERCEPTRON (--max_iterations 1000) | iris 0.02 (old) => 0.03 (new) => 0.01 (diff) | oilspill 0.14 (old) => 0.14 (new) => -0.00 (diff) | ecoli 0.03 (old) => 0.03 (new) => 0.00 (diff) |
17:52 -benchmark:#mlpack- NMF (--rank 6 --seed 42 --update_rules multdiv) | isolet 0.00 (old) => -1.00 (new) => -1.00 (diff) |
17:52 -benchmark:#mlpack- NMF (--rank 6 --seed 42 --update_rules als) | isolet 48.79 (old) => 46.43 (new) => -2.37 (diff) |
17:52 -benchmark:#mlpack- NMF (--rank 6 --seed 42 --update_rules multdist) | isolet 0.00 (old) => -1.00 (new) => -1.00 (diff) |
17:52 -benchmark:#mlpack- Benchmarks 7 of 8 passed.
17:52 -!- benchmark [] has quit [Client Quit]
17:54 -!- nish21 [75c8755c@gateway/web/freenode/ip.] has joined #mlpack
17:56 -!- chenzhe [] has quit [Ping timeout: 272 seconds]
18:33 -!- vivekp [~vivek@unaffiliated/vivekp] has quit [Ping timeout: 240 seconds]
19:13 < zoq> rcurtin: Looks like you solved the issue :)
19:37 < rcurtin> :)
20:38 -!- sumedhghaisas [81d70337@gateway/web/cgi-irc/] has joined #mlpack
20:54 < rcurtin> hey sumedh, I was thinking of what you were saying about iceland when you were there, that it was always freezing cold and rainy
20:54 < rcurtin> I was wishing that was the case here as I walked around in the early summer heat :(
21:17 < sumedhghaisas> rcurtin: Hey Ryan, you are in Iceland right now?
21:18 < sumedhghaisas> it must be always sunny this time
21:18 < rcurtin> no, not in iceland, I was wishing that I was :)
21:18 < rcurtin> just in hot Atlanta
21:18 < sumedhghaisas> ohh worse... haha
21:19 < sumedhghaisas> Edinburgh is still cold
21:20 < sumedhghaisas> but now I see some sunny days here
21:20 < rcurtin> I looked at the temperature for today, it looks very nice to me
21:21 < sumedhghaisas> yeah... today it was beautiful. till like 7
21:21 < sumedhghaisas> now its getting colder
21:22 < sumedhghaisas> thats sad... cause I gave an exam today and still studying for the next one... I hate the weather right now. everyone is out enjoying while I am reading stupid neuroscience encoding and decoding
21:23 < rcurtin> hehe, and I guess by the time that you have free time, it won't be nice anymore
21:24 < sumedhghaisas> I am hoping it is... I am taking a week trip to spain before the coding starts
21:24 < rcurtin> nice, where in spain are you planning to go?
21:24 < rcurtin> I went to Granada for NIPS one year, it was beautiful
21:25 < sumedhghaisas> Vigo... one of my friend here has a big house right next to the beach there
21:25 < sumedhghaisas> no travelling
21:25 < sumedhghaisas> just 7 days of beach fun
21:25 < rcurtin> oh wow, that looks like it will be really nice
21:26 < sumedhghaisas> yeah... thats going to be a sweet break. All this math is making me crazy. I am dreaming gaussian and poisson distributions now
21:27 < sumedhghaisas> wow... granada is so beautiful ... its like a city on a hill
21:31 < rcurtin> yeah, unfortunately I spent most of the time locked up in a hotel room working on mlpack :(
21:31 < rcurtin> that was the conference where it was announced in 2011, and when I arrived there the library was nowhere near ready...
21:35 < sumedhghaisas> ohh... work always comes in the middle. Any summer plans?
21:35 < sumedhghaisas> apart from Gsoc :P
21:42 < rcurtin> ICML is in Sydney, so I will be traveling there at the end of July and taking some time to explore :)
21:42 < rcurtin> but I will have to answer my GSoC emails too :)
21:43 < sumedhghaisas> I also wanted to ask you a C++ question... sorry for ruining the awesome discussion... haha. So I am spending whatever time I get working on the neural network architecture we have. Currently all the layers are stored in a vector.
21:43 < sumedhghaisas> but technically we know the layers at compile time
21:43 < rcurtin> it's ok, let's see if I can answer it :)
21:43 < sumedhghaisas> remember the code I sent you about the architecture?
21:43 < rcurtin> hmm, that was a while ago, let me pull it up...
21:43 < rcurtin> ok, I have it up
21:44 < sumedhghaisas> it compiles the layers at compile time
21:44 < sumedhghaisas> but its too much to get it in this architecture..
21:44 < rcurtin> right, you specify the layers A, B, D as NeuralNetwork<A, B, D>
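[Editor's note: the compile-time approach described here could be sketched roughly as below. The layer classes and member names are illustrative assumptions, not mlpack's actual API; the point is only that a `NeuralNetwork<A, B, D>` template fixes the layer list at compile time, so every `Forward()` call can be inlined with no virtual dispatch.]

```cpp
#include <cstddef>
#include <tuple>
#include <utility>

// Hypothetical layer types for illustration only.
struct Linear { double Forward(double x) const { return 2.0 * x; } };
struct ReLU   { double Forward(double x) const { return x > 0.0 ? x : 0.0; } };

// Layers are template parameters, so the full network type is known at
// compile time; the layers live in a std::tuple rather than a vector.
template<typename... Layers>
class NeuralNetwork
{
 public:
  double Forward(double input)
  {
    return ForwardImpl(input, std::index_sequence_for<Layers...>{});
  }

 private:
  template<std::size_t... I>
  double ForwardImpl(double x, std::index_sequence<I...>)
  {
    // C++17 fold over the comma operator: feed x through each layer in order.
    ((x = std::get<I>(layers).Forward(x)), ...);
    return x;
  }

  std::tuple<Layers...> layers;
};
```

As the discussion below notes, the cost of this design is that the network's type encodes its entire structure, which is why a `CreateNetwork(...)` helper returning an `auto`-deducible object comes up as a way to hide the long type from the user.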
21:45 < sumedhghaisas> yup... which can also be done without bothering the user by introducing a create network function
21:45 < sumedhghaisas> But how much do you think that will affect the performance?
21:45 < sumedhghaisas> cause changing the architecture will involve lot of work now...
21:46 < rcurtin> I have not tried, but I think it's difficult for the user if they have to deal with these very long types to address their network
21:46 < sumedhghaisas> ohh no that can be fixed...
21:46 < rcurtin> I think that the visitor paradigm and boost::variant<> may work in such a way that there is no extra overhead
21:46 < rcurtin> but I am not sure about that, I have not checked
21:47 < sumedhghaisas> you mean the extra overhead of inheritance?
21:47 < sumedhghaisas> or the vector?
21:47 < sumedhghaisas> cause it also involves ... table lookup call
21:47 < rcurtin> I am not sure, I haven't looked into it
21:48 < rcurtin> in either case the first thing to do would be an empirical comparison, to see if it is worth digging in deeper
21:48 < rcurtin> the boost variant code and the visitor code is very complex, so it might behave in unexpected ways and provide better performance than expected
21:48 < sumedhghaisas> for the user ... the code I sent also has a function to create a network which return the network object created... which the user can auto it
21:48 < sumedhghaisas> yes... thats what I am trying to do... but how?
21:49 < rcurtin> you could build two networks of identical structure, then feed points through them and time that
21:49 < rcurtin> one network with the mlpack ANN code, one network with what you have built
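[Editor's note: the empirical comparison suggested here only needs a small wall-clock harness; something like the helper below (names are illustrative, not part of mlpack) run against both network implementations on identical inputs.]

```cpp
#include <chrono>

// Time `iterations` calls of an arbitrary workload in milliseconds, so the
// same forward pass can be measured through both implementations.
template<typename Fn>
double TimeMs(Fn&& fn, int iterations)
{
  const auto start = std::chrono::steady_clock::now();
  for (int i = 0; i < iterations; ++i)
    fn();
  const auto stop = std::chrono::steady_clock::now();
  return std::chrono::duration<double, std::milli>(stop - start).count();
}
```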
21:52 < sumedhghaisas> yes but for that I have to build the entire framework around my framework ... exactly like the current framework... haha... okay so thats the only option
21:53 < sumedhghaisas> okay wait... now I am confused how variant works here...
21:53 < sumedhghaisas> I thought its just a better way to iterate the vector
21:53 < rcurtin> yeah... I am not certain of this, it has been a while since I have dug into variant
21:54 < rcurtin> but what I remember is that I was incredibly impressed and variant does things I did not think were possible
21:54 < sumedhghaisas> yeah... its compile-time tagged unions
21:54 < sumedhghaisas> impressive
21:55 < rcurtin> yeah
21:55 < rcurtin> in the case of the mlpack ANN code, I am not sure exactly how optimized the result is
21:55 < rcurtin> maybe zoq knows, but I am not 100% sure, I have not tested it myself
21:56 < sumedhghaisas> okay... anyways I have couple other changes to do before I get to seriously consider it. Talked to zoq abou
21:56 < sumedhghaisas> *about it
21:57 < sumedhghaisas> also, that softmax thing? I checked many codes online ... no one implements the reduced form... why?
21:57 < rcurtin> I have no idea, I haven't had any time to look into it
21:58 < sumedhghaisas> and from the point of view of optimization... the current form is linear invariant right... the reduced form is not... so it should be better for optimization right?
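[Editor's note: the invariance mentioned here is softmax's shift invariance, softmax(x) == softmax(x + c) for any constant c, which is what makes the standard form overparameterized; the "reduced form" presumably removes that redundancy by fixing one input to zero. A minimal sketch demonstrating the invariance (standard library only, not mlpack's implementation):]

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Standard (overparameterized) softmax. Subtracting the max before
// exponentiating is numerically safe precisely because of the shift
// invariance discussed above.
std::vector<double> Softmax(std::vector<double> x)
{
  const double maxVal = *std::max_element(x.begin(), x.end());
  double sum = 0.0;
  for (double& v : x) { v = std::exp(v - maxVal); sum += v; }
  for (double& v : x) v /= sum;
  return x;
}
```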
22:00 < rcurtin> I am not sure, I have not had any time to think about it in quite some time
22:01 < sumedhghaisas> ahh me too... I am so looking forward to summer. I want to get back to implementing things.
22:02 < rcurtin> I know the feeling, in many ways I haven't been able to write as much code as I like :)
22:07 < rcurtin> ok, I think I am headed out for now... talk to you later!
22:18 < sumedhghaisas> yeah... talk to you later
22:39 -!- nish21 [75c8755c@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
22:52 -!- mentekid [~yannis@] has quit [Quit: Leaving.]
22:52 -!- mentekid [] has joined #mlpack
23:00 -!- mikeling [uid89706@gateway/web/] has quit [Quit: Connection closed for inactivity]
--- Log closed Wed May 17 00:00:09 2017