mlpack IRC logs, 2017-03-08

Logs for the day 2017-03-08 (starts at 0:00 UTC) are shown below.

--- Log opened Wed Mar 08 00:00:33 2017
00:03 < kris1> zoq: Parameters are the weights of a layer, right? So it's a matrix
00:03 < kris1> so passing model.Model()[1] should return a matrix, not a row vector.
00:10 < zoq> kris1: right, the problem is that for some layers Parameters does not return the internal weight matrix; instead it returns all trainable parameters. I guess an easy solution would be to introduce a function that returns the input and output size. Another solution would be to run the network for a single iteration and use the output to figure out the input and output size, but that would be slower. I have to
00:10 < zoq> think about it.
00:14 < zoq> Until now there was no need to provide a function to get the input and output size.
00:16 < zoq> What I don't like about the idea is that you have to implement the functions for each layer that implements the Parameters function.
00:18 < zoq> Right now, a minimal trainable layer has to provide Forward, Backward, Gradient and Parameters and non-trainable layer just Forward and Backward.
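The minimal layer contract zoq describes can be sketched as follows. This is a hypothetical, simplified stand-in (std::vector instead of mlpack's Armadillo types, made-up class name): a trainable layer provides Forward, Backward, Gradient and Parameters; dropping Gradient and Parameters would leave a non-trainable layer.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of the minimal trainable-layer interface; mlpack's real
// layers use Armadillo matrices, this stand-in uses std::vector for brevity.
class LinearSketch
{
 public:
  LinearSketch(const size_t inSize, const size_t outSize) :
      inSize(inSize), outSize(outSize), weights(inSize * outSize, 0.0) { }

  // Forward pass: output = W * input (no bias, for brevity).
  void Forward(const std::vector<double>& input, std::vector<double>& output)
  {
    output.assign(outSize, 0.0);
    for (size_t j = 0; j < outSize; ++j)
      for (size_t i = 0; i < inSize; ++i)
        output[j] += weights[j * inSize + i] * input[i];
  }

  // Backward pass: error w.r.t. the input, g = W^T * gy.
  void Backward(const std::vector<double>& gy, std::vector<double>& g)
  {
    g.assign(inSize, 0.0);
    for (size_t j = 0; j < outSize; ++j)
      for (size_t i = 0; i < inSize; ++i)
        g[i] += weights[j * inSize + i] * gy[j];
  }

  // Gradient of the weights: gradient[j * inSize + i] = gy[j] * input[i].
  void Gradient(const std::vector<double>& input,
                const std::vector<double>& gy,
                std::vector<double>& gradient)
  {
    gradient.assign(inSize * outSize, 0.0);
    for (size_t j = 0; j < outSize; ++j)
      for (size_t i = 0; i < inSize; ++i)
        gradient[j * inSize + i] = gy[j] * input[i];
  }

  // All trainable parameters, flattened. This is the function whose return
  // shape differs between layers, which is what makes recovering the input
  // and output size from it alone tricky.
  std::vector<double>& Parameters() { return weights; }

 private:
  size_t inSize, outSize;
  std::vector<double> weights;
};
```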
00:27 < kris1> zoq: did you get a chance to look at the Nesterov accelerated gradient PR?
00:28 < kris1> I think we should implement it as a separate method.
00:29 < zoq> Not today, I'll take a look at it tomorrow. Instead of an extra policy?
00:29 < kris1> my future work would be to add apply_momentum to it, which would provide the vanilla momentum update to every optimizer method
00:30 < kris1> Yes, I am not in favor of the extra policy because I looked at the implementation of Caffe; they have a separate nag method
00:30 < zoq> that's the one right?
00:30 < kris1> Yes
00:32 < kris1> vanilla nag would take the gradients that update parameters for any optimizer given to it
00:32 < zoq> I think providing another policy would minimize code duplication, without a performance cost. But I'll take a look at the PR tomorrow.
00:33 < kris1> that's what I want to do in the future. Though every optimizer could have a specialized template for their version of nag as well
00:33 < kris1> Oh yes, no problem, I just wanted to discuss the design with you.
00:34 < kris1> zoq: I will look into what you said about the parameters.
00:35 < zoq> But if we provide nag as a policy, wouldn't that allow any other optimizer to use it?
00:36 < zoq> Isn't that what you meant, implement nag as a separate method?
00:36 < kris1> Oh yes, you mean to say that every policy could implement nag the way they want, and we could have a template base class.
00:37 < zoq> I think we mean the same thing :)
00:37 < kris1> Yes ....:)
00:38 -!- brn_ [~bruno@] has joined #mlpack
00:38 < zoq> Let me take a look at the PR tomorrow and we can discuss over there, what do you think?
00:39 < kris1> Great.
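The policy design being discussed can be sketched as follows (hypothetical names, not mlpack's actual optimizer API): SGD takes an update policy as a template parameter, so the vanilla update and a momentum-style update (the building block behind NAG) become interchangeable policies rather than separate optimizers, which is what keeps code duplication down.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of the "update rule as a template policy" design.
struct VanillaUpdate
{
  void Update(std::vector<double>& x, const std::vector<double>& grad,
              const double stepSize)
  {
    for (size_t i = 0; i < x.size(); ++i)
      x[i] -= stepSize * grad[i];
  }
};

struct MomentumUpdate
{
  std::vector<double> velocity;
  double momentum = 0.9;

  void Update(std::vector<double>& x, const std::vector<double>& grad,
              const double stepSize)
  {
    velocity.resize(x.size(), 0.0);
    for (size_t i = 0; i < x.size(); ++i)
    {
      // Classical momentum; a Nesterov variant would evaluate the gradient
      // at the look-ahead point instead.
      velocity[i] = momentum * velocity[i] - stepSize * grad[i];
      x[i] += velocity[i];
    }
  }
};

template<typename UpdatePolicy>
class SGD
{
 public:
  SGD(const double stepSize, const size_t maxIterations) :
      stepSize(stepSize), maxIterations(maxIterations) { }

  // Minimize a function given its gradient, starting from x.
  template<typename GradientF>
  void Optimize(GradientF gradF, std::vector<double>& x)
  {
    for (size_t it = 0; it < maxIterations; ++it)
      updatePolicy.Update(x, gradF(x), stepSize);
  }

 private:
  double stepSize;
  size_t maxIterations;
  UpdatePolicy updatePolicy;
};
```

For example, `SGD<MomentumUpdate> sgd(0.05, 200);` minimizes f(x) = x^2 from x = 5 without any momentum-specific code inside SGD itself.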
00:52 < kris1> zoq: okay, now I get your point regarding the parameters. So I would have to implement an input method for every layer, which would be similar to the InputWidth() function that the conv layer has, right?
00:53 < zoq> kris1: Yeah, it's like the InputWidth() function for the conv layer.
01:05 -!- mikeling [uid89706@gateway/web/] has joined #mlpack
02:32 < kris1> zoq: Every layer has InputParameter(), but we never really set these input parameters. I think we could use the args of the Forward() function to initialize these values.
02:35 -!- brn_ [~bruno@] has quit [Ping timeout: 240 seconds]
02:37 -!- brn_ [~bruno@] has joined #mlpack
02:40 < zoq> kris1: I thought about the solution, but using the InputParameter to specify the input and output size is not really intuitive for a user. But I agree, that would reduce the minimal function set.
02:42 < kris1> maybe I could add new variables then, like fanin and fanout, and FanIn() { return fanin; }. But these would have to be initialized inside the Forward function in every layer.
02:44 < kris1> but I would have to do this for every Forward function in every layer. Is there a workaround for that?
02:44 < kris1> zoq:
02:45 < zoq> I haven't really thought about a better solution, but adding InputSize() and OutputSize() to each layer that implements the Parameters() function is definitely an idea that works.
02:46 < zoq> The input and output size is known at construction time, so we can use the constructor initialization list to set the parameters.
02:48 < zoq> e.g. for the linear layer InputSize returns inSize and OutputSize returns outSize
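zoq's suggestion could look like this hypothetical sketch (names assumed, not mlpack's actual code): the sizes arrive in the constructor anyway, so the initialization list stores them and the accessors just read them back.

```cpp
#include <cstddef>

// Hypothetical linear layer exposing its sizes, as proposed above: the
// constructor initialization list stores inSize/outSize, and
// InputSize()/OutputSize() simply return the stored values.
class Linear
{
 public:
  Linear(const size_t inSize, const size_t outSize) :
      inSize(inSize), outSize(outSize) { }

  // For the linear layer, InputSize() returns inSize ...
  size_t InputSize() const { return inSize; }
  // ... and OutputSize() returns outSize.
  size_t OutputSize() const { return outSize; }

 private:
  size_t inSize, outSize;
};
```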
02:52 < kris1> zoq: but that is not true for all layers.
02:52 < kris1> it would be better if we used the args of the Forward function.
02:53 < kris1> e.g. Sequential doesn't have a constructor taking inputsize and outputsize
02:57 < zoq> Yeah, we implement the InputSize and OutputSize functions only for a layer where we know the size. The sequential layer is special because it's a container, like the FFN class, that can hold different layers. It's meant to be used in a case where you have to split into two branches.
02:59 < zoq> Split:
03:02 < kris1> oh okay. I see.... makes sense.... but I could give you another example, like Add: how would we find out the fan-in for that?
03:05 -!- aditya_ [~aditya@] has joined #mlpack
03:06 -!- drewtran [4b8e60c6@gateway/web/freenode/ip.] has joined #mlpack
03:07 < zoq> kris1: That is a good point, and somewhat tricky, I agree. So the inputSize of Add is the outputSize of the previous layer, right? If Add provides Parameters() but does not implement the InputSize function, we could assume that the InputSize is the outputSize of the previous layer. I haven't thought this through, but that could be a solution, what do you think?
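The fallback zoq proposes can be sketched as a small size-resolution pass over the layer chain (hypothetical LayerInfo/ResolveSizes names; 0 stands in for "the layer cannot report this size"):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical per-layer size record; 0 means the layer (like Add) cannot
// report that size itself.
struct LayerInfo
{
  size_t inputSize;
  size_t outputSize;
};

// Resolve unknown sizes in a feed-forward chain: an unknown input size is
// assumed to equal the previous layer's output size, and a shape-preserving
// layer's unknown output size equals its (now resolved) input size.
std::vector<LayerInfo> ResolveSizes(std::vector<LayerInfo> layers)
{
  for (size_t i = 1; i < layers.size(); ++i)
  {
    if (layers[i].inputSize == 0)
      layers[i].inputSize = layers[i - 1].outputSize;
    if (layers[i].outputSize == 0)
      layers[i].outputSize = layers[i].inputSize;
  }
  return layers;
}
```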
04:04 -!- aditya_ [~aditya@] has quit [Ping timeout: 240 seconds]
04:10 -!- thyrix [2d4c4a21@gateway/web/freenode/ip.] has joined #mlpack
04:25 -!- brn_ [~bruno@] has quit [Quit: WeeChat 1.7]
04:26 -!- brn_ [~bruno@] has joined #mlpack
04:27 -!- brn_ [~bruno@] has quit [Client Quit]
04:27 -!- delfo_ [~bruno@] has joined #mlpack
04:28 -!- delfo_ [~bruno@] has quit [Client Quit]
04:29 -!- delfo_ [~bruno@] has joined #mlpack
04:29 -!- delfo_ [~bruno@] has quit [Client Quit]
04:29 -!- thyrix [2d4c4a21@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
04:30 -!- delfo_ [~bruno@] has joined #mlpack
05:09 -!- kris1 [~kris@] has quit [Quit: Leaving.]
05:14 -!- pretorium [~kost2906@] has joined #mlpack
05:25 -!- delfo_ [~bruno@] has quit [Ping timeout: 240 seconds]
05:25 -!- delfo_ [~bruno@] has joined #mlpack
05:53 -!- delfo_ [~bruno@] has quit [Quit: WeeChat 1.7]
06:57 < mikeling> Hello, is anyone still around? :)
07:00 < mikeling> I got an error that looks like the compiler failed to find my function declaration and returns an error like: But I have no idea why it happened, because I believe I declared it appropriately. Here is my gist for the patch
07:05 -!- bhairav [6b816b79@gateway/web/freenode/ip.] has joined #mlpack
07:14 -!- govg [~govg@unaffiliated/govg] has quit [Ping timeout: 240 seconds]
07:25 -!- bhairav [6b816b79@gateway/web/freenode/ip.] has quit [Quit: Page closed]
07:39 -!- drewtran [4b8e60c6@gateway/web/freenode/ip.] has quit [Quit: Page closed]
07:50 -!- daivik [73f91219@gateway/web/freenode/ip.] has joined #mlpack
07:51 -!- daivik [73f91219@gateway/web/freenode/ip.] has quit [Client Quit]
07:53 -!- vinayakvivek [uid121616@gateway/web/] has joined #mlpack
08:26 -!- daivik [~daivik@] has joined #mlpack
08:30 -!- daivik [~daivik@] has left #mlpack ["WeeChat 1.4"]
08:31 -!- kris2 [~kris@] has joined #mlpack
08:50 -!- govg [~govg@unaffiliated/govg] has joined #mlpack
08:55 -!- pretorium [~kost2906@] has quit [Read error: Connection reset by peer]
09:10 -!- daivik [daivik@gateway/shell/xshellz/x-urygosigvmmbsthk] has joined #mlpack
09:12 -!- daivik [daivik@gateway/shell/xshellz/x-urygosigvmmbsthk] has left #mlpack []
09:17 -!- anubhavb1_ [728fbb0a@gateway/web/freenode/ip.] has joined #mlpack
09:19 -!- govg [~govg@unaffiliated/govg] has quit [Ping timeout: 240 seconds]
09:20 -!- govg [~govg@unaffiliated/govg] has joined #mlpack
09:25 -!- kris3 [~kris@] has joined #mlpack
09:28 -!- kris2 [~kris@] has quit [Ping timeout: 260 seconds]
09:34 -!- kris3 [~kris@] has quit [Ping timeout: 240 seconds]
09:42 -!- kris2 [~kris@] has joined #mlpack
10:22 -!- govg [~govg@unaffiliated/govg] has quit [Ping timeout: 260 seconds]
10:45 -!- vinayakvivek [uid121616@gateway/web/] has quit [Quit: Connection closed for inactivity]
11:06 -!- kartik_ [73f840a9@gateway/web/freenode/ip.] has joined #mlpack
11:07 -!- thyrix [2d4c4a21@gateway/web/freenode/ip.] has joined #mlpack
11:08 < kartik_> hi zoq, I am stuck with the cma-es implementation .. we are optimising a black box function for, let's say, the Super Mario game..
11:09 < kartik_> now we have as input a JSON string of {0,1,2,3} values, 169 of them, and we have to find its output, which are 5 button values according to Bang's code..
11:10 < kartik_> but in CMA-ES we have a black box function and, given its dimensions, we are able to find the covariance matrix ..
11:11 < kartik_> then 1st, how to use the covariance matrix? and 2nd, am I working on single-variate or multivariate CMA-ES for Super Mario ..
11:11 < kartik_> thanks
11:14 -!- anubhavb1_ [728fbb0a@gateway/web/freenode/ip.] has quit [Quit: Page closed]
11:22 -!- aditya_ [~aditya@] has joined #mlpack
12:05 < zoq> kartik_: Hello, first of all the CMA-ES works on topologically fixed neural networks, so for the sake of simplicity let's say we have a two-layer neural network with 169 inputs and 5 outputs.
12:05 < zoq> kartik_: The parameters of the networks are sampled from a multivariate Gaussian distribution, the next step is to evaluate the network and to calculate the fitness, by using the input from the game-screen; at each step, the network outputs one of the 5 actions.
12:05 < zoq> kartik_: When all networks have been evaluated, the mean of the multivariate Gaussian distribution is recalculated as a weighted average of the networks with the highest fitness.
12:06 < zoq> kartik_: At the same time, you update your covariance matrix (at time = 0 it's the identity matrix of size N x N, where N is the number of parameters/network weights).
12:06 < zoq> kartik_: The covariance matrix is a bias to move in the direction of the most valuable network. Take a look at: for the exact equation to update the covariance matrix.
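zoq's sample-evaluate-reweight loop can be condensed into a toy strategy. Everything below is illustrative (ToyES and Sphere are made-up names), and the covariance adaptation that gives CMA-ES its name is deliberately left out: the sampling distribution here stays fixed at a scaled identity covariance.

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <random>
#include <vector>

// Toy (mu/lambda) evolution strategy in the spirit of the description above:
// sample lambda parameter vectors from a Gaussian around the mean, evaluate
// the fitness of each, and recompute the mean as the average of the mu best.
// The real CMA-ES additionally adapts the full N x N covariance matrix each
// generation; see Hansen's tutorial for those update equations.
std::vector<double> ToyES(double (*fitness)(const std::vector<double>&),
                          std::vector<double> mean, const size_t lambda,
                          const size_t mu, const size_t generations)
{
  std::mt19937 gen(42);
  std::normal_distribution<double> normal(0.0, 0.3);

  for (size_t g = 0; g < generations; ++g)
  {
    // Sample lambda candidate parameter vectors around the current mean.
    std::vector<std::vector<double>> pop(lambda, mean);
    for (auto& x : pop)
      for (auto& xi : x)
        xi += normal(gen);

    // Rank the candidates by fitness (lower is better here).
    std::sort(pop.begin(), pop.end(),
        [&](const std::vector<double>& a, const std::vector<double>& b)
        { return fitness(a) < fitness(b); });

    // Move the mean to the average of the mu fittest candidates.
    mean.assign(mean.size(), 0.0);
    for (size_t k = 0; k < mu; ++k)
      for (size_t i = 0; i < mean.size(); ++i)
        mean[i] += pop[k][i] / mu;
  }
  return mean;
}

// Example black-box objective: the sphere function, minimum at the origin.
double Sphere(const std::vector<double>& x)
{
  return std::inner_product(x.begin(), x.end(), x.begin(), 0.0);
}
```

For instance, `ToyES(Sphere, {5.0, -3.0}, 20, 5, 100)` drives the mean close to the origin; in the Mario setting the fitness would instead come from playing an episode with the sampled network weights.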
12:12 < kartik_> so zoq, in Bang's NEAT and CNE this was achieved by perturbation, crossover, mutation and speciation, which here is done by moving the mean to a new location close to the optimum.. and recalculating the covariance?
12:13 < kartik_> also at the new location using a Gaussian distribution to sample out the new offspring .. denoted by lambda?
12:16 < kartik_> I've already read that wonderful tutorial .. you suggested it to me in January, I guess.. now I have started working on this project again this month.. apologies for the delay ..
12:26 -!- Vishal_ [0e619a5d@gateway/web/freenode/ip.] has joined #mlpack
12:26 < kartik_> that cleared everything up for me except one thing.. the covariance is a square matrix over the weights, and for the given neural network what dimension should it be?
12:29 < kartik_> also I would make this compatible with Bang Lui's neural net implementation .. was just thinking when his implementation will be merged into the main repository .. :D he has documented it superbly
12:37 -!- Vishal__ [0e619a5d@gateway/web/freenode/ip.] has joined #mlpack
12:37 < zoq> kartik_: Yes; NEAT doesn't work on fixed-topology networks, instead it will find an optimal topology and parameter set using offspring, mutation and crossover. So you might start with a single-layer neural network and could end with a 5-layer neural network, where some of the units from layer x are connected with units from another layer.
12:37 < zoq> kartik_: If I remember right lambda is the number of populations/networks sampled from the multivariate Gaussian distribution.
12:38 < zoq> kartik_: The covariance matrix is of size N x N, where N = parameter size = number of network weights.
12:38 -!- Vishal__ [0e619a5d@gateway/web/freenode/ip.] has left #mlpack []
12:38 < zoq> kartik_: Making it compatible with Bang's code, is a great idea, that way you could reuse part of his work. But we somehow introduced a bug, so that the method wasn't able to solve the Mario task, it's kinda strange because in an earlier stage the network was able to solve all implemented tasks e.g. CartPole, MountainCar, XOR, etc. I already started to look into the issue and I guess solved some issues, but
12:38 < zoq> got distracted by some other things. It's definitely on my list to finish this.
12:41 < kartik_> yes its for the number of population ..
12:41 < kartik_> what kind of bug?
12:42 < kartik_> i used the code myself to check mario implementation and it worked fine
12:44 < kartik_> it took me 16 hours .. and i have done it back in december .. i even blogged about it
12:51 < kartik_> and zoq, I'm still not completely clear on the dimension of the covariance matrix.. but I guess I'll figure that out ..
12:51 < zoq> kartik_: At some point in the optimization process, the method gets stuck until the point where only one population is left. The method should be able to resolve the issue, but somehow the function that calculates the similarity between genomes isn't able to produce another population.
12:51 < zoq> kartik_: oh, okay, do you remember which version you used for the test? I remember that we tested multiple instances at the same time, for like 7 days ...
12:53 < kartik_> oh no.. it was in december.. i reinstalled latest ubuntu after it and it got removed..
12:54 < zoq> kartik_: oh okay, maybe I should give it another try, maybe it got solved on its own :)
12:54 < zoq> kartik_: About the covariance matrix: let's say your network has one layer with two inputs and two outputs and two hidden units, you have 8 weights so N = 8.
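zoq's weight count can be checked with a small helper (illustrative; fully connected layers without biases assumed, so N is the sum of inSize * outSize over consecutive layer widths, and the CMA-ES covariance matrix is N x N):

```cpp
#include <cstddef>
#include <vector>

// Count the weights of a fully connected network without biases, given the
// widths of its layers (inputs, hidden layers, outputs).
size_t WeightCount(const std::vector<size_t>& layerSizes)
{
  size_t n = 0;
  for (size_t i = 0; i + 1 < layerSizes.size(); ++i)
    n += layerSizes[i] * layerSizes[i + 1];
  return n;
}
```

With two inputs, two hidden units and two outputs this gives 2*2 + 2*2 = 8, matching the example above; the Mario network with 169 inputs feeding 5 outputs directly would give 845.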
12:55 < kartik_> yup.. oh ohkae .. that was silly of me
12:55 < kartik_> thanks ..
12:56 < kartik_> ill ping u after some code on cmaes and also then will try for the bug fix..
12:56 < kartik_> thanks for everything..
13:00 < zoq> Here to help, btw. we still have the video recording where the method was able to solve the Mario task:
13:07 -!- thyrix [2d4c4a21@gateway/web/freenode/ip.] has quit [Quit: Page closed]
13:20 -!- chvsp [cb6ef208@gateway/web/freenode/ip.] has joined #mlpack
13:29 < kartik_> finally got downloaded :D .. that was cool ..
13:29 -!- kartik_ [73f840a9@gateway/web/freenode/ip.] has quit [Quit: Page closed]
13:48 -!- Vishal_ [0e619a5d@gateway/web/freenode/ip.] has quit [Quit: Page closed]
14:18 -!- chvsp [cb6ef208@gateway/web/freenode/ip.] has quit [Quit: Page closed]
14:46 -!- chvsp [cb6ef215@gateway/web/freenode/ip.] has joined #mlpack
15:05 -!- thyrix [2d4c4a21@gateway/web/freenode/ip.] has joined #mlpack
15:09 < chvsp> zoq: I was writing the code for the BatchNorm layer. I couldn't understand what this line means in the Serialize function: ar & data::CreateNVP(). Could you please help me out with this? Thanks
15:10 < rcurtin> chvsp: that's boost::serialization (or a wrapper around it)
15:10 < rcurtin> if you go read the boost serialization documentation, it should make sense
15:10 < rcurtin> the only thing to keep in mind after that,
15:11 < rcurtin> is that data::CreateNVP() is a special mlpack replacement for BOOST_SERIALIZATION_NVP(),
15:11 < rcurtin> and mlpack uses a Serialize() function instead of serialize()
15:11 < rcurtin> if you want more details on what is going on there, after you read the serialization docs, see src/mlpack/core/data/serialization_shim.hpp
15:12 < rcurtin> (but be warned, that file is kind of crazy)
15:13 < chvsp> rcurtin: Boost serialisation- Will look into it thanks.
15:13 < chvsp> serialization_shim.hpp - Will try, thanks for the warning though... :)
15:14 < rcurtin> yeah, no huge need to understand these things in detail, just the basics should suffice to understand what it does
15:15 < chvsp> rcurtin: Another thing I wanted to know: for BatchNorm there is a different forward pass for train and test runs. I couldn't get any ideas on how to carry that out.
15:16 < rcurtin> I'm not particularly familiar with that code, so unfortunately I can't say for that one
15:16 < rcurtin> I know that sometimes your training passes will be different than your test passes
15:17 < rcurtin> like i.e. with dropout, where you perform dropout at training time but not test time
15:19 -!- thyrix [2d4c4a21@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
15:21 < chvsp> Cool! I went through the dropout code. There is this parameter called deterministic and we can have different passes conditioned on this variable. I will try to include this in my code.
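The pattern chvsp found in the dropout code can be sketched like this (hypothetical class; inverted-dropout behavior assumed). The same `deterministic` flag would let a BatchNorm layer switch between batch statistics at training time and stored running statistics at test time.

```cpp
#include <random>
#include <vector>

// Hypothetical dropout-style layer with train/test forward passes selected
// by a `deterministic` flag, as in the code discussed above.
class DropoutSketch
{
 public:
  DropoutSketch(const double ratio) : ratio(ratio), deterministic(false) { }

  void Forward(const std::vector<double>& input, std::vector<double>& output)
  {
    output = input;
    if (deterministic)
      return;  // Test time: pass the input through unchanged.

    // Training time: drop each unit with probability `ratio` and scale the
    // survivors by 1 / (1 - ratio), so the expected activation is unchanged.
    std::bernoulli_distribution drop(ratio);
    for (auto& v : output)
      v = drop(gen) ? 0.0 : v / (1.0 - ratio);
  }

  bool& Deterministic() { return deterministic; }

 private:
  double ratio;
  bool deterministic;
  std::mt19937 gen{42};
};
```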
15:21 -!- govg [~govg@unaffiliated/govg] has joined #mlpack
15:46 -!- chvsp [cb6ef215@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
15:47 -!- shihao [80b4953b@gateway/web/freenode/ip.] has joined #mlpack
16:29 -!- bobby_ [0e8bf2c3@gateway/web/freenode/ip.] has joined #mlpack
16:31 -!- vinayakvivek [uid121616@gateway/web/] has joined #mlpack
16:36 -!- bobby_ [0e8bf2c3@gateway/web/freenode/ip.] has quit [Quit: Page closed]
16:40 -!- biswesh [0e8b9b18@gateway/web/freenode/ip.] has joined #mlpack
16:41 -!- biswesh [0e8b9b18@gateway/web/freenode/ip.] has quit [Client Quit]
17:08 -!- shihao [80b4953b@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
17:37 -!- mikeling [uid89706@gateway/web/] has quit [Quit: Connection closed for inactivity]
17:48 -!- dineshraj01 [~dinesh@] has joined #mlpack
17:48 < kris2> zoq: Could you look at this
17:49 -!- arunreddy [~arunreddy@] has quit [Quit: WeeChat 1.4]
17:53 -!- tempname [1b05353c@gateway/web/freenode/ip.] has joined #mlpack
17:57 -!- light_ [0e8b55c4@gateway/web/freenode/ip.] has joined #mlpack
18:04 < zoq> kris2: See my comments.
18:11 < kris2> zoq: got it thanks. I was misinterpreting template <class LayerType, class... Args> void Add(Args... args) function.
18:19 -!- tempname [1b05353c@gateway/web/freenode/ip.] has quit [Quit: Page closed]
18:19 -!- chvsp [cb6ef207@gateway/web/freenode/ip.] has joined #mlpack
18:21 -!- light_ [0e8b55c4@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
18:55 -!- daivik [daivik@gateway/shell/xshellz/x-sbonfafhxtpneosp] has joined #mlpack
18:57 -!- daivik [daivik@gateway/shell/xshellz/x-sbonfafhxtpneosp] has left #mlpack []
19:33 -!- s1998 [0e8bc409@gateway/web/freenode/ip.] has joined #mlpack
19:39 < chvsp> Hi zoq, rcurtin: I read about the serialisation and have understood what it does. But I couldn't understand which variables to serialise. Are there any criteria for the selection of variables?
20:00 -!- aditya_ [~aditya@] has quit [Ping timeout: 240 seconds]
20:00 < zoq> chvsp: Every parameter needed for the reconstruction. You can ask yourself: which parameters do I have to save for the reconstruction? E.g. for the Dropout layer we need ratio and rescale; everything else is calculated at runtime. For the linear layer we have to save the weights and the input and output size.
20:03 < zoq> chvsp: One scenario where serialization is used is to save the model to a file e.g. XML.
20:08 < chvsp> zoq: So in the case of batchnorm, the scale and shift vectors need to be stored. Will the mean and variance of the training set also be stored? Because at test time, we use the mean and variance of the training set.
20:10 < zoq> Yes, we also have to save the mean and variance parameters, since we don't know what someone will do once the model is loaded; they could continue to train the model or use it for prediction.
20:11 < zoq> You could also say, that you save the current state of the method/model.
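The conclusion above can be sketched with a mock of the name-value-pair pattern. Everything here is illustrative: mlpack's real version goes through boost::serialization via data::CreateNVP, while this mock archive just records which named members a hypothetical BatchNorm layer declares, i.e. its full current state.

```cpp
#include <set>
#include <string>
#include <utility>

// Mock archive: instead of writing XML, it records the declared names.
struct MockArchive
{
  std::set<std::string> names;

  template<typename T>
  MockArchive& operator&(const std::pair<std::string, T*>& nvp)
  {
    names.insert(nvp.first);
    return *this;
  }
};

// Simplified stand-in for data::CreateNVP: pair a member with its name.
template<typename T>
std::pair<std::string, T*> CreateNVP(T& value, const std::string& name)
{
  return { name, &value };
}

// Hypothetical BatchNorm state; member names are illustrative.
struct BatchNormSketch
{
  double gamma = 1.0, beta = 0.0;             // Learned scale and shift.
  double runningMean = 0.0, runningVar = 1.0; // Training-set statistics.

  template<typename Archive>
  void Serialize(Archive& ar)
  {
    ar & CreateNVP(gamma, "gamma");
    ar & CreateNVP(beta, "beta");
    ar & CreateNVP(runningMean, "runningMean");
    ar & CreateNVP(runningVar, "runningVar");
  }
};
```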
20:28 < kris2> any good way to get the values in the hidden layers at each iteration?
20:29 < kris2> when I call model.Predict(x), can I get the values in the hidden layers also?
20:29 -!- AL3x3d [bc1b6a8d@gateway/web/freenode/ip.] has joined #mlpack
20:38 -!- AL3x3d [bc1b6a8d@gateway/web/freenode/ip.] has quit [Quit: Page closed]
20:39 < kris2> could we do a boost::apply_visitor(getparameters, model.Model()[i])
20:43 < zoq> kris2: hm, that would mean a layer can access the Model, right?
20:44 < kris2> zoq: I think we could use a forwardvisitor(input, output).
20:45 < kris2> I'm actually trying to implement vanilla policy gradients, so that's why I need the hidden values for the whole episode
20:46 < zoq> And the base container e.g. FFN calls the forwardvisitor function?
20:48 < kris2> sorry, I don't understand.... just give me a minute
20:49 -!- dineshraj01 [~dinesh@] has quit [Read error: Connection reset by peer]
20:51 < zoq> What I did for the recurrent visual attention model was to split the network into two branches. One branch for the input and another branch for the actual computation and merge it back where I needed the input and the output of the previous layer.
20:51 < zoq> What you also could do is to implement some function, e.g. Input, and use a visitor that is called in each iteration, which updates the input.
21:18 < chvsp> Ok got it thanks
21:33 -!- delfo_ [~bruno@] has joined #mlpack
21:39 < s1998> Since there is no pruning method in the current implementation, I would like to implement Reduced Error Pruning on decision trees.
21:40 < s1998> Preferably this one :
21:40 < s1998> Can I go ahead with it ?
21:45 -!- vinayakvivek [uid121616@gateway/web/] has quit [Quit: Connection closed for inactivity]
21:45 < zoq> s1998: It might take some time before rcurtin answers the question, you can always check the irc logs:
21:46 < s1998> Sure :)
21:48 -!- s1998 [0e8bc409@gateway/web/freenode/ip.] has quit [Quit: Page closed]
21:48 < rcurtin> I'm in transit right now, it will be a few hours perhaps before I can respond
21:48 < rcurtin> too much travel...
21:51 < zoq> Hopefully some nice place, like Hawaii :)
21:54 < kris2> sorry for the late reply. 1) "And the base container e.g. FFN calls the forwardvisitor function" — no, any function, let's say bookkeeping(), could call boost::apply_visitor(forwardvisitor(input, output)). Yes, I agree we are doing double computation here. That's my only concern.
21:55 < kris2> zoq: also one more thing: when we say parameters, do we mean W, or do we mean what the outputParameterVisitor gives as output?
21:56 < kris2> When you implement some function......... that updates the input — this would also suffer from the same problem of double computation, as I previously described.
21:56 < zoq> kris2: Parameters = all trainable parameters, in most cases the weights.
21:58 < kris2> ohhh okay then even the outputParameters make sense.
21:58 < zoq> We could save a reference instead of a copy, but yes there is a small overhead.
22:02 < kris2> i think you misunderstood. i need the hidden state values at every forward pass iteration.
22:04 < kris2> should i elaborate more.
22:04 < zoq> ah, I did
22:04 < kris2> for input I am just pushing it to a std::vector<arma::vec>
22:05 < kris2> ok, that's why I was using the forward visitor for getting the hidden state values.
22:06 < zoq> Couldn't you use the OutputParameter for that?
22:08 < kris2> hmm okay, so the outputParameter gives w^T x for every layer, if I am not wrong.
22:08 < zoq> for the linear layer yes
22:08 < zoq> and you like x right?
22:10 < zoq> Which is either InputParameter or OutputParameter, depending on where you start: for (size_t i = 0; i < network.size(); ++i) { outputParameter = boost::apply_visitor(OutputParameterVisitor(), network[i]); }
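In modern C++ the same loop can be sketched with std::variant/std::visit in place of boost::variant and the visitor classes (hypothetical layer types; mlpack itself uses the boost machinery): each layer type stores its own outputParameter, and one generic visitor pulls it out of the heterogeneous container.

```cpp
#include <variant>
#include <vector>

// Hypothetical layer types, each holding the output of its last forward pass.
struct LinearLayer  { std::vector<double> outputParameter; };
struct SigmoidLayer { std::vector<double> outputParameter; };

using LayerTypes = std::variant<LinearLayer, SigmoidLayer>;

// Analogue of applying OutputParameterVisitor to one element of the network:
// the generic lambda is valid for every alternative, so one visitor suffices.
std::vector<double> OutputParameter(const LayerTypes& layer)
{
  return std::visit([](const auto& l) { return l.outputParameter; }, layer);
}
```

Collecting the hidden values for an episode would then be a loop over the network, pushing `OutputParameter(network[i])` into a `std::vector` after each forward pass.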
22:10 -!- delfo_ [~bruno@] has quit [Quit: WeeChat 1.7]
22:11 < kris2> yes exactly what i was thinking thanks
22:13 < kris2> a side question: I was reading the reinforcement learning project idea again. Is the deliverable for the project just to implement those algorithms, or to provide a CLI interface so that users can tune the parameters? I think it's the latter.
22:18 < kris2> zoq: also I wanted to know by when we should submit the proposals.
22:24 < chvsp> Hi kris2: could you please give me a brief introduction to what a visitor is? I think I would need it too, to extract the gradients from a layer.
22:26 < kris2> chvsp: Just read this, I think it explains it in the best possible way.
22:27 < chvsp> Ok thanks
22:27 < kris2> I think you were implementing the batchnorm, right? Where did you require a visitor? Just curious.
22:30 < chvsp> I want to get the gradients flowing through the layer, to test the layer. I was wondering how to go about it. Your case seemed similar, hence I asked
22:35 < zoq> kris2: The mentioned algorithms are examples, if you have some other interesting methods in mind, I'm open for a discussion.
22:35 < zoq> kris2: The plan is to implement some but not all of them and to provide an interface, so that someone could write a new task e.g. You have some sensor that measures the room temperature and you'd like to know at which time you could open the window without wasting too much energy, or you have this game where you can't find a solution, so you like to use some machine learning to figure it out :)
22:35 < zoq> kris2: The application phase opens March 20 and ends April 3 ... not sure what the exact dates are, you should check the timeline. Anyway, you can submit and update your proposal in that timeframe, and that gives us the chance to take a look at the proposal and give you some feedback. But if you submit your proposal like 3 days before the deadline, we can't guarantee that we have the chance to
22:35 < zoq> give you some feedback.
22:37 < zoq> chvsp: If you have e.g. Linear<> layer(10, 10); you can access the gradients with layer.Gradients().
22:38 < kris2> Aaah thanks.... Then maybe I should start right now. Shouldn't leave it till the last moment.
22:39 < zoq> chvsp: Once you don't know the layer type, you have to use some of the visitors.
22:39 < zoq> kris2: Good idea, as I said you can update your proposal as often as you like in that timeframe.
22:40 < chvsp> zoq: Then why are visitors used in the first place? You declare the network architecture beforehand, hence you must know the layers and their types.
22:44 < chvsp> Oh I didn't know that the proposals could be edited. I too will start preparing one.
22:45 < chvsp> I mean edited after submission.
22:46 < zoq> chvsp: You kind of lose that knowledge if you put multiple types (Linear, Sigmoid, Dropout, etc.) into one container. E.g. Linear<> layer(10, 10); template<typename T> DoSomething(T& layer) — T could be anything, so you need some abstraction that allows you to call a specific function for specific types, since not every layer implements the same function set.
22:47 < kris2> chvsp: The explanation: visitors separate the data structure from the algorithm. For different layer types we could implement, let's say, a weight-parameters visitor, which would give the weight parameters independent of the layer type.
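zoq's point — the container erases the static type, and not every layer implements the same function set — can be sketched with std::variant (hypothetical Dense/ReLU types; mlpack uses boost::variant). A visitor with one overload per alternative restores per-type behavior for a layer whose exact type is no longer known:

```cpp
#include <cstddef>
#include <variant>
#include <vector>

// Hypothetical layer types: only Dense has trainable Parameters().
struct Dense
{
  std::vector<double> weights{0.1, 0.2};
  std::vector<double>& Parameters() { return weights; }
};
struct ReLU { /* no trainable parameters, no Parameters() function */ };

using Layer = std::variant<Dense, ReLU>;

// Count the trainable parameters of a layer whose exact type is unknown;
// the visitor supplies the per-type behavior the container erased.
size_t ParameterCount(Layer& layer)
{
  struct Visitor
  {
    size_t operator()(Dense& l) const { return l.Parameters().size(); }
    size_t operator()(ReLU&) const { return 0; }  // Nothing to train.
  };
  return std::visit(Visitor{}, layer);
}
```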
22:48 < zoq> chvsp: Yeah, I think the rule that you can update your proposal was introduced two years ago.
22:54 < rcurtin> zoq: nope, just a trip to the Symantec office in LA... not very exciting to sit in the cubicles here
22:54 < rcurtin> but I will go racing this weekend so that will make the trip worth it :)
22:56 < rcurtin> s1998: (hopefully you check the logs) I think that sounds like a good thing to implement, but consider implementing the 'PruningStrategy' as a template policy class, just like CategoricalSplitType and NumericSplitType
22:56 < chvsp> zoq: When we do FFN<> model; model.Add<Layer1>(); we don't have the Layer1 object with us. Hence to access such objects you use a visitor. Is that what you wanted to convey?
22:57 < rcurtin> s1998: instead of citing the paper you did in the code, though, I'd consider citing Quinlan's original:
22:57 < rcurtin> strange journal name for that paper... "international journal of man-machine studies"
22:58 -!- kris2 [~kris@] has left #mlpack []
22:59 < chvsp> rcurtin: "man studies machine" would have made sense... :D
23:04 < rcurtin> I wondered if it was a Kraftwerk reference... 'die Mensch-Maschine"
23:10 < zoq> chvsp: I think we mean the same thing, yes :)
23:10 < zoq> rcurtin: Now listen to some Kraftwerk songs :)
23:13 < chvsp> Kraftwerk die Mensch Maschine - they just seem to repeat the same thing over and over. Its good nonetheless. :)
23:15 < zoq> I agree, it's kinda catchy
23:15 < rcurtin> I like Kraftwerk, the repetitiveness does not bother me at all :)
23:16 < zoq> Does anyone have other music recommendations?
23:20 -!- chvsp [cb6ef207@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
23:31 < rcurtin> hehe, if we are thinking in the genre of Kraftwerk, I have also been listening to some Moebius & Plank
23:33 -!- diehumblex [uid209517@gateway/web/] has quit [Quit: Connection closed for inactivity]
23:44 -!- chvsp [cb6ef207@gateway/web/freenode/ip.] has joined #mlpack
23:46 < zoq> Another german electronic music band, but this time I can't say I know a single song.
23:55 < rcurtin> yet another german electronic music band I found some time back that I liked was Deutsche-Amerikanische Freundschaft (D.A.F.), I think they mostly recorded in the late 80s/early 90s
23:56 < rcurtin> very minimal electronica, kind of like some of the more quiet tracks from Kraftwerk
--- Log closed Thu Mar 09 00:00:26 2017