mlpack IRC logs, 2018-03-20

Logs for the day 2018-03-20 (starts at 0:00 UTC) are shown below.

--- Log opened Tue Mar 20 00:00:00 2018
01:15 -!- mrcode [~thelaughi@] has joined #mlpack
02:15 -!- rajeshdm9 [6f5d9d73@gateway/web/freenode/ip.] has joined #mlpack
02:35 -!- csoni [~csoni@] has joined #mlpack
02:42 -!- csoni [~csoni@] has quit [Read error: Connection reset by peer]
03:16 -!- rajeshdm9 [6f5d9d73@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
04:41 -!- ravikiran0606 [0e8ba10d@gateway/web/freenode/ip.] has joined #mlpack
04:42 -!- ravikiran0606 [0e8ba10d@gateway/web/freenode/ip.] has quit [Client Quit]
05:05 -!- csoni [~csoni@] has joined #mlpack
05:11 -!- mrcode [~thelaughi@] has quit [Quit: Leaving.]
05:12 -!- MystikNinja [] has joined #mlpack
05:14 -!- MystikNinja [] has quit [Client Quit]
06:42 -!- csoni2 [~csoni@] has joined #mlpack
06:42 -!- csoni [~csoni@] has quit [Read error: Connection reset by peer]
06:48 -!- rajeshdm9 [0e8b9b18@gateway/web/freenode/ip.] has joined #mlpack
06:57 -!- ironstark_ [sid221607@gateway/web/] has quit [Quit: Connection closed for inactivity]
06:58 -!- csoni [~csoni@] has joined #mlpack
06:59 -!- csoni2 [~csoni@] has quit [Read error: Connection reset by peer]
07:03 -!- csoni [~csoni@] has quit [Ping timeout: 268 seconds]
07:57 -!- rajeshdm9 [0e8b9b18@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
09:57 -!- csoni [~csoni@] has joined #mlpack
10:01 -!- csoni [~csoni@] has quit [Ping timeout: 240 seconds]
10:09 -!- csoni [~csoni@] has joined #mlpack
10:11 -!- csoni [~csoni@] has quit [Read error: Connection reset by peer]
10:42 -!- donjin_master [9d25970e@gateway/web/freenode/ip.] has joined #mlpack
10:44 < donjin_master> Hello everyone, I want to submit my proposal on reinforcement learning. Can anyone suggest what I should write in the proposal?
10:45 < donjin_master> For example, I want to work on the double DQN algorithm; do I have to explain how I am going to implement this algorithm over the summer and give a proper timeline of the work schedule?
10:46 < donjin_master> Correct me if I am wrong ...
10:46 < donjin_master> Do I have to include the actual code for the algorithm in my proposal?
10:46 -!- donjin_master [9d25970e@gateway/web/freenode/ip.] has quit [Client Quit]
11:17 -!- vivekp [~vivek@unaffiliated/vivekp] has quit [Ping timeout: 260 seconds]
11:19 -!- vivekp [~vivek@unaffiliated/vivekp] has joined #mlpack
11:29 < zoq> donjin_mast: Hello, the application guide should be helpful.
12:21 -!- donjin_master [9d258b13@gateway/web/freenode/ip.] has joined #mlpack
12:22 < donjin_master> Thanks zoq, I have gone through the application guide and I am going to draft a proposal as soon as possible.
12:22 -!- donjin_master [9d258b13@gateway/web/freenode/ip.] has quit [Client Quit]
12:26 -!- csoni [~csoni@] has joined #mlpack
12:26 -!- haritha1313 [2ff7e196@gateway/web/freenode/ip.] has joined #mlpack
12:27 < haritha1313> @rcurtin: @zoq: I am writing my proposal based on neural collaborative filtering, and as per the discussions on the mailing list, I tried to benchmark it against the existing CF algorithms in mlpack.
12:29 < haritha1313> Since the available NCF implementation uses hit ratio and NDCG as metrics, and mlpack uses RMSE, I calculated RMSE for NCF. On comparison, NCF gives an RMSE of about 1.6, while most algorithms in mlpack give an RMSE greater than 2.
12:29 -!- csoni [~csoni@] has quit [Read error: Connection reset by peer]
12:30 < haritha1313> Do you think these results are enough to check whether it is worth implementing? Or should I calculate the hit ratio for mlpack's CF for comparison?
12:32 < zoq> haritha1313: I think this is just fine.
12:32 < haritha1313> @zoq: Thanks :)
12:33 < zoq> Thanks for taking the time to do the comparison.
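For reference, the RMSE figures quoted above are the usual root-mean-squared error over the held-out ratings,

    \mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\hat{r}_i - r_i\right)^2},

where \hat{r}_i is the predicted rating and r_i the observed one; lower is better, which is why an RMSE of roughly 1.6 for NCF versus values above 2 for the existing CF methods reads as an improvement.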
12:43 -!- haritha1313 [2ff7e196@gateway/web/freenode/ip.] has left #mlpack []
13:34 -!- prem_ktiw [~quassel@] has joined #mlpack
14:01 -!- csoni [~csoni@] has joined #mlpack
14:07 -!- sujith_ [0e8ba0e9@gateway/web/freenode/ip.] has joined #mlpack
14:16 -!- csoni2 [~csoni@] has joined #mlpack
14:16 -!- csoni [~csoni@] has quit [Read error: Connection reset by peer]
14:17 -!- csoni [~csoni@] has joined #mlpack
14:17 -!- csoni2 [~csoni@] has quit [Read error: Connection reset by peer]
14:19 -!- satyam_2401 [uid282868@gateway/web/] has joined #mlpack
14:25 -!- rf_sust2018 [~flyingsau@] has joined #mlpack
14:37 -!- prem_ktiw [~quassel@] has quit [Ping timeout: 245 seconds]
15:01 -!- sujith_ [0e8ba0e9@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
15:04 -!- rf_sust2018 [~flyingsau@] has quit [Quit: Leaving.]
15:13 -!- rf_sust2018 [~flyingsau@] has joined #mlpack
15:14 -!- csoni [~csoni@] has quit [Ping timeout: 264 seconds]
15:20 -!- csoni [~csoni@] has joined #mlpack
15:39 -!- csoni [~csoni@] has quit [Read error: Connection reset by peer]
15:56 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has joined #mlpack
15:58 -!- IAR [c488018e@gateway/web/freenode/ip.] has joined #mlpack
15:58 -!- IAR [c488018e@gateway/web/freenode/ip.] has quit [Client Quit]
16:05 -!- IAR_ [c488018e@gateway/web/freenode/ip.] has joined #mlpack
16:08 -!- IAR [~IAR@] has joined #mlpack
16:08 -!- csoni [~csoni@] has joined #mlpack
16:08 -!- IAR_ [c488018e@gateway/web/freenode/ip.] has quit [Client Quit]
16:11 -!- IAR [~IAR@] has quit [Read error: Connection reset by peer]
16:28 -!- satyam_2401 [uid282868@gateway/web/] has quit [Quit: Connection closed for inactivity]
16:34 -!- yashsharan [6741c40a@gateway/web/freenode/ip.] has joined #mlpack
16:34 < yashsharan> @zoq Did you get the chance to review my draft? Thanks.
16:37 -!- csoni [~csoni@] has quit [Read error: Connection reset by peer]
16:56 -!- csoni [~csoni@] has joined #mlpack
16:58 -!- csoni [~csoni@] has quit [Read error: Connection reset by peer]
17:16 -!- Prabhat-IIT [6725c961@gateway/web/freenode/ip.] has joined #mlpack
17:18 < Prabhat-IIT> zoq: Please look into the SAGA optimizer PR; it's on the verge of being ready if we can handle the random seed issue :)
17:19 < Prabhat-IIT> zoq: Btw, what do you think about the particle topology in PSO that I've elaborated on in my draft?
17:24 -!- donjin_master [9d259d22@gateway/web/freenode/ip.] has joined #mlpack
17:24 -!- csoni [~csoni@] has joined #mlpack
17:25 < donjin_master> Can someone suggest how I can implement a reinforcement learning algorithm on my machine using the CartPole and Mountain Car environments?
17:25 -!- donjin_master [9d259d22@gateway/web/freenode/ip.] has quit [Client Quit]
17:32 -!- csoni [~csoni@] has quit [Ping timeout: 264 seconds]
17:46 -!- sumedhghaisas [~yaaic@2a00:79e0:d:fd00:3dcd:8341:6116:2a18] has joined #mlpack
17:53 -!- Abkb [a0ca2502@gateway/web/freenode/ip.] has joined #mlpack
17:55 -!- Abkb [a0ca2502@gateway/web/freenode/ip.] has quit [Client Quit]
17:57 < Atharva> sumedhghaisas: the way the mlpack optimizer object works is that it expects the loss function as the last layer of the network object. In variational autoencoders, the loss function is the sum of the KL divergence and the reconstruction loss
17:58 < Atharva> which is usually negative log likelihood or mean squared error
17:59 -!- rf_sust2018 [~flyingsau@] has quit [Quit: Leaving.]
17:59 < Atharva> Should I propose to define a new layer/loss function including both, or should I try to find a way to combine two cost/layer objects?
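The combined loss being described is the standard negative variational lower bound for a VAE: a reconstruction term plus a KL term,

    \mathcal{L}(x) = \mathbb{E}_{q(z \mid x)}\left[-\log p(x \mid z)\right] + \mathrm{KL}\left(q(z \mid x)\,\|\,p(z)\right),

where the reconstruction term is typically the negative log likelihood or mean squared error of the decoder output, and the KL term is computed from the encoder's distribution q(z | x) against the prior p(z), i.e. from a middle layer of the network rather than the last one.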
18:03 -!- csoni [~csoni@] has joined #mlpack
18:09 -!- sourabhvarshney1 [73f840a9@gateway/web/freenode/ip.] has joined #mlpack
18:12 -!- rf_sust2018 [~flyingsau@] has joined #mlpack
18:12 < sumedhghaisas> @Atharva: Hi Atharva
18:13 < sumedhghaisas> Let me see if I understand your question correctly.
18:15 -!- sumedhghaisas2 [~yaaic@2a00:79e0:d:fd00:3dcd:8341:6116:2a18] has joined #mlpack
18:15 -!- sumedhghaisas [~yaaic@2a00:79e0:d:fd00:3dcd:8341:6116:2a18] has quit [Read error: Connection reset by peer]
18:15 < sumedhghaisas2> @Atharva: sorry for that... connection problem
18:15 < Atharva> No problem
18:16 < sumedhghaisas2> Okay, so in the current FFN and RNN classes the loss function can only depend on the last layer's output
18:17 < sumedhghaisas2> while the KL divergence will depend on the middle layer z ... is that correct?
18:17 -!- csoni [~csoni@] has quit [Read error: Connection reset by peer]
18:17 < Atharva> Yeah, that’s correct
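Roughly, the reason is that mlpack's FFN class takes the loss as a template parameter and only ever evaluates it on the final layer's output. A minimal sketch (exact headers, template parameters, and layer names may differ between mlpack versions):

    #include <mlpack/methods/ann/ffn.hpp>
    #include <mlpack/methods/ann/layer/layer.hpp>

    using namespace mlpack::ann;

    int main()
    {
      // Hypothetical sizes, for illustration only.
      const size_t inputSize = 784, hiddenSize = 64, outputSize = 10;

      // The loss (NegativeLogLikelihood here) is a template parameter of FFN
      // and is only evaluated on the final layer's output, so a KL term that
      // depends on a middle layer has nowhere to plug in.
      FFN<NegativeLogLikelihood<>, RandomInitialization> model;
      model.Add<Linear<>>(inputSize, hiddenSize);
      model.Add<ReLULayer<>>();
      model.Add<Linear<>>(hiddenSize, outputSize);
      model.Add<LogSoftMax<>>();

      // model.Train(data, labels) would then minimize only
      // NegativeLogLikelihood(model output, labels).
      return 0;
    }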
18:18 < sumedhghaisas2> Okay. Yes, I considered this problem while proposing the VAE framework.
18:19 < Atharva> when we call the optimize function from the train function, we just pass in the entire network as the first argument; for VAEs, we need to optimize the parameters with respect to the combined loss function
18:19 < sumedhghaisas2> If we want to make it work with the current framework there are a couple of options
18:20 < sumedhghaisas2> but all of them would involve changing a lot of pre-existing code.
18:20 < Atharva> Okay, what are the options?
18:23 < sumedhghaisas2> hmmm... the same problem will be faced by GANs, though; there are 2 networks to optimize there
18:24 < sumedhghaisas2> We have to look into their code to figure out how they are optimizing the 2 networks. The same will apply to our Encoder and Decoder. What do you think?
18:26 < Atharva> Sorry, I am not quite sure about how GANs are trained, but both the encoder and decoder of the VAE are trained as one network, correct me if I am wrong
18:28 < sumedhghaisas2> Yes you are right. But the problem we are facing might be similar to the problem faced in GAN. let me see if I can explain this properly.
18:30 < sumedhghaisas2> Ahh, but that is only if the GAN implementation in mlpack is using variational approximations. I see.
18:30 < sumedhghaisas2> Sorry :) I am not sure about the GAN implementation in mlpack either.
18:30 < sumedhghaisas2> so the options...
18:31 < sumedhghaisas2> The preferred one would be to implement a separate class which supports training with variational approximations, i.e. techniques which involve KL in the loss.
18:33 < sumedhghaisas2> Another is to change the current framework such that the loss can depend on each layer: some sort of visitor which takes a loss term from each layer.
18:34 < Atharva> Do you mean that we will have a class which already has the KL loss, and then we will also pass the reconstruction loss to this class, and this combined cost will be optimized?
18:34 < Atharva> The above doubt is about the first option
18:35 < sumedhghaisas2> Umm, not really.
18:36 < sumedhghaisas2> So FFN and RNN both do not support a variational loss, which may be defined over a middle layer.
18:37 < sumedhghaisas2> So the first option involves implementing a new framework altogether which handles variational losses better.
18:37 < Atharva> Oh
18:37 < Atharva> But that doesn’t seem to be a good solution just to fit in a type of loss
18:38 < sumedhghaisas2> I agree. So the second option...
18:38 < Atharva> For the second option, instead of a visitor that takes a loss from every layer (which we don't need), can't we maintain a pointer to the middle layer and take the loss only from that?
18:40 < sumedhghaisas2> Yes, we can. But where will you maintain the pointer? I mean, how will you tell the FFN class to maintain such a pointer?
18:41 < sumedhghaisas2> The problem with the FFN class is that it treats all the layers equally.
18:41 < Atharva> So what I am planning to do is, instead of having the variational autoencoder as one FFN object, I will have the encoder as one, the decoder as another, and the middle layer (that samples the latent variables) as a different layer object altogether
18:41 < Atharva> These objects will be inside the VAE class which I will define
18:42 < sumedhghaisas2> Ahh yes. Basically that's the first option. :)
18:43 < sumedhghaisas2> So the newly created class will become parallel to FFN and RNN
18:43 < sumedhghaisas2> this way, if other frameworks use variational inference, they will use this newly created framework for training...
18:44 < Atharva> Oh, okay
18:45 < Atharva> Yeah, okay, so in order to train this network, we will need to modify a lot of code
18:46 < sumedhghaisas2> The first option involves a lot of work but in my opinion is longer lasting. Although we should talk to @zoq and @rcurtin about this before proceeding. They might have a better idea about this.
18:47 < sumedhghaisas2> Various GANs use a variational generator to speed up the training, or that's what I have heard
18:47 < sumedhghaisas2> This class could be used there.
18:47 < Atharva> Sorry I didn’t get the first option when you were explaining it.
18:47 < sumedhghaisas2> zoq will know more about this than me.
18:48 < sumedhghaisas2> don't worry :) I am not very good at explaining either.
18:48 < sumedhghaisas2> let's be happy that we are now on the same page
18:48 < Atharva> Yeah :)
18:49 < Atharva> I do think this is a really good option
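A minimal sketch of the kind of class being discussed; every name here is hypothetical (not existing mlpack API) and only shows the shape of the design: a VAE type that owns an encoder network, a sampling layer, and a decoder network, and forms the combined objective itself:

    #include <mlpack/core.hpp>
    #include <mlpack/methods/ann/ffn.hpp>

    using namespace mlpack::ann;

    // Hypothetical sampling layer: takes the means and standard deviations
    // produced by the encoder, draws z = mean + stddev % eps with
    // eps ~ N(0, I), and can report its KL divergence against the prior.
    class SamplingLayer { /* ... */ };

    // Hypothetical VAE class, parallel to FFN/RNN rather than built on them.
    template<typename ReconstructionLoss = MeanSquaredError<>>
    class VAE
    {
     public:
      template<typename OptimizerType>
      void Train(const arma::mat& data, OptimizerType& optimizer);
      // Objective = ReconstructionLoss(decoder output, data)
      //           + sum of KL terms from every sampling layer.

     private:
      FFN<> encoder;          // outputs means and standard deviations
      SamplingLayer sampling; // reparameterized draw of the latent z
      FFN<> decoder;          // reconstructs the input from z
    };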
18:50 < sumedhghaisas2> Although we must be careful to make it as generic as possible. A lot of generative models are hierarchical
18:50 < Atharva> Because when using VAEs, we should have the freedom to use the encoder and decoder independently, especially for advanced users
18:50 < sumedhghaisas2> as in they may have hierarchical latents in them
18:50 < Atharva> Yes I understand, it needs to be very generic
18:50 < sumedhghaisas2> thus the loss might be dependent on various KL divergences
18:52 < Atharva> Okay, do you mean there should be a way to keep track of multiple sampling layers, each of which can have a KL divergence?
18:52 < sumedhghaisas2> Also I am not sure that the current FFN supports using distributions as output
18:52 < sumedhghaisas2> @Atharva: yes precisely.
18:54 < Atharva> About that, the latent variable sampling layer which I plan to implement will take care of the reparameterization trick used to train VAEs
18:54 < Atharva> I mean..
18:54 < sumedhghaisas2> We need not implement such a hierarchical VAE ourselves; we just need to make sure that someone who wants to can use our API without many changes.
18:55 < Atharva> Sorry I didn’t quite get that
18:56 < sumedhghaisas2> I mean we should design the class such that there can be multiple sampling layers
18:57 < Atharva> I meant that the encoder will output means and standard deviations and pass them to the sampling layer; this layer will then sample a point from a standard normal. By multiplying this by the standard deviation and adding the mean we have a sample
18:58 < Atharva> I think this layer could be used anywhere, even multiple times, within the network
18:59 < sumedhghaisas2> Yes, that should work. Also, the final loss should be defined over the KL of all these layers.
18:59 < Atharva> We can maintain pointers to all such layers we have in the network and take KL divergences from them
18:59 < Atharva> Exactly
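The sampling step and per-layer KL term described above, written out with Armadillo (a sketch; the variable names are illustrative):

    #include <armadillo>
    #include <iostream>

    int main()
    {
      // Illustrative encoder outputs for one input: the mean and standard
      // deviation of the latent distribution q(z | x).
      arma::vec mean   = {0.1, -0.3, 0.7};
      arma::vec stddev = {0.9,  1.1, 0.5};

      // Reparameterization: draw eps ~ N(0, I) and set z = mean + stddev .* eps.
      // The randomness is isolated in eps, so gradients can flow through the
      // mean and standard deviation during backpropagation.
      arma::vec eps = arma::randn<arma::vec>(mean.n_elem);
      arma::vec z = mean + stddev % eps;

      // Each sampling layer contributes KL(q(z | x) || N(0, I)) to the loss;
      // for a diagonal Gaussian this is
      // 0.5 * sum(mean^2 + stddev^2 - 2 * log(stddev) - 1).
      const double kl = 0.5 * arma::accu(arma::square(mean)
          + arma::square(stddev) - 2.0 * arma::log(stddev) - 1.0);

      z.print("z");
      std::cout << "KL term: " << kl << std::endl;
      return 0;
    }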
18:59 < Atharva> So, is this all a good plan?
19:01 < sumedhghaisas2> We should still think about this and try to come up with different options if possible, before the coding actually begins.
19:01 < sumedhghaisas2> to make sure we are not missing anything
19:02 < Atharva> Yeah, but only 7 days remain until the proposal deadline; what do you think I should put as the plan there?
19:02 < sumedhghaisas2> Discussing it with zoq and rcurtin will also give us some more ideas and perspectives
19:03 < sumedhghaisas2> As the API is tentative, mentioning the gist of the discussion is also an option. :)
19:04 < Atharva> Also, can you point me to some paper/blog post with models that require multiple sampling layers?
19:04 < Atharva> Okay, I will do that, thank you! :)
19:05 < sumedhghaisas2> PixelVAE uses a PixelCNN layer as the decoder in a VAE.
19:05 < sumedhghaisas2> For CIFAR they indeed train a hierarchical model.
19:05 < sumedhghaisas2>
19:06 < Atharva> Oh, okay, so I guess it’s really important to make it as generic as possible so that every possible VAE model can be made using the framework
19:08 -!- sumedhghaisas2 [~yaaic@2a00:79e0:d:fd00:3dcd:8341:6116:2a18] has quit [Read error: Connection reset by peer]
19:09 -!- sumedhghaisas [~yaaic@2a00:79e0:d:fd00:3dcd:8341:6116:2a18] has joined #mlpack
19:09 < Atharva> zoq:
19:09 < Atharva> rcurtin:
19:11 < Atharva> It will be very helpful if you could give your opinions regarding this discussion. Please do when you get the time.
19:11 < sumedhghaisas> @Atharva: Yes. I am just starting a meeting so I might be delayed in responding; I will try to reply after the meeting.
19:12 < Atharva> Yeah, no problem, I will work on what we have discussed till now
19:24 -!- csoni [~csoni@] has joined #mlpack
19:31 -!- csoni [~csoni@] has quit [Read error: Connection reset by peer]
19:33 -!- IAR [~IAR@] has joined #mlpack
19:54 -!- IAR [~IAR@] has quit [Quit: Leaving...]
20:17 -!- sumedhghaisas [~yaaic@2a00:79e0:d:fd00:3dcd:8341:6116:2a18] has quit [Read error: Connection reset by peer]
20:18 -!- sumedhghaisas [~yaaic@2a00:79e0:d:fd00:3dcd:8341:6116:2a18] has joined #mlpack
20:22 -!- sumedhghaisas [~yaaic@2a00:79e0:d:fd00:3dcd:8341:6116:2a18] has quit [Ping timeout: 256 seconds]
20:23 -!- sumedhghaisas [~yaaic@] has joined #mlpack
20:28 -!- sumedhghaisas [~yaaic@] has quit [Ping timeout: 256 seconds]
20:32 -!- sourabhvarshney1 [73f840a9@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
20:33 -!- sumedhghaisas [~yaaic@2a00:79e0:d:fd00:3dcd:8341:6116:2a18] has joined #mlpack
20:35 -!- sumedhghaisas2 [~yaaic@2a00:79e0:d:fd00:3dcd:8341:6116:2a18] has joined #mlpack
20:35 -!- sumedhghaisas [~yaaic@2a00:79e0:d:fd00:3dcd:8341:6116:2a18] has quit [Read error: Connection reset by peer]
20:41 -!- sumedhghaisas2 [~yaaic@2a00:79e0:d:fd00:3dcd:8341:6116:2a18] has quit [Read error: Connection reset by peer]
20:43 -!- sumedhghaisas [~yaaic@2a00:79e0:d:fd00:3dcd:8341:6116:2a18] has joined #mlpack
20:45 -!- rf_sust2018 [~flyingsau@] has quit [Quit: Leaving.]
20:45 -!- sumedhghaisas [~yaaic@2a00:79e0:d:fd00:3dcd:8341:6116:2a18] has quit [Read error: Connection reset by peer]
20:46 -!- sumedhghaisas [~yaaic@] has joined #mlpack
20:51 -!- sumedhghaisas [~yaaic@] has quit [Ping timeout: 248 seconds]
20:52 -!- sumedhghaisas [~yaaic@] has joined #mlpack
20:58 -!- yashsharan [6741c40a@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
21:00 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has quit [Read error: Connection reset by peer]
21:13 -!- sumedhghaisas [~yaaic@] has quit [Read error: Connection reset by peer]
21:14 -!- ojasava [3d108c82@gateway/web/freenode/ip.] has joined #mlpack
21:14 -!- sumedhghaisas [] has joined #mlpack
21:14 -!- ojasava [3d108c82@gateway/web/freenode/ip.] has quit [Client Quit]