mlpack IRC logs, 2018-06-25

Logs for the day 2018-06-25 (starts at 0:00 UTC) are shown below.

--- Log opened Mon Jun 25 00:00:18 2018
01:25 -!- vivekp [~vivek@unaffiliated/vivekp] has joined #mlpack
06:49 -!- vivekp [~vivek@unaffiliated/vivekp] has quit [Ping timeout: 264 seconds]
07:47 -!- vivekp [~vivek@unaffiliated/vivekp] has joined #mlpack
09:49 < jenkins-mlpack> Project docker mlpack nightly build build #360: STILL UNSTABLE in 2 hr 35 min: http://masterblaster.mlpack.org/job/docker%20mlpack%20nightly%20build/360/
11:12 < zoq> haritha1313: The output dimension of the embedding layer looks strange.
13:13 -!- witness_ [uid10044@gateway/web/irccloud.com/x-alfgblzgqtcrfcne] has joined #mlpack
14:16 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has joined #mlpack
14:38 -!- haritha1313 [0e8bf0fb@gateway/web/freenode/ip.14.139.240.251] has joined #mlpack
14:39 < haritha1313> zoq: Hi, could you please give me the input parameters for which there is trouble?
14:40 < zoq> haritha1313: Hey, here you go https://gist.github.com/zoq/08c3035ba5b742aff6d5fd4f6ef71214
14:46 < haritha1313> Here, I suppose, the embedding layer is supposed to give an output of 20x10, where 20 is the embedding size and 10 is the input vector's size.
14:48 < zoq> haritha1313: right
14:51 < haritha1313> I think the network is giving the right dimensions then. Sorry, I'm not able to point out what is wrong.
14:53 < haritha1313> The output after using the merge model is of dimension 100x2, and this is because of the flattening of the input by the concat layer, as we discussed yesterday.
14:53 < haritha1313> Is there anything I am missing?
15:01 < zoq> haritha1313: The issue I see is that the concat layer returns 200 x 2, but the linear layer after the concat one expects: network.Add<Linear<> >(20, 5);
15:02 < zoq> This could be a mistake on my side.
15:03 < zoq> I would expect a single sample as output (..., 1)
15:03 < zoq> So there might be a need for a flatten layer, or an option for the concat layer to flatten its output.
15:36 < haritha1313> Sorry for the delay. I had gone for dinner.
15:36 < haritha1313> Yes, that (20, 5) was written expecting the concat layer to give a 20x10 output.
15:37 < zoq> haritha1313: Ahh, I see, that makes sense :)
15:38 < haritha1313> After yesterday's discussion I worked on it so that the 200X2 output uses the subview layer for flattening.
15:39 < haritha1313> As we discussed earlier, subview will convert each batch into a single vector, so I thought that could just flatten it for us.
15:40 < zoq> Right, using subview should work
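
A minimal sketch of the shape bookkeeping discussed above, written against plain Armadillo (the matrix library mlpack uses) rather than the actual mlpack ANN layers; the sizes (embedding size 20, sequence length 10, batch size 2) are taken from the conversation but otherwise illustrative, and the flattening loop only mimics what a concat/subview step would do:

    #include <armadillo>

    int main()
    {
      const arma::uword embedSize = 20, seqLen = 10, batchSize = 2;

      // The embedding layer produces an (embedSize x seqLen) matrix per sample.
      arma::cube embeddings(embedSize, seqLen, batchSize);
      embeddings.randu();

      // Flattening each sample column-wise keeps the batch column-major:
      // every sample becomes one (embedSize * seqLen) x 1 column, so the
      // whole batch is (embedSize * seqLen) x batchSize, i.e. 200 x 2 here.
      arma::mat flattened(embedSize * seqLen, batchSize);
      for (arma::uword i = 0; i < batchSize; ++i)
        flattened.col(i) = arma::vectorise(embeddings.slice(i));

      // Any Linear layer placed after this step therefore has to be built
      // with inSize = embedSize * seqLen (200), not embedSize (20).
      flattened.print("flattened batch:");
      return 0;
    }

This is exactly the bookkeeping the exchange above settles on: once subview flattens each batch column, the following Linear layer's input size must match the flattened length.
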
16:38 -!- travis-ci [~travis-ci@ec2-54-90-120-246.compute-1.amazonaws.com] has joined #mlpack
16:38 < travis-ci> mlpack/mlpack#5153 (master - a2abf9d : Marcus Edel): The build has errored.
16:38 < travis-ci> Change view : https://github.com/mlpack/mlpack/compare/dd797f5d6346...a2abf9d81491
16:38 < travis-ci> Build details : https://travis-ci.org/mlpack/mlpack/builds/396448340
16:38 -!- travis-ci [~travis-ci@ec2-54-90-120-246.compute-1.amazonaws.com] has left #mlpack []
17:01 -!- haritha1313 [0e8bf0fb@gateway/web/freenode/ip.14.139.240.251] has quit [Ping timeout: 260 seconds]
20:14 -!- ImQ009 [~ImQ009@unaffiliated/imq009] has quit [Quit: Leaving]
20:34 < ShikharJ> zoq: Sorry for reaching out late. Are you there?
20:35 < zoq> ShikharJ: I'm here.
20:36 < ShikharJ> zoq: I was re-thinking whether there's a need for implementing two separate modules for Weight Clipping and Gradient Penalty methods for WGAN.
20:37 < ShikharJ> zoq: They would both probably require us to make certain changes to the Evaluate function and the Gradient routine of the WGAN.
20:39 < zoq> ShikharJ: Do you think we could do something like: https://github.com/mlpack/mlpack/blob/master/src/mlpack/core/optimizers/sgd/update_policies/gradient_clipping.hpp
20:40 < ShikharJ> zoq: Their pseudocode implementations are pretty different.
20:40 < ShikharJ> zoq: I'm not sure if the existing gradient_clipping class can be re-used. I'll have to investigate.
20:41 < zoq> I think it's not the same; I was just thinking about the idea of implementing this as an update policy for the optimizer class.
20:42 < zoq> We could even combine multiple methods using a parameter pack.
20:42 < ShikharJ> zoq: I see; in gradient_clipping, the clipping is done first and then the update. This is not the same as the original WGAN algorithm.
20:42 < ShikharJ> zoq: Take a look at page 8 here (https://arxiv.org/pdf/1701.07875.pdf).
20:45 < zoq> Here they clip w after the update step
20:46 < zoq> I don't mind implementing this explicitly for the GAN class.
20:47 < ShikharJ> zoq: Exactly, and don't forget that clipping is done only in the discriminator, so this has to happen inside Gradient / Evaluate (wherever we calculate the gradients for the discriminator).
20:47 < zoq> It might be too specific for the optimizer class policy.
20:47 < zoq> You are right.
20:49 < ShikharJ> zoq: I'll formulate a basic API over the next couple of days, and maybe we can discuss further then.
20:50 < zoq> This sounds like a great idea to me. I guess we could reuse some ideas from the optimizer class (policy design); this might be useful to disable/enable certain features.
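
For reference, a minimal sketch of the critic step from Algorithm 1 of the WGAN paper linked above (arXiv:1701.07875), again in plain Armadillo rather than any mlpack optimizer policy; CriticStep is a hypothetical helper, the gradient is assumed to be computed elsewhere, and the paper's RMSProp update is replaced by a plain gradient step, so the only point being illustrated is the ordering: update first, then clamp the parameters (not the gradient), and only for the critic/discriminator.

    #include <armadillo>

    // One critic (discriminator) step in the WGAN style: gradient update
    // first, then clip the *parameters* to [-c, c].  This differs from
    // gradient_clipping.hpp, which clips the *gradient* before the update.
    void CriticStep(arma::mat& w,               // critic parameters
                    const arma::mat& gradient,  // d(loss)/dw, computed elsewhere
                    const double stepSize,      // paper uses RMSProp; plain step here
                    const double clip)          // c in the paper, e.g. 0.01
    {
      w -= stepSize * gradient;                 // ordinary gradient step
      w = arma::clamp(w, -clip, clip);          // weight clipping, critic only
    }

Because the clamp applies only to the critic's parameters and has to run right after each critic update, it fits naturally inside the GAN's Evaluate/Gradient path, as discussed above, rather than as a generic optimizer update policy.
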
--- Log closed Tue Jun 26 00:00:20 2018