mlpack IRC logs, 2017-03-03

Logs for the day 2017-03-03 (starts at 0:00 UTC) are shown below.

--- Log opened Fri Mar 03 00:00:26 2017
00:09 -!- flyingpot [~flyingpot@] has joined #mlpack
00:09 < zoq> it's not that easy; 'make VERBOSE=1' sometimes helps
00:13 -!- flyingpot [~flyingpot@] has quit [Ping timeout: 260 seconds]
00:21 < arunreddy> no luck
00:29 < zoq> Maybe rcurtin has an idea, I'll take a closer look at the issue tomorrow ... at least it works with variadic templates
00:30 < arunreddy> sure. thanks
01:52 -!- mikeling [uid89706@gateway/web/] has joined #mlpack
02:11 -!- flyingpot [~flyingpot@] has joined #mlpack
02:27 -!- Paritosh [67157d50@gateway/web/freenode/ip.] has joined #mlpack
02:27 -!- Paritosh [67157d50@gateway/web/freenode/ip.] has quit [Client Quit]
02:28 -!- keebu [67157d50@gateway/web/freenode/ip.] has joined #mlpack
02:47 -!- topology [3d0c28b1@gateway/web/freenode/ip.] has joined #mlpack
03:09 -!- hxidkd [~hxidkd@] has joined #mlpack
03:38 -!- hxidkd [~hxidkd@] has quit [Ping timeout: 240 seconds]
03:38 -!- flyingpot [~flyingpot@] has quit [Ping timeout: 260 seconds]
03:41 -!- keebu [67157d50@gateway/web/freenode/ip.] has quit [Quit: Page closed]
03:57 -!- vinayakvivek [uid121616@gateway/web/] has joined #mlpack
04:17 -!- shihao [407978c3@gateway/web/freenode/ip.] has joined #mlpack
04:18 < shihao> Hi there! I'm curious whether mlpack or arma has a 'logsumexp' function?
04:18 < shihao> Or do we do it step by step?
04:21 -!- flyingpot [~flyingpot@] has joined #mlpack
04:29 -!- Thyrix [~Thunderbi@] has joined #mlpack
04:44 -!- aditya_ [~aditya@] has joined #mlpack
04:50 -!- kris2 [~kris@] has joined #mlpack
04:50 -!- kris2 [~kris@] has left #mlpack []
05:11 -!- Thyrix [~Thunderbi@] has quit [Quit: Thyrix]
05:45 -!- itachi [d2d4310a@gateway/web/freenode/ip.] has joined #mlpack
05:45 -!- itachi [d2d4310a@gateway/web/freenode/ip.] has left #mlpack []
05:45 -!- bipster [75cda65a@gateway/web/freenode/ip.] has joined #mlpack
06:24 -!- topology [3d0c28b1@gateway/web/freenode/ip.] has quit [Quit: Page closed]
06:36 -!- shihao [407978c3@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
06:48 -!- flyingpot_ [~flyingpot@] has joined #mlpack
06:50 -!- flyingpot [~flyingpot@] has quit [Ping timeout: 246 seconds]
06:52 -!- Thyrix [~Thunderbi@] has joined #mlpack
06:53 -!- bipster [75cda65a@gateway/web/freenode/ip.] has quit [Quit: Page closed]
06:58 -!- hxidkd [~hxidkd@] has joined #mlpack
07:33 -!- hxidkd [~hxidkd@] has quit [Ping timeout: 240 seconds]
08:03 -!- govg [~govg@unaffiliated/govg] has quit [Ping timeout: 240 seconds]
08:11 -!- shikhar [67d49def@gateway/web/freenode/ip.] has joined #mlpack
08:13 -!- hxidkd [~hxidkd@] has joined #mlpack
08:15 -!- vinayakvivek [uid121616@gateway/web/] has quit [Quit: Connection closed for inactivity]
08:27 -!- junaid_m [0e8ba0f6@gateway/web/freenode/ip.] has joined #mlpack
08:35 -!- junaid_m [0e8ba0f6@gateway/web/freenode/ip.] has quit [Quit: Page closed]
08:44 -!- hxidkd [~hxidkd@] has quit []
09:10 -!- nikhilweee [~nikhilwee@] has joined #mlpack
09:26 -!- shikhar [67d49def@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
09:27 -!- diehumblex [uid209517@gateway/web/] has joined #mlpack
09:40 -!- flyingpot_ [~flyingpot@] has quit [Ping timeout: 256 seconds]
10:02 -!- sdf_ [cb6ef204@gateway/web/freenode/ip.] has joined #mlpack
10:05 -!- sdf_ [cb6ef204@gateway/web/freenode/ip.] has quit [Client Quit]
10:05 -!- arnav [67525043@gateway/web/freenode/ip.] has joined #mlpack
10:06 -!- Ajinkya__ [82d6400b@gateway/web/freenode/ip.] has joined #mlpack
10:09 < arnav> I would like to work on Augmented Recurrent Neural Networks project
10:12 < arnav> Plus I am even interested in Reinforcement learning project
10:12 < arnav> is it possible to work on both?
10:13 < arnav> And how much time would I have to take out for the same
10:13 < arnav> I just have basic knowledge of c++ but I am good with python
10:15 < arnav> And when will the project officialy start
10:15 < arnav> Can we start before the timeline
10:16 < arnav> Please share the exact problem to approach so that I can start researching
10:18 -!- Captainfreak [0e8b7204@gateway/web/freenode/ip.] has joined #mlpack
10:19 -!- Captainfreak [0e8b7204@gateway/web/freenode/ip.] has quit [Client Quit]
10:36 -!- flyingpot [~flyingpot@] has joined #mlpack
10:41 -!- flyingpot [~flyingpot@] has quit [Ping timeout: 268 seconds]
10:43 -!- Sinjan_ [721de187@gateway/web/freenode/ip.] has joined #mlpack
10:47 < Sinjan_> <zoq> There are a few issues added by you. I want to work on them. But I am seeing in the comments section of the issues that there are already a few guys working on them. Can I too try to fix that issue? Or I should go for a separate one.
10:48 -!- arnav [67525043@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
11:04 -!- arnav [67525043@gateway/web/freenode/ip.] has joined #mlpack
11:07 -!- mikeling [uid89706@gateway/web/] has quit [Quit: Connection closed for inactivity]
11:14 -!- flyingpot [~flyingpot@] has joined #mlpack
11:39 < Sinjan_> I also would like to know that if I work on an issue, should it be one opened by the mentor I plan to work under or I can work on any issue.
11:43 -!- arnav [67525043@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
11:47 -!- vinayakvivek [uid121616@gateway/web/] has joined #mlpack
11:55 -!- Sinjan_ [721de187@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
11:57 -!- rutuja [ca87eec8@gateway/web/cgi-irc/] has joined #mlpack
12:05 -!- mikeling [uid89706@gateway/web/] has joined #mlpack
12:06 -!- SInjan_ [721de187@gateway/web/freenode/ip.] has joined #mlpack
12:19 -!- Ajinkya__ [82d6400b@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
12:23 -!- chvsp [cb6ef206@gateway/web/freenode/ip.] has joined #mlpack
12:31 -!- rutuja [ca87eec8@gateway/web/cgi-irc/] has quit [Quit: - A hand crafted IRC client]
12:46 -!- kris2 [~kris@] has joined #mlpack
13:06 -!- usama [6f44656c@gateway/web/freenode/ip.] has joined #mlpack
13:06 < usama> hey everyone
13:07 < usama> I had a question about arma::mat
13:07 < usama> when i print out a mat why does it print its TRANSPOSE and not the REAL mat
13:10 < chvsp> usama: Hi. Are you loading a dataset into arma::mat using data::Load?
13:11 < usama> YES
13:12 < chvsp> usama: So, data::Load() method loads the transpose of a dataset into the arma::mat.
13:12 < usama> is that going to affect how my data is interpreted by the algorithm???
13:13 < chvsp> It is not the fault of arma::mat; it's just that the data is stored in a transposed fashion
13:14 < usama> How can i cout<< the real mat
13:15 < chvsp> Just take a transpose of the matrix. Like cout << a.t();
13:15 < usama> Thanks!!!!!!!!!!! it worked
13:16 < usama> But in my opinion the << operator should print out the real data
13:16 < usama> it would be nice to see this changed
13:22 -!- Thyrix [~Thunderbi@] has quit [Quit: Thyrix]
13:26 < chvsp> You're welcome. :) I too felt it was a bit counterintuitive at first, but now I personally don't think it is that big an issue. Although I would certainly like to know the actual reason why it is the way it is.
13:29 < usama> yes certainly
13:32 < usama> In the meantime i have detected a BUG!!! in the << operator for arma::mat
13:32 < usama> when you print out the data the first row isn't PRINTED!!!
13:44 < usama> never mind that
13:45 < usama> it was just my console buffer
13:50 -!- Thyrix [~Thunderbi@] has joined #mlpack
13:51 -!- kris2 [~kris@] has quit [Quit: Leaving.]
14:03 -!- chvsp [cb6ef206@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
14:24 -!- flyingpot [~flyingpot@] has quit [Ping timeout: 268 seconds]
14:27 -!- Thyrix [~Thunderbi@] has quit [Remote host closed the connection]
14:31 -!- Thyrix [~Thunderbi@] has joined #mlpack
14:34 < zoq> usama: Elements are stored with column-major ordering (column by column), e.g. numpy is row-major (row by row).
14:36 -!- shikhar [67d49d1f@gateway/web/freenode/ip.] has joined #mlpack
14:57 < rcurtin> govg: here is the ICML paper I was working on:
14:57 -!- biswajitsc [75cda94b@gateway/web/freenode/ip.] has joined #mlpack
15:02 -!- SInjan_ [721de187@gateway/web/freenode/ip.] has quit [Quit: Page closed]
15:02 -!- Sinjan_ [721de187@gateway/web/freenode/ip.] has joined #mlpack
15:02 < Sinjan_> <zoq> There are a few issues added by you. I want to work on them. But I am seeing in the comments section of the issues that there are already a few guys working on them. Can I too try to fix that issue? Or I should go for a separate one.
15:03 < Sinjan_> I also would like to know that if I work on an issue, should it be one opened by the mentor I plan to work under or I can work on any issue.
15:07 -!- biswajitsc [75cda94b@gateway/web/freenode/ip.] has quit [Quit: Page closed]
15:09 -!- Thyrix [~Thunderbi@] has quit [Quit: Thyrix]
15:09 < usama> zoq: why is it column-major order?? It makes it difficult to work with data, since e.g. the NaiveBayesClassifier works on column-major data
15:14 -!- biswajitsc [uid216412@gateway/web/] has joined #mlpack
15:15 < rcurtin> usama: Armadillo is column major because the tools that it is built on (LAPACK, BLAS) are column-major
15:15 < rcurtin> it doesn't make things more difficult, it is simply a different way of thinking about things
15:16 < rcurtin> when you have column-major data, the column is contiguous in memory
15:16 < rcurtin> so for a machine learning context it becomes most appropriate (due to memory ordering) to consider a single point as a column
15:19 < zoq> Sinjan_: Hello, I've seen your message; we can't respond to every message instantly, but we will respond, it could be some hours though. You can always check out the logs
15:19 < zoq> Sinjan_: Regarding your question, it depends on the issue, for some issues multiple contributions are possible or you can collaborate on an issue. Also, you don't have to work on an issue that is related to the project or was created by the mentor.
15:20 < zoq> Sinjan_: The issues are there for you to get familiar with the codebase. You don't have to make a contribution to be considered, so don't worry if you can't find anything. Also, we're working on adding more issues on GitHub.
15:20 -!- flyingpot [~flyingpot@] has joined #mlpack
15:21 < rcurtin> usama: sorry, I did not finish the thought, I got interrupted :) anyway, since a single point is a column now, instead of the more typical "point as row" that is used in textbooks, papers, etc.,
15:21 < rcurtin> one has to sometimes do transposition on symbolic expressions before implementing them in armadillo
15:21 < rcurtin> but in the end, it's functionally identical
15:21 < rcurtin> just a different perspective that comes from the FORTRAN roots of BLAS and LAPACK
15:24 -!- flyingpot [~flyingpot@] has quit [Ping timeout: 240 seconds]
15:25 < zoq> Sinjan_: Also, do we talk about some specific issue, like the optimizer issue?
15:26 -!- aashay [uid212604@gateway/web/] has quit [Quit: Connection closed for inactivity]
15:28 < zoq> Sinjan_: ahh, I see you commented on the optimizer issue.
15:28 < Sinjan_> zoq: I was planning to work on issue #893
15:29 < Sinjan_> But I noticed that there are three guys already involved in each of the three sub-projects. Can I still work on one of them?
15:29 < Sinjan_> Or should I choose a different one?
15:30 < Sinjan_> Other than that I am working on the implementation of policy gradients since I applied for Reinforcement Learning.
15:33 < zoq> Sinjan_: hm, you can work on the mentioned algorithms, but I don't think opening another PR makes sense. I would probably choose another issue. Working on policy gradients is nice; that's one thing where multiple contributions are nice, and it shows us that someone is able to solve a problem.
15:36 -!- kesslerfrost [~textual@2405:204:18c:2092:ed57:bc0e:9fbd:d6a3] has joined #mlpack
15:37 < Sinjan_> Okay. Then I will work on another issue. Besides that I submitted a PR for issue #902 although that's one with P:trivial to get a hang of things. Also I will soon be done with the policy gradient implementation.
15:48 -!- Thyrix [2d4c4a21@gateway/web/freenode/ip.] has joined #mlpack
15:51 -!- TooObvious [1b3ef8bf@gateway/web/freenode/ip.] has joined #mlpack
15:51 -!- TooObvious [1b3ef8bf@gateway/web/freenode/ip.] has left #mlpack []
15:51 -!- HashCoder [1b3ef8bf@gateway/web/freenode/ip.] has joined #mlpack
15:53 -!- aashay [uid212604@gateway/web/] has joined #mlpack
15:53 < HashCoder> Hello, I would like to work on implementing the Deep Learning Modules. Could anyone provide some tips on getting familiar with the existing neural network library and understanding it?
15:55 -!- usama [6f44656c@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
16:00 < zoq> HashCoder: Hello, the Deep Learning Modules project idea has been discussed on the mailing list before: and
16:00 < zoq> HashCoder: Note that there are many more posts on this in the mailing list archive to search for; those are only some places to get started.
16:01 < HashCoder> Hello zoq...Thanks a lot...I will get working :D
16:02 < zoq> HashCoder: Sounds good, let us know if you have any further questions.
16:03 < HashCoder> Yeah sure ...Thanks :D
16:09 -!- kris1 [~kris@] has joined #mlpack
16:11 -!- rajat503 [7a0fc87e@gateway/web/freenode/ip.] has joined #mlpack
16:12 < rcurtin> ok, the "new" mlpack build slave called "dealgood" is online now and connected to masterblaster; it has 16 cores, 48GB RAM
16:12 < rcurtin> the process is much faster setting it up at Georgia Tech than at Symantec...
16:12 < rcurtin> opening the firewall only took one day instead of one year :)
16:15 < zoq> arnav: I wouldn't recommend working on both ideas; writing a good implementation for either one of the projects takes time, and writing good tests often takes much more time. But if you are confident that you can handle both projects or can find a way to combine both, feel free to do so; you have to make sure the proposed timeline is reasonable.
16:15 < zoq> arnav: Google Summer of Code 2017 Timeline: You can start before the actual coding phase begins; in fact, a lot of people get familiar with the codebase before that.
16:17 -!- aditya_ [~aditya@] has quit [Ping timeout: 256 seconds]
16:18 < zoq> rcurtin: Wow, every time you come up with a new name, I go and check where it is :)
16:19 < rcurtin> I was originally going to go with 'blackfinger', but the support guy I was working with suggested that 'blackfinger' was "too urbandictionary-able"
16:21 < zoq> Now I'm checking out urbandictionary .... I see
16:22 < rcurtin> yeah, I hadn't thought of that, but when I checked urbandictionary I understood his concern and went with 'dealgood' instead :)
16:23 -!- HashCoder [1b3ef8bf@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
16:26 -!- rajat503 [7a0fc87e@gateway/web/freenode/ip.] has quit [Quit: Page closed]
16:31 < kris1> zoq: can you explain this again? I am not able to understand this
16:31 < kris1> what was wrong with the implementation that I gave
16:32 < kris1> I did read the mail log, but I got confused by the variadic templates.
16:35 -!- HashCoder [1b3ef8bf@gateway/web/freenode/ip.] has joined #mlpack
16:40 < kris1> line 33: you mention to use the specified policy; does that mean you will have to implement the Optimizer method for every policy? Or are we going to do stepSize = stepSize * Policy.decay_learning_rate?
16:40 < kris1> okay writing that i think i got your idea
16:40 -!- Sinjan_ [721de187@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
16:42 < zoq> kris1: There is nothing wrong with your implementation; some methods expect that the optimizer has a single template parameter (the type of the function), so you can't just use Optimizer<typename, typename> when it expects Optimizer<typename>.
16:42 < zoq> kris1: The idea is to create an alias and to use default values for the other template parameters. The variadic template idea is an extension, that allows us to pass an Optimizer with multiple template parameters, but in this case, we have to modify some of the existing code.
16:44 < kris1> oh right, I get that. Yes, that seems like an important issue.
16:44 -!- topology [3d0c4bd1@gateway/web/freenode/ip.] has joined #mlpack
16:45 < kris1> zoq: 1. The parameters in all the constructors of all the policy classes have to have the same type, and the number of params to each constructor must be equal, right?
16:46 < zoq> kris1: right
16:46 < kris1> line 33: you mention to use the specified policy; does that mean you will have to implement the Optimizer method for every policy? Or are we going to do stepSize = stepSize * Policy.decay_learning_rate?
16:46 < kris1> is this line of thought correct
16:47 < zoq> Can you send me the link?
16:47 < zoq> not sure we are looking at the same file
16:47 < kris1>
16:49 -!- chvsp [cb6ef216@gateway/web/freenode/ip.] has joined #mlpack
16:50 < zoq> We are going with stepSize = stepSize * Policy.decay_learning_rate. Every policy defines another approach to update the learning rate.
16:52 < zoq> kris1: At the end we have an alias for: NAdam<FunctionType, PolicyOne> another one for NAdaMax<FunctionType, PolicyTwo> ...
16:53 < zoq> kris1: The policy implements the update strategy and we can reuse the Basic NAdam optimizer class, which uses the Policy to update the learning rate.
16:57 < chvsp> Hi @zoq, I couldn't find any implementation of BatchNorm layer in the current codebase. I think it would be a great addition as many of the recent papers have it in their architectures. Your thoughts?
16:58 < zoq> chvsp: I agree, would be great to have an implementation.
17:01 [Users #mlpack]
17:01 [ aashay ] [ cult- ] [ indra ] [ lozhnikov ] [ shikhar ] [ wiking]
17:01 [ aman11dh ] [ diehumblex] [ K4k ] [ mikeling ] [ Thyrix ] [ zoq ]
17:01 [ arunreddy ] [ gtank ] [ kesslerfrost] [ nikhilweee] [ topology ]
17:01 [ biswajitsc] [ HashCoder ] [ kris1 ] [ qdqds ] [ vinayakvivek]
17:01 [ chvsp ] [ huyssenz ] [ layback ] [ rcurtin ] [ vivekp ]
17:01 -!- Irssi: #mlpack: Total of 27 nicks [0 ops, 0 halfops, 0 voices, 27 normal]
17:03 < rcurtin> arunreddy: I looked through the build log for the LogisticRegression problem you are having, are you sure that the LogisticRegression class hasn't been modified?
17:03 < rcurtin> the line that's confusing me is
17:03 < rcurtin> template<template<class> class typedef OptimizerType OptimizerType>
17:03 -!- Thyrix [2d4c4a21@gateway/web/freenode/ip.] has quit [Quit: Page closed]
17:03 < rcurtin> that doesn't appear in the master code, but it seems like it's in the gcc error messages, so I am thinking, maybe the code was modified?
17:04 < rcurtin> also, I clicked on your webpage, I see you will be at SDM 2017, I will be there also :)
17:06 < chvsp> zoq: I will look into it. I need to brush up on the nitty gritty of the math. I will code it, if possible, or else I will open an issue. Sounds good?
17:08 -!- chvsp [cb6ef216@gateway/web/freenode/ip.] has quit [Quit: Page closed]
17:09 < zoq> rcurtin: I tested it and I haven't modified the LogisticRegression code. I used:
17:09 < zoq>
17:09 < zoq> StandardSGD<LogisticRegressionFunction<> > sgdOpt(lrf); in the main.cpp file for a quick test and could see the same error.
17:09 < zoq> chvsp: Sounds good for me.
17:09 < rcurtin> ah, got it, let me try that
17:13 -!- topology [3d0c4bd1@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
17:13 -!- topology [3d0c4bd1@gateway/web/freenode/ip.] has joined #mlpack
17:14 -!- nate23 [67d98527@gateway/web/freenode/ip.] has joined #mlpack
17:17 < kris1> zoq: okay, but I think we have to define the Optimizer class again, right? Because for SGD we already have the Optimizer method implemented; we can't use its code directly. We will have to modify it and write Optimizer(const PolicyType& policy)
17:18 < kris1> in the sgd.hpp file we will in the end also have to define the alias, e.g. SGD<FunctionType, PolicyType1> sgd_nodecay(), etc.
17:20 < zoq> kris1: Not sure I get you point, but yes you have to write another optimizer class for the NAdam case. We can't use the SGD class for Adam.
17:20 < kris1> I am implementing the learning decay rates for sgd not adam
17:20 < rcurtin> zoq: ok, I can reproduce this, this is strange...
17:21 < kris1> i think you are confusing this with something else.
17:22 < topology> rcurtin: i spent the last 2 days brushing up the fundamentals of kernels and KDE. i also read the section on KDE from your thesis.
17:22 < kris1> annealing decay rates for SGD, which we talked about the day before yesterday.
17:22 < topology> you referenced a paper by Gray & Moore there
17:22 < topology> "Nonparametric Density Estimation: Toward Computational Tractability"
17:22 < zoq> kris1: I think I am, sorry.
17:23 < rcurtin> topology: yeah, that was the original dual-tree KDE paper... I think that it is a bit difficult to follow but the ideas are all there...
17:25 < zoq> kris1: Okay, so yes we write an alias for the decay cases, just as we do with momentum.
17:26 < kris1> I have a better idea than implementing an Optimizer class: can I not just overload the Optimizer method in the SGD class with something like template<typename Policy> Optimizer(..., Policy p1)? But I would have to implement the overloaded function for that, and we would basically have a lot of code duplication.
17:26 < topology> so i guess "Far-field compression for fast kernel summation methods in high dimensions" by Bill March is where i should start?
17:26 < zoq> kris1: Does that answer your question?
17:26 -!- arunreddy_ [81b0c514@gateway/web/freenode/ip.] has joined #mlpack
17:27 < zoq> kris1: And yes we have to modify the SGD class, so that we can use different polcies.
17:27 < kris1> Because in both methods (Optimizer class and overloading the Optimizer method) there will be code duplication
17:27 -!- mikeling [uid89706@gateway/web/] has quit [Quit: Connection closed for inactivity]
17:27 < arunreddy_> Hi rcurtin, I haven't made any changes to the code modifying SGD to StandardSGD.
17:28 < rcurtin> topology: once you feel that you understand dual-tree KDE, I'd consider just implementing that with no special modifications, because it'll get you more familiar with the code
17:28 < rcurtin> and then from there you can start to consider Bill's paper and other approaches
17:29 < arunreddy_> rcurtin: Awesome, we can meet at the conference. Count me in for your talk :)
17:29 < rcurtin> arunreddy_: yeah, zoq showed me the modification to make to reproduce
17:30 < zoq> kris1: You mean we have to duplicate the Optimizer function?
17:31 < rcurtin> sounds good about the talk, I will try and make it interesting :) I am looking forward to seeing the presentation for your paper also
17:31 < kris1> zoq: Just a min i will write a gist and send it. That would make things much clear
17:31 < zoq> kris1: okay
17:33 < zoq> arunreddy_: Have you pushed the momentum sgd code? Maybe kris could take a look?
17:35 < arunreddy_> rcurtin: you are welcome. me too.
17:36 < arunreddy_> zoq: Yes, its here..
17:37 < kris1> okay just a question can you overload a function like this double Optimize(arma::mat& iterate); template<typename Policy> double Optimize(arma::mat& iterate).
17:38 -!- huyssenz_ [uid215710@gateway/web/] has joined #mlpack
17:38 < kris1> or template<typename Policy> double Optimize(arma::mat& iterate, Policy P)
17:38 < kris1> I doubt we could do that.
17:39 < kris1> thanks arun i will have a look
17:39 < arunreddy_> kris1: how about adding stepSize = decayPolicyType.GetStepSize(...) at line 117 in
17:40 < arunreddy_> in the optimizer iteration code.
17:40 < zoq> kris1: arunreddy_ is working on momentum and I think you can do the same thing here is the link:
17:40 < zoq> yes
17:41 < arunreddy_> and the StandardSGD can be something like.. StandardSGD = SGD<FunctionType, EmptyUpdate, NoDecay>
17:41 < rcurtin> arunreddy_: zoq: minimum working example:
17:41 < rcurtin> or, I guess, "minimum failing example"
17:42 < rcurtin> if I change it to a.C<H>(h) it compiles fine
17:43 < rcurtin> implying that if logistic_regression_main.cpp was changed to read lr.Train<StandardSGD>(sgdOpt); it would compile
17:43 < rcurtin> but the compiler should be able to deduce the correct type...
17:43 -!- Netsplit *.net <-> *.split quits: huyssenz, nikhilweee
17:43 -!- huyssenz_ is now known as huyssenz
17:45 < kris1> arunreddy_: that's what I wanted to do, but I think we were not going to change template<typename functiontype>. But in your implementation you have done that, so yes, I think we could do that.
17:47 < arunreddy_> rcurtin: perfect. It compiles now.
17:47 < arunreddy_> And checked for template<typename D, typename E, typename G, typename H> class F { };
17:47 < rcurtin> yeah, I just am not sure that is how it should be... intuitively the original code should compile
17:47 < arunreddy_> multiple policy scenario.. compiles fine..
17:47 < rcurtin> so now I want to see if this is a gcc bug or if it's correct according to the standard
17:48 < zoq> We can't be the first one who encountered this, it's strange.
17:48 < zoq> btw. same with clang
17:48 < rcurtin> yeah, clang fails also
17:48 -!- qwe [6a338f13@gateway/web/freenode/ip.] has joined #mlpack
17:51 < kris1> zoq: check this out. This is what i meant
17:51 < kris1> arunreddy_: maybe you could also have a look....
17:52 < zoq> If this is standard-conform, I'll probably modify all places where we do the same and go with variadic templates; manually specifying the optimizer type isn't that intuitive.
17:54 < zoq> kris1: Ah I see, yeah, that would result in a lot of code duplication. And you have to modify the SGD class every time you add another policy.
17:54 < zoq> kris1: Also, if you link against mlpack, you can't use your own policy.
17:55 < zoq> Ah I think you can
17:56 -!- topology [3d0c4bd1@gateway/web/freenode/ip.] has quit [Quit: Page closed]
17:56 < kris1> Another policy, meaning adding something new to the SGD class other than the learning rate? Yes, that would create a lot of duplication in sgd_impl.hpp
17:58 < kris1> This came to my mind for backward compatibility purposes; otherwise I was also in favor of using template<typename t1, typename t2> SGD for adding policies, but that broke the existing code.
17:59 < kris1> I wish we had something similar to the decorator pattern in Python; that would make life much easier, though slower :)
18:00 < zoq> I think, as long as we provide some default parameters, we provide backward compatibility.
18:03 < arunreddy_> zoq: backward compatibility is a good point. So, the non-mlpack code with references to SGD is going to break.
18:04 < arunreddy_> Do we do something to support that (or) expect them to change to the default StandardSGD.
18:04 < kris1> zoq: Yes, that's true. Maybe something like this
18:06 < kris1> arunreddy_: but we have template<typename functiontype, typename momentum = default, typename decay = default1>.
18:06 < kris1> then even if somebody was using SGD<FunctionType> s(), it would work, right?
18:10 < rcurtin> well, I don't like using people's time on stackoverflow, but I was not able to find any relevant documentation in the standard to answer my question so I decided to ask:
18:10 < rcurtin>
18:10 < arunreddy_> kris1: So the optimizer type being used in other parts of the codebase, like regularized_svd and logistic_regression, expects only one "template template parameter"
18:10 < rcurtin> hopefully this will garner a response that can help us figure out what is actually going on here
18:12 < zoq> We could also do something similar as we did with PCA (line 145) but instead of a typedef we use an alias, which worked so well :)
18:12 < zoq> Here is the link:
18:12 < arunreddy_> kris1: function type param only. So we use StandardSGD alias with only one template param. Let me know if you need more clarity on this.
18:14 < arunreddy_> zoq: That looks a lot cleaner.
18:14 < arunreddy_> Avoids changes across the codebase.
18:19 < kris1> arunreddy:SGD<LogisticRegressionFunction<>> sgd(lrf, 0.005, 500000, 1e-10) does this break with default parameters set
18:19 -!- nate23 [67d98527@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
18:19 < kris1> arunreddy_:
18:20 < arunreddy_> kris1: Yes
18:20 < arunreddy_> zoq: Will typedef work with templates?
18:20 -!- HashCoder [1b3ef8bf@gateway/web/freenode/ip.] has quit [Quit: Page closed]
18:21 -!- HashCoder [1b3ef8bf@gateway/web/freenode/ip.] has joined #mlpack
18:21 < kris1> ohh....strange.....
18:22 -!- flyingpot [~flyingpot@] has joined #mlpack
18:23 < zoq> arunreddy_: Not sure what you mean.
18:23 < arunreddy_> you can refer to the following email thread..
18:23 -!- generic_name_ [1b053696@gateway/web/freenode/ip.] has joined #mlpack
18:24 < arunreddy_> template<typename DecomposableType> typedef SGDType<DecomposableType, EmptyUpdate, NoDecay> SGD;
18:24 < arunreddy_> zoq
18:26 < zoq> unfortunately no
18:26 -!- flyingpot [~flyingpot@] has quit [Ping timeout: 240 seconds]
18:27 < arunreddy_> so that makes the suggested approach unusable.
18:28 < zoq> We can't use typedef, we have to use "using", if that's what you mean.
18:28 < arunreddy_> yeah. we have to use using.
18:32 < kris1> arunreddy_: so right now the problem is that logistic regression etc. are breaking, or have we addressed that by using the StandardSGD alias? Won't that require refactoring of all the code that uses SGD?
18:35 < arunreddy_> kris1: We have addressed it using StandardSGD alias. But unfortunately that was breaking some other parts of the code, rcurtin suggested the fix a while ago.
18:35 < arunreddy_> Tune into discussion on
18:36 < HashCoder> As no tickets are available for the "Essential Deep Learning" modules, could someone point out an easy bug for me to get started with?
18:46 -!- aashay [uid212604@gateway/web/] has quit [Quit: Connection closed for inactivity]
18:49 < zoq> HashCoder: You can look through the list of open issues and see if there is any issue you think you can solve. The issues are generally tagged with difficulty.
18:50 -!- trapz [~jb^] has joined #mlpack
18:56 < kris1> Okay, then I will wait till the issue gets resolved; till then I will work on something else.
18:57 < kris1> Maybe I could complete my Xavier init method; that's long pending....
19:08 -!- HashCoder [1b3ef8bf@gateway/web/freenode/ip.] has quit [Quit: Page closed]
19:22 -!- shihao [80b49470@gateway/web/freenode/ip.] has joined #mlpack
19:23 -!- flyingpot [~flyingpot@] has joined #mlpack
19:24 < shihao> Hi, is anyone there? I think I have finished issue#593 and I am not sure about the output formats of posterior.
19:25 < shihao> Should I add another command option and output posteriors to a file like results of classify?
19:26 < shihao> And how should I write a test for this enhancement? Thanks :)
19:27 -!- flyingpot [~flyingpot@] has quit [Ping timeout: 240 seconds]
19:33 -!- diehumblex [uid209517@gateway/web/] has quit [Quit: Connection closed for inactivity]
19:42 -!- aditya_ [~aditya@] has joined #mlpack
19:43 -!- aditya_ [~aditya@] has quit [Client Quit]
19:43 -!- aditya_ [~aditya@] has joined #mlpack
19:46 -!- govg [~govg@unaffiliated/govg] has joined #mlpack
19:49 < govg> rcurtin: Nice, just saw the link. When do rebuttals start?
19:50 -!- aditya_ [~aditya@] has quit [Ping timeout: 260 seconds]
19:58 -!- kesslerfrost [~textual@2405:204:18c:2092:ed57:bc0e:9fbd:d6a3] has quit [Quit: kesslerfrost]
20:01 < rcurtin> govg: no idea, probably a couple months
20:02 -!- qwe [6a338f13@gateway/web/freenode/ip.] has quit [Quit: Page closed]
20:20 < kris1> zoq: any friendly intros to visitor patterns that are easy to understand and have code?
20:20 -!- generic_name_ [1b053696@gateway/web/freenode/ip.] has quit [Quit: Page closed]
20:20 < kris1> basically want to implement a visitor
20:21 -!- shikhar [67d49d1f@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
20:26 -!- drewtran_ [4b8e60c6@gateway/web/freenode/ip.] has joined #mlpack
20:26 -!- drewtran [4b8e60c6@gateway/web/freenode/ip.] has joined #mlpack
20:26 -!- drewtran_ [4b8e60c6@gateway/web/freenode/ip.] has quit [Client Quit]
20:36 -!- arunreddy_ [81b0c514@gateway/web/freenode/ip.] has quit [Quit: Page closed]
20:39 -!- diehumblex [uid209517@gateway/web/] has joined #mlpack
20:41 -!- aashay [uid212604@gateway/web/] has joined #mlpack
20:58 -!- shihao [80b49470@gateway/web/freenode/ip.] has quit [Quit: Page closed]
21:00 < kris1> are there any tests for the visitor class
21:01 < kris1> :zoq
21:03 -!- Sinjan_ [721de187@gateway/web/freenode/ip.] has joined #mlpack
21:21 -!- shihao [80b49470@gateway/web/freenode/ip.] has joined #mlpack
21:23 < shihao> Hi guys, I have created PR for issue#593: Please kindly review it and let me know if there is any problem. Thanks :)
21:24 -!- flyingpot [~flyingpot@] has joined #mlpack
21:26 -!- shihao [80b49470@gateway/web/freenode/ip.] has quit [Client Quit]
21:29 -!- flyingpot [~flyingpot@] has quit [Ping timeout: 240 seconds]
21:53 < kris1> can you explain this
21:53 < kris1> HasParametersCheck<T, P&(T::*)()>::value
21:53 < kris1> what is the use of P&(T::*)() here
21:55 -!- trapz [~jb^] has quit [Quit: trapz]
21:59 -!- shihao [6b4da106@gateway/web/freenode/ip.] has joined #mlpack
22:03 < kris1> zoq: Here is my implementation of the fanin_visitor can you take a look
22:04 -!- shihao [6b4da106@gateway/web/freenode/ip.] has quit [Ping timeout: 260 seconds]
22:04 < kris1> also I don't understand how we are checking with std::enable_if; can you help with that
22:13 < kris1> forget the link
22:14 -!- Sinjan_ [721de187@gateway/web/freenode/ip.] has quit [Quit: Page closed]
22:17 -!- nikhilweee [~nikhilwee@] has joined #mlpack
22:20 < kris1> zoq: Look like i am only one active right now.Maybe when you get time you could answer these questions.
22:20 -!- kris1 [~kris@] has left #mlpack []
22:25 -!- vinayakvivek [uid121616@gateway/web/] has quit [Quit: Connection closed for inactivity]
23:12 -!- prasanna082 [~hermes@] has joined #mlpack
23:14 -!- prasanna082 [~hermes@] has quit [Client Quit]
23:15 -!- prasanna082 [~hermes@] has joined #mlpack
23:20 -!- trapz [~jb^] has joined #mlpack
23:22 -!- trapz [~jb^] has quit [Client Quit]
23:43 -!- trapz [~jb^] has joined #mlpack
--- Log closed Sat Mar 04 00:00:27 2017