[mlpack] GSoC 2014 simulated annealing optimizer

Zhihao Lou lzh1984 at gmail.com
Sun Mar 23 22:18:39 EDT 2014


Hi Ryan,

Thanks for the reply. It's my own fault for missing the deadline; I should
have known better.

That said, would you please give me some instructions on how I should
start contributing? Should I start with a branch or with the trunk? My
guess is that my code will belong in src/mlpack/core/optimizers/sa, and
that the interface should closely resemble those of the existing
optimizers. But what is the proper way to get information about the
problem being optimized, such as the number of dimensions or the range of
each parameter? I could always take these as constructor parameters, but
it would be better to get them directly from the function itself, if
that's possible.

Best regards,

Zhihao Lou


On Sun, Mar 23, 2014 at 8:53 PM, Ryan Curtin <gth671b at mail.gatech.edu> wrote:

> On Sun, Mar 23, 2014 at 11:22:30AM -0500, Zhihao Lou wrote:
> > Dear all,
> >
> > I'm a PhD student in the Computer Science Department at the University
> > of Chicago, currently working on a scalable parallel simulated
> > annealing algorithm, on which I recently presented a poster at SIAM
> > Parallel Processing 2014. (Anyone interested in my poster, please feel
> > free to let me know. The file is a few megabytes, so I'm not sure if I
> > can send it to the mailing list.) I was greatly excited when I learned
> > that you need someone to work on a simulated annealing optimizer, from
> > your Google Summer of Code 2014 ideas posting. I believe my
> > understanding of the simulated annealing algorithm itself, its
> > parallelization, and its implementation can add greatly to your
> > project.
> >
> > On the other hand, my committee requires me to test against many more
> > problems than the one test problem I showed in my poster. Working on
> > your project will help me learn how to structure an optimization
> > algorithm as a general-purpose C++ library, which I urgently need. I
> > believe such a collaboration will be mutually beneficial.
> >
> > Currently, I have a working C++ template-based implementation, both
> > serial and parallel (using MPI), that aims at maximum modularity
> > rather than performance. Naturally, as an ongoing research project,
> > I'll have to test the effects of different cooling schedules or move
> > generation independently of the other components in the code. Besides
> > the genetic pattern formation model (which is the main reason behind
> > this project), I tested it on the Rastrigin function
> > http://en.wikipedia.org/wiki/Rastrigin_function and it can solve a
> > 1000-dimensional problem in 40 minutes, with final results around
> > 0.0005 given 0 as the theoretical optimum. (And as I said, there's a
> > lot to gain in performance once the algorithm settles.)
> >
> > Although I have never used or worked with your mlpack library, I'm
> > confident I can learn quickly. Any help on where I should start will
> > be deeply appreciated.
> >
> > In addition, there's a question I'd like to ask. After glancing at
> > the paper presented at the BigLearning workshop, I feel that
> > scalability in that paper means something different from what I'm
> > familiar with, i.e., parallel scalability. Can someone please
> > elaborate a little bit on this, and tell me what the ideal use case
> > of mlpack would be?
> >
> > I believe I'll be a perfect fit for this GSoC14 project, and I'm
> > looking forward to working with you.
>
> Hi Zhihao,
>
> I'm sorry to say that the deadline for GSoC proposal submission was last
> Friday, and submission was not to be done on the mailing list anyway,
> but instead through the GSoC Melange website.
>
> Scalability in the paper refers to the idea that the implemented
> algorithms are more scalable (i.e., mlpack implements O(n log n)
> algorithms instead of the more common O(n^2) implementations).  This is
> different from the usual parallel meaning of the word, but it was used
> to differentiate it from "fast" or "efficient" (which only imply that
> the implementations are well-written, not that the algorithms are
> better).  In the future it is hoped that mlpack can be made parallel,
> but it is difficult to do this, especially while maintaining API
> cleanliness.
>
> At the same time, it is not a requirement to be a GSoC student to
> contribute to mlpack, so feel free to contribute.
>
> Thanks,
>
> Ryan
>
> --
> Ryan Curtin    | "Are you or are you not the Black Angel of Death?"
> ryan at ratml.org |   - Steve
>