[mlpack] Update on GSoC after two weeks.

Abhinav Anand abhinav.anand2807 at gmail.com
Thu Jun 24 10:44:04 EDT 2021


Hey everyone,

This is an update on what has been done in my GSoC project so far. Currently
I am well ahead of my proposed plan, since I already started working on the
project during the community bonding period. Here is the list of PRs that
have been made towards the project so far:

*Merged:*
1. faster forward pass of mean pool layer
<https://t.mailpgn.com/l/?u=9ed49a9e-df94-420f-92c7-311167782bb0&fl=https%3A%2F%2Fgithub.com%2Fmlpack%2Fmlpack%2Fpull%2F2956>
2. improved speed of mean_backward under certain condition
<https://t.mailpgn.com/l/?u=eaa77d81-5bdf-4e59-8af6-fafd4d31f4fb&fl=https%3A%2F%2Fgithub.com%2Fmlpack%2Fmlpack%2Fpull%2F2957>
3. Improved speed of lp forward pass
<https://t.mailpgn.com/l/?u=4bb26082-2fa5-40d0-9fd2-c961bb1af6ad&fl=https%3A%2F%2Fgithub.com%2Fmlpack%2Fmlpack%2Fpull%2F2989>

*Open:*
1. implemented channel shuffle
<https://t.mailpgn.com/l/?u=36f39a62-c67f-493a-bb4c-c25d5e560e58&fl=https%3A%2F%2Fgithub.com%2Fmlpack%2Fmlpack%2Fpull%2F2959>
(Done; it has been approved and is ready to merge.)
2. Nearest Interpolation(new layer) and Bilinear Interpolation(Fix)
<https://t.mailpgn.com/l/?u=4e7ecd04-c0a7-473c-b414-ad9299e19db5&fl=https%3A%2F%2Fgithub.com%2Fmlpack%2Fmlpack%2Fpull%2F2951>

*Brief explanation:*
(1) For the faster forward pass of *mean_pool* and *lp_pool*, I am using the
idea of prefix sums. I will briefly explain it for *mean_pool*; the idea is
similar for *lp_pool*:

The most time-consuming part of the forward pass is computing the average
over a kernel window. Using brute force this takes k*k additions per window,
and if there are N windows the total cost is N*k*k additions. With some
precomputation we can reduce this number significantly. Steps involved:
* Take a copy of the input called C
* Do prefix sum over columns on this copy
* Do prefix sum over rows on this copy
Let's say a kernel window covers C(span(i, i + k - 1), span(j, j + k - 1)).
Then the sum of this window is
C(i + k - 1, j + k - 1) + C(i - 1, j - 1) - C(i - 1, j + k - 1) - C(i + k - 1, j - 1).
So we can compute each window sum in just 3 operations, which gives the
speed boost to the forward pass. Also note that the precomputation cost is
of order N*N for an N x N input, but if we use Armadillo functions this cost
becomes negligible; we have confirmed this experimentally.
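
For reference, below is a minimal sketch of this summed-area-table idea in
plain Armadillo, assuming stride 1 and no padding; the function name
MeanPoolPrefixSum is only illustrative and is not the actual mlpack
implementation:

#include <armadillo>

// Compute k x k mean pooling (stride 1, no padding) on `input` using a
// summed-area table, so each window mean costs O(1) instead of O(k*k).
arma::mat MeanPoolPrefixSum(const arma::mat& input, const size_t k)
{
  // Build the summed-area table: prefix sums over columns, then rows.
  arma::mat C = arma::cumsum(arma::cumsum(input, 0), 1);

  const size_t outRows = input.n_rows - k + 1;
  const size_t outCols = input.n_cols - k + 1;
  arma::mat output(outRows, outCols);

  for (size_t i = 0; i < outRows; ++i)
  {
    for (size_t j = 0; j < outCols; ++j)
    {
      // Inclusion-exclusion over the window [i, i + k - 1] x [j, j + k - 1].
      double sum = C(i + k - 1, j + k - 1);
      if (i > 0) sum -= C(i - 1, j + k - 1);
      if (j > 0) sum -= C(i + k - 1, j - 1);
      if (i > 0 && j > 0) sum += C(i - 1, j - 1);

      output(i, j) = sum / (k * k);
    }
  }

  return output;
}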

A detailed explanation can be viewed in approach 4 in the given link
<https://t.mailpgn.com/l/?u=7987fd0b-794b-4ff2-bab8-2d7a544c515e&fl=https%3A%2F%2Fleetcode.com%2Fproblems%2Frange-sum-query-2d-immutable%2Fsolution%2F>


(2) For the backward pass:
I have given a detailed explanation in the PR comments; take a look there if
you are interested. A 1-D approach can be viewed here
<https://t.mailpgn.com/l/?u=fdd77a1a-6123-4709-8bd4-09eb276f5f0c&fl=https%3A%2F%2Fwww.geeksforgeeks.org%2Fconstant-time-range-add-operation-array%2F>.
I have extended it to a 2-D matrix.
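
As a rough sketch of that 2-D extension (again assuming stride 1 and no
padding; the name MeanPoolBackwardRangeAdd and its exact signature are
illustrative, not mlpack's actual API): each window writes its gradient to
four corners of a difference matrix, and two cumsum passes then recover the
full input gradient.

#include <armadillo>

// Spread each window's upstream gradient uniformly over its k x k input
// window (stride 1, no padding), using a 2-D difference array so each
// window costs 4 updates instead of k*k.
arma::mat MeanPoolBackwardRangeAdd(const arma::mat& upstream, const size_t k)
{
  const size_t inRows = upstream.n_rows + k - 1;
  const size_t inCols = upstream.n_cols + k - 1;

  // One extra row/column so the "subtract past the end" marks never go
  // out of bounds; it is dropped at the end.
  arma::mat diff(inRows + 1, inCols + 1, arma::fill::zeros);

  for (size_t i = 0; i < upstream.n_rows; ++i)
  {
    for (size_t j = 0; j < upstream.n_cols; ++j)
    {
      const double g = upstream(i, j) / (k * k);

      // Mark the corners of the window [i, i + k - 1] x [j, j + k - 1].
      diff(i, j)         += g;
      diff(i + k, j)     -= g;
      diff(i, j + k)     -= g;
      diff(i + k, j + k) += g;
    }
  }

  // Two prefix-sum passes turn the corner marks into the full gradient.
  arma::mat grad = arma::cumsum(arma::cumsum(diff, 0), 1);
  return grad.submat(0, 0, inRows - 1, inCols - 1);
}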

*Current Blockers:*
The nearest interpolation layer is working fine, and the bilinear layer is
also working fine after the fix; both match the output from PyTorch. But for
some reason one test in the GAN test suite is failing due to the changes in
the bilinear interpolation layer. I haven't been able to figure this out
yet, so I will have to take a closer look.
*My thoughts and further plans for the project:*
This has been an amazing experience so far. It feels great to know that my
contributions will be used by many people around the world. As I am well
ahead of schedule, I will now mostly be contributing over the weekends.
The next part of the project is to implement *bicubic interpolation* and to
debug the *bilinear interpolation* layer to find what is causing the failure
in the GAN test case.
I will also keep posting my updates to this mailing list.

Best regards,
Abhinav Anand

