[mlpack] Ready to use Models in mlpack (GSOC '21)

Aakash kaushik kaushikaakash7539 at gmail.com
Sun Mar 28 11:14:51 EDT 2021


Adding to the previous email: it should be easily possible to add both
versions of DeepLabV3, with the ResNet-50 backbone and with the
ResNet-101 backbone, because they are built from the same blocks and
differ only in their parameters and the number of layers. So, beyond
the points mentioned in the previous mail, I also want to discuss the
scope of the project: should I keep it concretely to a data loader and
to predefined, but not pre-trained, models, namely DeepLabV3 with the
ResNet-50 and ResNet-101 backbones?
To make it clear, for a first iteration I would like to propose adding
the following (a rough sketch of the shared-block idea follows the
list):
1. A segmentation data loader.
2. DeepLabV3 (ResNet-50 and ResNet-101 backbones), predefined but not
pre-trained.
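
Here is a minimal sketch of the shared-block idea, assuming a
hypothetical AddBottleneckStage helper (nothing here is existing
mlpack API); only the per-stage block counts differ between the two
backbones:

#include <cstddef>
#include <iostream>
#include <vector>

// Stand-in for adding one bottleneck stage to the network; the real
// version would call model.Add<...>() for every layer in the stage.
void AddBottleneckStage(const std::size_t numBlocks,
                        const std::size_t outChannels)
{
  std::cout << "stage: " << numBlocks << " blocks, "
            << outChannels << " output channels\n";
}

// Both backbones share this builder; only blocksPerStage differs.
void BuildResNetBackbone(const std::vector<std::size_t>& blocksPerStage)
{
  const std::size_t channels[] = {256, 512, 1024, 2048};
  for (std::size_t i = 0; i < blocksPerStage.size(); ++i)
    AddBottleneckStage(blocksPerStage[i], channels[i]);
}

int main()
{
  BuildResNetBackbone({3, 4, 6, 3});   // ResNet-50
  BuildResNetBackbone({3, 4, 23, 3});  // ResNet-101
}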

I know we previously talked about adding another model if time
permits, but I am not really sure about that, so I wanted to discuss
these points and decide whether I should add the third model to my
proposal, mention the time constraint, and let it remain a potential
model, or whether I should drop the idea of a third model entirely.

Best,
Aakash

On Sun, Mar 28, 2021 at 2:33 PM Aakash kaushik <kaushikaakash7539 at gmail.com>
wrote:

> Hey Everyone,
>
> This is a follow-up mail regarding my proposal. I have been taking a
> deeper look and found that PyTorch implements these segmentation
> models on top of its pre-existing backbone pipelines. I proposed to
> implement three things: one data loader for the segmentation task and
> two models, of which one would be a potential model only if time
> permits. The way I see it, at present they can be implemented in
> mlpack by creating the blocks and then building complete models,
> because no customizable backbones that could simply be called exist
> as of now. This opens another question: DeepLabV3 in PyTorch comes
> with three backbones, ResNet-50, ResNet-101, and MobileNetV3-Large,
> and since COCO is such a huge dataset and the tool for converting
> weights from PyTorch to mlpack is a bit flaky, should I go with a
> proposal that includes predefined models which can be trained, rather
> than pre-trained models, for now? If time permits, we can add the
> weights for COCO in the future (a rough sketch of such a predefined
> model follows below). These were some of the doubts I had while
> writing the exact details of my proposal, and I will be glad to have
> a discussion about how concrete it is, whether you see a problem with
> how they can be implemented, or any other doubts or questions.
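>
> To make the "predefined but not pre-trained" idea concrete, here is a
> rough sketch; the class shape and all names are hypothetical, not
> existing mlpack API, and loading converted weights stays a separate,
> later step:
>
> #include <cstddef>
> #include <string>
>
> template<typename NetworkType>
> class DeepLabV3
> {
>  public:
>   // backbone: "resnet50" or "resnet101".
>   DeepLabV3(const std::string& backbone, const std::size_t numClasses)
>   {
>     // The real version would add the backbone blocks and the ASPP
>     // head to `network` here, e.g. via network.Add<...>() calls.
>   }
>
>   // Future hook: load converted COCO weights from disk.
>   void LoadWeights(const std::string& path);
>
>   NetworkType& Model() { return network; }
>
>  private:
>   NetworkType network;
> };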
>
> Best,
> Aakash
>
> On Thu, Mar 18, 2021 at 5:35 AM Aakash kaushik <
> kaushikaakash7539 at gmail.com> wrote:
>
>> Hey Marcus,
>>
>> I totally got it; I think one data loader and two models, of which
>> one is a potential model only if time permits, works well.
>>
>> Thank you for the feedback and help. :D
>>
>> Best,
>> Aakash
>>
>>
>> On Wed, 17 Mar, 2021, 9:33 pm Marcus Edel, <marcus.edel at fu-berlin.de>
>> wrote:
>>
>>> Yes, that’s what I had in mind, but in the end it’s your decision.
>>> About the model, either is fine; you can select whatever you find
>>> more interesting.
>>>
>>> On 17. Mar 2021, at 10:45, Aakash kaushik <kaushikaakash7539 at gmail.com>
>>> wrote:
>>>
>>> Hi Marcus,
>>>
>>> Thank you so much for getting back to me. So, just to clarify, I
>>> would keep the deliverables to just two, which will be:
>>>
>>> 1. A semantic segmentation data loader in the format of the COCO
>>> dataset.
>>> 2. One semantic segmentation model.
>>>
>>> If I understood you correctly, will you be able to help me decide
>>> which kind of model I should add? Should I go for a model that is
>>> more generally used, such as U-Net, or one from the PyTorch list in
>>> my first mail?
>>>
>>> Best,
>>> Aakash
>>>
>>> On Wed, Mar 17, 2021 at 7:55 PM Marcus Edel <marcus.edel at fu-berlin.de>
>>> wrote:
>>>
>>>> Hello Aakash,
>>>>
>>>> thanks for the interest in the project and all the contributions;
>>>> what you proposed looks quite useful to me and, as you already
>>>> pointed out, would integrate really well with some of the existing
>>>> functionality.
>>>>
>>>> I guess for loading segmentation datasets we will stick with a
>>>> common format, e.g. COCO, add support for it in the data loader,
>>>> and potentially add support for other formats later?
>>>>
>>>> One remark about the scope: you might want to remove one model
>>>> from the list and add a note to the proposal along the lines of:
>>>> if there is time left at the end of the summer, I propose to work
>>>> on z, but the focus is on x and y.
>>>>
>>>> I hope what I said was useful; please don't hesitate to ask if anything
>>>> needs clarification.
>>>>
>>>> Thanks,
>>>> Marcus
>>>>
>>>> On 16. Mar 2021, at 00:16, Aakash kaushik <kaushikaakash7539 at gmail.com>
>>>> wrote:
>>>>
>>>> Hey everyone,
>>>>
>>>> My name is Aakash Kaushik <https://github.com/Aakash-kaushik> and
>>>> I have been contributing for some time, specifically to the ANN
>>>> codebase in mlpack.
>>>>
>>>> The project idea "Ready to use Models in mlpack" piques my
>>>> interest. Initially, I would like to propose a data loader and two
>>>> models for semantic segmentation, because the data loaders for
>>>> image classification and object detection are already there, and
>>>> including a couple of models and a data loader for semantic
>>>> segmentation in GSoC will open the gates for further contribution
>>>> of models in all three fields: future contributors would only need
>>>> to worry about the model, not loading the data, and would also
>>>> have some reference models in each field.
>>>>
>>>> The data loader would be capable of taking in image segmentation
>>>> data, that is, the real image, the segmentation map, and the
>>>> segmentation-map-to-class mapping (a rough interface sketch
>>>> follows the list below). For the models, I am a bit unsure whether
>>>> we want basic nets such as U-Net, a combination of a basic net and
>>>> a state-of-the-art model, or two state-of-the-art models. PyTorch
>>>> supports a couple of models in the semantic segmentation field,
>>>> which are:
>>>>
>>>> 1. FCN: ResNet-50, ResNet-101
>>>> 2. DeepLabV3: ResNet-50, ResNet-101, MobileNetV3-Large
>>>> 3. LR-ASPP: MobileNetV3-Large
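>>>>
>>>> For the loader itself, here is a placeholder signature sketch;
>>>> none of these names exist in mlpack yet, and the exact types are
>>>> up for discussion:
>>>>
>>>> #include <map>
>>>> #include <string>
>>>>
>>>> #include <mlpack/core.hpp>
>>>>
>>>> void LoadSegmentationData(
>>>>     const std::string& imageDirectory,  // directory of real images
>>>>     const std::string& annotationFile,  // COCO-style JSON with masks
>>>>     arma::mat& images,                  // one flattened image per column
>>>>     arma::mat& segmentationMaps,        // per-pixel class labels
>>>>     std::map<std::string, size_t>& classMapping); // name -> label id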
>>>>
>>>> I should be able to convert their weights from PyTorch to mlpack
>>>> by modifying the utility created by Kartik Dutt,
>>>> mlpack-PyTorch-Weight-Translator
>>>> <https://github.com/kartikdutt18/mlpack-PyTorch-Weight-Translator>.
>>>>
>>>> I am trying to keep the deliverables to just three, that is, a
>>>> data loader and two models, as the GSoC period is reduced to just
>>>> 1.5 months, and for these three things I would have to write
>>>> tests, documentation, and example usage in the examples
>>>> repository.
>>>>
>>>> Before this work: as we are in the process of removing boost
>>>> visitors from the ANN codebase and have had a couple of major
>>>> changes to the mlpack codebase, the models repo wasn't able to
>>>> keep up, so my main goal before GSoC starts would be to work on
>>>> the PR to swap boost::variant with a vtable
>>>> <https://github.com/mlpack/mlpack/pull/2777>, and then change the
>>>> code in the models repo to adjust for the change in boost
>>>> visitors, fix serialization, and port the tests to Catch2 (a small
>>>> illustration of the dispatch change is below).
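>>>>
>>>> Just to illustrate the direction of that dispatch change (this is
>>>> not mlpack's actual code, only the general pattern of replacing
>>>> visitor dispatch with a virtual interface):
>>>>
>>>> #include <memory>
>>>> #include <vector>
>>>>
>>>> // With a vtable, every layer implements a common interface, so
>>>> // the network calls Forward() directly instead of going through
>>>> // boost::apply_visitor with one visitor class per operation.
>>>> struct Layer
>>>> {
>>>>   virtual ~Layer() { }
>>>>   virtual void Forward() = 0;
>>>> };
>>>>
>>>> struct Convolution : Layer { void Forward() override { /* ... */ } };
>>>> struct Linear : Layer { void Forward() override { /* ... */ } };
>>>>
>>>> int main()
>>>> {
>>>>   std::vector<std::unique_ptr<Layer>> network;
>>>>   network.emplace_back(new Convolution());
>>>>   network.emplace_back(new Linear());
>>>>   for (auto& layer : network)
>>>>     layer->Forward();  // plain virtual dispatch, no visitor needed.
>>>> }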
>>>>
>>>> I wanted to hear from you whether this is the right path and
>>>> whether the number of deliverables is right, and to get help
>>>> choosing the exact models to pick that would be most helpful or
>>>> beneficial to the library.
>>>>
>>>> Best,
>>>> Aakash
>>>> _______________________________________________
>>>> mlpack mailing list
>>>> mlpack at lists.mlpack.org
>>>> http://knife.lugatgt.org/cgi-bin/mailman/listinfo/mlpack
>>>>
>>>>
>>>>
>>>