Build testing with Docker and VMs Week 8

The next steps included running a container from a pre-built image and building mlpack inside a container using a Jenkins plugin; I am using the CloudBees Docker Custom Build Environment Plugin.
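Roughly, the build step that the plugin would run inside the container could look something like the sketch below; the workspace layout and the test target name are placeholders, not the final job configuration.

```
#!/bin/bash
# Sketch of a Jenkins build step executed inside the container once the
# CloudBees plugin wraps the job; the mlpack sources are assumed to be
# checked out into the mounted workspace already.
set -e

mkdir -p build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j4
# A non-zero exit code from the test runner fails the Jenkins build.
bin/mlpack_test
```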

Apart from that, a matrix configuration needs to be created to build images with the different configurations, and mlpack then has to be installed, run, and tested against each image.
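The Jenkins matrix job will drive the sweep itself; the loop below is only an illustration of the idea, with example version numbers and a hypothetical gen_dockerfile.sh / build_and_test.sh pair standing in for the real scripts.

```
#!/bin/bash
# Illustration of the configuration sweep; gen_dockerfile.sh and
# build_and_test.sh are hypothetical helper scripts.
for arma in 4.600.5 6.500.5 7.800.2; do        # example version numbers
  for boost in 1.58.0 1.61.0; do
    for compiler in gcc clang; do
      tag="mlpack-env:${compiler}-arma${arma}-boost${boost}"
      ./gen_dockerfile.sh "$arma" "$boost" "$compiler"   # writes ./Dockerfile
      docker build -t "$tag" .
      ./build_and_test.sh "$tag"   # builds mlpack in a container and runs the tests
    done
  done
done
```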

Also, I am looking for missing library versions.


Build testing with Docker and VMs Week 7

After resolving a lot of issues with the clang and gcc compilers, I finally got them working, i.e., mlpack installs successfully and the test suite runs with no errors detected.

Apart from this, I have created shell scripts that generate the Dockerfile automatically. The advantage is that I do not have to create Docker images for every configuration beforehand: I can run these scripts with the desired armadillo, boost, and gcc/clang versions and then build a Docker image from the generated Dockerfile.
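The generator is roughly along these lines; the base image, package names, and the tarball URLs on masterblaster are my own assumptions, and the real script also builds the requested gcc/clang version from source, which is left out of this sketch.

```
#!/bin/bash
# gen_dockerfile.sh <armadillo-version> <boost-version> <compiler>  (sketch)
ARMA=$1; BOOST=$2; COMPILER=$3   # real script builds $COMPILER from source; apt stands in below
BOOST_U=${BOOST//./_}            # boost tarballs use underscores, e.g. 1_61_0

cat > Dockerfile <<EOF
FROM debian:jessie
RUN apt-get update && \\
    apt-get install -y --no-install-recommends \\
        wget make cmake g++ clang liblapack-dev && \\
    rm -rf /var/lib/apt/lists/*
# Tarball URLs on masterblaster are an assumption (see the Apache setup below).
RUN wget http://masterblaster.mlpack.org/tarballs/armadillo-${ARMA}.tar.gz && \\
    tar xzf armadillo-${ARMA}.tar.gz && cd armadillo-${ARMA} && \\
    cmake -DCMAKE_INSTALL_PREFIX=/usr/local . && make && make install
RUN wget http://masterblaster.mlpack.org/tarballs/boost_${BOOST_U}.tar.gz && \\
    tar xzf boost_${BOOST_U}.tar.gz && cd boost_${BOOST_U} && \\
    ./bootstrap.sh --prefix=/usr/local && ./b2 install
EOF
```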

Now I have to make Jenkins build mlpack inside a container. I also need to find tarballs of all the desired versions of armadillo, boost, and gcc/clang and put them on the Apache server.


Build testing with Docker and VMs Week 6

After building the first draft of the clang image, there were a lot of problems with the Dockerfile. I need to install boost, armadillo, and clang into the recommended system directories.
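Concretely, the fix I have in mind amounts to installing everything under the standard /usr/local prefix and refreshing the linker cache; a sketch with example version numbers (the same idea applies to the from-source clang build):

```
# Install armadillo and boost under /usr/local so CMake and the dynamic loader
# can find them without extra configuration, then refresh the linker cache.
cd armadillo-7.800.2 && cmake -DCMAKE_INSTALL_PREFIX=/usr/local . && make && make install && cd ..
cd boost_1_61_0 && ./bootstrap.sh --prefix=/usr/local && ./b2 install && cd ..
ldconfig
```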

Once these changes were made, further errors occurred with armadillo and, for the first time, with LAPACK.

Looking into those errors now!

Also, I made the suggested changes to the gcc Dockerfile. That generated new errors as well, which I will be looking into too.


Build testing with Docker and VMs Week 5

Having passed the first evaluations, I am happy to continue working on this project. The feedback from my mentor is really good, and I'll take care of the points mentioned in it.

With the last PR, the Dockerfile that installs gcc, armadillo, and boost from source is ready, although some changes are still required before it is ready to use. The Docker image was tested by building mlpack inside a container started from it: not only did mlpack install correctly, there were also no errors while building the tests.
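The test itself was just a matter of building the image and then building mlpack inside a container started from it, something along these lines (the image name and mount path are placeholders):

```
# Build the image from the Dockerfile in the current directory, then build and
# test mlpack inside a throwaway container started from it.
docker build -t mlpack-gcc-env .
git clone https://github.com/mlpack/mlpack.git
docker run --rm -v "$PWD/mlpack:/mlpack" mlpack-gcc-env bash -c \
    "cd /mlpack && mkdir -p build && cd build && cmake .. && make -j4 && bin/mlpack_test"
```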

After a lot of effort and waiting, I was able to create a Docker image from a Dockerfile that installs clang, armadillo, and boost from source; clang is used here as an alternative to gcc. (By the way, installing clang from source takes more than three hours, hence the wait.) I am currently installing mlpack inside a container started from the clang-based image. There were a lot of warnings, so that is what I am looking into right now.
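For the clang image, the only real difference at build time is which compiler CMake picks up; inside the container the configuration is pointed at clang explicitly, roughly like this:

```
# Configure mlpack to use clang/clang++ instead of the default gcc toolchain.
cd /mlpack && mkdir -p build && cd build
CC=clang CXX=clang++ cmake -DCMAKE_BUILD_TYPE=Release ..
make -j4
```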


Build testing with Docker and VMs Week 3 & 4

This time, I collected different versions of the armadillo and boost libraries and installed apache2 on masterblaster so that tarballs of these libraries can be downloaded inside the Docker container. I also successfully built the armadillo and boost libraries inside the container. Now that I know the process, it will not take long to write a shell script that replicates these steps and creates the Dockerfile programmatically.
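The Apache side of this is small; a sketch of what it amounts to (the ~/tarballs source path and the directory under the document root are my own placeholders):

```
# Install Apache on masterblaster and stage the library tarballs under the
# document root so that wget inside the containers can fetch them over HTTP.
sudo apt-get install -y apache2
sudo mkdir -p /var/www/html/tarballs
sudo cp ~/tarballs/armadillo-*.tar.gz ~/tarballs/boost_*.tar.gz /var/www/html/tarballs/
# e.g. inside a Dockerfile:
#   RUN wget http://masterblaster.mlpack.org/tarballs/armadillo-7.800.2.tar.gz
```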

Apache2 gave a 403 error while downloading the files, so I followed this blog to resolve the issue. Now the different versions of armadillo and boost can be downloaded, and Dockerfiles can be generated automatically for each version.
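I won't reproduce that post here, but a common cause of such a 403 is the directory not being readable or not being allowed in the Apache 2.4 configuration; the fix assumed in this sketch is along those lines:

```
# Make the staged tarballs world-readable and explicitly allow access to the
# directory (Apache 2.4 denies requests not covered by a Require directive).
sudo chmod -R a+rX /var/www/html/tarballs
sudo tee /etc/apache2/conf-available/tarballs.conf > /dev/null <<'EOF'
<Directory /var/www/html/tarballs>
    Options Indexes FollowSymLinks
    Require all granted
</Directory>
EOF
sudo a2enconf tarballs
sudo systemctl reload apache2
```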

Before creating the final shell script to generate the Dockerfile programmatically, I am working out the installation process for all the remaining variations (different gcc and clang versions, CMake release modes, architectures) so that the shell script can be written in one go.

Currently, I am trying to install the gcc compiler from source. Once this is done, I will do the same for clang.
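For reference, a from-source gcc build boils down to roughly the following (the version is just an example; the prerequisites script pulls GMP, MPFR, and MPC automatically):

```
# Build gcc from source in a separate build directory, C and C++ only.
wget https://ftp.gnu.org/gnu/gcc/gcc-5.4.0/gcc-5.4.0.tar.bz2
tar xjf gcc-5.4.0.tar.bz2
cd gcc-5.4.0 && ./contrib/download_prerequisites && cd ..
mkdir gcc-build && cd gcc-build
../gcc-5.4.0/configure --prefix=/usr/local --enable-languages=c,c++ --disable-multilib
make -j4 && make install
```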


Build testing with Docker and VMs Week 2

Building every package from scratch in order to make a Docker image containing mlpack's dependencies turned out to be far too complicated to pursue. That put the idea of starting from the busybox image out of the question, so the only option left (with a minimal image size) is to make a Debian-based Docker image.

Next, some optimizations were needed to reduce the size as much as possible, because nobody wants an oversized Docker image; I followed this blog to shrink it. This was followed by writing the Dockerfile, installing the necessary dependencies, and applying the hardening steps discussed in my previous blog post.
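In spirit, the Dockerfile looks something like the sketch below; the real package list is longer, and armadillo still ends up being built from source, as described further down.

```
# Sketch of the size-conscious Debian Dockerfile (not the exact package list).
cat > Dockerfile <<'EOF'
FROM debian:jessie
# --no-install-recommends and cleaning the apt lists keep the layers small.
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        g++ make cmake git wget txt2man doxygen \
        libboost-all-dev liblapack-dev libopenblas-dev && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
EOF
```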

As Stephen Hawking said, one of the basic rules of the universe is that nothing is perfect; perfection simply doesn't exist. At the same time, I follow Kim Collins, who believed in striving for continuous improvement instead of perfection. So, while looking for further ways to optimize, my mentor suggested --squash(ing) images with Docker. After reading about this method, upgrading Docker to v1.17, and enabling the feature, squashing reduced the image size from ~609 MB to ~510 MB. To read about squashing, follow this link.
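For completeness, --squash needs Docker's experimental features switched on; the steps amount to roughly this (assuming no existing daemon.json to preserve):

```
# Enable experimental daemon features (required for --squash), then rebuild the
# image so that all layers are collapsed into one.
echo '{ "experimental": true }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
docker build --squash -t mlpack-build-env .
docker images mlpack-build-env   # the reported size should now be noticeably smaller
```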

The stage is set: the image is ready to run as a container and build mlpack. While building, I faced a large number of linking errors, all of which converge on a single solution, i.e., installing the C++ armadillo library from source instead of using the package manager. With all of this done, I am now looking to add a suitable Docker plugin to the Jenkins server (masterblaster) that will let me add build steps that run in Docker.

Stay tuned for more updates in the coming weeks! Keep coding!


Build testing with Docker and VMs

Getting selected for Google Summer of Code 2017 to work on mlpack is my first major opportunity to contribute to the open source community. As I got started with the work, I realized how amazing the mentors in this program are. I am really thankful for the opportunity to work with Ryan Curtin on this project.

My first task was to run a simple Docker container and drive it from Jenkins. The next step involved making a Docker image with all of mlpack's dependencies, so that one can run a container in which mlpack can be built. Initially this was done by taking ubuntu:16.04 as the base image and installing the dependencies on top of it. As Ernest Hemingway once said, "The first draft of everything is shit", and so it was with this Docker image: no hardening, susceptible to attacks, an unacceptable size (too large, ~420 MB), and not matching mlpack's coding standards.

The feedback from my mentor is really helpful and certainly improves the quality of the end product. Based on it, I started with alpine:latest as the base image (initial size ~5 MB) and installed all of mlpack's dependencies (boost-math boost-program_options boost-unit_test_framework boost-serialization arpack txt2man binutils-dev cmake g++ make git openblas lapack-dev doxygen), bringing the size down to ~200 MB, i.e. roughly a 2x reduction in size. To make the container more secure, a new user is added to the container and used instead of root. For hardening, all setuid bits are unset, following this blog. And finally, the image follows the coding standards set by mlpack.
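The hardening part of that Dockerfile boils down to a couple of lines; a sketch with a shortened package list and a placeholder user name:

```
cat > Dockerfile <<'EOF'
FROM alpine:latest
# Shortened dependency list; the full one is given in the paragraph above.
RUN apk add --no-cache g++ make cmake git boost-dev lapack-dev
# Hardening: strip all setuid/setgid bits so nothing in the image can escalate.
RUN find / -xdev -type f -perm /6000 -exec chmod a-s {} \; || true
# Run as an unprivileged user instead of root.
RUN adduser -D builder
USER builder
EOF
```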

More on the Alpine-based Docker image: as we know, Alpine Linux is small, and many packages are missing from its package manager. While working with it, I realized that there is no package for armadillo (the C++ linear algebra library), so I had to build it from scratch. After building mlpack in this Alpine-based container, the tests terminated with errors: the libarmadillo.so.x installed on the system could not be found. On further investigation, and with help from my mentor, it turns out that Alpine ships with musl rather than glibc, which will cause many other issues and produce a build environment too far from what mlpack's users typically have. So, now I'll try building the image from another minimal base image (e.g. busybox with glibc).

Stay tuned for more updates. Coding is fun! And so is reading this blog, isn't it?
