Docker and GitLab Runners: The Dark Horse
Sometimes you can use a wrench as a hammer. It may not be what it’s intended to be used for, but it gets the job done.
Let’s start with a story. There was a time when I had to deploy and run my applications on a few different servers, installing all of the dependencies and configuring everything manually. Worse yet, every time something changed, I had to push those changes one by one by one. Is there a solution to this? Well, one of them may be Docker.
So what is Docker? Docker is a service that makes it easier to create, deploy, and run applications. It does this by using what’s called containers. Containers are somewhat like virtual machines, a simulated environment running on another machine, but the catch is that containers share a single Linux kernel with their host. Because of this, one Docker host can run many containers at once, something that will prove useful in the latter part of this story.
So what’s the advantage of these so-called containers? Well, if you need to run your application on another machine without worrying about how that machine happens to be configured, containers can really speed you up. Docker relies on the host only for what is already there, the kernel, so everything else the application needs is shipped inside the container. Below are a few advantages you may get:
- Lighter and Faster Distributions
A container ships with only the parts of the OS, libraries, and other dependencies the application needs to run. This means you can run more workloads, faster, on a single server.
- Shared Resources
Containers share resources with each other. That may sound like a bad thing, but it actually speeds things up: there is less duplication on the server, and multiple workloads can run at once.
- Easy Transfer
Containers are more portable than VMs; you can pull one out and drop it into another host operating system to deploy quickly.
Docker has a few components to it, including:
- Docker image: Like a VM image, this holds the minimum requirements and dependencies the application needs to run. It tells the container what the settings for the application are supposed to be.
- Docker container: A concrete instance of a Docker image; a full package, ready to run the application. One image can produce multiple containers.
- Dockerfile: A file that serves to build Docker images. It contains the commands needed to assemble an image.
- Compose file: A file that brings several containers under one hood and runs everything on the same kernel.
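To make those pieces concrete, here is a minimal sketch of the two files. Everything in it is hypothetical (the Python base image, the app entry point, the port); it only shows the shape a Dockerfile and a Compose file usually take:

```shell
# Hypothetical minimal Dockerfile: the recipe Docker follows to build an image.
cat > Dockerfile <<'EOF'
# Start from a small base image that already has the runtime we need
FROM python:3.11-slim
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
# The command the container runs when it starts
CMD ["python", "app.py"]
EOF

# Hypothetical Compose file: brings one or more containers under one hood.
cat > docker-compose.yml <<'EOF'
services:
  web:
    # Build the image from the Dockerfile in this directory
    build: .
    ports:
      # host:container port mapping
      - "8000:8000"
EOF
```

With those two files in place, `docker build -t my-app .` would produce the image and `docker compose up` would spawn a container from it.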
Now, here’s the plot twist: we don’t use Docker in our project directly. Wow, shocking, right? But here’s another plot twist. We do use Docker to speed up our deployment, just not in the way you might imagine.
Running in the 90 Seconds
Ever wondered what happens when you push a commit and your pipeline “runs”? Ever wondered who’s actually running the tests and getting you the coverage of your application? Ever thought the download speed or the running time on them was just awful?
Ever looked at the top of one such pipeline?
What’s this? The GitLab executor uses Docker!? Well, no, not every single one. You might have just checked yours only to be disappointed that it doesn’t. But some do, and it is a really useful tool.
So what does this have to do with our project? Well, you can tell a GitLab runner to use Docker, and it will run your application in a specialized container just for you. Some Git hosting sites provide these runners for you, but you may need something specialized, or faster, and not shared with others.
Well, fear not! For I will tell you how to use Docker to set up your very own GitLab runner. This will be especially useful for those of you who are in the same boat as I am, as university-provided runners can sometimes be slow.
Preparing Your Runner
I assume you’re someone with little to no Linux expertise who is using Windows. If you’re using Linux, then you’re light-years ahead of where I am and should consider reading the actual documentation.
First, you will need to install WSL 2. Windows Subsystem for Linux is necessary as a backend engine for Docker. You can find out how to install it here. I cannot cover it all in this article, but Windows has made enabling WSL and installing Linux easy and highly available through the Microsoft Store.
After enabling WSL 2, you can install Docker for Windows by clicking here. It has everything you need, including the Docker engine and Docker Compose. Just install it and run it; it’ll take a moment to boot up, but once it does, all the preparation steps are done!
At first, there will be nothing there; that’s because there’s no image yet. First, you need to make a runner. Go here and download yourself a runner from step 2. Then create a folder called gitlab-runner somewhere on your system and put the download there. If you want to make multiple runners, make multiple folders inside that one and copy the file into each.
Don’t forget to rename the executable to gitlab-runner.exe for ease of use. Now, open Command Prompt in administrator mode and navigate to that folder.
Run the commands below and check whether the installation succeeded.
gitlab-runner.exe install
# Alternatively, use install -n "name" to install multiple runners as separate services
gitlab-runner.exe --version
Because I already have a runner installed, the install will fail on my machine, but if I want to install another runner under a different name, I can use the -n flag.
Now, enter
gitlab-runner.exe register
#Or gitlab-runner.exe register -n "name"
You will be prompted for a few things. First will be your GitLab instance URL. For me, it’s “https://gitlab.cs.ui.ac.id”. If you use another GitLab instance, you can put your own, or simply the usual “https://gitlab.com”. Second, you will be asked for your token. You can find both the instance URL and the token under your project’s Settings > CI/CD > Runners.
A little intermission: this is also the page you will use to manage your runners. You can see shared runners and disable them if you prefer using your own. You can also pause your own runners or delete their registration. The token is the one given by your instance.
Next up, you will be asked for a description; put a memorable name here, as this will be the indicator of which of your runners is running. After that, you will be asked for tags. You can leave this empty and press enter. Then you will be asked for an executor; this is where Docker comes in. Type “docker” and press enter. Finally, you have to put in the default image; just put in “ruby:2.6” and voila, you’re done!
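If you’d rather skip the prompts, `gitlab-runner register` also accepts the same answers as flags. Here is a sketch that saves the whole thing as a one-shot script; the URL, token, and description below are placeholders you would replace with your own values from Settings > CI/CD > Runners:

```shell
# Hypothetical one-shot registration script (Windows .cmd syntax);
# replace the URL, token, and description with your own values.
cat > register-runner.cmd <<'EOF'
gitlab-runner.exe register --non-interactive ^
  --url "https://gitlab.cs.ui.ac.id" ^
  --registration-token "YOUR_TOKEN_HERE" ^
  --description "my-docker-runner" ^
  --executor "docker" ^
  --docker-image "ruby:2.6"
EOF
```

Running the script answers every prompt in one go, which is handy when you register the same kind of runner more than once.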
Now to start the runner, simply do
gitlab-runner.exe start
#or gitlab-runner.exe start -n "name"
Then check your CI/CD page again and…
…looks like we got them running!
Go 1st To Victory
Let’s test how exactly they work by pushing a commit and triggering a pipeline. Let’s push an old branch to another temporary branch.
And now we visit the pipeline and…
Looks like it uses one of my runners, Tamamo-Cross. Let’s see if we can get it to use Meishou-Dotou.
Success! Now we have our very own runner. But why exactly would you want this? Well, there are a couple of reasons. Number one: all of your images are stored locally, which means you can reuse them whenever you need to. Think of it as killing two birds with one stone: you get the image and run it at the same time, and afterwards you can run it again, push, and pull using the variety of Docker features.
Number two: you don’t have to share runners with anyone, which makes your performance better, bounded only by your own machine, of course. Take a look at these.
Using shared runners can be a little… inconsistent. Of course, a reputable site like gitlab.com has the best runners, but other GitLab instances, especially your workplace’s or your university’s, may not be so great. Using your own runners means the code runs on your machine, or whichever machine you designate, which makes performance much better.
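For reference, since we registered the runner without tags, it picks up ordinary untagged jobs, so nothing special is needed in the pipeline config. A minimal, hypothetical .gitlab-ci.yml that such a runner would execute inside its Docker container might look like this:

```shell
# Hypothetical minimal pipeline config; the runner pulls the named image
# once, keeps it locally, and reuses it on later runs.
cat > .gitlab-ci.yml <<'EOF'
test:
  image: ruby:2.6
  script:
    - ruby --version
EOF
```

The `image:` line is where the local-image advantage kicks in: after the first pull, later pipelines start from the cached copy instead of downloading it again.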
And that’s it. Now you know that Docker can be used for more than just sharing code between peers; it can also make your Git flow faster by powering your own runner. You can even run the runner itself inside a Docker container so they share the same environment! But that is a topic for another time.
Conclusion
Docker is a useful tool for exchanging your code with your peers, or even distributing it to a server, without worrying about the requirements. By using containers, it is both light and powerful. But even so, we don’t have to use it blindly just because it’s there.
Our project never used Docker directly because we found no need to. We only deploy to Heroku as a test, and in the end the application will only be deployed internally. Of course, if we ever need to move it from one machine to another, as may be the case for our client in the future, that calls for a container to be created. But there’s no need to force your project to use it right away. We found another use for it, though, and that is the sign of a great tool.
You’ll do yourself a favor by starting to learn about Docker; it might just come in handy for you, even in ways you wouldn’t expect.