Let’s say you are starting a new project. You have chosen Go as the language and MongoDB as the database. You already have both set up on your computer, be it a Mac, a Windows, or a Linux machine. However, you installed the tools a long time ago and both are now outdated. You want the new project to use the latest versions, but you also need the older versions for the old (still running) projects you work on now and then. What do you do?
Buy a new computer for each new project?
Of course not. You can imitate a new computer for each project using virtualization: we spin up an isolated environment for each project according to its requirements. Docker is one way to achieve this, and currently a very popular one too.
If you don’t know what Docker is or how it works, I suggest you read their own overview of the software here.
“The only independent container platform that enables organizations to seamlessly build, share and run any application, anywhere — from hybrid cloud to the edge.”
In this article, I will give a brief overview of a simple Docker setup for Go application development.
To get started, you need Docker installed on your system. Get the Community Edition here.
Building something complex for this exercise would shift the focus away from the Docker configuration toward application architecture and business logic. So I’ll stick to a “Hello, World” implementation; assume everything other than the Docker-related parts of the application is abstracted away.
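Here is a minimal sketch of what the development setup can look like; the Go base image tag, the exact reflex invocation, and the build/run commands below are illustrative rather than definitive:

```dockerfile
FROM golang:1.21

WORKDIR /app

# reflex watches the source tree and restarts the command on file changes
RUN go install github.com/cespare/reflex@latest

COPY . .

# Re-run the app whenever a .go file changes
CMD ["reflex", "-r", "\\.go$", "-s", "--", "sh", "-c", "go run ."]
```

```sh
docker build -t go-docker .
docker run -it --name go-docker -p 8080:1323 -v $(pwd):/app go-docker
```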
This will start the application in a container named go-docker from the image named go-docker, and mount the current directory on the host to the /app directory in the container. Thanks to the volume, any changes in the host project directory are reflected in the container’s /app directory. And whenever a file changes, reflex automatically restarts the application.
I use docker-compose with my projects, as there are almost always several components involved (e.g., databases, swagger-ui, etc.).
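A sketch of such a docker-compose.yml follows; the service names, host ports, and file paths are assumptions for illustration:

```yaml
version: "3.7"

services:
  server:
    build: .
    ports:
      - "8080:1323"          # host:container
    volumes:
      - .:/app
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
    depends_on:
      - mongo

  swagger-ui:
    image: swaggerapi/swagger-ui
    ports:
      - "8081:8080"
    environment:
      - SWAGGER_JSON=/docs/swagger.yml
    volumes:
      - ./docs:/docs

  mongo:
    image: mongo
    volumes:
      - mongo-data:/data/db

  mongo-express:
    image: mongo-express
    ports:
      - "8082:8081"
    depends_on:
      - mongo

volumes:
  mongo-data:
```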
Here you can see we have four components in our system: the server, which runs on port 1323 inside the container and is forwarded to port 8080 on the host; a swagger-ui container for API documentation; and a MongoDB container plus a Mongo Express container for the database. The named volume for the MongoDB data ensures our data persists even if the Mongo container is destroyed.
When running the system using docker-compose, we can access components from inside other components just by referring to the service name. For example, to access the Mongo container, we don’t need to use the container’s IP address; we can simply use mongodb://mongo as the connection string.
If we set up authentication in our MongoDB database, we could use a connection string like the following: mongodb://username:passwd@mongo
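From the server’s Go code, connecting then looks something like this; a minimal sketch using the official mongo-driver, with error handling kept deliberately short:

```go
package main

import (
	"context"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// "mongo" resolves to the MongoDB service name from docker-compose.yml
	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://mongo"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	// Verify the connection is actually alive
	if err := client.Ping(ctx, nil); err != nil {
		log.Fatal(err)
	}
	log.Println("connected to MongoDB")
}
```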
We can also pass environment variables to our containers, as you may have already noticed: we are passing AWS credentials to the server container. You can create a .env file in the root directory and place the environment variables there. Make sure you add the file to .gitignore; you don’t want these secrets in version control.
Docker Compose will, by default, look for a .env file in the root directory of your project and substitute those values into the mapped variables in the docker-compose.yml file.
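Such a .env file could look like this (placeholder values, of course):

```sh
AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```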
Production Docker Environment
For production, I build the image in two stages. The first stage builds the app; the built binary and the required files (e.g., configuration files) are then copied into the second stage, which runs the app in an Alpine Linux based container. Alpine Linux is tiny and drastically reduces the size of the final image.
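A sketch of such a two-stage Dockerfile follows; the image tags, the binary name, and the config path are illustrative:

```dockerfile
# Stage 1: build the binary
FROM golang:1.21 AS builder
WORKDIR /app
COPY . .
# Disable CGO so the static binary runs on Alpine (musl, not glibc)
RUN CGO_ENABLED=0 GOOS=linux go build -o server .

# Stage 2: minimal runtime image
FROM alpine:3.19
RUN apk add --no-cache ca-certificates
WORKDIR /app
COPY --from=builder /app/server .
# Copy any required configuration files (path is hypothetical)
COPY --from=builder /app/config ./config
CMD ["./server"]
```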
The final image is only 6.97 MB. This is fantastic, as it reduces the cost of storing these images in the cloud.
As a comparison, our development environment image was 832 MB, because it was built on a Debian 10 base image.
We could use Alpine Linux for the development environment as well, but it isn’t really necessary; we would also need to add several build tools, which would increase the image size anyway.
Deploying to production
You can take your production image and deploy it to a Swarm cluster. Build the image, push it to a remote registry such as AWS’s ECR, and pull it from your server to update the service.
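The flow looks roughly like this; the account ID, region, and service name are placeholders:

```sh
# Build and tag the production image
docker build -t go-docker .
docker tag go-docker 123456789012.dkr.ecr.us-east-1.amazonaws.com/go-docker:latest

# Authenticate with ECR and push
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/go-docker:latest

# On the Swarm manager: roll the service over to the new image
docker service update --image 123456789012.dkr.ecr.us-east-1.amazonaws.com/go-docker:latest go-docker-service
```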
If you want to manage the cluster yourself, I highly recommend Docker Swarm; do go through its documentation first.
Most of the time, I prefer running the cluster on a managed service such as AWS’s ECS. This saves both time and effort in creating and managing the cluster. A managed service also provides a lot of tooling off the shelf, such as log viewing and metrics. We can create revisions of task definitions and update the services with zero downtime.
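With the AWS CLI, such a rollout can be as simple as the following; the cluster, service, and task definition names are placeholders:

```sh
# Register a new revision of the task definition
aws ecs register-task-definition --cli-input-json file://task-definition.json

# Point the service at the new revision; ECS rolls it out with zero downtime
aws ecs update-service --cluster my-cluster --service go-docker-service \
  --task-definition go-docker-task
```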
I have been using Docker for a while now with multiple stacks, and I can assure you that it will ease the life of everyone on your team if used well. Onboarding new team members can become a breeze when you use Docker.