Everything in life eventually comes to an end, including life itself. This is not that kind of post, though.
Microservices are the new fancy way of building applications. Yet most companies still run big, old monoliths in production. In fast-evolving software of this size, it's common to have lines of code that are never executed in production. Production code coverage reports can help us find those lines.
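One way to collect coverage from a running JVM monolith (a sketch, assuming JaCoCo — the post does not name a specific tool, and the paths and port are illustrative) is to attach the coverage agent in `tcpserver` mode, so coverage data can be dumped on demand without restarting the application:

```shell
# Attach the JaCoCo agent to the production JVM.
# output=tcpserver makes the agent listen for dump requests
# instead of only writing a file on JVM shutdown.
java -javaagent:/opt/jacoco/jacocoagent.jar=output=tcpserver,address=*,port=6300 \
     -jar monolith.jar
```

A client (for example, the `jacoco:dump` Maven goal) can then connect to that port, pull the execution data, and generate a report showing which lines never ran.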
At ContaAzul, we rely heavily on our CI infrastructure. We open several pull requests across several projects every day, and we block merges until the build passes. We consider our master branches sacred, and we can't afford to wait too long to change them.
At ContaAzul, we have several old pieces of code still running in production. We are committed to gradually re-implementing them in better ways.
As a DevOps/SRE, I spend a reasonable amount of time dealing with metrics and alerts.
Or: how to ship your app in a <20Mb container. As you may know, a good number of people are now building microservices in Go and deploying them as Docker containers. I don't yet have much experience with Go or Docker, but I'll try to share what I learned while building and shipping an internal tool here at ContaAzul.

The Go code example

I will assume that you know at least a little bit of Go, and, for the sake of simplicity and brevity, I'll just use a very basic example from the Go wiki:
At ContaAzul, we had 31 Windows machines powering our Selenium tests: one running the grid and 30 more running clients. Needless to say, this was very expensive. Since we were already using Docker to run our builds (on Shippable), we decided to try it for running Selenium tests too. It was no surprise that the Selenium folks had already made a ready-to-go set of Docker images: one for the Selenium Grid itself, plus browser images for Chrome and Firefox (also in debug versions), which let you connect over VNC to "see what's happening there".
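The hub-plus-nodes setup can be sketched with two commands (image names are the official ones from the SeleniumHQ docker-selenium project; the `--link` wiring matches the Docker tooling of that era, so treat the details as illustrative):

```shell
# Start the Selenium Grid hub, exposing its port.
docker run -d -p 4444:4444 --name selenium-hub selenium/hub

# Attach a Chrome browser node to the hub; selenium/node-chrome-debug
# would additionally expose VNC so you can watch the tests run.
docker run -d --link selenium-hub:hub selenium/node-chrome
```

Each extra `docker run` of a node image replaces one of those 30 Windows client machines.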
At the company I work for, pull requests are part of our culture. When someone opens a pull request, we do code review. If we think it's OK, we comment "+1"; otherwise, "-1". We usually only merge a PR when it has 3 or more "+1" comments. Part of this review is checking for tests: we used to manually look at our code coverage statuses to see whether our recently added lines had enough coverage.