Installing Colima as a Docker Engine provider, with the Buildx and Compose plugins, to build and run Spring-based projects.
Due to Docker Desktop’s licensing restrictions for enterprises, many enterprise developers must migrate to another solution. For Linux CLI users, Colima is a great alternative for building and running containerized applications. So, in this article, I’ll describe the steps to install and configure Colima with the tools that most software engineers use daily.
To complete this guide, you’ll need the following:
- macOS on ARM64 (Apple Silicon).
- Homebrew (package manager).
Removing the Docker Desktop cache.
After you uninstall Docker Desktop, you must remove some cached files from your system. On macOS, I created the following shell script to delete these files:
#!/bin/bash
# Docker Desktop files that must be deleted.
# NOTE: the two /Library paths may require sudo to remove.
paths=(
  "$HOME/Library/Cookies/com.docker.docker.binarycookies"
  "$HOME/Library/Logs/Docker Desktop"
  "$HOME/Library/Application Support/Docker Desktop"
  "$HOME/Library/Caches/com.docker.docker"
  "$HOME/Library/Group Containers/group.com.docker"
  "$HOME/Library/Saved Application State/com.electron.docker-frontend.savedState"
  "/Library/PrivilegedHelperTools/com.docker.vmnetd"
  "/Library/LaunchDaemons/com.docker.vmnetd.plist"
  "/usr/local/lib/docker"
  "$HOME/.docker"
)
# Loop over the declared paths and delete each one.
# Quoting "$path" keeps paths with spaces (like "Docker Desktop") intact,
# which is why we use $HOME instead of "~" plus an unsafe "eval".
for path in "${paths[@]}"; do
  rm -rf "$path"
  echo "Deleted: $path"
done
echo ""
echo "DONE."
After you execute this script, you can continue with the next steps.
Updating and cleaning your Homebrew installation.
It’s a good idea to verify that your Homebrew installation is up to date. So, first check whether you have any outdated packages:
$ brew outdated
If the previous command lists any obsolete packages, you can upgrade them with the following command:
$ brew upgrade
Finally, clean up any files cached by Homebrew during previous installations:
$ brew cleanup
Now, we can continue with the next steps.
Installing Docker Client.
Docker Engine and Docker Client are different products. As the title of this tutorial suggests, Colima will be our Docker Engine provider on macOS, but we still need a Docker client to interact with the engine. You can install just the client with the following command:
$ brew install docker
Do not install the <docker-compose> plugin using the <brew> command; Docker plugins must be installed differently, as we’ll see later.
Try to execute the <docker version> command to verify the installation:
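$ docker version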
Notice that only the Docker client info is displayed. This is because we haven’t installed any Docker Engine yet.
Installing Colima core dependencies.
If you install Colima directly using the <brew> command, it will download the required dependencies. Still, it’s a good idea to install some core dependencies independently so you can fix any problem beforehand.
The first dependency is QEMU, an open-source virtualization tool that performs hardware emulation. In the container space, QEMU is essential for running containers built for hardware platforms other than the host’s, which is especially relevant in environments where containers run on multiple processor architectures. You can install it using brew:
$ brew install qemu
The other dependency is Lima, a tool that runs Linux virtual machines on macOS, acting as a container environment similar to Docker Desktop. It benefits developers working on macOS who need a Linux environment to develop, test, or run containerized applications. We can install it with the following <brew> command:
$ brew install lima
If you have problems installing either of these tools, resolve them before continuing with the next step of this journey.
Installing Docker Engine through Colima.
Now, execute the following command to install Colima:
$ brew install colima
If Colima installed successfully, run it, configuring the service to start at every system login:
$ brew services start colima
However, the service started with the default values. Stop it and start it again with the following flag so you can edit those defaults:
$ colima stop
$ colima start --edit
Adjust the CPU, memory, and disk space values according to your needs.
NOTE: As I’m using AWS to deploy the services described in my tutorials, it’s a good idea to keep in mind the compute values used by the AWS CodeBuild service so you can size your Colima settings accordingly for local testing:
Before you finish editing the Colima settings, you must change the <network> configuration so Colima can assign an IP address to the virtual machine. This will be useful later when running our Docker Engine:
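For reference, here is a minimal sketch of the relevant settings after my edits, assuming a recent Colima version (the resource values are only examples; the key that assigns the VM an IP address is <address> under the <network> section):
# Excerpt from the Colima settings opened by "colima start --edit".
cpu: 4
memory: 8
disk: 60
network:
  # Assign a reachable IP address to the Colima virtual machine.
  address: true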
If you use Vim as your default system editor, save and exit the file with the <:wq> command, and the Colima service will then start. If everything is OK, your Colima service starts successfully:
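You can also verify the virtual machine’s state at any time:
$ colima status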
Now you can run the <docker version> command one more time to see the results:
Notice that now we have information about the Docker Engine (server).
Try to execute the <docker context ls> command to see which context is using the Docker client:
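$ docker context ls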
The Docker client uses the Colima context, which uses a Unix socket to interact with the Docker Engine through REST APIs.
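If the <colima> context is not the active one on your machine, you can switch to it manually:
$ docker context use colima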
This is only half of our journey, because I’ve had problems building Docker images that previously worked with the Docker Desktop tool. So, let’s continue with the next part of the tutorial.
Running Integration Tests with Testcontainers.
Let’s use the project from my previous tutorial, which contains Spring Boot and Spring Function projects. Both projects use the Testcontainers technology to execute integration tests with JUnit 5.
Clone the project from my GitHub repository, navigate to the project’s folder, and execute the following Maven command to run the integration tests, initially for the API microservice:
$ mvn test -f src/city-tasks-api/pom.xml
Then, you will receive the following error message:
The error message says that the Java Docker client could not find the Docker socket on the system; it expects the socket at the default “/var/run/docker.sock” path.
If you search the official Testcontainers documentation, you’ll find an essential page about container runtime configurations. As we’re using Colima, we must export the following environment variables:
export TESTCONTAINERS_DOCKER_SOCKET_OVERRIDE=/var/run/docker.sock
export DOCKER_HOST="unix://${HOME}/.colima/default/docker.sock"
Export these variables as global system variables. I’m using ZSH, so I updated my “.zshenv” file by adding the previous environment variables. Then, execute the following command to apply the changes:
$ source ~/.zshenv
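You can confirm that the variables are visible in your shell before rerunning the tests; the value should point to the socket inside your home directory:
$ echo $DOCKER_HOST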
Return to the project directory and try to execute the Maven test command again:
$ mvn test -f src/city-tasks-api/pom.xml
You will notice that this time, the Java Docker Client is downloading the required Docker images to execute the integration tests:
In the end, all the integration tests should pass:
Repeat the same for the Spring Function project:
$ mvn test -f src/city-tasks-events-function/pom.xml
Notice that the Java Docker Client is downloading the required Docker images to run the integration tests, and in the end, all tests should pass:
Let’s continue by deploying our backend projects locally to run more integration tests.
Installing Docker Compose plugin.
Working in the same project folder as in the previous section, we need to deploy our API and Lambda services locally, but this time using LocalStack.
Notice the “docker-compose.yml” file in the project’s root directory. Inside that file, you will find a service called <tasks-localstack> with the following definition:
  tasks-localstack:
    image: localstack/localstack:2.2.0
    container_name: tasks-localstack
    ports:
      - "4566:4566"
    env_file:
      - ./utils/docker/env/localstack-dev.env
    volumes:
      ...
    depends_on:
      - tasks-events-function
    networks:
      - hiperium-network
We need this LocalStack container to run our backend services locally, but we haven’t installed the Compose plugin yet. So, following the official Docker documentation, let’s execute the following commands in a separate terminal tab/window:
IMPORTANT: navigate to the Compose releases page on GitHub and select the latest version with the appropriate processor architecture for your system.
$ DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
$ mkdir -p $DOCKER_CONFIG/cli-plugins
$ curl -SL https://github.com/docker/compose/releases/download/v2.26.1/docker-compose-darwin-aarch64 -o $DOCKER_CONFIG/cli-plugins/docker-compose
Then, you must add execution permissions to the downloaded binary:
$ chmod +x $DOCKER_CONFIG/cli-plugins/docker-compose
Finally, test if the Compose plugin installation was successful:
$ docker compose version
You should see the plugin version in the command’s output, similar to this (matching the version you downloaded):
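Docker Compose version v2.26.1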
Now, return to the project’s terminal tab and try to deploy the entire Compose configuration declared for our backend services. But first, log in using the AWS SSO script with the IdP profile:
$ hiperium-login
Then, execute the project’s main script and select option 1 to deploy the required services using Docker Compose:
This shell script first obtains some values from the AWS Cognito IdP service and applies them to different configuration files. Then, the script executes the Docker Compose command to start a cluster with the required services:
$ docker compose up --build
If these steps execute successfully, you will see in the terminal window that Docker Compose builds the API and Function projects before deploying them locally:
This process takes significant time because the Spring projects use GraalVM to build Linux Native Executables.
When the process finishes, you should see the following output in your terminal, indicating that Docker Compose deployed all required services successfully:
Notice how the <tasks-localstack> container prints all log messages to the console while creating the required AWS services for local testing.
Now, let’s try to deploy our backend services to AWS using an Infrastructure as Code tool.
Installing Docker Buildx plugin.
In my previous tutorials, I used the AWS Copilot tool to deploy containerized services into a Fargate ECS cluster, and the SAM CLI to deploy Lambda functions in AWS. In both cases, I used Docker to build the images and stored them in the AWS ECR service. So, let’s try to deploy our backend services to AWS.
As we did in the previous section, let’s use our AWS SSO script to log in to the <Tasks Development> account:
$ hiperium-login
Then, run the project’s shell script, entering the IdP and Tasks profiles. Next, select option 2 to deploy the backend services into AWS:
$ ./run-scripts
After that, you will see a deprecation message indicating the use of the legacy <builder> component to build Docker images:
We didn’t see this message before when executing the Docker Compose command because the Compose plugin uses a different context to build and deploy the containers.
The Docker client is supposed to use Buildx as the default builder, as we can see in the official documentation:
As of Docker Engine 23.0 and Docker Desktop 4.19, Buildx is the default build client.
Let’s verify this configuration in our Colima settings file, opening it in a new terminal tab with your preferred editor:
$ vim $HOME/.colima/default/colima.yaml
Try to find the <docker> section to see the details:
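In my case, the section confirmed that BuildKit was already turned on:
docker:
  features:
    buildkit: true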
We can confirm that BuildKit is enabled by default. So, the missing piece is the Buildx component.
Returning to the previous terminal tab, you will see that our script finished with an error indicating an exit status of 1. If you look at the logs before the exit status, you will see that Docker was building the API microservice when the error occurred:
Notice <Step 8/13>, where we try to copy the resulting Linux Native Executable from a previous Docker stage. So, let’s go to our Dockerfile definition to get more context about the process:
The COPY command is not working. So, let’s execute the build command independently to see what happens:
$ docker build -t hiperium/city-tasks-api:1.7.0 -f src/city-tasks-api/Dockerfile .
You will see the same error at the end of the output:
This confirms that the COPY command is not working as expected. After researching possible solutions on the Internet, I found that this behavior comes from the BuildKit component, according to Docker’s official documentation:
The legacy Docker Engine builder processes all stages of a Dockerfile leading up to the selected --target. It will build a stage even if the selected target doesn’t depend on that stage.
BuildKit only builds the stages that the target stage depends on.
As our final stage doesn’t depend directly on the intermediate one (which builds the Linux Native Image), Docker doesn’t process it, according to the previous reference.
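To make this concrete, here is a minimal, hypothetical multi-stage Dockerfile (not the project’s actual file; all names are invented for illustration) showing how the two builders treat stage dependencies:
# Hypothetical Dockerfile to illustrate stage skipping.
FROM alpine AS builder
RUN echo "pretend native executable" > /tmp/app

FROM alpine AS final
# The legacy builder processes "builder" simply because it comes first.
# BuildKit only builds "builder" if "final" references it explicitly,
# for example, with a COPY --from instruction like this one:
COPY --from=builder /tmp/app /app
CMD ["cat", "/app"]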
So far, we have identified two related topics that we must address:
- Error: the BuildKit component is not processing the intermediate stage, which builds our Linux Native Executable.
- Warning: The legacy builder is deprecated, so we must install the Buildx plugin.
Let’s start with the first one. According to the Docker documentation, setting the DOCKER_BUILDKIT environment variable to zero turns off BuildKit. We want this because we are currently using the legacy builder and need it to process all declared stages:
$ DOCKER_BUILDKIT=0 docker build --no-cache -t hiperium/city-tasks-api:1.7.0 -f src/city-tasks-api/Dockerfile .
But this didn’t work; the problem with the COPY command still appeared. So my next attempt was to edit and restart the Colima service, turning off the BuildKit property:
docker:
  features:
    buildkit: false
This didn’t work either, so I reverted these changes and started working on our second point to see if that could solve the problem.
So, let’s install the missing Buildx plugin on our system as we did with the Compose plugin, this time following the official Buildx plugin documentation:
$ DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
$ curl -SL https://github.com/docker/buildx/releases/download/v0.11.2/buildx-v0.11.2.darwin-arm64 -o $DOCKER_CONFIG/cli-plugins/docker-buildx
$ chmod +x $DOCKER_CONFIG/cli-plugins/docker-buildx
Now, let’s execute the following command to configure Buildx as the default builder mechanism:
$ docker buildx install
This sets up <docker build> as an alias for the <docker buildx build> command, making Buildx the default builder mechanism.
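You can verify that the Docker client now picks up the plugin:
$ docker buildx version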
So, let’s try to build our API microservice again to see if the DEPRECATED message disappears:
$ docker build -t hiperium/city-tasks-api:1.7.0 -f src/city-tasks-api/Dockerfile .
And you can see in the console that it has disappeared:
Wait until the process finishes, and you will see that the COPY error doesn’t appear anymore:
With our problem solved, let’s try to deploy our backend services to AWS again. Remember that both tools, Copilot and SAM, use Docker to build the images and push them to ECR:
$ ./run-scripts
Enter the required AWS profiles and select option 2. The Docker image build process starts, and each tool deploys the necessary services to AWS:
And that’s it! We replaced the Docker Desktop tool with Colima, along with the Docker plugins we need for our daily work as software engineers :)
I hope this tutorial has been helpful, and I’ll see you in the next one.