Deploying a Container Image with a Quarkus Native Application on AWS Fargate ECS.

Introduction

Another exciting topic, and I'll tell you up front: it isn't straightforward at first. You will find many tasks, configurations, and commands to perform, but as you immerse yourself in the topic and follow the instructions below, you will gain confidence. Besides, I'm using our well-known Timer Service from my previous articles. You can download the source code by following the instructions below.

Let's begin by talking about AWS Fargate, a serverless technology for Amazon ECS that allows us to run containers without managing servers or clusters. It's a pay-as-you-go compute engine that lets us focus on building applications instead of managing servers. It removes the need to choose server types, decide when to scale our clusters, or optimize cluster packing.

On the other hand, building a native executable requires a GraalVM distribution. There are three of them: Oracle GraalVM Community Edition (CE), Oracle GraalVM Enterprise Edition (EE), and Mandrel, whose primary goal is to provide a way to build native executables designed specifically for Quarkus.

To complete this guide, you’ll need:

TIPS: I experienced some issues when executing some of the commands detailed in this tutorial. I have macOS Monterey 12.3.1 installed on my MacBook Pro, and these are some of the workarounds that I made:

  • If you want to use Minikube as a Docker Desktop alternative, install the VirtualBox driver instead of the suggested Hyperkit driver. The volume mounts specified in the “docker run” command or the “docker-compose” file don’t work with Hyperkit.
  • Update your “aws-cli” to version 2.7.0 or later. Some command options used when creating the Aurora Postgres Serverless cluster don’t work with versions lower than that.

Configuring PostgreSQL with Docker Compose

As I detailed in my previous article, I created a Timer Service using the Quartz technology running on a single instance. For this tutorial, instead, we will configure Quartz for a clustered environment. So, we must configure a database that Quartz uses to operate in this scenario. I chose PostgreSQL because the Aurora service allows us to create a clustered environment using the Postgres database in a serverless fashion.

For development purposes, I used a docker-compose file to run an instance of the Postgres database for our local environment:
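A minimal sketch of the “postgres” service in that file, assuming the same credentials and database name that we will use later for Aurora (the actual file in the repository may differ):

version: "3.8"
services:
  postgres:
    image: postgres:13.6
    container_name: postgres
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=TimerServiceDB
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres123
    volumes:
      # Hypothetical host path; only needed if you want the data to survive container restarts.
      - ./postgres-data:/var/lib/postgresql/data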

Note that I’m using Postgres version 13.6. That’s because Amazon Aurora only supports (at the moment) version 13.6 of the Postgres database for serverless environments, and it is good to use identical versions of the same tools in different environments to maintain a standard.

Open a different terminal tab window and run the Postgres container:

# docker-compose up postgres

Then, return to the original tab window and execute the Timer Service in “dev” mode. Verify that all is working as we expect:

# mvn clean quarkus:dev

The Quartz service creates the needed tables at runtime using the Flyway plugin. You can open the pgAdmin 4 tool and connect to your local Postgres instance to see the created tables:
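For reference, the Quartz and Flyway configuration in “application.properties” typically looks like the following sketch; the property names are standard Quarkus keys, but the exact values in the project may differ:

# Quartz configured for clustered mode with a JDBC job store.
quarkus.quartz.clustered=true
quarkus.quartz.store-type=jdbc-cmt

# Flyway creates the Quartz tables at startup.
quarkus.flyway.migrate-at-start=true

# Local PostgreSQL datasource used by Quartz (docker-compose values).
quarkus.datasource.db-kind=postgresql
quarkus.datasource.username=postgres
quarkus.datasource.password=postgres123
quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/TimerServiceDB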

If all is working perfectly, we can move to the next section to create a native image container of our Timer Service.

Building Native Executable with GraalVM

Once you have installed GraalVM on your computer, also install the native-image tool using the gu command:

# ${GRAALVM_HOME}/bin/gu install native-image

IMPORTANT: The native executable will contain the application code, required libraries, Java APIs, and a reduced version of the VM. The smaller VM base improves the application’s startup time and minimizes disk footprint.

Open the POM file, and you will find the following profile at the end, in the profiles section:

<profiles>
    <profile>
        <id>native</id>
        <properties>
            <quarkus.package.type>native</quarkus.package.type>
        </properties>
    </profile>
</profiles>

We use this Maven profile because packaging a native executable takes time. When the native executable is created using Maven, we have two options: passing the “-Dquarkus.package.type=native” property to the Maven command, or using the profile name directly:

# mvn clean package -Pnative

This command produces an artifact at target/java-timer-service-quarkus-1.0.0-SNAPSHOT-runner that we can run as a native executable:

# ./target/java-timer-service-quarkus-1.0.0-SNAPSHOT-runner

You can use the Postman tool to operate over the different endpoints of the Timer Service, as I showed in my previous tutorial. Besides, using the “maven-failsafe-plugin”, we can run the Integration Tests:

# mvn verify -Pnative

This command also generates a native executable before performing the integration tests:

Also, it’s possible to run the tests against a native executable created previously with Maven, so we can avoid rebuilding the native image:

# mvn test-compile failsafe:integration-test

This command will discover an existing native image in the target directory and run the tests against it.

Creating a Container Image

The main idea here is to create a container image using the native executable of our service (which is a 64-bit Linux executable). So, we are going to copy our native executable into a Docker container, as shown in the following procedure:

With this approach, we also want to build the native executable directly in a container, so the final image does not contain the build tools. That is possible using a multi-stage Docker build:

  1. The first stage creates the native executable using Maven.
  2. The second stage copies the produced native executable into the Micro-Image.

Execute the following command to provision the Maven Wrapper plugin in our project:

# mvn -N org.apache.maven.plugins:maven-wrapper-plugin:3.1.1:wrapper

The multi-stage build can be achieved as shown in the “Dockerfile.multistage” file under the “src/main/docker” directory:
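The file follows the standard Quarkus multi-stage layout; here is a sketch of it (treat the builder image tag as an assumption, since it depends on your Quarkus and GraalVM versions):

## Stage 1: build the native executable inside the Quarkus native builder image.
FROM quay.io/quarkus/ubi-quarkus-native-image:22.1-java11 AS build
COPY --chown=quarkus:quarkus mvnw /code/mvnw
COPY --chown=quarkus:quarkus .mvn /code/.mvn
COPY --chown=quarkus:quarkus pom.xml /code/
USER quarkus
WORKDIR /code
COPY src /code/src
RUN ./mvnw package -Pnative -DskipTests

## Stage 2: copy the native executable into the small runtime image.
FROM quay.io/quarkus/quarkus-micro-image:1.0
WORKDIR /work/
COPY --from=build /code/target/*-runner /work/application
RUN chmod 775 /work
EXPOSE 8080
USER 1001
CMD ["./application", "-Dquarkus.http.host=0.0.0.0"]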

Notice that I’m skipping the Maven tests because, otherwise, we will get an error due to the lack of AWS credentials in the docker container. So, we need to pass our secret access keys as environment variables when running the docker container. The best approach for this part is to use IAM Roles, but we’ll do that when using the ECS service. For now, we must export our AWS access keys as environment variables:

# export AWS_ACCESS_KEY_ID=[your-access-key]
# export AWS_SECRET_ACCESS_KEY=[your-secret-access-key]
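In the docker-compose file, these variables can be passed through to the container without writing their values down; a hypothetical fragment of the “tasks” service (the real service definition in the project may differ):

tasks:
  environment:
    # Values are taken from the host shell where you exported them.
    - AWS_ACCESS_KEY_ID
    - AWS_SECRET_ACCESS_KEY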

Another significant change that you must make is to update your “.dockerignore” file to allow the following files at build time:

*
!.mvn
!mvnw
!pom.xml
!src
!target/*-runner

Notice that the “Dockerfile” has COPY operations for files and directories when the docker service builds the Timer Service image. For this reason, those files must be accessible by the docker process.

Finally, we can run the docker build command to generate our native image:

# docker build -f src/main/docker/Dockerfile.multistage \
-t aosolorzano/java-timer-service-quarkus:1.0.0-SNAPSHOT .

The previous command generates our Timer Service native executable first and then copies it to the final docker image:

Now, we can run the Timer Service with 2 container instances alongside 1 Nginx container instance:

# docker-compose up --scale tasks=2 --scale nginx=1

We can see that the docker-compose service starts 2 instances of our Timer Service with the labels “-task-1” and “-task-2”. Also, it starts the Nginx instance as we expect:

Finally, we need to perform a smoke test to validate that our service is working as we expect. First, we need to know the IP address of our running docker service. I’m using Minikube, and I can get the IP address of my docker service with the following command:

# minikube ip

Now, we can open the Postman tool to interact with our service:
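If you prefer the command line over Postman, a quick smoke test with curl might look like the following; the port and the endpoint path are assumptions based on the previous articles, so adjust them to your setup:

# curl -i http://$(minikube ip):8080/api/tasks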

IMPORTANT: If you get an error of the type “Connection Refused”, try to turn off your firewall for a moment while performing the smoke tests.

If the Timer Service is working as we expect, the next time you use the docker-compose command you only need to execute the following:

# docker-compose up --scale tasks=2

This command will run 2 instances of our Timer Service container and 1 instance of the Postgres and Nginx containers.

Creating Aurora Serverless DB with PostgreSQL

At this point, we have completed the local environment testing using docker-compose. Now, it’s time to configure the Postgres database on AWS.

For a serverless style, AWS offers its managed database service, RDS, where we can configure a Postgres database using the Aurora service. First, we need to verify that our AWS user has permission to operate over the RDS service, so the user must have been assigned the “AmazonRDSFullAccess” policy. Then, you must execute the following command to create the subnet group for Aurora DB:

# aws rds create-db-subnet-group \
--db-subnet-group-name timer-service-subnet-group \
--db-subnet-group-description "SG for the Timer Service" \
--subnet-ids '["subnet-0a1dc4e1a6f123456","subnet-070dd7ecb3aaaaaaa"]'

You need to replace the subnet IDs with your own. Then, we must create our Aurora cluster with the following command:

# aws rds create-db-cluster                               \
--region us-east-1 \
--engine aurora-postgresql \
--engine-version 13.6 \
--db-cluster-identifier timer-service-db-cluster \
--master-username postgres \
--master-user-password postgres123 \
--db-subnet-group-name timer-service-subnet-group \
--vpc-security-group-ids sg-012121d2a33ebfe56 \
--port 5432 \
--database-name TimerServiceDB \
--backup-retention-period 35 \
--no-storage-encrypted \
--no-deletion-protection \
--serverless-v2-scaling-configuration MinCapacity=8,MaxCapacity=64

IMPORTANT: Change the VPC security group to your own. Once you’ve identified your security group ID, add an ingress rule that allows connection to your VPC from anywhere (0.0.0.0/0) on port 5432. Remember that your AWS account comes with a preconfigured VPC, subnets, security groups, etc.
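The ingress rule can also be added from the CLI; for example, assuming the same security group ID used in the previous command:

# aws ec2 authorize-security-group-ingress \
--group-id sg-012121d2a33ebfe56 \
--protocol tcp \
--port 5432 \
--cidr 0.0.0.0/0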

Now it’s time to create the first database instance into the Aurora Postgres cluster:

# aws rds create-db-instance                              \
--db-instance-identifier timer-service-db-instance \
--db-cluster-identifier timer-service-db-cluster \
--engine aurora-postgresql \
--db-instance-class db.serverless

Note that this instance is of class type “serverless”. If you open your AWS console, you should see something like this:

In the “Connectivity & security” section, the following are the available options for our DB instance:

Notice that the “Publicly accessible” parameter is set to the “No” value. We can change this value to “Yes” for testing purposes; later, we must change it back to “No”:

# aws rds modify-db-instance \
--db-instance-identifier timer-service-db-instance \
--publicly-accessible

Now, the “Publicly accessible” parameter value is updated:

We can open our “pgAdmin” tool again to access our Aurora Postgres database, but this time on AWS:

Finally, let’s run a new test using our docker-compose tool. Modify the docker-compose file to point to the newly created database on AWS:
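A hypothetical fragment of the updated “tasks” service, overriding the datasource through the standard Quarkus environment variables and using a placeholder for your Aurora cluster endpoint:

tasks:
  environment:
    - QUARKUS_DATASOURCE_JDBC_URL=jdbc:postgresql://[your-aurora-cluster-endpoint]:5432/TimerServiceDB
    - QUARKUS_DATASOURCE_USERNAME=postgres
    - QUARKUS_DATASOURCE_PASSWORD=postgres123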

Then, we must execute the following command to start only the Timer Service container:

# docker-compose up tasks

As you can see, there is a successful message from Flyway indicating the creation of the Quartz tables. Let’s see what happened in our “pgAdmin” tool:

Well, our Quartz tables were created successfully on AWS Aurora Serverless. Now, it’s time to push our Timer Service container image to AWS.

Pushing the Native Image to AWS ECR

First, we need to verify that our AWS user has permission to operate over the ECR service. Your user must have been assigned the “AmazonEC2ContainerRegistryFullAccess” policy. Then, we need to verify that our Timer Service docker image was created correctly:
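A quick way to do that is to list the local images filtered by the repository name:

# docker images aosolorzano/java-timer-service-quarkus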

Second, we need to create an Amazon ECR repository to store your docker Timer Service image:

# aws ecr create-repository \
--repository-name timer-service-repository \
--region us-east-1

The output will be something like this:

{
    "repository": {
        "registryId": "aws_account_id",
        "repositoryName": "timer-service-repository",
        "repositoryArn": "arn:aws:ecr:region:aws_account_id:repository/.",
        "createdAt": 1505337806.0,
        "repositoryUri": "aws_account_id.dkr.ecr.region.amazonaws.com/..."
    }
}

Third, we need to tag our Timer Service image with the “repositoryUri” value provided in the previous command:

# docker tag aosolorzano/java-timer-service-quarkus:1.0.0-SNAPSHOT \
aws_account_id.dkr.ecr.us-east-1.amazonaws.com/timer-service-repository

Fourth, run the ECR “get-login-password” command. Specify the registry URI you want to authenticate to:

# aws ecr get-login-password | docker login --username AWS \
--password-stdin aws_account_id.dkr.ecr.us-east-1.amazonaws.com

Finally, push the image to Amazon ECR with the value of the repositoryUri provided in previous steps:

# docker push aws_account_id.dkr.ecr.us-east-1.amazonaws.com/timer-service-repository

If everything is OK, you can see the newly pushed docker image in your AWS ECR console:

Creating AWS ECS Cluster

Let’s verify that our AWS user account has permission to operate over the ECS service. Your user must have been assigned the “AmazonECS_FullAccess” policy. Then, let’s create our cluster with a unique name using the following command:

# aws ecs create-cluster --cluster-name timer-service-cluster

Before running a task on our new ECS cluster, we must register a task definition. Task definitions are groups of containers that run together. You can find a file called “timer-service-ecs-task-definition.json” in the “/aws” directory with the following content:
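As a reference, a minimal Fargate task definition of this kind usually looks like the sketch below; the CPU/memory values and the log group name are assumptions, and the real file in the repository may differ:

{
    "family": "timer-service-task-definition",
    "networkMode": "awsvpc",
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "256",
    "memory": "512",
    "executionRoleArn": "arn:aws:iam::aws_account_id:role/ecsTaskExecutionRole",
    "taskRoleArn": "arn:aws:iam::aws_account_id:role/TimerServiceEcsDynamoDbRole",
    "containerDefinitions": [
        {
            "name": "timer-service",
            "image": "aws_account_id.dkr.ecr.us-east-1.amazonaws.com/timer-service-repository:latest",
            "essential": true,
            "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/timer-service",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "ecs"
                }
            }
        }
    ]
}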

In the previous file, we define the source of our container image and the hardware capacities to deploy into the cluster. Notice that I’m referencing an IAM role called “TimerServiceEcsDynamoDbRole” that must have permissions to access the “Task” table on DynamoDB. The role can be created by executing the following commands at the project’s root folder:

# aws iam create-policy \
--policy-name TimerServiceDynamoDBAccessPolicy \
--policy-document file://aws/timer-service-dynamodb-policy.json
# aws iam create-role \
--role-name TimerServiceEcsDynamoDbRole \
--assume-role-policy-document file://aws/timer-service-trust-policy.json
# aws iam attach-role-policy \
--role-name TimerServiceEcsDynamoDbRole \
--policy-arn "arn:aws:iam:::policy/TimerServiceDynamoDBAccessPolicy"
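For reference, the trust policy in “timer-service-trust-policy.json” typically allows the ECS tasks service to assume the role; a minimal sketch:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "ecs-tasks.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}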

Now, we can create our task definition with the following command at the project’s root folder:

# aws ecs register-task-definition \
--cli-input-json file://aws/timer-service-ecs-task-definition.json

If you get an error like “An error occurred (ClientException) when calling the RegisterTaskDefinition operation: Fargate requires task definition to have execution role ARN to support ECR images.”, it is because you do not have the managed role ecsTaskExecutionRole, which the ECS service uses to pull ECR images and publish logs on your behalf. In this case, I’ve made some bash scripts to help you with this task. You only need to execute the following command in the “/scripts” directory:

# ./6_create-ecs-task-iam-roles-an-policies.sh

After registering the task definition in our account, we can create a service for it in our cluster. The task also requires a route to the internet, and there are two ways to achieve this. One way is to use a private subnet configured with a NAT gateway that has an elastic IP address in a public subnet. Another way is to use a public subnet and assign a public IP address to your task. I will use the second option for this tutorial:

# aws ecs create-service --cluster timer-service-cluster   \
--service-name fargate-timer-service \
--task-definition timer-service-task-definition \
--desired-count 1 \
--launch-type "FARGATE" \
--network-configuration "awsvpcConfiguration={subnets=[subnet-042e6673123570f61],securityGroups=[sg-012121d2a33ebfe56],assignPublicIp=ENABLED}"

For the subnet and security group values, you can go to the VPC service in your AWS console and select one of the pre-defined subnets created when you opened your AWS account. Those subnets have Internet access configured:

In the left menu of the VPC service, you can find an option for “Security Groups”. As with the subnets, our account comes with a predefined Security Group associated with our default VPC:

IMPORTANT: Do not forget to add a rule to allow connections from anywhere through port 8080, as you can see at the end of the “Inbound rules” table in the previous image. You can customize this port number when creating the “Task Definition”.

You can use all these values from your AWS account to configure your ECS Fargate service.

If you haven’t gotten any errors up to this point, you can execute the following command to see the current status of your running service:

# aws ecs describe-services \
--cluster timer-service-cluster \
--services fargate-timer-service

Check the “events” section for any error messages. If there are none, the output should look like this:

Now we must locate the Elastic Network Interface identifier (ENI ID) of our running task by executing the following command:

NOTE: The task ID is the number that appears at the end of the last image. With this parameter, we can get the desired ENI ID value:

# aws ecs describe-tasks \
--cluster timer-service-cluster \
--tasks 0f7983586eb348f894f832883854bf86
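If you don’t want to scan the JSON output by hand, the ENI ID can be extracted directly with a JMESPath query; a sketch using the same task ID:

# aws ecs describe-tasks \
--cluster timer-service-cluster \
--tasks 0f7983586eb348f894f832883854bf86 \
--query "tasks[0].attachments[0].details[?name=='networkInterfaceId'].value" \
--output text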

Next, we can get the public IP address of our running service in the Fargate cluster:

# aws ec2 describe-network-interfaces \
--network-interface-ids eni-0a74c53f9ede3a545
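Likewise, the public IP address and DNS name can be pulled out directly with a query:

# aws ec2 describe-network-interfaces \
--network-interface-ids eni-0a74c53f9ede3a545 \
--query "NetworkInterfaces[0].Association.[PublicIp,PublicDnsName]" \
--output text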

The command also shows us the public DNS name that we can use to access our Timer Service on ECS. And again, it’s time to open our Postman tool to verify that everything is working correctly:

Or we can use the provided public DNS to access our ECS cluster instance:

Finally, the moment of truth: we must create a Task/Job in our Timer Service on the ECS cluster:
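With curl instead of Postman, the create request might look like the following; the endpoint path and the JSON payload are hypothetical and depend on the Timer Service API from the previous articles:

# curl -i -X POST "http://[public-ip-or-dns]:8080/api/tasks" \
-H "Content-Type: application/json" \
-d '{"id": "task-1", "expression": "0 0/5 * * * ?"}'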

The service responds with an HTTP 201 code, and in a few seconds, we’ll see the logs in the CloudWatch service:

The logs are configured in the “Task Definition” document:

Automating tasks with Bash Scripts

Finally, you know that I like automating procedures with bash scripts. For that reason, I’ve created a bash script as a central access point to create, build, and deploy everything shown in this tutorial. This bash script is in the project root directory; you only need to execute the following command:

# ./run-scripts

You will see a menu of options that you can select according to your needs:

The first 3 options are for your local environment, and the rest are for AWS. For example, if you select option “a”, the bash script will ask you for some required parameters and then, boom!! In a couple of minutes, you will have created and deployed the needed infrastructure on AWS, including the running Timer Service. There is also an option (d) to delete all this infrastructure from your AWS account.

These scripts contain a lot of useful commands that you can reuse in your own projects and add to our binnacle of CLI commands.

And that’s it!! I hope this tutorial has been of interest to you, and I will see you in my next post.

Thanks for your reading time.
