Spring Boot Native microservice on ECS Fargate using AWS Copilot CLI for Cross-Account deployment with Cognito as OIDC service.

Andres Solorzano
May 28, 2023


There are significant changes in this article compared to the previous architectural design we used with Quarkus. In that design, we used an API Gateway with a PrivateLink service to proxy all HTTP communications to the back-end application, and the Cloud Map service to resolve the DNS of the deployed service and direct the HTTP traffic to that endpoint. All of this was deployed with the Copilot CLI tool, which ran the Quarkus Native microservice on ECS Fargate for a serverless architecture. Although we keep the same services stack (ECS Fargate, DynamoDB, and Aurora), the API Gateway, PrivateLink, and Cloud Map services added extra overhead. This time, we use a more practical solution and replace those services with an Application Load Balancer (ALB) as the entry point for our API endpoints.

To complete this guide, you’ll need the following tools:

NOTE: You can download the source code of the Task Service with all the configurations we talk about in this article from my GitHub repository.

Install AWS Copilot CLI.

Go to the install page of the official Copilot CLI website and select the appropriate command for your processor architecture. In my case, I used the Homebrew utility tool:

$ brew install aws/tap/copilot-cli

Now you can execute the copilot command with the version flag:
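
$ copilot --version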

Now we’re ready to continue with our tutorial.

IAM Identity Center.

In a previous tutorial, I configured IAM Identity Center for a Multi-Account environment on AWS. In that article, I used my “idp-pre” account to deploy the OIDC service. So you can use that login script to access any of your Organization’s accounts:

$ hiperium-login

In this case, I’m using my “tasks-dep-dev” account profile to deploy the Task Service on AWS.

Use the same command to get the credentials to access the account where you want to deploy the Cognito OIDC service. I’ll use my “idp-pre” profile:

NOTE: Deploying these services in different accounts is optional; you can deploy them in the same AWS account. I do this as a good practice, thinking of a real-world scenario ;).

Now, we can deploy our Cognito OIDC service using the Amplify CLI.

Cognito as Identity Provider (IdP).

Please, go to my previous tutorial, and follow the instructions in the “Amazon Cognito as Identity Provider” section to deploy Cognito as an IdP service. We used the Amplify tool to perform this task.

The main configurations to take into account are the following:

  1. Use the “https://oauth.pstmn.io/v1/browser-callback/” URL in the OAuth Flow section for redirection after a successful login.
  2. Before pushing the Auth changes into AWS, update the “parameters.json” and the “cli-inputs.json” files changing the “userpoolClientGenerateSecret” parameter value to “true.”
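
With those two files updated, the Auth changes are pushed to AWS with the standard Amplify CLI command, shown here for convenience (the complete flow is in the referenced tutorial):

$ amplify push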

Here are some of the configurations made for the Amplify Auth service:

Follow the rest of that section before continuing with the next ones. Consider that we’ll need the “Auth Server URI” when configuring our Spring Boot microservice with Copilot CLI.

Using AWS Copilot CLI.

So far, you should have at least three profiles created in your “~/.aws/credentials” file. We’ll use the “tasks-dev” and “tasks-dep-dev” profiles to deploy the Tasks Service. The third is the “idp-pre” profile, where our Cognito Auth Service is deployed (see my previous tutorial).
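
For reference, here is a minimal sketch of what the “~/.aws/credentials” file could look like with those three profiles. The key values are placeholders, and your real file may instead contain temporary session credentials generated by the login script:

[tasks-dev]
aws_access_key_id     = <tasks_dev_access_key_id>
aws_secret_access_key = <tasks_dev_secret_access_key>

[tasks-dep-dev]
aws_access_key_id     = <tasks_dep_dev_access_key_id>
aws_secret_access_key = <tasks_dep_dev_secret_access_key>

[idp-pre]
aws_access_key_id     = <idp_pre_access_key_id>
aws_secret_access_key = <idp_pre_secret_access_key>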

The Copilot init command doesn’t allow us to specify an AWS profile other than the “default” one. So, to use our profile for the deployment tools like CI/CD pipelines (more on this in the next section), we must export the “AWS_PROFILE” environment variable:

NOTE: Omit this step if you’re okay using your “default” AWS profile.

$ export AWS_PROFILE='tasks-dep-dev' 

Now we’re ready to configure and deploy our Tasks Service into AWS with the help of the Copilot CLI tool.

1. Initializing Copilot into AWS.

Go to the project’s root directory and execute the following command to configure and deploy some initial base services into AWS:

$ copilot init                                 \
--app city-tasks \
--name api \
--type 'Load Balanced Web Service' \
--dockerfile './Dockerfile' \
--port 8080 \
--tag '1.5.0'

When asked if you want to deploy the service in the “test” environment, answer “No” because we must configure our environments manually.

Notice that the command output lists the created AWS resources, such as a CloudFormation StackSet for cross-account deployments and an IAM role to access the ECR, KMS, and S3 services.

2. Cross-Account Deployment.

In a previous tutorial, we mentioned some best practices regarding the accounts we can create when configuring a Multi-Account environment on AWS, which describe the objective of the Workloads accounts:

Workloads: Contains AWS accounts that host your external-facing application services. You should structure OU’s under SDLC and Prod environments (similar to the foundational OU’s) in order to isolate and tightly control production workloads.

That’s the case for our “tasks-dev” profile, where the Tasks Service workloads will be deployed. Furthermore, AWS mentions another best practice for projects that typically use CI/CD pipelines:

Deployments: Contains AWS accounts meant for CI/CD deployments. You can create this OU if you have a different governance and operational model for CI/CD deployments as compared to accounts in the Workloads OUs (Prod and SDLC). Distribution of CI/CD helps reduce the organizational dependency on a shared CI/CD environment operated by a central team. For each set of SDLC/Prod AWS accounts for an application in the Workloads OU, create an account for CI/CD under Deployments OU.

This is the case for our “tasks-dep-dev” profile. We will use CI/CD pipelines in the subsequent tutorials to deploy these development tools in that AWS account.

3. Creating Cross-Account Environment.

So let’s follow the previous AWS best practices and execute the following command, indicating our “tasks-dev” profile to deploy our Spring Boot microservice. The deployment tools will be deployed using our “tasks-dep-dev” profile, which we exported in the AWS_PROFILE environment variable:

NOTE: Omit this flag if you are using the “default” AWS profile.

$ copilot env init              \
--app 'city-tasks' \
--name 'dev' \
--profile 'tasks-dev' \
--container-insights \
--default-config

The result of the previous command will be something like this:

So it’s time to update the “manifest.yml” files (API and Environment) with our custom configurations. Let’s do that.

4. Autoscaling Group (ASG).

As our service will be deployed on an ECS cluster, we can specify the number of ECS tasks we need to deploy. So let’s add the following configuration to our “copilot/api/manifest.yml” file:

count:
  range: 1-3
  cooldown:
    in: 30s
    out: 60s
  cpu_percentage: 85
  memory_percentage: 80

In the “range” parameter, we specify a minimum of 1 task and a maximum of 3 tasks. In the “cooldown” parameters, we set a 30-second cooldown for scale-in operations and a 60-second cooldown for scale-out operations. In the “cpu_percentage” parameter, we indicate that our ECS tasks must scale in or out around a CPU utilization of 85%. And finally, we set the “memory_percentage” parameter to 80, indicating to scale in or out when the memory utilization of the ECS tasks reaches 80%.

5. Application Load Balancer (ALB).

The following properties must be added to our “copilot/api/manifest.yml” file to configure an Application Load Balancer (ALB) for our Tasks Service microservice:

http:
  path: 'api/task*'
  healthcheck:
    path: '/actuator/health'
    port: 8080
    success_codes: '200'
    healthy_threshold: 3
    unhealthy_threshold: 2
    interval: 15s
    timeout: 10s
    grace_period: 60s
  deregistration_delay: 30s
  stickiness: false

Most of them are self-explanatory given their names. One of the important ones is the “http.path” property, whose value indicates the path the ALB uses to forward requests to our City Tasks Service.

The other significant property is the “http.healthcheck.path,” indicating the path used by our ALB for the health checks. In this case, we have a problem because our microservice is configured as an “OAuth Resource Server,” meaning all endpoints are protected by the “Security Filter Chain.” So we need to create the following configuration class:
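
The original class is not reproduced here, so the following is a minimal sketch of what it could look like, assuming a WebFlux-based resource server on Spring Boot 3 with Spring Security 6 (the class name and bean method are illustrative):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.reactive.EnableWebFluxSecurity;
import org.springframework.security.config.web.server.ServerHttpSecurity;
import org.springframework.security.web.server.SecurityWebFilterChain;

@Configuration
@EnableWebFluxSecurity
public class SecurityConfiguration {

    @Bean
    public SecurityWebFilterChain securityWebFilterChain(final ServerHttpSecurity http) {
        return http
                // Stateless API: CSRF protection is not needed for a token-based resource server.
                .csrf(ServerHttpSecurity.CsrfSpec::disable)
                .authorizeExchange(exchanges -> exchanges
                        // Let the ALB reach the Actuator endpoints without a JWT.
                        .pathMatchers("/actuator/**").permitAll()
                        // Every other endpoint still requires a valid access token.
                        .anyExchange().authenticated())
                // Keep validating JWTs issued by the Cognito User Pool (issuer configured via properties).
                .oauth2ResourceServer(ServerHttpSecurity.OAuth2ResourceServerSpec::jwt)
                .build();
    }
}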

With this configuration, we specify that the “Spring Security” module does not validate the JWT for the “Spring Boot Actuator” paths, so our ALB can check the “/actuator/health” path without getting an HTTP 401 error.

6. DynamoDB and Aurora Serverless v2.

We can add addons like RDS services or DynamoDB tables with the Copilot CLI. Remember that we need a relational database for the Quartz library and a non-relational database to store the “Devices” information. So let’s start by creating our Aurora database:

$ copilot storage init                   \
--name city-tasks-db-cluster \
--storage-type Aurora \
--workload api \
--lifecycle workload \
--engine PostgreSQL \
--initial-db CityTasksDB

By default, the Copilot CLI command creates an Aurora Serverless v2 configuration file with PostgreSQL as our database engine. Now let’s create our DynamoDB table:

$ copilot storage init      \
--name Devices \
--storage-type DynamoDB \
--workload api \
--lifecycle workload \
--partition-key id:S \
--no-lsi

The previous commands create 2 files inside the “copilot/api/addons” directory:

One is for the Aurora Serverless version 2 with PostgreSQL, and the other is for the DynamoDB table.

7. ECS Task Environment Variables.

Our Spring Boot microservice needs some environment variables to operate correctly. Two of the important ones must be declared in the “variables” section of our “copilot/api/manifest.yml” file:

variables:
  CITY_TASKS_TIME_ZONE: -05:00
  CITY_IDP_ENDPOINT: https://cognito-idp.<your_aws_region>.amazonaws.com/<your_cognito_user_pool_id>

Notice that this is where we need our Cognito Auth Server URI. Also, unlike the environment variables we define in our docker-compose file, the ECS tasks don’t need the AWS credential variables because the ECS service injects temporary credentials into the running tasks through the task’s IAM role.
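
For context, on the Spring Boot side this Cognito URI is usually consumed through the standard resource-server property. The following is only a sketch of how the “CITY_IDP_ENDPOINT” variable could be wired in the “application.yml” file; it is not necessarily the exact configuration used by the project:

spring:
  security:
    oauth2:
      resourceserver:
        jwt:
          issuer-uri: ${CITY_IDP_ENDPOINT}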

But what about the “CITY_TASKS_DB_CLUSTER_SECRET” variable declared in the docker-compose file? This environment variable is injected into the ECS task automatically because the database credentials are created as a secret in the AWS Secrets Manager service. We can corroborate this in the “Outputs” section of the “copilot/api/addons/city-tasks-db-cluster.yml” file:

The comment on line 134 was added by Copilot CLI when it created this configuration file. When we deploy our Tasks Service in AWS, we can review all these configured services in the console.

The last environment variable we must set is the “SPRING_PROFILES_ACTIVE” variable. If we do not assign it, when we start the microservice we will see a message similar to the following in the terminal: “No active profile set, falling back to 1 default profile: default”.

This message will also appear in the ECS cluster environment. So let’s define this variable, but this time in the “environments” section in our “copilot/api/manifest.yml” file:

environments:
  dev:
    variables:
      SPRING_PROFILES_ACTIVE: dev

We override the “variables” section in the same file, but only for the “dev” environment. In future articles, when we deploy our Tasks Service to the other environment accounts, we can override this variable for the “tests” and “prod” environments.

Also, we can override the “CITY_IDP_ENDPOINT” variable for the “prod” environment (for example) because our Cognito OIDC Auth Server will be deployed in another account, and its URI will differ.
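
As an illustration of that idea, the “environments” section could eventually look like the following sketch, where the “prod” values are hypothetical and only show where the overrides would live:

environments:
  dev:
    variables:
      SPRING_PROFILES_ACTIVE: dev
  prod:
    variables:
      SPRING_PROFILES_ACTIVE: prod
      CITY_IDP_ENDPOINT: https://cognito-idp.<prod_aws_region>.amazonaws.com/<prod_cognito_user_pool_id>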

8. Additional Environment Configurations.

The last configurations are the VPC settings and the ALB “Access Logs” in our Copilot CLI configuration. Open the “copilot/environments/dev/manifest.yml” file and add the following values:

network:
  vpc:
    flow_logs:
      retention: 7
    security_group:
      ingress:
        - ip_protocol: tcp
          ports: 8080
          cidr: 0.0.0.0/0

These configurations tell the Copilot CLI to enable “Flow Logs” in our VPC, which capture all incoming/outgoing network communication inside the VPC. These logs are sent to CloudWatch and retained for 7 days.

The other VPC configuration is a “Security Group” that allows incoming traffic on port 8080. Our Spring Boot microservice uses this port to expose the Tasks Service endpoints.

Finally, let’s add another configuration in the same “manifest.yml” file:

http:
  public:
    access_logs:
      bucket_name: <your_s3_bucket_name>
      prefix: access-logs

These parameters tell the Copilot CLI to configure our ALB to store its access logs, which record all the requests it handles, in an S3 bucket.

IMPORTANT: Don’t forget to add the required “S3 Bucket Policy” to allow write operations on this bucket. The following is the access policy used by our Tasks Service:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::127311923021:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<your_s3_bucket_name>/access-logs/AWSLogs/<your_aws_tasks_dev_account_id>/*"
    }
  ]
}

The principal ARN contains the account “127311923021,” which is the AWS-owned Elastic Load Balancing account ID for the “US East (N. Virginia)” region, where our ALB is deployed. You can find the complete list for the other AWS regions on the AWS official website.
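
If you prefer to attach this policy from the command line instead of the S3 console, you can use the standard AWS CLI call shown below; the policy file name is a placeholder, and the profile assumes the bucket lives in the workloads account:

$ aws s3api put-bucket-policy \
    --bucket <your_s3_bucket_name> \
    --policy file://alb-access-logs-policy.json \
    --profile 'tasks-dev'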

9. Deploying Environment Infrastructure.

So far, we have finalized our “manifest.yml” files and created the configuration files for our DynamoDB table and Aurora PostgreSQL cluster for the “dev” environment. So it’s time to deploy our services:

$ copilot env deploy      \
--app city-tasks \
--name dev \
--no-rollback

The “--no-rollback” flag disables the automatic CloudFormation rollback in case of a deployment failure, so we can review the error and fix it before rolling the stack back to its last stable version.

At the end of the command execution, you must see something like this:

We can notice which services the Copilot CLI deploys to AWS; most of them are networking resources for the ECS cluster.

10. Deploying Environment Application.

Finally, it’s time to deploy our Task Service microservice in the ECS Fargate cluster using the following command:

$ copilot deploy          \
--app city-tasks \
--name api \
--env dev \
--tag '1.5.0' \
--no-rollback \
--resource-tags project=Hiperium,copilot-application-type=api

The “--resource-tags” flag allows us to tag the application, services, and Copilot environment resources with additional tags. This is another AWS best practice you can read about in the official documentation.

After the command execution ends, you must see the following output:

Notice the ECS service section: the desired count is 1, and the running count is 1 too, with no failed or pending tasks reported. At the end of the command output, you will see the ALB endpoint of the service. So, let’s run some tests in the next section.

Tasks Service Endpoints Testing.

So far, our Tasks Service is deployed in ECS Fargate. Also, we now have our ALB endpoint from the previous section, so let’s try to access it from our Postman tool:

As in my previous tutorial, we must get an HTTP 401 error if we try to access the Tasks Service endpoints without a valid JWT. So let’s try to get our access token. Please refer to the “Task Service Endpoints Tests” section of that tutorial for more details about this task.

Click on the orange button, and you will be asked for your Cognito user credentials to get a new access token. After that, Postman will ask whether to use this new access token for the current tab. Answer yes, and then execute the endpoint again to see what happens:

Our Task Service endpoint responds to our request showing all created tasks in the Aurora PostgreSQL database.
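
If you prefer the command line over Postman, you can reproduce the same checks with curl. The ALB DNS name, the resource path, and the token value below are placeholders, so adjust them to your own deployment:

$ export ALB_ENDPOINT='http://<your_alb_dns_name>'
$ export ACCESS_TOKEN='<your_cognito_access_token>'

# Without a token, the resource server must respond with an HTTP 401 error.
$ curl -i "$ALB_ENDPOINT/api/tasks"

# With a valid JWT, the Tasks Service returns the stored tasks.
$ curl -i -H "Authorization: Bearer $ACCESS_TOKEN" "$ALB_ENDPOINT/api/tasks"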

As we did in the previous tutorial, let’s find all tasks whose execution day is Tuesday:

Finally, you can use the Copilot CLI to access the logs of our service. So let’s execute the following command to get the records from the last hour and leave the connection open to receive new logs:

IMPORTANT: When executing the Copilot command in your terminal, you must specify the AWS profile to access the correct account. We must use the “tasks-dep-dev” profile as before.

$ copilot svc logs           \
--app city-tasks \
--name api \
--env dev \
--since 1h \
--follow

Now let’s try to execute another query. This time, ask for all tasks that must be executed at midday:

In your terminal, you must see the logs for this operation:

So, our Spring Boot Native microservice is deployed on Amazon ECS Fargate, and it’s using Cognito IdP as our OIDC service to allow us to access the Tasks Service endpoint with a valid JWT.

Bash Scripts.

As usual, the idea of automation is essential in our tutorials. So I created a couple of Bash script files to automate the tasks we performed in this tutorial, so you can use them to deploy the Spring Boot Native microservice on ECS Fargate. Execute the following command in the project’s root directory:

NOTE: When you execute this script, you don’t need to export any AWS environment variable as we did before.

$ ./run-scripts.sh

This script shows a menu with the following options:

These scripts are based on the Tasks Service full-stack app, which we built before using the Quarkus and Angular frameworks, but this time we are taking the first steps again with Spring Boot for the back-end microservice.

The “Helper scripts” option shows a secondary menu:

These options are helpful for our automated tasks. The first option reverts the automated scripts to their initial, unconfigured state, so you can deploy the Tasks Service multiple times with different parameters depending on your environment’s state.

When you use option 1 from the main menu, the script asks you for the critical environment variables, offers to create an S3 bucket for the ALB access logs, and gets the Cognito User Pool ID to assign it to the respective Copilot CLI configuration file:

So that’s it!!! We have deployed our Spring Boot 3 microservice built with Spring Native, Spring Data JPA, WebFlux, Flyway, and Quartz. Don’t forget that, as in many previous tutorials, we have been applying TDD with integration tests with the help of Testcontainers.

In our following tutorial, we’ll configure a custom domain certificate with the AWS Route 53 service so our ALB can use a secure connection over TLS.

I hope this tutorial was helpful, and I will see you reading the next one ;).
