Building a Cloud-Native App using Angular/Ionic with Amplify and Java/Quarkus with Copilot ECS.
Let’s make things more complex. In my previous tutorial, I deployed a native Java Quarkus application using AWS Fargate. Now, we will add the frontend counterpart to that application, built with Angular/Ionic in conjunction with Amplify. The backend will stay the same, but we will use the ECS Copilot tool to automate the Fargate cluster creation on ECS.
As our ECS cluster doesn’t have Internet access, we will add an API Gateway service responsible for redirecting all HTTP requests to the ECS cluster through a Private Link leveraging the Cloud Map service.
UPDATE: After you finish reading this article, I recommend reading a newer tutorial that I wrote using the same Timer Service application described here, but with some additional characteristics, like deploying the Timer application using a Docker container image for DynamoDB and using a Quarkus OIDC configuration to validate the user’s session token against Cognito.
To complete this guide, you’ll need:
- An AWS account.
- Git.
- AWS-CLI.
- Amplify CLI.
- AWS Copilot CLI.
- GraalVM 22.1.0 with OpenJDK 17. You can use the SDKMAN tool.
- Apache Maven 3.8 or higher.
- Docker and Docker Compose.
- IntelliJ or Eclipse IDE.
NOTE: As usual, you will find the project’s source code used for this tutorial in my GitHub repository. You can download it to find some configurations and other details.
IAM User Groups and Policies.
If you have followed some tutorials on the web, you may have noticed that many of them (if not all) use the “AWS System Administrator” policy to perform all the tutorial tasks. Instead, I like to use specific IAM policies to execute particular actions on AWS. The following picture shows the “IAM User Groups” that I’ve been using for this tutorial:
My AWS user belongs to each of those groups, inheriting the policies associated with them, as you can see in the following image:
All the assigned policies follow the “Full Access” pattern. My intention here isn’t to go to the extreme of defining more granular policies for users. I would like to do so, but it would be a tremendous task that I think is unnecessary for the complexity of these applications on the cloud. Still, assigning your root account’s administration privileges to a user isn’t a good idea either.
I hope this information is helpful from the security perspective of your applications.
The Backend.
As you can see in my previous article, there is a folder called “scripts” where you can find some bash scripts that create all the required services on AWS. I confess that I did that because I like automation very much, and I’ve seen a lot of it using Amplify in frontend apps (more on this in the next section). But now, I’ve found an excellent tool that allows us to automate all those tasks to create a Fargate service on ECS.
ECS Copilot to the rescue.
The ECS Copilot tool can help us build, deploy, and operate containerized applications on ECS. So first, you must install the copilot-cli command in your local environment; you can follow the instructions shown in this article to do so. Then, at the root of our backend Java project directory, initialize Copilot:
# copilot app init timerservice
Then, we must create the environment configuration, which in our case is “dev”, and select your AWS user profile to perform these operations:
# copilot env init --name dev
The previous command initializes the base infrastructure for our Timer service on AWS using CloudFormation:
The copilot command creates the needed services on AWS, like the VPC, subnets, ECS cluster, IAM roles, task definition, etc. All these tasks use a CloudFormation template internally.
Now, we can initialize the configuration file for our ECS service:
# copilot svc init
This command creates the ECR repository for our Timer Service Docker image on AWS. This is the only resource that copilot-cli creates on the cloud with this command:
Notice that I selected the “Backend Service” option as a Service type. This service doesn’t have Internet access and must be deployed in private subnets. We need other AWS services to interact with the cluster, like an API Gateway (more details in the next section).
Copilot has generated a YAML file called “manifest.yml” that you can find in the “copilot/api” folder. This file contains the resource settings we can define, like the Docker image, CPU, memory, env variables, etc.:
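As a rough illustration (the values below are placeholders, not the project’s exact settings), a trimmed-down manifest.yml for a Backend Service looks like this:

```yaml
# copilot/api/manifest.yml — illustrative excerpt; paths and values are placeholders.
name: api
type: Backend Service

image:
  # Hypothetical Dockerfile path following the Quarkus native-image convention.
  build: src/main/docker/Dockerfile.native
  port: 8080

cpu: 256      # CPU units for the Fargate task
memory: 512   # Memory in MiB
count: 2      # Number of running tasks

variables:
  QUARKUS_PROFILE: prod
```

Any change to this file is picked up on the next “copilot svc deploy”.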
Our Timer Service also needs the Task table on DynamoDB, so let’s configure it:
# copilot storage init
Regarding storage services, remember that our Timer Service uses the Quartz technology configured for clustered environments. That technology needs a relational database to store the state of programmed Jobs. So, we must add an Aurora Cluster configuration too:
# copilot storage init
Notice that copilot shows a message indicating the generation of an env variable called “TIMER_SERVICE_DB_CLUSTER_SECRET”, which contains a JSON with the values for our Aurora DB connection; in JavaScript, we can get those values in the following form:
const {username, host, dbname, password, port} = JSON.parse(...);
We must take this env variable into account because our Timer Service now needs to connect to our AuroraDB programmatically. For this reason, I’ve created a class that Quarkus uses to initialize the application. In this class, you can define business logic to run before Quarkus starts the application; in our case, that is setting the required system properties to connect to AuroraDB on AWS:
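To illustrate what that startup logic has to do (the secret value below is made up, and the field names match the JSON destructured earlier), here is a quick shell sketch that decomposes the secret into the pieces a JDBC URL needs:

```shell
# Made-up example of the JSON payload that Copilot injects into the container:
export TIMER_SERVICE_DB_CLUSTER_SECRET='{"username":"postgres","host":"db.internal.example","dbname":"TimerServiceDB","password":"changeme","port":5432}'

# Helper that extracts one field from the secret (python3 used for JSON parsing):
field () {
  printf '%s' "$TIMER_SERVICE_DB_CLUSTER_SECRET" \
    | python3 -c "import json,sys; print(json.load(sys.stdin)['$1'])"
}

# The JDBC URL that the Quarkus datasource ends up with:
echo "jdbc:postgresql://$(field host):$(field port)/$(field dbname)"
```

The Java startup class does the equivalent: parse the JSON once and push the values into the system properties that the Quarkus datasource reads.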
So far, copilot has generated 3 files representing CloudFormation templates to deploy our ECS service and the needed storage. These files are:
- copilot/api/manifest.yml: used to configure the docker image hardware resources, AWS environments, and the deployment strategy.
- copilot/api/addons/Task.yml: used to create the Task table on AWS DynamoDB.
- copilot/api/addons/TimerServiceDBCluster.yml: used to create the Aurora Postgres Cluster (using serverless v1) on AWS.
As mentioned in the last point, the generated Aurora Cluster uses Serverless version 1. I wanted to use version 2, so I modified this file. Then, we must use the aws-cli command to validate the syntax of the modified CloudFormation template:
# cd copilot/api/addons
# aws cloudformation validate-template \
--template-body file://TimerServiceDBCluster.yml
If there aren’t any errors, we can create our AuroraDB resource on AWS using our new CloudFormation template, only for testing purposes:
# aws cloudformation create-stack \
--stack-name timerservice-auroradb-test \
--template-body file://TimerServiceDBCluster.yml \
--parameters \
ParameterKey=App,ParameterValue=timerservice \
ParameterKey=Env,ParameterValue=dev \
ParameterKey=Name,ParameterValue=AuroraDBCluster \
--capabilities CAPABILITY_NAMED_IAM
I ran into some problems trying to deploy Aurora Serverless version 2. I searched the Internet for Aurora Serverless v2 configuration properties, thinking I had missed setting some of them. The issue I was experiencing was the unsupported “ServerlessV2ScalingConfiguration” property in the CloudFormation template. I had used this configuration property in my last article, in the automation section, with the aws-cli command, and it worked fine:
# aws rds create-db-cluster \
--region us-east-1 \
--engine aurora-postgresql \
--engine-version 13.6 \
--db-cluster-identifier timer-service-db-cluster \
--master-username postgres \
--master-user-password postgres123 \
--db-subnet-group-name timer-service-subnet-group \
--vpc-security-group-ids sg-012121d2a33ebfe56 \
--port 5432 \
--database-name TimerServiceDB \
--backup-retention-period 35 \
--no-storage-encrypted \
--no-deletion-protection \
--serverless-v2-scaling-configuration...
Finally, I found the following message in the official CloudFormation documentation, in the Amazon RDS section:
So, for the moment, we must use Aurora Serverless version 1 for our Timer Service until version 2 is supported by CloudFormation.
Well, it’s time to deploy all the Copilot configurations to AWS:
# copilot svc deploy --name api --env dev
Notice at the end of the output message that copilot indicates we can access our service using “service discovery.” So, let’s go to the Cloud Map service in our AWS console to see the results:
In the previous pictures, you can see the Service Discovery output configuration that registers 2 instances of our Timer Service. Furthermore, this service uses another AWS service called Route53 to register the private DNS of our ECS service instances. Later in this article, we will use the “service registry” to configure our API Gateway.
Besides, you can go to the CloudFormation service on your AWS console to see the results of the created stacks using the copilot CLI:
The API Gateway.
Following the copilot convention, I’ve created a folder called “cloudformation,” where you can find some templates to deploy the missing architecture components. The first file we need to deploy is “SecurityGroupIngress.yml,” which allows the API Gateway route to access the ECS cluster services through port 8080. Validate the syntax of the CF template:
# aws cloudformation validate-template \
--template-body file://cloudformation/1_SecurityGroupIngress.yml
Then, we can deploy the changes on AWS:
# aws cloudformation create-stack \
--stack-name timerservice-ecs-sg-ingress \
--template-body file://cloudformation/1_SecurityGroupIngress.yml \
--parameters \
ParameterKey=App,ParameterValue=timerservice \
ParameterKey=Env,ParameterValue=dev \
--capabilities CAPABILITY_NAMED_IAM
Open the Security Groups in the AWS Console and select the security group whose name contains “copilot-timerservice-dev-env.” Then, in the “Inbound Rules” tab, you will see something like the following picture:
As you can see, we’ve created an inbound rule that allows connections from the CIDR block created by copilot-cli, on port 8080, which is used by our Timer Service instances inside the ECS cluster.
Now it’s the API Gateway’s turn. First, validate the CF template:
# aws cloudformation validate-template \
--template-body file://cloudformation/2_ApiGateway.yml
If there are no errors, we can deploy our API Gateway to AWS:
# aws cloudformation create-stack \
--stack-name timerservice-apigateway \
--template-body file://cloudformation/2_ApiGateway.yml \
--parameters \
ParameterKey=App,ParameterValue=timerservice \
ParameterKey=Env,ParameterValue=dev \
ParameterKey=Service,ParameterValue=api \
ParameterKey=Name,ParameterValue=ApiGateway \
--capabilities CAPABILITY_NAMED_IAM
NOTE: For development purposes, I set the CORS configuration to allow connections from “http://localhost:8100”, which is the URL of our Angular/Ionic app.
After a few minutes, the stack will be completed, and you can verify that in the CloudFormation console:
This stack also creates the necessary log group to monitor the activity in the API Gateway. You can check it out in the CloudWatch console like this:
Another way to get the logs of our service in real time is by opening your terminal and executing the following command:
# copilot svc logs --follow
Now, it’s time to test the internal service communication from the API Gateway to the ECS cluster. For this purpose, we need to know the public DNS of our newly created API Gateway. Go to the CloudFormation console and click on the recently completed stack of our API Gateway:
Once inside the stack page, click on the “Outputs” tab and copy the “Value” field that contains the public DNS endpoint of our API Gateway:
Open the Postman tool and execute a GET request to obtain a JSON template to use in the different service operations:
Now, open a new tab and paste the JSON template into the Body section of the request to perform a POST operation. Update the required Task fields with a proper day, hour, and minute. Then, execute the POST operation to create your desired Task:
After you successfully create the Task, go to your terminal and see the logs for the previous operations:
When the time comes to execute the programmed Job, you will see the corresponding message logs in your terminal:
Finally, you can execute a GET operation against our API to obtain the current Tasks stored on DynamoDB:
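If you prefer the terminal over Postman, the same three operations can be sketched with curl. The endpoint, the route paths, and the Task field names below are placeholders, not the service’s real ones; take the actual URL from the stack’s “Outputs” tab:

```shell
# Placeholder endpoint; use the Value from your API Gateway stack's "Outputs" tab.
API_URL="https://ab12cd34ef.execute-api.us-east-1.amazonaws.com"

# Example Task payload; the field names are assumptions, so copy the real ones
# from the template returned by your Timer Service.
TASK_JSON='{"taskName":"sample-task","hour":14,"minute":30}'

# 1) Get the JSON template:   curl -s "$API_URL/api/tasks/template"
# 2) Create a Task:           curl -s -X POST "$API_URL/api/tasks" \
#                                  -H 'Content-Type: application/json' -d "$TASK_JSON"
# 3) List the stored Tasks:   curl -s "$API_URL/api/tasks"

# Sanity-check that the example payload is valid JSON before sending it:
printf '%s' "$TASK_JSON" | python3 -m json.tool
```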
And that’s it! We have an active API Gateway that we can use to interact with our frontend app. But before that, let’s populate our DynamoDB table with some fake data for testing purposes.
Persisting Faker Data on DynamoDB.
I wrote an article called “Using Java Faker lib to populate data on DynamoDB employing AWS-SDK,” where I created a project that uses the Java Faker lib to populate fake data on DynamoDB. I will reuse that project in this tutorial, making some changes to target our Task table. So, we need to package the project before using it:
# cd java-faker-data-generator/
# mvn clean package
Then, run the Java project:
# java -jar target/java-faker-data-generator.jar
Enter your AWS profile and the amount of data to generate:
Open the DynamoDB console, and then you should see the generated data:
At this point, we can use our API Gateway to fetch that data:
Now, we are ready to use that fake data to run some tests in our frontend application.
The Frontend.
I’ve created an Angular/Ionic project for our microapp. You can find this project in the “frontend” directory. Also, I used Amplify to initialize the project on AWS:
# amplify init
IMPORTANT: Normally, we push the Amplify-generated directory to our Git repositories. I’ve modified the “.gitignore” file to ignore this directory because it contains sensitive data about my AWS account. Therefore, you will need to create the Amplify infrastructure from scratch and deploy it to your AWS account.
In the following image, I will show you the main configurations that I used to init the Amplify project:
IMPORTANT: Notice the “Build Command” value. I use the “build-prod” script for packaging the project for production use. This will be important when we deploy our app in the Amplify hosting service.
Then, I added “Cognito” support using Amplify for users’ registration and authentication:
# amplify add auth
# amplify push
If you want to enable two-factor authentication in your Angular/Ionic app, you can read my last article called “Adding Amplify Auth with 2-factor authentication to your Ionic/Angular projects” and follow those steps.
Here are the main configurations I used for the authentication service with Cognito:
Now, it’s time to compile and serve our app locally to verify that the login works as expected:
# ionic build
# ionic serve
The previous image shows you the main page after the user has logged in. This is a default template, and I will modify this page in the following steps. The principal idea here is that we have validated the Cognito integration successfully.
The TASK pages.
Let’s create a new page component called “tasks”:
# ionic g page pages/tasks
The UI is something like the following images:
As you can see, I’ve created 3 separate Angular modules: one for authentication, another for shared components, and the last for tasks. I’m doing this because, in the future, I will post an article about Microfrontends. With this kind of modularization, dividing the app into smaller ones will be less complicated.
Another setting we must be aware of is indicating to our app the API endpoint of our tasks. We can do this by updating the files inside the “src/environments/” folder with something like this:
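As a sketch only (the property name and the endpoint are placeholders chosen for illustration, not the project’s real values), the “environment.ts” file could look like this:

```typescript
// src/environments/environment.ts — illustrative sketch; replace the URL with
// the API Gateway endpoint from your CloudFormation stack's "Outputs" tab.
export const environment = {
  production: false,
  // Hypothetical property name; use whatever key the tasks service in the app expects.
  tasksApiUrl: 'https://ab12cd34ef.execute-api.us-east-1.amazonaws.com',
};
```

The “environment.prod.ts” counterpart would carry `production: true` with the same endpoint.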
The final pages shown by the app are the following:
Remember that I’m not an expert in UX design, but I do the best I can 😉. The main idea here is to prove the functionality of our app in terms of frontend and backend integration. Later, we will be deploying the entire app to AWS.
The Native App on AWS.
We have deployed an ECS infrastructure using the Copilot CLI and an API Gateway using the CloudFormation CLI. The only part we haven’t deployed is our Amplify app. Don’t confuse this with the services, like Cognito, that Amplify has already deployed on AWS; I mean that our Angular/Ionic app is still running in our local environment, so we need to push it to the Amplify hosting service on AWS.
First, go to the Amplify console on AWS, and you’ll see our Timer Service app. But it doesn’t have a hosting environment yet, only the backend environment, so let’s create a hosting environment for our app:
In my case, I have the source code on GitHub, so I choose that service provider:
AWS automatically recognized my GitHub account and requested authorization to read all the public repos in my account. You can fork my repo into your account to make the desired changes and deploy it to your AWS account.
Then, we must select the Timer Service repository and specify the directory of our Amplify/Ionic project inside the monorepo:
In the next step, we need to select the Amplify environment (created initially as “dev”) and the required IAM role that Amplify will use to deploy the infra on AWS. If you don’t have one, Amplify generates one for you:
In the next step, we’ll see the details of the options selected before deploying our project to Amplify hosting:
NOTE: As I don’t deploy the “amplify” directory to my GitHub repo (as I mentioned before), you will notice in the previous image that the “Build Settings” property has the “Auto-detected” value. When I configured the Amplify project, I entered another build script that packages the project for production use. That configuration doesn’t appear here because my “amplify” folder isn’t in the repo. The main idea here is to specify the correct “Building Script” for production use when you deploy your project to the Amplify service. This is like a best practice approach 😉.
When you click on the “Save and deploy” button, you should see the deployment flow as shown in the following image:
If the whole process is OK, you should see all the steps checked, as follows:
Notice that Amplify provides us with a URL (in blue) where we can access our app:
So, let’s see what happens when we try to log in:
That’s because the API Gateway CORS configuration allows connections only from “localhost.” We must update the CF file of our API Gateway again to also allow connections from “https://…amplifyapp.com”, as you will see in the following code snippet:
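For an HTTP API defined with CloudFormation, the CORS block takes roughly the following shape (the Amplify domain below is a placeholder; use the URL that Amplify assigned to your app):

```yaml
# Illustrative CorsConfiguration for an AWS::ApiGatewayV2::Api resource.
CorsConfiguration:
  AllowOrigins:
    - 'http://localhost:8100'                     # local Ionic dev server
    - 'https://dev.d1a2b3c4d5e6f.amplifyapp.com'  # placeholder Amplify URL
  AllowMethods:
    - GET
    - POST
    - PUT
    - DELETE
    - OPTIONS
  AllowHeaders:
    - Content-Type
    - Authorization
```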
So then, we must update the CF template using the AWS CLI. Remember to replace the parameter values with your own:
# aws cloudformation update-stack \
--stack-name timerservice-apigateway \
--template-body file://cloudformation/1_ApiGatewayAuthorizer.yml \
--parameters \
ParameterKey=App,ParameterValue=timerservice \
ParameterKey=Env,ParameterValue=dev \
ParameterKey=Name,ParameterValue=ApiGateway \
ParameterKey=AppClientID,ParameterValue=abcdefgh1234567890 \
ParameterKey=UserPoolID,ParameterValue=us-east-xyz123 \
--capabilities CAPABILITY_NAMED_IAM
After the CF command finishes, you can try to access your app again. After you log in, you should see the app’s main page with all the fake data created before. Furthermore, no error logs should appear in your browser console:
And that’s it! Now you have a cloud-native app running on AWS using services like Amplify and ECS Copilot. The next step is to validate the Cognito JWT in our API Gateway, which I will do in my next article. I know there is a lot of work here, but I will try to break out some of the crucial configurations for this project and explain them in other tutorials.
Building Scripts.
As you know, I like automating some of the procedures I made in my tutorials. So in the main folder of this repo, you will find a script called “run-scripts.sh” that will show you an interactive menu with some of the essential steps that we made in this exercise:
# ./run-scripts.sh
You can find the details of these scripts in a folder called “scripts” in the project’s root directory. Those bash scripts execute CloudFormation templates that you can find in a folder called “cloudformation” in each of the projects for the backend or frontend, depending on the case.
That’s it for me today. My next article will show you how to configure Cognito JWT validation for every HTTP request to our API Gateway. I hope to see you next time.
Thanks for reading!