End-to-End Encryption using TLS ECDSA certificate, ACM, and AWS Copilot CLI to deploy Spring Boot Native MS on Fargate ECS.

Andres Solorzano
13 min read · Jun 14, 2023


My previous tutorial requested a signed TLS certificate from AWS Certificate Manager (ACM). We used the certificate ARN in the Application Load Balancer (ALB) deployed by the AWS Copilot CLI. The ALB is configured with TLS termination by default, so the communication to the ECS service was unencrypted but remained private inside the AWS infrastructure.

The idea in this tutorial is to not configure TLS termination on the ALB, so that all HTTPS traffic passes through the ALB to the ECS Task. We also need the Envoy Proxy service as a sidecar container in the ECS Task: it is now in charge of TLS termination before passing the HTTP (unencrypted) traffic to the Spring Boot microservice in the same ECS cluster.

To complete this guide, you’ll need the following tools:

NOTE 1: You can download the source code of the Task Service with all the configurations made in this tutorial from my GitHub repository.

NOTE 2: Having your self-signed certificate validated by a Certificate Authority is unnecessary for this tutorial.

ECDSA Algorithm Considerations.

As I write these lines (June 2023), the latest version of the Envoy Proxy service is 1.26. At this moment, the Envoy Proxy only supports P-256 ECDSA (Elliptic Curve Digital Signature Algorithm) certificates:

On the other hand, ACM (AWS Certificate Manager) supports the following algorithms (taken from its official guide):

Notice that for ECDSA, AWS supports 256-, 384-, and 521-bit keys. So we must use ECDSA 256-bit because of its compatibility with the Envoy Proxy service.
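If you want to double-check that your local OpenSSL build knows the P-256 curve (OpenSSL calls it prime256v1), a quick check like this should list it:

$ openssl ecparam -list_curves | grep prime256v1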

ECDSA Certificate using OpenSSL.

We must create two certificates: a CA certificate and a CSR (server) certificate. The CA certificate (Intermediate Certificate) is for our top domain name, such as hiperium.com. The CSR certificates (Server Certificates) are for the app servers interacting with our final users, such as hiperium.cloud. The certificate for the last domain in these examples must be imported into the ACM service and referenced by the ALB using the certificate's ARN.

So, let’s go to the “utils/certs” directory and execute the following commands to generate your CA certificate (intermediate):

$ openssl ecparam \
    -name prime256v1 \
    -genkey \
    -out ca-key.pem \
    -outform PEM

$ openssl req -new -x509 -sha256 \
    -key ca-key.pem \
    -out ca-cert.pem \
    -days 365

The second OpenSSL command will prompt you for the certificate's subject information (country, organization, Common Name, and so on).
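If you prefer to skip the interactive prompts, you can pass the subject directly with the "-subj" flag; the values below are only illustrative, so adjust them to your own organization and top domain:

$ openssl req -new -x509 -sha256 \
    -key ca-key.pem \
    -out ca-cert.pem \
    -days 365 \
    -subj "/C=EC/O=Hiperium/CN=hiperium.com"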

Now you have two files: the CA certificate and its private key. You can run the following command to validate the data entered previously:

$ openssl x509 -in ca-cert.pem -noout -text

You should see output like this:

Notice that the Signature Algorithm is ecdsa-with-SHA256, and the Public-Key is 256 bits long.
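If you only care about those two fields, you can filter the command output; it should show just the signature algorithm and the public key size:

$ openssl x509 -in ca-cert.pem -noout -text | grep -E "Signature Algorithm|Public-Key"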

Then, generate your CSR certificate (server) with the following commands:

$ openssl ecparam \
    -name prime256v1 \
    -genkey \
    -out server-key.pem \
    -outform PEM

$ openssl req -new -sha256 \
    -key server-key.pem \
    -out server-cert.pem

The idea is to use your CA certificate (intermediate) to sign all your CSR certificates. Before doing that, let's create an extension file that the CSR must use to declare its subject alternative name:

$ echo "subjectAltName = DNS:api.example.io" > v3.ext

Now you can sign your CSR certificate using the CA certificate:

$ openssl x509 -req -days 365 -sha256 \
    -in server-cert.pem \
    -CA ca-cert.pem \
    -CAkey ca-key.pem \
    -out server-signed.pem \
    -extfile v3.ext \
    -CAcreateserial
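As a quick sanity check, you can ask OpenSSL to verify the new server certificate against the CA certificate that signed it; it should answer with an "OK":

$ openssl verify -CAfile ca-cert.pem server-signed.pem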

Now it's time to configure our Envoy Proxy service as the TLS termination point, so that all communication is passed to the Spring Boot Native microservice unencrypted over HTTP.

Envoy Proxy as a TLS Termination service.

Create an “envoy.yaml” file in the “utils/docker/envoy” folder. You can start from the sample configuration file on the official Envoy Proxy website. Then, we must configure the following sections:

  1. Add a listener on port 443 to accept any HTTPS communication that might come from any source:

static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 443

2. Update the route_config section to redirect the HTTP traffic to our Tasks Service cluster in an unencrypted manner (TLS Termination):

route_config:
  virtual_hosts:
  - name: default
    domains:
    - "<fqdn_server_name>"
    routes:
    - match:
        prefix: "/"
      route:
        cluster: tasks-service

Replace the <fqdn_server_name> value with the Fully Qualified Domain Name (FQDN) used in your CSR certificate.

3. Create the Tasks Service cluster that receives the HTTP communication. Remember that our Tasks Service accepts HTTP connections on port 8080:

clusters:
- name: tasks-service
  type: STRICT_DNS
  load_assignment:
    cluster_name: tasks-service
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: tasks-api
              port_value: 8080

The tasks-api value declared in the address property is the hostname of our Spring Boot microservice deployed in the local Docker cluster.

4. Last but not least, we need to specify the certificate files that our Envoy Proxy service must use for HTTPS connections:

transport_socket:
  name: envoy.transport_sockets.tls
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
    common_tls_context:
      tls_certificates:
      - certificate_chain:
          filename: "/etc/server.cert"
        private_key:
          filename: "/etc/server.key"

Notice that the certificate files are in the “/etc” directory inside the Envoy Proxy container.
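Before wiring everything into Docker Compose, you can optionally ask Envoy to validate the file without starting the proxy. This is only a sanity check and assumes you run it from the project's root directory with the certificate files already generated:

$ docker run --rm \
    -v "$(pwd)/utils/docker/envoy/envoy.yaml:/etc/envoy/envoy.yaml" \
    -v "$(pwd)/utils/certs/server-signed.pem:/etc/server.cert" \
    -v "$(pwd)/utils/certs/server-key.pem:/etc/server.key" \
    envoyproxy/envoy:v1.26-latest \
    --mode validate -c /etc/envoy/envoy.yaml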

Now, let's wire all these configurations into our Docker Compose setup before deploying to AWS.

Docker Compose with Envoy Proxy service.

So far, we have a “docker-compose.yml” file with 3 container definitions: Postgres, LocalStack, and the Tasks Service. The new kid on the block is our Envoy service, so let's add its container definition:

  tasks-proxy:
    image: envoyproxy/envoy:v1.26-latest
    container_name: tasks-proxy
    volumes:
      - ./utils/certs/ca-cert.pem:/etc/ca.cert
      - ./utils/certs/server-signed.pem:/etc/server.cert
      - ./utils/certs/server-key.pem:/etc/server.key
      - ./utils/docker/envoy/envoy.yaml:/etc/envoy/envoy.yaml
    ports:
      - "443:443"
    networks:
      - tasks-network

Notice that we’re passing 4 files to the container in the volumes section: the Envoy configuration and the certificates.

Furthermore, see that the names of the certificate files must match the defined ones in the “envoy.yaml” file:

tls_certificates:
- certificate_chain:
    filename: "/etc/server.cert"
  private_key:
    filename: "/etc/server.key"

You must also modify the “utils/docker/compose/tasks-api-dev.env” file with your Cognito User Pool ID deployed in the IdP-Pre AWS account:

CITY_IDP_ENDPOINT=https://cognito-idp.<your_cognito_region>.amazonaws.com/<your_cognito_user_pool_id>

NOTE: You can obtain your Cognito User Pool ID by running the following command:

$ aws cognito-idp list-user-pools \
    --max-results 1 \
    --output text \
    --query "UserPools[?contains(Name, 'CityUserPool')].[Id]" \
    --profile <your_idp_profile>

Now, you can deploy your Docker cluster from the project’s root directory:

$ docker compose up --build

You shouldn’t see errors in the service’s deployment:

Nor in the Envoy Proxy service deployment:
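If you want to inspect the Envoy container output on its own, you can tail its logs from another terminal:

$ docker compose logs -f tasks-proxy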

Then, open a new terminal window and edit your “/etc/hosts” file with the following line:

127.0.0.1  <fqdn_server_name>

With this setting, you can access the Tasks Service locally using your CSR FQDN domain name.

Try to access the health check endpoint using the cURL command. Skip the TLS certificate validation with the "-k" flag:

$ curl -k https://<fqdn_server_name>/actuator/health

You should see a response like this:
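By default, the Spring Boot Actuator health endpoint returns a small JSON body similar to this:

{"status":"UP"}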

So far, so good. Our local configuration is working as expected. Please remove the line in your “/etc/hosts” file to avoid future issues.

As our CSR certificate works locally, let’s import it into AWS.

Importing CSR certificate into ACM.

This section is very similar to the one in my previous tutorial, where we requested a certificate from ACM. So let's run the following command inside the “utils/certs” directory:

$ aws acm import-certificate \
    --certificate fileb://server-signed.pem \
    --private-key fileb://server-key.pem \
    --certificate-chain fileb://ca-cert.pem \
    --profile 'tasks-dev'

You will probably see an error like this:

If you open the “server-key.pem” file, you will see a structure similar to this:

-----BEGIN EC PARAMETERS-----

-----END EC PARAMETERS-----
-----BEGIN EC PRIVATE KEY-----

-----END EC PRIVATE KEY-----

The file contains a so-called “Parameters Block” in its header. Here is what ACM says about ECDSA private key files in its official user guide:

ACM says it removes the “parameters block” during the import process when we use the AWS console, not the CLI.

So let's remove this block using the OpenSSL CLI, without deleting our previous CSR private key:

$ openssl ec \
    -in server-key.pem \
    -out server-key-no-header.pem \
    -outform PEM

IMPORTANT: I didn't delete the previous CSR private key file because the header's parameters block typically includes information about the elliptic curve parameters, and it can be helpful in specific scenarios where we need to provide the complete private key data. So it's a good idea to keep the previous private key file and generate a new one without the header parameters only for the services that require it.

If you print the “server-key-no-header.pem” file, you should see the private key without the Parameters Block header:
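The file should now contain only the private key block, something like this (the key material itself is omitted):

-----BEGIN EC PRIVATE KEY-----
...
-----END EC PRIVATE KEY-----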

Now we can try once more to import our CSR certificate, using the newly created private key:

$ aws acm import-certificate \
    --certificate fileb://server-signed.pem \
    --private-key fileb://server-key-no-header.pem \
    --certificate-chain fileb://ca-cert.pem \
    --profile 'tasks-dev'

You should see the ARN value in the command output, indicating that our certificate was successfully imported into ACM.

So let's go to the ACM console to verify that our certificate was registered:

Notice that the key algorithm is ECDSA P-256, and the additional domain name is the one we added to the CSR certificate using the "-extfile" flag.
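You can verify the same details from the terminal if you prefer, passing the ARN returned by the import command; the query below only selects a few fields for readability:

$ aws acm describe-certificate \
    --certificate-arn <certificate_arn> \
    --profile 'tasks-dev' \
    --query "Certificate.{Domain:DomainName,KeyAlgorithm:KeyAlgorithm,SANs:SubjectAlternativeNames}"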

Now let’s try to deploy our Spring Boot Native microservice into AWS.

Deploying to Fargate ECS using AWS Copilot CLI.

Let’s modify our “copilot/api/manifest.yml” file with the following HTTP configuration followed by a new sidecar section:

http:
  path: '/'
  alias: 'example.io'
  healthcheck:
    path: '/actuator/health'
    ...
  target_container: envoy

sidecars:
  envoy:
    port: 443
    image:
      build:
        context: .
        dockerfile: ./utils/docker/envoy/Dockerfile
    variables:
      ENVOY_UID: 0

When we use the target_container property, the ALB forwards all incoming traffic to our Envoy Proxy sidecar container, which listens on port 443 for secure connections.

IMPORTANT: We must set the ENVOY_UID environment variable because the Envoy Proxy container doesn't run as the root user by default. Without it, the ECS Task deploys with an error indicating that the Envoy container doesn't have permission to bind port 443. The zero in the ENVOY_UID variable tells the Envoy service to run as the root user at startup.

The Dockerfile of our Envoy Proxy sidecar container is the following:

FROM envoyproxy/envoy:v1.26-latest

COPY ./utils/certs/ca-cert.pem /etc/ca.cert
COPY ./utils/certs/server-signed.pem /etc/server.cert
COPY ./utils/certs/server-key.pem /etc/server.key
COPY ./utils/docker/envoy/envoy-aws.yaml /etc/envoy/envoy.yaml

RUN chmod go+r /etc/envoy/envoy.yaml
EXPOSE 443

This configuration is very similar to the one used in the Docker Compose file. We're copying the certificate files into the container, which exposes port 443 for HTTPS connections.

In the same Copilot configuration file, copy the Cognito User Pool ID you obtained when deploying the Tasks Service using Docker Compose:

variables:
  CITY_TASKS_TIME_ZONE: '-05:00'
  CITY_IDP_ENDPOINT: https://cognito-idp.<idp_aws_region>.amazonaws.com/<cognito_user_pool_id>

Likewise, add the FQDN used in your CSR certificate:

environments:
  dev:
    http:
      alias: 'api.example.io'

Then, execute the following command to obtain the ARN of your CSR certificate located in the ACM service:

$ aws acm list-certificates \
    --includes keyTypes=EC_prime256v1 \
    --profile 'tasks-dev' \
    --output text \
    --query "CertificateSummaryList[?contains(DomainName, 'example.io')].[CertificateArn]"

I'm using the "--includes" flag because we must indicate the key type of our certificate, which is EC_prime256v1. Without this parameter, the AWS CLI command returns an empty list. Also, you must specify your domain name (not the FQDN) in the query parameter to obtain only the certificate ARN we want.

Then, copy the certificate ARN from the previous command's output and paste it into the “copilot/environments/dev/manifest.yml” file:

http:
  public:
    certificates:
      - arn:aws:acm:us-east-1:123456789012:certificate/6faec726

It’s time to use a series of AWS Copilot CLI commands to deploy our Spring Boot Native MS on Fargate ECS using a Cross-Account deployment as we did in my previous tutorial:

$ export AWS_PROFILE=tasks-dep-dev

$ copilot init \
    --app city-tasks \
    --name api \
    --type 'Load Balanced Web Service' \
    --dockerfile './Dockerfile' \
    --port 8080 \
    --tag '1.5.0'

$ copilot env init \
    --app city-tasks \
    --name 'dev' \
    --profile 'tasks-dev' \
    --default-config

$ copilot env deploy \
    --app city-tasks \
    --name dev

$ copilot deploy \
    --app city-tasks \
    --name api \
    --env 'dev' \
    --tag '1.5.0' \
    --no-rollback

In the last command, I'm using the "--no-rollback" flag to tell Copilot not to roll back the deployed infrastructure if an error occurs at deployment time. This way, we can review the logs in CloudWatch or the ECS console to see what happened.
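One way to follow those logs from the terminal, instead of the AWS console, is the Copilot CLI itself:

$ copilot svc logs \
    --app city-tasks \
    --name api \
    --env 'dev' \
    --follow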

If everything is OK, you should see the following output at the end of the last AWS Copilot CLI command:

Now we must register the created ALB domain name in Route 53 before trying to access our service on the Internet.

Register ALB Domain Name in Route 53.

Execute the following command to obtain the recently created ALB domain name:

$ aws cloudformation describe-stacks \
    --stack-name city-tasks-dev \
    --output text \
    --query "Stacks[0].Outputs[?OutputKey=='PublicLoadBalancerDNSName'].OutputValue" \
    --profile 'tasks-dev'

Copy the command output and go to your Route 53 console, where you have registered your domain name. Remember that this service might be in another AWS account, not necessarily in one of the workload or deployment accounts.

Once positioned on your Hosted Zone page, click the "Create record" button to create a new CNAME record, as I did in my previous tutorial. Paste the ALB domain name into the value field and enter the other required values:

In this case, we should wait at least 60 seconds, as indicated by the TTL value, for Route 53 to propagate the change to DNS resolvers.

After the specified TTL time, execute the following command in your terminal window:

$ dig dev.hiperium.cloud

You should see something like the following:

Notice that the CNAME must match the one you provided in your Hosted Zone in the Route 53 console.

As we have registered our ALB domain name on Route 53, we can access our Tasks Service on the Internet using an HTTPS connection.

Testing the Spring Boot Native micro-service.

First, try to access the health check endpoint using the cURL command as we did when we deployed the service locally. Remember to remove the entry we added to the "/etc/hosts" file before executing the following command:

$ curl -k https://dev.hiperium.cloud/actuator/health 

You should get the same response as before:

Now let’s open our Postman tool to perform the same request:

Postman can execute the HTTPS request but notifies us that the connection is not entirely secure because we use a self-signed TLS certificate. Also, notice that the Cipher Name is of type ECDSA.

What happens if we try to open this endpoint from a web browser?:

The browser alerts us that the site is insecure because the certificate is not trusted by any recognized Certificate Authority.

Let’s click on the “Proceed to dev.hiperium.cloud” link to access the site:

Now we can access the health check endpoint with the response we expected.

Please return to our Postman tool and perform the traditional tests we executed in the previous tutorials. First, we need to obtain a valid access token from Cognito. So go to the Authorization section and click the “Refresh” link to get a new access token:

Then, modify the request URL to obtain all City Tasks from the Spring Boot microservice:

Finally, query all City Tasks that were scheduled on Tuesdays:
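For reference, the same kind of authenticated call can be made with cURL; the path and query parameter below are only placeholders, since the exact API route is defined by the Tasks Service code:

$ curl -k \
    -H "Authorization: Bearer <your_access_token>" \
    "https://dev.hiperium.cloud/api/tasks?day=TUE"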

That's it for our testing purposes. Let's clean up all the created resources so we don't pay for resources we're not using:

$ copilot app delete --yes

Before I conclude our tutorial, let’s review some automation tasks I did to make our AWS deployment easier.

Deployment Automation using Shell Scripts.

As usual, I created bash scripts to help us deploy the entire Tasks Service solution into AWS using the previous configuration files, the AWS CLI, and the AWS Copilot CLI commands. You need to execute the following command in the project's root folder:

$ ./run-scripts.sh

You must enter the different AWS profiles to be used by AWS Copilot CLI to perform a Cross-Account deployment for the required infrastructure:

Then, a menu appears with options to deploy the Tasks Service locally using Docker Compose or into AWS using the Copilot CLI tool:

Also, I created some helper bash scripts to perform some prerequisites that we saw in this tutorial, like creating certificates, importing them into ACM, etc.

Finally, the main menu has an option to delete all created resources in AWS. And at the end of this execution, the script reverts all changes you made in the configuration files so you can use them for another deployment.

And that’s all I have for this amazing tutorial. I hope this hands-on has been helpful to you, and I will see you in the next one.

Thanks for reading.
