EDA using Amazon EventBridge, Lambda, and Fargate ECS, with SAM-CLI and a Native Spring Boot MS as Event Source.

Andres Solorzano
21 min read · Jul 15, 2023


Introduction.

When I designed the initial software architecture for our City Task Service, the idea was that when a Quartz Job executes, it calls a method that updates the status of a Device item stored in DynamoDB. The improvement is that the scheduled method now also sends an event message to the EventBridge service (Amazon Event Bus), which routes the event to a Lambda function. A follow-up tutorial will cover storing these events in a particular data store. So, with this brief introduction, let's get started.

To complete this guide, you’ll need the following tools:

NOTE: You can download the project’s source code from my GitHub repository with all the configurations we will make in this tutorial.

IMPORTANT: To execute this project, you must generate 2 TLS certificates: an intermediate CA certificate and a server certificate (generated from a CSR). Please review my previous tutorial for more details.

Another significant change is that our Git repository now has a Monorepo structure. So the “src” folder now contains the Spring Boot micro-service and the Lambda Function. Keep this in mind because, in the following tutorials, the “src” directory will contain other projects.

AWS Lambda with SAM-CLI.

First, install the Serverless Application Model (SAM) CLI in your local environment. If you are using macOS as I do, use Homebrew as follows:

$ brew install aws/tap/aws-sam-cli

After a few minutes, SAM CLI will be installed on your computer. Try to validate the installation by running the following command:

$ sam --version

Now you can go to the “src” folder and execute the following command to initialize our Java function:

$ sam init                      \
--name city-tasks-events \
--runtime java17 \
--architecture arm64 \
--dependency-manager maven \
--config-env dev \
--no-tracing \
--no-application-insights \
--no-beta-features \
--package-type Zip

With these parameters, the SAM CLI will only ask us which template example we want to use to initialize our project.

Notice that for the specified parameters, SAM suggests 2 projects for “Infrastructure Event Management,” which are examples for EventBridge.

After a few arrangements to the created project template, the files are now organized in the following manner:

The next thing to review is our function’s handler class and method:

I prefer to use the RequestStreamHandler interface because I like to control the InputStream operations to get the request's JSON event payload. You can use the Lambda Events library for a typed form of your AWS events, but note that, at the time of writing, this library doesn't have an event class for EventBridge events.
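As a reference, a minimal sketch of such a stream handler could look like the following (the class body is my own illustration; only the class and package names match the handler configured later in this tutorial):

package com.hiperium.city.tasks.events;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;

public class ApplicationHandler implements RequestStreamHandler {

    @Override
    public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException {
        // Working with the raw InputStream gives us complete control over how
        // the JSON event payload is read and deserialized.
        String payload = new String(input.readAllBytes(), StandardCharsets.UTF_8);
        // Unmarshalling and validation of the payload are covered in the next sections.
    }
}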

Talking about custom event objects, the following class represents the required parameters we must get from our EventBridge event:

These are the required fields for custom events. We can validate this by going to the EventBridge console in the Sandbox option and selecting the “Enter my own” option:

But what about the “detail” property in our custom event class?? That field is for the content of custom event data that we need to pass on to our event:

For now, we must detail the ID of the executed City Task, the ID of the operated Device, and the operation that our Device performed.
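A sketch of these classes could look like this, based on the field names required by the JSON Schema shown later (the mapping of the kebab-case names used in the raw EventBridge payload, e.g. “detail-type” and “task-id”, would be handled with Jackson annotations and is my own assumption):

// EventBridgeCustomEvent.java
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;

@JsonIgnoreProperties(ignoreUnknown = true)   // the raw event carries extra fields (id, account, time, etc.)
public class EventBridgeCustomEvent {
    private String source;
    private String detailType;
    private TaskEventDetail detail;
    // Getters and setters omitted for brevity.
}

// TaskEventDetail.java
public class TaskEventDetail {
    private int taskId;
    private String deviceId;
    private String deviceOperation;
    // Getters and setters omitted for brevity.
}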

Finally, we must create a utility class to perform some operations on the class presented above. The first one unmarshals the JSON event object that resides inside the InputStream object:
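A minimal sketch of that utility method, assuming Jackson is used for the unmarshalling (the class name FunctionsUtil is my own placeholder):

import java.io.IOException;
import java.io.InputStream;

import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

public final class FunctionsUtil {

    private static final ObjectMapper MAPPER = new ObjectMapper()
            .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);

    private FunctionsUtil() {
    }

    // Unmarshal the JSON event payload received in the handler's InputStream.
    public static EventBridgeCustomEvent unmarshal(InputStream inputStream) throws IOException {
        return MAPPER.readValue(inputStream, EventBridgeCustomEvent.class);
    }
}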

Let’s review some useful tools to improve our Lambda functionality and performance.

Java Powertools for AWS Lambda.

The first utilities I will use in this tutorial are the Logging and Validation ones. So we need to add these dependencies in our pom.xml file:

<dependency>
    <groupId>software.amazon.lambda</groupId>
    <artifactId>powertools-validation</artifactId>
    <version>1.15.0</version>
</dependency>
<dependency>
    <groupId>software.amazon.lambda</groupId>
    <artifactId>powertools-logging</artifactId>
    <version>1.15.0</version>
</dependency>

NOTE: You can find more details about Powertools on its official web page. The latest version when I write these lines is 1.15.0.

1. Lambda Logging.

What's the value of using this utility if we can use another library like Logback?? The value of the Powertools library is that it adds valuable context information when logging. You can find a detailed table with the structured values it adds on its official web page.

The only thing that we need to do is add the @Logging annotation to the function's handler method:

The utility class itself automatically performs the rest of the logging function.
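A sketch of how this could look on the handler method, assuming the Powertools @Logging annotation:

import software.amazon.lambda.powertools.logging.Logging;

@Logging(logEvent = true)
@Override
public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException {
    // Powertools enriches every log line with the function name, version, memory size,
    // cold-start flag, and request ID, and can also log the incoming event (logEvent = true).
}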

2. Object Event Validation.

We can also use an annotation in this case, but I prefer to use the utility method because I need to perform 2 validations:

  1. Validate the structure of the JSON Schema itself.
  2. Validate the JSON event against the JSON Schema.

The first validation is executed only once when the class is loaded. So we need to use a static method to perform this validation:
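A sketch of that static initialization, assuming the Powertools ValidationUtils API and a hypothetical classpath location for the schema file:

import com.networknt.schema.JsonSchema;
import software.amazon.lambda.powertools.validation.ValidationUtils;

public final class FunctionsUtil {

    // Loaded once when the class is loaded. The second argument asks Powertools to
    // validate the structure of the JSON Schema itself against its meta-schema.
    private static final JsonSchema EVENT_SCHEMA =
            ValidationUtils.getJsonSchema("classpath:/schemas/custom-event-schema.json", true);

    // ... unmarshal(...) and the other utility methods shown previously.
}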

The idea is to validate the structure of the JSON Schema because, in the future, we might update this schema by adding more parameters that must follow an established convention:

{
  "$id": "https://hiperium.cloud/task-event-schema.json",
  "title": "EventBridgeCustomEvent",
  "type": "object",
  "properties": {
    "source": {
      "type": "string"
    },
    "detailType": {
      "type": "string"
    },
    "detail": {
      "title": "TaskEventDetail",
      "type": "object",
      "properties": {
        "taskId": {
          "type": "integer"
        },
        "deviceId": {
          "type": "string",
          "format": "integer"
        },
        "deviceOperation": {
          "type": "string"
        }
      },
      "required": [ "taskId", "deviceId", "deviceOperation" ]
    }
  },
  "required": [ "source", "detailType", "detail" ]
}

I indicated the fields that must exist when we receive the event, and this schema file itself must be validated too:

The second validation is for our custom event. So using the same utility class, we can define the method used for this purpose:
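A sketch of that validation method, again assuming the Powertools ValidationUtils API:

// Validate the unmarshalled event against the schema loaded at class-initialization time.
// ValidationUtils throws a ValidationException if a required field is missing.
public static void validateEvent(EventBridgeCustomEvent event) {
    ValidationUtils.validate(event, EVENT_SCHEMA);
}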

We need to unmarshal the event from the InputStream object before passing it to the previous method. This is how I did it from the handler's method:
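Putting it together, the handler method could look roughly like this (the processing itself is omitted):

@Logging(logEvent = true)
@Override
public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException {
    // Unmarshal the raw JSON payload and validate it before doing any work with it.
    EventBridgeCustomEvent event = FunctionsUtil.unmarshal(input);
    FunctionsUtil.validateEvent(event);
    // Process the event detail (task ID, device ID, and device operation) here.
}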

Now it’s time to see if all these configurations work locally using Java unit testing.

Unit Testing in Lambda Functions.

The best way to test our functions is to create JSON files with the events we want to test and then load them inside each test method. For example, consider the following JSON file with a custom event for our happy-path execution:

{
  "id": "7bf73129-1428-4cd3-a780-95db273d1602",
  "account": "123456789012",
  "source": "com.hiperium.city.tasks",
  "time": "2015-11-11T21:29:54Z",
  "region": "us-east-1",
  "resources": [],
  "detail-type": "TaskExecution",
  "detail": {
    "task-id": 10,
    "device-id": "123",
    "device-operation": "ACTIVATE"
  }
}

So the test method must load this JSON file into an InputStream object that is then passed to our Lambda function:
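A sketch of such a test, assuming the JSON file lives under src/test/resources/events (the class and method names are my own):

import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;

import java.io.ByteArrayOutputStream;
import java.io.InputStream;

import org.junit.jupiter.api.Test;

class ApplicationHandlerTest {

    private final ApplicationHandler handler = new ApplicationHandler();

    @Test
    void givenValidEvent_whenInvokeHandler_thenNoExceptionIsThrown() {
        // Load the JSON event file from the test resources as an InputStream.
        InputStream requestEvent = getClass().getResourceAsStream("/events/valid-event.json");
        // A mocked Context can be supplied instead of null if the handler uses it.
        assertDoesNotThrow(() ->
                handler.handleRequest(requestEvent, new ByteArrayOutputStream(), null));
    }
}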

If we execute this method in our IntelliJ IDE, we must see a successful execution of our test method:

Notice that the Logging utility adds some information to our logging message, as mentioned before.

Let's repeat this process for the other test cases, creating a unit test for each. I made 5 unit tests for this tutorial, and I hope to create more as we advance in our solution's architecture.

Unit testing is now covered for our Lambda Function. So what about integration tests for the Lambda Function?? Let's see.

Integration Testing in Lambda Functions.

We previously used Testcontainers for our Spring Boot micro-service, so we already know about it. The first thing is to add the Maven dependencies to our pom.xml file:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.testcontainers</groupId>
            <artifactId>testcontainers-bom</artifactId>
            <version>1.18.3</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>org.testcontainers</groupId>
        <artifactId>testcontainers</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.testcontainers</groupId>
        <artifactId>junit-jupiter</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

We also used LocalStack to test our Spring Boot micro-service against a DynamoDB test container. So we need to add this dependency too:

<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>localstack</artifactId>
    <scope>test</scope>
</dependency>

Notice that the LocalStack dependency version comes from the Testcontainers BOM, so we can use it without specifying a version.

Then, as usual, we need to create our parent class that defines and starts the LocalStack test container:

I'm using Lambda as a LocalStack service instead of DynamoDB, as we did in the Spring Boot project. We must also specify the path to the JAR containing our compiled Lambda Function. This is important because, as we'll do later in AWS, we need to upload the JAR that contains our function code.

Then, I created a before-all method that loads and starts the Lambda function inside the LocalStack container. I also created a Lambda client that the test methods use to call the Lambda Function through the endpoint provided by the LocalStack container itself:
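As a reference, a condensed sketch of that parent class, assuming the Testcontainers LocalStack module and the AWS SDK v2 Lambda client (the class name and JAR path are my own placeholders):

import org.junit.jupiter.api.BeforeAll;
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;
import org.testcontainers.utility.MountableFile;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.lambda.LambdaClient;

@Testcontainers
public abstract class AbstractEventsFunctionTest {

    // Start LocalStack with only the Lambda service and copy the packaged function JAR
    // into the container so the Lambda can be created from it.
    @Container
    protected static final LocalStackContainer LOCALSTACK_CONTAINER =
            new LocalStackContainer(DockerImageName.parse("localstack/localstack"))
                    .withServices(LocalStackContainer.Service.LAMBDA)
                    .withCopyFileToContainer(
                            MountableFile.forHostPath("target/city-tasks-events-1.6.0.jar"),
                            "/var/lib/localstack/city-tasks-events-1.6.0.jar");

    protected static LambdaClient lambdaClient;

    @BeforeAll
    static void beforeAll() {
        // Point the Lambda client at the LocalStack endpoint instead of the AWS cloud.
        lambdaClient = LambdaClient.builder()
                .endpointOverride(LOCALSTACK_CONTAINER.getEndpoint())
                .region(Region.of(LOCALSTACK_CONTAINER.getRegion()))
                .credentialsProvider(StaticCredentialsProvider.create(AwsBasicCredentials.create(
                        LOCALSTACK_CONTAINER.getAccessKey(), LOCALSTACK_CONTAINER.getSecretKey())))
                .build();
        // The function itself is created here with a CreateFunctionRequest that points to the
        // JAR copied above and uses the Java 17 runtime (omitted for brevity).
    }
}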

The new part is the creation of the integration test methods that must call the Lambda Function. Remember that in the Unit Testing section we created 3 JSON files that exercise our handler method. Those files contain custom City Tasks events for our test cases, so I can simplify this process using a parameterized test as follows:
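A sketch of that parameterized test, assuming the event files live under src/test/resources/events and using the waitUntilFunctionIsActive() helper shown after the next paragraph:

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.io.InputStream;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.lambda.model.InvokeRequest;
import software.amazon.awssdk.services.lambda.model.InvokeResponse;

class CityTasksEventsFunctionITest extends AbstractEventsFunctionTest {

    @ParameterizedTest
    @ValueSource(strings = {"/events/valid-event.json"})   // add the other event files here
    void givenEventFile_whenInvokeFunction_thenReturnsSuccess(String eventFile) throws Exception {
        waitUntilFunctionIsActive();
        try (InputStream inputStream = getClass().getResourceAsStream(eventFile)) {
            InvokeRequest request = InvokeRequest.builder()
                    .functionName("city-tasks-events")
                    .payload(SdkBytes.fromInputStream(inputStream))
                    .build();
            InvokeResponse response = lambdaClient.invoke(request);
            assertEquals(200, response.statusCode());
        }
    }
}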

It's important to note that the Lambda Function might still be inactive when the test method executes. So we must wait a few seconds until the Lambda Function is active. For this reason, I created a method that waits a maximum of 3 seconds until the Lambda is ready to receive connections:
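A minimal sketch of that wait method, polling the function state through the same Lambda client:

import software.amazon.awssdk.services.lambda.model.GetFunctionConfigurationRequest;
import software.amazon.awssdk.services.lambda.model.State;

// Poll the function state for up to ~3 seconds until LocalStack reports it as Active.
private static void waitUntilFunctionIsActive() throws InterruptedException {
    for (int attempt = 0; attempt < 6; attempt++) {
        State state = lambdaClient.getFunctionConfiguration(GetFunctionConfigurationRequest.builder()
                        .functionName("city-tasks-events")
                        .build())
                .state();
        if (State.ACTIVE.equals(state)) {
            return;
        }
        Thread.sleep(500);
    }
}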

Now we’re ready to execute the integration tests using our IDE to see if all our configurations are working as expected:

The inconvenience is that we need the JAR file created in the target directory before executing the integration tests. So we must rely on the Maven Surefire and Failsafe plugins. The first one excludes the integration test classes at build time:

<build>
    <plugins>
        ...
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>3.1.2</version>
            <configuration>
                <excludes>
                    <exclude>**/*ITest.java</exclude>
                </excludes>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-failsafe-plugin</artifactId>
            <version>3.1.2</version>
            <executions>
                <execution>
                    <id>integration-tests</id>
                    <goals>
                        <goal>integration-test</goal>
                        <goal>verify</goal>
                    </goals>
                    <configuration>
                        <includes>
                            <include>**/*ITest.java</include>
                        </includes>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

The second plugin includes the integration test classes after the packaging phase. So now, we can use the “verify” phase in the Maven command to execute the integration tests:

$ mvn verify

The good news is that we can use the same command to execute integration tests for the Spring Boot project. So now, you can run the integration tests at the project’s root directory:

$ mvn package -DskipTests
$ mvn verify

All integration tests are running in our project. Now let’s talk about the Amazon EventBridge service before continuing with the configurations to get more context.

Amazon EventBridge (Event Bus).

You guessed it. So what is the new question for this section?? And that is: How can this service contribute to our City Tasks solution's architecture?? So let's see what the official documentation says about EventBridge:

EventBridge is a serverless service that uses events to connect application components together, making it easier for you to build scalable event-driven applications… EventBridge event buses are well suited for many-to-many routing of events between event-driven services.

In our case, the event source is our City Task service deployed on Fargate ECS. The event destination is our Lambda Function developed in the previous section. And the event bus is our EventBridge service.

Another essential thing to note is that every AWS account has a default Event Bus ready to use. We can validate this in the EventBridge console:

Now that we know about the EventBridge service in AWS, it's time to talk about the Event Rules that EventBridge uses to orchestrate events.

1. Event Rule Pattern Matching.

Our new question is: how does the Event Bus know which Lambda Function to call based on a raised event?? The answer is that we can define our event rules in the SAM template created previously. This is good news because we used SAM before to define APIs, DynamoDB tables, and other well-known serverless events that invoke a Lambda function, but I hadn't seen before how to create Events for EventBridge, as follows:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  CityTasksEventsFunction:
    Type: AWS::Serverless::Function
    Properties:
      ...
      Events:
        TaskExecutionTrigger:
          Type: EventBridgeRule
          Properties:
            Pattern:
              source:
                - com.hiperium.city.tasks
              detail-type:
                - TaskExecution

Notice the Pattern section. We have the “source” and “detail-type” parameters, which define the pattern the Event Bus must match to call our Lambda Function. I specified these 2 parameters as required in the JSON Schema file defined before. The “detail” parameter is not shown here because it carries the custom event data, but I also declared it as required.

You can add more fields to your pattern. Only remember the required fields that you can use to define your Custom Event pattern:

The SAM CLI will automatically create the IAM Policy the Event Bus uses to call the Lambda Function. This is important because you must always create an IAM Policy to allow other services to reach your Lambda Function, and the SAM CLI now creates one for us.

The Lambda side is now covered and configured. Let's talk about the configurations on the Fargate ECS side.

EventBridge Client for Spring Boot.

The first thing to do is add the following Maven dependency in the pom.xml file of the Spring Boot project:

<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>eventbridge</artifactId>
    <exclusions>
        <exclusion>
            <groupId>commons-logging</groupId>
            <artifactId>commons-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>

IMPORTANT: when I wrote these lines, I experienced an issue when generating the Native Executable:

I found that the problem with the Apache Commons LogFactory class came from the EventBridge dependency, so we must exclude it from our classpath.

We can now create an EventBridge client:

This component is similar to the DynamoDB one, where we need to assign the credentials provider and endpoint override values in case these values are provided at runtime.
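A sketch of such a Spring configuration bean (the property names are my own placeholders):

import java.net.URI;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.eventbridge.EventBridgeClient;

@Configuration
public class EventBridgeClientConfig {

    @Bean
    public EventBridgeClient eventBridgeClient(@Value("${aws.region}") String region,
                                               @Value("${aws.endpoint-override:}") String endpointOverride) {
        var builder = EventBridgeClient.builder()
                .region(Region.of(region))
                .credentialsProvider(DefaultCredentialsProvider.create());
        // The endpoint override is only set for local testing against LocalStack.
        if (!endpointOverride.isBlank()) {
            builder.endpointOverride(URI.create(endpointOverride));
        }
        return builder.build();
    }
}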

Now that we have configured the EventBridge client, it's time to update the JobExecution class to call the EventBridge service when a Quartz Job is executed:

Notice that I added a new method called triggerCustomEvent, which builds the custom Task Event object and passes it to the EventBridge client. We are setting the City Task ID, the Device ID, and the device operation fields because the Custom Event requires them.

The other required values must be set in the EventBridge request object:

Finally, notice in the previous code snippet that we assigned the source, detail type, and detail parameters. Remember that those fields are also required by the EventBridge service for our Custom Event.
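A sketch of how this could look inside the JobExecution class, using the AWS SDK v2 EventBridge client (the Task accessor methods are my own assumptions):

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

import software.amazon.awssdk.services.eventbridge.model.PutEventsRequest;
import software.amazon.awssdk.services.eventbridge.model.PutEventsRequestEntry;

// Called from the Quartz Job after the Device item has been updated in DynamoDB.
private void triggerCustomEvent(Task task) throws JsonProcessingException {
    TaskEventDetail detail = new TaskEventDetail();
    detail.setTaskId(task.getId());                       // ID of the executed City Task
    detail.setDeviceId(task.getDeviceId());               // ID of the operated Device
    detail.setDeviceOperation(task.getDeviceOperation()); // operation performed on the Device

    PutEventsRequestEntry entry = PutEventsRequestEntry.builder()
            .source("com.hiperium.city.tasks")   // must match the Event Rule pattern
            .detailType("TaskExecution")         // must match the Event Rule pattern
            .detail(new ObjectMapper().writeValueAsString(detail))
            .eventBusName("default")
            .build();
    this.eventBridgeClient.putEvents(PutEventsRequest.builder().entries(entry).build());
}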

Before deploying all these configurations into AWS, let’s perform local testing to validate if all is working as expected.

LocalStack on Docker Compose.

It’s time to add our new services in the docker-compose file to complete our solution’s architecture for local testing:

version: '3.9'

services:
  tasks-localstack:
    image: localstack/localstack
    environment:
      - SERVICES=dynamodb,lambda,events
    ports:
      - '4566:4566'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - hiperium-network
  ...

networks:
  hiperium-network:
    driver: bridge

Then, update the “tasks-api-dev.env” file with your Cognito User Pool ID and AWS region:

SPRING_PROFILES_ACTIVE=dev
CITY_TASKS_DB_CLUSTER_SECRET={"dbClusterIdentifier":"city-tasks-db-cluster","password":"postgres123","dbname":"CityTasksDB","engine":"postgres","port":5432,"host":"tasks-postgres","username":"postgres"}
CITY_IDP_ENDPOINT=https://cognito-idp.<idp_aws_region>.amazonaws.com/<cognito_user_pool_id>

Recall that the CITY_IDP_ENDPOINT environment variable is required. Our Spring Boot micro-service is configured with Spring OAuth2, so if you don't provide this Cognito endpoint, the micro-service will not start.

Then, we must create the “resources.sh” shell script, which has 2 parts: one that creates our Lambda Function and another that creates the EventBridge event rule and routing. So let's start with the first part:

awslocal lambda create-function                         \
--function-name 'city-tasks-events' \
--runtime 'java17' \
--architectures 'arm64' \
--role 'arn:aws:iam::000000000000:role/lambda-role' \
--handler 'com.hiperium.city.tasks.events.ApplicationHandler::handleRequest' \
--zip-file fileb:///var/lib/localstack/city-tasks-events-1.6.0.jar

awslocal lambda create-function-url-config \
--function-name city-tasks-events \
--auth-type NONE

Notice that I'm using the “awslocal” command instead of the traditional “aws” one. That's because we're creating these resources inside the LocalStack container, similar to what we did previously for our Integration Tests with Testcontainers.

Then it’s time for the second part of our required resources using the same shell script file:

awslocal events put-rule        \
--name city-tasks-event-rule \
--event-pattern "{\"source\":[\"com.hiperium.city.tasks\"],\"detail-type\":[\"TaskExecution\"]}"

lambda_arn=$(awslocal lambda get-function \
--function-name city-tasks-events \
--query 'Configuration.FunctionArn' \
--output text)

awslocal events put-targets \
--rule city-tasks-event-rule \
--targets "Id"="1","Arn"="$lambda_arn"

Here, I'm creating the event rule and then the target for the event. We need the Lambda Function ARN for the target, so I'm fetching it first.

Finally, we need to pass this shell script to the LocalStack container to create these resources at runtime:

version: '3.9'

services:
  tasks-localstack:
    ...
    volumes:
      - ./utils/docker/localstack/resources.sh:/etc/localstack/init/ready.d/resources-setup.sh
      - ...
    networks:
      - hiperium-network

It's the moment of truth. So let's start the Docker Compose cluster with these changes to see the results:

$ docker compose up --build

You must see that the LocalStack container created our resources at runtime:

Remember from our previous tutorial that for local testing, we modified the “/etc/hosts” file by adding the following line:

127.0.0.1  <your_fqdn_server_name>

So instead of sending the request to your AWS account, the HTTPS requests will be sent locally to your Docker cluster.

Then, open your Postman tool and try to access the health endpoint to see if our City Task service is responding securely:

As our health check endpoint is responding successfully, let's try to find all created City Tasks, but recall that you need a valid JWT before accessing a secure endpoint. If you followed my previous tutorial, you might have saved the Cognito IdP endpoint used to get a valid access token:

Then, you can send the request and must get a response like the following:

Finally, the moment of truth. Try to create a new City Task using the Postman tool, sending a POST request with a future date. Use your current date plus a minute (or two) for the Quartz Job execution. The request is the following:

In your terminal, you must see the logs for the created City Task:

You must wait until the Quartz Job is executed, and you must see logs in the terminal indicating that the DynamoDB table was updated and an event was sent to EventBridge:

Notice that the yellow words are from our LocalStack container. We’re getting an HTTP 200 response when calling our Lambda Function, indicating the call was successful.

I couldn’t see the logs of our Lambda Function. LocalStack simulates sending logs to the CloudWatch service inside the container, but this is only for simulation purposes. If you want to validate this, you must open a new terminal window and create the following alias:

$ alias awslocal="AWS_ACCESS_KEY_ID=test AWS_SECRET_ACCESS_KEY=test AWS_DEFAULT_REGION=us-east-1 aws --endpoint-url=http://localhost:4566"

Notice that the alias uses the regular “aws” command but adds the “--endpoint-url” flag to invoke the LocalStack container instead of the AWS cloud.

Then, execute the following command using the awslocal alias to invoke the Lambda Function and try to get its latest logs:

$ awslocal lambda invoke \
--function-name city-tasks-events-function output.txt \
--payload file://src/city-tasks-events/src/test/resources/events/valid-event.json \
--cli-binary-format raw-in-base64-out

You must see the following logs in the docker-compose terminal tab:

You can see that LocalStack is only invoking the service but not printing our function's logs. That's because we are validating the process flow of our functionality locally with a kind of mock service. We can try to get the logs using the command line tool, but we will see something like this:

LocalStack again shows us only the invocations to our service, nothing more. But no worries: LocalStack helps us a lot by at least mocking the service call responses; our real validation happens on AWS.

Deploying all services into AWS.

Before starting with the SAM commands, we must build and install our Maven artifacts locally to avoid errors when SAM builds the project, since we now have a parent POM:

$ mvn clean install -f src/city-tasks-events/pom.xml

This command will execute the unit and integration tests. So, in the end, you must see the following output:

Then, we’re ready to execute our SAM build command that builds our Java Lambda Function and generates the required files:

$ sam build --config-env dev

You must get a result like this:

Then, we can deploy our Lambda Function and EventBridge Rule resources into AWS:

$ sam deploy               \
--config-env dev \
--disable-rollback \
--profile tasks-dev

I’m using the “tasks-dev” profile because we’re deploying the entire solution into that AWS account. Recall we’re using a Multi-Account and Cross-Account deployment into AWS. You can read more about this in my previous tutorial, where I wrote about this topic.

Behind the scenes, SAM generates a CloudFormation template with the required resources and then deploys them into AWS:

So far, so good. The next part is to deploy our City Tasks service. This is easy because we have automated this deployment into AWS using shell scripts. So you can read my previous tutorial for more details:

$ ./run-scripts.sh

You must enter the AWS profile names configured in your local environment to access the needed accounts and deploy the required infrastructure on AWS:

If you haven’t imported your TLS-CSR certificate into the ACM service, please select the “Helper menu” and choose option 4 to import it into the ACM service in the “dev” account.

I also added a new procedure to assign an IAM Policy to the ECS Task, so our Spring Boot micro-service will have permission to send events to the EventBridge service:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "events:PutEvents",
      "Resource": "arn:aws:events:<aws_region>:<aws_account_id>:event-bus/default"
    }
  ]
}

Using option 2 from our shell script menu, the shell script deploys our Spring Boot microservice into AWS.

At the end of the script execution, you must see the following output:

If you chose to let the script update the Hosted Zone in your Route53 service with the ALB domain name, you should see that record in your Route53 console:

Then, execute the following command to get the logs from the CloudWatch service of our Copilot CLI application:

$ export AWS_PROFILE=tasks-dep-dev

$ copilot svc logs \
--app city-tasks \
--name api \
--env dev \
--since 15m \
--follow

You must see the logs of our Spring Boot micro-service that is deployed in the Fargate ECS service:

In another terminal window, execute the following command in the project’s root directory to get the logs of our Lambda Function:

$ sam logs                            \
--name CityTasksEventsFunction \
--stack-name city-tasks-events-dev \
--tail \
--profile tasks-dev

You must see a message like the following; notice that the connection remains open to receive new log messages:

Revert the changes made in your “/etc/hosts” file in the previous section because we expect our HTTPS requests to go to AWS now.

Open your Postman tool and try to access the health check endpoint first to see if our Spring Boot micro-service is responding:

Then, send the request to get all created City Tasks. Remember to refresh your access token if it’s expired:

Now, as we did before, let's create a new City Task using a date-time a minute or two in the future so we can validate the Quartz Job execution:

Go to the Copilot CLI terminal. You must see the following logs from the scheduled Quartz Job:

When the Quartz Job is executed, you must see the following logs in the terminal:

These are similar to those we saw in the Docker Compose section, but this time notice that these logs are from AWS.

The important thing is that you must see the SAM logs of our Lambda Function in your opened terminal window for this purpose:

Excellent!! Our solution’s architecture is working on AWS. So we can corroborate that all City Tasks events are being sent by the Spring Boot micro-service and received by the EventBridge service.

Deployment Automation.

As mentioned in the previous section, I’ve created shell scripts to deploy our required services locally (using Docker Compose) or in the cloud (using AWS and Copilot CLI tools).

So now, I added all the SAM CLI commands to these scripts, so with a single menu option, you can deploy the entire infrastructure and applications into AWS. You can also delete all created infrastructure from AWS using a single menu option in the same shell scripts.

echo "BUILDING SAM PROJECT..."
sam build --config-env "$AWS_WORKLOADS_ENV"

echo ""
echo "DEPLOYING SAM PROJECT INTO AWS..."
sam deploy \
--config-env "$AWS_WORKLOADS_ENV" \
--disable-rollback \
--profile "$AWS_WORKLOADS_PROFILE"

The only difference with the AWS deployment is that I'm using the Dockerfile to create the container image of our native Linux executable, which contains the Spring Boot microservice compiled natively.

Parent Maven POM.

I’m passionate about testing, specifically Integration Testing. So I created a parent pom.xml file in the project’s root directory for the 2 Java projects:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.hiperium.city</groupId>
    <artifactId>city-tasks-parent</artifactId>
    <name>city-tasks-parent</name>
    <version>1.6.0</version>
    <packaging>pom</packaging>
    <description>Hiperium City Tasks Parent POM.</description>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <java.version>17</java.version>
        <maven.compiler.source>${java.version}</maven.compiler.source>
        <maven.compiler.target>${java.version}</maven.compiler.target>
    </properties>

    <modules>
        <module>src/city-tasks-api</module>
        <module>src/city-tasks-events</module>
    </modules>
</project>

Notice in the properties section that I'm defining the Java version for the entire project (JDK 17). I did this to establish the parameters the child projects must follow.

Another objective is to package all internal projects and execute unit and integration testing with a single Maven command:

$ mvn package   # or simply
$ mvn test

This way, I can guarantee the quality of our project in an initial form. I'll write about test coverage in a future tutorial, and we'll probably create more unit and integration tests as needed.

Spring Boot Native-Image Issues.

The first issue I found was the one I commented on before, about the LogFactory class initialization at native image build time. The solution was to exclude the Apache Commons Logging library from the EventBridge dependency. The inconvenience of this solution is that we cannot configure Log4j2 because it depends on the Commons Logging library. So, for now, we need to use the Logback logging library that Spring Boot uses by default in its starter projects.

The second issue I found is the JVM memory consumption by GraalVM at native image build time. I'm not sure if this is because of the many apps I have open on my computer, but the solution I found was to increase the JVM memory using the _JAVA_OPTIONS environment variable like this:

$ export _JAVA_OPTIONS="-Xmx12g -Xms8g"

I’ve also used this procedure when generating the native image with the Docker Compose command. In this case, the error was the following:

So you can also solve this error by increasing your Docker engine memory to 8 or 10 GB.

But occasionally, I was still getting the same memory error. So I decided to also increase the number of CPU threads used by the Maven command:

$ mvn -T 4C clean native:compile     \
-Pnative -DskipTests \
-f src/city-tasks-api/pom.xml \
-Ddependency-check.skip=true

I'm also using the “dependency-check.skip=true” property to reduce the memory used when building the native image.

Finally, it's not mandatory to deploy and use the Native Executable locally or in AWS. You can use the standard JAR file to run and deploy the Spring Boot micro-service. However, recall that you can use the native image configurations described in this tutorial for your own projects ;)

Cleaning Up.

You can use option 3 from our main bash script menu to delete all created resources in AWS:

And that's it!!! In my following tutorial, I'll write about a particular database to store our events, and we'll also configure our Lambda Function with GraalVM to obtain a Native Executable using Java on the AWS cloud.

I like to write about these topics, and I hope this article was helpful to you.

Thanks for your reading time.
