Storing EDA events in DynamoDB from a Native Lambda Function built with GraalVM and Spring Cloud Functions for Graviton2 processors.

Andres Solorzano
32 min read · Sep 21, 2023


Introduction.

Our previous tutorial discussed implementing an EDA on AWS. Our solution's architecture consists of 3 parts: the container service, the event router service, and the event processor service. The AWS services applied to this architecture are Fargate ECS, EventBridge, and Lambda. I used Spring Boot for the containerized service and pure Java for the Lambda function, both on Java 17. The idea in this tutorial is to store the EDA events arriving at the Lambda function in a DynamoDB table. Furthermore, the Lambda function must be deployed as a native image built with GraalVM, as we did with the Spring Boot API service.

To recap, the API service was built using Spring Boot 3 and deployed to AWS using the Copilot CLI tool. You can review more details about this project's architecture and components in this article. You can also read this article to recap how we integrated the API service with Amazon Cognito as an OIDC service using OAuth2.

As usual for our tutorials, I use Testcontainers with LocalStack for integration testing inside the Spring Boot and Spring Cloud Functions projects using Maven. For local deployment and service integration testing, I use Docker Compose with LocalStack.

To complete this guide, you’ll need the following tools:

Important Notes.

  • You can download the project’s source code from my GitHub repository with all the steps made in this tutorial.
  • I’ll omit the installation and configuration of the development tools and dependencies for this tutorial. In the case of SAM-CLI, you can review my previous tutorial for details. For the TLS certificate generation, you can check my past article for more information.
  • When I updated some libraries using the brew command, I got the following message:
  • So, as we will not have support beyond August 14, 2023, I encourage you to install SAM CLI using a different mechanism (as I did), following its official installation guide.

How to navigate this tutorial.

This tutorial has 2 main parts. The first covers sections 1 to 8, and the second runs from section 9 onwards. If you want to focus only on the final solution using Spring Cloud Functions (SCF), please go to section 9. If you want to read about my experience using a plain Java project with GraalVM, please read from section 1 onwards.

The first section doesn’t have a GitHub repository because it’s more oriented to my experience building a Java native image. Still, you can review the code snippets in that section to get more context in the second section.

1. Storing Events in DynamoDB.

Let’s start working on our existing Lambda project base, adding the new feature to store EDA events in DynamoDB. The idea is to test our new functionality locally using the JAR version of our Lambda function employing the LocalStack technology.

First, add the following dependencies in the parent Maven pom file. Recall that we used a DynamoDB dependency in the API project to share the same dependency for both projects. The only difference is that this time, we’re using the enhanced dependency, which allows us to use an ORM-like persistence model:

<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>dynamodb-enhanced</artifactId>
</dependency>

So, we can create a new Java class that creates the async client using the enhanced library:

The “DynamoDBUtil” class has the single responsibility of creating the DynamoDB async client, reading an environment variable with the AWS endpoint override URL, which is helpful for integration testing (more in subsequent sections):
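
A minimal sketch of this utility class could look like the following (the environment variable name matches the one we pass to the Lambda function later in this tutorial):

import java.net.URI;
import java.util.Objects;
import software.amazon.awssdk.enhanced.dynamodb.DynamoDbEnhancedAsyncClient;
import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClientBuilder;

public final class DynamoDBUtil {

    private static final String AWS_ENDPOINT_OVERRIDE = System.getenv("AWS_ENDPOINT_OVERRIDE");

    private DynamoDBUtil() {
        // Utility class.
    }

    public static DynamoDbEnhancedAsyncClient getEnhancedAsyncClient() {
        DynamoDbAsyncClientBuilder clientBuilder = DynamoDbAsyncClient.builder();
        if (Objects.nonNull(AWS_ENDPOINT_OVERRIDE) && !AWS_ENDPOINT_OVERRIDE.isBlank()) {
            // Point the client to LocalStack when the override URL is present (integration tests).
            clientBuilder.endpointOverride(URI.create(AWS_ENDPOINT_OVERRIDE));
        }
        return DynamoDbEnhancedAsyncClient.builder()
                .dynamoDbClient(clientBuilder.build())
                .build();
    }
}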

Then, we must create the Event class that has the required data to be stored in DynamoDB:
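
A first sketch of this entity could look like this (only the id and deviceId attributes are required by the table keys; the other fields are illustrative):

import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbBean;
import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbPartitionKey;
import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbSortKey;

@DynamoDbBean
public class Event {

    private String id;
    private String deviceId;
    private String taskId;            // Illustrative payload field.
    private String deviceOperation;   // Illustrative payload field.

    @DynamoDbPartitionKey
    public String getId() { return this.id; }
    public void setId(String id) { this.id = id; }

    @DynamoDbSortKey
    public String getDeviceId() { return this.deviceId; }
    public void setDeviceId(String deviceId) { this.deviceId = deviceId; }

    // Getters and setters for the remaining fields are omitted for brevity.
}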

The “@DynamoDbBean” annotation comes from the DynamoDB enhanced library. The other 2 annotations specify our table’s partition key and sort key.

As you can see, we now need some of the helper libraries we used in the API project, like Lombok and MapStruct. So let's add them to the parent Maven POM so both projects can reuse them:

<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>${lombok.version}</version>
    <optional>true</optional>
</dependency>
<dependency>
    <groupId>org.mapstruct</groupId>
    <artifactId>mapstruct</artifactId>
    <version>1.5.5.Final</version>
</dependency>

So, with MapStruct, we can map the properties we need from the received Lambda event to the ones declared in the Event class:
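
A sketch of the mapper could be as simple as this (the source type name is the custom event class used throughout this tutorial):

import org.mapstruct.Mapper;
import org.mapstruct.Mapping;
import org.mapstruct.factory.Mappers;

@Mapper
public interface EventMapper {

    EventMapper INSTANCE = Mappers.getMapper(EventMapper.class);

    // The "id" property is ignored here because we set it manually before saving the item.
    @Mapping(target = "id", ignore = true)
    Event toEvent(EventBridgeCustomEvent customEvent);
}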

I ignored the “id” property field because we need to specify it manually. After creating our mapper class, we can complete our DynamoDB service class, which is also in charge of persisting the Event data in DynamoDB:
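
A hedged sketch of that service, using the enhanced async client and the “Events” table we create for testing in the next section, could look like this:

import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import software.amazon.awssdk.enhanced.dynamodb.DynamoDbAsyncTable;
import software.amazon.awssdk.enhanced.dynamodb.DynamoDbEnhancedAsyncClient;
import software.amazon.awssdk.enhanced.dynamodb.TableSchema;

public class DynamoDBService {

    private final DynamoDbAsyncTable<Event> eventTable;

    public DynamoDBService() {
        DynamoDbEnhancedAsyncClient enhancedClient = DynamoDBUtil.getEnhancedAsyncClient();
        this.eventTable = enhancedClient.table("Events", TableSchema.fromBean(Event.class));
    }

    public CompletableFuture<Void> putEvent(final Event event) {
        // The partition key is generated manually, as noted for the mapper above.
        event.setId(UUID.randomUUID().toString());
        return this.eventTable.putItem(event);
    }
}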

With the Lombok library, our entity classes must be like the following, which uses the “@Data” annotation:
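
A sketch of the refactored entity could look like this (same fields as before, now with Lombok generating the accessors):

import lombok.Data;
import lombok.Getter;
import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbBean;
import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbPartitionKey;
import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbSortKey;

@Data
@DynamoDbBean
public class Event {

    // Lombok places the DynamoDB key annotations on the generated getter methods.
    @Getter(onMethod_ = @DynamoDbPartitionKey)
    private String id;

    @Getter(onMethod_ = @DynamoDbSortKey)
    private String deviceId;

    private String taskId;            // Illustrative payload field.
    private String deviceOperation;   // Illustrative payload field.
}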

Also, notice the getter annotations added to our Event class. Lombok's “@Getter” annotation can inject other annotations into the generated getter methods. That's useful because the DynamoDB key annotations must be placed at the getter level, so Lombok lets us declare them at the field level, making our code cleaner.

Finally, our Lambda handler class must be as follows:
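
A condensed sketch of the handler could look like this (validation and logging are omitted for brevity, and the incoming event class name is the one used throughout this tutorial):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ApplicationHandler implements RequestStreamHandler {

    private final ObjectMapper objectMapper = new ObjectMapper();
    private final DynamoDBService dynamoDBService = new DynamoDBService();

    @Override
    public void handleRequest(final InputStream input, final OutputStream output, final Context context)
            throws IOException {
        // Unmarshal the EventBridge payload into our custom event object.
        EventBridgeCustomEvent event = this.objectMapper.readValue(input, EventBridgeCustomEvent.class);
        // Map the relevant fields and persist them in the Events table.
        this.dynamoDBService.putEvent(EventMapper.INSTANCE.toEvent(event)).join();
    }
}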

So, after unmarshalling the EventBridge event data into a Java object, we can call the DynamoDBService class to extract the required information and store it in the DynamoDB table.

As the business logic is ready, let's update our existing LocalStack configuration for integration testing.

2. Testcontainers with LocalStack.

So, let’s add the DynamoDB service to our existing test base class:
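
Trimmed to the DynamoDB part, the base class could look like the following sketch (the class name and the endpoint-override wiring for the SDK client are assumptions):

import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;
import org.testcontainers.utility.MountableFile;

@Testcontainers
public abstract class AbstractContainerBaseTest {

    @Container
    protected static final LocalStackContainer LOCALSTACK_CONTAINER =
            new LocalStackContainer(DockerImageName.parse("localstack/localstack:latest"))
                    .withServices(LocalStackContainer.Service.DYNAMODB)
                    // Startup script that creates the Events table inside LocalStack.
                    .withCopyFileToContainer(
                            MountableFile.forClasspathResource("infra-setup.sh"),
                            "/etc/localstack/init/ready.d/infra-setup.sh")
                    // Test data loaded into the Events table by the setup script.
                    .withCopyFileToContainer(
                            MountableFile.forClasspathResource("data-setup.json"),
                            "/var/lib/localstack/events-data.json");
}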

Also, notice that I'm adding 2 files to the LocalStack container. One creates the DynamoDB table, and the other loads test data into our Events table. So let's review the “infra-setup.sh” script, which is in the resources folder:

$ awslocal dynamodb create-table \
    --table-name 'Events' \
    --attribute-definitions \
        AttributeName=id,AttributeType=S \
        AttributeName=deviceId,AttributeType=S \
    --key-schema \
        AttributeName=id,KeyType=HASH \
        AttributeName=deviceId,KeyType=RANGE \
    --billing-mode PAY_PER_REQUEST \
    --region us-east-1

$ awslocal dynamodb batch-write-item \
    --request-items file:///var/lib/localstack/events-data.json

As defined in the Event class, we have partition and sort keys. After creating the table, the second command loads a few items into the Events table for testing.

I'm doing this because I want to reuse the same infra configuration we used in the previous tutorial to start LocalStack with Docker Compose. Still, this time, I split the required infra into smaller files and put them into the projects that need them for testing. So now we don't need a big file with all the required infra to start LocalStack on Docker Compose (more on this in a moment).

So now, we can execute the integration tests using our IntelliJ IDE to see the results:

So far, so good. Now, let's perform functional testing using LocalStack, but this time using Docker Compose.

3. Docker Compose with LocalStack.

As mentioned in the previous section, I split the initial single bash script file (which created the required infra for both projects) into individual script files. The idea is to reuse them for integration testing using Testcontainers, as we did previously. Now, the “docker-compose.yml” file contains the following configuration for LocalStack:

version: '3.9'

services:
  tasks-localstack:
    image: localstack/localstack:latest
    ports:
      - "4566:4566"
    env_file:
      - ./utils/docker/env/localstack-dev.env
    volumes:
      # CITY TASKS API RESOURCES
      - ./src/city-tasks-api/src/test/resources/infra-setup.sh:/etc/localstack/init/ready.d/api-setup.sh
      - ./src/city-tasks-api/src/test/resources/data-setup.json:/var/lib/localstack/api-data.json
      # CITY TASKS EVENTS RESOURCES
      - ./src/city-tasks-events/src/test/resources/infra-setup.sh:/etc/localstack/init/ready.d/events-setup.sh
      - ./src/city-tasks-events/src/test/resources/data-setup.json:/var/lib/localstack/events-data.json
      - ./src/city-tasks-events/target/city-tasks-events-1.7.0.jar:/var/lib/localstack/city-tasks-events.jar
      # CITY TASKS COMMON RESOURCES
      - ./utils/docker/localstack/common-infra-setup.sh:/etc/localstack/init/ready.d/common-setup.sh
      - /var/run/docker.sock:/var/run/docker.sock
    ...

Notice that I used the same strategy for the API project. So, the integration tests used by Spring Boot use the previous files to create and load the required infra and data, respectively.

So first, we must generate the JAR file of our Lambda function as we did in the previous tutorial before starting the docker-compose cluster.

$ mvn package -f src/city-tasks-events/pom.xml

Then, we can start the docker-compose cluster:

$ docker compose up tasks-localstack

NOTE: I need to focus on the Events project in this tutorial. Review my previous tutorial for details on configuring end-to-end encryption in a Spring Boot project, including integration testing.

After executing the docker-compose command, you must see the creation of our required infra in your terminal. Pay attention to the messages about the creation of the Lambda function and the Events table.

Open a new terminal tab and execute the following command, which invokes our Lambda function, sending one of the payloads used in the happy path for the testing scenarios:

$ awslocal lambda invoke \
    --function-name 'city-tasks-events-function' /tmp/out.txt \
    --payload file://src/city-tasks-events/src/test/resources/events/lambda-event-valid-detail.json \
    --cli-binary-format raw-in-base64-out

NOTE: You can install the “awslocal” utility using the following command: python3 -m pip install --upgrade localstack

Return to your docker-compose terminal tab, and you must see the successful execution of our Lambda function:

Excellent. So, after successful testing using LocalStack and Docker Compose, we can finally configure our project to use GraalVM.

4. Native-Image with GraalVM.

As I did in my previous tutorial, I used the “sam init” command to generate a new project template, but this time, I used GraalVM with Java 17. The following is an example of the init parameters you can use to create a project:

If you open this project in the IntelliJ IDE, you will notice that it uses Java 11, not 17. If you execute the project as is, it will run without problems, but we want Java 17. So, we need to adapt some things using the latest versions of the dependencies available at this moment.

The first thing we must add is the following AWS Lambda dependency that is in charge of invoking our Java native image internally:

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-lambda-java-runtime-interface-client</artifactId>
    <version>2.3.1</version>
</dependency>

The other significant change is to add a new profile section that is in charge of building the native image using a GraalVM Maven plugin:

<profile>
    <id>native</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.graalvm.buildtools</groupId>
                <artifactId>native-maven-plugin</artifactId>
                <version>0.9.23</version>
                <extensions>true</extensions>
                <executions>
                    <execution>
                        <id>build-native</id>
                        <goals>
                            <goal>build</goal>
                        </goals>
                        <phase>package</phase>
                    </execution>
                </executions>
                <configuration>
                    <skip>false</skip>
                    <imageName>native</imageName>
                    <buildArgs>
                        <buildArg>--no-fallback</buildArg>
                    </buildArgs>
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>

Then, we must add a new Dockerfile so SAM CLI can use this image to build our project with Maven using the native profile. I will copy only the essential sections of this Dockerfile:

FROM --platform=linux/arm64 public.ecr.aws/amazonlinux/amazonlinux:2

# Graal VM
ENV JAVA_VERSION java17
ENV GRAAL_VERSION 22.3.3
ENV GRAAL_FILENAME graalvm-ce-${JAVA_VERSION}-linux-aarch64-${GRAAL_VERSION}.tar.gz
RUN curl -4 -L https://github.com/graalvm/graalvm-ce-builds/releases/download/vm-${GRAAL_VERSION}/${GRAAL_FILENAME} | tar -xvz
...

# Maven
ENV MVN_VERSION 3.9.4
...

# AWS Lambda Builders
RUN amazon-linux-extras enable python3.8

RUN /usr/lib/graalvm/bin/gu install native-image
RUN ln -s /usr/lib/graalvm/bin/native-image /usr/bin/native-image
RUN ln -s /usr/lib/maven/bin/mvn /usr/bin/mvn

ENV JAVA_HOME /usr/lib/graalvm
ENTRYPOINT ["sh"]

I'm using ARM64 for the Amazon Linux builder image. I have this processor type on my computer and use the same arch type on AWS. The same is true for the GraalVM version. So, you must update this file according to your needs.

I’m using the latest and most stable versions of these tools. The only exception I notice is for the Linux Extras, which use Python 3.8.

Build the image with the following command from the project’s root directory:

$ docker build -t hiperium:native-image-builder -f utils/docker/sam-builder/Dockerfile .

Then, let’s update the “samconfig.toml” file where we must specify the new build method for our Lambda function:

version = 0.1

[dev]
[dev.global.parameters]
stack_name = "city-tasks-events-function-dev"

[dev.build.parameters]
use_container = true
build_image = ["hiperium:native-image-builder"]

Notice that in the build section, we’re specifying the image name of our Docker builder image created previously.

Now, we must create a Makefile that is in charge of building the Maven project using the native profile added previously:

build-CityTasksEventsFunction:
	mvn -T 4C clean package -Pnative -DskipTests
	cp ./src/city-tasks-events/target/native $(ARTIFACTS_DIR)
	chmod 755 ./src/city-tasks-events/target/classes/bootstrap
	cp ./src/city-tasks-events/target/classes/bootstrap $(ARTIFACTS_DIR)

The “bootstrap” file, located in the resources directory, is responsible for running the native Linux executable of our Lambda function once the native image is generated:

#!/bin/bash

./native $_HANDLER

Then, in the “template.yaml” file, we must specify the new build method so SAM can use the builder Docker image to build our Lambda function:

Resources:
  CityTasksEventsFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./src/city-tasks-events
      Handler: com.hiperium.city.tasks.events.function.ApplicationHandler::handleRequest
      Events:
        TaskExecutionEvent:
          Type: EventBridgeRule
          ...
    Metadata:
      BuildMethod: makefile

We can now build our SAM project with all the configurations we have made so far:

$ sam build --config-env dev

Then, try to invoke the Lambda function locally using the following SAM command to verify that it works correctly:

$ sam local invoke CityTasksEventsFunction  \
--event src/city-tasks-events/src/test/resources/events/lambda-event-valid-detail.json

But I’m getting the following error:

It seems that our Linux executable is missing some native libraries.

If you go to the official AWS Lambda Java Support Libraries page, you will find a section for the Java Runtime Interface Client (RIC), a dependency we configured previously. On this page, you will find a configuration section indicating that we can specify the processor architecture to use:

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-lambda-java-runtime-interface-client</artifactId>
    <version>2.3.3</version>
    <classifier>linux-x86_64</classifier>
</dependency>

At this point, the error persists despite the classifier configured previously. So, after searching on the Internet, I noticed that the configuration file “resource-config.json,” located inside the META-INF directory (the configuration used by the Java RIC library), was written for a previous version, 2.1.1. So, we must adapt this configuration file to use the new native libraries:

{
  "resources": {
    "includes": [
      {
        "pattern": "\\Qjni/libaws-lambda-jni.linux-x86_64.so\\E"
      },
      {
        "pattern": "\\Qjni/libaws-lambda-jni.linux-aarch_64.so\\E"
      },
      {
        "pattern": "\\Qjni/libaws-lambda-jni.linux_musl-x86_64.so\\E"
      },
      {
        "pattern": "\\Qjni/libaws-lambda-jni.linux_musl-aarch_64.so\\E"
      }
    ]
  },
  "bundles": []
}

We can verify the name of these native libraries by going to the External Libraries section in the IntelliJ IDE of our project:

If you try to build the SAM project and run it locally again, you will see a different error like this:

The problem concerns the Log4J library, which uses reflection and other features unsupported by GraalVM’s native-image tool. The error indicates that Log4j 2 is trying to create an instance of “DefaultFlowMessageFactory” using reflection, but it’s failing because GraalVM has not included the necessary constructor in the native image.

IMPORTANT: Remember that GraalVM’s native image builder has limitations regarding reflection, dynamic class loading, and other runtime features of the JVM. If your application (or its dependencies) relies on these features, you may need to provide additional configuration to get them to work correctly with native images.

We must incorporate the necessary components of the Log4J library in the native image. This configuration must be placed in the “reflect-config.json” file in the META-INF directory. The content must be like the following:

[
  {
    "name": "com.hiperium.city.tasks.events.ApplicationHandler",
    "allDeclaredConstructors": true,
    "allPublicConstructors": true,
    "allDeclaredMethods": true,
    "allPublicMethods": true,
    "allDeclaredClasses": true,
    "allPublicClasses": true
  },
  {
    "name": "org.apache.logging.log4j.message.DefaultFlowMessageFactory",
    "methods": [
      { "name": "<init>", "parameterTypes": [] }
    ]
  }
]

You must repeat this process for the rest of the classes with the same problem until you can run the Lambda function without errors.

But proceeding this way would be tedious and endless for large projects. Then I remembered that when I was developing the Spring Boot microservice, there was a way to execute the GraalVM Tracing Agent. So, I used this agent to generate the required files containing the components that need to be configured for native executables.

5. GraalVM Tracing Agent.

When building a native image with GraalVM, we’re compiling our Java application to a standalone executable. This process involves a static application analysis to determine what parts of the JDK and our code need to be included in this native image. This is necessary because, in contrast to a traditional JVM, the native image can’t load classes dynamically at runtime, so everything it might need has to be included at build time.

The challenge here is that certain Java features, like reflection, dynamic proxies, and loading resources, inherently involve some dynamic behavior — they may depend on classes or resources that aren’t explicitly referenced in the code and thus wouldn’t be included in the native image.

The Tracing Agent is a tool we can execute alongside our applications in a JVM mode, and it monitors our application’s behavior to see what classes, methods, and resources are being accessed dynamically. Then, it outputs configuration files that describe this dynamic behavior. These config files can be provided to the native-image utility when building a native image. The utility will read these files and include the necessary classes, methods, and resources in the native image, even though they aren’t referenced directly in our code.

So the first thing we must add is the following Maven plugin to execute the Tracing Agent at the “process-classes” phase:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <version>3.1.0</version>
    <executions>
        <execution>
            <phase>process-classes</phase>
            <goals>
                <goal>exec</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <executable>${env.JAVA_HOME}/bin/java</executable>
        <arguments>
            <argument>-agentlib:native-image-agent=config-output-dir=${project.build.directory}/tracing-agent</argument>
            <argument>-cp</argument>
            <classpath/>
            <argument>com.hiperium.city.tasks.events.function.utils.agent.TracingAgentUtil</argument>
        </arguments>
    </configuration>
</plugin>

NOTE: I created a dummy Java class with a static main method to run the tracing agent.
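
The dummy class is nothing more than an entry point for the agent; a sketch could be:

package com.hiperium.city.tasks.events.function.utils.agent;

public class TracingAgentUtil {

    // Dummy entry point: running this class under the Tracing Agent records the reflection,
    // proxy, and resource accesses that the native image will need at build time.
    public static void main(String[] args) {
        System.out.println("Executing GraalVM Tracing Agent analysis...");
    }
}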

As you can see in the previous configuration, I’m requesting the agent to put the resulting analysis in the target directory. So we can now execute the following Maven command to run the tracing agent:

$ mvn process-classes -f src/city-tasks-events/pom.xml

After that, you must see the following files generated in the configured folder inside the target directory:

Notice that the naming convention of these files follows the ones created by the SAM project template in the META-INF directory.

So, copy these files into the “city-tasks-events” directory, which is inside the META-INF/native-image directory, as follows:

If you open the “reflect-config.json” file, you will notice that the ApplicationHandler and DefaultFlowMessageFactory classes we added in the previous section also exist in this file, among other classes.

If you try to rerun the native executable, you will probably get the same error in your terminal. So far, we have been exploring the many problems that arise when generating a native image.

The good news is that we now know which components are causing problems to build or run our native executable. So, let’s try to fix that.

6. Dropping AWS Powertools for Logging.

Welcome to one of the most problematic aspects of software development: dependency migrations. I wrote the previous sections to highlight the difficulty of adding new features, but some of that work is unavoidable.

NOTE: One example of this kind of job is when we migrated Spring Boot from version 2 to version 3. We refactored many classes (and related dependencies) to adapt them to the new framework version.

I don't want to overload this section with content. The idea is to keep using the same SAM template for GraalVM as our project base but remove the AWS Powertools dependencies from the Maven POM file, since we know by now that the Logging library is causing us many problems.

At this point, our project will not compile, so let's comment out the lines that belong to the Powertools Validation part. The Logging component may be the only one making our lives hard. If that is true, we can at least keep the Validation component, and in the following tutorial, we'll use the Tracing one.

So, replace the AWS Powertools logging dependencies with the following SLF4J ones:

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>2.0.7</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>2.0.7</version>
</dependency>

With this, we can add logging to our code without the Log4J dependency. So, use the following line of code wherever you added logging:

private static final Logger LOGGER = LoggerFactory.getLogger(ApplicationHandler.class);

Execute the Tracing Agent to see the required classes and files to generate our native executable without problems. I moved the “process-classes” Maven phase to a new profile:

$ mvn process-classes -f src/city-tasks-events/pom.xml -Ptracing-agent

Copy the files with the critical changes generated by the agent into the respective META-INF directory.

Our Java class was reduced to the following only to test if the native executable is running without errors:

public class ApplicationHandler implements RequestStreamHandler {

    private static final Logger LOGGER = LoggerFactory.getLogger(ApplicationHandler.class);

    @Override
    public void handleRequest(final InputStream inputStream, final OutputStream outputStream, final Context context) {
        LOGGER.info("Hello world!!");
    }
}

We're starting from scratch, and we must go step by step. So, build the project with SAM and then execute the local invocation to see the results:

$ sam build --config-env dev

$ sam local invoke CityTasksEventsFunction \
--event src/city-tasks-events/src/test/resources/events/lambda-event-valid-detail.json

This is the response of the last command:

But I would like to use structured logging as Powertools does. So, replace the previous SLF4J dependencies with the following ones to use Logback, as we did in our API project:

<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-core</artifactId>
    <version>${logback.version}</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>${logback.version}</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback.contrib</groupId>
    <artifactId>logback-json-classic</artifactId>
    <version>${logback.contrib.version}</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback.contrib</groupId>
    <artifactId>logback-jackson</artifactId>
    <version>${logback.contrib.version}</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>${jackson.databind.version}</version>
</dependency>

Then, repeat the preceding process one more time to execute the Tracing Agent, update the “native-image” files accordingly, build the docker image, build the SAM project to generate the native image, and invoke the native image locally using SAM:

1) mvn process-classes -f src/city-tasks-events/pom.xml -Ptracing-agent

2) docker build -t hiperium:native-image-builder -f utils/docker/sam-builder/Dockerfile .

3) sam build --config-env dev

4) sam local invoke CityTasksEventsFunction \
--event src/city-tasks-events/src/test/resources/events/lambda-event-valid-detail.json

And you must see the results of all configurations made so far:

Nice!! But what about the Validation library in the AWS Powertools? Let's try to configure that feature for our native image.

7. Keeping AWS Powertools for Validation.

So far, we have removed the Log4J2 dependencies because they are causing some errors when generating the Java native image. So, continuing with our previous Lambda Function business logic, let’s try configuring the Powertools Validation dependency for our project again:

<dependency>
    <groupId>software.amazon.lambda</groupId>
    <artifactId>powertools-validation</artifactId>
    <version>${aws.powertools.version}</version>
    <exclusions>
        <exclusion>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-lambda-java-events</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-slf4j2-impl</artifactId>
        </exclusion>
    </exclusions>
</dependency>

This time, we must exclude the Log4J2 dependency that comes inside the Validation dependency. I also exclude the Lambda Events dependency, as we're not using it.

IMPORTANT: The fewer dependencies we use for our native image, the better.

Now we can uncomment the lines concerning the Validation logic in our FunctionUtil class:

The difference from our previous tutorial's business logic is that we cannot use the “classpath:/schemas/custom-event-schema.json” location to load our JSON Schema file. That convention works when we generate a JAR executable. So, for our native image, we must pass the JSON Schema to the Powertools validation method as a String object.
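
A rough sketch of that utility, assuming the ValidationUtils class offers a validate(Object, String) overload and that the schema is loaded from the packaged resources, could be:

import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import software.amazon.lambda.powertools.validation.ValidationUtils;

public final class FunctionUtil {

    private FunctionUtil() {
        // Utility class.
    }

    // Loads the JSON Schema as a plain String, since the "classpath:" convention
    // is not resolvable inside the native image.
    public static String loadJsonSchema(final String resourcePath) throws IOException {
        try (InputStream inputStream = FunctionUtil.class.getResourceAsStream(resourcePath)) {
            return new String(inputStream.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    public static void validateEvent(final EventBridgeCustomEvent event, final String jsonSchema) {
        // Throws a validation exception when the event does not match the schema.
        ValidationUtils.validate(event, jsonSchema);
    }
}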

Once the Powertools Validation library has loaded our JSON Schema, we can validate our “EventBridgeCustomEvent” object to verify that the required fields were populated:

Finally, execute one more time our well-known process to generate our Java Native Lambda Function:

1) mvn process-classes -f src/city-tasks-events/pom.xml -Ptracing-agent

2) docker build -t hiperium:native-image-builder -f utils/docker/sam-builder/Dockerfile .

3) sam build --config-env dev

4) sam local invoke CityTasksEventsFunction \
--event src/city-tasks-events/src/test/resources/events/lambda-event-valid-detail.json

You must see the following output in your terminal:

Excellent. So, we can confirm using the Powertools Validation library in our project to generate a Java native image for our Lambda Function.

8. Problems with the DynamoDB Enhanced Client.

Now, let’s add our tested components that store EDA events on DynamoDB back in our project. Then, we can execute the Tracing Agent as before.

$ mvn process-classes -f src/city-tasks-events/pom.xml -Ptracing-agent

You should see the following successful output:

Now, you can compare the “reflect-config.json” and the “resource-config.json” files again. Pass the new values from the target directory to the META-INF/native-image directory files.

Now you can continue with the established steps as we did in previous sections, starting with step 2:

2) docker build -t hiperium:native-image-builder -f utils/docker/sam-builder/Dockerfile .

3) sam build --config-env dev

4) sam local invoke CityTasksEventsFunction \
--event src/city-tasks-events/src/test/resources/events/lambda-event-valid-detail.json

But, new errors appear at native build time in step 3:

I tried to add the problematic classes individually, but it was exhausting because new errors appeared after solving the previous ones. For this, I had to use the following configuration inside the GraalVM native build tools Maven plugin:

<configuration>
    <imageName>native</imageName>
    <buildArgs>
        <buildArg>-H:+ReportExceptionStackTraces</buildArg>
        <buildArg>-H:EnableURLProtocols=http,https</buildArg>
        <buildArg>--initialize-at-run-time=io.netty.channel.AbstractChannel,io.netty.channel.socket.nio.SelectorProviderUtil,io.netty.util.internal.logging.Slf4JLoggerFactory$NopInstanceHolder...</buildArg>
        <buildArg>--trace-class-initialization=com.fasterxml.jackson.databind.cfg.MapperConfig,com.fasterxml.jackson.databind.cfg.BaseSettings,com.fasterxml.jackson.databind.cfg.DatatypeFeatures$DefaultHolder...</buildArg>
    </buildArgs>
</configuration>

Even so, I wasn't able to solve all those errors.

After some time searching on the Internet, I found that the DynamoDB Enhanced Client library has problems with native image building because it relies heavily on Java reflection.

We need another approach to achieve our objective. The good news is that we have some experience building native images with Spring Boot for the API project. So, let's try that, but using the adequate framework for Lambda functions.

9. Spring Cloud Functions to the Rescue.

Let me clarify: when I say “Rescue,” it concerns the native image building. Of course, we could have used SCF from the beginning of this tutorial, but as we saw in previous sections, we were able to generate the JAR file of our function and test it successfully. So that is not the problem. The problem is when we try to generate the native image.

So, as I needed to start from the beginning, I renamed the events project to “city-tasks-events-function” because of the new improvements and the use of the SCF framework. Also, our main Java class now looks like a typical Spring Boot application main class:
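
A sketch of the main class (the class name is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class CityTasksEventsApplication {

    public static void main(String[] args) {
        SpringApplication.run(CityTasksEventsApplication.class, args);
    }
}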

In the Spring Boot application properties file, we must specify where our function classes reside and the function name definition:

spring.cloud.function.scan.packages=com.hiperium.city.tasks.events.function.functions
spring.cloud.function.definition=createEventFunction

For this to work, we must declare our function class implementing a Function interface with the input and output Java types:
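
A sketch of that function class could look like this (the input/output types and the EventService collaborator, shown later in this section, are assumptions):

import java.util.function.Function;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CreateEventFunction implements Function<EventBridgeCustomEvent, Boolean> {

    private static final Logger LOGGER = LoggerFactory.getLogger(CreateEventFunction.class);

    private final EventService eventService;

    public CreateEventFunction(EventService eventService) {
        this.eventService = eventService;
    }

    @Override
    public Boolean apply(final EventBridgeCustomEvent event) {
        LOGGER.debug("Received EventBridge event: {}", event);
        // Bean Validation of the incoming event is omitted here for brevity.
        this.eventService.createEvent(event).join();
        return Boolean.TRUE;
    }
}

Because the class lives in the scanned package, SCF registers it under the bean name “createEventFunction”, which matches the function definition property shown above.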

I'm not using Powertools for validation anymore. Instead, I'm using the standard Jakarta Bean Validation library, as we did in the API project:

So, our EventBridge custom class will be as follows to validate the required fields as we did when we used the Powertools library:
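
A sketch of the custom event class with Jakarta validation annotations (field names follow the EventBridge envelope; the nested detail type is hypothetical):

import jakarta.validation.Valid;
import jakarta.validation.constraints.NotEmpty;
import jakarta.validation.constraints.NotNull;
import lombok.Data;

@Data
public class EventBridgeCustomEvent {

    @NotEmpty
    private String id;

    @NotEmpty
    private String source;

    @NotEmpty
    private String detailType;

    @NotNull
    @Valid
    private TaskEventDetail detail;   // Hypothetical nested class holding the task and device data.
}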

To persist our custom event into DynamoDB, I created a service component as usual:
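
A sketch of this service, using the low-level client with attribute-value maps (the accessor on the detail object is assumed):

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import org.springframework.stereotype.Service;
import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;
import software.amazon.awssdk.services.dynamodb.model.PutItemResponse;

@Service
public class EventService {

    private final DynamoDbAsyncClient dynamoDbAsyncClient;

    public EventService(DynamoDbAsyncClient dynamoDbAsyncClient) {
        this.dynamoDbAsyncClient = dynamoDbAsyncClient;
    }

    public CompletableFuture<PutItemResponse> createEvent(final EventBridgeCustomEvent event) {
        // Low-level item representation: plain attribute-value maps instead of the Enhanced Client.
        Map<String, AttributeValue> item = new HashMap<>();
        item.put("id", AttributeValue.builder().s(UUID.randomUUID().toString()).build());
        item.put("deviceId", AttributeValue.builder().s(event.getDetail().getDeviceId()).build());

        PutItemRequest putItemRequest = PutItemRequest.builder()
                .tableName("Events")
                .item(item)
                .build();
        return this.dynamoDbAsyncClient.putItem(putItemRequest);
    }
}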

Two things to notice here. First, it uses classic Spring annotations like “@Component” or “@Service” in class definitions, as in a standalone web project. Second, it uses the low-level approach of persisting items in DynamoDB with Java attribute-value maps. It's tedious, but we learned that using the DynamoDB Enhanced library produces a lot of errors in the native image.

Finally, we have the DynamoDB Client bean, used to persist the EventBridge custom events in the database:
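
A sketch of the client bean, reading the same endpoint-override environment variable we use with LocalStack:

import java.net.URI;
import java.util.Objects;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClientBuilder;

@Configuration
public class DynamoDbClientConfig {

    @Bean
    public DynamoDbAsyncClient dynamoDbAsyncClient() {
        DynamoDbAsyncClientBuilder builder = DynamoDbAsyncClient.builder();
        String endpointOverride = System.getenv("AWS_ENDPOINT_OVERRIDE");
        if (Objects.nonNull(endpointOverride) && !endpointOverride.isBlank()) {
            // Used by the LocalStack-based tests and the local Docker Compose deployment.
            builder.endpointOverride(URI.create(endpointOverride));
        }
        return builder.build();
    }
}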

These components are similar in some way to the ones defined in the API project, and that’s because we are using the Spring Framework for our Lambda function project.

So you may be asking: where is the difference between SCF and plain Spring Boot? The answer resides in the dependencies:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-validation</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-function-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-function-adapter-aws</artifactId>
</dependency>

The first 2 dependencies come from Spring Boot and the last 2 from Spring Cloud. As we said, we're using a Spring Boot application without exposing any REST endpoints. We need to add the following configuration property for this to work correctly:

spring.main.web-application-type=none

The last dependency tells the SCF framework to aggregate the required AWS adapter components before packaging our Spring Boot project. For this to work, we must update the SAM template file as follows:

Resources:
  CityTasksEventsFunction:
    Type: 'AWS::Serverless::Function'
    Properties:
      CodeUri: src/city-tasks-events-function
      Handler: org.springframework.cloud.function.adapter.aws.FunctionInvoker::handleRequest

Notice that the class and method for the Handler property changed. Now, we’re using a predefined SCF class to receive all function events. This class is a facade that redirects the function messages to our custom function class.

Another difference is the implementation of our Integration Tests:

We’re using an “@FunctionalSpringBootTest” annotation and injecting a FunctionCatalog instance to access our function class and send the required values to test.
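
A sketch of such a test (the Testcontainers/LocalStack wiring from section 2 is omitted here, and the test class name is illustrative):

import java.util.function.Function;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.function.context.FunctionCatalog;
import org.springframework.cloud.function.context.test.FunctionalSpringBootTest;
import static org.junit.jupiter.api.Assertions.assertTrue;

@FunctionalSpringBootTest
class CreateEventFunctionTest {

    @Autowired
    private FunctionCatalog functionCatalog;

    @Test
    void givenValidEvent_whenInvokeFunction_thenStoreEventInDynamoDB() {
        // Look up our function by the name defined in the application properties.
        Function<EventBridgeCustomEvent, Boolean> function =
                this.functionCatalog.lookup(Function.class, "createEventFunction");

        EventBridgeCustomEvent event = new EventBridgeCustomEvent();
        // Populate the required fields of the event here (omitted for brevity).
        assertTrue(function.apply(event));
    }
}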

If we execute this test class, these are the results from the IntelliJ IDE:

So, our integration tests are working as expected. Let’s configure our project to use GraalVM for native image build.

10. Building Lambda Native Image with GraalVM.

First, we must configure the serialization mechanism for the following 3 classes using the “@RegisterReflectionForBinding” annotation:
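
As a sketch, the annotation can be placed on any configuration class; the exact list of classes depends on the project, but following this tutorial it would be something like:

import org.springframework.aot.hint.annotation.RegisterReflectionForBinding;
import org.springframework.context.annotation.Configuration;

@Configuration
@RegisterReflectionForBinding({
        EventBridgeCustomEvent.class,
        TaskEventDetail.class,   // Hypothetical nested detail class.
        Event.class
})
public class NativeRuntimeHintsConfig {
}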

These classes are used for marshaling and unmarshaling with the Jackson library. We need to convert them from JSON to Java and vice versa in some parts of our business logic. Remember that we did this in our API project tutorials before, so here it's the same approach.

Second, we must use the “native” Maven profile from the Spring Boot parent POM. The good news is that our main POM file inherits from the Spring Boot parent:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>3.1.3</version>
        <relativePath/>
    </parent>
    <groupId>com.hiperium.city</groupId>
    <artifactId>city-tasks-parent</artifactId>
    <name>city-tasks-parent</name>
    <version>1.7.0</version>
    <packaging>pom</packaging>

So we don’t need to configure the Maven Shade plugin as we did in the previous section because the Spring Boot Maven plugins will be in charge of this for us:

<profile>
    <id>native</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-assembly-plugin</artifactId>
                ...
            </plugin>
            <plugin>
                <groupId>org.graalvm.buildtools</groupId>
                <artifactId>native-maven-plugin</artifactId>
                <configuration>
                    <imageName>native</imageName>
                    <buildArgs>
                        <buildArg>--enable-url-protocols=http,https</buildArg>
                    </buildArgs>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>build</goal>
                        </goals>
                        <phase>package</phase>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</profile>

The only new thing to notice here is the use of the Maven assembly plugin. This plugin is not configured in the API project because the final native image is packaged inside a Docker image.

In the case of our Lambda function, we need to specify the Lambda handler class that must receive the AWS events, as we saw before when configuring the SAM template file:

org.springframework.cloud.function.adapter.aws.FunctionInvoker

Recall that our Lambda project is not configured to receive any HTTP request like a typical Spring Boot application. Instead, our function’s handler class is the entry point to receive all AWS event messages. For this reason, the Maven assembly plugin adds a standard “bootstrap” file to the JAR file with the following content:

#!/bin/sh

cd "${LAMBDA_TASK_ROOT:-.}"

./native "$_HANDLER"

The “$_HANDLER” environment variable is the SCF class defined previously to receive all AWS event messages. The “native” executable is our Linux native image built using Java 17 and GraalVM.

Finally, we can execute our well-known process to build and test our native image using Docker and SAM CLI in our terminal window:

2) docker build -t hiperium:native-image-builder -f utils/docker/sam-builder/Dockerfile .

3) sam build --config-env dev

4) sam local invoke CityTasksEventsFunction \
--event src/city-tasks-events-function/src/test/resources/events/lambda-event-valid-detail.json

NOTE: You must deploy the LocalStack in another terminal window. You don’t need to deploy all the services configured in the “docker-compose.yml” file. Only the “tasks-localstack” is required.

The following must be the execution output of the second command:

Next must be the execution output of the third command:

In the LocalStack terminal, validate the DynamoDB logs. They must indicate an HTTP 200 code, which means a successful insert:

So far, so good. Now, let’s try to deploy our Lambda native image locally using Docker Compose and LocalStack.

11. Lambda Native Image with Docker & LocalStack.

After successful integration testing using Testcontainers, let's try to deploy our Lambda native image locally using Docker Compose and LocalStack. Recall that we tested our API microservice similarly using Docker Compose in previous tutorials, where LocalStack was used to deploy a DynamoDB table. On this occasion, we'll use LocalStack to deploy the Lambda native function as well.

First, let’s create a Dockerfile with the following content:

#####################################################################################
############################# Stage 1: Builder Image ################################
#####################################################################################
FROM hiperium/native-image-builder AS builder

COPY pom.xml pom.xml
COPY src/city-tasks-events-function/pom.xml src/city-tasks-events-function/pom.xml
RUN mvn dependency:go-offline -B -f src/city-tasks-events-function/pom.xml
COPY src/city-tasks-events-function/src src/city-tasks-events-function/src
COPY src/city-tasks-events-function/utils src/city-tasks-events-function/utils

RUN mvn -T 4C clean native:compile -Pnative -DskipTests -f src/city-tasks-events-function/pom.xml -Ddependency-check.skip=true

#####################################################################################
######################## Stage 2: Native Application Image ##########################
#####################################################################################
FROM public.ecr.aws/lambda/provided:al2

COPY --from=builder /workspace/src/city-tasks-events-function/target/native-assembly.zip /workspace/apps/events-native-assembly.zip
COPY --from=builder /workspace/src/city-tasks-events-function/target/native ${LAMBDA_TASK_ROOT}
COPY --from=builder /workspace/src/city-tasks-events-function/utils/shell/bootstrap ${LAMBDA_RUNTIME_DIR}

CMD [ "org.springframework.cloud.function.adapter.aws.FunctionInvoker::handleRequest" ]

The first stage uses the “hiperium:native-image-builder” image alongside the project’s source code to build the native image. The second stage uses a Lambda image for AL2 instances to run our Lambda native functions. Also, notice that this stage imitates in some way the behavior of the Lambda execution at AWS.

The second stage’s instruction copies a zip file containing the native executable alongside the bootstrap file. This zip file is created by the Maven Assembly plugin at the packaging phase:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-assembly-plugin</artifactId>
    <configuration>
        <descriptors>
            <descriptor>utils/assembly/native.xml</descriptor>
        </descriptors>
        <appendAssemblyId>false</appendAssemblyId>
        <finalName>native-assembly</finalName>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>single</goal>
            </goals>
            <phase>package</phase>
            <inherited>false</inherited>
        </execution>
    </executions>
</plugin>

The idea behind this copy is to pass the zip file to the LocalStack container, which must be in charge of deploying our Lambda native function using the AWS CLI command. For this reason, we must create a shared docker volume and attach it to the service definitions in the docker-compose file:

version: '3.9'
services:

  tasks-events-function:
    image: aosolorzano/city-tasks-events-function:1.7.0
    container_name: tasks-events-function
    build:
      context: .
      dockerfile: src/city-tasks-events-function/Dockerfile
    ports:
      - "9000:8080"
    volumes:
      - tasks-shared-data:/workspace/apps
    ...

volumes:
  tasks-shared-data:

Then, create a bash script called “lambda-setup.sh” with the following content:

$ awslocal lambda create-function                                                             \
--function-name 'city-tasks-events-function' \
--runtime 'provided.al2' \
--architectures 'arm64' \
--zip-file fileb:///workspace/apps/events-native-assembly.zip \
--handler 'org.springframework.cloud.function.adapter.aws.FunctionInvoker::handleRequest' \
--role 'arn:aws:iam::000000000000:role/lambda-role' \
--environment 'Variables={AWS_ENDPOINT_OVERRIDE=http://host.docker.internal:4566}'

The runtime is “provided.al2” because we must use an Amazon Linux machine to execute our Java native image. The architecture is “arm64” because we want to use a Graviton2 processor on AWS. Notice that we provide the zip file path, so the Lambda service will unpack it and put the files in the proper directories, as we did in the last Dockerfile.

As I said before, this zip file will be used by the AWS CLI command to create and deploy the Lambda function locally using LocalStack, configured in the “docker-compose.yml” file as follows:

version: '3.9'
services:
  ...

  tasks-localstack:
    image: localstack/localstack:2.2.0
    ports:
      - "4566:4566"
    depends_on:
      - tasks-events-function
    volumes:
      - tasks-shared-data:/workspace/apps
      - ./src/city-tasks-events-function/src/test/resources/infra-setup.sh:/etc/localstack/init/ready.d/events-setup.sh
      - ./src/city-tasks-events-function/src/test/resources/lambda-setup.sh:/etc/localstack/init/ready.d/events-lambda-setup.sh
      - ./src/city-tasks-events-function/src/test/resources/data-setup.json:/var/lib/localstack/events-data.json

Now, we can deploy the required services to verify if they are deploying successfully:

$ docker compose up tasks-localstack

For the LocalStack deployment, you must see the following output indicating that our native Lambda function was deployed successfully:

Open a new terminal window and invoke the lambda function using the LocalStack CLI to call the Lambda service locally:

$ awslocal lambda invoke \
--function-name city-tasks-events-function /tmp/out.txt \
--payload file://src/city-tasks-events-function/src/test/resources/events/lambda-event-valid-detail.json \
--cli-binary-format raw-in-base64-out

You must see the following output in the Docker Compose terminal:

Notice that at the end of the execution, we also obtained an HTTP 200 code for the DynamoDB service. This means that our HTTP PUT operation was applied successfully.

We can also test our Lambda function by calling its Docker container directly. For this to work, execute the following cURL command:

curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d @src/city-tasks-events-function/src/test/resources/events/lambda-event-valid-detail.json

You must see the following output in the Docker Compose terminal:

Notice that the “tasks-events-function” container logs the messages this time, processes the valid event, and sends it to DynamoDB to store it. Also, notice that the DynamoDB service deployed on the “tasks-localstack” container responds with an HTTP 200, indicating that our HTTP PUT operation was applied successfully.

Now, let's talk about another interesting topic. We can deploy our entire solution locally (API and Events services), but this time using our main bash script:

$ ./run-scripts.sh

In the AWS profiles, enter the “idp-pre” profile where our Identity Provider (IdP) is deployed. This is the only service needed when we deploy our solution locally:

When the defined services are deployed, notice the API service logs to see if it started successfully:

The LocalStack also must be started successfully as before:

Modify the “/etc/hosts” file by uncommenting the following line as we did in the previous tutorial to access our API service securely using HTTPS:

127.0.0.1 dev.hiperium.cloud

Then, open the Postman tool and create a new City Task, choosing a close hour and minute of your actual day to execute the task:

You must see in the LocalStack terminal the logs from the API service indicating the created City Task:

The good news is that when the Quartz Job is executed, the LocalStack sends the event to the Lambda function and runs it:

Notice in the middle of the logs that the LocalStack is creating a service endpoint to call our Lambda function, and it is executed automatically instead of using the “awslocal lambda invoke” command:

Excellent, right?! Now, let’s try to deploy our native Lambda function into AWS.

12. Deploying to AWS using SAM CLI.

So, let’s use the SAM-CLI to build and deploy our Lambda function into AWS, but first, you must log in to the AWS Identity Center service to access our development accounts:

NOTE: In my previous article, you can review how to configure a multi-account environment using Amazon Organizations and IAM Identity Center for more details.

$ hiperium-login

For this tutorial section, you must log in to the “tasks-dev” account because we must deploy our Lambda function and the DynamoDB table in this account. The API deployment will be discussed later.

Then, build and deploy the Lambda function:

$ sam build --config-env 'dev'

$ sam deploy \
--config-env 'dev' \
--disable-rollback \
--profile 'tasks-dev'

You must see the following resources that will be created on AWS:

If you go to your CloudFormation (CF) console, you must see the following stacks created successfully:

Then, execute the following AWS command to fetch Lambda logs in real-time from the CloudWatch Logs (CWL) service:

$ aws logs tail \
--follow /aws/lambda/city-tasks-events-function \
--profile 'tasks-dev'

Open a new terminal window and execute the following AWS command from the project’s root directory to send a valid event payload to the EventBridge service:

$ aws events put-events \
--cli-input-json file://src/city-tasks-events-function/src/test/resources/events/eventbridge-event-valid-payload.json \
--profile 'tasks-dev'

You must see a response like the following indicating that the message was successfully sent to EventBridge:

If you return to the previous terminal tab where you run the AWS logs command, you must see a successfully processed event message like this:

Finally, go to the DynamoDB console to validate if our “Events” table has the new event registered:

Excellent!! We deployed our Lambda function into AWS using the SAM CLI tool. Now, you can delete the deployed SAM project in AWS:

$ sam delete                                       \
--stack-name city-tasks-events-function-dev \
--config-env dev \
--no-prompts \
--profile tasks-dev

In the final section, we must deploy the API and Events projects into AWS using our deployment tools and bash scripts to automate this process.

13. Deploying Linux Native Executables into AWS.

It's time for the moment of truth. If we did our homework with unit and integration tests, the cloud deployment should have at least a 75% chance of success. If something is wrong, it is likely a missing permission or infra-dependency configuration in the AWS Copilot CLI or SAM CLI tools, although we should have caught most of that when we deployed our solution locally using Docker Compose and LocalStack.

We must execute our main bash script again, but this time, we will choose different options:

$ ./run-scripts.sh

As usual, please enter the required AWS profiles to deploy our solution. Recall that we use a multi-account deployment, so you must enter the 3 AWS profiles for this purpose:

Select option 2 to deploy our services and the required infra into AWS. The scripts start deploying the Events and API services:

The process might take a long time because the scripts execute all the Linux commands shown in this tutorial, including the native-image generation with GraalVM for both projects.

NOTE: If you're getting CPU or memory problems while building the native executables, try giving the JVM more memory by exporting the following environment variable: _JAVA_OPTIONS="-Xmx12g -Xms8g"

To deploy the Lambda function, we use the SAM CLI tool. To deploy the API service, we use the Copilot CLI tool. At the end of the command executions, you must see the following output:

Notice that after the API was deployed on ECS, the script creates an IAM policy that allows the ECS Task to invoke the EventBridge service. Also, the script gets the Application Load Balancer (ALB) endpoint, which you must register in your DNS. If your domain name is registered in Amazon Route 53 like mine, the script helps you update the DNS record set.

Now, execute the following command to get the logs of our API service from the ECS cluster. This time, using the AWS Copilot CLI command:

$ export AWS_PROFILE=tasks-dep-dev

$ copilot svc logs \
--app city-tasks \
--name api \
--env dev \
--since 30m \
--follow

We cannot provide the “profile” parameter in the Copilot command. For this reason, we must use the AWS_PROFILE environment variable, so the Copilot CLI internally takes the “tasks-dep-dev” profile to invoke the AWS service. Then, you must see an output like this:

This indicates that our API service was deployed successfully on AWS. So now open a new terminal tab and execute the following command to get the logs of our Lambda function:

$ sam logs -n CityTasksEventsFunction               \
--stack-name 'city-tasks-events-function-dev' \
--tail \
--profile 'tasks-dev'

We used the “profile” parameter to connect to a specified AWS account and fetch the logs in real-time.

Edit your “/etc/hosts” file, and comment out the following line to ensure you’re calling the API service on AWS and not the local one:

# Added by Hiperium City project
# 127.0.0.1 dev.hiperium.cloud
# End of section

Now open the Postman tool and create a City Task that must be executed as soon as possible:

Notice the HTTP 200 response, which indicates the City Task was created successfully. Also, notice in the Copilot CLI terminal that the API service also logs the creation of the requested City Task:

When the time comes to execute the City Task, you must see the following logs in the same Copilot CLI terminal:

In the SAM CLI terminal, you must see the following logs indicating the successful process of the event:

Finally, go to the DynamoDB console to verify that our processed event was stored successfully:

That’s it!!! Awesome, right?? We have validated that our solution is working on AWS as expected. Remember to delete the created infrastructure after you finalize your tests. You can do this using our main bash script again, but this time, select option 3:

I hope this tutorial was helpful to you. In the next one, we will migrate our Ionic/Angular project so we can interact with our API from the web app.

Thanks for your reading time.
