Using Spring WebFlux in AWS Lambda functions with Spring Native and SAM CLI.

Andres Solorzano
11 min read · Aug 11, 2024


Introduction.

In our last tutorial, we deployed a parent POM file to the Maven Central Repository so we could build our Lambda functions without providing local parent POM files, letting build tools like AWS SAM download the dependency directly from Maven Central. Furthermore, other projects can reuse that parent POM to bootstrap their implementations. And speaking of different projects, remember that our Task Service project invokes a Lambda function to update a device’s status in a DynamoDB table. For this reason, that project deployed a device table to read data from and to perform device updates based on task operations.

On this occasion, we’ll create a new project based on the previous article. The idea is to decouple the device artifacts from the task project, making the device domain independent from the task domain. We’ll also add Spring WebFlux to one of the new Lambda functions to update the device status in DynamoDB.

As in previous tutorials, we will use Spring Native with GraalVM (to build a native Linux executable), Spring Cloud AWS, and LocalStack with Testcontainers for integration tests. So, let’s get started.

Tooling.

To complete this guide, you’ll need the following tools:

  • GraalVM with a compatible JDK, to build the native Linux executable.
  • Apache Maven.
  • The AWS CLI and the AWS SAM CLI.
  • Docker, required by LocalStack and Testcontainers.
  • An AWS account with a configured AWS profile.

Note: You can download the project’s source code from my GitHub repository, which contains all the configurations I made in this tutorial.

Lambda function for Reading Operations.

This is the simplest one because it uses the same codebase as the Lambda function we built in a previous tutorial. As our last Lambda used the synchronous DynamoDbClient class to interact with DynamoDB, we’ll take the same approach for this new Lambda function. So, it looks like this:

@Override
public DeviceDataResponse apply(Message<DeviceDataRequest> deviceIdRequestMessage) {
    DeviceDataRequest deviceDataRequest = deviceIdRequestMessage.getPayload();
    BeanValidations.validateBean(deviceDataRequest);

    HashMap<String, AttributeValue> keyMap = new HashMap<>();
    keyMap.put(Device.ID_COLUMN_NAME, AttributeValue.builder().s(deviceDataRequest.deviceId()).build());
    keyMap.put(Device.CITY_ID_COLUMN_NAME, AttributeValue.builder().s(deviceDataRequest.cityId()).build());
    GetItemRequest request = GetItemRequest.builder()
        .key(keyMap)
        .tableName(Device.TABLE_NAME)
        .build();

    DeviceDataResponse response;
    Map<String, AttributeValue> returnedItem = this.dynamoDbClient.getItem(request).item();
    if (Objects.isNull(returnedItem) || returnedItem.isEmpty()) {
        response = new DeviceDataResponse.Builder()
            .httpStatus(HttpStatus.NOT_FOUND.value())
            .errorMessage("Device not found.")
            .build();
    } else {
        Device device = this.deviceMapper.mapDevice(returnedItem);
        response = this.deviceMapper.mapDeviceResponse(device, HttpStatus.OK.value(), null);
    }
    return response;
}
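
The “BeanValidations.validateBean” helper is reused from our shared libraries in previous tutorials. As a reference, here is a minimal sketch of what such a helper could look like with Jakarta Bean Validation (an assumption, not the library’s actual code):

import jakarta.validation.ConstraintViolation;
import jakarta.validation.ConstraintViolationException;
import jakarta.validation.Validation;
import jakarta.validation.Validator;

import java.util.Set;

public final class BeanValidations {

    // A single Validator instance is thread-safe and can be shared.
    private static final Validator VALIDATOR =
        Validation.buildDefaultValidatorFactory().getValidator();

    private BeanValidations() {
    }

    public static <T> void validateBean(T bean) {
        Set<ConstraintViolation<T>> violations = VALIDATOR.validate(bean);
        if (!violations.isEmpty()) {
            // Propagate all constraint violations to the caller.
            throw new ConstraintViolationException(violations);
        }
    }
}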

Nothing new, right? So, let’s continue with the next section, which uses Project Reactor for the other Lambda function.

Spring WebFlux in Lambda functions.

We need to use another Spring Cloud Function (SCF) dependency for this:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-function-webflux</artifactId>
</dependency>

Then, the Lambda function looks like this:

@Override
public Mono<LambdaResponse> apply(Message<byte[]> message) {
    try {
        EventBridgeEvent event = OBJECT_MAPPER.readValue(message.getPayload(), EventBridgeEvent.class);
        return Mono.just(event)
            .doOnNext(BeanValidations::validateBean)
            .doOnNext(this::validateCityStatus)
            .doOnNext(this.deviceRepository::updateDeviceStatus)
            .then(Mono.just(new LambdaResponse.Builder().statusCode(HttpStatus.NO_CONTENT.value()).build()))
            .onErrorResume(UpdateStatusFunction::handleException);
    } catch (IOException exception) {
        return Mono.error(new RuntimeException("Error deserializing the request", exception));
    }
}

Notice two differences between this code and the previous section’s:

  • I used “byte[]” to receive the request payload.
  • I used “Mono” to return the resultant value to the caller.

For the first point, I think the AWS adapter we use in SCF passes the entire request to the function’s handler method as a raw byte array. So, we must know the structure of the request payload beforehand to deserialize it accordingly. As the function is called from an EventBridge event, the Java records look as follows:

public record EventBridgeEvent(
    String id,
    String version,
    String source,
    String account,
    String time,
    String region,
    List<String> resources,

    @JsonProperty("detail-type")
    String detailType,

    @Valid
    @NotNull
    EventBridgeEventDetail detail) {
}

public record EventBridgeEventDetail(

    @NotBlank
    @ValidUUID
    String deviceId,

    @NotBlank
    @ValidUUID
    String cityId,

    @NotNull
    DeviceOperation deviceOperation) {
}

The last record is customized to carry the parameters we expect from the caller. In this case, we must know the device ID, the ID of the city where the device belongs, and the operation we must execute against that device.
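
The “DeviceOperation” enum is not shown here; a minimal sketch, assuming only the two operations that appear in this article’s example payloads, would be:

// Hypothetical sketch: the real enum may define more device operations.
public enum DeviceOperation {
    ACTIVATE,
    INACTIVATE
}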

With this in mind, we can deserialize the byte[] array without error:

EventBridgeEvent event = OBJECT_MAPPER.readValue(message.getPayload(), EventBridgeEvent.class);

This is the first step in our function’s handler method before creating the reactive flow, as we saw previously.
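
The “OBJECT_MAPPER” constant is a shared Jackson ObjectMapper. A minimal sketch, assuming we also want to tolerate EventBridge attributes that our records don’t declare, might be:

import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical setup: reuse a single, thread-safe ObjectMapper instance,
// ignoring any event attributes that our records don't declare.
private static final ObjectMapper OBJECT_MAPPER = new ObjectMapper()
    .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);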

The second point is easy to understand: our Lambda function returns a Mono object wrapping a Java object with the results of the operation:

public record LambdaResponse(
    int statusCode,
    Map<String, String> headers,
    String body) {
}
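
Note that the handler above calls “new LambdaResponse.Builder()” and “UpdateStatusFunction::handleException”, neither of which is shown in this article. Records don’t generate builders automatically, so here is a minimal sketch of such a nested builder, assuming it only mirrors the calls we saw:

public record LambdaResponse(
    int statusCode,
    Map<String, String> headers,
    String body) {

    // Hypothetical nested builder: records don't generate one automatically.
    public static class Builder {

        private int statusCode;
        private Map<String, String> headers;
        private String body;

        public Builder statusCode(int statusCode) {
            this.statusCode = statusCode;
            return this;
        }

        public Builder headers(Map<String, String> headers) {
            this.headers = headers;
            return this;
        }

        public Builder body(String body) {
            this.body = body;
            return this;
        }

        public LambdaResponse build() {
            return new LambdaResponse(this.statusCode, this.headers, this.body);
        }
    }
}

Likewise, “handleException” might map any failure in the reactive flow to an error response, something like this hypothetical version:

// Hypothetical error handler: convert failures into an error response
// instead of terminating the Mono with an exception.
private static Mono<LambdaResponse> handleException(Throwable throwable) {
    return Mono.just(new LambdaResponse.Builder()
        .statusCode(HttpStatus.BAD_REQUEST.value())
        .body(throwable.getMessage())
        .build());
}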

But how can we know if these changes are correct before deploying our function to AWS? Well, let’s write about it.

Integration Testing with LocalStack.

As we said before, our Lambda function must be called from the EventBridge service, so the event message must have the following structure in JSON format:

{
  "version": "0",
  "id": "7bf73129-1428-4cd3-a780-95db273d1602",
  "account": "123456789012",
  "source": "hiperium.city.tasks.api",
  "time": "2024-05-08T15:00:00-05:00",
  "region": "us-east-1",
  "resources": [],
  "detail-type": "ExecutedTaskEvent",
  "detail": {
    "deviceId": "37f44ed4-b672-4f81-a579-47679c0d6f31",
    "cityId": "a0ecb466-7ef5-47bf-a1ca-12f9f9328528",
    "deviceOperation": "ACTIVATE"
  }
}

So, create a couple of files with this structure to validate your function’s behavior. Then, invoke your function, passing those files to your integration test method according to your required test cases:

@ParameterizedTest
@DisplayName("Non-valid requests")
@ValueSource(strings = {
    "requests/non-valid/empty-device-id.json",
    "requests/non-valid/wrong-device-uuid.json",
    "requests/non-valid/non-existing-city.json",
    "requests/non-valid/non-existing-device.json",
    "requests/non-valid/existing-device-disabled-city.json"
})
void givenInvalidEvents_whenInvokeLambdaFunction_thenThrowsException(String jsonFilePath) throws IOException {
    Function<Message<EventBridgeEvent>, Mono<LambdaResponse>> createEventFunction = this.getFunctionUnderTest();
    try (InputStream inputStream = getClass().getClassLoader().getResourceAsStream(jsonFilePath)) {
        assert inputStream != null;
        EventBridgeEvent event = TestsUtils.unmarshal(inputStream.readAllBytes(), EventBridgeEvent.class);
        Message<EventBridgeEvent> requestMessage = TestsUtils.createMessage(event);
        StepVerifier.create(createEventFunction.apply(requestMessage))
            .assertNext(response -> {
                assertThat(response).isNotNull();
                // The status code should be an error code.
                int statusCode = response.statusCode();
                assertThat(statusCode >= HttpStatus.OK.value() && statusCode <= HttpStatus.IM_USED.value()).isFalse();
            })
            .verifyComplete();
    }
}

Notice I’m using the “@ParameterizedTest” annotation so each JSON file’s path becomes an input to the test method that calls the Lambda function. Also, I’m parsing the JSON file into an “EventBridgeEvent” object before calling the Lambda function, but you can send the raw “byte[]” array if you prefer.
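
The “TestsUtils” class belongs to the project’s test utilities. A minimal sketch of the two methods used above, assuming Jackson for deserialization and Spring Messaging for the message wrapper, might look like this:

import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

import java.io.IOException;

public final class TestsUtils {

    private static final ObjectMapper OBJECT_MAPPER = new ObjectMapper();

    private TestsUtils() {
    }

    // Deserialize the raw JSON bytes into the requested type.
    public static <T> T unmarshal(byte[] json, Class<T> type) throws IOException {
        return OBJECT_MAPPER.readValue(json, type);
    }

    // Wrap the payload in a Spring Messaging message, as SCF expects.
    public static <T> Message<T> createMessage(T payload) {
        return MessageBuilder.withPayload(payload).build();
    }
}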

As usual in my articles, my integration test classes extend the “TestContainersBase” class, which starts the LocalStack container with the DynamoDB service to create the required table for the Lambda function:

public abstract class TestContainersBase {

    private static final HiperiumLogger LOGGER = new HiperiumLogger(TestContainersBase.class);
    private static final LocalStackContainer LOCALSTACK_CONTAINER;

    static {
        LOCALSTACK_CONTAINER = new LocalStackContainer(DockerImageName.parse("localstack/localstack:latest"))
            .withServices(LocalStackContainer.Service.DYNAMODB)
            .withCopyToContainer(MountableFile.forClasspathResource("localstack/table-setup.sh"),
                "/etc/localstack/init/ready.d/table-setup.sh")
            .withCopyToContainer(MountableFile.forClasspathResource("localstack/table-data.json"),
                "/var/lib/localstack/table-data.json")
            .withLogConsumer(outputFrame -> LOGGER.info(outputFrame.getUtf8String()))
            .withEnv("DEBUG", "0")
            .withEnv("LS_LOG", "info")
            .withEnv("EAGER_SERVICE_LOADING", "1");
        LOCALSTACK_CONTAINER.start();
    }

    @DynamicPropertySource
    public static void dynamicPropertySource(DynamicPropertyRegistry registry) {
        registry.add("aws.region", LOCALSTACK_CONTAINER::getRegion);
        registry.add("aws.accessKeyId", LOCALSTACK_CONTAINER::getAccessKey);
        registry.add("aws.secretAccessKey", LOCALSTACK_CONTAINER::getSecretKey);
        registry.add("spring.cloud.aws.endpoint", () -> LOCALSTACK_CONTAINER.getEndpoint().toString());
    }
}

I configure the LocalStack container to print its logs through my logger object when the container starts. I also pass a bash script file that creates the DynamoDB table and loads testing data:

#!/bin/bash

echo ""
echo "CREATING DEVICES TABLE..."
awslocal dynamodb create-table \
    --table-name 'Devices' \
    --attribute-definitions \
        AttributeName=id,AttributeType=S \
        AttributeName=cityId,AttributeType=S \
    --key-schema \
        AttributeName=id,KeyType=HASH \
        AttributeName=cityId,KeyType=RANGE \
    --billing-mode PAY_PER_REQUEST

echo ""
echo "WRITING DEVICE ITEMS..."
awslocal dynamodb batch-write-item \
    --request-items file:///var/lib/localstack/table-data.json

With this configuration, you should see the related logs printed in your terminal when executing the test methods, so you can validate the correct creation of the required infra before the test methods run.

If you’re using IntelliJ, you can execute all test methods directly from a test class; in our case, all of them should pass. You can scroll down in the test class execution output to see the logs I mentioned before and validate the correct creation of your required infra in LocalStack.

Spring Native and GraalVM.

The configuration is straightforward, as in previous tutorials. The main thing to do (as in many Spring Boot projects) is to extend your POM file from a parent POM file; in our case, that’s the cities-parent-pom artifact we published to Maven Central:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>city.hiperium</groupId>
        <artifactId>cities-parent-pom</artifactId>
        <version>1.0.7</version>
        <relativePath/>
    </parent>
    ...
</project>

The only configuration that differs from the traditional Spring Boot microservices we’re accustomed to building is how we enable HTTP protocols at build time and how we package the native Linux executable:

<profiles>
    <profile>
        <id>native</id>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.graalvm.buildtools</groupId>
                    <artifactId>native-maven-plugin</artifactId>
                    <executions>
                        <execution>
                            <goals>
                                <goal>build</goal>
                            </goals>
                            <phase>package</phase>
                        </execution>
                    </executions>
                    <configuration>
                        <imageName>native</imageName>
                        <buildArgs>
                            <buildArg>--enable-url-protocols=http,https</buildArg>
                        </buildArgs>
                    </configuration>
                </plugin>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-assembly-plugin</artifactId>
                    <configuration>
                        <descriptors>
                            <descriptor>tools/assembly/native.xml</descriptor>
                        </descriptors>
                        <appendAssemblyId>false</appendAssemblyId>
                        <finalName>native-assembly</finalName>
                    </configuration>
                    <executions>
                        <execution>
                            <id>native-zip</id>
                            <goals>
                                <goal>single</goal>
                            </goals>
                            <phase>package</phase>
                            <inherited>false</inherited>
                        </execution>
                    </executions>
                </plugin>
            </plugins>
        </build>
    </profile>
</profiles>

As the Spring Native plugin uses the “native” Maven profile to build the native executable, we’re overriding the traditional process by adding additional steps for Lambda functions, as seen in the previous configuration.

The <native.xml> file defined in the assembly plugin is in charge of zipping the native Linux executable together with a required script file called <bootstrap>:

<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2 https://maven.apache.org/xsd/assembly-1.1.2.xsd">
    <id>native-assembly</id>
    <formats>
        <format>zip</format>
    </formats>
    <baseDirectory></baseDirectory>
    <fileSets>
        <fileSet>
            <directory>target</directory>
            <outputDirectory>/</outputDirectory>
            <useDefaultExcludes>true</useDefaultExcludes>
            <fileMode>0775</fileMode>
            <includes>
                <include>native</include>
            </includes>
        </fileSet>
        <fileSet>
            <directory>tools/shell</directory>
            <outputDirectory>/</outputDirectory>
            <useDefaultExcludes>true</useDefaultExcludes>
            <fileMode>0775</fileMode>
            <includes>
                <include>bootstrap</include>
            </includes>
        </fileSet>
    </fileSets>
</assembly>

The bootstrap file is the entry point to execute the Linux executable, which internally invokes our Lambda function handler class:

#!/bin/sh
set -e

cd "${LAMBDA_TASK_ROOT:-.}"

./native "$_HANDLER"

Notice that the <native> executable invoked inside the previous script has the same name as the one configured in the <native-maven-plugin> Maven plugin. If you decide to change the name of the native executable in the Maven plugin, don’t forget to update the bootstrap script to use the same executable filename.

These are the minimal configurations we must make to build our Lambda function as a native Linux executable with Spring Native and GraalVM.

Deploying to AWS using the SAM CLI.

As we’re using an EventBridge event bus for all our services, we must create the event bus independently:

aws events create-event-bus \
    --name 'cities-event-bus' \
    --description "Event-Bus for the Hiperium City project" \
    --profile 'your-aws-profile'

Then, you must update your Lambda function definition inside your <template.yaml> file to refer to the previous event bus:

DeviceUpdateFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: ./device-update-function
    FunctionName: 'device-update-function'
    Handler: org.springframework.cloud.function.adapter.aws.FunctionInvoker::handleRequest
    Events:
      EventBridgeEvent:
        Type: EventBridgeRule
        Properties:
          EventBusName: 'cities-event-bus'
          Pattern:
            source:
              - 'hiperium.city.tasks.api'
            detail-type:
              - 'ExecutedTaskEvent'
    Policies:
      - DynamoDBReadPolicy:
          TableName: 'Devices'
      - DynamoDBWritePolicy:
          TableName: 'Devices'
    Environment:
      Variables:
        SPRING_PROFILES_ACTIVE: 'dev'
  Metadata:
    BuildMethod: makefile

Notice that inside the “Events” block, we must define the event bus name and the event pattern; the event bus dispatches events to a Lambda function based on that pattern. So, be careful when defining the properties and values that must match to invoke your corresponding function.

It’s crucial to add the DynamoDBReadPolicy and DynamoDBWritePolicy policies in the <Policies> section because the function must read the existing values from the table before writing the device updates.

The other significant roadblock is to define an event rule inside the existing event bus so EventBridge can dispatch the event message to the corresponding Lambda function:

DeviceUpdateEventRule:
  Type: AWS::Events::Rule
  Properties:
    Name: 'device-update-event-rule'
    EventBusName: 'cities-event-bus'
    State: ENABLED
    EventPattern:
      source:
        - 'hiperium.city.tasks.api'
      detail-type:
        - 'ExecutedTaskEvent'
    Targets:
      - Arn: !GetAtt DeviceUpdateFunction.Arn
        Id: 'DeviceUpdateFunctionTarget'
        RetryPolicy:
          MaximumRetryAttempts: 3
          MaximumEventAgeInSeconds: 300

Then, you must execute the following commands from the <functions> directory to deploy the Lambda function in AWS:

sam build

sam deploy \
    --parameter-overrides SpringProfile="dev" \
    --no-confirm-changeset \
    --disable-rollback \
    --profile 'your-aws-profile'

After a successful deployment, execute the following command to receive all logs for the Lambda functions deployed with your SAM template:

sam logs --tail \
    --stack-name 'devices-sam-cli' \
    --profile 'your-aws-profile'

In another terminal, execute the following command from the project’s root directory to invoke the reactive Lambda function directly:

aws lambda invoke \
    --function-name "device-update-function" \
    --payload file://functions/device-data-function/src/test/resources/requests/valid/lambda-valid-id-request.json \
    --cli-binary-format raw-in-base64-out \
    --profile "your-aws-profile" \
    ~/Downloads/response.json

I’m using the same JSON file I used for integration tests in the previous section, but this time to invoke the Lambda function deployed in AWS. Also, notice that the response is stored in a JSON file in my Downloads directory. You can execute the following command to see the content of the file:

cat ~/Downloads/response.json | jq

You should see the response content in your terminal. I programmed the function to return an HTTP 204 (No Content) status code when the device update is performed successfully.

In your other terminal, you should see the function’s execution logs after running the Lambda invoke command.

Finally, execute the following command to send an event to the event bus so EventBridge can dispatch the event message to our Lambda function:

aws events put-events \
    --cli-input-json file://functions/device-update-function/src/test/resources/requests/valid/eventbridge-valid-event-request.json \
    --profile 'your-aws-profile'

The command will show you the event ID, indicating the event message was received successfully. In your logger terminal, you should then see a new group of log messages indicating that the Lambda function was called successfully.

For this to work, the message events you send to your event bus must have the following JSON structure:

{
  "Entries": [
    {
      "Source": "hiperium.city.tasks.api",
      "DetailType": "ExecutedTaskEvent",
      "Detail": "{\"deviceId\":\"37f44ed4-b672-4f81-a579-47679c0d6f31\",\"cityId\":\"a0ecb466-7ef5-47bf-a1ca-12f9f9328528\",\"deviceOperation\":\"INACTIVATE\"}",
      "Resources": [],
      "EventBusName": "cities-event-bus"
    }
  ]
}

It’s important to note that the “Source” and “DetailType” values must match the event pattern I configured in the SAM template, which EventBridge uses as a rule to dispatch this message to our Lambda function.
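
As a reference, here is a minimal sketch of publishing the same event from Java with the AWS SDK v2 EventBridge client. This is not part of the project’s code, and it assumes the SDK’s “eventbridge” module is on the classpath:

import software.amazon.awssdk.services.eventbridge.EventBridgeClient;
import software.amazon.awssdk.services.eventbridge.model.PutEventsRequest;
import software.amazon.awssdk.services.eventbridge.model.PutEventsRequestEntry;
import software.amazon.awssdk.services.eventbridge.model.PutEventsResponse;

public class PutEventExample {

    public static void main(String[] args) {
        try (EventBridgeClient client = EventBridgeClient.create()) {
            // The source and detail-type must match the rule's event pattern.
            PutEventsRequestEntry entry = PutEventsRequestEntry.builder()
                .eventBusName("cities-event-bus")
                .source("hiperium.city.tasks.api")
                .detailType("ExecutedTaskEvent")
                .detail("{\"deviceId\":\"37f44ed4-b672-4f81-a579-47679c0d6f31\","
                    + "\"cityId\":\"a0ecb466-7ef5-47bf-a1ca-12f9f9328528\","
                    + "\"deviceOperation\":\"INACTIVATE\"}")
                .build();
            PutEventsResponse response = client.putEvents(
                PutEventsRequest.builder().entries(entry).build());
            // EventBridge returns an event ID for each accepted entry.
            System.out.println(response.entries().get(0).eventId());
        }
    }
}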

Automation Script Files.

As usual, you can execute the following command from the project’s root directory to execute your required operations:

./setup.sh

This script will show you a little menu with a few options according to the current project status.

Enter the menu option according to your requirements. These script files will internally execute all commands you saw in this article, so you don’t need to type every command to deploy or delete the SAM project in AWS.

And that’s it!!! Now, you have deployed two Lambda functions using a SAM template file. One function reads data from a DynamoDB table, and the other writes data into the same table. However, the latter is called asynchronously and uses reactive programming to achieve its objective.

I hope this article has been helpful to you, and I’ll see you in the next one.

Thanks for reading.
