Using Spring Native with a Reactive Spring Boot microservice that uses Quartz, Postgres, and DynamoDB.
I have already written some tutorials in the past about the Quarkus Framework and the use of native executables in a Docker container. Indeed, I have a project called City Tasks in my GitHub organization: a full-stack application developed with Ionic and Angular on the front end and Java with Quarkus on the back end. You can clone and deploy this functional microservice into your AWS account, using services like Amazon Aurora with Postgres to store the Quartz jobs and an ECS cluster for the Quarkus native executable, load-balanced with an Amazon ALB.
This tutorial replicates that same microservice with identical business logic, but as a Spring Boot application. The previous Quarkus microservice also used reactive programming, and my previous tutorial covered the same topic using the Spring WebFlux library. So now you know the objective of this tutorial. Let’s get started.
To complete this guide, you’ll need the following tools:
- Git.
- AWS CLI (version 2).
- OpenJDK 17 with GraalVM (You can use the SDKMAN tool).
- Apache Maven 3.8 or later.
- Docker and Docker Compose.
- IntelliJ or Eclipse IDE.
NOTE: You can download the project’s source code from my GitHub repository to review the latest changes made in this tutorial.
GraalVM Native Images.
GraalVM Native Images are standalone executables that can be generated by processing compiled Java applications ahead of time (AOT). This ahead-of-time processing involves statically analyzing our application’s code from its main entry point.
Native Images generally have a smaller memory footprint and start faster than traditional JVM applications. These applications are well-suited to be deployed on cloud providers like AWS and used alongside Docker containers.
Maven POM Configurations.
The <pom.xml> file, as mentioned before, contains the same dependencies as in my previous tutorial. But for native support, we need to add 2 plugins. The first one is for Hibernate:
<plugin>
    <groupId>org.hibernate.orm.tooling</groupId>
    <artifactId>hibernate-enhance-maven-plugin</artifactId>
    <version>${hibernate.version}</version>
    <executions>
        <execution>
            <id>enhance</id>
            <goals>
                <goal>enhance</goal>
            </goals>
            <configuration>
                <enableLazyInitialization>true</enableLazyInitialization>
                <enableDirtyTracking>true</enableDirtyTracking>
                <enableAssociationManagement>true</enableAssociationManagement>
            </configuration>
        </execution>
    </executions>
</plugin>
The other one is a Maven plugin for GraalVM compilation:
<plugin>
    <groupId>org.graalvm.buildtools</groupId>
    <artifactId>native-maven-plugin</artifactId>
</plugin>
Remember that we are using Spring Boot 3, so the parent dependency <spring-boot-starter-parent> includes a Maven profile called <native> with some configurations ready to generate a Native image with Maven.
Spring Native Custom Hints.
The winner here is definitely Quarkus. In Spring Native (as in Quarkus), we need to specify some classes for reflection, serialization, proxy usage, etc. Spring calls all of these <hints>. The problem is that GraalVM, at compilation time, cannot recognize every class in our project that must be accessed via reflection at runtime. For this reason, both frameworks (Spring and Quarkus) provide annotations that we can use on classes to declare reflection to GraalVM at compilation time.
This topic is easier with the Quarkus Framework because we only need to add the <@RegisterForReflection> annotation to the required classes. I didn’t have problems with components like the Postgres JDBC driver and Quartz, for example; Quarkus manages reflection on these components transparently. In Spring Native, instead, I had a bunch of problems at runtime with those components because the framework could not instantiate them (calling constructors) or access their methods at runtime due to the use of reflection. So I had to go through a lot of trial and error to identify the classes that Spring Native cannot operate on at runtime. In the end, I needed to create 2 <hints> components for these 2 libraries. The first one is for the Postgres JDBC driver:
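The actual class lives in the repository; a minimal sketch of such a hints registrar looks like the following (the class name and the exact set of registered Postgres classes are illustrative, not the project’s real list):

import org.springframework.aot.hint.MemberCategory;
import org.springframework.aot.hint.RuntimeHints;
import org.springframework.aot.hint.RuntimeHintsRegistrar;
import org.springframework.aot.hint.TypeReference;

// Registers Postgres JDBC classes that GraalVM cannot discover statically.
public class PostgresJdbcHints implements RuntimeHintsRegistrar {

    @Override
    public void registerHints(RuntimeHints hints, ClassLoader classLoader) {
        // Keep constructors and public methods of the driver classes in the native image.
        hints.reflection()
                .registerType(TypeReference.of("org.postgresql.Driver"),
                        MemberCategory.INVOKE_PUBLIC_CONSTRUCTORS,
                        MemberCategory.INVOKE_PUBLIC_METHODS)
                .registerType(TypeReference.of("org.postgresql.util.PGobject"),
                        MemberCategory.INVOKE_PUBLIC_CONSTRUCTORS,
                        MemberCategory.INVOKE_PUBLIC_METHODS);
    }
}

The registrar is activated by placing <@ImportRuntimeHints(PostgresJdbcHints.class)> on a configuration class.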
The second one is for Quartz, where there are many components that we need to register for reflection, most of them related to the use of the Postgres library as the default job store:
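Again, the complete list is in the repository and was discovered through the runtime errors mentioned above; a sketch with some of the usual suspects (the class names are real Quartz classes, but this selection is illustrative):

import org.springframework.aot.hint.MemberCategory;
import org.springframework.aot.hint.RuntimeHints;
import org.springframework.aot.hint.RuntimeHintsRegistrar;
import org.springframework.aot.hint.TypeReference;

// Registers Quartz classes that are instantiated reflectively by the JDBC job store.
public class QuartzHints implements RuntimeHintsRegistrar {

    private static final String[] QUARTZ_CLASSES = {
            "org.quartz.impl.jdbcjobstore.JobStoreTX",
            "org.quartz.impl.jdbcjobstore.PostgreSQLDelegate",
            "org.quartz.impl.jdbcjobstore.StdJDBCDelegate",
            "org.quartz.impl.triggers.SimpleTriggerImpl",
            "org.quartz.impl.triggers.CronTriggerImpl",
            "org.quartz.simpl.SimpleThreadPool"
    };

    @Override
    public void registerHints(RuntimeHints hints, ClassLoader classLoader) {
        for (String className : QUARTZ_CLASSES) {
            hints.reflection().registerType(TypeReference.of(className),
                    MemberCategory.INVOKE_PUBLIC_CONSTRUCTORS,
                    MemberCategory.INVOKE_PUBLIC_METHODS,
                    MemberCategory.DECLARED_FIELDS);
        }
    }
}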
And speaking of the job store using the Postgres database for Quartz: this library doesn’t recognize the Spring datasource provided by the project as its default datasource. We need to specify the connection properties manually:
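The connection values below are placeholders (hypothetical host, database, and credentials); the data-source name matches the one referenced in the properties shown next:

spring.quartz.properties.org.quartz.dataSource.CityTasksQuartzDS.driver=org.postgresql.Driver
spring.quartz.properties.org.quartz.dataSource.CityTasksQuartzDS.URL=jdbc:postgresql://localhost:5432/city_tasks
spring.quartz.properties.org.quartz.dataSource.CityTasksQuartzDS.user=postgres
spring.quartz.properties.org.quartz.dataSource.CityTasksQuartzDS.password=postgres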
I can only assume that when we use Spring Native, we lose some of the auto-configuration provided by Spring Boot; it is forfeited at compile time because of how GraalVM handles reflection. That said, we also need to specify the following 2 properties in the <application.properties> file so that Quartz can work properly at runtime:
spring.quartz.properties.org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreTX
spring.quartz.properties.org.quartz.jobStore.dataSource=CityTasksQuartzDS
We can validate this in the following table taken from the Quartz website:
Notice that the first 2 properties have default values of <null>, so we need to provide them for Quartz. The <driverDelegateClass> property was already set earlier in the TDD project.
Furthermore, we also need to specify the same properties for the Integration Tests; don’t forget that our tests must still run correctly. So I added these properties in the <AbstractContainerBase> class:
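A minimal sketch of that base class, showing only the Postgres container and the Quartz properties (the real class presumably also wires up the LocalStack container for DynamoDB, omitted here):

import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.containers.PostgreSQLContainer;

public abstract class AbstractContainerBase {

    // Image tag is illustrative.
    static final PostgreSQLContainer<?> POSTGRES =
            new PostgreSQLContainer<>("postgres:14-alpine");

    static {
        POSTGRES.start();
    }

    @DynamicPropertySource
    static void quartzProperties(DynamicPropertyRegistry registry) {
        // Same Quartz properties required at runtime, pointing at the test container.
        registry.add("spring.quartz.properties.org.quartz.jobStore.class",
                () -> "org.quartz.impl.jdbcjobstore.JobStoreTX");
        registry.add("spring.quartz.properties.org.quartz.jobStore.dataSource",
                () -> "CityTasksQuartzDS");
        registry.add("spring.quartz.properties.org.quartz.dataSource.CityTasksQuartzDS.URL",
                POSTGRES::getJdbcUrl);
        registry.add("spring.quartz.properties.org.quartz.dataSource.CityTasksQuartzDS.user",
                POSTGRES::getUsername);
        registry.add("spring.quartz.properties.org.quartz.dataSource.CityTasksQuartzDS.password",
                POSTGRES::getPassword);
    }
}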
So finally, our Integration Tests run correctly:
As you can see, there are many configurations we need to make to run the Task Service with Spring Native without errors, most of them for the Quartz and Postgres libraries.
So before generating the native image, let’s install the required Java version in the next section.
Compiling the Native Image.
Verify that you’re using a Java 17 distribution with GraalVM. If you are using SDKMAN like me, you can install this Java version with the following command:
$ sdk install java 22.3.1.r17-grl
You can set it as the default Java version for your computer:
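With SDKMAN, that would be something like:

$ sdk default java 22.3.1.r17-grl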
Execute the following command to verify that the Java version is the one we require for our project:
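For example:

$ java -version

The output should mention both Java 17 and GraalVM 22.3.1.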
Generate the native executable with the following command:
$ mvn clean -Pnative native:compile
If you have seen several of my Quarkus tutorials, you will notice similarities in the native image generation. Both frameworks (Spring Native and Quarkus) use GraalVM behind the scenes to generate the native executable. So now, we can execute the Maven command shown above:
After a few minutes, the build process will be finished:
The native image will reside in the <target> directory as usual, and it’s called <city-tasks-spring-native> in our case:
We may be tempted to run this native executable right away, but remember that it depends on other services like Postgres and DynamoDB. So let’s instead create a Docker image for this native executable in the next section.
Docker Image for Native Executable.
As usual, I created a multi-stage Docker image for our purpose:
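The exact Dockerfile is in the repository; a simplified sketch of the idea looks like this (the base-image tags, the use of the Maven wrapper, and the exposed port are assumptions):

# Stage 1: build the native executable with GraalVM.
FROM ghcr.io/graalvm/native-image:ol9-java17-22.3.1 AS builder
WORKDIR /workspace
COPY . .
RUN ./mvnw clean -Pnative native:compile -DskipTests

# Stage 2: run the native executable on a slim Oracle Linux image.
FROM oraclelinux:9-slim
WORKDIR /app
COPY --from=builder /workspace/target/city-tasks-spring-native ./city-tasks-spring-native
EXPOSE 8080
ENTRYPOINT ["./city-tasks-spring-native"]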
Here I’m using a GraalVM image in the first stage to generate our native executable, and an Oracle slim image in the second stage to run it.
Notice that I’m skipping the test execution in the first stage. To execute the integration tests there, we would need a Docker image with the Docker engine pre-installed, because the integration tests use Testcontainers, and Testcontainers needs a Docker engine to pull and run the required images. Don’t worry about this: we can execute the tests in a CI/CD pipeline by running the <mvn test> command before generating the native image.
Now that we have the final Dockerfile, let’s run our project using Docker Compose.
Functional Testing (Docker Compose).
Finally, it’s time for the truth. So let’s execute the following command to deploy our docker cluster with the required services:
$ docker compose up --build
The <--build> flag builds the Docker image previously defined in the Dockerfile. As we saw before, this can take a few minutes to build and deploy the entire cluster.
In another terminal window, execute the following command to verify that the device was persisted successfully in DynamoDB:
$ aws dynamodb scan \
--table-name Devices \
--endpoint-url http://localhost:4566
You must see the following info in JSON format:
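Something along these lines (the item’s attribute names are illustrative; the important part is the <status> value):

{
    "Items": [
        {
            "id":     { "S": "123" },
            "status": { "S": "OFF" }
        }
    ],
    "Count": 1,
    "ScannedCount": 1,
    "ConsumedCapacity": null
}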
Notice that the device’s <status> is <OFF>, which means the initial state of our device is deactivated (turned off).
So, as usual, open the Postman tool to create a task and activate the device:
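If you prefer the command line, the request would look something like the following; the endpoint path and the payload fields are hypothetical placeholders, so check the repository for the actual API contract:

$ curl -i -X POST http://localhost:8080/api/tasks \
    -H 'Content-Type: application/json' \
    -d '{"name":"Turn device on","deviceId":"123","deviceOperation":"ACTIVATE","hour":20,"minute":30}'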
Notice that the HTTP response was 201, and the task ID is 7. When the time comes for the task job to run, you must see the following logs in the terminal:
Notice that the <task-localstack> container prints 2 log messages in the console. One is the result of finding the device with ID <123>, and the other is for updating the device’s status. Both requests have an HTTP 200 status.
So, let’s verify the current status of our device using the previous command:
$ aws dynamodb scan \
--table-name Devices \
--endpoint-url http://localhost:4566
We must see the following output in the terminal window:
The device now has a <status> of <ON>, so the task updated the device’s status correctly.
So our Task Service is working correctly using Spring Native. If you want to see the complete CRUD examples, please visit section 8 of my previous tutorial, where I put screenshots of the rest of the operations, which don’t change in this tutorial in terms of business logic code.
Running Integration Tests with Maven.
So far, we have been running the integration tests using the IDE, but Maven can also execute all test classes. So, let’s terminate the Docker cluster by pressing <control+c> and then execute the following command:
$ mvn test
You must see that all the Integration Tests are executed and pass successfully:
So that’s it!!!
We developed a native Spring Boot microservice using Spring Native, WebFlux for reactive programming, TDD, and Testcontainers. We also used the DynamoDB async client that comes with the AWS SDK version 2 and works very well with the Reactor library.
I hope this tutorial has been of interest to you. I’ll see you in my next article, where I will discuss how to implement Bean Validation and Error Handling in our project using the same technology stack as here.
I will see you soon.