
Preparing a Spring Boot App for Kubernetes

Running your application in a container on Kubernetes is quite different from running it on on-prem Windows servers. This guide walks through the main steps needed to prepare your Spring Boot application for Kubernetes 🌟

Prerequisites

Health Probes, Prometheus Metrics and Graceful Shutdown

Robust probes and application metrics are essential for running your application in Kubernetes. Probes ensure that Kubernetes always knows whether your app is healthy, and they enable zero-downtime deployments. You need metrics to observe how your app behaves inside the cluster over time. Metrics let you, for example, fine-tune resource consumption, catch errors, and set up alerts 🔔
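As a sketch of what such metrics enable, here is a hypothetical Prometheus alerting rule that fires when the rate of server errors is elevated. The metric name (`http_server_requests_seconds_count`) and the `application` tag come from Spring Boot's Micrometer setup described below, but the service name, threshold, and severity label are placeholder assumptions you should adapt:

```yaml
groups:
  - name: your-service-alerts
    rules:
      - alert: HighServerErrorRate
        # Ratio of 5xx responses to all responses over the last 5 minutes
        expr: |
          sum(rate(http_server_requests_seconds_count{application="your-service-name", status=~"5.."}[5m]))
            /
          sum(rate(http_server_requests_seconds_count{application="your-service-name"}[5m])) > 0.05
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "More than 5% of requests are failing with 5xx responses"
```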

To ensure zero downtime when deploying new versions, we will enable graceful shutdown of the Tomcat server. This is in line with Spring's best practices for running on Kubernetes.

Start by adding required Maven dependencies:

pom.xml
<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
  </dependency>
  <dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-core</artifactId>
  </dependency>
  <dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
  </dependency>
</dependencies>

Next, you'll need to configure Actuator to expose health probes and Prometheus metrics endpoints:

resources/application.yaml
spring:
  application:
    name: <your-service-name> # Change this

management:
  endpoints:
    web:
      exposure:
        include: health,prometheus
  endpoint:
    health:
      probes:
        enabled: true
  metrics:
    tags:
      application: ${spring.application.name}
  server:
    port: 8081 # Expose metrics on a separate port for internal usage

server:
  shutdown: graceful
  port: 8080
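Graceful shutdown waits for in-flight requests to complete before the server stops. By default, Spring Boot waits up to 30 seconds per shutdown phase; if your requests can run longer, you can tune this with the `spring.lifecycle.timeout-per-shutdown-phase` property (the 20-second value below is just an example):

```yaml
spring:
  lifecycle:
    timeout-per-shutdown-phase: 20s
```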

Your app should now expose the following Actuator endpoints:

  • Liveness Probe: http://<host>:8081/actuator/health/liveness
  • Readiness Probe: http://<host>:8081/actuator/health/readiness
  • Prometheus metrics: http://<host>:8081/actuator/prometheus
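In your Kubernetes manifest, these endpoints can then be wired into the container's probes. A minimal sketch of the relevant fragment of a Deployment's container spec; the timing values are assumptions you should adapt to your app's startup behavior:

```yaml
# Fragment of a Deployment's container spec
livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8081
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /actuator/health/readiness
    port: 8081
  periodSeconds: 5
```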
Access to metrics must be restricted

For security reasons, end users should never be able to access application metrics; metrics endpoints should only be available to internal systems. In this guide, we restrict access by serving them on a separate port that is only exposed to internal systems.
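One way to keep port 8081 internal is simply not to expose it in the Service that fronts your app, so only in-cluster components that address the pods directly (for example a Prometheus scraper) can reach it. A sketch under that assumption; all names are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: your-app-name
spec:
  selector:
    app: your-app-name
  ports:
    # Only the application port is exposed through the Service;
    # the management port 8081 stays reachable only pod-to-pod
    - name: http
      port: 80
      targetPort: 8080
```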

Next, Spring Security needs to be configured to expose the actuator endpoints. Note that we would normally not use .permitAll() for actuator endpoints, but since we configured them to be served on port 8081, which is not exposed externally, it is acceptable here.

src/SecurityConfig.kt
@EnableWebSecurity
class SecurityConfig : GjeWebSecurityConfigurerAdapter() {
    override fun authorizeRequests(http: HttpSecurity) {
        http
            .authorizeRequests()
            .requestMatchers(
                EndpointRequest.to("health", "prometheus")
            ).permitAll() /* OK to permit all: the actuator endpoints are served on the internal port 8081 */
            .antMatchers(
                HttpMethod.GET, "/v1/your-endpoint/**"
            ).hasAuthority(GjensidigeRole.PRIVATPERSON.value()) /* Always authorize requests */
            .anyRequest().denyAll() /* Deny all non-permitted requests */
    }
}

Application logs

In Kubernetes, we don't write logs to files; we write them to stdout (or "the console"). Clusters at Gjensidige have Splunk Connect for Kubernetes installed, which is built on top of Fluentd. Fluentd collects logs from all parts of the Kubernetes cluster and sends them to gjensidige.splunkcloud.com 🔎

You must adhere to Gjensidige Security Logging Standards

You are responsible and accountable for ensuring that your application logs follow the Gjensidige Security Logging Standards. We send logs to Splunk Cloud, and this standard ensures that we don't violate our agreement with them or EU law.

Be sure to have read the general logging recommendations

To ensure good logging practices are followed at Gjensidige, it's important that you are familiar with the general recommendations for logging.

We'll use the Logstash Logback Encoder to write application logs to stdout in JSON format. Start by adding the required Maven dependency:

pom.xml
<dependency>
  <groupId>net.logstash.logback</groupId>
  <artifactId>logstash-logback-encoder</artifactId>
  <version>changeme</version> <!-- Get latest version from https://github.com/logstash/logstash-logback-encoder -->
</dependency>

Then configure logback-spring.xml (see the Spring logback reference) to create a custom consoleJSONAppender for logging in Kubernetes:

resources/logback-spring.xml
<configuration>
  <appender name="consoleJSONAppender" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="net.logstash.logback.encoder.LogstashEncoder">
      <!-- Make sure exception stack traces don't exceed Splunk's size limit -->
      <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
        <maxDepthPerThrowable>30</maxDepthPerThrowable>
        <maxLength>2048</maxLength>
        <shortenedClassNameLength>20</shortenedClassNameLength>
        <rootCauseFirst>true</rootCauseFirst>
        <inlineHash>true</inlineHash>
      </throwableConverter>
    </encoder>
  </appender>

  <!-- Use Spring Boot default appender for local development -->
  <springProfile name="local">
    <include resource="org/springframework/boot/logging/logback/base.xml" />
  </springProfile>
  <!-- Use Logstash JSON appender when running in a Kubernetes Pod -->
  <springProfile name="!local">
    <root level="INFO">
      <appender-ref ref="consoleJSONAppender"/>
    </root>
  </springProfile>
</configuration>
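With this configuration, each log statement is written to stdout as a single JSON object. The exact fields depend on the encoder version, but a log line will look roughly like this:

```json
{"@timestamp":"2023-05-04T12:34:56.789+02:00","@version":"1","message":"Started Application in 3.2 seconds","logger_name":"com.example.Application","thread_name":"main","level":"INFO","level_value":20000}
```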

Creating a Dockerfile

A Dockerfile contains all the commands needed to assemble your app's container image. The example below assumes a JAR file has been created by running mvn clean package. You can learn how to improve build performance in this guide on spring.io.

Dockerfile
# Get latest version from https://hub.docker.com/r/azul/zulu-openjdk-alpine/tags
FROM azul/zulu-openjdk-alpine:17.X.Y-jre AS builder

# Use layers from the fat JAR to run the application
# This approach is discussed here https://spring.io/guides/topicals/spring-boot-docker and here https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#features.container-images.building.dockerfiles
WORKDIR application
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} application.jar
RUN java -Djarmode=layertools -jar application.jar extract

# Get latest version from https://hub.docker.com/r/azul/zulu-openjdk-alpine/tags
FROM azul/zulu-openjdk-alpine:17.X.Y-jre
WORKDIR application

# Get Splunk Open Telemetry javaagent for application tracing. Find latest version at https://github.com/signalfx/splunk-otel-java/releases
RUN wget https://github.com/signalfx/splunk-otel-java/releases/download/vX.Y.Z/splunk-otel-javaagent-all.jar

# Run as non-root user to mitigate security risks
RUN addgroup -S -g 10001 spring && adduser -S -u 10001 -G spring spring
USER spring:spring

# The `RUN true` steps between COPY instructions work around a Docker bug
# where consecutive COPY --from steps can intermittently fail
COPY --from=builder application/dependencies/ ./
RUN true
COPY --from=builder application/spring-boot-loader/ ./
RUN true
COPY --from=builder application/snapshot-dependencies/ ./
RUN true
COPY --from=builder application/application/ ./

ENTRYPOINT ["java","-javaagent:splunk-otel-javaagent-all.jar","-Djavax.net.ssl.trustStore=/etc/ssl/certs/java/cacerts","org.springframework.boot.loader.JarLauncher"]

You can now build your container image by running:

docker build -t gjensidige.azurecr.io/your-team-name/your-app-name:v0.0.1 .

When the image is built, you can run it with (also mapping port 8081 if you want to reach the actuator endpoints locally):

docker run -p 8080:8080 -p 8081:8081 gjensidige.azurecr.io/your-team-name/your-app-name:v0.0.1

Your Spring Boot app should now be running on http://localhost:8080

Next steps

  1. Push your container image to Gjensidige's Container Registry 📦
  2. Create an auto-scalable Kubernetes deployment for your application 🚀