Introduction to Spring Boot
Spring Boot is an opinionated, open-source, Java-based framework used to create stand-alone, production-grade Spring-based applications with minimal effort. It evolved as a solution to the "configuration hell" often associated with the traditional Spring Framework, which required extensive XML configuration or repetitive annotation setups to manage beans, view resolvers, and transaction managers. By contrast, Spring Boot follows a "convention over configuration" philosophy: the framework makes intelligent assumptions about what your application needs based on the dependencies you have added to your classpath.
At its core, Spring Boot is not a replacement for Spring but a sophisticated toolset built on top of it. It automates the boilerplate tasks that developers previously performed manually, such as setting up an embedded web server (Tomcat, Jetty, or Undertow), managing version compatibility between libraries, and providing production-ready features like metrics and health checks out of the box. This allows developers to focus almost entirely on business logic rather than infrastructure plumbing.
Core Capabilities
Spring Boot's efficiency is driven by three primary mechanisms: Starters, Auto-Configuration, and the Actuator.
Starters are a set of convenient dependency descriptors that you can include in your application. For example, if you want to build a RESTful web service, you simply include the spring-boot-starter-web dependency. This single entry pulls in all the necessary libraries—such as Spring Web MVC, Jackson for JSON processing, and an embedded Tomcat server—ensuring that all versions are mutually compatible.
Auto-Configuration is the "magic" of Spring Boot. It attempts to automatically configure your Spring application based on the jar dependencies that you have added. If HSQLDB is on your classpath, and you have not manually configured any database connection beans, Spring Boot will automatically configure an in-memory database for you.
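Auto-configuration backs off as soon as you provide your own beans or settings. As a sketch, defining explicit datasource properties in application.properties replaces the in-memory default (the connection URL and credentials below are placeholder values, not from the original text):

```properties
# Defining an explicit datasource causes Spring Boot's datasource
# auto-configuration to back off from the in-memory default.
# (Hypothetical connection values for illustration only.)
spring.datasource.url=jdbc:postgresql://localhost:5432/mydb
spring.datasource.username=app_user
spring.datasource.password=change-me
```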
The Actuator provides production-ready features that allow you to monitor and manage your application. It exposes HTTP endpoints or JMX beans to check the application's health, view environment properties, and examine the state of the application context.
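As a hedged example, recent Spring Boot releases expose only a limited set of Actuator endpoints over HTTP by default; additional endpoints can be opted in through configuration:

```properties
# Expose selected Actuator endpoints over HTTP
management.endpoints.web.exposure.include=health,info,metrics
# Show full health details (use with care in production)
management.endpoint.health.show-details=always
```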
Application Entry Point
Every Spring Boot application begins with a main class annotated with @SpringBootApplication. This single annotation is a convenience shortcut that combines @Configuration, @EnableAutoConfiguration, and @ComponentScan. When the main method executes SpringApplication.run(), it bootstraps the application, starts the embedded server, and performs a comprehensive scan of the package hierarchy to register components.
package com.example.docs;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

/**
 * The @SpringBootApplication annotation triggers auto-configuration
 * and component scanning for the current package and its sub-packages.
 */
@SpringBootApplication
@RestController
public class DocumentationApplication {

    public static void main(String[] args) {
        // Launches the application and starts the embedded web server
        SpringApplication.run(DocumentationApplication.class, args);
    }

    @GetMapping("/hello")
    public String sayHello() {
        return "Welcome to the Spring Boot Technical Documentation.";
    }
}
Spring vs. Spring Boot Comparison
The following table outlines the fundamental differences in how a developer interacts with the standard Spring Framework versus Spring Boot.
| Feature | Standard Spring Framework | Spring Boot |
| --- | --- | --- |
| Configuration | Requires manual XML or Java-based @Configuration. | Uses auto-configuration based on classpath dependencies. |
| Dependency Management | Manual versioning in Maven/Gradle; risk of conflicts. | "Starters" manage curated, compatible versions. |
| Deployment | Requires an external web server (WAR deployment). | Stand-alone JAR with embedded server (Tomcat/Jetty). |
| Boilerplate Code | High (setting up DispatcherServlet, ViewResolvers). | Minimal (convention over configuration). |
| Monitoring | Requires third-party tools or manual implementation. | Built-in "Actuator" for health and metrics. |
Operational Edge Cases
While Spring Boot simplifies development, it is important to understand the behavior of the Fat JAR. Unlike traditional Java applications that rely on a system-wide application server, Spring Boot packages all library dependencies and the web server into a single executable JAR file. This makes the application highly portable across cloud environments but can lead to significantly larger file sizes and longer startup times if the classpath is cluttered with unnecessary Starters.
Another critical detail is the Component Scan Scope. By default, @SpringBootApplication only scans the package it is placed in and its sub-packages. If a developer places a @Service or @Repository in a package hierarchy that is "above" or parallel to the main application class without explicitly defining the scan path, those beans will not be detected, leading to NoSuchBeanDefinitionException at runtime.
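When restructuring packages is not practical, the scan path can be widened explicitly via the annotation's scanBasePackages attribute. A minimal sketch, assuming hypothetical package names (com.example.app and com.example.shared):

```java
package com.example.app;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Explicitly widen the component scan beyond the default package tree.
// The package names here are illustrative.
@SpringBootApplication(scanBasePackages = {
    "com.example.app",      // package containing this class
    "com.example.shared"    // parallel package holding @Service beans
})
public class MainApplication {
    public static void main(String[] args) {
        SpringApplication.run(MainApplication.class, args);
    }
}
```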
Warning: Dependency Management Constraints
Always use the Spring Boot Parent POM or the Spring Dependency Management plugin. Manually overriding versions of libraries included in a "Starter" (e.g., forcing a specific Hibernate version) can break the internal compatibility tested by the Spring team and lead to unpredictable runtime ClassNotFoundExceptions.
Configuration Properties
Spring Boot uses a hierarchical approach to configuration, primarily through application.properties or application.yml. These files allow you to override the default auto-configuration values.
| Property Key | Default Value | Description |
| --- | --- | --- |
| server.port | 8080 | The HTTP port the embedded server listens on. |
| server.servlet.context-path | / | The context path for the web application. |
| spring.profiles.active | default | Determines which environment-specific properties to load. |
| spring.main.banner-mode | console | Controls whether the Spring Boot ASCII art is printed on startup. |
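As an illustration, an application.properties that overrides every default in the table above might look like this (the values are arbitrary examples):

```properties
# Override the defaults shown in the table above
server.port=9090
server.servlet.context-path=/api
spring.profiles.active=dev
spring.main.banner-mode=off
```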
System Requirements
Before initializing a Spring Boot project, it is essential to ensure that your environment aligns with the framework's baseline requirements. With the release of Spring Boot 3.x, the framework underwent a significant architectural shift, moving from the legacy Java EE (Enterprise Edition) to Jakarta EE 9/10. This transition necessitates modern versions of the Java Development Kit (JDK) and specific build tool versions to handle the new namespace requirements and the AOT (Ahead-of-Time) compilation features used for native image generation.
Spring Boot is designed to be platform-agnostic, running effectively on any operating system that supports a modern JVM, including Windows, macOS, and various Linux distributions. However, the performance and compatibility of your application depend heavily on the specific versions of the compiler, build automation tools, and servlet containers used during the development and deployment phases.
Core Platform Requirements
The most critical requirement for Spring Boot 3.x is the Java version. Unlike Spring Boot 2.x, which maintained backward compatibility with Java 8 and 11, the 3.x release train establishes Java 17 as the absolute minimum baseline. This shift allows the framework to leverage modern language features such as Records, Sealed Classes, and improved pattern matching, while also providing a stable foundation for Virtual Threads (available when using Java 21 or later).
| Component | Minimum Version | Recommended Version |
| --- | --- | --- |
| Java SDK (JDK) | Java 17 | Java 21 (LTS) |
| Build Tool: Maven | 3.6.3 | 3.9.x or later |
| Build Tool: Gradle | 8.x (8.12+) | 9.x |
| Jakarta EE | Jakarta EE 9 | Jakarta EE 10 |
| GraalVM | 22.3 | Latest Community Edition |
Embedded Servlet Containers
One of the defining features of Spring Boot is its ability to embed a servlet container directly into the executable archive. This removes the requirement for a pre-installed application server on your target environment. Spring Boot provides "Starter" dependencies for the three major containers, each with its own specific version requirements to ensure compatibility with the Jakarta Servlet specification.
| Container | Default Version | Spring Boot Starter |
| --- | --- | --- |
| Tomcat | 10.1 | spring-boot-starter-tomcat |
| Jetty | 11.0 / 12.0 | spring-boot-starter-jetty |
| Undertow | 2.3 | spring-boot-starter-undertow |
Build Tool Configuration
To manage dependencies and package your application, you must configure your build tool to use the Spring Boot parent metadata. This ensures that all third-party libraries (like Hibernate, Jackson, or Micrometer) are pulled in with versions that have been verified to work together.
Maven Implementation
In a Maven project, you typically inherit from the spring-boot-starter-parent. This parent POM manages the versions of all standard dependencies so that you do not have to specify them manually.
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>3.4.2</version>
    <relativePath/>
</parent>

<properties>
    <java.version>17</java.version>
</properties>
Gradle Implementation
For Gradle, the Spring Boot plugin must be applied. This plugin provides the bootJar task for creating executable archives and integrates with the dependency management plugin to provide version alignment.
// Example build.gradle configuration
plugins {
    id 'java'
    id 'org.springframework.boot' version '3.4.2'
    id 'io.spring.dependency-management' version '1.1.7'
}

group = 'com.example'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = '17'

repositories {
    mavenCentral()
}
Important Operational Constraints
When preparing your system, you must account for the Jakarta Namespace Migration. If you are migrating a legacy project or using older third-party libraries, any code that imports javax.servlet.* or javax.persistence.* will fail to compile or run. You must update these to jakarta.servlet.* and jakarta.persistence.* respectively.
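In most codebases this migration is a mechanical rewrite of import statements, sketched below:

```java
// Before (Java EE namespace — compiles only against Spring Boot 2.x):
// import javax.servlet.http.HttpServletRequest;
// import javax.persistence.Entity;

// After (Jakarta EE 9+ namespace, required by Spring Boot 3.x):
import jakarta.servlet.http.HttpServletRequest;
import jakarta.persistence.Entity;
```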
Additionally, if you intend to use GraalVM Native Images, your system must have the native-image tool installed and sufficient memory (at least 8GB recommended) because the AOT compilation process is resource-intensive compared to standard JIT compilation.
Note: Java 21 and Virtual Threads
While Java 17 is the minimum, using Java 21 is highly recommended for production environments. Spring Boot 3.2+ includes first-class support for Project Loom (Virtual Threads). You can enable this by setting spring.threads.virtual.enabled=true in your properties, which can significantly improve throughput for I/O-bound applications without requiring complex reactive programming.
Warning: Build Tool Compatibility
Using Maven versions older than 3.6.3 or Gradle versions older than 8.x will likely result in plugin execution errors. The Spring Boot 3.x build plugins utilize internal APIs that are only present in these more recent versions of the build engines.
Installation (Maven & Gradle)
Installing Spring Boot is fundamentally different from installing traditional software suites. Because Spring Boot is a library-driven framework, "installation" refers to the process of configuring a build automation tool—specifically Apache Maven or Gradle—to fetch the framework's artifacts from a central repository. Once the build tool is configured, it manages the entire lifecycle of the application, including downloading dependencies, compiling code, running tests, and packaging the executable "Fat JAR."
The framework is distributed via Maven Central, meaning no manual .zip or .exe installations are required for the framework itself. However, developers must have the corresponding build tool installed on their local machine and configured in their system's PATH.
Installing Build Automation Tools
Before configuring a Spring Boot project, you must install the underlying build engine. While many Integrated Development Environments (IDEs) like IntelliJ IDEA or Eclipse come with "bundled" versions of these tools, it is a best practice to install them manually for command-line consistency.
| Tool | Installation Method | Verification Command |
| --- | --- | --- |
| Apache Maven | Download the binary from maven.apache.org, extract, and add /bin to PATH. | mvn -version |
| Gradle | Download the binary from gradle.org/install or use SDKMAN!. | gradle -v |
| SDKMAN! | Recommended for Unix-based systems (macOS/Linux) to manage multiple versions. | sdk install maven |
Project Initialization with Maven
Maven uses a declarative pom.xml (Project Object Model) file to manage the project. To install Spring Boot within a Maven context, you must define the Spring Boot Starter Parent. This parent serves as a "bill of materials" (BOM), providing default configurations for the compiler, resource filtering, and, most importantly, dependency version management.
When you add a dependency under this parent, you do not need to specify a version tag; Maven inherits the version tested and approved by the Spring Boot release team.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>3.4.2</version>
        <relativePath/>
    </parent>

    <groupId>com.example</groupId>
    <artifactId>demo-app</artifactId>
    <version>1.0.0</version>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
Project Initialization with Gradle
Gradle offers a more programmatic approach using a Groovy or Kotlin DSL (Domain Specific Language). To install Spring Boot in a Gradle project, you apply the org.springframework.boot plugin. This plugin automatically pulls in the io.spring.dependency-management plugin, which provides Maven-like dependency resolution without versions.
Gradle is often preferred for large-scale projects due to its Incremental Build capability, which avoids re-compiling code that hasn't changed, leading to faster build times than Maven.
// Example build.gradle.kts configuration (Kotlin DSL)
plugins {
    id("java")
    // 1. Apply the Spring Boot plugin
    id("org.springframework.boot") version "3.4.2"
    // 2. Apply dependency management for version alignment
    id("io.spring.dependency-management") version "1.1.7"
}

group = "com.example"
version = "1.0.0"

repositories {
    mavenCentral()
}

dependencies {
    // 3. Include the Web starter
    implementation("org.springframework.boot:spring-boot-starter-web")
    testImplementation("org.springframework.boot:spring-boot-starter-test")
}
The Role of the Wrapper
A critical "best practice" in Spring Boot installation is the use of the Maven Wrapper (mvnw) or Gradle Wrapper (gradlew). These are small scripts included in your project root that allow anyone to build the project without having the build tool pre-installed on their machine.
When you run ./mvnw clean install, the script checks whether the Maven version pinned in .mvn/wrapper/maven-wrapper.properties has already been downloaded (under the ~/.m2/wrapper directory); if not, it downloads it automatically. This ensures that every developer on a team and every CI/CD pipeline uses exactly the same version of the build tool, eliminating "it works on my machine" errors.
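The wrapper pins the exact build-tool version in a small properties file committed alongside the project; a sketch for Maven (the version shown is only an example):

```properties
# .mvn/wrapper/maven-wrapper.properties
# Pins the exact Maven version the wrapper downloads and uses.
distributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.9.9/apache-maven-3.9.9-bin.zip
```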
Common Installation Issues & Edge Cases
| Issue | Cause | Resolution |
| --- | --- | --- |
| Certificate Errors | Corporate firewalls blocking Maven Central (HTTPS). | Add corporate SSL certificates to the Java KeyStore (cacerts). |
| Dependency Resolution Failure | Incorrect repository configuration in settings.xml. | Ensure mavenCentral() or a valid mirror is defined. |
| Missing 'bootJar' Task | Gradle plugin not applied correctly. | Verify that id "org.springframework.boot" is in the plugins block. |
| Incompatible Java Version | Build tool using a different JDK than the project. | Set JAVA_HOME to point specifically to JDK 17+. |
Note: Using the Spring Initializr
The officially recommended way to "install" and bootstrap a project is via start.spring.io (the Spring Initializr). This web-based tool generates a complete Maven or Gradle structure with the correct entries for the versions you select, ensuring your project metadata is valid from the outset.
Warning: The spring-boot-maven-plugin Requirement
If you omit the spring-boot-maven-plugin from your pom.xml, the resulting JAR file will not be executable. Standard Maven packaging does not know how to nest dependencies inside a JAR or how to launch the Spring Boot JarLauncher. Always ensure this plugin is present in the <build> section.
Installing the Spring Boot CLI
The Spring Boot CLI (Command Line Interface) is an optional but powerful command-line tool used to bootstrap, develop, and test Spring applications rapidly. While most enterprise developers use an IDE (Integrated Development Environment) and build tools like Maven or Gradle, the CLI provides a unique advantage: it allows for Prototyping with Groovy.
With the CLI, you can write concise Groovy scripts that eliminate boilerplate code—such as import statements and class declarations—because the CLI automatically provides them through "compiler customizations." It is particularly useful for developers who want to test a Spring Boot concept or a small microservice without setting up a full-blown project structure.
Installation Methods
The Spring Boot CLI can be installed on Windows, macOS, and Linux. The installation process varies depending on whether you prefer manual management or a package manager.
| Method | OS Compatibility | Description |
| --- | --- | --- |
| SDKMAN! | Linux, macOS, WSL | The recommended approach for Unix-based systems; manages versions and PATH automatically. |
| Homebrew | macOS, Linux | The standard package manager for Mac users. |
| Manual Installation | All | Download the binary distribution and manually configure environment variables. |
| MacPorts | macOS | An alternative package manager for macOS users. |
Installation via SDKMAN! (Recommended)
SDKMAN! (The Software Development Kit Manager) is the most efficient way to install the Spring Boot CLI. It allows you to switch between multiple versions of Spring Boot easily.
- Open your terminal and ensure SDKMAN! is installed.
- Execute the installation command:
sdk install springboot
- Verify the installation:
spring --version
Manual Installation (Windows & Generic)
If you do not wish to use a package manager, you can install the CLI manually by following these steps:
- Download: Obtain the Spring Boot CLI distribution (e.g., spring-boot-cli-3.4.2-bin.zip) from the Spring software repository.
- Unpack: Extract the ZIP or TAR.GZ file to a permanent location on your hard drive (e.g., C:\tools\spring-3.4.2).
- Environment Variables: Add the /bin directory from the extracted folder to your system PATH variable.
  - Windows: Edit "System Environment Variables" -> Path -> add C:\tools\spring-3.4.2\bin.
  - Linux/macOS: Add export PATH=$PATH:/path/to/spring-3.4.2/bin to your .bashrc or .zshrc.
Using the CLI to Run Applications
The true power of the CLI lies in its ability to execute .groovy files. Because the CLI understands the Spring ecosystem, it recognizes common annotations and automatically adds the necessary dependencies to the internal classpath.
// Save this as hello.groovy
@RestController
class WebApp {
    @RequestMapping("/")
    String home() {
        "Hello from the Spring Boot CLI!"
    }
}
To run this application, simply execute the following command in your terminal. The CLI will automatically download any missing dependencies, compile the script, and start an embedded web server:
spring run hello.groovy
CLI Command Reference
The CLI provides several built-in commands to assist with the development lifecycle.
| Command | Action |
| --- | --- |
| spring init | Creates a new Spring Boot project using the Spring Initializr service. |
| spring run | Compiles and runs a Groovy script or a set of Java files. |
| spring test | Runs tests for a given Groovy script. |
| spring jar | Packages a Groovy script into a self-contained executable JAR file. |
| spring shell | Enters a nested shell with tab-completion for Spring commands. |
Important Operational Notes
When using spring init, the CLI acts as a client for the Spring Initializr API. This is the fastest way to generate a Maven or Gradle project structure directly from your terminal without opening a browser.
# Example: Create a web project with JPA and MySQL dependencies
spring init --dependencies=web,data-jpa,mysql --build=maven my-project.zip
Note: Shell Completion
The Spring Boot CLI comes with shell completion scripts for BASH and ZSH. If you installed via SDKMAN!, this is usually configured for you. If you installed manually, you should source the completion script found in the /shell-completion folder of your installation to enable tab-completion for commands and dependencies.
Warning: CLI vs. Production Code
While the CLI is excellent for rapid prototyping and "scratch-pad" development, it is not recommended for building large-scale production applications. For production systems, you should use a standard Maven or Gradle project structure to ensure better IDE support, static analysis, and CI/CD integration.
Developing Your First Application
Developing your first Spring Boot application involves moving beyond theory to create a functional, executable service. The goal of this phase is to understand how the Component Scan, Dependency Injection, and Embedded Server work together to serve a request. In a typical Spring Boot project, you follow a standard directory structure (usually the Maven/Gradle standard) where source code resides in src/main/java and configuration files in src/main/resources.
At this stage, you will see how Spring Boot simplifies the "web plumbing." Instead of manually configuring a DispatcherServlet or a web.xml file, you simply define a class with specific annotations, and the framework auto-configures the rest.
Project Structure and Components
A standard "Hello World" application requires two primary components: an Application Launcher and a REST Controller.
- The Launcher: This class contains the main method and acts as the entry point. It is annotated with @SpringBootApplication.
- The Controller: This class handles incoming HTTP requests. By using @RestController, you tell Spring that this class should be scanned and that its methods should return data directly in the HTTP response body (typically as JSON or plain text).
Implementation Code
Below is the complete implementation for a basic web service. This example combines the entry point and the controller for simplicity, though in larger applications, these are usually separated into different files.
package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

/**
 * The @SpringBootApplication annotation encapsulates:
 * - @Configuration: Tags the class as a source of bean definitions.
 * - @EnableAutoConfiguration: Tells Spring Boot to start adding beans based on the classpath.
 * - @ComponentScan: Tells Spring to look for other components (Controllers, Services) in this package.
 */
@SpringBootApplication
@RestController
public class DemoApplication {

    public static void main(String[] args) {
        // This line bootstraps the application, starting Tomcat on port 8080 by default.
        SpringApplication.run(DemoApplication.class, args);
    }

    /**
     * Maps GET requests for "/greet" to this method.
     * Use @RequestParam to capture query parameters from the URL.
     */
    @GetMapping("/greet")
    public String sayHello(@RequestParam(value = "name", defaultValue = "World") String name) {
        return String.format("Hello, %s!", name);
    }
}
Running and Testing the Application
Once the code is written, you can execute the application using your build tool's command line or your IDE's run button. Spring Boot will output logs to the console showing the startup progress, the beans being initialized, and the port on which the server is listening.
| Step | Command (Maven) | Command (Gradle) | Expected Outcome |
| --- | --- | --- | --- |
| Compile & Run | ./mvnw spring-boot:run | ./gradlew bootRun | Console logs show "Started DemoApplication" |
| Verify (Browser) | http://localhost:8080/greet | http://localhost:8080/greet | Browser displays "Hello, World!" |
| Verify (Params) | http://localhost:8080/greet?name=User | http://localhost:8080/greet?name=User | Browser displays "Hello, User!" |
| Package | ./mvnw clean package | ./gradlew bootJar | Creates an executable JAR in the /target or /build folder |
Under the Hood: The Startup Process
When you run the application, several complex actions occur in the background:
- Classpath Inspection: Spring Boot looks for spring-boot-starter-web and detects the presence of Tomcat and Spring MVC.
- Embedded Server Startup: Instead of deploying to an external server, Spring Boot initializes an internal Tomcat instance.
- Bean Registration: The @ComponentScan detects the @RestController. Spring creates an instance (a "Bean") of DemoApplication and registers the /greet endpoint in the HandlerMapping.
- HTTP Request Handling: When a request hits port 8080, the embedded Tomcat passes it to the DispatcherServlet, which routes it to your sayHello method.
Common Troubleshooting for First Applications
| Error / Issue | Common Cause | Resolution |
| --- | --- | --- |
| Port 8080 already in use | Another process (or a previous run) is using the port. | Change the port in application.properties via server.port=9090. |
| 404 Not Found | The controller is in a package not scanned by the main class. | Ensure the controller is in the same package as, or a sub-package of, the @SpringBootApplication class. |
| Whitelabel Error Page | No mapping found for the root URL /. | Add a mapping for @GetMapping("/") or navigate to a specific endpoint (e.g., /greet). |
Note: Hot Swapping
For a better development experience, add the spring-boot-devtools dependency. This allows the application to automatically restart whenever you save a file, significantly reducing the feedback loop during development.
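As a sketch, the Maven entry for this dependency is typically marked runtime and optional so that devtools never leaks into packaged production builds:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-devtools</artifactId>
    <scope>runtime</scope>
    <optional>true</optional>
</dependency>
```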
Warning: Classpath Overlap
Avoid putting your @SpringBootApplication class in the "default" package (i.e., no package declaration at the top of the file). Doing so causes Spring to scan every single class in every JAR on your classpath, which leads to massive startup delays and unexpected bean collisions.
SpringApplication Class
The SpringApplication class is the central bootstrap component of any Spring Boot application. It is responsible for creating the appropriate ApplicationContext instance, registering a CommandLinePropertySource to expose command-line arguments as Spring properties, refreshing the application context, and triggering any CommandLineRunner or ApplicationRunner beans. While most developers interact with it via the static run() method in their main class, the class offers a rich API for customizing how an application starts and behaves during its lifecycle.
Startup Customization
While the default SpringApplication.run(MyClass.class, args) is sufficient for most use cases, you can instantiate SpringApplication manually to tweak its behavior before the application launches. This is particularly useful for disabling the startup banner, setting additional profiles programmatically, or changing the web application type (e.g., forcing a non-web application).
package com.example.config;

import org.springframework.boot.Banner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

import java.util.Collections;

@SpringBootApplication
public class CustomApplication {

    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(CustomApplication.class);

        // Disable the Spring Boot ASCII banner
        app.setBannerMode(Banner.Mode.OFF);

        // Programmatically set default properties that can be overridden by application.properties
        app.setDefaultProperties(Collections.singletonMap("server.port", "8081"));

        // Launch the application
        app.run(args);
    }
}
Application Events and Listeners
The SpringApplication class sends events at different stages of the startup process. Some events are actually fired before the ApplicationContext is even created, meaning you cannot register listeners for them as standard @Bean components. Instead, these must be registered manually via the SpringApplication.addListeners(...) method or through the META-INF/spring.factories file.
| Event | Timing | Typical Use Case |
| --- | --- | --- |
| ApplicationStartingEvent | Start of a run, before any processing (except listener registration). | Logging early initialization or manual environment setup. |
| ApplicationEnvironmentPreparedEvent | When the Environment is known but the context is not yet created. | Modifying environment variables or profiles programmatically. |
| ApplicationContextInitializedEvent | When the ApplicationContext is prepared and initializers have been called. | Injecting early-stage context modifications. |
| ApplicationPreparedEvent | After bean definitions are loaded but before the refresh starts. | Inspecting the bean factory before initialization. |
| ApplicationReadyEvent | After the application has started and is ready to service requests. | Triggering post-startup logic like cache warming. |
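Because the earliest events fire before the ApplicationContext exists, their listeners must be attached to SpringApplication directly. A minimal sketch of registering such a listener (the class name and log message are illustrative):

```java
package com.example.events;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.event.ApplicationStartingEvent;
import org.springframework.context.ApplicationListener;

@SpringBootApplication
public class EventDemoApplication {
    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(EventDemoApplication.class);
        // Registered manually because ApplicationStartingEvent fires
        // before the ApplicationContext (and any @Bean listener) exists.
        app.addListeners((ApplicationListener<ApplicationStartingEvent>) event ->
                System.out.println("Application is starting..."));
        app.run(args);
    }
}
```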
Fluent Builder API
If you prefer a hierarchical or more readable way of configuring the application, Spring Boot provides the SpringApplicationBuilder. This class allows you to chain configuration methods and is particularly helpful when dealing with parent/child contexts in complex multi-module applications.
package com.example.builder;

import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;

@SpringBootApplication
public class BuilderApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder()
                .sources(BuilderApplication.class)
                .bannerMode(org.springframework.boot.Banner.Mode.CONSOLE)
                .profiles("dev")
                .logStartupInfo(false)
                .run(args);
    }
}
Accessing Command-Line Arguments
SpringApplication provides a way to access the raw arguments passed to the main method as well as a parsed version via the ApplicationArguments interface. By injecting ApplicationArguments into any Spring bean, you can access option names and values (e.g., --debug or --server.port=9000) without manually parsing strings.
import org.springframework.boot.ApplicationArguments;
import org.springframework.stereotype.Component;

import java.util.List;

@Component
public class MyArgumentHandler {

    // ApplicationArguments is automatically available for injection
    public MyArgumentHandler(ApplicationArguments args) {
        boolean debug = args.containsOption("debug");
        List<String> files = args.getNonOptionArgs();
        // Use arguments for logic...
    }
}
Application Exit
To ensure a graceful shutdown, each SpringApplication registers a shutdown hook with the JVM. This ensures that the ApplicationContext is closed properly, releasing resources such as database connections and thread pools. If you need to return a specific exit code when the application stops, you can implement the ExitCodeGenerator interface.
| Feature | Description |
| --- | --- |
| Shutdown Hook | Automatically registered to close the context on JVM termination. |
| ExitCodeGenerator | An interface that can be implemented to return specific codes to the OS. |
| SpringApplication.exit() | A static method used to programmatically trigger a clean shutdown. |
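Putting the pieces together, a minimal sketch of returning a custom exit code to the operating system (the value 42 is arbitrary):

```java
package com.example.exit;

import org.springframework.boot.ExitCodeGenerator;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class ExitDemoApplication {

    @Bean
    public ExitCodeGenerator exitCodeGenerator() {
        // Arbitrary code returned to the OS on shutdown
        return () -> 42;
    }

    public static void main(String[] args) {
        // exit() closes the context cleanly and collects ExitCodeGenerator beans
        System.exit(SpringApplication.exit(
                SpringApplication.run(ExitDemoApplication.class, args)));
    }
}
```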
Note: Web Application Types
SpringApplication automatically deduces your application type. If spring-webmvc is present, it starts an AnnotationConfigServletWebServerApplicationContext. If spring-webflux is present, it starts a reactive context. If neither is present, it starts a standard non-web AnnotationConfigApplicationContext.
Warning: Lazy Initialization
You can enable lazy initialization globally via app.setLazyInitialization(true). While this significantly reduces startup time by only creating beans when they are needed, it can hide bean configuration errors until the application is already running in production, and may increase the latency of the first request.
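As a brief sketch (the LazyApp class name is illustrative), lazy initialization can be switched on during bootstrap; the property spring.main.lazy-initialization=true achieves the same result:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class LazyApp {
    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(LazyApp.class);
        // Equivalent to setting spring.main.lazy-initialization=true
        app.setLazyInitialization(true);
        app.run(args);
    }
}
```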
Externalized Configuration (Properties & YAML)
Spring Boot allows you to externalize your configuration so that you can work with the same application code in different environments. You can use properties files, YAML files, environment variables, and command-line arguments to feed configuration data into your Spring Beans. This approach follows the "Twelve-Factor App" methodology, ensuring that secret keys, database URLs, and environment-specific settings are never hard-coded into the compiled artifact.
Configuration Formats: Properties vs. YAML
Spring Boot supports two primary file formats for configuration: .properties and .yml (or .yaml). While both serve the same purpose, they offer different stylistic advantages.
- Properties Files: Use a flat key-value structure. They are the traditional Java standard and are easy to manipulate with simple text processing tools.
- YAML Files: Use a hierarchical, indentation-based structure. YAML is often preferred for its readability when dealing with complex, nested configurations and its ability to store list data more cleanly.
| Feature | Properties (.properties) | YAML (.yml) |
| :--- | :--- | :--- |
| Structure | Flat (key=value) | Hierarchical (indented) |
| Readability | Becomes verbose with long keys | Clean and concise for nested keys |
| List Support | Uses bracket notation: list[0] | Uses bulleted lists or flow style |
| @PropertySource | Fully supported | Not supported with @PropertySource |
Implementation Examples
Below is a comparison of how the same configuration—defining a server port and a list of authorized admin emails—is represented in both formats.
Properties Format
# application.properties
server.port=9090
app.security.admins[0]=admin@example.com
app.security.admins[1]=dev@example.com
app.display-name=Production-Server
YAML Format
# application.yml
server:
  port: 9090
app:
  security:
    admins:
      - admin@example.com
      - dev@example.com
  display-name: Production-Server
Accessing Configuration Data
There are three primary ways to inject these values into your Java code: the @Value annotation, the Environment abstraction, and the type-safe @ConfigurationProperties.
- Using @Value
The @Value annotation is used for simple, individual value injections. It supports SpEL (Spring Expression Language) and default values.
@Component
public class MyService {

    @Value("${app.display-name:DefaultName}")
    private String serverName;

    public void printName() {
        System.out.println("Running on: " + serverName);
    }
}
- Type-Safe Configuration Properties
For complex or grouped settings, @ConfigurationProperties is the best practice. It maps a set of properties to a POJO (Plain Old Java Object), supporting nested objects and validation.
@Configuration
@ConfigurationProperties(prefix = "app.security")
public class SecurityProperties {

    private List<String> admins;
    private boolean enabled;

    // Standard getters and setters are required
    public List<String> getAdmins() { return admins; }
    public void setAdmins(List<String> admins) { this.admins = admins; }
    public boolean isEnabled() { return enabled; }
    public void setEnabled(boolean enabled) { this.enabled = enabled; }
}
Configuration Priority (The Hierarchy)
Spring Boot uses a very specific "Order of Precedence" when loading properties. If the same key is defined in multiple places, the higher-priority source overrides the lower-priority one.
| Priority | Source | Description |
| :--- | :--- | :--- |
| 1 | Command-line arguments | e.g., --server.port=9000 |
| 2 | JSON in Environment Variables | SPRING_APPLICATION_JSON='{"foo":"bar"}' |
| 3 | OS Environment Variables | e.g., SERVER_PORT=80 |
| 4 | Config file (External) | application.properties outside the JAR |
| 5 | Config file (Internal) | application.properties inside the JAR |
| 6 | Default Properties | Set via SpringApplication.setDefaultProperties |
Relaxed Binding
Spring Boot uses "relaxed binding" when mapping environment properties to @ConfigurationProperties. This means that the property name in the file doesn't have to be an exact case-match for the field name in the Java class.
| Property Source | Format | Example |
| :--- | :--- | :--- |
| Properties file | kebab-case (Recommended) | my.main-project.name |
| YAML file | kebab-case (Recommended) | my.main-project.name |
| Env Variables | Upper case with underscore | MY_MAINPROJECT_NAME |
| Java System Props | camelCase | my.mainProject.name |
Note: YAML and @PropertySource
You cannot use the @PropertySource annotation to load YAML files. If you need to load a custom configuration file using YAML, you must use the YamlPropertySourceLoader or, more simply, stick to the default application.yml and application-{profile}.yml naming conventions which Spring Boot detects automatically.
Warning: Security of Configuration
Never store sensitive information like passwords, API keys, or private certificates in application.properties files that are committed to version control. Instead, use environment variables or a secure vault (like HashiCorp Vault or AWS Secrets Manager) and reference them using the ${SECRET_VAR} syntax.
Profiles
Spring Profiles provide a powerful way to segregate parts of your application configuration and make it only available in certain environments. A common challenge in enterprise development is managing the variance between a developer's local machine, a shared testing environment, and the final production cluster. Profiles allow you to define environment-specific beans and configuration settings without changing the underlying code.
When a profile is "active," the application context only loads the components and properties associated with that specific profile. If no profile is explicitly activated, Spring Boot defaults to the default profile.
Defining Profile-Specific Configuration
There are two primary ways to define profile-specific properties: using separate files or using a multi-document file approach.
- Separate Files:
You can create files following the naming convention application-{profile}.properties or application-{profile}.yml. Spring Boot will automatically load application.properties first, then override any matching keys with the values found in the active profile's file.
- Multi-Document YAML:
In a single application.yml file, you can separate configurations using the --- separator. This keeps all environment logic in one place.
# application.yml
server:
  port: 8080
spring:
  profiles:
    active: dev # Sets the default active profile
---
# Development Profile
spring:
  config:
    activate:
      on-profile: dev
db:
  url: "jdbc:h2:mem:testdb"
---
# Production Profile
spring:
  config:
    activate:
      on-profile: prod
db:
  url: "jdbc:mysql://prod-server:3306/main_db"
server:
  port: 80 # Overrides the default port for production
Activating Profiles
You can activate one or more profiles using various methods. If multiple profiles are activated, the last one defined in the list usually takes precedence for overlapping properties.
| Method | Syntax / Example | Use Case |
| :--- | :--- | :--- |
| Properties File | spring.profiles.active=prod | Setting a default in application.properties |
| Command Line | --spring.profiles.active=prod,common | Overriding during deployment/JAR execution |
| Env Variables | export SPRING_PROFILES_ACTIVE=staging | Standard for Docker and Kubernetes deployments |
| Programmatic | app.setAdditionalProfiles("dev"); | Hard-coding a profile during SpringApplication setup |
| JVM System Prop | -Dspring.profiles.active=test | Passing arguments to the JVM at startup |
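A sketch of the programmatic route, assuming an illustrative ProfileApplication class; setAdditionalProfiles() adds profiles on top of whatever spring.profiles.active resolves to, rather than replacing it:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class ProfileApplication {
    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(ProfileApplication.class);
        // "dev" is activated in addition to any externally configured profiles
        app.setAdditionalProfiles("dev");
        app.run(args);
    }
}
```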
Using @Profile with Components
Beyond configuration properties, you can use the @Profile annotation to restrict the registration of Spring Beans. This is highly useful for swapping out service implementations—for example, using a mock Email Service during local development and a real SMTP Service in production.
package com.example.service;

import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Service;

public interface NotificationService {
    void send(String message);
}

@Service
@Profile("dev")
class DevNotificationService implements NotificationService {
    public void send(String message) {
        System.out.println("DEV LOG: " + message);
    }
}

@Service
@Profile("prod")
class ProdNotificationService implements NotificationService {
    public void send(String message) {
        // Real logic to send via Amazon SES or SendGrid
    }
}
Profile Expression Logic
Profile Expressions (available since Spring Framework 5.1, i.e., Spring Boot 2.1 and later) provide more granular control. They allow you to activate beans only when a complex combination of profiles is met, using the operators &, |, and !.
| Expression | Meaning |
| :--- | :--- |
| @Profile("dev") | Active if dev is active |
| @Profile("!prod") | Active if prod is NOT active |
| @Profile("dev & cloud") | Active only if both dev and cloud are active |
| @Profile("dev \| staging") | Active if either dev or staging is active |
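As an illustrative sketch (the service name is hypothetical), an expression can gate a bean on two profiles at once:

```java
import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Service;

@Service
@Profile("dev & cloud")
public class CloudDevDiagnosticsService {
    // Registered only when BOTH the "dev" and "cloud" profiles are active
    public String describe() {
        return "cloud diagnostics enabled";
    }
}
```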
Profile Groups
If you have many micro-profiles (e.g., mysql, security-off, local-storage), you can use Profile Groups to bundle them under a single umbrella name like local. This simplifies the activation command for developers.
# application.properties
spring.profiles.group.local=mysql,local-storage,debug-logging
Activating local will now automatically activate all three constituent profiles.
Note: The "Default" Profile
If you do not specify an active profile, Spring uses default. You can create an application-default.properties file to provide settings that only apply when no other profile is explicitly set. Once any other profile is activated, the default profile becomes inactive.
Warning: Profile-Specific Overrides
| Priority | Source |
| :--- | :--- |
| Highest | application-{profile}.properties (External to JAR) |
| High | application-{profile}.properties (Internal to JAR) |
| Medium | application.properties (External to JAR) |
| Lowest | application.properties (Internal to JAR) |
Be careful when mixing external and internal files. An external application.properties will override an internal application-dev.properties for the keys it contains, which can be counter-intuitive.
Logging
Spring Boot uses the Commons Logging API for all internal logging but leaves the underlying implementation open to the developer. By default, Spring Boot auto-configures Logback, the de facto standard for Java applications. It also provides default configurations for Java Util Logging and Log4J2, and routes third-party frameworks through SLF4J bridges, ensuring that even if libraries use different logging frameworks, they are all funneled into a single, cohesive output stream.
Default Log Format
When you start a Spring Boot application, the default console output provides essential information formatted for readability. The standard output includes:
- Date and Time: Millisecond precision.
- Log Level: ERROR, WARN, INFO, DEBUG, or TRACE.
- Process ID: The PID of the running JVM.
- Separator: A --- marker to distinguish the start of the actual log message.
- Thread Name: Enclosed in square brackets [].
- Logger Name: Usually the abbreviated source class name.
- Message: The actual log content.
2026-02-15 16:30:15.123 INFO 12345 --- [main] c.e.demo.DemoApplication : Starting DemoApplication...
Log Levels and Configuration
Logging levels can be configured directly in your application.properties or application.yml file. You can set the "root" logger level (which affects the entire application) or specify levels for individual packages or classes.
| Level | Description |
| :--- | :--- |
| OFF | No logging at all. |
| ERROR | Critical issues that cause function failure. |
| WARN | Potentially harmful situations or deprecated usage. |
| INFO | (Default) Informational messages highlighting progress. |
| DEBUG | Fine-grained events useful for debugging. |
| TRACE | Most detailed information; high volume. |
Example: Scoped Logging Configuration
# Set the root logging level
logging.level.root=INFO
# Set specific package to DEBUG for troubleshooting
logging.level.org.springframework.web=DEBUG
logging.level.com.example.service=TRACE
# Hibernate SQL logging
logging.level.org.hibernate.SQL=DEBUG
File Output and Rotation
By default, Spring Boot only logs to the console. For production environments, you must configure the framework to write to physical files. Spring Boot simplifies this by providing built-in support for log rotation, ensuring that log files do not grow indefinitely and consume all available disk space.
| Property | Example Value | Description |
| :--- | :--- | :--- |
| logging.file.name | myapp.log | Names the log file. |
| logging.file.path | /var/log/ | Directory where logs will be stored. |
| logging.logback.rollingpolicy.max-file-size | 10MB | Threshold to trigger a new log file. |
| logging.logback.rollingpolicy.max-history | 7 | Number of days to keep archived logs. |
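A sketch of a rotation setup combining these keys (the path is illustrative). Note that if both are set, logging.file.name takes precedence over logging.file.path:

```properties
# Write to a single absolute file path and rotate
logging.file.name=/var/log/myapp/myapp.log
logging.logback.rollingpolicy.max-file-size=10MB
logging.logback.rollingpolicy.max-history=7
```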
Logging in Java Code
The recommended way to log within your classes is via the SLF4J (Simple Logging Facade for Java) API. This keeps your code independent of the underlying logging implementation (Logback, Log4J2, etc.).
package com.example.demo;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;

@Service
public class OrderService {

    // Define the logger using the SLF4J Factory
    private static final Logger logger = LoggerFactory.getLogger(OrderService.class);

    public void processOrder(String orderId) {
        logger.info("Processing order with ID: {}", orderId);
        try {
            // Business logic here
        } catch (Exception e) {
            logger.error("Failed to process order {}: {}", orderId, e.getMessage(), e);
        }
    }
}
Customizing Logback (Advanced)
For complex requirements—such as sending logs to an external ELK stack (Elasticsearch, Logstash, Kibana) or using specific XML layouts—you can include a logback-spring.xml file in your src/main/resources folder. By using the -spring suffix in the filename, you allow Spring Boot to provide advanced features like Profile-specific logging configurations.
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <springProfile name="dev">
        <include resource="org/springframework/boot/logging/logback/console-appender.xml" />
        <root level="DEBUG">
            <appender-ref ref="CONSOLE" />
        </root>
    </springProfile>
    <springProfile name="prod">
        <root level="INFO">
            <!-- Attach appenders (e.g., a file appender) here, or nothing is written -->
        </root>
    </springProfile>
</configuration>
Note: Color-Coded Logs
If your terminal supports ANSI, Spring Boot can produce color-coded logs to help distinguish levels (e.g., ERROR in red, WARN in yellow). You can force this behavior by setting spring.output.ansi.enabled=ALWAYS in your properties file.
Warning: Performance Impact of DEBUG/TRACE
Avoid leaving your root logging level at DEBUG or TRACE in production. High-volume logging is a synchronous operation by default in Logback and can significantly degrade application throughput and fill up disk partitions in minutes.
Internationalization (i18n)
Internationalization, commonly abbreviated as i18n, is the process of designing an application so that it can be adapted to various languages and regions without requiring engineering changes to the source code. Spring Boot provides robust support for i18n by leveraging Spring’s MessageSource abstraction. It automates the resolution of localized text based on the user's Locale (the combination of language and country), allowing your application to serve a global audience seamlessly.
The framework looks for Resource Bundles—collections of properties files containing key-value pairs—where the keys remain constant across all languages while the values are translated.
Resource Bundle Configuration
By default, Spring Boot looks for message resources in a file named messages.properties located at the root of the classpath (src/main/resources). To support additional languages, you create siblings of this file appended with ISO language codes.
| File Name | Locale | Description |
| :--- | :--- | :--- |
| messages.properties | Default | The fallback file used if no specific locale matches. |
| messages_en.properties | English | Specific strings for the English language. |
| messages_fr.properties | French | Specific strings for the French language. |
| messages_zh_CN.properties | Chinese (China) | Specific strings for simplified Chinese. |
Example: Resource Bundle Content
# messages.properties
welcome.message=Welcome to our service!
user.login.success=Hello, {0}! You have successfully logged in.
# messages_fr.properties
welcome.message=Bienvenue dans notre service !
user.login.success=Bonjour, {0} ! Vous vous êtes connecté avec succès.
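Placeholders such as {0} are filled using java.text.MessageFormat semantics, which is what MessageSource applies under the hood. A minimal, framework-free sketch of that substitution (the class name is illustrative):

```java
import java.text.MessageFormat;

public class LoginMessageDemo {

    // Applies {0}-style substitution exactly as MessageFormat defines it
    static String format(String pattern, Object... args) {
        return MessageFormat.format(pattern, args);
    }

    public static void main(String[] args) {
        String pattern = "Hello, {0}! You have successfully logged in.";
        System.out.println(format(pattern, "Alice"));
    }
}
```

One MessageFormat quirk worth remembering: a literal single quote in a pattern must be doubled ('') or it silently disables the placeholders that follow it.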
The MessageSource Bean
Spring Boot auto-configures a ResourceBundleMessageSource bean. You can customize its behavior (such as changing the base name of the files or the default encoding) in your application.properties.
# Customizing MessageSource settings
spring.messages.basename=i18n/messages
spring.messages.encoding=UTF-8
spring.messages.fallback-to-system-locale=false
spring.messages.cache-duration=3600s
Accessing Localized Messages in Code
To retrieve messages in your Java code, you inject the MessageSource interface. You must provide the message key, an array of arguments for placeholders (like {0}), and the current Locale.
package com.example.demo.service;

import org.springframework.context.MessageSource;
import org.springframework.context.i18n.LocaleContextHolder;
import org.springframework.stereotype.Service;

import java.util.Locale;

@Service
public class GreetingService {

    private final MessageSource messageSource;

    public GreetingService(MessageSource messageSource) {
        this.messageSource = messageSource;
    }

    public String getGreeting(String username) {
        // Retrieve the locale from the current request thread
        Locale locale = LocaleContextHolder.getLocale();
        // Resolve the message with a placeholder for the username
        return messageSource.getMessage(
                "user.login.success",
                new Object[]{username},
                locale
        );
    }
}
Locale Resolution in Web Applications
In a web context, Spring Boot needs to determine which locale the user prefers. It uses a LocaleResolver bean to accomplish this. The default implementation is the AcceptHeaderLocaleResolver, which reads the Accept-Language header sent by the user's browser.
Common LocaleResolvers
| Resolver | Strategy | Best Use Case |
| :--- | :--- | :--- |
| AcceptHeaderLocaleResolver | Uses HTTP Accept-Language header. | Standard web applications (Automatic). |
| SessionLocaleResolver | Stores locale in the user's HTTP session. | Apps where users manually toggle language. |
| CookieLocaleResolver | Stores locale in a browser cookie. | Stateless apps where language preference must persist. |
| FixedLocaleResolver | Hardcodes a single locale. | Internal apps restricted to one region. |
Switching Locales Dynamically
If you want users to change the language manually (e.g., clicking a flag icon), you must register a LocaleChangeInterceptor. This interceptor monitors incoming requests for a specific parameter (e.g., ?lang=fr) and updates the user's locale.
import java.util.Locale;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.LocaleResolver;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;
import org.springframework.web.servlet.i18n.LocaleChangeInterceptor;
import org.springframework.web.servlet.i18n.SessionLocaleResolver;

@Configuration
public class WebConfig implements WebMvcConfigurer {

    @Bean
    public LocaleResolver localeResolver() {
        // Use a Session-based resolver so the choice persists across pages
        SessionLocaleResolver slr = new SessionLocaleResolver();
        slr.setDefaultLocale(Locale.US);
        return slr;
    }

    @Bean
    public LocaleChangeInterceptor localeChangeInterceptor() {
        LocaleChangeInterceptor lci = new LocaleChangeInterceptor();
        lci.setParamName("lang"); // Intercepts ?lang=es
        return lci;
    }

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(localeChangeInterceptor());
    }
}
Note: Using Messages in Thymeleaf
If you are using the Thymeleaf template engine, you don't need to call MessageSource manually in the controller. You can use the #{...} syntax directly in your HTML: <p th:text="#{welcome.message}">Fallback Text</p>.
Warning: UTF-8 Encoding for Properties
Historically, Java .properties files used ISO-8859-1 encoding. While Spring Boot 3 defaults to UTF-8 for message bundles, always ensure your IDE is configured to save these files in UTF-8, especially when dealing with non-Latin characters (e.g., Cyrillic, Kanji). Failure to do so will result in garbled text ("mojibake") in your UI.
JSON Support (Jackson, Gson)
In modern web development, JSON (JavaScript Object Notation) is the primary format for data exchange. Spring Boot provides seamless, out-of-the-box support for JSON processing, allowing developers to convert Java objects to JSON and vice versa automatically. This is achieved through the HttpMessageConverter interface. While Spring Boot primarily defaults to the Jackson library, it also offers first-class support for Gson and JSON-B.
The framework uses auto-configuration to detect which library is on the classpath and configures a global mapper (such as Jackson's ObjectMapper) that is used across the entire application—including REST controllers, specialized clients like RestTemplate, and Spring Data's REST support.
Jackson Support (Default)
Jackson is the preferred JSON library for Spring Boot. If you include the spring-boot-starter-web (or webflux), Jackson is automatically included and configured. Spring Boot provides a Jackson2ObjectMapperBuilder bean that allows you to customize the default ObjectMapper without replacing it entirely.
Customizing Jackson Behavior
You can control how Jackson handles date formats, null values, and property naming strategies directly through application.properties.
| Property Key | Example Value | Description |
| :--- | :--- | :--- |
| spring.jackson.date-format | yyyy-MM-dd HH:mm:ss | Sets a global date format. |
| spring.jackson.default-property-inclusion | non_null | Excludes fields with null values from JSON output. |
| spring.jackson.property-naming-strategy | SNAKE_CASE | Converts camelCase Java fields to snake_case JSON keys. |
| spring.jackson.serialization.indent_output | true | Enables "pretty-printing" of JSON. |
JSON Mapping in Java Code
To control how specific classes or fields are serialized, you use Jackson annotations. This allows you to rename fields, ignore sensitive data, or handle complex object relationships.
package com.example.demo.model;

import com.fasterxml.jackson.annotation.JsonFormat;
import com.fasterxml.jackson.annotation.JsonIgnore;
import com.fasterxml.jackson.annotation.JsonProperty;

import java.time.LocalDateTime;

public class UserProfile {

    // Renames the JSON key from 'firstName' to 'given_name'
    @JsonProperty("given_name")
    private String firstName;

    // Prevents the password from ever being sent in a JSON response
    @JsonIgnore
    private String password;

    // Formats the date precisely for the JSON output
    @JsonFormat(pattern = "dd-MM-yyyy HH:mm")
    private LocalDateTime lastLogin;

    // Constructors, Getters, and Setters...
}
Switching to Gson or JSON-B
If your project requires Gson or JSON-B instead of Jackson, Spring Boot makes the transition simple. To switch to Gson, you must exclude the Jackson dependency and add the Gson dependency to your build file. Once detected, Spring Boot will automatically register a GsonHttpMessageConverter.
Maven Configuration for Gson
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-json</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
</dependency>
Comparison of JSON Libraries
| Library | Spring Boot Status | Key Advantage |
| :--- | :--- | :--- |
| Jackson | Default | Highly performant, extensive annotation support, native Kotlin/Scala support. |
| Gson | Supported | Excellent for handling complex generic types and very small memory footprint. |
| JSON-B | Supported | Standardized Java EE/Jakarta EE API; minimal dependency weight. |
Handling Edge Cases: Circular References
A common issue in JSON serialization occurs when two objects reference each other (e.g., a Parent has a list of Children, and each Child has a reference back to the Parent). By default, this causes a StackOverflowError.
Jackson provides @JsonManagedReference and @JsonBackReference to solve this. The "managed" side is serialized normally, while the "back" reference is omitted during serialization to break the loop.
public class Parent {
    private String name;

    @JsonManagedReference
    private List<Child> children;
}

public class Child {
    private String name;

    @JsonBackReference
    private Parent parent;
}
Working with Raw JSON: JsonNode
Sometimes you may receive a JSON payload with a dynamic structure that doesn't map cleanly to a POJO. In these cases, you can use Jackson's JsonNode to navigate the JSON tree manually.
@PostMapping("/process")
public void processPayload(@RequestBody com.fasterxml.jackson.databind.JsonNode payload) {
    // Navigate the tree without a concrete class
    String type = payload.get("metadata").get("type").asText();
    if ("ADMIN".equals(type)) {
        // Handle logic
    }
}
Note: Testing JSON Serialization
Spring Boot provides the @JsonTest annotation for slice testing. This loads only the JSON-related infrastructure (Jackson/Gson) and provides a JacksonTester utility to verify that your Java objects serialize into the exact JSON format expected by your API consumers.
Warning: Default Constructor Requirement
Most JSON libraries, including Jackson and Gson, require a no-args constructor to instantiate Java objects during deserialization. If you declare a custom constructor without also providing a default one (or annotating the custom constructor with @JsonCreator), your API will return a 400 Bad Request or throw a MismatchedInputException.
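A sketch of the @JsonCreator escape hatch (the AccountRequest class is hypothetical): the annotated constructor tells Jackson how to build an immutable object that has no default constructor.

```java
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;

public class AccountRequest {

    private final String owner;
    private final int limit;

    // Jackson invokes this constructor during deserialization,
    // binding JSON fields to parameters by the @JsonProperty names
    @JsonCreator
    public AccountRequest(@JsonProperty("owner") String owner,
                          @JsonProperty("limit") int limit) {
        this.owner = owner;
        this.limit = limit;
    }

    public String getOwner() { return owner; }
    public int getLimit() { return limit; }
}
```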
Task Execution & Scheduling
In a production environment, applications often need to perform tasks asynchronously or at specific intervals—such as sending nightly email reports, cleaning up temporary database records, or processing high-latency webhooks without blocking the main request thread. Spring Boot provides an abstraction layer for these requirements through the Task Executor and Task Scheduler interfaces.
By default, Spring Boot auto-configures a ThreadPoolTaskExecutor and a ThreadPoolTaskScheduler if it detects that asynchronous or scheduled tasks are enabled. This prevents the application from spawning an unlimited number of threads, which could lead to resource exhaustion.
Enabling Execution and Scheduling
To use these features, you must explicitly enable them in one of your @Configuration classes (often the main application class). This triggers the post-processors that search for @Async and @Scheduled annotations.
@SpringBootApplication
@EnableAsync       // Enables asynchronous method execution
@EnableScheduling  // Enables scheduled task execution
public class TaskApplication {
    public static void main(String[] args) {
        SpringApplication.run(TaskApplication.class, args);
    }
}
Asynchronous Task Execution (@Async)
The @Async annotation allows a method to execute in a separate thread. When a caller invokes an @Async method, the call returns immediately, and the actual execution occurs in a thread managed by the Spring TaskExecutor.
Example: Non-blocking Service
@Service
public class EmailService {

    @Async
    public void sendEmail(String recipient) {
        // This logic runs in a background thread
        try {
            Thread.sleep(5000); // Simulate network latency
            System.out.println("Email sent to " + recipient);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
Scheduling Tasks (@Scheduled)
The @Scheduled annotation is used to trigger methods based on time. Spring Boot supports three main scheduling strategies — fixed delay, fixed rate, and cron expressions — plus an initialDelay attribute for deferring the first run.
| Scheduling Type | Attribute | Description |
| :--- | :--- | :--- |
| Fixed Delay | fixedDelay | Waits for N ms after the previous execution finishes. |
| Fixed Rate | fixedRate | Starts an execution every N ms, regardless of previous finish time. |
| Cron | cron | Uses cron expressions for complex timing; note that Spring's format has a leading seconds field (e.g., "0 0 0 * * *" for midnight daily). |
| Initial Delay | initialDelay | Number of ms to wait before the very first execution. |
Implementation Example
@Component
public class ReportScheduler {

    // Runs every 10 seconds
    @Scheduled(fixedRate = 10000)
    public void trackInventory() {
        System.out.println("Inventory checked at: " + System.currentTimeMillis());
    }

    // Runs at 1:00 AM every day
    @Scheduled(cron = "0 0 1 * * ?")
    public void cleanupLogs() {
        System.out.println("Executing daily log cleanup...");
    }
}
Configuring Thread Pools
Relying on the default executor settings is generally discouraged in production. Instead, configure the thread pool properties in your application.properties to match your hardware and workload.
| Property Key | Default | Description |
| :--- | :--- | :--- |
| spring.task.execution.pool.core-size | 8 | Minimum number of threads to keep alive. |
| spring.task.execution.pool.max-size | Integer.MAX | Maximum allowed number of threads. |
| spring.task.execution.pool.queue-capacity | Integer.MAX | Capacity of the queue before new threads are spawned. |
| spring.task.scheduling.pool.size | 1 | Number of threads available for scheduled tasks. |
Operational Edge Cases & Best Practices
- Self-Invocation: @Async and @Scheduled use Spring AOP (Aspect-Oriented Programming) proxies. If you call an @Async method from another method within the same class, the proxy is bypassed, and the method will run synchronously in the caller's thread.
- Return Types: @Async methods should return void or a CompletableFuture<T>. Returning a standard object will still result in the caller receiving null because the proxy cannot "wait" for the background result unless it is wrapped in a Future.
- Exception Handling: Since @Async methods run in separate threads, exceptions do not propagate back to the main caller. You must implement AsyncUncaughtExceptionHandler to capture and log these errors.
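A sketch of wiring a global handler for exceptions thrown from void @Async methods, assuming an illustrative AsyncConfig class. Implementing AsyncConfigurer lets you override only the exception handler while keeping the auto-configured executor:

```java
import java.lang.reflect.Method;

import org.springframework.aop.interceptor.AsyncUncaughtExceptionHandler;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.AsyncConfigurer;
import org.springframework.scheduling.annotation.EnableAsync;

@Configuration
@EnableAsync
public class AsyncConfig implements AsyncConfigurer {

    @Override
    public AsyncUncaughtExceptionHandler getAsyncUncaughtExceptionHandler() {
        // Called for exceptions escaping void @Async methods; otherwise they vanish
        return (Throwable ex, Method method, Object... params) ->
                System.err.println("Async error in " + method.getName() + ": " + ex.getMessage());
    }
}
```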
Note: Using Virtual Threads
In Spring Boot 3.2+ on Java 21, you can set spring.threads.virtual.enabled=true. This causes @Async and @Scheduled tasks to run on Virtual Threads instead of heavy platform threads, allowing you to handle millions of concurrent background tasks with minimal memory overhead.
Warning: Scheduled Overlap
By default, scheduled tasks are execution-serialized because the pool size is 1. If one @Scheduled task hangs, it will block all other scheduled tasks in the application. Always increase spring.task.scheduling.pool.size if you have multiple critical cron jobs.
The "Spring Web" Starter
The spring-boot-starter-web is the primary dependency used to build web applications, including RESTful services and traditional HTML-rendering applications. In the Spring Boot ecosystem, a "Starter" is a curated set of transitive dependencies that work together to provide a specific capability. When you include the Web starter, Spring Boot assumes you are building a servlet-based web application and automatically configures the necessary infrastructure to handle HTTP requests.
This starter is built upon Spring MVC (Model-View-Controller), the robust web framework that powers the majority of Java web applications globally. It simplifies the development process by providing an opinionated setup for web servers, JSON/XML processing, and validation, allowing you to go from a blank project to a running "Hello World" endpoint in seconds.
Core Transitive Dependencies
When you add spring-boot-starter-web to your Maven or Gradle configuration, it pulls in several critical libraries. Understanding these dependencies is vital for troubleshooting and customizing the web layer.
| Dependency | Purpose |
| :--- | :--- |
| spring-webmvc | Provides the core MVC framework (Controllers, View Resolvers, etc.). |
| spring-boot-starter-tomcat | The default embedded servlet container (Tomcat). |
| spring-boot-starter-json | Pulls in Jackson for automatic JSON serialization/deserialization. |
| spring-web | Contains core web abstractions used by both MVC and WebFlux. |

Note that spring-boot-starter-validation (Hibernate Validator for bean validation) is no longer pulled in transitively: since Spring Boot 2.3, you must declare it explicitly if you use @Valid.
The Role of the Embedded Server
One of the most significant advantages of the Spring Web starter is the Embedded Servlet Container. Traditionally, Java web apps were packaged as .war files and manually deployed into a standalone server like Tomcat or GlassFish. With the Web starter, the server is part of your application.
When the application starts, Spring Boot launches the server (Tomcat by default) within the same JVM process. This makes the application highly portable and "Cloud Native," as it can be executed as a standard JAR file on any environment with a JRE.
// Logic inside the starter that detects the environment
@Configuration
@ConditionalOnClass({ Servlet.class, Tomcat.class })
@ConditionalOnWebApplication(type = Type.SERVLET)
public class EmbeddedTomcatConfiguration {
// Spring Boot automatically configures Tomcat on port 8080
}
Auto-Configuration Mechanism
The Web starter triggers a cascade of auto-configurations. If Spring Boot detects spring-webmvc on the classpath, it automatically performs the following:
- DispatcherServlet Registration: It configures and registers the DispatcherServlet at the / path. This is the front controller that intercepts all incoming requests and routes them to your @Controller beans.
- Static Resource Handling: It sets up default locations for static content (such as JS, CSS, and images) at /static, /public, /resources, or /META-INF/resources.
- Message Converters: It registers HttpMessageConverters (like MappingJackson2HttpMessageConverter) to automatically transform Java objects into JSON or XML based on request headers.
- Error Handling: It provides a default /error mapping (the "Whitelabel Error Page") to handle exceptions gracefully.
Starter Implementation
To use the Web starter, you simply add it to your build file. Note that you do not need to specify a version if you are using the Spring Boot Parent POM.
Maven Implementation
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
Gradle Implementation
implementation 'org.springframework.boot:spring-boot-starter-web'
Customizing the Default Container
While Tomcat is the default, the Web starter is flexible. You can easily switch to Jetty or Undertow by excluding the Tomcat starter and adding the preferred one.
| Requirement | Action |
| --- | --- |
| Use Jetty | Exclude spring-boot-starter-tomcat, include spring-boot-starter-jetty. |
| Use Undertow | Exclude spring-boot-starter-tomcat, include spring-boot-starter-undertow. |
| Change Port | Set server.port=9000 in application.properties. |
| Enable SSL | Configure server.ssl.* properties (key-store, password, etc.). |
Note: Web vs. WebFlux
The spring-boot-starter-web is designed for blocking I/O based on the Servlet API. If you are building a high-concurrency, non-blocking application using Reactive Streams, you should use spring-boot-starter-webflux instead. You should generally not include both in the same project unless you have a very specific architectural reason.
Warning: Validation Starter Change
In older versions of Spring Boot (prior to 2.3), validation was included directly in the Web starter. In Spring Boot 3.x, spring-boot-starter-validation is a separate dependency. If your @Valid or @NotNull annotations are not working, ensure you have explicitly added the validation starter to your project.
Spring MVC (Servlet Stack)
The Spring Web MVC framework is a "Model-View-Controller" architecture built on top of the Servlet API. It is the traditional, blocking-I/O stack that has been the foundation of Spring applications for years. In this model, each request is handled by a single thread (the "thread-per-request" model). While newer reactive stacks exist, the Servlet stack remains the industry standard for most enterprise applications due to its massive ecosystem, ease of debugging, and compatibility with synchronous data access layers like JPA/Hibernate.
The Front Controller Pattern
At the heart of the Spring MVC Servlet stack is the DispatcherServlet. This is an actual implementation of the Front Controller design pattern. Instead of having multiple servlets for different URLs, the DispatcherServlet acts as a central entry point. It receives every incoming HTTP request and orchestrates the processing by delegating tasks to specialized components.
The lifecycle of a request in the Servlet stack follows a precise sequence:
- Handler Mapping: The DispatcherServlet consults the HandlerMapping to find which Controller method is mapped to the incoming URL.
- Handler Adapter: Once a controller is found, the HandlerAdapter invokes the method, handling parameter resolution and type conversion.
- Controller Execution: The business logic inside your @Controller is executed.
- View Resolution / Response Body: If returning a View (HTML), the ViewResolver finds the template. If returning data (JSON/REST), the HttpMessageConverter writes the data directly to the response stream.
Controller Implementation Styles
Spring MVC distinguishes between traditional Web Controllers (which return HTML views) and REST Controllers (which return data).
- REST Controllers
A @RestController is a specialized version of a controller where every method's return value is automatically written into the HTTP response body, bypassing view resolution.
package com.example.demo.controller;
import org.springframework.web.bind.annotation.*;
import java.util.Map;
@RestController
@RequestMapping("/api/v1/products")
public class ProductController {
/**
* @PathVariable extracts data from the URI path.
* @RequestParam extracts query parameters (e.g., ?status=active).
*/
@GetMapping("/{id}")
public Map<String, String> getProduct(
@PathVariable Long id,
@RequestParam(defaultValue = "standard") String mode) {
return Map.of(
"id", id.toString(),
"mode", mode,
"status", "Available"
);
}
}
- Traditional View Controllers
If you are building a server-side rendered application (using Thymeleaf or FreeMarker), you use the @Controller annotation. These methods typically return a String representing the template name.
@Controller
public class WebController {
@GetMapping("/welcome")
public String welcomePage(org.springframework.ui.Model model) {
model.addAttribute("message", "Hello from the Servlet Stack!");
return "welcome"; // Resolves to src/main/resources/templates/welcome.html
}
}
Core Annotations and Parameters
Spring MVC provides a rich set of annotations to handle various parts of the HTTP protocol.
| Annotation | Location | Description |
| --- | --- | --- |
| @RequestMapping | Class/Method | The general mapping for URLs and HTTP methods. |
| @GetMapping | Method | Shortcut for @RequestMapping(method = RequestMethod.GET). |
| @PostMapping | Method | Shortcut for @RequestMapping(method = RequestMethod.POST). |
| @RequestBody | Parameter | Maps the entire HTTP request body to a Java object (JSON to POJO). |
| @RequestHeader | Parameter | Accesses specific HTTP headers (e.g., User-Agent, Authorization). |
| @ResponseStatus | Method | Defines the HTTP status code to return (e.g., 201 Created). |
Data Binding and Type Conversion
When a request enters the Servlet stack, Spring MVC performs "Data Binding." It attempts to convert string-based parameters from the URL or form data into Java types. For example, if a method expects a UUID or a LocalDateTime, Spring uses its internal ConversionService to parse the string automatically.
If binding fails (e.g., a user sends "abc" for a numeric ID), Spring throws a MethodArgumentTypeMismatchException, which can be handled globally using an @ExceptionHandler.
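A sketch of such a global handler for binding failures; the advice class name and response wording are illustrative:

```java
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;
import org.springframework.web.method.annotation.MethodArgumentTypeMismatchException;

// Translates a binding failure (e.g. "abc" sent for a numeric id)
// into a clean 400 Bad Request instead of the default error output.
@RestControllerAdvice
public class BindingErrorAdvice {

    @ExceptionHandler(MethodArgumentTypeMismatchException.class)
    public ResponseEntity<String> handleTypeMismatch(MethodArgumentTypeMismatchException ex) {
        String message = String.format("Parameter '%s' has invalid value '%s'",
                ex.getName(), ex.getValue());
        return ResponseEntity.status(HttpStatus.BAD_REQUEST).body(message);
    }
}
```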
Interceptors and Filters
In the Servlet stack, you can hook into the request/response lifecycle at two levels:
- Filters: Part of the Servlet container. They run before the request even reaches the DispatcherServlet. Useful for security, logging, and CORS.
- Interceptors: Part of the Spring MVC context. They have access to the Handler (the controller) and are better suited for application-level logic like permission checks or theme changes.
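An interceptor can be registered through a WebMvcConfigurer; this sketch (class name, attribute key, and path pattern are assumptions) records a start time before each controller call:

```java
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.HandlerInterceptor;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class TimingConfig implements WebMvcConfigurer {

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(new HandlerInterceptor() {
            @Override
            public boolean preHandle(HttpServletRequest request,
                                     HttpServletResponse response,
                                     Object handler) {
                // Runs after the DispatcherServlet has selected a handler,
                // but before the controller method executes.
                request.setAttribute("startTime", System.currentTimeMillis());
                return true; // returning false aborts processing
            }
        }).addPathPatterns("/api/**");
    }
}
```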
Note: Multipart File Support
To handle file uploads in the Servlet stack, Spring Boot auto-configures a StandardServletMultipartResolver. You can limit file sizes in application.properties using spring.servlet.multipart.max-file-size=2MB.
Warning: Blocking vs. Non-blocking
In the Servlet stack, if your controller calls a slow external API, the request thread is "blocked" until the API responds. Under high load, this can lead to Thread Pool Exhaustion, where the server has no threads left to accept new connections even if CPU usage is low. For such use cases, consider using CompletableFuture return types or the WebFlux stack.
Spring WebFlux (Reactive Stack)
Spring WebFlux is the non-blocking, reactive-stack web framework introduced in Spring Framework 5.0 to handle massive concurrency with a small number of threads. Unlike the traditional Servlet stack (Spring MVC), which assigns one thread per request, WebFlux is built on Project Reactor and utilizes the Netty server by default. It is designed for event-loop based execution, making it ideal for I/O-intensive applications, streaming, and systems that require high scalability.
The Reactive Philosophy
At the heart of WebFlux is the concept of Non-Blocking Backpressure. In a reactive system, a consumer can signal to the producer how much data it can handle, preventing the system from being overwhelmed. WebFlux operates on a small, fixed number of threads (usually equal to the number of CPU cores). These threads never "sleep" while waiting for I/O (like a database query or an API call); instead, they register a callback and move on to the next task.
The framework relies on two primary reactive types from Project Reactor:
- Mono<T>: Represents a stream of 0 or 1 element.
- Flux<T>: Represents a stream of 0 to N elements (potentially infinite).
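The two types can be sketched with plain reactor-core (which the WebFlux starter pulls in); this standalone demo is illustrative and assumes only the reactor-core dependency:

```java
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class ReactiveTypesDemo {
    public static void main(String[] args) {
        // Mono: a stream of at most one element
        Mono<String> single = Mono.just("laptop");

        // Flux: a stream of 0..N elements; nothing executes until subscription
        Flux<Integer> numbers = Flux.range(1, 3).map(n -> n * 10);

        single.subscribe(item -> System.out.println("Mono emitted: " + item));
        numbers.subscribe(n -> System.out.println("Flux emitted: " + n));
    }
}
```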
Programming Models
WebFlux offers two distinct ways to define your web endpoints. You can choose the one that best fits your team's preference or the complexity of the requirements.
- Annotated Controllers
This model is identical to Spring MVC, using @RestController and @GetMapping. This makes it the easiest path for developers migrating from the Servlet stack.
@RestController
@RequestMapping("/api/reactive")
public class ReactiveProductController {
private final ProductRepository repository;
public ReactiveProductController(ProductRepository repository) {
this.repository = repository;
}
// Returns a single item asynchronously
@GetMapping("/{id}")
public Mono<Product> getProduct(@PathVariable String id) {
return repository.findById(id);
}
// Streams multiple items as they become available
@GetMapping(value = "/stream", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<Product> getAllProducts() {
return repository.findAll();
}
}
- Functional Routing
This is a more programmatic approach where routes are defined as beans. It offers better startup performance and is easier to test in isolation, as it separates routing logic from handling logic.
@Configuration
public class ProductRouter {
@Bean
public RouterFunction<ServerResponse> route(ProductHandler handler) {
return RouterFunctions
.route(GET("/functional/products"), handler::allProducts)
.andRoute(GET("/functional/products/{id}"), handler::getProduct);
}
}
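The router above delegates to a handler bean. A possible sketch of that handler follows; it assumes ProductRepository is a reactive repository (e.g. a ReactiveCrudRepository) and that a Product type exists, as in the earlier annotated-controller example:

```java
import org.springframework.stereotype.Component;
import org.springframework.web.reactive.function.server.ServerRequest;
import org.springframework.web.reactive.function.server.ServerResponse;
import reactor.core.publisher.Mono;

@Component
public class ProductHandler {

    private final ProductRepository repository;

    public ProductHandler(ProductRepository repository) {
        this.repository = repository;
    }

    public Mono<ServerResponse> allProducts(ServerRequest request) {
        // Stream all products as the repository emits them
        return ServerResponse.ok().body(repository.findAll(), Product.class);
    }

    public Mono<ServerResponse> getProduct(ServerRequest request) {
        return repository.findById(request.pathVariable("id"))
                .flatMap(product -> ServerResponse.ok().bodyValue(product))
                .switchIfEmpty(ServerResponse.notFound().build());
    }
}
```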
WebFlux vs. Spring MVC Comparison
The following table highlights the fundamental differences in infrastructure and behavior between the two stacks.
| Feature | Spring MVC (Servlet Stack) | Spring WebFlux (Reactive Stack) |
| --- | --- | --- |
| Concurrency Model | Thread-per-request (Blocking) | Event-loop (Non-blocking) |
| Dependencies | spring-boot-starter-web | spring-boot-starter-webflux |
| Default Server | Tomcat | Netty |
| Data Access | Imperative (JDBC, JPA) | Reactive (R2DBC, MongoDB Reactive) |
| Backpressure | Not supported at the API level | Fully supported via Project Reactor |
| Ideal For | Standard CRUD, CPU-bound tasks | High-concurrency, I/O-bound, Streaming |
The WebClient
Spring WebFlux includes WebClient, a modern, functional, and reactive alternative to RestTemplate (which is now in maintenance mode). It is the recommended tool for making HTTP requests in both reactive and servlet applications because it supports both synchronous and asynchronous operations.
public Mono<String> fetchExternalData() {
WebClient client = WebClient.create("https://api.example.com");
return client.get()
.uri("/data")
.retrieve()
.bodyToMono(String.class)
.timeout(Duration.ofSeconds(2)) // Built-in resilience
.onErrorReturn("Fallback Data");
}
Important Operational Constraints
The most common pitfall in WebFlux is blocking the Event Loop. Because there are so few threads, if you call a blocking method (like Thread.sleep() or a standard JDBC call) inside a WebFlux sequence, you will freeze the entire server.
| Constraint | Description |
| --- | --- |
| No JPA/JDBC | Standard JPA/Hibernate is blocking. You must use R2DBC for reactive SQL access. |
| Library Support | Every library in the call chain (Security, Logging, DB) must be non-blocking. |
| Debuggability | Stack traces in reactive code are notoriously difficult to read due to the asynchronous nature. |
Note: Using WebFlux in Servlet Apps
You can include spring-boot-starter-webflux in a standard Spring MVC project just to gain access to WebClient. In this scenario, the application still runs on Tomcat, but you can perform non-blocking outbound HTTP calls.
Warning: The "Block" Method
Never call .block() or .blockFirst() inside a WebFlux controller or service. Doing so defeats the purpose of the reactive stack and can lead to runtime exceptions or "deadlock" scenarios where the event loop is waiting on itself.
Embedded Servlet Containers (Tomcat, Jetty, Undertow)
Traditionally, Java web applications were packaged as Web Archive (.WAR) files and manually deployed into a standalone application server. Spring Boot revolutionized this model by "inverting" the relationship: the web server is embedded directly into the application artifact. This means the server is just another library on the classpath, and the application is a self-contained, executable unit.
By default, Spring Boot uses Apache Tomcat as the embedded container, but it provides seamless, pluggable support for Jetty and Undertow. This flexibility allows developers to choose a container based on specific performance requirements, memory footprints, or concurrency models.
Container Characteristics
Each of the three supported containers has distinct architectural advantages. While Tomcat is the most common and robust, Jetty is often preferred for its smaller footprint, and Undertow is known for its high-performance, non-blocking I/O capabilities.
| Container | Best Use Case | Key Features |
| --- | --- | --- |
| Tomcat | Standard Enterprise Apps | Industry standard, highly compatible, extensive documentation. |
| Jetty | Microservices & IoT | Low memory footprint; excellent for high-concurrency WebSocket applications. |
| Undertow | High-Performance APIs | Developed by JBoss; non-blocking architecture; extremely lightweight. |
Switching the Embedded Container
To switch containers, you must use the dependency exclusion mechanism in your build tool. You exclude the default spring-boot-starter-tomcat and include the starter for your preferred server.
Example: Switching to Undertow (Maven)
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
<exclusions>
<exclusion>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-tomcat</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-undertow</artifactId>
</dependency>
Example: Switching to Jetty (Gradle)
configurations {
implementation.exclude group: 'org.springframework.boot', module: 'spring-boot-starter-tomcat'
}
dependencies {
implementation 'org.springframework.boot:spring-boot-starter-web'
implementation 'org.springframework.boot:spring-boot-starter-jetty'
}
Common Configuration Properties
Regardless of which container you choose, Spring Boot provides a unified set of properties under the server.* namespace to manage the server's behavior. These properties are applied to the embedded instance during the startup phase.
| Property | Default | Description |
| --- | --- | --- |
| server.port | 8080 | The HTTP port. Use 0 to pick a free random port. |
| server.address | all interfaces | Network address to which the server should bind. |
| server.servlet.context-path | / | The base path for all web mappings. |
| server.max-http-request-header-size | 8KB | Maximum size of the HTTP request header (named server.max-http-header-size before Spring Boot 3). |
| server.tomcat.threads.max | 200 | Maximum number of worker threads (Tomcat-specific). |
Programmatic Customization
If the standard properties are insufficient, you can customize the container programmatically by implementing the WebServerFactoryCustomizer interface. This gives you access to the underlying server API (e.g., raw Tomcat Connector objects).
package com.example.config;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.boot.web.servlet.server.ConfigurableServletWebServerFactory;
import org.springframework.stereotype.Component;
@Component
public class CustomContainerConfig implements WebServerFactoryCustomizer<ConfigurableServletWebServerFactory> {
@Override
public void customize(ConfigurableServletWebServerFactory factory) {
// Programmatic override of properties
factory.setPort(9001);
factory.setContextPath("/v1");
// You can also add specific error pages
// factory.addErrorPages(new ErrorPage(HttpStatus.NOT_FOUND, "/404.html"));
}
}
SSL/TLS Configuration
Enabling HTTPS in an embedded container is straightforward. You place your keystore in the classpath and point to it in your configuration. This ensures that the embedded server starts as a secure endpoint immediately.
# SSL Configuration
server.port=8443
server.ssl.key-store=classpath:keystore.p12
server.ssl.key-store-password=password
server.ssl.key-store-type=PKCS12
server.ssl.key-alias=tomcat
Note: Using Port 0
Setting server.port=0 is particularly useful for integration tests or when running multiple instances of a microservice on the same host. Spring Boot will find an unallocated port and you can retrieve the actual port used by listening for the ServletWebServerInitializedEvent.
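A sketch of retrieving the runtime port this way; the component name and log output are illustrative:

```java
import org.springframework.boot.web.servlet.context.ServletWebServerInitializedEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

// With server.port=0 the real port is only known once the server has
// started; this listener captures it from the initialization event.
@Component
public class PortListener implements ApplicationListener<ServletWebServerInitializedEvent> {

    private int port;

    @Override
    public void onApplicationEvent(ServletWebServerInitializedEvent event) {
        this.port = event.getWebServer().getPort();
        System.out.println("Server started on port " + port);
    }

    public int getPort() {
        return port;
    }
}
```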
Warning: Performance Tuning
While the default settings are fine for development, production environments often require tuning the thread pools. For instance, in an I/O heavy application, the default 200 threads in Tomcat might be too high (leading to context switching overhead) or too low (leading to request queuing). Always benchmark your specific container under load.
Graceful Shutdown
Graceful shutdown is a critical operational feature that ensures an application terminates without interrupting active requests or leaving resources in an inconsistent state. When a shutdown signal (such as SIGTERM) is received, an application without graceful shutdown might kill active HTTP connections instantly, leading to 502/503 errors for clients and potential data corruption in long-running processes.
With graceful shutdown enabled, Spring Boot's embedded web server stops accepting new requests but provides a "grace period" for existing, active requests to complete their execution before the JVM finally exits.
Enabling Graceful Shutdown
As of Spring Boot 2.3 and later, graceful shutdown is supported across all four major embedded containers (Tomcat, Jetty, Undertow, and Netty). By default, this feature is disabled (set to immediate), meaning the server shuts down the moment it receives a kill signal.
To enable it, you must configure two specific properties in your application.properties or application.yml:
| Property | Value Options | Description |
| --- | --- | --- |
| server.shutdown | graceful, immediate | Switches from instant termination to waiting for active requests. |
| spring.lifecycle.timeout-per-shutdown-phase | Duration (e.g., 30s) | The maximum time the application waits for requests to finish. |
Example Configuration
# Enable graceful shutdown
server.shutdown=graceful
# Wait up to 30 seconds for active requests to finish
spring.lifecycle.timeout-per-shutdown-phase=30s
The Shutdown Workflow
When a graceful shutdown is initiated, the application follows a strict sequence of events to ensure a clean exit.
- Request Blocking: The web server stops accepting new incoming connections at the network layer.
- Grace Period: The server allows the thread pool to continue processing the requests that were already "in-flight" at the moment of the signal.
- Timeout Check: If all requests finish before the timeout-per-shutdown-phase elapses, the server closes immediately.
- Forced Termination: If the timeout is reached and some requests are still active, the server forcefully kills the remaining threads and closes the application context.
Implementing Shutdown Logic in Code
Beyond the web server, you may have custom logic—such as closing file handles, flushing buffers, or unsubscribing from message brokers—that must run during the shutdown phase. You can achieve this using the @PreDestroy annotation or by implementing the DisposableBean interface.
package com.example.demo.component;
import jakarta.annotation.PreDestroy;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;
@Component
public class DatabaseCleanupTask {
private static final Logger logger = LoggerFactory.getLogger(DatabaseCleanupTask.class);
/**
* This method is called by the Spring Container during the
* application context closure, before the JVM exits.
*/
@PreDestroy
public void onShutdown() {
logger.info("Closing persistent file handles and flushing caches...");
// Logic to ensure data integrity before exit
}
}
Container-Specific Behaviors
While Spring Boot provides a unified configuration, the underlying containers handle the "rejection" of new requests slightly differently:
| Container | Rejection Behavior |
| --- | --- |
| Tomcat | Stops accepting new requests at the network layer. |
| Jetty | Stops accepting new requests at the network layer. |
| Undertow | Continues to accept new connections but responds immediately with 503 Service Unavailable. |
| Netty (WebFlux) | Stops accepting new connections; existing ones continue until the timeout. |
Health Checks and Shutdown
In cloud environments like Kubernetes, graceful shutdown is often paired with Liveness and Readiness Probes. When a pod is marked for termination, Kubernetes sends a SIGTERM. If graceful shutdown is active, the application will stop being "Ready" but will stay "Alive" until the grace period ends.
If you are using the Spring Boot Actuator, the /actuator/health endpoint can be configured to reflect the shutdown state, helping external load balancers steer traffic away during the final seconds of the application's life.
Note: SIGKILL vs. SIGTERM
Graceful shutdown only works with signals like SIGTERM (standard stop) or SIGINT (Ctrl+C). It cannot intercept a SIGKILL (kill -9), which terminates the process at the OS level immediately, bypassing all JVM shutdown hooks and Spring lifecycle logic.
Warning: Task Executor Configuration
If you use custom @Async executors, ensure they are also configured to wait for tasks to complete. By default, Spring's ThreadPoolTaskExecutor does not wait. You must set setWaitForTasksToCompleteOnShutdown(true) and setAwaitTerminationSeconds(...) on the executor bean specifically.
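A sketch of an executor bean with those two settings applied; pool sizes and the bean name are illustrative:

```java
import java.util.concurrent.Executor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
@EnableAsync
public class ExecutorConfig {

    @Bean(name = "taskExecutor")
    public Executor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);
        executor.setMaxPoolSize(8);
        // Without these two settings, in-flight @Async tasks are abandoned
        // the moment the application context starts closing.
        executor.setWaitForTasksToCompleteOnShutdown(true);
        executor.setAwaitTerminationSeconds(30);
        executor.setThreadNamePrefix("async-");
        executor.initialize();
        return executor;
    }
}
```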
Error Handling
Spring Boot provides a comprehensive, automated error-handling infrastructure that eliminates the need to manually map error codes in a web.xml file. By default, Spring Boot registers a Global Error Controller that handles all errors in a sensible way based on the client making the request. If the client is a browser, it renders a "Whitelabel Error Page" (HTML); if the client is a REST client, it returns a structured JSON response containing the status code and error details.
The framework's philosophy is to provide a central mechanism to catch exceptions and translate them into meaningful HTTP responses, ensuring that internal stack traces are not leaked to the end user.
Default Error Attributes
When an error occurs, Spring Boot populates a set of attributes that are used to build the final response. This information is gathered by the DefaultErrorAttributes class.
| Attribute | Description |
| --- | --- |
| timestamp | The exact time the error occurred. |
| status | The HTTP status code (e.g., 404, 500). |
| error | The HTTP status reason phrase (e.g., "Not Found"). |
| exception | The class name of the root exception (disabled by default in 2.x+). |
| message | The detailed exception message. |
| path | The URL path where the exception was raised. |
Customizing the JSON Response
For RESTful APIs, you often need a specific JSON structure that conforms to your frontend requirements. The most robust way to handle this is using the @ControllerAdvice and @ExceptionHandler annotations. This allows you to centralize error logic for the entire application.
package com.example.demo.exception;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;
import java.time.LocalDateTime;
import java.util.LinkedHashMap;
import java.util.Map;
@ControllerAdvice
public class GlobalExceptionHandler {
/**
* Specifically handles business-logic exceptions (e.g., ProductNotFound).
* Maps the exception to a 404 Not Found status.
*/
@ExceptionHandler(ProductNotFoundException.class)
public ResponseEntity<Object> handleProductNotFound(ProductNotFoundException ex) {
Map<String, Object> body = new LinkedHashMap<>();
body.put("timestamp", LocalDateTime.now());
body.put("message", ex.getMessage());
body.put("code", "ERR_PROD_001");
return new ResponseEntity<>(body, HttpStatus.NOT_FOUND);
}
/**
* Fallback handler for any unexpected server errors.
*/
@ExceptionHandler(Exception.class)
public ResponseEntity<Object> handleGeneralError(Exception ex) {
return new ResponseEntity<>("An internal error occurred", HttpStatus.INTERNAL_SERVER_ERROR);
}
}
Customizing the HTML Error Page
If you are using a template engine like Thymeleaf, you can override the default "Whitelabel" page by placing HTML files in a specific directory structure under src/main/resources/templates/error/.
Spring Boot follows a naming convention to resolve these files:
- Exact Match: 404.html (handles only 404 errors)
- Series Match: 5xx.html (handles all 500-level server errors)
| Location | Result |
| --- | --- |
| templates/error/404.html | Custom page for "Not Found" errors. |
| templates/error/403.html | Custom page for "Forbidden" errors. |
| templates/error/5xx.html | Fallback page for all server-side exceptions. |
| static/error/404.html | Static HTML fallback if no template engine is used. |
Configuration Properties
You can fine-tune the default error behavior using application.properties. As a security best practice, Spring Boot 2.3+ hides the exception message and stack trace by default.
# Include the exception message in the JSON output
server.error.include-message=always
# Include the Java stack trace (only recommended for dev)
server.error.include-stacktrace=on_param
# Change the default path for the error controller
server.error.path=/oops
Custom ErrorController (Advanced)
If you need to completely replace Spring Boot's error handling logic (for example, to add custom logging or complex branching based on the error type), you can implement the ErrorController interface. This gives you total control over the /error endpoint.
@RestController
public class MyCustomErrorController implements ErrorController {
@RequestMapping("/error")
public String handleError(HttpServletRequest request) {
Object status = request.getAttribute(RequestDispatcher.ERROR_STATUS_CODE);
if (status != null) {
Integer statusCode = Integer.valueOf(status.toString());
if (statusCode == HttpStatus.NOT_FOUND.value()) {
return "Custom 404 Message: We couldn't find that page.";
}
}
return "Generic error occurred";
}
}
Note: Integration with Validation
When using @Valid or @Validated on request bodies, Spring Boot throws a MethodArgumentNotValidException if validation fails. It is highly recommended to create an @ExceptionHandler specifically for this exception to return a list of field-specific validation errors to the client.
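A possible sketch of such a handler, returning a field-to-message map (the advice class name and fallback message are assumptions):

```java
import java.util.Map;
import java.util.stream.Collectors;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.validation.FieldError;
import org.springframework.web.bind.MethodArgumentNotValidException;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;

@RestControllerAdvice
public class ValidationErrorAdvice {

    @ExceptionHandler(MethodArgumentNotValidException.class)
    public ResponseEntity<Map<String, String>> handleValidation(MethodArgumentNotValidException ex) {
        // Collect one message per failing field, e.g. {"name": "must not be blank"}
        Map<String, String> errors = ex.getBindingResult().getFieldErrors().stream()
                .collect(Collectors.toMap(
                        FieldError::getField,
                        fe -> fe.getDefaultMessage() == null ? "invalid" : fe.getDefaultMessage(),
                        (first, second) -> first)); // keep the first message per field
        return ResponseEntity.status(HttpStatus.BAD_REQUEST).body(errors);
    }
}
```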
Warning: Information Leakage
Never set server.error.include-stacktrace=always in a production environment. Exposing stack traces can reveal sensitive information about your database schema, library versions, and internal package structures, which can be exploited by attackers.
Calling REST Services (RestClient & WebClient)
In a microservices architecture, applications rarely exist in isolation; they must frequently communicate with external APIs or other internal services. Spring Boot provides two primary, high-level abstractions for making HTTP requests: RestClient and WebClient.
Historically, RestTemplate was the standard choice, but it has been relegated to maintenance mode in favor of these newer, more functional alternatives. RestClient (introduced in Spring Framework 6.1) offers a synchronous, fluent API designed for the Servlet stack, while WebClient remains the cornerstone for non-blocking, reactive communication.
RestClient (The Modern Synchronous Choice)
The RestClient is a synchronous HTTP client that provides a functional-style API. It serves as a modern successor to RestTemplate, offering a more readable, chainable syntax while still running on the traditional blocking Servlet stack. It uses the same infrastructure as RestTemplate but significantly reduces the boilerplate code required to handle headers, status codes, and body conversions.
Basic Implementation
To use RestClient, you typically define it as a Bean using a builder to set base URLs or default headers.
@Configuration
public class ClientConfig {
@Bean
public RestClient productServiceClient() {
return RestClient.builder()
.baseUrl("https://api.inventory.com/v1")
.defaultHeader("Accept", "application/json")
.build();
}
}
Performing GET and POST Requests
The API uses a fluent "prepare-and-execute" pattern.
@Service
public class InventoryService {
private final RestClient restClient;
public InventoryService(RestClient restClient) {
this.restClient = restClient;
}
public ProductDetails fetchProduct(String id) {
return restClient.get()
.uri("/products/{id}", id)
.retrieve()
.onStatus(HttpStatusCode::is4xxClientError, (request, response) -> {
throw new ProductNotFoundException("Product not found on remote server");
})
.body(ProductDetails.class);
}
}
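A POST counterpart to the fetchProduct method above, using the same injected restClient; the /products path and the CreateProductRequest DTO are hypothetical:

```java
// Sends a JSON body and deserializes the created resource from the response.
public ProductDetails createProduct(CreateProductRequest request) {
    return restClient.post()
            .uri("/products")
            .contentType(MediaType.APPLICATION_JSON)
            .body(request)          // serialized to JSON by the message converters
            .retrieve()
            .body(ProductDetails.class);
}
```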
WebClient (The Reactive Choice)
WebClient is part of the Spring WebFlux library. It is non-blocking and supports both synchronous and asynchronous operations. While it is mandatory for the WebFlux stack, it is also highly recommended for the Servlet stack when you need to perform multiple concurrent API calls, as it allows you to fire requests in parallel without blocking multiple threads.
Asynchronous Execution with Mono and Flux
WebClient returns Project Reactor types, allowing for complex orchestration like retries, timeouts, and fallbacks.
public Flux<User> fetchAllUsers() {
return WebClient.create("https://api.users.com")
.get()
.uri("/active")
.retrieve()
.bodyToFlux(User.class)
.timeout(Duration.ofSeconds(5))
.retry(3); // Automatically retry on failure
}
Client Comparison
The following table distinguishes between the three primary clients available in the Spring ecosystem.
| Feature | RestTemplate | RestClient | WebClient |
| --- | --- | --- | --- |
| API Style | Imperative / Template | Functional / Fluent | Functional / Reactive |
| I/O Model | Blocking | Blocking | Non-blocking |
| Stack | Servlet | Servlet | WebFlux / Servlet |
| Spring Version | Legacy (Maintenance) | 6.1+ (Spring Boot 3.2+) | 5.0+ |
| Concurrency | One thread per call | One thread per call | Event-loop / Highly concurrent |
HTTP Interfaces (Declarative Clients)
Spring Boot also supports Declarative HTTP Interfaces. Instead of writing implementation logic, you define a Java interface with annotations, and Spring generates a proxy at runtime to handle the HTTP calls. This is similar to the popular Feign library.
// 1. Define the interface
public interface UserClient {
@GetExchange("/users/{id}")
User getUser(@PathVariable String id);
}
// 2. Create the proxy in a @Configuration class
@Bean
public UserClient userClient(RestClient.Builder builder) {
RestClient restClient = builder.baseUrl("https://api.example.com").build();
RestClientAdapter adapter = RestClientAdapter.create(restClient);
HttpServiceProxyFactory factory = HttpServiceProxyFactory.builderFor(adapter).build();
return factory.createClient(UserClient.class);
}
Error Handling and Resilience
When calling external services, you must account for network instability. Both RestClient and WebClient allow you to inspect status codes and body content before the data is fully deserialized.
| Strategy | RestClient Method | WebClient Method |
| --- | --- | --- |
| Filter Status | .onStatus(...) | .onStatus(...) |
| Default Fallback | Try-catch or Result wrapper | .onErrorReturn(...) |
| Timeouts | RequestFactory config | .timeout(Duration) |
Note: The RequestFactory
Both RestClient and RestTemplate rely on a ClientHttpRequestFactory to perform the actual I/O. By default, they use the standard JDK HTTP client. For production, it is recommended to use a pooled implementation such as Apache HttpClient for better connection pooling and performance tuning.
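As a hedged sketch of that recommendation, timeouts can be applied through a request factory when building the RestClient (timeout values here are illustrative):

```java
@Bean
public RestClient tunedRestClient() {
    // SimpleClientHttpRequestFactory uses the JDK's HttpURLConnection;
    // swap in HttpComponentsClientHttpRequestFactory for Apache HttpClient pooling
    SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
    factory.setConnectTimeout(2_000); // ms to establish the TCP connection
    factory.setReadTimeout(5_000);    // ms to wait for response data
    return RestClient.builder()
            .requestFactory(factory)
            .build();
}
```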
Warning: Blocking in WebClient
If you are using WebClient in a standard Spring MVC (Servlet) application, you will often need to call .block() to wait for the result. While this is acceptable in the Servlet stack, it is a critical failure in a WebFlux application, where it will cause the event loop to hang and freeze the entire server.
SQL Databases (DataSource Configuration)
Data access in Spring Boot is built upon the foundational DataSource interface, which serves as a factory for physical connections to a real database. In a standard Java application, configuring a connection pool, managing driver class paths, and handling transaction managers requires significant boilerplate. Spring Boot eliminates this complexity by providing auto-configuration for the DataSource based on the libraries found on your classpath.
Whether you are using an in-memory database for testing or a high-performance production cluster, Spring Boot manages the lifecycle of the connection pool, ensuring that connections are efficiently reused and properly closed during application shutdown.
The Auto-Configuration Logic
When you include a database-related starter (like spring-boot-starter-data-jpa or spring-boot-starter-jdbc), Spring Boot’s auto-configuration attempts to create a DataSource bean by following a specific search order:
- In-Memory Database: If H2, HSQL, or Derby is on the classpath and you haven't configured any connection URLs, Spring Boot starts an embedded database.
- Connection Pool Detection: Spring Boot searches for a connection pool implementation. It prioritizes HikariCP (the default), followed by Tomcat JDBC, and finally Commons DBCP2.
- Property Binding: It binds values from your application.properties to the detected connection pool.
External Database Configuration
To connect to a persistent SQL database like MySQL, PostgreSQL, or Oracle, you must provide the connection coordinates in your configuration files. You do not need to specify the driver-class-name manually in most cases, as Spring Boot deduces it from the JDBC URL.
Example: PostgreSQL Configuration
# Basic Connection Settings
spring.datasource.url=jdbc:postgresql://localhost:5432/inventory_db
spring.datasource.username=db_user
spring.datasource.password=secure_password
# Optional: Explicit Driver (usually inferred)
spring.datasource.driver-class-name=org.postgresql.Driver
Connection Pool Customization (HikariCP)
HikariCP is the default connection pool in Spring Boot because of its extreme performance and reliability. You can fine-tune its behavior using the spring.datasource.hikari.* prefix. Proper tuning of these values is essential for preventing "Connection Timeout" errors under heavy load.
| Property | Default | Description |
| --- | --- | --- |
| maximum-pool-size | 10 | Max number of actual connections to the database. |
| minimum-idle | 10 | Minimum number of idle connections HikariCP tries to maintain. |
| idle-timeout | 600000 (10m) | How long a connection can sit idle before being retired. |
| connection-timeout | 30000 (30s) | Max time a client will wait for a connection from the pool. |
| max-lifetime | 1800000 (30m) | Max tenure of a connection in the pool. |
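These defaults can be overridden in application.properties; the values below are purely illustrative, not recommendations:

```properties
# Illustrative HikariCP tuning
spring.datasource.hikari.maximum-pool-size=20
spring.datasource.hikari.minimum-idle=5
spring.datasource.hikari.connection-timeout=20000
spring.datasource.hikari.max-lifetime=1200000
```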
Working with Multiple DataSources
In some enterprise scenarios, you may need to connect to two different databases within the same application. Since auto-configuration only supports one primary DataSource, you must define the beans manually and use the @Primary annotation to tell Spring which one to use for standard operations.
@Configuration
public class MultiDbConfig {
@Bean
@Primary
@ConfigurationProperties("spring.datasource.primary")
public DataSourceProperties primaryProperties() {
return new DataSourceProperties();
}
@Bean
@Primary
public DataSource primaryDataSource() {
return primaryProperties().initializeDataSourceBuilder().build();
}
@Bean
@ConfigurationProperties("spring.datasource.secondary")
public DataSourceProperties secondaryProperties() {
return new DataSourceProperties();
}
@Bean
public DataSource secondaryDataSource() {
return secondaryProperties().initializeDataSourceBuilder().build();
}
}
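With two DataSource beans in place, downstream components must disambiguate between them. A common follow-up pattern (bean names match the configuration above) is to expose one JdbcTemplate per pool:

```java
@Bean
public JdbcTemplate primaryJdbcTemplate(@Qualifier("primaryDataSource") DataSource ds) {
    return new JdbcTemplate(ds);
}

@Bean
public JdbcTemplate secondaryJdbcTemplate(@Qualifier("secondaryDataSource") DataSource ds) {
    return new JdbcTemplate(ds);
}
```

Repositories then inject the appropriate template with the same @Qualifier.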
Initializing the Schema
Spring Boot can also handle database initialization. By default, it looks for schema.sql (to define tables) and data.sql (to populate rows) in the root of your classpath.
| Property | Value Options | Description |
| --- | --- | --- |
| spring.sql.init.mode | always, never, embedded | Determines when to run initialization scripts. |
| spring.sql.init.platform | mysql, h2, etc. | Allows for platform-specific scripts (e.g., schema-mysql.sql). |
Database Migration Tools
While schema.sql works for simple projects, production systems should use versioned migration tools like Flyway or Liquibase. Spring Boot provides native integration for both. When these libraries are present, Spring Boot will automatically run migrations found in db/migration (Flyway) or db/changelog (Liquibase) during startup.
<dependency>
<groupId>org.flywaydb</groupId>
<artifactId>flyway-core</artifactId>
</dependency>
<dependency>
<groupId>org.flywaydb</groupId>
<artifactId>flyway-database-postgresql</artifactId>
</dependency>
Note: The H2 Console
When using an in-memory H2 database, you can enable a web-based console to inspect your tables at runtime by setting spring.h2.console.enabled=true. Access it at http://localhost:8080/h2-console using the JDBC URL provided in the console logs.
Warning: Pool Exhaustion
A common mistake is setting the maximum-pool-size too high. Each connection consumes memory on the database server and the application server. Usually, a smaller pool of frequently reused connections performs better than a large pool that causes the database to spend significant CPU cycles on context switching and locking.
Using JdbcTemplate
While high-level abstractions like Spring Data JPA are popular, many developers prefer the direct control and performance of JdbcTemplate. Part of the core Spring Framework, JdbcTemplate simplifies the use of JDBC (Java Database Connectivity) by handling the repetitive, "boilerplate" tasks: opening/closing connections, managing statements, and iterating through ResultSet objects.
It is an ideal choice for complex SQL queries, bulk updates, or scenarios where the overhead of an ORM (Object-Relational Mapper) is unnecessary.
How JdbcTemplate Works
JdbcTemplate follows the Template Method Pattern. You provide the SQL and the logic to map rows to objects, and Spring handles the low-level resource management. This significantly reduces the risk of common errors, such as forgetting to close a connection or failing to handle a SQLException.
Core Operations
The JdbcTemplate provides various methods for different types of database interactions. These are grouped into Queries (fetching data) and Updates (modifying data).
| Method | Purpose | Typical Use Case |
| --- | --- | --- |
| queryForObject() | Returns a single row/result. | Fetching a count or a single entity by ID. |
| query() | Returns a List of objects. | Fetching multiple rows with a RowMapper. |
| update() | Performs INSERT, UPDATE, or DELETE. | Modifying records; returns the "rows affected" count. |
| execute() | Performs any arbitrary SQL. | Creating tables or calling complex stored procedures. |
| batchUpdate() | Performs multiple updates in one go. | High-performance bulk data insertion. |
Implementation Example: The RowMapper
To convert a database row into a Java object, you use the RowMapper interface. While you can write custom mappers for complex logic, Spring provides BeanPropertyRowMapper for simple cases where column names match Java field names.
package com.example.demo.repository;
import com.example.demo.model.Product;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;
import org.springframework.stereotype.Repository;
import java.util.List;
@Repository
public class ProductJdbcRepository {
private final JdbcTemplate jdbcTemplate;
// JdbcTemplate is auto-configured and injected
public ProductJdbcRepository(JdbcTemplate jdbcTemplate) {
this.jdbcTemplate = jdbcTemplate;
}
// Custom RowMapper logic
private final RowMapper<Product> productMapper = (rs, rowNum) -> {
Product p = new Product();
p.setId(rs.getLong("id"));
p.setName(rs.getString("name"));
p.setPrice(rs.getBigDecimal("price"));
return p;
};
public List<Product> findAll() {
String sql = "SELECT id, name, price FROM products";
return jdbcTemplate.query(sql, productMapper);
}
public int save(Product product) {
return jdbcTemplate.update(
"INSERT INTO products (name, price) VALUES (?, ?)",
product.getName(), product.getPrice()
);
}
}
NamedParameterJdbcTemplate
One major drawback of standard JdbcTemplate is the use of ? placeholders, which can become confusing in large queries. NamedParameterJdbcTemplate allows you to use named parameters (e.g., :id), making the SQL much more readable and less error-prone.
public Product findById(Long id) {
String sql = "SELECT * FROM products WHERE id = :id";
Map<String, Object> params = Map.of("id", id);
return namedParameterJdbcTemplate.queryForObject(sql, params, productMapper);
}
Transaction Management
Even when using JdbcTemplate, you can leverage Spring's declarative transaction management. By annotating your service methods with @Transactional, Spring ensures that all JdbcTemplate calls within that method share the same database connection and are either committed together or rolled back if an exception occurs.
@Service
public class ProductService {
private final ProductJdbcRepository repository;
@Transactional
public void updateInventory(Product p, int stockAdjustment) {
repository.save(p);
// If this next line fails, the 'save' above is rolled back automatically
repository.updateStock(p.getId(), stockAdjustment);
}
}
Note: Batch Processing
For high-performance inserts, use batchUpdate. It reduces network round-trips by sending multiple SQL commands to the database in a single packet. This is significantly faster than calling update() inside a loop.
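A minimal sketch of that advice, reusing the Product model and table from the repository example above (column names assumed to match):

```java
public int[] saveAll(List<Product> products) {
    // One batched round-trip instead of one statement per row
    List<Object[]> rows = products.stream()
            .map(p -> new Object[] { p.getName(), p.getPrice() })
            .toList();
    return jdbcTemplate.batchUpdate(
            "INSERT INTO products (name, price) VALUES (?, ?)", rows);
}
```

The returned int[] holds the affected-row count for each statement in the batch.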
Warning: SQL Injection
Never concatenate strings to build your SQL queries (e.g., "WHERE id = " + id). Always use placeholders (? or :name). JdbcTemplate uses PreparedStatement under the hood, which properly escapes input values and protects your application from SQL injection attacks.
JPA & Hibernate (Spring Data JPA)
Spring Data JPA is a powerful abstraction layer that sits on top of the Java Persistence API (JPA) and Hibernate (the default implementation). Its primary goal is to significantly reduce the amount of boilerplate code required to implement data access layers. Instead of writing complex DAO (Data Access Object) implementations or manually managing the EntityManager, you simply define interfaces, and Spring Data JPA provides the implementation at runtime.
The Architecture: How It Fits Together
To understand Spring Data JPA, you must understand the relationship between the three core layers:
- JPA: The standard specification for Object-Relational Mapping (ORM) in Java.
- Hibernate: The actual engine that translates Java objects into SQL queries and vice versa.
- Spring Data JPA: An abstraction that adds Repository support and query generation on top of JPA.
Defining Entities
An Entity is a lightweight persistence domain object. In JPA, every entity is a Java class mapped to a database table. You use annotations to define the mapping between Java fields and table columns.
package com.example.demo.model;
import jakarta.persistence.*;
import java.math.BigDecimal;
@Entity
@Table(name = "products") // Maps this class to the 'products' table
public class Product {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
@Column(nullable = false, length = 100)
private String name;
private BigDecimal price;
// Default constructor required by JPA
public Product() {}
// Getters and Setters...
}
Spring Data Repositories
The most significant feature of Spring Data JPA is the Repository abstraction. By extending JpaRepository, your interface automatically gains a suite of standard CRUD (Create, Read, Update, Delete) operations and paging/sorting capabilities.
| Interface | Capabilities |
| --- | --- |
| CrudRepository | Basic CRUD (save, findById, delete). |
| PagingAndSortingRepository | Adds methods for pagination and sorting data. |
| JpaRepository | Combines both above, plus JPA-specific methods like flushing the persistence context. |
Example: Product Repository
package com.example.demo.repository;
import com.example.demo.model.Product;
import org.springframework.data.jpa.repository.JpaRepository;
import java.util.List;
public interface ProductRepository extends JpaRepository<Product, Long> {
// Query Method: Spring generates the SQL automatically!
List<Product> findByNameContainingIgnoreCase(String name);
// Using @Query for custom SQL or JPQL
@org.springframework.data.jpa.repository.Query("SELECT p FROM Product p WHERE p.price > 100")
List<Product> findPremiumProducts();
}
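Because JpaRepository inherits paging and sorting support, callers can request slices of data without writing any query code. A sketch, assuming an injected ProductRepository (page size and sort column are illustrative):

```java
public Page<Product> listProducts(int pageNumber) {
    // Page numbers are zero-based; sorting targets the mapped 'name' property
    return productRepository.findAll(
            PageRequest.of(pageNumber, 20, Sort.by("name").ascending()));
}
```

The returned Page exposes getContent(), getTotalElements(), and getTotalPages() for building paginated responses.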
Query Methods vs. JPQL
Spring Data JPA allows you to define queries in three distinct ways, depending on complexity.
| Method | Strategy | Example |
| --- | --- | --- |
| Derived Queries | Parsed from the method name. | findByEmailAddress(String email) |
| JPQL | Object-oriented query language (targets Entities). | @Query("SELECT u FROM User u WHERE u.active = true") |
| Native Queries | Direct SQL (targets Tables). | @Query(value = "SELECT * FROM users", nativeQuery = true) |
Entity Lifecycle States
Hibernate manages entities through four distinct states. Understanding these is crucial for preventing "Detached Entity" exceptions.
- Transient: New objects not yet associated with the database.
- Managed (Persistent): Objects currently tracked by the EntityManager. Changes to these objects are automatically synced to the DB upon commit.
- Detached: Objects that were managed but the session is now closed.
- Removed: Objects scheduled for deletion from the database.
Configuration Properties
You can control Hibernate's behavior, such as SQL logging and table generation, via application.properties.
# Show the SQL generated by Hibernate in the console
spring.jpa.show-sql=true
spring.jpa.properties.hibernate.format_sql=true
# ddl-auto: 'update' (safe for dev), 'validate' (safe for prod), 'create-drop' (testing)
spring.jpa.hibernate.ddl-auto=update
# Specify the database dialect
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect
Note: The N+1 Query Problem
This is a common performance pitfall where JPA executes one query to fetch a list of parents and then N additional queries to fetch the children of each parent. To solve this, use Entity Graphs or JOIN FETCH in your JPQL queries to load all data in a single SQL operation.
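A hedged sketch of the JOIN FETCH fix, assuming a hypothetical Order entity with a lazily loaded items collection:

```java
public interface OrderRepository extends JpaRepository<Order, Long> {

    // Loads orders and their items in a single SQL join,
    // avoiding one extra query per order
    @Query("SELECT DISTINCT o FROM Order o JOIN FETCH o.items")
    List<Order> findAllWithItems();
}
```

DISTINCT prevents duplicate parent rows that the join would otherwise produce.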
Warning: ddl-auto=update in Production
Never use spring.jpa.hibernate.ddl-auto=update in a production environment. While convenient for development, it can lead to unpredictable schema changes or data loss. Use a dedicated migration tool like Flyway or Liquibase for production schema management.
Database Migrations (Flyway & Liquibase)
In a professional development environment, managing database schemas manually or relying on Hibernate's ddl-auto is dangerous and unscalable. Database Migration tools allow you to treat your database schema as version-controlled code. They ensure that every environment—from a developer's local machine to production—is running the exact same version of the database schema.
Spring Boot provides first-class, automated integration for the two industry leaders: Flyway and Liquibase.
Why Use Migration Tools?
Migration tools maintain a specific table (e.g., schema_version or DATABASECHANGELOG) within your database to track which scripts have already been executed. This prevents the same script from running twice and provides a clear audit trail of who changed what and when.
| Feature | Flyway | Liquibase |
| --- | --- | --- |
| Primary Format | Plain SQL | XML, YAML, JSON, or SQL |
| Philosophy | Simplicity and SQL-first approach. | Power, flexibility, and platform independence. |
| Rollbacks | Available in Teams/Enterprise editions. | Native support for rolling back changes. |
| Learning Curve | Very low (if you know SQL). | Medium (due to various formats/features). |
Flyway
Flyway is favored for its simplicity. It uses versioned SQL scripts that follow a strict naming convention: V<Version>__<Description>.sql.
Setup and Convention
Add the dependency flyway-core to your project. By default, Flyway looks for scripts in src/main/resources/db/migration.
-- V1__create_products.sql (an illustrative name following the convention above)
CREATE TABLE products (
id BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
name VARCHAR(100) NOT NULL,
price DECIMAL(19, 2)
);
Liquibase
Liquibase is highly flexible. It uses a "Changelog" file that points to various "ChangeSets." Because it can use XML or YAML, Liquibase can often translate your changes to different database dialects (e.g., H2 for tests and PostgreSQL for production) automatically.
Setup and Master Changelog
Add liquibase-core. By default, it looks for src/main/resources/db/changelog/db.changelog-master.yaml.
Example ChangeSet (YAML)
databaseChangeLog:
- changeSet:
id: 1
author: gemini
changes:
- createTable:
tableName: products
columns:
- column:
name: id
type: bigint
autoIncrement: true
constraints:
primaryKey: true
- column:
name: name
type: varchar(100)
Spring Boot Configuration Properties
Both tools are enabled automatically if their library is on the classpath. You can control their behavior via application.properties.
| Tool | Property | Purpose |
| --- | --- | --- |
| Flyway | spring.flyway.enabled | Turns Flyway on or off. |
| Flyway | spring.flyway.locations | Changes where SQL scripts are stored. |
| Liquibase | spring.liquibase.enabled | Turns Liquibase on or off. |
| Liquibase | spring.liquibase.change-log | Points to the master changelog file. |
Best Practices for Migrations
- Immutability: Once a migration script is committed and deployed, never edit it. If you made a mistake, create a new migration script to fix it.
- Naming: Use a consistent versioning strategy (e.g., timestamp-based versions such as V202602151600__description.sql) to avoid version conflicts in multi-developer teams.
- Baseline: If adding a migration tool to an existing project, use the "baseline" feature to tell the tool to ignore existing tables and start tracking from the current point forward.
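For the baseline scenario in particular, Flyway exposes dedicated properties; the values below are illustrative:

```properties
# Treat the existing schema as the baseline and only apply newer scripts
spring.flyway.baseline-on-migrate=true
spring.flyway.baseline-version=1
```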
Note: Running Migrations
When Spring Boot starts, the migration tool runs before the application context is fully refreshed. This ensures that the database schema is ready before your JPA repositories or services try to access it.
Warning: Parallel Deployments
If you run multiple instances of your application (e.g., in Kubernetes), both instances might try to run the migration at the same time. Both Flyway and Liquibase handle this using a database-level lock to ensure only one instance performs the update while the other waits.
NoSQL Technologies (Redis, MongoDB, Cassandra)
While relational databases are the backbone of many systems, NoSQL databases are essential for handling high-velocity data, flexible schemas, and extreme horizontal scaling. Spring Boot provides dedicated "Starters" for the most popular NoSQL technologies, using the same Spring Data repository pattern seen in JPA. This consistency allows developers to switch between storage engines with minimal changes to business logic.
Redis (In-Memory Data Store)
Redis is primarily used as a high-performance cache, session store, or message broker. Spring Boot uses Lettuce as the default driver and provides RedisTemplate for low-level operations and RedisRepositories for high-level object mapping.
- Key Features: Sub-millisecond latency, support for complex data structures (Hashes, Lists, Sets).
- Common Use Case: Storing user sessions or caching expensive database query results.
Implementation Example
// Enabling Redis Repositories
@RedisHash("UserSession")
public class UserSession {
@Id
private String id;
private String username;
@TimeToLive
private Long expiration; // Automatically deleted by Redis after X seconds
}
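A matching repository is a one-liner, since Spring Data Redis generates the implementation at runtime (the interface name is illustrative):

```java
public interface UserSessionRepository extends CrudRepository<UserSession, String> {
    // save(), findById(), and deleteById() are provided automatically;
    // entries disappear on their own once the @TimeToLive elapses
}
```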
MongoDB (Document Store)
MongoDB is a JSON-like document database. It is the most popular NoSQL choice for Spring Boot developers due to its flexible schema and powerful query language. Spring Data MongoDB offers MongoTemplate and MongoRepository.
- Key Features: Schema-less (BSON format), horizontal scaling via sharding, and rich indexing.
- Common Use Case: Content management systems, product catalogs, and real-time analytics.
Implementation Example
@Document(collection = "products") // placed on the entity class, not the repository
public class Product { /* id, category, price fields omitted */ }

public interface ProductRepository extends MongoRepository<Product, String> {
// Derived query support
List<Product> findByCategory(String category);
// JSON-based query support
@Query("{ 'price' : { $gt: ?0, $lt: ?1 } }")
List<Product> findByPriceRange(double min, double max);
}
Cassandra (Wide-Column Store)
Apache Cassandra is designed for massive amounts of data across many commodity servers. It offers high availability with no single point of failure. Spring Data Cassandra maps POJOs to CQL (Cassandra Query Language) tables.
- Key Features: Linear scalability, tunable consistency, and high write throughput.
- Common Use Case: IoT sensor data, logging, and time-series data.
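Cassandra mapping follows the same annotation pattern as the other stores. A minimal sketch, with an illustrative table and fields (annotations from org.springframework.data.cassandra.core.mapping):

```java
@Table("sensor_readings")
public class SensorReading {

    @PrimaryKey
    private UUID id;            // partition key used to distribute rows across nodes

    private String sensorType;
    private double value;
    private Instant recordedAt;

    // getters and setters omitted
}
```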
NoSQL Technology Comparison
The following table helps determine which NoSQL "Starter" fits your specific architectural requirements.
| Feature | Redis | MongoDB | Cassandra |
| --- | --- | --- | --- |
| Starter | spring-boot-starter-data-redis | spring-boot-starter-data-mongodb | spring-boot-starter-data-cassandra |
| Data Model | Key-Value / Structures | Document (BSON/JSON) | Wide-Column |
| Storage | Primarily Memory (RAM) | Disk (SSD/HDD) | Disk (LSM-Tree) |
| Querying | Simple Keys / Basic Search | Rich Queries / Aggregation | CQL (SQL-like but restrictive) |
| Best For | Caching & Real-time | General-purpose NoSQL | Massive Write-Heavy Data |
Reactive Support in NoSQL
One major advantage of NoSQL starters in Spring Boot is their native support for Reactive Programming. Unlike JPA/JDBC (which are inherently blocking), Redis, MongoDB, and Cassandra all have reactive drivers that integrate perfectly with Spring WebFlux.
| Reactive Starter | Repository Base |
| --- | --- |
| spring-boot-starter-data-redis-reactive | ReactiveRedisTemplate |
| spring-boot-starter-data-mongodb-reactive | ReactiveMongoRepository |
| spring-boot-starter-data-cassandra-reactive | ReactiveCassandraRepository |
Note: Embedded NoSQL for Testing
For integration testing, it is recommended to use Testcontainers rather than "embedded" NoSQL libraries (like de.flapdoodle for MongoDB). Testcontainers launches a real Docker instance of the database, ensuring your tests run against the exact same version used in production.
Warning: Data Consistency
Most NoSQL databases follow the BASE consistency model (Basically Available, Soft state, Eventual consistency) rather than the ACID model used by SQL. Ensure your application logic can handle scenarios where data might not be immediately visible across all nodes after a write.
Caching
Caching is a technique used to improve application performance and reduce system load by storing the results of expensive operations (like complex database queries or external API calls) in temporary memory. Spring Boot provides a powerful caching abstraction that allows you to add caching to your application using simple annotations, without being tied to a specific cache provider.
The core philosophy is that the first time a method is called, the result is computed and stored. Subsequent calls with the same parameters return the cached result instead of executing the method logic again.
How Spring Cache Works
The caching abstraction is based on Spring AOP (Aspect-Oriented Programming). When you annotate a method, Spring creates a proxy that intercepts calls to that method. The proxy checks if the result is already in the cache before allowing the actual method to run.
Enabling Caching
To activate the caching infrastructure, you must add the @EnableCaching annotation to one of your configuration classes.
@SpringBootApplication
@EnableCaching
public class DemoApplication {
public static void main(String[] args) {
SpringApplication.run(DemoApplication.class, args);
}
}
Core Annotations
Spring provides several annotations to manage the lifecycle of cached data.
| Annotation | Purpose | Description |
| --- | --- | --- |
| @Cacheable | Triggers cache population. | If the value is in the cache, it's returned; otherwise, the method runs and the result is stored. |
| @CacheEvict | Triggers cache removal. | Removes one or more entries from the cache (e.g., when data is deleted). |
| @CachePut | Updates the cache. | Always executes the method and updates the cache with the new result. |
| @Caching | Groups multiple annotations. | Used when you need multiple eviction or update rules on a single method. |
| @CacheConfig | Class-level configuration. | Allows sharing cache names and other settings across all methods in a class. |
Implementation Example
In this example, the findById method is only executed if the product is not already in the "products" cache. The updateProduct method refreshes the cache, and deleteProduct clears it.
@Service
@CacheConfig(cacheNames = "products")
public class ProductService {
@Cacheable(key = "#id")
public Product findById(Long id) {
simulateSlowService(); // This only runs on cache miss
return repository.findById(id);
}
@CachePut(key = "#product.id")
public Product updateProduct(Product product) {
return repository.save(product);
}
@CacheEvict(key = "#id")
public void deleteProduct(Long id) {
repository.deleteById(id);
}
private void simulateSlowService() {
try { Thread.sleep(3000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
}
}
Supported Cache Providers
If no specific cache library is found, Spring Boot defaults to ConcurrentHashMap (simple in-memory storage). For production, you should use a dedicated provider.
| Provider | Type | Best Use Case |
| --- | --- | --- |
| Redis | Distributed | Scaling across multiple application instances. |
| Caffeine | In-Memory | High-performance, local Java caching (replaces Guava). |
| Ehcache | Hybrid | Feature-rich, supports disk overflow and clustering. |
| Hazelcast | Distributed | Peer-to-peer data grid with distributed processing. |
Configuring Redis as a Cache
# application.properties
spring.cache.type=redis
spring.cache.redis.time-to-live=600s
spring.cache.redis.cache-null-values=false
Conditional Caching
Sometimes you don't want to cache everything. You can use the condition or unless attributes to filter what gets stored.
- condition: The cache is checked/updated only if the expression is true (checked before method execution).
- unless: The result is not cached if the expression is true (checked after method execution).
// Only cache products with a price greater than 100
@Cacheable(value = "expensive_products", condition = "#price > 100")
public Product getPremiumProduct(double price) { ... }
// Do not cache null results
@Cacheable(value = "products", unless = "#result == null")
public Product findOptionalProduct(Long id) { ... }
Note: The Cache Key
By default, Spring uses the method parameters to generate a cache key. If you have multiple parameters, it combines them into a SimpleKey. You can customize the key using SpEL (Spring Expression Language) as shown in the examples above (key = "#id").
Warning: Proxy Limitations
Similar to @Async and @Transactional, @Cacheable relies on Spring Proxies. If you call a cached method from another method within the same class (self-invocation), the cache will be bypassed because the proxy cannot intercept the internal call.
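One common workaround, sketched under the assumption that the cached lookup can live in its own bean, is to route the call through a separate component so it passes through the caching proxy:

```java
@Service
public class ProductLookup {

    @Cacheable(value = "products", key = "#id")
    public Product findById(Long id) {
        // expensive lookup; runs only on cache miss
        return loadFromDatabase(id); // hypothetical helper
    }
}

@Service
public class ReportService {

    private final ProductLookup lookup; // injected as a proxy, so caching applies

    public ReportService(ProductLookup lookup) {
        this.lookup = lookup;
    }

    public Report buildReport(Long id) {
        Product p = lookup.findById(id); // crosses the proxy boundary
        return new Report(p);
    }
}
```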
Default Security Configuration
Spring Security is a powerful, highly customizable authentication and access-control framework. In a Spring Boot environment, the spring-boot-starter-security triggers a comprehensive "secure by default" posture. This means that the moment you add the dependency, your application becomes protected from common vulnerabilities and requires authentication for every endpoint.
The philosophy of Spring Boot Security is to start with a "locked door" and require the developer to explicitly open specific paths, rather than starting open and requiring the developer to remember to lock them.
The Default "Secure" State
When the security starter is detected on the classpath, Spring Boot automatically configures several protective layers:
| Feature | Default Behavior |
| --- | --- |
| Authentication | All HTTP endpoints require a logged-in user. |
| User Account | A single default user is created (username: user). |
| Password | A random password is generated and printed to the console at startup. |
| Form Login | A default HTML login page is served at /login. |
| Logout | A default logout handler is available at /logout. |
| Exploit Protection | CSRF protection is enabled; security headers (HSTS, XSS) are added to responses. |
The Security Filter Chain
Spring Security works through a chain of Servlet Filters. Every incoming request must pass through these filters before reaching your @RestController or @Controller.
The most important filters in the default chain include:
- CsrfFilter: Protects against Cross-Site Request Forgery.
- UsernamePasswordAuthenticationFilter: Intercepts login attempts.
- DefaultLoginPageGeneratingFilter: Renders the built-in login form.
- AuthorizationFilter: Checks if the current user has permission to access the requested URI.
Overriding Default User Credentials
While the generated password is fine for local testing, you typically want to define your own credentials in application.properties for simple internal tools or development environments.
# Defining a single in-memory user
spring.security.user.name=admin
spring.security.user.password=secret123
spring.security.user.roles=ADMIN
Customizing Security via SecurityFilterChain
In modern Spring Security (Spring Boot 3+), configuration is done via a Component-based approach rather than extending a base class. You define a bean of type SecurityFilterChain to specify which paths should be public and which should be protected.
package com.example.demo.config;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;
@Configuration
public class ProjectSecurityConfig {
@Bean
public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
http
.authorizeHttpRequests(auth -> auth
.requestMatchers("/public/**", "/h2-console/**").permitAll() // Allow these paths
.anyRequest().authenticated() // All others require login
)
.formLogin(form -> form.defaultSuccessUrl("/home")) // Enable default login
.httpBasic(basic -> {}); // Enable Basic Auth (for API testing)
return http.build();
}
}
Common Security Properties
| Property | Purpose |
| --- | --- |
| server.ssl.enabled | Enables HTTPS (essential for security). |
| spring.security.filter.order | Changes the order of the security filter chain. |
| spring.security.oauth2.* | Configuration for OAuth2/OpenID Connect. |
Note: The Console Password
If you don't define a custom user, look for a log message at startup that looks like this:
Using generated security password: 7f4a-bc32-11ef...
This password changes every time the application restarts unless you define a static one in your properties.
Warning: Disabling CSRF
Developers often disable CSRF protection (http.csrf().disable()) to make POST requests easier during development with tools like Postman. While acceptable for stateless REST APIs (using JWTs), it is a critical security vulnerability for stateful web applications using cookies/sessions.
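For reference, here is how that trade-off looks in the Spring Security 6 lambda DSL, where the legacy http.csrf().disable() call becomes a Customizer. A sketch of a stateless API chain (class name hypothetical), where disabling CSRF is defensible only because no session cookie exists to forge:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.http.SessionCreationPolicy;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class StatelessApiSecurityConfig {

    @Bean
    public SecurityFilterChain apiChain(HttpSecurity http) throws Exception {
        http
            // Safe to disable only for a token-based, sessionless API
            .csrf(csrf -> csrf.disable())
            .sessionManagement(session ->
                session.sessionCreationPolicy(SessionCreationPolicy.STATELESS))
            .authorizeHttpRequests(auth -> auth.anyRequest().authenticated());
        return http.build();
    }
}
```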
Method-Level Security
While URL-level security (configured in the SecurityFilterChain) protects entire paths, Method-Level Security allows you to secure specific business logic inside your services or controllers. This provides a more granular "Defense in Depth" strategy, ensuring that even if a user bypasses a URL filter, they cannot execute sensitive internal code without the proper authority.
Method security is powered by Spring AOP (Aspect-Oriented Programming). When you call a secured method, Spring intercepts the call, checks the current user's SecurityContext, and either allows the execution or throws an AccessDeniedException.
Enabling Method Security
Method security is not enabled by default. You must add the @EnableMethodSecurity annotation to a configuration class. In Spring Boot 3, this annotation enables Pre/Post annotations by default and is the modern replacement for the older @EnableGlobalMethodSecurity.
@Configuration
@EnableMethodSecurity // Activates @PreAuthorize, @PostAuthorize, etc.
public class MethodSecurityConfig {
}
Core Annotations
Spring Security provides several annotations to define access rules directly on method signatures using SpEL (Spring Expression Language).
| Annotation | Timing | Description |
| --- | --- | --- |
| @PreAuthorize | Before execution | Checks the expression before the method starts. Most common choice. |
| @PostAuthorize | After execution | Checks the expression after execution; can use the returnObject in the check. |
| @PreFilter | Before execution | Filters a collection argument based on custom rules before the method runs. |
| @PostFilter | After execution | Filters the returned collection based on the user's permissions. |
Implementation Examples
- Role-Based Access with @PreAuthorize
This is the most frequent use case, ensuring only users with specific roles can trigger an action.
@Service
public class PayrollService {
// Only users with the 'ADMIN' role can call this
@PreAuthorize("hasRole('ADMIN')")
public void processSalaries() {
// Logic...
}
// Supports complex logic: User must be an ADMIN OR the specific Manager
@PreAuthorize("hasRole('ADMIN') or #managerName == authentication.name")
public void approveBonus(String managerName) {
// Logic...
}
}
- Data-Driven Access with @PostAuthorize
Used when you need to see the data returned from a database before deciding if the user is allowed to see it (e.g., an owner-only record).
@Service
public class DocumentService {
// Method executes, but access is denied if the owner doesn't match the logged-in user
@PostAuthorize("returnObject.owner == authentication.name")
public Document getSensitiveDocument(Long id) {
return repository.findById(id).orElse(null);
}
}
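The filtering annotations from the table above follow the same pattern. A hedged sketch of @PostFilter, assuming a hypothetical Report type with an owner field and a matching Spring Data repository; note that filtering happens in memory, so it only suits small result sets:

```java
import java.util.List;
import org.springframework.security.access.prepost.PostFilter;
import org.springframework.stereotype.Service;

@Service
public class ReportService {

    private final ReportRepository repository; // hypothetical repository

    public ReportService(ReportRepository repository) {
        this.repository = repository;
    }

    // 'filterObject' refers to each element of the returned collection;
    // entries whose owner is not the current user are silently removed.
    @PostFilter("filterObject.owner == authentication.name")
    public List<Report> findAllReports() {
        return repository.findAll();
    }
}
```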
Commonly Used SpEL Expressions
Method security is powerful because of its integration with the SecurityContext.
| Expression | Description |
| --- | --- |
| hasRole('ROLE') | Returns true if the user has the specified role (the ROLE_ prefix is applied automatically). |
| hasAuthority('PERM') | Returns true if the user has the exact permission/authority string. |
| isAuthenticated() | Returns true if the user is not anonymous. |
| authentication.name | Refers to the username of the currently logged-in user. |
| #variableName | Refers to a method argument by name. |
Method Security vs. URL Security
| Feature | URL-Level (Filter) | Method-Level (AOP) |
| --- | --- | --- |
| Implementation | SecurityFilterChain | Annotations on beans |
| Granularity | Coarse (paths) | Fine (logic/arguments) |
| Primary Use | Web request entry | Service layer / internal logic |
| Processing | Faster (early in chain) | Slower (requires proxying) |
Note: Using 'hasRole' vs 'hasAuthority'
In Spring Security, a Role is just an authority that starts with the prefix ROLE_. When you use hasRole('USER'), Spring actually checks for ROLE_USER. When you use hasAuthority('READ'), it looks for the exact string READ.
Warning: Self-Invocation
Just like @Transactional and @Cacheable, method security will fail silently if you call a secured method from another method within the same class. The call must come from an external bean for the Spring Proxy to intercept the request and enforce the security check.
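The proxy mechanics behind this warning can be demonstrated without Spring at all. A plain-JDK sketch (not Spring's actual code) using java.lang.reflect.Proxy: the interceptor records every call it sees, and the internal call from publicReport() to securedReport() never reaches it, because it goes straight to 'this' rather than through the proxy.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class SelfInvocationDemo {

    public interface ReportService {
        List<String> publicReport();
        List<String> securedReport();
    }

    public static class ReportServiceImpl implements ReportService {
        public List<String> publicReport() {
            // Internal call: bypasses the proxy, so the interceptor is skipped
            return securedReport();
        }
        public List<String> securedReport() {
            return List.of("sensitive data");
        }
    }

    public static final List<String> intercepted = new ArrayList<>();

    public static ReportService proxy(ReportService target) {
        InvocationHandler handler = (p, method, args) -> {
            intercepted.add(method.getName()); // stand-in for a security check
            return method.invoke(target, args);
        };
        return (ReportService) Proxy.newProxyInstance(
                ReportService.class.getClassLoader(),
                new Class<?>[] {ReportService.class}, handler);
    }

    public static void main(String[] args) {
        ReportService service = proxy(new ReportServiceImpl());
        service.publicReport(); // external call is intercepted; internal one is not
        System.out.println(intercepted); // prints [publicReport]
    }
}
```

This is exactly why Spring's AOP-based checks (@Transactional, @Cacheable, method security) require the call to arrive through the proxy, i.e. from another bean.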
OAuth2 Client & Resource Server
In modern distributed systems, security is rarely handled by a single monolithic application. Instead, applications use OAuth2 and OpenID Connect (OIDC) to delegate authentication to specialized Identity Providers (IdPs) like Google, Okta, or GitHub. Spring Boot simplifies this by providing dedicated starters that handle the complex "handshakes" required to exchange codes for tokens.
Depending on your application's role, you will configure it as either an OAuth2 Client or an OAuth2 Resource Server.
- OAuth2 Client
An OAuth2 Client is an application that initiates the login process. It redirects users to an Identity Provider and, upon success, receives an Access Token (to call APIs) and an ID Token (to get user profile info).
- Primary Use Case: Web applications where users "Log in with Google."
- Dependency:
spring-boot-starter-oauth2-client
Configuration Example (Google Login)
Spring Boot has "common providers" (Google, GitHub, Facebook) pre-configured. You only need to provide your credentials in application.properties.
spring.security.oauth2.client.registration.google.client-id=your-id
spring.security.oauth2.client.registration.google.client-secret=your-secret
spring.security.oauth2.client.registration.google.scope=profile,email
Customizing the Filter Chain
@Bean
public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
http
.authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
.oauth2Login(withDefaults()); // Enables the OAuth2 redirect flow
return http.build();
}
- OAuth2 Resource Server
A Resource Server is an API that protects data. It does not have a login UI. Instead, it expects every request to contain a Bearer Token (usually a JWT) in the Authorization header. It validates this token with the Identity Provider before allowing access.
- Primary Use Case: Backend REST APIs or Microservices.
- Dependency:
spring-boot-starter-oauth2-resource-server
Configuration Example (JWT Validation)
You must tell the Resource Server where to find the public keys (JWKS) to verify token signatures.
spring.security.oauth2.resourceserver.jwt.issuer-uri=https://dev-12345.okta.com/oauth2/default
Customizing the Filter Chain
@Bean
public SecurityFilterChain apiFilterChain(HttpSecurity http) throws Exception {
http
.authorizeHttpRequests(auth -> auth
.requestMatchers("/api/admin/**").hasAuthority("SCOPE_admin")
.anyRequest().authenticated())
.oauth2ResourceServer(oauth2 -> oauth2.jwt(withDefaults())); // Enables JWT validation
return http.build();
}
Key Concepts & Terms
| Term | Description |
| --- | --- |
| Access Token | A string (usually a JWT) that proves the bearer has permission to access a resource. |
| Issuer URI | The base URL of the Identity Provider (e.g., Auth0, Keycloak). |
| Scopes | Granular permissions requested by the client (e.g., read:products, openid). |
| JWKS Endpoint | A URL where the Resource Server fetches public keys to verify JWT signatures. |
| Introspection | A mechanism where the Resource Server asks the IdP whether an opaque token is still valid. |
Comparison: Client vs. Resource Server
| Feature | OAuth2 Client | OAuth2 Resource Server |
| --- | --- | --- |
| Responsibility | Directs the user to the login page. | Validates tokens on incoming requests. |
| Output | Obtains tokens for the user. | Returns data to the user. |
| UI Needed? | Yes (redirects, login buttons). | No (headless API). |
| Token Handling | Stores tokens in the session/cookie. | Statelessly verifies tokens in headers. |
Using the Token in Code
Once authenticated, you can access the user's details or the raw token directly in your controllers using specialized annotations.
@RestController
public class UserController {
// For OAuth2 Client (fetching user profile)
@GetMapping("/user-info")
public Map<String, Object> getUser(@AuthenticationPrincipal OAuth2User principal) {
return principal.getAttributes();
}
// For Resource Server (inspecting JWT claims)
@GetMapping("/api/claims")
public Map<String, Object> getClaims(@AuthenticationPrincipal Jwt jwt) {
return jwt.getClaims();
}
}
Note: Using Keycloak or Okta
When using providers other than the pre-configured common ones (Google, GitHub, Facebook, Okta), you must supply the provider details yourself. If the IdP publishes an OpenID Connect discovery document, setting issuer-uri is usually enough for Spring to resolve the authorization-uri, token-uri, and jwk-set-uri automatically; otherwise, you must list each endpoint explicitly in your properties.
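A hedged sketch of such a registration for a local Keycloak realm (all names, ports, and URLs hypothetical). With OIDC discovery, option 1 alone is sufficient; without it, each endpoint must be spelled out as in option 2:

```properties
spring.security.oauth2.client.registration.keycloak.client-id=demo-client
spring.security.oauth2.client.registration.keycloak.client-secret=demo-secret
spring.security.oauth2.client.registration.keycloak.scope=openid,profile

# Option 1: the IdP supports OIDC discovery
spring.security.oauth2.client.provider.keycloak.issuer-uri=http://localhost:8081/realms/demo

# Option 2: no discovery document, so each endpoint is explicit
spring.security.oauth2.client.provider.keycloak.authorization-uri=http://localhost:8081/realms/demo/protocol/openid-connect/auth
spring.security.oauth2.client.provider.keycloak.token-uri=http://localhost:8081/realms/demo/protocol/openid-connect/token
spring.security.oauth2.client.provider.keycloak.jwk-set-uri=http://localhost:8081/realms/demo/protocol/openid-connect/certs
```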
Warning: Token Expiration
Access tokens are short-lived. If your application is a Client, you should also configure Refresh Tokens to obtain new Access Tokens without forcing the user to log in again. Resource Servers should always verify the exp (expiration) claim in the JWT.
SAML 2.0 Integration
SAML 2.0 (Security Assertion Markup Language) is an XML-based standard for exchanging authentication and authorization data between an Identity Provider (IdP) (e.g., Okta, AD FS, PingFederate) and a Service Provider (SP) (your Spring Boot application). Unlike OAuth2, which is often used for authorization and API access, SAML is specifically designed for Enterprise Single Sign-On (SSO).
Spring Boot does not ship a dedicated SAML starter; instead, you add Spring Security's spring-security-saml2-service-provider library, which handles the complex XML signing, encryption, and metadata exchange required by the SAML protocol.
Core SAML Concepts
To integrate SAML, you must understand the "handshake" between the two parties. In this scenario, your Spring Boot app acts as the Service Provider.
| Term | Role in Spring Boot |
| --- | --- |
| Service Provider (SP) | Your Spring Boot application. |
| Identity Provider (IdP) | The external system that authenticates users (e.g., Azure AD). |
| Metadata | XML files exchanged by both parties containing keys and endpoints. |
| Assertion | The XML "package" from the IdP confirming the user's identity. |
| ACS URL | The Assertion Consumer Service, the endpoint where your app receives SAML responses. |
Implementation Steps
- Dependency Configuration
You must add the SAML 2.0 Service Provider dependency to your build file.
<dependency>
<groupId>org.springframework.security</groupId>
<artifactId>spring-security-saml2-service-provider</artifactId>
</dependency>
- Application Properties
Spring Boot uses properties to configure the connection. Most IdPs provide a Metadata URL or an XML file. You must also provide your SP's private key for signing requests.
# IdP Details (usually from your IT department)
spring.security.saml2.relyingparty.registration.okta.assertingparty.metadata-uri=https://dev-123.okta.com/app/exk.../sso/saml/metadata
# SP Details (Your App)
spring.security.saml2.relyingparty.registration.okta.signing.credentials[0].private-key-location=classpath:credentials/sp-private.key
spring.security.saml2.relyingparty.registration.okta.signing.credentials[0].certificate-location=classpath:credentials/sp-certificate.crt
- Security Filter Chain
You enable SAML by adding saml2Login() to your SecurityFilterChain.
@Bean
public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
http
.authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
.saml2Login(withDefaults()) // Triggers SAML authentication
.saml2Logout(withDefaults()); // Enables SAML Single Logout (SLO)
return http.build();
}
SAML vs. OAuth2/OIDC
While both facilitate SSO, they are fundamentally different in their structure and typical use cases.
| Feature | SAML 2.0 | OAuth2 / OpenID Connect |
| --- | --- | --- |
| Format | XML (heavyweight) | JSON / JWT (lightweight) |
| Security | XML digital signatures/encryption | JWS / JWE (JSON Web Signature/Encryption) |
| Primary Use | Enterprise intranets / corporate SSO | Web & mobile apps / API access |
| Transport | Browser redirects (front-channel) | Front-channel & back-channel |
| History | Older, proven enterprise standard | Modern, developer-friendly standard |
Important Endpoints
Spring Boot automatically generates the necessary endpoints for the SAML handshake based on the registration ID you provide in the properties (e.g., okta).
- Metadata:
http://localhost:8080/saml2/service-provider-metadata/{registrationId}
- Share this URL (or the XML it returns) with your IdP administrator.
- ACS (Login):
http://localhost:8080/login/saml2/sso/{registrationId}
- This is where the IdP sends the user after successful login.
Note: Relying Party vs. Asserting Party
In Spring Security terminology, your application is the Relying Party (RP) because it "relies" on the authentication. The IdP is the Asserting Party (AP) because it "asserts" the identity of the user.
Warning: Certificate Expiration
SAML relies heavily on certificates for signing and encryption. If the certificate used by your Service Provider or provided by the Identity Provider expires, the SSO flow will break immediately. Modern enterprise apps often implement "Certificate Rotation" to prevent downtime.
SSL / TLS Configuration
In modern web development, SSL/TLS (Secure Sockets Layer / Transport Layer Security) is mandatory for protecting data in transit. It encrypts the communication between the client (browser) and the server, preventing man-in-the-middle attacks and ensuring data integrity. Spring Boot makes it straightforward to enable HTTPS by configuring the embedded web server (Tomcat, Jetty, or Undertow) via simple properties.
- Keystore vs. Truststore
To enable SSL, you must understand the two types of storage files used to manage digital certificates:
| Component | Purpose | Use Case |
| --- | --- | --- |
| Keystore | Stores the server's private key and public certificate. | Used by the server to identify itself to clients. |
| Truststore | Stores public certificates from trusted Certificate Authorities (CAs). | Used by the server to verify the identity of clients (mTLS). |
- Enabling HTTPS
To enable HTTPS, you first need a certificate. For production, you obtain one from a CA (like Let's Encrypt). For local development, you can generate a self-signed certificate using the JDK keytool:
keytool -genkeypair -alias springboot -keyalg RSA -keysize 2048 -storetype PKCS12 -keystore keystore.p12 -validity 365
Once you have your keystore.p12 file, place it in src/main/resources and add the following to application.properties:
# Change port to the standard HTTPS port
server.port=8443
# SSL Configuration
server.ssl.key-store=classpath:keystore.p12
server.ssl.key-store-password=yourpassword
server.ssl.key-store-type=PKCS12
server.ssl.key-alias=springboot
- HTTP to HTTPS Redirection
Since an embedded container can only listen on one port by default, you cannot simply "enable" both HTTP (8080) and HTTPS (8443) via properties alone. To redirect traffic, you must define a second Connector programmatically.
@Configuration
public class HttpRedirectConfig {
@Bean
public ServletWebServerFactory servletContainer() {
TomcatServletWebServerFactory tomcat = new TomcatServletWebServerFactory() {
@Override
protected void postProcessContext(Context context) {
SecurityConstraint securityConstraint = new SecurityConstraint();
securityConstraint.setUserConstraint("CONFIDENTIAL");
SecurityCollection collection = new SecurityCollection();
collection.addPattern("/*");
securityConstraint.addCollection(collection);
context.addConstraint(securityConstraint);
}
};
tomcat.addAdditionalTomcatConnectors(redirectConnector());
return tomcat;
}
private Connector redirectConnector() {
Connector connector = new Connector(TomcatServletWebServerFactory.DEFAULT_PROTOCOL);
connector.setScheme("http");
connector.setPort(8080);
connector.setSecure(false);
connector.setRedirectPort(8443);
return connector;
}
}
- Mutual TLS (mTLS) / Two-Way SSL
In high-security environments, the server also needs to verify the client's identity. This is known as mTLS. You achieve this by adding a truststore to your configuration and setting the client-auth requirement.
# Enable Client Authentication
server.ssl.client-auth=need
# Truststore configuration to verify incoming client certificates
server.ssl.trust-store=classpath:truststore.p12
server.ssl.trust-store-password=trustpassword
server.ssl.trust-store-type=PKCS12
- Security Best Practices
- HTTP Strict Transport Security (HSTS): Spring Security enables HSTS by default, telling browsers to only interact with the server using HTTPS for a specified period.
- Cipher Suites: You can restrict the server to only use strong, modern encryption algorithms:
server.ssl.ciphers=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS Versions: Disable older, vulnerable versions like TLS 1.0 or 1.1:
server.ssl.enabled-protocols=TLSv1.2,TLSv1.3
Note: Using PEM Files
As of Spring Boot 2.7+, you can use PEM-encoded certificates (commonly used by Let's Encrypt) directly in your properties without converting them to a PKCS12 keystore first:
server.ssl.certificate=classpath:server.crt
server.ssl.certificate-private-key=classpath:server.key
Warning: Self-Signed Certificates
When using a self-signed certificate for local development, your browser (and RestTemplate/WebClient) will throw security warnings. You must either import the certificate into your OS trust store or configure your HTTP clients to ignore SSL validation for testing purposes only.
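If you are on Spring Boot 3.1 or later, SSL bundles offer a cleaner alternative to disabling validation: package the self-signed certificate into a truststore and reference it by name (bundle and file names hypothetical):

```properties
# Truststore containing the self-signed dev certificate
spring.ssl.bundle.jks.dev-server.truststore.location=classpath:dev-truststore.p12
spring.ssl.bundle.jks.dev-server.truststore.password=changeit
spring.ssl.bundle.jks.dev-server.truststore.type=PKCS12
```

Clients built via RestTemplateBuilder can then opt in with setSslBundle(sslBundles.getBundle("dev-server")) instead of turning off SSL validation globally.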
JMS (Java Message Service)
JMS (Java Message Service) is a standard Java API for message-oriented middleware. It allows components of a distributed application to communicate asynchronously by sending and receiving messages through a broker. Spring Boot provides excellent support for JMS through the spring-boot-starter-activemq or spring-boot-starter-artemis starters, abstracting away the low-level complexity of connection management and session handling.
JMS is primarily used for decoupling systems: the producer doesn't need to know who the consumer is, or even if the consumer is online at the exact moment the message is sent.
Core Messaging Models
JMS supports two fundamental messaging patterns, which dictate how messages are distributed to consumers.
| Model | Pattern | Description |
| --- | --- | --- |
| Queue | Point-to-Point (P2P) | A message is delivered to exactly one consumer. Once processed, it is removed. |
| Topic | Publish-Subscribe (Pub/Sub) | A message is delivered to all active subscribers. Ideal for broadcasting updates. |
JmsTemplate: Sending Messages
Similar to JdbcTemplate, the JmsTemplate is the central class for synchronous JMS operations. Spring Boot auto-configures a JmsTemplate bean as long as a JMS provider (like ActiveMQ) is on the classpath.
@Service
public class MessageProducer {
private final JmsTemplate jmsTemplate;
public MessageProducer(JmsTemplate jmsTemplate) {
this.jmsTemplate = jmsTemplate;
}
public void sendOrder(Order order) {
// Sends the 'order' object; Spring converts it to a JMS Message automatically
jmsTemplate.convertAndSend("order-queue", order);
}
}
@JmsListener: Receiving Messages
To consume messages asynchronously, you use the @JmsListener annotation. This creates a "Message Driven POJO" that reacts whenever a new message arrives at a specified destination.
@Component
public class MessageConsumer {
@JmsListener(destination = "order-queue")
public void receiveOrder(Order order) {
System.out.println("Received order for: " + order.getCustomerName());
// Business logic here...
}
}
Configuration and Connection Pools
Spring Boot defaults to an in-memory broker if no external broker is configured, which is useful for testing. For production, you connect to an external broker like Apache ActiveMQ Artemis.
Properties Configuration
# Connection settings for an external ActiveMQ broker
spring.activemq.broker-url=tcp://localhost:61616
spring.activemq.user=admin
spring.activemq.password=secret
# Cache JMS sessions and connections for better performance
spring.jms.cache.enabled=true
spring.jms.cache.session-cache-size=10
Message Conversion (JSON Support)
By default, JMS uses Java Serialization, which is often insecure and fragile. It is a best practice to use JSON for message payloads. You can configure a MessageConverter bean to handle this automatically.
@Bean
public MessageConverter jacksonJmsMessageConverter() {
MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
converter.setTargetType(MessageType.TEXT);
converter.setTypeIdPropertyName("_type"); // Important for deserialization
return converter;
}
JMS Transaction Management
JMS operations can be part of a local or distributed transaction. By using @Transactional, you ensure that if your database update fails, the message is not removed from the queue (or not sent), preventing data loss.
| Feature | Description |
| --- | --- |
| Session Transacted | Ensures the message is only acknowledged after the listener method completes successfully. |
| Acknowledgement Mode | AUTO_ACKNOWLEDGE is the default; CLIENT_ACKNOWLEDGE provides more manual control. |
| Redelivery Policy | Determines how many times the broker should retry delivering a message if the consumer fails. |
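A sketch of the listener side under @Transactional, assuming a hypothetical OrderRepository: if the database write throws, the exception rolls back the transacted JMS session and the broker redelivers the message.

```java
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

@Component
public class TransactionalOrderConsumer {

    private final OrderRepository repository; // hypothetical Spring Data repository

    public TransactionalOrderConsumer(OrderRepository repository) {
        this.repository = repository;
    }

    // Acknowledgement happens only if the whole method, including the
    // database write, completes without throwing.
    @Transactional
    @JmsListener(destination = "order-queue")
    public void receiveOrder(Order order) {
        repository.save(order);
    }
}
```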
Note: ActiveMQ vs. Artemis
While the original "ActiveMQ 5.x" (now ActiveMQ Classic) is still widely used, ActiveMQ Artemis is its successor, built on the HornetQ code base that Red Hat donated to Apache. Spring Boot supports both, but Artemis is generally recommended for new projects due to its superior performance and modern architecture.
Warning: Poison Pill Messages
If a message causes an exception in your listener, it may be returned to the queue and retried indefinitely, effectively blocking the queue. Always implement a Dead Letter Queue (DLQ) and configure a maximum retry limit to move "poison" messages aside for manual inspection.
AMQP (RabbitMQ)
RabbitMQ is a widely used open-source message broker that implements the Advanced Message Queuing Protocol (AMQP). Unlike JMS, which is a Java-specific API, AMQP is a wire-level protocol, allowing applications written in different languages (e.g., a Python producer and a Java consumer) to communicate seamlessly.
The core differentiator of RabbitMQ is its Exchange-based routing, which provides significantly more flexibility than the simple "Queue" and "Topic" models found in standard JMS.
The RabbitMQ Architecture
In RabbitMQ, producers do not send messages directly to queues. Instead, they send messages to an Exchange, which then routes the messages to one or more Queues based on specific rules called Bindings.
| Component | Responsibility |
| --- | --- |
| Producer | Sends messages to an Exchange with a "Routing Key." |
| Exchange | Receives messages and decides which queue(s) they belong to. |
| Binding | A link/rule that connects an Exchange to a Queue. |
| Queue | A buffer that stores messages until they are consumed. |
| Consumer | Attaches to a queue to receive and process messages. |
Exchange Types
The "Type" of exchange determines the logic used to route messages.
| Type | Routing Logic | Use Case |
| --- | --- | --- |
| Direct | Exact match on the routing key. | Targeting specific services (e.g., "email-service"). |
| Fanout | Ignores keys; broadcasts to all bound queues. | Real-time updates, configuration refreshes. |
| Topic | Pattern match on keys (wildcards * and #). | Routing based on categories (e.g., orders.europe.#). |
| Headers | Match based on message header attributes. | Complex routing that isn't easily represented by strings. |
Implementation in Spring Boot
Spring Boot provides the spring-boot-starter-amqp dependency, which auto-configures the RabbitTemplate (for sending) and the listener container (for receiving).
- Defining Infrastructure (Beans)
You can define your exchanges, queues, and bindings as Java Beans. Spring Boot will automatically create them on the RabbitMQ broker at startup.
@Configuration
public class RabbitConfig {
@Bean
public Queue orderQueue() {
return new Queue("orders.incoming", true); // durable: true
}
@Bean
public DirectExchange orderExchange() {
return new DirectExchange("order.exchange");
}
@Bean
public Binding binding(Queue orderQueue, DirectExchange orderExchange) {
return BindingBuilder.bind(orderQueue).to(orderExchange).with("order.routing.key");
}
}
- Sending and Receiving
@Service
public class OrderProducer {
private final RabbitTemplate rabbitTemplate;
public OrderProducer(RabbitTemplate rabbitTemplate) {
this.rabbitTemplate = rabbitTemplate;
}
public void placeOrder(Order order) {
rabbitTemplate.convertAndSend("order.exchange", "order.routing.key", order);
}
}
@Component
public class OrderConsumer {
@RabbitListener(queues = "orders.incoming")
public void handleOrder(Order order) {
System.out.println("Processing: " + order.getId());
}
}
Reliability and Performance
RabbitMQ offers several features to ensure messages are not lost during transit or broker failure.
- Message Acknowledgments (ACK): The broker waits for the consumer to send an "OK" before deleting the message. If the consumer crashes, the message is requeued.
- Durability: Queues and messages can be marked as "durable" and "persistent," meaning they survive a broker restart.
- Publisher Confirms: The broker notifies the producer once the message has been successfully received and routed.
Configuration Properties
# Connection details
spring.rabbitmq.host=localhost
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest
# Performance tuning
spring.rabbitmq.listener.simple.concurrency=3
spring.rabbitmq.listener.simple.max-concurrency=10
spring.rabbitmq.template.retry.enabled=true
Note: RabbitMQ Management UI
RabbitMQ provides a powerful web-based UI (usually on port 15672). It is highly recommended to enable it during development to visualize exchanges, monitor queue depth, and manually purge or publish messages.
Warning: The "Infinite Loop" Requeue
By default, if a listener throws an exception, RabbitMQ will immediately requeue the message. If the error is permanent (e.g., malformed data), this creates an infinite loop of failure. Always configure a Dead Letter Exchange (DLX) to route failed messages to a separate queue for investigation.
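A hedged sketch of wiring such a DLX with Spring AMQP's QueueBuilder (queue and exchange names hypothetical). Combined with spring.rabbitmq.listener.simple.default-requeue-rejected=false, rejected messages flow to the dead-letter queue instead of looping:

```java
import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DeadLetterConfig {

    @Bean
    public Queue mainQueue() {
        // Rejected messages are republished to the DLX with the given key
        return QueueBuilder.durable("orders.main")
                .withArgument("x-dead-letter-exchange", "orders.dlx")
                .withArgument("x-dead-letter-routing-key", "orders.failed")
                .build();
    }

    @Bean
    public DirectExchange deadLetterExchange() {
        return new DirectExchange("orders.dlx");
    }

    @Bean
    public Queue deadLetterQueue() {
        return QueueBuilder.durable("orders.failed.queue").build();
    }

    @Bean
    public Binding deadLetterBinding() {
        return BindingBuilder.bind(deadLetterQueue())
                .to(deadLetterExchange())
                .with("orders.failed");
    }
}
```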
Apache Kafka Support
Apache Kafka is a distributed event-streaming platform designed for high-throughput, fault-tolerant, and real-time data pipelines. While RabbitMQ and JMS are traditional "Message Brokers" focused on delivery and consumption, Kafka is a Distributed Log. It stores streams of records in a persistent, partitioned way, allowing multiple consumers to read the same data at their own pace.
Spring Boot integrates Spring for Apache Kafka through the spring-kafka dependency; there is no dedicated starter, but Boot auto-configures everything once spring-kafka is on the classpath. This abstraction simplifies the use of the Kafka Producer and Consumer APIs, providing template-based sending and listener-based receiving.
Core Kafka Concepts
Kafka’s architecture differs significantly from traditional messaging. Instead of queues that delete messages after acknowledgment, Kafka uses Topics that act as append-only logs.
| Component | Description |
| --- | --- |
| Topic | A category or feed name to which records are published. |
| Partition | Topics are divided into partitions for scalability; each partition is an ordered sequence. |
| Producer | Application that sends records to Kafka topics. |
| Consumer Group | A group of consumers that share the work of reading from a topic. |
| Offset | A unique identifier for a record within a partition, used to track consumer progress. |
KafkaTemplate: Producing Events
Spring Boot auto-configures a KafkaTemplate<K, V>, which handles the connection to the Kafka cluster and the serialization of keys and values.
@Service
public class EventProducer {
private final KafkaTemplate<String, String> kafkaTemplate;
public EventProducer(KafkaTemplate<String, String> kafkaTemplate) {
this.kafkaTemplate = kafkaTemplate;
}
public void sendMessage(String topic, String message) {
// Asynchronous send with a callback
CompletableFuture<SendResult<String, String>> future = kafkaTemplate.send(topic, message);
future.whenComplete((result, ex) -> {
if (ex == null) {
System.out.println("Sent message with offset: " + result.getRecordMetadata().offset());
} else {
System.err.println("Unable to send message: " + ex.getMessage());
}
});
}
}
@KafkaListener: Consuming Events
To consume messages, use the @KafkaListener annotation. Kafka consumers are typically long-running threads that poll the broker for new data.
@Component
public class EventConsumer {
@KafkaListener(topics = "user-logins", groupId = "analytics-group")
public void listen(String message) {
System.out.println("Received Event: " + message);
}
}
Kafka vs. Traditional Brokers
| Feature | RabbitMQ / JMS | Apache Kafka |
| --- | --- | --- |
| Data Handling | Message is deleted after consumption. | Message is retained (based on time or size). |
| Ordering | Guaranteed per queue. | Guaranteed only within a partition. |
| Consumer State | Managed by the broker. | Managed by the consumer (via offsets). |
| Performance | Excellent for low latency. | Unrivaled for high throughput/volume. |
Configuration Properties
Kafka requires at least the bootstrap-servers property to locate the cluster. You should also define serializers and deserializers for your data types.
# Cluster Address
spring.kafka.bootstrap-servers=localhost:9092
# Producer Config
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
# Consumer Config
spring.kafka.consumer.group-id=my-app-group
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
Topic Auto-Creation
By default, if you send a message to a non-existent topic, Kafka may create it with default settings (1 partition, 1 replica). In Spring Boot, you can define NewTopic beans to ensure topics are created with specific configurations at startup.
@Configuration
public class KafkaTopicConfig {
@Bean
public NewTopic logsTopic() {
return TopicBuilder.name("user-logs")
.partitions(3)
.replicas(1)
.build();
}
}
Note: Consumer Concurrency
Unlike JMS listeners, which can scale threads easily, Kafka scaling is tied to the number of partitions. If you have 3 partitions, you can have a maximum of 3 consumers in the same group reading in parallel. Adding a 4th consumer will result in it being idle.
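Within that partition limit, spring-kafka can still run several listener threads in one JVM via the concurrency attribute. A sketch reusing the earlier topic and group names:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class ConcurrentEventConsumer {

    // Three threads, each assigned one of the topic's partitions.
    // With only 3 partitions, a higher value would leave threads idle.
    @KafkaListener(topics = "user-logins", groupId = "analytics-group", concurrency = "3")
    public void listen(String message) {
        System.out.println("Received Event: " + message);
    }
}
```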
Warning: Data Loss on Send
Kafka's send() method is asynchronous by default. If your application shuts down immediately after calling send() without waiting for the future or calling flush(), the message might still be in the producer's local buffer and will be lost.
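One mitigation is to block on the returned future for critical records. A sketch (hypothetical service class), assuming spring-kafka 3.x where send() returns a CompletableFuture:

```java
import java.util.concurrent.TimeUnit;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class CriticalEventProducer {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public CriticalEventProducer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendCritical(String topic, String message) throws Exception {
        // Wait for the broker acknowledgement so a shutdown cannot drop the record
        kafkaTemplate.send(topic, message).get(10, TimeUnit.SECONDS);
        // Alternatively, drain the producer buffer before shutting down:
        // kafkaTemplate.flush();
    }
}
```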
RSocket
RSocket is a binary, peer-to-peer communication protocol for use on byte-stream transports such as TCP, WebSockets, and Aeron. Unlike HTTP, which is a strictly request-response protocol, RSocket is fully reactive and supports bi-directional communication with built-in Backpressure.
Spring Boot provides the spring-boot-starter-rsocket dependency, which integrates the RSocket Java implementation with Spring’s programming model, allowing you to use familiar annotations like @MessageMapping.
The Four Interaction Models
One of RSocket's greatest strengths is that it supports four distinct communication patterns within a single connection.
| Model | Flow | Description |
| --- | --- | --- |
| Request-Response | 1 → 1 | Similar to HTTP; sends one signal and receives one response. |
| Fire-and-Forget | 1 → 0 | Sends a signal but does not expect or wait for a response. |
| Request-Stream | 1 → N | Sends one request and receives a continuous stream of data. |
| Channel | N → N | A bi-directional stream where both sides send and receive data asynchronously. |
RSocket Server Implementation
To create an RSocket server, you define a controller and use @MessageMapping to route incoming messages.
@Controller
public class RSocketProductController {
// Request-Response model
@MessageMapping("get-product")
public Mono<Product> getProduct(Long id) {
return Mono.just(new Product(id, "Reactive Console"));
}
// Request-Stream model
@MessageMapping("product-updates")
public Flux<ProductUpdate> streamUpdates() {
return Flux.interval(Duration.ofSeconds(1))
.map(i -> new ProductUpdate("Price Change", 99.99));
}
}
RSocket Client (RSocketRequester)
Spring Boot provides the RSocketRequester bean to initiate communication. It uses a fluent API similar to WebClient.
@Service
public class ProductClient {
private final RSocketRequester requester;
public ProductClient(RSocketRequester.Builder builder) {
// Connect to the RSocket server over TCP
this.requester = builder.tcp("localhost", 7000);
}
public Flux<ProductUpdate> getUpdates() {
return requester.route("product-updates")
.retrieveFlux(ProductUpdate.class);
}
}
Key Advantages of RSocket
- Multiplexing: Multiple logical streams can share a single physical connection, reducing overhead.
- Resumption: If a connection is dropped (e.g., a mobile device switching networks), RSocket can resume the session without losing state.
- Backpressure: The consumer can tell the producer, "Send me only 10 items," preventing the consumer from being overwhelmed by a fast stream.
- Binary Format: It uses a compact binary encoding, making it faster and more efficient than text-based protocols like JSON/HTTP.
Configuration Properties
You can configure the server port and transport type in your application.properties.
# Enable the RSocket server on a specific port
spring.rsocket.server.port=7000
spring.rsocket.server.transport=tcp
# Optional: endpoint path mapping (applies when using the websocket transport)
spring.rsocket.server.mapping-path=/rsocket
Comparison: RSocket vs. gRPC
| Feature | RSocket | gRPC |
| --- | --- | --- |
| Transport | TCP, WebSocket, Aeron | HTTP/2 |
| Serialization | Flexible (CBOR, JSON, Protobuf) | Protobuf (required) |
| Backpressure | Application-level (Reactive Streams) | Flow control via HTTP/2 |
| Complexity | Dynamic routing; peer-to-peer | Static service definitions; client-server |
Note: Using CBOR
By default, Spring Boot RSocket uses CBOR (Concise Binary Object Representation) for serialization. It is a binary alternative to JSON that is significantly faster to parse while maintaining a similar data structure.
Warning: Server-Side Push
In RSocket, both the client and server can act as "Requesters." This means once a connection is established, the server can initiate a request to the client. Ensure your security layer is configured to handle these incoming requests on the client side if necessary.
WebSockets
While RSocket (6.4) provides an advanced binary protocol, WebSockets remain the industry standard for full-duplex, bi-directional communication over the web. Unlike the standard HTTP request-response cycle, a WebSocket connection starts with an HTTP "Handshake" and then upgrades to a long-lived TCP connection. This allows the server to push data to the client in real time without the client needing to poll for updates.
Spring Boot simplifies WebSocket development through the spring-boot-starter-websocket dependency, which provides support for both low-level WebSocket APIs and the high-level STOMP sub-protocol.
Communication Model: STOMP over WebSockets
Using raw WebSockets is like using raw TCP: you only get a stream of bytes. To make it useful for applications, Spring uses STOMP (Simple Text Oriented Messaging Protocol). STOMP defines how messages should look (destination, headers, body), similar to how HTTP defines the structure of web requests.
| Feature | Raw WebSockets | STOMP over WebSockets |
| --- | --- | --- |
| Level | Low-level (TCP-like) | High-level (messaging-like) |
| Routing | Manual (string parsing) | Automatic (@MessageMapping) |
| Messaging | One-to-one | Pub/sub support via a message broker |
| Complexity | High (handling frames) | Low (handled by Spring) |
Configuring the WebSocket Message Broker
In Spring Boot, you configure the "Message Broker" to handle how messages are routed between clients.
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {
@Override
public void configureMessageBroker(MessageBrokerRegistry config) {
// Enable a simple memory-based broker for prefixes like "/topic"
config.enableSimpleBroker("/topic");
// Prefix for messages bound for @MessageMapping methods
config.setApplicationDestinationPrefixes("/app");
}
@Override
public void registerStompEndpoints(StompEndpointRegistry registry) {
// The URL where clients connect to start the handshake
registry.addEndpoint("/ws-chat").withSockJS();
}
}
Handling Messages: The Controller
You use the @MessageMapping annotation to handle incoming messages and @SendTo to broadcast the return value to a specific topic.
@Controller
public class ChatController {
// Client sends to: /app/chat.sendMessage
// Subscribers to: /topic/public receive the message
@MessageMapping("/chat.sendMessage")
@SendTo("/topic/public")
public ChatMessage sendMessage(@Payload ChatMessage chatMessage) {
return chatMessage;
}
}
Server-to-Client Push (SimpMessagingTemplate)
Sometimes the server needs to push data to a client outside of a specific @MessageMapping (e.g., a background task or an external event). You can inject SimpMessagingTemplate to push data to any destination at any time.
@Service
public class NotificationService {
private final SimpMessagingTemplate messagingTemplate;
public NotificationService(SimpMessagingTemplate messagingTemplate) {
this.messagingTemplate = messagingTemplate;
}
public void notifyUser(String userId, String message) {
// Pushes a message directly to a specific user's topic
messagingTemplate.convertAndSendToUser(userId, "/queue/notifications", message);
}
}
SockJS Fallback
Not all browsers or network proxies support WebSockets. SockJS is a client-side library that Spring supports natively. If a direct WebSocket connection fails, SockJS will automatically fall back to alternative transport protocols like HTTP Long Polling or Streaming, ensuring your real-time features work everywhere.
WebSocket Security
Security is handled via the same Spring Security filter chain, but since WebSockets are persistent, the security check happens during the HTTP Handshake.
- CSRF: Standard CSRF protection doesn't apply to WebSockets in the same way, so Spring Security provides specific ChannelInterceptor logic to validate tokens on the CONNECT frame.
- Authentication: Usually handled by passing a JWT or session cookie during the initial handshake.
Note: Using an External Broker
While the "Simple Broker" is great for development, it lives in your app's memory and doesn't support clustering. For production, you can point Spring to an external broker like RabbitMQ or ActiveMQ using config.enableStompBrokerRelay("/topic").
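A minimal sketch of swapping the simple broker for an external STOMP relay; the host, port, and credentials shown are placeholders to adapt to your broker:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class BrokerRelayConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        // Delegate "/topic" destinations to an external STOMP broker
        config.enableStompBrokerRelay("/topic")
              .setRelayHost("localhost")   // placeholder broker host
              .setRelayPort(61613)         // common STOMP port
              .setClientLogin("guest")     // placeholder credentials
              .setClientPasscode("guest");
        config.setApplicationDestinationPrefixes("/app");
    }
}
```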
Warning: Connection Leaks
WebSockets are "stateful." Each connection consumes a thread and memory on your server. Ensure you monitor your server's open file descriptors and memory usage, especially if you expect thousands of concurrent users.
Spring Integration
Spring Integration is an extension of the Spring programming model that implements the patterns described in the seminal book Enterprise Integration Patterns (EIP). It provides a high-level abstraction for connecting disparate systems, allowing them to communicate via Messages and Channels without being tightly coupled.
Think of it as a "Lego set" for data: you define small, reusable components (Service Activators, Transformers, Filters) and snap them together using "pipes" (Channels) to build complex, asynchronous workflows.
Core Components of EIP
The framework is built on four fundamental pillars that define how data moves through a system.
| Component | Responsibility | Analogous To... |
| --- | --- | --- |
| Message | A generic wrapper for data (payload) and metadata (headers). | An envelope with a letter inside. |
| Message Channel | The pipe through which messages travel; decouples producers from consumers. | A physical delivery route. |
| Message Endpoint | A component that performs logic on a message (e.g., routing, transforming). | A processing facility. |
| Message Gateway | An entry/exit point that hides the messaging logic from your business code. | A customer service counter. |
The Programming Model
Spring Integration supports three ways to configure integration flows: XML (legacy), Annotations, and the modern Java DSL.
Example: Java DSL Integration Flow
This flow reads files from a directory, transforms their content to uppercase, and sends the result to a database.
@Configuration
public class FileIntegrationConfig {
@Bean
public IntegrationFlow fileToDatabaseFlow() {
return IntegrationFlow
.from(Files.inboundAdapter(new File("input-dir")).patternFilter("*.txt"),
e -> e.poller(Pollers.fixedDelay(1000))) // Poll every second
.transform(String.class, String::toUpperCase) // Logic: Uppercase
.filter(payload -> !payload.contains("SECRET")) // Logic: Filter
.handle(message -> {
System.out.println("Processing: " + message.getPayload());
// Further logic to save to DB...
})
.get();
}
}
Common Message Endpoints
Integration flows are built by chaining these functional components together:
- Transformers: Convert a message's payload from one format to another (e.g., JSON to XML).
- Filters: Determine whether a message should be passed to the next channel based on a condition.
- Routers: Direct a message to different channels based on its content or headers.
- Service Activators: Connect a message channel to a specific method in a Spring Bean.
- Splitters & Aggregators: Break a large message into smaller parts for parallel processing, then combine them back together.
Channel Types
The behavior of your flow depends heavily on the type of channel used to connect components.
| Channel Type | Behavior | Blocking? |
| --- | --- | --- |
| DirectChannel | Point-to-point; message is handled in the sender's thread. | Yes |
| QueueChannel | Buffered; messages sit in a queue until a consumer is ready. | No |
| PublishSubscribeChannel | Broadcasts the message to all subscribers. | Varies |
| ExecutorChannel | Uses a ThreadPool to dispatch messages asynchronously. | No |
Adapters and Gateways
To interact with external systems, Spring Integration provides Adapters for almost every protocol imaginable:
- Inbound/Outbound Adapters: One-way communication (e.g., watching a folder, sending an email).
- Gateways: Two-way request/reply communication (e.g., calling a REST API and waiting for the response).
Supported Protocols: File, FTP/SFTP, HTTP, JMS, AMQP, Kafka, MQTT, TCP/UDP, Mail, JDBC, and more.
Note: Error Handling
Integration flows have a dedicated errorChannel. If a component throws an exception, the original message (wrapped in an ErrorMessage) is sent there. You can subscribe to this channel to log errors or perform compensating transactions.
Warning: Blocking the Poller
When using InboundAdapters (like the File adapter), ensure your downstream processing is fast. If the handler blocks, the "Poller" thread will be stuck, and the system won't be able to pick up new data until the previous task finishes.
Spring Batch
Spring Batch is a lightweight, comprehensive framework designed to enable the development of robust batch applications—vital for modern enterprise systems. It provides reusable functions essential in processing large volumes of records, including logging/tracing, transaction management, job processing statistics, job restart, skip, and resource management.
It is built on the philosophy of "Chunk-oriented processing," where data is read, processed, and written in small, configurable batches rather than all at once, ensuring high performance and low memory footprints.
Core Architecture
The Spring Batch hierarchy follows a strict structure to manage execution state and data flow.
| Component | Responsibility |
| --- | --- |
| Job | The entire batch process; a container for one or more steps. |
| Step | A sequential phase of a job (e.g., "Load CSV" or "Clean Data"). |
| JobRepository | A database that stores metadata about jobs (start time, status, failures). |
| JobLauncher | The interface used to start a job. |
| JobInstance | A logical run of a job (e.g., "The End-of-Day Job for Feb 15"). |
| JobExecution | An actual attempt to run a JobInstance (may fail and be retried). |
Chunk-Oriented Processing
Most steps in Spring Batch involve a simple "Read-Process-Write" pattern. This is handled by three specialized components:
- ItemReader: Reads data from a source (flat files, XML, database, Kafka).
- ItemProcessor: Transforms or filters the data. If it returns null, the record is skipped.
- ItemWriter: Writes a "chunk" of records to a destination (database, API, file).
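The filtering behavior of ItemProcessor can be sketched as follows; the User type, its accessors, and the inactive-user rule are illustrative assumptions, not part of the original example:

```java
import org.springframework.batch.item.ItemProcessor;

// Hypothetical processor: normalizes names and filters out inactive users.
// Assumes a User class exposing isActive(), getName(), and setName().
public class UserCleanupProcessor implements ItemProcessor<User, User> {

    @Override
    public User process(User user) {
        if (!user.isActive()) {
            return null; // returning null drops the record from the chunk
        }
        user.setName(user.getName().toUpperCase());
        return user;
    }
}
```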
Implementation Example (Java DSL)
@Bean
public Step sampleStep(JobRepository jobRepository, PlatformTransactionManager transactionManager) {
return new StepBuilder("sampleStep", jobRepository)
.<User, User>chunk(10, transactionManager) // Process 10 records at a time
.reader(itemReader())
.processor(itemProcessor())
.writer(itemWriter())
.build();
}
@Bean
public Job importUserJob(JobRepository jobRepository, Step sampleStep) {
return new JobBuilder("importUserJob", jobRepository)
.start(sampleStep)
.build();
}
Handling Errors: Skip and Retry
Batch jobs often encounter "dirty data." Spring Batch allows you to define how the system should react to specific exceptions without failing the entire multi-million record job.
| Strategy | Description | Use Case |
| --- | --- | --- |
| Skip | Ignore a specific record if a certain exception occurs. | Skipping a row with a malformed date in a CSV. |
| Retry | Re-run the logic for a record if a transient error occurs. | Retrying a database write during a brief deadlock. |
| Restart | Restart a failed job from the last successful chunk. | Resuming a 10-hour job that crashed at hour 8. |
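Skip and retry are enabled on the step builder via faultTolerant(). A minimal sketch, assuming the reader/writer beans from the earlier example; the exception classes and limits shown are illustrative choices:

```java
import org.springframework.batch.core.Step;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.step.builder.StepBuilder;
import org.springframework.batch.item.file.FlatFileParseException;
import org.springframework.context.annotation.Bean;
import org.springframework.dao.DeadlockLoserDataAccessException;
import org.springframework.transaction.PlatformTransactionManager;

@Bean
public Step faultTolerantStep(JobRepository jobRepository,
                              PlatformTransactionManager txManager) {
    return new StepBuilder("faultTolerantStep", jobRepository)
            .<User, User>chunk(100, txManager)
            .reader(itemReader())
            .writer(itemWriter())
            .faultTolerant()
            .skip(FlatFileParseException.class)            // ignore malformed CSV rows...
            .skipLimit(10)                                 // ...but fail the step after 10 skips
            .retry(DeadlockLoserDataAccessException.class) // retry transient DB deadlocks
            .retryLimit(3)
            .build();
}
```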
Spring Batch Metadata Tables
Spring Batch requires a database to store its state. It automatically creates several tables (prefixed with BATCH_) to track job progress.
- BATCH_JOB_INSTANCE: Uniquely identifies a job and its parameters.
- BATCH_JOB_EXECUTION: Tracks if a run was successful, failed, or stopped.
- BATCH_STEP_EXECUTION: Tracks the status of individual steps, including "commit count" and "rollback count."
Scheduling Jobs
Spring Batch does not include a built-in scheduler. To run jobs at specific times, it is commonly paired with:
- Spring Task Scheduler: Using the @Scheduled annotation.
- Quartz: For complex enterprise scheduling.
- Cron Jobs / Kubernetes CronJobs: For external orchestration.
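The @Scheduled approach can be sketched as below, reusing the importUserJob bean from the earlier example. Note two assumptions: @EnableScheduling must be present on a configuration class, and a fresh timestamp parameter is added so each run counts as a new JobInstance:

```java
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class NightlyJobScheduler {

    private final JobLauncher jobLauncher;
    private final Job importUserJob;

    public NightlyJobScheduler(JobLauncher jobLauncher, Job importUserJob) {
        this.jobLauncher = jobLauncher;
        this.importUserJob = importUserJob;
    }

    // Run every night at 2 AM; unique parameters create a new JobInstance
    @Scheduled(cron = "0 0 2 * * *")
    public void runNightly() throws Exception {
        jobLauncher.run(importUserJob,
                new JobParametersBuilder()
                        .addLong("runAt", System.currentTimeMillis())
                        .toJobParameters());
    }
}
```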
Note: Tasklets vs. Chunks
While Chunks are used for data processing, a Tasklet is used for single-task steps, such as cleaning up a directory, sending an email notification, or running a stored procedure.
Warning: Transactional Boundaries
Transactions are managed at the chunk level. If you set a chunk size of 100 and the 99th record fails to write, all 100 records in that chunk are rolled back, ensuring data integrity.
Sending Email
Spring Boot provides a simplified abstraction for sending emails through the spring-boot-starter-mail dependency. It builds upon the JavaMail library but removes the complex manual setup of sessions and transports. The core of this functionality is the JavaMailSender interface.
Configuration
To send emails, you must configure your SMTP (Simple Mail Transfer Protocol) server details in application.properties. Spring Boot uses these properties to auto-configure the JavaMailSender bean.
# SMTP Server Settings (Example for Gmail)
spring.mail.host=smtp.gmail.com
spring.mail.port=587
spring.mail.username=your-email@gmail.com
spring.mail.password=your-app-specific-password
# Protocol Properties
spring.mail.properties.mail.smtp.auth=true
spring.mail.properties.mail.smtp.starttls.enable=true
The Two Types of Messages
Spring supports two ways to construct an email depending on the complexity of your requirements.
| Message Type | Class Used | Use Case |
| --- | --- | --- |
| Simple Text | SimpleMailMessage | Plain text emails without formatting or attachments. |
| Rich Content | MimeMessage | HTML content, inline images, and file attachments. |
- Sending Plain Text
@Service
public class EmailService {
private final JavaMailSender mailSender;
public EmailService(JavaMailSender mailSender) {
this.mailSender = mailSender;
}
public void sendSimpleEmail(String to, String subject, String body) {
SimpleMailMessage message = new SimpleMailMessage();
message.setFrom("noreply@myapp.com");
message.setTo(to);
message.setSubject(subject);
message.setText(body);
mailSender.send(message);
}
}
- Sending HTML with Attachments
For complex emails, use the MimeMessageHelper to handle the low-level MIME multi-part logic.
public void sendHtmlWithAttachment(String to, String content, File file) throws MessagingException {
MimeMessage message = mailSender.createMimeMessage();
// 'true' indicates a multipart message
MimeMessageHelper helper = new MimeMessageHelper(message, true);
helper.setTo(to);
helper.setSubject("Your Monthly Report");
helper.setText(content, true); // 'true' enables HTML
helper.addAttachment("Invoice.pdf", file);
mailSender.send(message);
}
Using Templates (Thymeleaf/FreeMarker)
Hardcoding HTML strings in Java code is difficult to maintain. It is a best practice to use a template engine like Thymeleaf to generate the email body.
- Step 1: Create an HTML template in src/main/resources/templates/email-template.html.
- Step 2: Use TemplateEngine to process the HTML with dynamic data.
- Step 3: Pass the resulting string to the MimeMessageHelper.
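Steps 2 and 3 can be sketched as follows, assuming spring-boot-starter-thymeleaf is on the classpath and the template references a ${name} variable (both assumptions, not from the original):

```java
import org.springframework.stereotype.Service;
import org.thymeleaf.TemplateEngine;
import org.thymeleaf.context.Context;

@Service
public class EmailTemplateService {

    private final TemplateEngine templateEngine;

    public EmailTemplateService(TemplateEngine templateEngine) {
        this.templateEngine = templateEngine;
    }

    public String buildBody(String userName) {
        Context context = new Context();
        context.setVariable("name", userName); // matches ${name} in the template
        // Renders src/main/resources/templates/email-template.html to a String,
        // ready to pass to MimeMessageHelper.setText(html, true)
        return templateEngine.process("email-template", context);
    }
}
```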
Best Practices and Performance
| Feature | Recommendation |
| --- | --- |
| Asynchronous Sending | Use @Async to send emails in a background thread so the user doesn't have to wait for the SMTP handshake. |
| Connection Pooling | For high-volume applications, use a dedicated mail proxy or relay to manage connections efficiently. |
| Testing | Use tools like MailHog or GreenMail during development to "catch" outgoing emails without actually sending them to real addresses. |
Note: App-Specific Passwords
Modern email providers (Gmail, Outlook) do not allow you to use your standard account password for SMTP. You must enable Two-Factor Authentication and generate an App Password to use in your configuration.
Warning: Blocking UI Threads
Sending an email is a network-intensive operation that can take several seconds. If you call mailSender.send() directly inside a @RestController method without @Async, your API response will be significantly delayed, leading to a poor user experience.
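A minimal sketch of the @Async pattern recommended above; it assumes @EnableAsync is declared on a configuration class, and the AsyncEmailService name is illustrative:

```java
import org.springframework.mail.SimpleMailMessage;
import org.springframework.mail.javamail.JavaMailSender;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;

@Service
public class AsyncEmailService {

    private final JavaMailSender mailSender;

    public AsyncEmailService(JavaMailSender mailSender) {
        this.mailSender = mailSender;
    }

    // Runs on a background thread, so the calling HTTP request returns
    // immediately instead of waiting for the SMTP handshake.
    @Async
    public void sendInBackground(String to, String subject, String body) {
        SimpleMailMessage message = new SimpleMailMessage();
        message.setTo(to);
        message.setSubject(subject);
        message.setText(body);
        mailSender.send(message);
    }
}
```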
Validation (JSR-303/JSR-380)
Data validation is a critical aspect of any application, ensuring that the data entering your system is accurate, complete, and safe. Spring Boot provides seamless integration with the Bean Validation API (JSR-303 and its successor JSR-380). The default implementation used by Spring Boot is Hibernate Validator.
Validation is typically applied at the Controller level (validating incoming Request Bodies) or the Service level (ensuring business logic constraints are met).
Core Annotations
Constraints are defined using annotations directly on the fields of your POJOs (Plain Old Java Objects) or DTOs (Data Transfer Objects).
| Annotation | Description | Example |
| --- | --- | --- |
| @NotNull | Field cannot be null. | @NotNull String name; |
| @NotEmpty | String, Collection, or Map cannot be null or empty (size > 0). | @NotEmpty List<Item> items; |
| @NotBlank | String must contain at least one non-whitespace character. | @NotBlank String email; |
| @Size | Validates the size of a String, Collection, or Array. | @Size(min=2, max=30) String name; |
| @Min / @Max | Validates that a numeric value is within bounds. | @Min(18) int age; |
| @Email | Validates that the string follows a valid email format. | @Email String email; |
| @Pattern | Validates the string against a specific Regular Expression. | @Pattern(regexp="^[A-Z]{3}$") String code; |
| @Future / @Past | Validates that a date is in the future or the past. | @Past LocalDate birthday; |
Implementation: Validating REST Requests
To trigger validation in a REST controller, you must use the @Valid (standard JSR) or @Validated (Spring-specific) annotation on the method parameter.
- The DTO with Constraints
public class UserDto {
@NotBlank(message = "Username is required")
private String username;
@Email(message = "Invalid email format")
private String email;
@Min(value = 18, message = "Must be at least 18 years old")
private int age;
// Getters and Setters...
}
- The Controller
If validation fails, Spring throws a MethodArgumentNotValidException and returns a 400 Bad Request status.
@RestController
@RequestMapping("/users")
public class UserController {
@PostMapping
public ResponseEntity<String> createUser(@Valid @RequestBody UserDto user) {
// If execution reaches here, the 'user' object is guaranteed to be valid
return ResponseEntity.ok("User is valid");
}
}
Handling Validation Errors
By default, the error response for a validation failure can be quite verbose. Most developers implement a @RestControllerAdvice to capture the MethodArgumentNotValidException and return a clean, structured JSON response.
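A minimal sketch of such an advice, producing the field-to-message map shown below (the ValidationErrorHandler class name is illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.MethodArgumentNotValidException;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.ResponseStatus;
import org.springframework.web.bind.annotation.RestControllerAdvice;

@RestControllerAdvice
public class ValidationErrorHandler {

    // Collapse the verbose default error body into a simple field -> message map
    @ExceptionHandler(MethodArgumentNotValidException.class)
    @ResponseStatus(HttpStatus.BAD_REQUEST)
    public Map<String, String> handleValidationErrors(MethodArgumentNotValidException ex) {
        Map<String, String> errors = new HashMap<>();
        ex.getBindingResult().getFieldErrors().forEach(
                error -> errors.put(error.getField(), error.getDefaultMessage()));
        return errors;
    }
}
```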
| Error Field | Message |
| --- | --- |
| username | Username is required |
| age | Must be at least 18 years old |
Validation Groups
Sometimes you need different validation rules for the same object depending on the context (e.g., id is required for an Update but must be null for a Create). Groups allow you to categorize constraints.
public interface OnCreate {}
public interface OnUpdate {}
public class UserDto {
@Null(groups = OnCreate.class)
@NotNull(groups = OnUpdate.class)
private Long id;
}
// In Controller
public void create(@Validated(OnCreate.class) @RequestBody UserDto dto) { ... }
Custom Validators
If the built-in annotations are insufficient, you can create your own. This involves creating a custom Annotation and a ConstraintValidator class.
// 1. Define Annotation
@Target({ FIELD })
@Retention(RUNTIME)
@Constraint(validatedBy = CourseCodeValidator.class)
public @interface CourseCode {
String value() default "LUV";
String message() default "must start with LUV";
Class<?>[] groups() default {};
Class<? extends Payload>[] payload() default {};
}
// 2. Define Validator Logic
public class CourseCodeValidator implements ConstraintValidator<CourseCode, String> {
private String prefix;
@Override
public void initialize(CourseCode code) { prefix = code.value(); }
@Override
public boolean isValid(String value, ConstraintValidatorContext context) {
return value != null && value.startsWith(prefix);
}
}
Note: Dependency Requirement
Since Spring Boot 2.3, the validation starter is no longer included in spring-boot-starter-web. You must explicitly add:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-validation</artifactId>
</dependency>
Warning: Service-Level Validation
Do not rely solely on Controller-level validation if your beans are being used by other internal services. Add @Validated at the class level of your @Service and @Valid on method parameters to ensure data integrity throughout the entire application lifecycle.
Enabling Production-Ready Features
In a production environment, knowing exactly what is happening inside your application is as important as the features themselves. Spring Boot Actuator provides built-in endpoints that allow you to monitor and interact with your application. It effectively transforms your application into a "managed" service that can be audited, analyzed, and health-checked by external tools.
The Role of Actuator
Actuator is the bridge between your code and operations teams. It gathers data from the ApplicationContext and the environment to expose critical insights without requiring you to write custom monitoring logic.
| Category | Description |
| --- | --- |
| Health Monitoring | Checks the status of the app and its dependencies (DB, disk, mail). |
| Metrics | Provides quantitative data (CPU usage, memory, request counts). |
| Diagnostics | Exposes thread dumps, heap dumps, and environment properties. |
| Management | Allows for remote shutdown (disabled by default) or log level changes. |
Enabling Actuator
To get started, add the following dependency to your pom.xml or build.gradle:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
By default, all Actuator endpoints are mapped under the /actuator base path.
Endpoint Exposure and Security
For security reasons, Spring Boot disables most endpoints by default, except for /health. You must explicitly "include" the endpoints you wish to expose via web (HTTP) or JMX.
Exposure Configuration
In application.properties, you can control exposure using wildcards or specific lists.
# Expose only specific endpoints
management.endpoints.web.exposure.include=health,info,metrics
# Expose all endpoints (Common in development, risky in production)
management.endpoints.web.exposure.include=*
# Exclude sensitive endpoints
management.endpoints.web.exposure.exclude=env,beans
Commonly Used Endpoints
| Endpoint | Purpose | Description |
| --- | --- | --- |
| /health | Basic Health | Shows if the app is "UP" or "DOWN". |
| /info | App Information | Displays arbitrary info (e.g., git commit ID or build version). |
| /metrics | Performance Data | Provides access to Micrometer-managed metrics (JVM, Tomcat, etc.). |
| /loggers | Runtime Logging | View and modify log levels (e.g., DEBUG/INFO) without restarting. |
| /env | Environment | Shows all ConfigurableEnvironment properties. |
| /threaddump | Diagnostics | Performs a thread dump to identify deadlocks or CPU spikes. |
Health Indicator Details
By default, the /health endpoint only shows a simple status. To see the status of specific components (like your database connection or Redis status), you must enable details.
management.endpoint.health.show-details=always
With this enabled, the JSON output becomes much richer:
- db: Connection status and database type.
- diskSpace: Available vs. total disk space.
- ping: General reachability.
Customizing the Base Path
If you want to hide the fact that you are using Spring Boot or if /actuator conflicts with your API design, you can change the management base path:
management.endpoints.web.base-path=/manage
Note: Protecting Actuator Endpoints
If you have spring-security on your classpath, all Actuator endpoints (except /health and /info) are protected by default. You should configure a dedicated role (e.g., ADMIN) to access these sensitive diagnostic tools.
Warning: Sensitive Data Exposure
The /env and /configprops endpoints can leak sensitive information like API keys or database passwords. Spring Boot attempts to "mask" (sanitize) known secret keys with ******, but you should always restrict these endpoints to internal networks only.
Endpoints (Health, Info, Metrics)
While Actuator offers many endpoints, the "Big Three" (Health, Info, and Metrics) are the backbone of most monitoring strategies. These endpoints provide the vital signs of your application and are typically the primary data sources for dashboards (like Grafana) and orchestration platforms (like Kubernetes).
The /health Endpoint
The Health endpoint is used to check the running status of the application. It is primarily used by Liveness and Readiness probes in containerized environments to decide if an instance should receive traffic or be restarted.
- Status Aggregation: If any single component (e.g., the database) is DOWN, the overall status becomes DOWN.
- Built-in Indicators: Spring Boot auto-configures indicators for technologies it detects, such as DataSourceHealthIndicator, RedisHealthIndicator, and DiskSpaceHealthIndicator.
Custom Health Indicator
You can write your own health check by implementing the HealthIndicator interface:
@Component
public class ExternalApiHealthIndicator implements HealthIndicator {
@Override
public Health health() {
boolean isApiUp = checkExternalApi(); // Your custom logic
if (!isApiUp) {
return Health.down().withDetail("Error", "External API is unreachable").build();
}
return Health.up().build();
}
}
The /info Endpoint
The Info endpoint displays arbitrary information about the application. By default, it is empty. It is often used to show build versions, git commit hashes, or contact details.
Configuration Sources
You can populate this endpoint using properties or build-time plugins.
| Source | Configuration |
| --- | --- |
| Properties | Define any property starting with info.* in application.properties. |
| Build Info | Add the build-info goal to the Spring Boot Maven/Gradle plugin to show project version. |
| Git Info | Use the git-commit-id-plugin to display the specific commit the app is running on. |
# Manual Info
info.app.name=Order Service
info.app.description=Handles customer checkouts
info.app.version=2.4.0
The /metrics Endpoint
The Metrics endpoint provides a window into the quantitative performance of the app. Under the hood, Spring Boot uses Micrometer, a "SLF4J for metrics," which allows you to instrument your code once and export the data to various monitoring systems (Prometheus, New Relic, Datadog).
Accessing Metrics
- List all metrics: Navigate to /actuator/metrics.
- Inspect a specific metric: Navigate to /actuator/metrics/{metric.name} (e.g., /actuator/metrics/jvm.memory.used).
| Metric Category | Examples |
| --- | --- |
| JVM | jvm.memory.used, jvm.gc.pause, jvm.threads.live |
| HTTP Requests | http.server.requests (includes status codes and latency) |
| Connectivity | jdbc.connections.active, hikaricp.connections.usage |
| System | system.cpu.usage, process.uptime |
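Beyond the built-in metrics, you can register your own through Micrometer's MeterRegistry. A minimal sketch; the orders.checkout metric name and CheckoutMetrics class are illustrative:

```java
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Service;

@Service
public class CheckoutMetrics {

    private final Counter checkoutCounter;

    public CheckoutMetrics(MeterRegistry registry) {
        // Becomes visible at /actuator/metrics/orders.checkout
        this.checkoutCounter = Counter.builder("orders.checkout")
                .description("Number of completed checkouts")
                .tag("channel", "web") // keep tags low-cardinality (see warning below)
                .register(registry);
    }

    public void recordCheckout() {
        checkoutCounter.increment();
    }
}
```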
Comparison Table
| Feature | /health | /info | /metrics |
| --- | --- | --- | --- |
| Primary Audience | Load balancers / K8s | Developers / Ops | Monitoring tools (Grafana) |
| Data Type | Binary status (UP/DOWN) | Static metadata | Time-series numerical data |
| Security | Usually public (status only) | Usually public | Usually protected |
| Customizable? | Yes, via HealthIndicator | Yes, via properties/git | Yes, via MeterRegistry |
Integration with Kubernetes
If Spring Boot detects it is running in a Kubernetes environment, it automatically enables specialized health groups:
- /actuator/health/liveness: Tells K8s if the app is alive (if not, K8s restarts the pod).
- /actuator/health/readiness: Tells K8s if the app is ready to handle traffic (if not, K8s removes it from the Service load balancer).
Note: Formatting Metrics
The raw /actuator/metrics JSON is meant for human browsing. To allow a tool like Prometheus to scrape your data, you must add the micrometer-registry-prometheus dependency, which creates a new endpoint: /actuator/prometheus.
Warning: Metric Cardinality
Be careful when adding "Tags" to custom metrics. If you use a unique ID (like a UserID) as a tag, you will create a "high-cardinality" problem that can crash your monitoring server and bloat your application's memory usage.
Monitoring & Management over HTTP/JMX
Spring Boot Actuator allows you to interact with your application through two primary channels: HTTP (REST endpoints) and JMX (Java Management Extensions). While HTTP is the standard for modern web-based monitoring tools, JMX remains a powerful option for local debugging and traditional enterprise monitoring systems.
- HTTP Exposure
HTTP is the most common way to consume Actuator data. It is platform-agnostic and integrates easily with tools like Prometheus, Grafana, and ELK.
- Base Path: All endpoints are hosted under /actuator by default.
- Response Format: Data is returned as JSON, often using the HAL (Hypertext Application Language) format to provide links between related endpoints.
- Security: Because HTTP is accessible over the network, it must be secured using Spring Security to prevent leaking sensitive environment data.
Web Exposure Properties
# Include specific endpoints for HTTP
management.endpoints.web.exposure.include=health,info,metrics,loggers
# Change the base path (e.g., to /manage)
management.endpoints.web.base-path=/manage
# Map an endpoint to a different path
management.endpoints.web.path-mapping.health=checkup
- JMX (Java Management Extensions)
JMX is a standard technology for managing and monitoring Java applications. Actuator endpoints are automatically exposed as MBeans (Managed Beans).
- Primary Use Case: Real-time local debugging using tools like JConsole or VisualVM.
- Advantages: JMX allows for low-latency interactions and is often already "open" in corporate Java environments.
- Exposure: By default, all Actuator endpoints are exposed over JMX, unlike HTTP which is restricted.
JMX Configuration
# Enable/Disable JMX exposure
management.endpoints.jmx.exposure.include=*
# Define a custom JMX domain name
spring.jmx.default-domain=com.myapp.production
- Comparing HTTP vs. JMX
| Feature | HTTP (REST) | JMX (MBeans) |
| --- | --- | --- |
| Accessibility | Remote (Web/Browsers) | Local/Remote (RMI/JMXMP) |
| Security | Spring Security (RBAC/OIDC) | JMX Credentials / SSL |
| Default Exposure | Restricted (health, info) | Unrestricted (*) |
| Best For | Dashboards (Grafana), Cloud-Native | Deep-dive debugging, Legacy systems |
| Interactivity | Simple GET/POST requests | Invoking complex MBean operations |
- Managing Log Levels at Runtime
One of the most powerful management features available over both HTTP and JMX is the ability to change log levels without restarting the application.
- HTTP: Send a POST request to /actuator/loggers/com.example with a JSON body: {"configuredLevel": "DEBUG"}.
- JMX: Find the Loggers MBean and execute the setLogLevel operation.
This is invaluable for troubleshooting production issues where you need more detail on a specific package but cannot afford a service restart.
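The HTTP variant can be scripted with nothing but the JDK's built-in java.net.http client. A sketch; the base URL and logger name are placeholders for your environment:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class LoggerLevelRequest {

    // Builds the POST that flips a logger to DEBUG at runtime
    static HttpRequest debugRequest(String baseUrl, String loggerName) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/actuator/loggers/" + loggerName))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"configuredLevel\": \"DEBUG\"}"))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = debugRequest("http://localhost:8080", "com.example");
        // Send this with java.net.http.HttpClient against a running application;
        // Actuator answers 204 No Content on success.
        System.out.println(req.method() + " " + req.uri());
    }
}
```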
- Security Best Practices
Exposing management endpoints can be a security risk if not handled correctly.
- Network Isolation: If possible, run management endpoints on a different port than your main application.
management.server.port=8081
- Role-Based Access: Require an ADMIN role for sensitive endpoints like /heapdump, /env, or /shutdown.
- Audit Logging: Actuator provides an AuditEvents endpoint to track who interacted with the management features and when.
Note: The /shutdown Endpoint
This endpoint is the only one that is disabled by default in the code itself, not just the exposure settings. To use it, you must set management.endpoint.shutdown.enabled=true. It allows for a graceful shutdown of the ApplicationContext.
Warning: Remote JMX Vulnerabilities
Opening JMX for remote access (com.sun.management.jmxremote) without proper SSL and authentication is a high security risk, as it can allow for unauthorized remote code execution. Stick to HTTP for remote monitoring whenever possible.
Loggers & Heap Dumps
When an application moves beyond simple health checks into active troubleshooting, the Loggers and Heap Dump endpoints become the primary tools for developers. They allow for real-time diagnostics—changing the visibility of system events and capturing the entire state of the application's memory—without requiring a restart or a debugger attachment.
- The Loggers Endpoint
The /loggers endpoint allows you to view and dynamically modify the logging levels of your application at runtime. This is particularly useful when you need to debug a specific issue in production by increasing the verbosity of a single package.
Key Operations
- GET /actuator/loggers: Returns a list of all configured loggers and their effective levels (e.g., INFO, DEBUG, ERROR).
- GET /actuator/loggers/{name}: Returns the level of a specific package or class.
- POST /actuator/loggers/{name}: Updates the log level for that logger.
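For reference, a GET on a single logger returns both the explicitly configured level and the effective level inherited from parent loggers, in a shape like:

```json
{
  "configuredLevel": "INFO",
  "effectiveLevel": "INFO"
}
```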
Example: Enabling Debug Logging for a Package
To troubleshoot a database issue in the com.example.repository package, you can send a POST request:
| Method | URL | Body (JSON) | Result |
| --- | --- | --- | --- |
| POST | /actuator/loggers/com.example.repository | {"configuredLevel": "DEBUG"} | Immediate debug output for that package. |
- The Heap Dump Endpoint
The /heapdump endpoint triggers a JVM heap dump and returns the resulting binary file (usually in GZipped .hprof format). This file contains a snapshot of every object currently in the application's memory.
Why use it?
- Memory Leaks: Identify which objects are filling up the heap and why they aren't being garbage collected.
- Post-Mortem Analysis: Analyze the state of the system right before an OutOfMemoryError.
- Object Inspection: See the actual values stored in variables across the entire application.
How to use it
- Download: Access /actuator/heapdump via a browser or curl.
- Analyze: Open the downloaded file in a tool like Eclipse MAT (Memory Analyzer Tool) or VisualVM.
- The Thread Dump Endpoint
While Heap Dumps focus on memory, the /threaddump endpoint focuses on execution. It provides a snapshot of all active threads within the JVM.
| Feature | Description |
| --- | --- |
| Deadlock Detection | Automatically flags threads that are waiting for each other in a circular dependency. |
| CPU Spikes | Helps identify which thread is consuming 100% of the CPU (e.g., an infinite loop). |
| State Tracking | Shows if threads are RUNNABLE, BLOCKED, or WAITING. |
Comparison: Diagnostics Tools
| Tool | Format | Focus | Performance Impact |
| --- | --- | --- | --- |
| Loggers | JSON / Text | Logic flow & Events | Low |
| Thread Dump | JSON / Text | Concurrency & CPU | Low/Medium |
| Heap Dump | Binary (.hprof) | Memory & Object state | High (Freezes JVM briefly) |
Security and Performance Warnings
- Sensitivity: Both Heap and Thread dumps can contain sensitive information, such as passwords, tokens, or PII (Personally Identifiable Information) that was stored in memory at the time of the dump. Always protect these endpoints with strict RBAC (Role-Based Access Control).
- Performance Hit: Generating a heap dump requires the JVM to perform a "Stop-the-World" pause. On a large heap (e.g., 8GB+), this can freeze the application for several seconds, potentially causing load balancers to mark the instance as unhealthy.
- Storage: Ensure the server has enough disk space to store the dump file temporarily, as they can be quite large.
Note: Log Grouping
You can define "Logger Groups" in your properties to change the levels of multiple related packages at once. For example, a "web" group could include both Spring MVC and Tomcat loggers.
Metrics (Micrometer)
In Spring Boot 2 and 3, Micrometer is the instrumentation library that powers the delivery of application metrics. It acts as a "facade" or "SLF4J for metrics," allowing developers to instrument their code with a vendor-neutral API. This means you can write your metrics once and swap out the underlying monitoring system (e.g., Prometheus, New Relic, Datadog) simply by changing a dependency.
Core Concepts: Meters and Registry
The two primary components of the Micrometer ecosystem are the Meter and the MeterRegistry.
| Component | Description |
| --- | --- |
| Meter | The interface for collecting a set of measurements (e.g., a counter or a timer). |
| MeterRegistry | The "home" for meters. Each monitoring system has its own registry implementation (e.g., PrometheusMeterRegistry). |
| Tags (Dimensions) | Key-value pairs added to a meter to allow for filtering and "drilling down" into data (e.g., status=200, region=us-east). |
Types of Meters
Micrometer provides several types of meters to capture different kinds of data patterns.
| Meter Type | Use Case | Example |
| --- | --- | --- |
| Counter | A value that only increases. | Total requests handled, total errors. |
| Gauge | A value that can go up and down (instantaneous). | Current number of active threads, memory usage. |
| Timer | Measures short-duration latencies and frequency. | Time taken to execute a database query. |
| Distribution Summary | Measures the distribution of events (not necessarily time). | Size of HTTP response payloads. |
Implementing Custom Metrics
While Spring Boot auto-instruments many things (HTTP, JVM, Hibernate), you will often want to track business-specific metrics.
Example: Tracking Orders with Tags
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Service;

@Service
public class OrderService {

    private final Counter orderCounter;

    public OrderService(MeterRegistry registry) {
        // Define a counter with tags for dimensional analysis
        this.orderCounter = Counter.builder("orders.placed")
                .tag("region", "EMEA")
                .description("Total number of orders placed")
                .register(registry);
    }

    public void placeOrder() {
        // Business logic...
        orderCounter.increment();
    }
}
Global Customization with Common Tags
You can register a MeterRegistryCustomizer bean to globally modify how metrics are recorded, such as adding common tags to every single metric (e.g., the application name or environment). A MeterFilter bean achieves a similar effect and can additionally rename or deny meters.
@Bean
public MeterRegistryCustomizer<MeterRegistry> metricsCommonTags() {
return registry -> registry.config().commonTags("application", "order-service", "env", "prod");
}
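The MeterFilter mechanism itself is best suited to vetoing meters. A runnable sketch against a plain SimpleMeterRegistry; in Spring Boot, declaring the same MeterFilter as a @Bean applies it to the auto-configured registry. The "uri" tag convention matches what http.server.requests produces, but the filter predicate here is an illustrative assumption:

```java
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.config.MeterFilter;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class MeterFilterDemo {

    // Builds a registry that drops any meter tagged with an /actuator URI
    static MeterRegistry filteredRegistry() {
        MeterRegistry registry = new SimpleMeterRegistry();
        registry.config().meterFilter(MeterFilter.deny(id -> {
            String uri = id.getTag("uri");
            return uri != null && uri.startsWith("/actuator");
        }));
        return registry;
    }

    public static void main(String[] args) {
        MeterRegistry registry = filteredRegistry();
        registry.counter("http.server.requests", "uri", "/actuator/health").increment();
        registry.counter("http.server.requests", "uri", "/orders").increment();
        // The denied meter becomes a no-op and is never registered
        System.out.println(registry.getMeters().size());
    }
}
```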
Integration with Prometheus
Prometheus is the most common consumer of Micrometer data in cloud-native environments. Because Prometheus uses a pull-based model, your application must expose a specific endpoint for the Prometheus server to "scrape."
- Dependency: Add micrometer-registry-prometheus.
- Endpoint: This automatically enables /actuator/prometheus.
- Format: The data is served in a plain-text format specifically designed for Prometheus.
Best Practices for Metrics
- Naming Conventions: Use dot-separated names (e.g., http.server.requests). Micrometer automatically converts these to the format required by the backend (e.g., underscores for Prometheus).
- Avoid High Cardinality: Do not use values with high variability (like User IDs, UUIDs, or timestamps) as Tags. This creates too many unique time-series and can crash your monitoring database.
- Prefer Timers for Latency: Use Timer instead of a manual Gauge for latencies, as Timers provide built-in statistics like percentiles (p95, p99) and max/mean values.
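The Timer recommendation can be sketched against a standalone SimpleMeterRegistry; the metric name and recorded values are illustrative:

```java
import java.time.Duration;
import java.util.concurrent.TimeUnit;

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class LatencyTimerDemo {

    // Registers a Timer that also publishes p95/p99 percentiles
    static Timer buildTimer(MeterRegistry registry) {
        return Timer.builder("orders.checkout.latency")
                .publishPercentiles(0.95, 0.99)
                .register(registry);
    }

    public static void main(String[] args) {
        MeterRegistry registry = new SimpleMeterRegistry();
        Timer timer = buildTimer(registry);

        timer.record(Duration.ofMillis(120));                    // record an explicit duration
        timer.record(() -> { /* timed business logic here */ }); // or wrap a Runnable

        System.out.println(timer.count() + " samples, max "
                + timer.max(TimeUnit.MILLISECONDS) + " ms");
    }
}
```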
Note: Observation API (Spring Boot 3)
Spring Boot 3 introduced the Observation API, which unifies Metrics and Tracing. Instead of instrumenting for metrics and tracing separately, you can "observe" a block of code, and Spring will automatically handle both Micrometer metrics and Brave/OpenTelemetry traces.
Distributed Tracing
In a microservices architecture, a single user request often travels through multiple services before a response is returned. When a failure or latency issue occurs, it is difficult to identify which specific service is the bottleneck. Distributed Tracing solves this by assigning a unique ID to a request and propagating it across all service boundaries.
In Spring Boot 3, tracing is managed via Micrometer Tracing (which replaced the older Spring Cloud Sleuth). It provides a common API to record traces and exports them to visualization tools like Zipkin or Jaeger.
- Core Concepts: Traces vs. Spans
To understand tracing, you must distinguish between the overall journey and the individual stops along the way.
| Term | Description | Analogous To... |
| --- | --- | --- |
| Trace | The complete path of a request as it moves through the entire system. | A complete flight itinerary (NYC to Tokyo). |
| Span | A single unit of work within a service (e.g., an HTTP request or a DB query). | A single leg of the flight (NYC to London). |
| Trace ID | A unique ID shared by all spans in a single request. | The Booking Reference (PNR) number. |
| Span ID | A unique ID for a specific segment of the work. | A specific Boarding Pass ID. |
- Enabling Tracing in Spring Boot 3
Spring Boot 3 uses the Observation API to handle tracing. To enable it, you need the Micrometer Tracing bridge and an exporter to send the data to a server.
Required Dependencies (Example for Zipkin)
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-tracing-bridge-brave</artifactId>
</dependency>
<dependency>
<groupId>io.zipkin.reporter2</groupId>
<artifactId>zipkin-reporter-brave</artifactId>
</dependency>
- Configuration and Sampling
Tracing every single request can generate a massive amount of data and impact performance. In production, it is common to "sample" only a percentage of requests.
# Enable tracing
management.tracing.enabled=true
# Sample 10% of requests (0.1) - Use 1.0 for development to see everything
management.tracing.sampling.probability=0.1
# Zipkin server URL
management.zipkin.tracing.endpoint=http://localhost:9411/api/v2/spans
- Propagation and Logs
One of the most immediate benefits of Micrometer Tracing is that it automatically injects the traceId and spanId into your SLF4J Mapped Diagnostic Context (MDC). This allows your log files to show exactly which request generated a specific log line.
Example Log Pattern:
2026-02-19 [inventory-service, a1b2c3d4, e5f6g7h8] INFO: Checking stock...
- inventory-service: The app name.
- a1b2c3d4: The Trace ID (stays the same across services).
- e5f6g7h8: The Span ID (changes for this specific operation).
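The bracketed correlation block corresponds to Spring Boot's logging.pattern.correlation property, which recent Boot 3.x versions populate automatically once tracing is on the classpath. A sketch of setting it explicitly:

```properties
spring.application.name=inventory-service
# %X pulls traceId/spanId from the MDC; the empty defaults keep logs clean when tracing is off
logging.pattern.correlation=[${spring.application.name:},%X{traceId:-},%X{spanId:-}]
```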
- Visualization Tools
Tracing data is difficult to read in raw JSON format. Specialized tools provide a "Gantt chart" style view of requests.
| Tool | Description |
| --- | --- |
| Zipkin | A classic, easy-to-use distributed tracing system. |
| Jaeger | A more modern, CNCF-hosted tracing platform with advanced filtering. |
| Grafana Tempo | A high-scale traces storage backend that integrates deeply with Grafana dashboards. |
- Manual Instrumentation (Observation API)
While Spring auto-traces HTTP and DB calls, you might want to trace a specific business method manually.
import io.micrometer.observation.Observation;
import io.micrometer.observation.ObservationRegistry;
import org.springframework.stereotype.Service;

@Service
public class ManualTraceService {

    private final ObservationRegistry registry;

    public ManualTraceService(ObservationRegistry registry) {
        this.registry = registry;
    }

    public void complexLogic() {
        Observation.createNotStarted("manual.logic", registry)
                .observe(() -> {
                    // This block is now wrapped in a unique Span
                    doWork();
                });
    }

    private void doWork() {
        // Business logic...
    }
}
Note: OpenTelemetry (OTLP)
The industry is moving toward OpenTelemetry as the universal standard for observability. Spring Boot 3 fully supports OTLP exporters, allowing you to send traces to almost any modern provider (AWS X-Ray, Honeycomb, Azure Monitor) without changing your code.
Warning: Context Propagation
Tracing works by passing headers (like X-B3-TraceId) between services. If you use a custom thread pool or manual CompletableFuture without using Spring’s TaskExecutor or ContextPropagatingAccessor, the Trace ID will be lost, and the trace will appear "broken" in your visualization tool.
Efficient Container Images (Docker)
In a cloud-native world, the size, security, and startup speed of your container images are critical. Simply throwing a JAR file into a basic Docker image is often inefficient. Spring Boot provides native support for creating "Cloud Native Buildpacks" and optimized Dockerfiles that leverage Layered JARs to ensure faster builds and smaller deployment footprints.
- Traditional vs. Layered Docker Images
In a traditional Docker image, the entire "Fat JAR" is one layer. If you change a single line of code, the entire layer (including all 100MB+ of dependencies) must be rebuilt and pushed to the registry.
Layered JARs solve this by splitting the JAR into four distinct layers based on how frequently they change:
| Layer | Content | Change Frequency |
| --- | --- | --- |
| dependencies | Standard library dependencies (e.g., Spring Framework). | Very Low |
| spring-boot-loader | The code used to launch the Fat JAR. | Very Low |
| snapshot-dependencies | Unstable dependencies (e.g., -SNAPSHOT versions). | Medium |
| application | Your classes and resources. | High |
- Creating Images with Cloud Native Buildpacks
Spring Boot includes direct integration with Buildpacks, allowing you to create an optimized, production-ready Docker image without even writing a Dockerfile. It automatically handles the OS, JRE, and layering.
Maven:
./mvnw spring-boot:build-image -Dspring-boot.build-image.imageName=my-app:latest
Gradle:
./gradlew bootBuildImage --imageName=my-app:latest
- Optimized Custom Dockerfile
If you need more control, you should use a multi-stage Dockerfile that extracts the layers before building the image. This ensures that Docker can cache the heavy dependency layers.
# Stage 1: Extraction
FROM eclipse-temurin:17-jre-alpine as builder
WORKDIR application
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} application.jar
RUN java -Djarmode=layertools -jar application.jar extract
# Stage 2: Final Image
FROM eclipse-temurin:17-jre-alpine
WORKDIR application
COPY --from=builder application/dependencies/ ./
COPY --from=builder application/spring-boot-loader/ ./
COPY --from=builder application/snapshot-dependencies/ ./
COPY --from=builder application/application/ ./
ENTRYPOINT ["java", "org.springframework.boot.loader.launch.JarLauncher"]
- Best Practices for Spring Boot Containers
| Strategy | Benefit |
| --- | --- |
| Use Alpine or Distroless | Reduces the attack surface and image size by removing unused OS utilities. |
| Run as Non-Root | Enhances security by ensuring the application doesn't have root privileges on the host. |
| Memory Limits | Use -XX:MaxRAMPercentage instead of hard-coded heap sizes so the JVM respects Docker memory limits. |
| Graceful Shutdown | Set server.shutdown=graceful in properties to let active requests finish before the container stops. |
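The non-root and memory-limit practices can be combined in the final stage of the multi-stage Dockerfile shown earlier; a sketch, where the numeric UID and the 75% heap fraction are illustrative choices:

```dockerfile
# Run as an unprivileged user and let the JVM size its heap from the container limit
USER 1000
ENTRYPOINT ["java", "-XX:MaxRAMPercentage=75.0", "org.springframework.boot.loader.launch.JarLauncher"]
```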
- Health Probes for Orchestrators
When running in Kubernetes, your container needs to expose its "health" so the platform knows when to restart it (Liveness) or send it traffic (Readiness).
# Enable specialized groups for Kubernetes
management.endpoint.health.probes.enabled=true
This maps your Actuator health to:
/actuator/health/liveness
/actuator/health/readiness
Note: Spring Boot 3 & Java 17+
Ensure your base images use the same Java version as your build. For Spring Boot 3, you should use eclipse-temurin:17 or higher as your base runtime image to ensure compatibility and performance.
Warning: The "Fat JAR" in a Single Layer
Avoid using COPY target/*.jar app.jar followed by ENTRYPOINT ["java", "-jar", "app.jar"]. While simple, it negates Docker's layer caching, resulting in slow deployments and high storage costs as every minor code change forces a full 100MB+ upload.
Cloud Native Buildpacks
Cloud Native Buildpacks (CNB) provide a higher-level abstraction for building container images compared to traditional Dockerfiles. Instead of manually defining every OS layer and command, Buildpacks automatically transform your application source code into a production-ready, OCI-compliant container image.
Spring Boot integrates with Paketo Buildpacks (the default implementation) to handle dependencies, JDK installation, and performance tuning automatically.
- Why use Buildpacks instead of Dockerfiles?
While Dockerfiles offer maximum control, they often lead to "copy-paste" security vulnerabilities and inefficient layering. Buildpacks standardize the process across an organization.
| Feature | Dockerfile | Cloud Native Buildpacks |
| --- | --- | --- |
| Maintenance | Manual (You update the OS/JDK). | Automatic (The Buildpack updates them). |
| Security | Hard to audit across many files. | Centralized and compliant by default. |
| Layering | Depends on user expertise. | Optimized automatically for Spring Boot. |
| Cross-Platform | Requires local Docker daemon. | Can be run via CLI or CI/CD pipelines. |
- Building Images with Spring Boot
Spring Boot's build plugins (Maven and Gradle) include the build-image goal, which communicates with a Docker daemon to package your application.
Maven Command
./mvnw spring-boot:build-image -Dspring-boot.build-image.imageName=my-registry/my-app:v1
Gradle Command
./gradlew bootBuildImage --imageName=my-registry/my-app:v1
- Key Capabilities of Paketo Buildpacks
When you run the commands above, the Buildpack performs several sophisticated steps:
- JRE Selection: It detects your Java version and installs the appropriate Liberica JRE (the default for Paketo).
- Layering: It automatically implements the Layered JAR structure (Dependencies, Loader, Snapshot, Application) to optimize Docker cache.
- SBOM (Software Bill of Materials): It generates an inventory of all software included in the image, which is vital for security auditing.
- Customizing the Build
You can influence the Buildpack behavior via environment variables in your build configuration (no code changes required).
| Requirement | Environment Variable |
| --- | --- |
| Change Java Version | BP_JVM_VERSION=21 |
| Enable Native Image | BP_NATIVE_IMAGE=true |
| Add CA Certificates | BP_INSTALL_CERT_BINDING=true |
| Custom Build Args | BP_MAVEN_BUILD_ARGUMENTS=-DskipTests |
Example Maven Configuration:
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<image>
<env>
<BP_JVM_VERSION>17</BP_JVM_VERSION>
</env>
</image>
</configuration>
</plugin>
- Re-basing: The "Magic" of Buildpacks
One of the most powerful features of CNB is Re-basing. If a vulnerability is found in the underlying OS (the "Run Image"), you can swap the base image across your entire fleet of containers without rebuilding the application. This is done by simply updating the image metadata, making security patching instantaneous.
[Image comparing Docker image rebuild vs Buildpack rebasing mechanism]
Note: Requirements
To use the build-image goal, you must have a Docker daemon running locally or a remote Docker host configured. The tool uses "Docker-in-Docker" or a sidecar container to execute the build logic.
Warning: Build Times
The first time you run a Buildpack, it may be slower than a simple Dockerfile because it has to download several builder images (often several hundred MBs). However, subsequent builds are highly cached and significantly faster.
Kubernetes Deployment
Kubernetes (K8s) is the de facto standard for orchestrating containerized Spring Boot applications. While Spring Boot runs perfectly as a standalone container, it provides specific features to handle the lifecycle, configuration, and connectivity requirements of a distributed Kubernetes environment.
- Cloud-Native Lifecycle Management
In Kubernetes, pods are ephemeral. Spring Boot handles this by providing native support for Graceful Shutdown and Liveness/Readiness Probes.
| Feature | Property / Endpoint | Kubernetes Role |
| --- | --- | --- |
| Liveness | /actuator/health/liveness | Determines if the container is "alive." If it fails, K8s restarts the pod. |
| Readiness | /actuator/health/readiness | Determines if the app is ready to handle traffic. If it fails, K8s stops sending traffic to it. |
| Graceful Shutdown | server.shutdown=graceful | Ensures the app finishes processing active requests before the pod terminates. |
- Configuration: ConfigMaps and Secrets
Rather than bundling configuration inside the JAR, Spring Boot can load properties directly from Kubernetes ConfigMaps and Secrets.
- ConfigMaps: Used for non-sensitive data (e.g., log levels, feature flags).
- Secrets: Used for sensitive data (e.g., database passwords, API keys).
Spring Cloud Kubernetes allows these to be mapped directly to your @ConfigurationProperties by mounting them as files or environment variables.
# Example Kubernetes Deployment segment
env:
- name: SPRING_DATASOURCE_PASSWORD
valueFrom:
secretKeyRef:
name: db-secret
key: password
- Resource Constraints and the JVM
Since Java 10+, the JVM is "container-aware." It respects the memory and CPU limits set in the Kubernetes deployment manifest.
Best Practice: Always set requests and limits in your YAML to prevent a single pod from consuming all node resources.
resources:
requests:
memory: "512Mi"
cpu: "500m"
limits:
memory: "1Gi"
cpu: "1000m"
- Service Discovery and Load Balancing
In Kubernetes, you don't need a separate Service Discovery tool like Netflix Eureka. K8s provides a built-in Service abstraction that handles internal DNS and load balancing.
- ClusterIP: For internal communication between services.
- NodePort/LoadBalancer: For exposing the application to external traffic.
- Ingress: To manage external access (HTTP/HTTPS) to services with advanced routing rules.
- Deployment Strategies
Spring Boot's stateless nature allows for zero-downtime deployments using standard Kubernetes strategies:
| Strategy | Description | Benefit |
| --- | --- | --- |
| Rolling Update | Replaces old pods with new ones one-by-one. | No downtime; gradual transition. |
| Blue/Green | Provisions a full new version (Green) alongside the old (Blue) before switching traffic. | Easy rollback; zero downtime. |
| Canary | Routes a small percentage of traffic to the new version to test stability. | Minimal risk for new features. |
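The default Rolling Update behavior can be tuned in the Deployment spec. A sketch that keeps serving capacity constant during the rollout (the surge/unavailable values are one reasonable choice, not the only one):

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1        # allow one extra pod during the rollout
    maxUnavailable: 0  # never drop below the desired replica count
```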
- Example: Minimal Deployment Manifest
apiVersion: apps/v1
kind: Deployment
metadata:
name: spring-boot-app
spec:
replicas: 3
selector:
matchLabels:
app: spring-boot-app
template:
metadata:
labels:
app: spring-boot-app
spec:
containers:
- name: app
image: my-registry/spring-boot-app:latest
ports:
- containerPort: 8080
livenessProbe:
httpGet:
path: /actuator/health/liveness
port: 8080
readinessProbe:
httpGet:
path: /actuator/health/readiness
port: 8080
Note: Spring Cloud Kubernetes
While basic K8s features work out of the box, the Spring Cloud Kubernetes library offers advanced features like reloading @ConfigurationProperties automatically when a ConfigMap changes, without needing a pod restart.
Warning: Termination Grace Period
Kubernetes has a default terminationGracePeriodSeconds of 30s. Ensure your Spring Boot spring.lifecycle.timeout-per-shutdown-phase is shorter than the K8s grace period to prevent K8s from forcefully killing the app while it's still trying to shut down gracefully.
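Putting the two settings side by side (20s is an illustrative value, safely below the 30s Kubernetes default):

```properties
server.shutdown=graceful
spring.lifecycle.timeout-per-shutdown-phase=20s
```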
GraalVM Native Images (AOT Compilation)
For years, the Java ecosystem struggled with slow startup times and high memory overhead compared to languages like Go or Rust. GraalVM Native Image technology changes this by using Ahead-of-Time (AOT) compilation to transform a Spring Boot application into a standalone executable (a native binary).
This binary includes the application classes, dependencies, and a subset of the JVM (Substrate VM) required to run the code, resulting in instant startup and a significantly reduced memory footprint.
- JIT vs. AOT Compilation
To understand Native Images, you must compare the traditional Java execution model with the AOT model.
| Feature | Standard JVM (JIT) | GraalVM Native (AOT) |
| --- | --- | --- |
| Compilation | Happens at runtime (Just-In-Time). | Happens at build time (Ahead-Of-Time). |
| Startup Time | Seconds to Minutes (Warm-up needed). | Milliseconds (Instant-on). |
| Memory Usage | High (Requires JVM + Metadata). | Low (Only includes used code). |
| Throughput | High (JIT optimizes based on usage). | Slightly lower (Static optimization). |
| Artifact | Platform-independent .jar. | Platform-specific binary (e.g., Linux executable). |
- The "Closed World" Assumption
AOT compilation relies on a Closed World Assumption: the compiler must know, at build time, all the bytecode that can be executed at runtime.
- Dead Code Elimination: The compiler removes any code that isn't reachable, drastically reducing the binary size.
- Reflection & Proxies: Since Java's dynamic features (Reflection, Dynamic Proxies, Classpath Scanning) happen at runtime, they are difficult for AOT compilers.
- Spring AOT Engine: Spring Boot 3 includes a specialized engine that processes your beans and configurations at build time, generating the necessary "hints" so GraalVM knows how to handle reflection.
- Building a Native Image
Spring Boot provides two primary ways to generate a native binary.
Option A: Using Cloud Native Buildpacks (Docker required)
This is the easiest method as it doesn't require installing GraalVM locally.
./mvnw spring-boot:build-image -Pnative
Option B: Using the Native Build Tools (GraalVM installed)
This generates an executable file directly on your local machine.
./mvnw native:compile -Pnative
- Performance Benchmarks
A typical Spring Boot "Hello World" application demonstrates dramatic improvements when converted to a native image:
| Metric | JVM (Standard) | GraalVM Native |
| --- | --- | --- |
| Startup Time | ~3.5 seconds | ~0.04 seconds |
| Memory (RSS) | ~250 MB | ~40 MB |
| Executable Size | ~50 MB (JAR + JRE) | ~70 MB (Standalone) |
- Ideal Use Cases for Native Images
- Serverless (AWS Lambda, Google Cloud Functions): Fast startup eliminates the "cold start" problem.
- Scale-to-Zero: Ideal for environments where instances are spun up and down instantly based on traffic.
- Resource-Constrained Environments: When running many small microservices on a single Kubernetes node.
- Limitations and Challenges
- Build Time: Compiling a native image is resource-intensive and can take several minutes.
- Reduced Peak Throughput: Because the JIT compiler can't optimize code based on real-time profiling, a native binary may perform slightly slower than a "warmed-up" JVM in long-running processes.
- Compatibility: Some third-party libraries that rely heavily on reflection without providing "Reachability Hints" may fail at runtime.
Note: Reachability Hints
If you use a library that isn't yet "native-ready," you can manually provide hints in src/main/resources/META-INF/native-image/. These JSON files tell GraalVM which classes will be accessed via reflection.
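A minimal reflect-config.json sketch; the class name is hypothetical, and the three flags open up constructors, methods, and fields for reflective access:

```json
[
  {
    "name": "com.example.legacy.LegacyDto",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  }
]
```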
Warning: Debugging Native Binaries
You cannot use a standard Java debugger (like IntelliJ's debugger) on a native binary. You must use tools like GDB or LLDB, which operate at the OS level, making troubleshooting logic errors significantly more complex.