Mastering Concurrency in Java
Understanding CompletableFuture
CompletableFuture is a class in Java that represents the future result of an asynchronous computation. Unlike Future, which only allows you to retrieve the result or handle exceptions, CompletableFuture offers more flexibility. It allows for non-blocking operations and composable handling of asynchronous tasks.
Example: Running Multiple Asynchronous Tasks
```java
CompletableFuture<String> future1 = CompletableFuture.supplyAsync(() -> "Hello");
CompletableFuture<String> future2 = CompletableFuture.supplyAsync(() -> "Beautiful");
CompletableFuture<String> future3 = CompletableFuture.supplyAsync(() -> "World");

CompletableFuture<Void> combinedFuture = CompletableFuture.allOf(future1, future2, future3);

// Wait for all futures to complete
combinedFuture.get();

// Verify that all futures are complete
assertTrue(future1.isDone());
assertTrue(future2.isDone());
assertTrue(future3.isDone());
```

In the above example, we create three CompletableFuture instances that run asynchronously. The allOf method is used to wait until all futures are complete. After the get() method is called, we assert that all futures have completed.
Example: Chaining CompletableFutures with thenCompose
Another powerful feature of CompletableFuture is the ability to chain asynchronous tasks. You can chain one future's result to another asynchronous computation, allowing for more complex workflows. Here's an example of chaining tasks using thenCompose:
```java
CompletableFuture<String> completableFuture = CompletableFuture.supplyAsync(() -> "Hello")
        .thenCompose(s -> CompletableFuture.supplyAsync(() -> s + " World"));

assertEquals("Hello World", completableFuture.get());
```

In this example, we first supply an initial value "Hello" asynchronously. Then, we use thenCompose to chain another asynchronous task that appends " World" to the initial value. Finally, we retrieve the result using get() and assert that the result is "Hello World".
Key Benefits of Using CompletableFuture
Non-blocking Execution: Tasks run asynchronously, which means the main thread is not blocked while waiting for results.
Task Composition: Easily chain or combine multiple asynchronous tasks to build complex workflows.
Exception Handling: With handle and exceptionally, you can define how to handle errors in asynchronous computations.
Parallel Execution: Multiple tasks can be executed concurrently, making the best use of system resources.
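To make the exception-handling point concrete, below is a minimal, self-contained sketch (the class and method names are illustrative, not from the text above): exceptionally supplies a fallback value when a stage fails, while handle is always invoked and receives either the result or the exception.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class ExceptionHandlingExample {
    static String withExceptionally() throws ExecutionException, InterruptedException {
        // The supplier fails, so exceptionally provides a fallback value
        return CompletableFuture
                .<String>supplyAsync(() -> { throw new IllegalStateException("boom"); })
                .exceptionally(ex -> "fallback")
                .get();
    }

    static String withHandle() throws ExecutionException, InterruptedException {
        // handle sees both the result and the exception; exactly one is non-null
        return CompletableFuture
                .supplyAsync(() -> "ok")
                .handle((result, ex) -> ex == null ? result : "error")
                .get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(withExceptionally()); // prints "fallback"
        System.out.println(withHandle());        // prints "ok"
    }
}
```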
Real example in Spring Boot
Below is an example of how to use CompletableFuture in a Spring Boot application. In this example, we fetch user data, send an email notification to the user, and simultaneously process an order. By leveraging CompletableFuture, we can run these tasks asynchronously to speed up execution and improve system responsiveness.
```java
import org.springframework.stereotype.Service;
import java.util.concurrent.*;

@Service
public class TaskService {
    private final UserService userService;
    private final OrderService orderService;
    private static final ExecutorService executorService = Executors.newFixedThreadPool(5);

    // Constructor Injection for UserService & OrderService
    public TaskService(UserService userService, OrderService orderService) {
        this.userService = userService;
        this.orderService = orderService;
    }

    public void runTasks() {
        System.out.println("Starting dependent & independent tasks...");

        Long userId = 123L;
        Long orderId = 456L;

        // Step 1: Fetch user data (Dependent Task 1)
        CompletableFuture<String> fetchUserFuture = CompletableFuture.supplyAsync(() ->
                userService.fetchUserById(userId), executorService
        );

        // Step 2: Send email once user data is fetched (Dependent Task 2)
        CompletableFuture<String> sendEmailFuture = fetchUserFuture.thenApplyAsync(user ->
                userService.sendEmailNotification(user), executorService
        );

        // Independent Task: Process order (does NOT depend on user)
        CompletableFuture<String> processOrderFuture = CompletableFuture.supplyAsync(() ->
                orderService.processOrder(orderId), executorService
        );

        // Wait for all tasks to complete
        CompletableFuture.allOf(sendEmailFuture, processOrderFuture).join();

        // Print final results
        System.out.println("Final Email Status: " + sendEmailFuture.join());
        System.out.println("Final Order Status: " + processOrderFuture.join());

        System.out.println("All tasks completed!");

        // Shut down the ExecutorService properly
        shutdownExecutor();
    }

    private void shutdownExecutor() {
        executorService.shutdown();
        try {
            if (!executorService.awaitTermination(5, TimeUnit.SECONDS)) {
                executorService.shutdownNow();
            }
        } catch (InterruptedException e) {
            executorService.shutdownNow();
        }
    }
}
```

Concurrent Locks in Java
Java provides two main approaches to handling concurrency: using synchronized blocks and using the Lock API. While both mechanisms ensure thread safety, they differ significantly in terms of flexibility and control.
A synchronized block is simple and concise, but it is limited to being fully contained within a method or block. It also doesn't support advanced features like fairness or interruptibility. Any thread can acquire the lock once it's released, and there's no way to specify which thread should acquire it next.
In contrast, the Lock API provides more flexibility. With methods like lock() and unlock(), you can acquire and release locks across different methods. Additionally, locks can be configured with a fair policy, ensuring that the longest-waiting thread gets access first.
The Lock API also offers non-blocking capabilities with the tryLock() method, allowing a thread to attempt acquiring a lock only if it's immediately available. This helps reduce unnecessary blocking and improves responsiveness in high-concurrency scenarios.
Another key feature is lockInterruptibly(), which allows threads waiting for a lock to be interrupted. This is something synchronized blocks do not support — threads blocked on a synchronized lock cannot be interrupted.
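The two features just described can be sketched in one small self-contained example (class and method names are illustrative): tryDoWork() uses the no-argument tryLock() to back off immediately if the lock is busy, while a thread blocked in lockInterruptibly() can be interrupted, which a thread blocked on a synchronized lock cannot.

```java
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleLockDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    // Returns true if the lock was free and the work ran; never blocks.
    static boolean tryDoWork() {
        if (lock.tryLock()) {              // acquire only if immediately available
            try {
                return true;               // critical section
            } finally {
                lock.unlock();
            }
        }
        return false;                      // lock busy: back off instead of waiting
    }

    public static void main(String[] args) throws InterruptedException {
        lock.lock();                       // main thread holds the lock
        Thread waiter = new Thread(() -> {
            try {
                lock.lockInterruptibly(); // blocks, but remains interruptible
                lock.unlock();
            } catch (InterruptedException e) {
                System.out.println("waiter interrupted while waiting");
            }
        });
        waiter.start();
        Thread.sleep(100);                 // give the waiter time to block
        waiter.interrupt();                // a synchronized block could not be interrupted here
        waiter.join();
        lock.unlock();
        System.out.println("tryDoWork after release: " + tryDoWork());
    }
}
```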
Beyond the basic Lock interface, Java also offers the ReadWriteLock interface, which maintains two separate locks: one for read-only operations and another for write operations. Multiple threads can safely acquire the readLock() simultaneously as long as no thread holds the writeLock(). This improves performance in scenarios with frequent read access and infrequent writes.
ReentrantLock in Java
Java's ReentrantLock is a flexible and feature-rich implementation of the Lock interface. Unlike synchronized blocks, it provides greater control over locking, including try-based locking, interruptible lock acquisition, and fairness policies.
A ReentrantLock allows a thread to reacquire the same lock it already holds without causing a deadlock — hence the name "reentrant". Below is a simple example demonstrating its usage:
```java
import java.util.concurrent.locks.ReentrantLock;

public class SharedCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private int counter = 0;

    public void increment() {
        lock.lock();
        try {
            counter++;
            System.out.println("Counter is now: " + counter);
        } finally {
            lock.unlock(); // Always release the lock in a finally block
        }
    }
}
```

It's crucial to place unlock() inside a finally block to ensure the lock is always released, even if an exception is thrown. This prevents potential deadlocks.
Why Reacquire a Lock?
The point of acquiring the same lock multiple times, the "reentrant" part, is to allow a thread to safely call methods that lock the same resource without accidentally deadlocking itself. Without reentrancy, if a thread already held a lock and tried to lock it again (perhaps indirectly by calling another method), it would block forever, waiting for itself.
```java
public void outer() {
    lock.lock();     // acquire once
    try {
        inner();     // inner() also locks
    } finally {
        lock.unlock();
    }
}

public void inner() {
    lock.lock();     // acquire again
    try {
        // work
    } finally {
        lock.unlock();
    }
}
```

Without reentrancy, when inner() calls lock.lock(), it would wait forever because the same thread already has the lock. With a ReentrantLock, the lock is aware of its owner: if the same thread tries to acquire it again, it simply increments an internal hold count and lets the thread proceed.
Internally, a ReentrantLock keeps track of its owning thread and a hold count: each lock() call by the owner increments the count, each unlock() decrements it, and the lock is only fully released when the count reaches zero. In short: reentrancy prevents self-deadlock when the same thread needs to acquire the same lock multiple times.
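This bookkeeping can be observed directly through ReentrantLock's getHoldCount() method. A small self-contained sketch (the class name is ours):

```java
import java.util.concurrent.locks.ReentrantLock;

public class HoldCountDemo {
    static int nestedHoldCount() {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                     // hold count: 1
        try {
            lock.lock();                 // same thread re-enters: hold count 2
            try {
                return lock.getHoldCount();
            } finally {
                lock.unlock();           // back to 1
            }
        } finally {
            lock.unlock();               // back to 0: lock fully released
        }
    }

    public static void main(String[] args) {
        System.out.println("Hold count inside nested section: " + nestedHoldCount()); // prints 2
    }
}
```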
How to avoid waiting indefinitely for a lock?
If you want to avoid waiting indefinitely for a lock, you can use tryLock() with a timeout. This allows a thread to attempt to acquire the lock for a specified duration, and gracefully back off if the lock is unavailable:
```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class SharedCounter {
    private final ReentrantLock lock = new ReentrantLock();

    public void tryIncrement() {
        try {
            if (lock.tryLock(1, TimeUnit.SECONDS)) {
                try {
                    // Critical section
                    System.out.println("Lock acquired, performing operation.");
                } finally {
                    lock.unlock();
                }
            } else {
                System.out.println("Could not acquire lock, skipping operation.");
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            System.out.println("Thread was interrupted while waiting for the lock.");
        }
    }
}
```

In the above example, the thread waits for up to one second to acquire the lock. If it fails to do so, it continues execution without entering the critical section. This pattern is useful in high-concurrency systems where threads should remain responsive.
ReentrantReadWriteLock in Java
ReentrantReadWriteLock is a class that implements the ReadWriteLock interface in Java. It provides a pair of associated locks—one for read-only operations and another for write operations.
Here are the rules for how threads can acquire the read or write lock:
Read Lock: Multiple threads can acquire the read lock as long as no thread holds the write lock or is waiting to acquire it.
Write Lock: Only one thread can acquire the write lock, and only if there are no active readers or writers.
This locking mechanism helps improve performance in read-heavy scenarios while still ensuring thread safety for write operations.
```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SynchronizedHashMapWithReadWriteLock {

    Map<String, String> syncHashMap = new HashMap<>();
    ReadWriteLock lock = new ReentrantReadWriteLock();
    Lock writeLock = lock.writeLock();
    Lock readLock = lock.readLock();

    public void put(String key, String value) {
        writeLock.lock(); // acquire before try, so unlock() runs only when the lock is held
        try {
            syncHashMap.put(key, value);
        } finally {
            writeLock.unlock();
        }
    }

    public String remove(String key) {
        writeLock.lock();
        try {
            return syncHashMap.remove(key);
        } finally {
            writeLock.unlock();
        }
    }

    public String get(String key) {
        readLock.lock();
        try {
            return syncHashMap.get(key);
        } finally {
            readLock.unlock();
        }
    }

    public boolean containsKey(String key) {
        readLock.lock();
        try {
            return syncHashMap.containsKey(key);
        } finally {
            readLock.unlock();
        }
    }
}
```

For both write operations, the critical section must be enclosed within the write lock—only one thread can access it at a time. For read operations, the critical section should be wrapped with the read lock. Multiple threads can access this section concurrently, as long as no write operation is currently in progress.
Condition in Java
The Condition interface allows a thread to wait for certain conditions to be met while inside a critical section. This is useful when a thread acquires a lock but cannot proceed until a specific condition is true. For example, a reader thread may acquire a lock on a shared queue that currently has no data to consume.
Traditionally, Java provides wait(), notify(), and notifyAll() methods for thread coordination. However, the Condition interface offers a more flexible and powerful alternative when used with explicit locks like ReentrantLock.
```java
import java.util.Stack;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockWithCondition {

    Stack<String> stack = new Stack<>();
    int CAPACITY = 5;

    ReentrantLock lock = new ReentrantLock();
    Condition stackEmptyCondition = lock.newCondition();
    Condition stackFullCondition = lock.newCondition();

    // await() throws the checked InterruptedException, so the methods declare it
    public void pushToStack(String item) throws InterruptedException {
        lock.lock();
        try {
            while (stack.size() == CAPACITY) {
                stackFullCondition.await();
            }
            stack.push(item);
            stackEmptyCondition.signalAll();
        } finally {
            lock.unlock();
        }
    }

    public String popFromStack() throws InterruptedException {
        lock.lock();
        try {
            while (stack.size() == 0) {
                stackEmptyCondition.await();
            }
            String item = stack.pop();
            stackFullCondition.signalAll();
            return item;
        } finally {
            lock.unlock();
        }
    }
}
```

CompletionService in Java
A CompletionService is an interface in Java that combines the power of an Executor with a BlockingQueue. It is designed to make it easier to manage and retrieve results from asynchronous tasks, especially when you care about the order of completion.
The Problem
Normally, when you submit multiple Callable tasks to an ExecutorService, you receive a collection of Future objects. To get results, you must call future.get() on each one. However, this approach has two limitations: future.get() blocks until the specific task finishes, and you cannot easily know which task will complete first. This means you may end up waiting unnecessarily even though other tasks have already finished.
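As a sketch of this limitation (the task bodies and class name are illustrative): iterating the futures in submission order blocks on the slow task first, even though the fast one finished long ago.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class PlainExecutorProblem {
    public static List<String> resultsInSubmissionOrder() throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(2);
        try {
            List<Future<String>> futures = new ArrayList<>();
            futures.add(executor.submit(() -> { Thread.sleep(200); return "slow"; }));
            futures.add(executor.submit(() -> "fast"));

            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                // Blocks on the slow task first, even though "fast" is already done
                results.add(f.get());
            }
            return results;
        } finally {
            executor.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(resultsInSubmissionOrder()); // prints [slow, fast]
    }
}
```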
The Solution
CompletionService addresses this problem by internally maintaining a queue of completed tasks. Instead of checking every Future, you can simply take results from this queue as soon as tasks finish — regardless of the order in which they were submitted.
Key Features
Using a CompletionService feels very similar to working with a regular executor: you submit tasks in the same way, but the key difference is in how results are retrieved. Instead of calling get() on each individual Future, you can obtain results in the order they finish. This means faster tasks are returned immediately, even if slower tasks are still running. To support this, the API provides convenient methods such as poll() and take(), which let you retrieve completed results directly from the internal queue.
Main Methods
submit(Callable<V> task) → submits a task and returns a Future<V>.
poll() → retrieves and removes the next completed task, or null if none are available.
take() → retrieves and removes the next completed task, blocking if none are available.
Example
```java
import java.util.concurrent.*;

public class CompletionServiceExample {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(3);
        CompletionService<String> completionService =
                new ExecutorCompletionService<>(executor);

        // Submit 3 tasks
        completionService.submit(() -> {
            Thread.sleep(3000);
            return "Task 1 done";
        });

        completionService.submit(() -> {
            Thread.sleep(1000);
            return "Task 2 done";
        });

        completionService.submit(() -> {
            Thread.sleep(2000);
            return "Task 3 done";
        });

        // Retrieve results as they complete
        for (int i = 0; i < 3; i++) {
            Future<String> future = completionService.take();
            System.out.println(future.get());
        }

        executor.shutdown();
    }
}
```

ScheduledExecutorService
The ScheduledExecutorService is a specialized executor in Java that allows you to schedule tasks to run after a given delay or to execute them periodically at fixed intervals. It is particularly useful when background jobs need to run automatically without manual thread management, such as refreshing data, monitoring resources, or performing routine maintenance.
Unlike a normal executor, a scheduled executor comes with built-in methods for task scheduling. You can set up tasks to run once after a specified delay, or repeatedly at fixed intervals. When scheduling tasks periodically, you have two options: running at a fixed rate or running with a fixed delay. At a fixed rate, the executor attempts to maintain a constant execution frequency, starting new runs according to the schedule regardless of whether previous runs have finished. With a fixed delay, however, the executor always waits for the previous task to complete and then applies the delay before starting the next run. This ensures there is never overlap between runs.
The executor also manages its own pool of worker threads internally, so you don't need to worry about creating or supervising threads manually. This makes it both convenient and safer to use in concurrent applications where timing precision and thread reuse are important.
```java
import java.util.concurrent.*;

public class ScheduledExecutorExample {
    public static void main(String[] args) {
        // Create a ScheduledExecutorService with 2 threads
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

        // Task that runs once after 3 seconds
        scheduler.schedule(() -> {
            System.out.println("One-time task after 3s");
        }, 3, TimeUnit.SECONDS);

        // Task that runs repeatedly every 5 seconds, starting after 2 seconds
        scheduler.scheduleAtFixedRate(() -> {
            System.out.println("Repeating task at fixed rate: " + System.currentTimeMillis());
        }, 2, 5, TimeUnit.SECONDS);

        // Task that runs repeatedly with a 5-second delay after the previous one finishes
        scheduler.scheduleWithFixedDelay(() -> {
            System.out.println("Repeating task with fixed delay: " + System.currentTimeMillis());
        }, 2, 5, TimeUnit.SECONDS);
    }
}
```

To summarize the difference: scheduleAtFixedRate tries to run tasks on a strict timeline, which may lead to overlapping if tasks take longer than the interval, while scheduleWithFixedDelay waits for each task to finish before starting the next one, guaranteeing no overlap between executions.
Volatile Keyword
Modern processors use multiple CPU cores, and each core has its own L1, L2, and sometimes L3 caches to speed up access to data. Once a value is fetched from main memory (RAM), the CPU may cache it locally so that future reads are faster. The downside of this optimization is that if one core updates a shared variable, other cores may still see an outdated value from their own cache instead of the latest version in memory. This leads to visibility problems and inconsistent program behavior.
Declaring a variable as volatile solves this issue. The keyword tells the JVM and CPU that the variable must always be read directly from main memory, and that any write to it must be flushed immediately back to main memory. As a result, when one thread updates a volatile variable, other threads are guaranteed to see the most recent value without relying on synchronization blocks or locks. While this ensures visibility, it does not guarantee atomicity for compound actions such as count++.
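A small demonstration of that last point (the class and field names are ours): two threads incrementing a volatile int can lose updates, because ++ is a separate read, modify, and write, while AtomicInteger performs the increment as a single atomic operation.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class VolatileNotAtomic {
    static volatile int volatileCount = 0;               // visible, but ++ is not atomic
    static final AtomicInteger atomicCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                volatileCount++;                         // read-modify-write: updates can be lost
                atomicCount.incrementAndGet();           // single atomic operation
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();

        System.out.println("volatile total: " + volatileCount);     // usually less than 200000
        System.out.println("atomic total:   " + atomicCount.get()); // always 200000
    }
}
```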
Happens-before Relationship
The volatile keyword does more than just prevent a variable from being cached in a register. Under the Java Memory Model, it also establishes a happens-before relationship between writes and subsequent reads of that variable. This means that changes made by one thread are always visible to others, not only for the volatile variable itself but also for other variables modified before the volatile write.
This property is crucial for ensuring correct visibility across threads. A common example is when one thread runs a loop controlled by a finished flag. Initially, the flag is set to false, and another thread updates it to true to signal termination. Without volatile, the loop might never stop because the update may not be visible across threads.
The memory model guarantees that when one thread sets finished = true, all prior operations in that thread (such as the work inside doStuff()) will also become visible to other threads before they observe the updated flag. This prevents unsafe reordering and ensures predictable behavior. Using a synchronized block or lock could achieve the same result, but it is generally more expensive than relying on volatile.
Reader thread:

```java
volatile boolean finished = false;

while (!finished) {
    doMyWork();
}
```

Writer thread:

```java
doStuff();
finished = true;
```

Lock-free Programming
```java
import java.util.concurrent.atomic.AtomicInteger;

public class LockFreePseudoRandom {
    private final AtomicInteger seed;

    public LockFreePseudoRandom(int initialSeed) {
        this.seed = new AtomicInteger(initialSeed);
    }

    public int nextInt(int n) {
        while (true) {
            int current = seed.get();
            int next = calculateNext(current);

            // Atomically update seed only if it has not been changed by another thread
            if (seed.compareAndSet(current, next)) {
                int remainder = current % n;
                return remainder >= 0 ? remainder : remainder + n;
            }
            // If CAS fails, loop retries with the latest value
        }
    }

    private int calculateNext(int s) {
        return (s * 48271) % Integer.MAX_VALUE;
    }
}
```

This example demonstrates how to use atomic variables to implement a lock-free class. The critical section is the update of the seed in the nextInt() method. Normally this fetch–update–store sequence would require synchronization to prevent race conditions. However, using AtomicInteger, we can achieve the same result without blocking threads.
The compareAndSet() (CAS) method plays a key role. It attempts to update the value only if it still matches the expected value. If another thread has already changed it, the CAS operation fails, and the loop retries with the new value. This ensures atomicity without locks.
Lock-free approaches avoid thread blocking, which improves scalability and responsiveness. Under heavy contention, performance may be similar to using locks, since threads may repeatedly retry. However, under moderate or low contention, lock-free code is usually much faster because threads never wait—they always make progress.
CountDownLatch
A CountDownLatch is a synchronization aid in Java that allows one or more threads to wait until a set of operations being performed in other threads are completed. Think of it like a gate that remains closed until a certain number of signals have been received—once the count reaches zero, the gate opens and all waiting threads can continue.
The latch is initialized with a count (an integer). Each time a worker thread calls countDown(), the count decreases by one. Meanwhile, any threads calling await() will block until the count hits zero. Once it does, all waiting threads are released at the same time.
A common use case is when the main thread needs to wait for several worker threads to complete before moving forward. This is useful in scenarios like waiting for multiple services to start up, loading resources before continuing, or synchronizing phases of execution.
```java
import java.util.concurrent.CountDownLatch;

public class CountDownLatchExample {
    public static void main(String[] args) throws InterruptedException {
        int numWorkers = 3;
        CountDownLatch latch = new CountDownLatch(numWorkers);

        // Worker threads
        for (int i = 1; i <= numWorkers; i++) {
            final int workerId = i;
            new Thread(() -> {
                try {
                    System.out.println("Worker " + workerId + " is working...");
                    Thread.sleep((long) (Math.random() * 2000)); // simulate work
                    System.out.println("Worker " + workerId + " finished.");
                } catch (InterruptedException e) {
                    e.printStackTrace();
                } finally {
                    latch.countDown(); // signal completion
                }
            }).start();
        }

        System.out.println("Main thread waiting for workers...");
        latch.await(); // waits until count reaches 0
        System.out.println("All workers finished. Main thread proceeding.");
    }
}
```

Thread-Safe Singleton Implementation
The Singleton pattern is one of the most common design patterns in software development. It ensures that a class has only a single instance throughout the application lifecycle, while providing global access to that instance.
Typical use cases include database connection pools that manage limited connections efficiently, centralized logger instances, cache managers that share data across components, and thread pools for concurrent operations.
However, implementing a Singleton in multi-threaded environments introduces challenges. Without proper thread safety, multiple threads may simultaneously create separate instances, breaking the Singleton guarantee and leading to inconsistent state or resource conflicts.
```java
public class SimpleSingleton {
    private static SimpleSingleton instance;

    private SimpleSingleton() { }

    public static SimpleSingleton getInstance() {
        if (instance == null) {
            instance = new SimpleSingleton();
        }
        return instance;
    }
}
```

The above implementation works fine in a single-threaded application. But in a multi-threaded context, two threads calling getInstance() at the same time may both create different instances. One way to solve this is by synchronizing the method:
```java
public static synchronized SynchronizedSingleton getInstance() { ... }
```

This guarantees mutual exclusion, ensuring only one instance is created. However, it comes with a performance penalty since every call to getInstance() is synchronized, even after the instance has been initialized.
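For completeness, here is what that synchronized variant looks like spelled out as a full class. This is a sketch: the elided method body is assumed to be the standard lazy null check from the earlier example.

```java
public class SynchronizedSingleton {
    private static SynchronizedSingleton instance;

    private SynchronizedSingleton() { }

    // synchronized serializes all callers, even after the instance exists
    public static synchronized SynchronizedSingleton getInstance() {
        if (instance == null) {
            instance = new SynchronizedSingleton();
        }
        return instance;
    }
}
```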
Another common approach is eager initialization, where the instance is created at class loading time:
```java
private static final EagerSingleton INSTANCE = new EagerSingleton();
```

This is inherently thread-safe since the JVM ensures that class initialization is atomic. The downside is that the instance is created whether or not it is ever used, which can be wasteful if it holds expensive resources.
A more elegant and widely recommended approach is the Bill Pugh Singleton, which leverages a static inner class:
```java
public class SingletonObject {
    private SingletonObject() { }

    private static class SingletonHelper {
        private static final SingletonObject SINGLETON_INSTANCE = new SingletonObject();
    }

    public static SingletonObject getInstance() {
        return SingletonHelper.SINGLETON_INSTANCE;
    }
}
```

Here, the Singleton instance is created only when the inner class is first referenced. This provides both laziness (the instance is not created until needed) and thread safety without the cost of synchronization.