Java concurrency in practice with examples

This Java study material is organized around Brian Goetz's book Java Concurrency in Practice, extended with Java 7 and 8 features and Project Loom. Runnable Java examples are provided for each chapter.

Chapter 1 Introduction

I Fundamentals


Chapter 2 Thread Safety

Chapter 3 Sharing Objects

Chapter 4 Composing Objects

Chapter 5 Building Blocks

II Structuring Concurrent Applications




Chapter 9 GUI Applications

III Liveness, Performance, and Testing




IV Advanced Topics







V Concurrency in Java 8 (lambdas and parallel streams)








VI Concurrency beyond Java 8 (concurrency in other languages)







Understanding the C Memory Model and pthreads: The Foundation of Modern Concurrency


Scoped Values and Thread-Local Alternatives

We have explored how structured concurrency makes multithreaded code safer and easier to manage. Now we look at a new primitive in the Java concurrency toolbox: ScopedValue. This feature replaces many use cases of ThreadLocal with a model better suited for virtual threads.


What Is ScopedValue?

ScopedValue is a safe, immutable, and inheritable thread context designed to replace ThreadLocal in virtual-thread environments. Unlike ThreadLocal, which relies on mutable state tied to the thread, ScopedValue is a single-assignment object bound to a well-defined scope.

You define a value in a scope, and all code within that scope — including tasks running in virtual threads — can access it.


Why Not ThreadLocal?

ThreadLocal has long been used for per-thread data like user sessions, log context, and request-scoped state. But in the virtual thread world, it poses problems:

  • It’s mutable, leading to subtle bugs in shared environments
  • It leaks memory if not cleaned up properly
  • It’s expensive to maintain across millions of virtual threads

ScopedValue solves these issues by being immutable and scope-bound.
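
For contrast, here is a minimal sketch of the cleanup discipline ThreadLocal demands (processRequest() is a placeholder, as in the example below); forgetting the remove() call is a classic source of leaks on pooled or long-lived threads:

// The mutable ThreadLocal pattern that ScopedValue is meant to replace
static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<>();

void handleRequest(String userId) {
    CURRENT_USER.set(userId);      // mutable, per-thread state
    try {
        processRequest();          // reads CURRENT_USER.get()
    } finally {
        CURRENT_USER.remove();     // manual cleanup; forgetting this leaks the value
    }
}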


Basic Usage

static final ScopedValue<String> USER = ScopedValue.newInstance();

void handleRequest(String userId) {
    // Bind USER to userId for the duration of the run(...) block
    ScopedValue.where(USER, userId).run(() -> {
        logRequest();       // can read USER.get(), e.g. to log "user=" + USER.get()
        processRequest();   // sees the same binding of USER
    });
}

Inside the ScopedValue.where(...).run() block, the value is safely accessible via USER.get(). Outside the block, USER.get() throws NoSuchElementException because no binding is in effect.


ScopedValue vs ThreadLocal Comparison

Feature                 | ThreadLocal           | ScopedValue
Mutable                 | Yes                   | No
Garbage safety          | Manual cleanup needed | Automatically scoped
Virtual thread friendly | No                    | Yes
Default value support   | Yes                   | No (must be explicitly set)
Inheritance             | Yes (but buggy)       | Explicit
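
Because a ScopedValue has no default value, a read outside any binding fails; when a binding may or may not be present, the API's isBound() and orElse(...) accessors can guard the read. A minimal sketch, reusing the USER value declared above:

// Guarded reads when USER may not be bound in the current scope
String name = USER.isBound() ? USER.get() : "anonymous";

// Equivalent, more concise form using a fallback value
String nameOrDefault = USER.orElse("anonymous");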

ScopedValue and Structured Concurrency

ScopedValue works naturally with StructuredTaskScope. For example:

// CTX is a ScopedValue<String>, e.g. static final ScopedValue<String> CTX = ScopedValue.newInstance();
ScopedValue.where(CTX, "alice").run(() -> {
    try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
        scope.fork(() -> logContext());   // reads CTX.get() == "alice"
        scope.fork(() -> audit());        // reads CTX.get() == "alice"
        scope.join();                     // wait for both subtasks
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
});

All subtasks share the same value safely without mutable thread-local state.


Best Practices

  • Use ScopedValue for all new context-passing code
  • Avoid ThreadLocal in virtual thread-based systems
  • Use ScopedValue.get() only inside ScopedValue.where(...) blocks

Conclusion

ScopedValue is a modern replacement for ThreadLocal that aligns with virtual threads, structured concurrency, and clean context propagation. It brings safety and immutability to multithreaded environments and avoids common pitfalls of legacy thread-local storage.


Structured Concurrency

We explored how virtual threads enable lightweight, high-scale concurrency with traditional blocking logic. Now, we introduce structured concurrency, a powerful model that brings clarity, safety, and predictability to multithreaded programming in Java.


What Is Structured Concurrency?

Structured concurrency is a programming model that treats multiple concurrent tasks running in a method as part of a single unit of work. All child threads must complete (or be canceled) before the parent scope exits. This ensures threads are well-scoped, making the code easier to reason about, debug, and manage.

Think of it as structured control flow for threads — just like try-with-resources manages resource lifecycles, structured concurrency manages the lifecycle of spawned threads.


Java 21’s API: StructuredTaskScope

Java 21 introduces StructuredTaskScope (as a preview API) in java.util.concurrent. It provides a way to spawn concurrent subtasks, wait for them to complete, and cancel the remaining ones on success or failure.

try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
    StructuredTaskScope.Subtask<String> user  = scope.fork(() -> fetchUser());
    StructuredTaskScope.Subtask<String> order = scope.fork(() -> fetchOrder());

    scope.join();           // Wait for all subtasks
    scope.throwIfFailed();  // Propagate the first failure, if any

    return user.get() + " - " + order.get();
}

This model ensures both fetchUser() and fetchOrder() complete within the method’s scope. If one fails, the other is canceled automatically.


StructuredTaskScope Variants

Java provides the base class and two built-in shutdown policies:

  • StructuredTaskScope.ShutdownOnSuccess — cancels the remaining subtasks as soon as one succeeds
  • StructuredTaskScope.ShutdownOnFailure — cancels all subtasks as soon as one fails (fail-fast)
  • StructuredTaskScope — the base class: manual join, no automatic shutdown

These offer granular control over what should happen when subtasks succeed or fail.
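
For example, ShutdownOnSuccess is a natural fit for racing redundant sources and keeping the first answer. A minimal sketch, assuming hypothetical fetchFromMirrorA() and fetchFromMirrorB() methods:

// First successful result wins; the slower subtask is cancelled automatically
String fetchFromAnyMirror() throws InterruptedException, ExecutionException {
    try (var scope = new StructuredTaskScope.ShutdownOnSuccess<String>()) {
        scope.fork(() -> fetchFromMirrorA());
        scope.fork(() -> fetchFromMirrorB());
        scope.join();            // waits until one subtask succeeds (or all fail)
        return scope.result();   // result of the first successful subtask, or throws if all failed
    }
}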


Benefits of Structured Concurrency

  • Automatic cancellation of unused tasks
  • Exception propagation from subtasks to parent
  • Scoped lifetimes – no forgotten threads running in the background
  • Better observability – threads appear nested and scoped in traces

Best Practices

  • Use StructuredTaskScope for request-scoped parallelism
  • Prefer ShutdownOnFailure for fail-fast logic
  • Don’t leak tasks outside the scope
  • Combine with virtual threads for low-cost task spawning

What’s Next?

Structured concurrency is one of the cleanest additions to Java’s concurrent toolkit in years. Next, we’ll explore ScopedValue, Java's alternative to thread-local variables, optimized for virtual threads and structured concurrency.


Transforming Java Concurrency with Virtual Threads (Project Loom)

With the arrival of virtual threads in Java 21, concurrent programming in Java has taken a major leap forward. Virtual threads are lightweight, memory-efficient threads managed by the JVM rather than the OS, allowing millions of concurrent tasks to run efficiently. This post explains how Java supports virtual threads, how they interact with the Java memory model, and how to modernize your existing code using Thread, ExecutorService, and Future.


What Are Virtual Threads?

Virtual threads (part of Project Loom) are lightweight user-mode threads. Unlike platform threads (which map 1:1 with OS threads), virtual threads are scheduled by the JVM and can be suspended or resumed without kernel intervention.

They support blocking operations without blocking the underlying kernel thread, which dramatically simplifies concurrent applications that traditionally relied on callbacks, thread pools, or complex async APIs.

In traditional Java, every Thread corresponds to a native OS thread. These threads are expensive to create (megabytes of memory per stack) and limited in number (~thousands max). In contrast, virtual threads are user-mode threads scheduled by the JVM, not the OS. The JVM can multiplex millions of virtual threads onto a small pool of carrier (platform) threads.
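
To make the difference concrete, the following sketch (a small self-contained demo added here for illustration) starts 100,000 sleeping tasks on a virtual-thread-per-task executor, a load that a fixed pool of platform threads could not absorb cheaply:

import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadScaleDemo {
    public static void main(String[] args) {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 100_000).forEach(i -> executor.submit(() -> {
                Thread.sleep(Duration.ofSeconds(1));   // blocking is cheap on a virtual thread
                return i;
            }));
        } // the executor's close() waits for all submitted tasks to finish
    }
}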


Threading Model Comparison:

Feature              | Platform Thread          | Virtual Thread
Mapped to OS thread  | Yes (1:1)                | No (many:few)
Thread creation cost | High (kernel allocation) | Low (JVM-managed)
Scalability          | Thousands                | Millions
Blocking I/O         | Blocks the OS thread     | Unmounts, then resumes
Stack size           | Fixed (~1 MB)            | Small and growable

Memory Model

Virtual threads fully comply with Java’s memory model. Just like platform threads, they:

  • Have their own call stack
  • Share heap memory with other threads
  • Respect synchronized, volatile, and other concurrency constructs

However, virtual threads can be unmounted (paused) when blocked and remounted on another carrier thread later. This behavior is fully transparent to developers but allows far better scalability.
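
As a small illustration (a hypothetical demo, not from the original text), a volatile flag gives the same visibility guarantee between two virtual threads as it does between platform threads:

public class VolatileFlagDemo {
    static volatile boolean ready = false;   // the volatile write/read pair establishes happens-before
    static int payload = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread writer = Thread.startVirtualThread(() -> {
            payload = 42;    // ordinary write...
            ready = true;    // ...published by the subsequent volatile write
        });
        Thread reader = Thread.startVirtualThread(() -> {
            while (!ready) {             // volatile read; loop exits once the write is visible
                Thread.onSpinWait();
            }
            System.out.println("payload = " + payload);   // guaranteed to print 42
        });
        writer.join();
        reader.join();
    }
}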




Migrating Traditional Concurrency to Virtual Threads

Replace new Thread() with Thread.startVirtualThread()

// Before
new Thread(() -> {
    handleRequest();
}).start();

// After
Thread.startVirtualThread(() -> {
    handleRequest();
});

Replace Fixed Thread Pools with Virtual Thread Executors

// Before
ExecutorService pool = Executors.newFixedThreadPool(10);

// After
ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor();

With virtual threads, there’s no need to manually limit thread pool size in most applications — the JVM efficiently schedules virtual threads.

Futures Work Seamlessly

Future<Integer> result = pool.submit(() -> {
    return computeValue();
});
int value = result.get(); // blocking is fine with virtual threads

Native Blocking and Pinning

Certain operations can "pin" virtual threads to carrier threads:

  • Calling native code (JNI) that blocks
  • Blocking (for example on I/O) while inside a synchronized block or method
  • Holding monitors for long periods

To avoid pinning, prefer:

  • java.nio non-blocking channels for I/O
  • java.util.concurrent.locks.ReentrantLock over synchronized when a lock is held across blocking I/O
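
A minimal sketch of the second suggestion, with a hypothetical writeToSocket(...) standing in for blocking I/O: blocking while holding a ReentrantLock, unlike blocking inside a synchronized block, lets the virtual thread unmount from its carrier.

import java.util.concurrent.locks.ReentrantLock;

class ConnectionWriter {
    private final ReentrantLock lock = new ReentrantLock();   // j.u.c lock instead of synchronized

    void send(byte[] payload) {
        lock.lock();                 // a virtual thread blocked here can unmount from its carrier
        try {
            writeToSocket(payload);  // blocking I/O while holding this lock does not pin the carrier
        } finally {
            lock.unlock();
        }
    }

    private void writeToSocket(byte[] payload) {
        // placeholder for the actual blocking write
    }
}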

Testing & Observability

Use jcmd thread dumps to verify that virtual threads are being used. The traditional jcmd <pid> Thread.print command lists only platform threads; the file-based dump added in JDK 21 includes virtual threads:

jcmd <pid> Thread.dump_to_file -format=json threads.json

Virtual threads appear in that dump, and in logs and toString() output they show up as VirtualThread[#<id>], confirming that Loom is working as expected.


Migration Checklist

  • ✅ Replace new Thread(...) with Thread.startVirtualThread()
  • ✅ Switch from fixed thread pools to Executors.newVirtualThreadPerTaskExecutor()
  • ✅ Avoid blocking native (JNI) APIs on virtual threads, or offload them to a dedicated platform-thread executor (e.g. CompletableFuture.supplyAsync(task, executor)); see the sketch after this list
  • ✅ Review thread-local usage — avoid leaking references
  • ✅ Profile to avoid long-held locks in synchronized sections
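
A hypothetical sketch of the offloading item above (NATIVE_POOL and readSensorNative() are illustrative names, not a real API): the blocking native call runs on a small pool of platform threads, so it can never pin a virtual-thread carrier.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class NativeOffload {
    // Dedicated platform-thread pool reserved for blocking native calls
    private static final ExecutorService NATIVE_POOL = Executors.newFixedThreadPool(4);

    static CompletableFuture<byte[]> readSensorAsync() {
        // readSensorNative() stands in for a JNI call that blocks the OS thread
        return CompletableFuture.supplyAsync(NativeOffload::readSensorNative, NATIVE_POOL);
    }

    private static byte[] readSensorNative() {
        return new byte[0];   // placeholder for the native call's result
    }
}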

Conclusion

Virtual threads modernize Java’s threading model, making high-concurrency programming simpler, safer, and more scalable. You can now write direct-style, blocking code that performs as well as complex async code — and converting existing thread-based applications is mostly straightforward.

Project Loom doesn't replace the need for understanding Java concurrency — but it reduces boilerplate and expands what’s possible in memory- and thread-efficient applications.