Java interview questions

Lambdas vs Anonymous Classes

In Java, both anonymous classes and lambda expressions allow developers to define small blocks of behavior inline — typically to pass as callbacks or functional arguments. However, the way they are compiled and executed under the hood is quite different, with important implications for performance and memory efficiency.

An anonymous inner class is a full-fledged, unnamed class created at compile time. Each one generates a separate .class file (for example, OuterClass$1.class) that must be loaded and verified at runtime. This introduces additional overhead related to class loading, memory allocation, and object instantiation. Each use of an anonymous class typically results in a new object created on the heap.
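For illustration, a minimal sketch (the `Greeter` class name is made up) showing the extra class file and the per-use allocation:

```java
// Compiling this file produces Greeter.class plus Greeter$1.class
// for the anonymous Runnable below.
public class Greeter {
    static Runnable makeGreeting() {
        return new Runnable() {            // compiled to Greeter$1.class
            @Override
            public void run() {
                System.out.println("Hello from an anonymous class");
            }
        };
    }

    public static void main(String[] args) {
        // Each call allocates a fresh object on the heap.
        Runnable a = makeGreeting();
        Runnable b = makeGreeting();
        System.out.println(a == b); // false: two distinct instances
        a.run();
    }
}
```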

On the other hand, lambda expressions are implemented more efficiently using the invokedynamic instruction introduced in Java 7. Instead of creating a new class file, the JVM links the lambda to its corresponding functional interface at runtime, often reusing the same instance if no variables are captured from the enclosing scope. This means that lambdas typically have far less overhead in both memory and class-loading time.

Modern JVMs further optimize lambdas through techniques like method inlining — especially when a lambda is small or frequently invoked in a tight loop. In some cases, such as with method references, the compiler can even bypass object creation altogether, directly referencing the existing method.

Key differences:
• Anonymous classes compile to a separate .class file per declaration, and each use allocates a new object.
• Lambdas use invokedynamic for dynamic runtime binding, avoiding extra class files.
• Anonymous classes always allocate a new object on the heap.
• Stateless lambdas (that do not capture variables) can be reused and optimized by the JVM.
• Lambdas enable more efficient inlining and reduced method call overhead.

In short, while anonymous classes provide flexibility and full object-oriented semantics, lambdas are lighter, more efficient, and better aligned with functional programming styles. They are preferred in modern Java for concise, performant, and cleaner code.
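A small sketch of the reuse difference. Note that instance caching for non-capturing lambdas is a HotSpot implementation detail, not a language guarantee:

```java
import java.util.function.Supplier;

public class LambdaIdentity {
    static Supplier<String> makeLambda() {
        return () -> "hello";              // non-capturing (stateless) lambda
    }

    static Supplier<String> makeAnon() {
        return new Supplier<String>() {    // anonymous class: new object per call
            @Override
            public String get() { return "hello"; }
        };
    }

    public static void main(String[] args) {
        // On HotSpot, the same invokedynamic call site typically hands back
        // one cached instance for a non-capturing lambda (not guaranteed by spec).
        System.out.println(makeLambda() == makeLambda());
        // The anonymous class always allocates a new object.
        System.out.println(makeAnon() == makeAnon()); // false
    }
}
```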

What is Inlining?

Inlining is one of the most fundamental and powerful optimizations performed by the JVM's Just-In-Time (JIT) compiler. Inlining means replacing a method call with the method's actual body of code. Instead of performing a separate function call — which involves stack setup, jumps, and returns — the JIT copies the method's body directly into the caller at runtime.

int square(int x) {
    return x * x;
}

int compute(int a, int b) {
    return square(a) + square(b);
}

Without inlining, the compiled bytecode performs:

• Two separate method invocations (square(a) and square(b))
• Two additional stack frames (one per call)
• Two jumps and returns

When the JIT compiler detects that square() is small and frequently called, it inlines the method. The resulting optimized code looks like this:

int compute(int a, int b) {
    return (a * a) + (b * b);
}

By inlining, the JVM avoids unnecessary method calls and enables further optimizations. The benefits include:

Reduced overhead: Eliminates call/return and stack frame setup costs, especially in tight loops.
Deeper optimization opportunities: After inlining, the JIT can perform additional optimizations such as constant folding, loop unrolling, and dead code elimination.
Improved branch prediction: The resulting code becomes smaller and more predictable, allowing the CPU to optimize execution paths.

However, inlining is not always beneficial. Excessive inlining can lead to code bloat, which increases the size of compiled machine code. This can:

• Reduce instruction cache locality (larger code takes longer to fetch).
• Increase JIT compilation time.
• Occasionally degrade performance in very large applications.
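HotSpot can report its inlining decisions through diagnostic flags; a sketch of a hot method to observe (the class name is made up, and the flags are HotSpot-specific):

```java
// Run with: java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining InlineProbe
// and look for lines mentioning InlineProbe::square marked as inlined.
public class InlineProbe {
    static int square(int x) {
        return x * x;                      // tiny body: a prime inlining candidate
    }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += square(i);              // hot call site, warms up the JIT
        }
        System.out.println(sum);
    }
}
```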

In summary, inlining trades off a small increase in code size for a large potential gain in runtime performance. It's one of the JVM's most important optimizations for achieving the speed of native code while maintaining Java's flexibility and portability.

What is invokedynamic?

Traditionally, the JVM supported four main bytecode instructions for calling methods — invokestatic, invokevirtual, invokespecial, and invokeinterface. These worked perfectly for statically typed languages like Java.

But for dynamic languages on the JVM (like Groovy or JRuby) — and later for features such as Java's own lambdas — this model was too rigid. Each language had to generate complex, custom bytecode just to support dynamic method dispatch — that is, deciding which method to call only at runtime.

That's where invokedynamic comes in. It allows the JVM to defer method linkage — meaning *what method is actually called* — until runtime. This makes the JVM itself responsible for dynamic resolution, rather than forcing every language to reinvent the wheel.

• When the JVM encounters an invokedynamic instruction for the first time, it runs a bootstrap method.
• That bootstrap method acts like a factory — it decides how to link the call site and returns a CallSite object.
• The CallSite contains a MethodHandle — a direct reference to the target method.
• After this first setup, the JVM caches the linkage, so subsequent calls go straight to that MethodHandle — no extra overhead.
Runnable r = () -> System.out.println("Hello");

When you write a lambda like the one above, the Java compiler doesn't generate a new class file for the lambda. Instead, it emits an invokedynamic instruction. At runtime, the JVM:

• Calls LambdaMetafactory.metafactory(...).
• Dynamically generates a hidden class that implements Runnable.
• Links that generated instance to the call site.
• Optionally returns a cached singleton lambda instance if possible.
• Future lambda calls go directly through the cached MethodHandle — no reflection, no overhead.
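The MethodHandle plumbing that invokedynamic linkage produces can also be used directly through the java.lang.invoke API; a minimal sketch:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class HandleDemo {
    public static void main(String[] args) throws Throwable {
        // Resolve String.toUpperCase() to a direct MethodHandle --
        // the same kind of target a CallSite holds after linkage.
        MethodHandle mh = MethodHandles.lookup().findVirtual(
                String.class, "toUpperCase",
                MethodType.methodType(String.class));

        // invokeExact requires the call-site type to match exactly:
        // (String)String, with the receiver as the first argument.
        String result = (String) mh.invokeExact("hello");
        System.out.println(result); // HELLO
    }
}
```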

In essence, invokedynamic makes dynamic features — such as lambdas, method references, and even entire dynamic languages — run efficiently on the JVM. It bridges the gap between *statically compiled bytecode* and *runtime flexibility*.

Think of invokedynamic as the JVM's built-in way to say: “I'll figure out what method to call later — but once I know, I'll make it fast.”

Standard memory model

In the JVM memory model, local variables declared inside a method (e.g., MyObject obj) store references, which are essentially pointers to objects. These references are stored in the current thread's stack frame, making them thread-local and fast to access.

The actual objects themselves are typically allocated on the heap when using new. The heap is shared across threads and managed by the garbage collector, which handles memory allocation and reclamation automatically.

However, modern JVMs do not strictly follow this simple stack-versus-heap model due to aggressive runtime optimizations.

One important optimization is escape analysis. If the JVM determines that an object does not escape the scope of a method (for example, it is not returned or shared with other threads), it may allocate the object on the stack instead of the heap. This avoids heap allocation overhead and reduces pressure on the garbage collector.

Another optimization is scalar replacement, where the JVM eliminates the object allocation entirely. Instead of creating an object, its individual fields are broken down and stored directly in registers or on the stack, allowing for more efficient execution.
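A sketch of a typical candidate for these optimizations (whether the JIT actually applies them depends on the JVM and compilation tier; the class names here are made up):

```java
public class EscapeDemo {
    static final class Point {             // small object, never escapes below
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    static int distanceSquared(int x, int y) {
        Point p = new Point(x, y);         // never returned, stored, or shared:
        return p.x * p.x + p.y * p.y;      // eligible for scalar replacement
    }

    public static void main(String[] args) {
        System.out.println(distanceSquared(3, 4)); // 25
    }
}
```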

In summary, while the conceptual model is “references on the stack and objects on the heap,” modern JVMs dynamically optimize this behavior at runtime to reduce allocation costs and improve performance.

Serialization and Deserialization

Serialization is the process of converting an object's state into a byte stream, while deserialization is the reverse process of reconstructing an object from that byte stream. This mechanism is commonly used for persistence (saving objects to files or databases) and network communication (sending objects between different JVMs).

Serializable Interface: A marker interface (it contains no methods) that a class must implement to be eligible for serialization.

ObjectOutputStream: Used to serialize objects via the writeObject() method.

ObjectInputStream: Used to deserialize objects via the readObject() method.

transient Keyword: Marks fields that should not be serialized, such as sensitive data (e.g., passwords) or temporary values.

static Fields: These are not serialized because they belong to the class rather than individual object instances.

serialVersionUID: A unique identifier used for class versioning. If the sender and receiver have mismatched IDs, deserialization will fail with an InvalidClassException.

import java.io.*;

class User implements Serializable {
    private static final long serialVersionUID = 1L;

    private String name;
    private transient String password; // Will not be serialized

    public User(String name, String password) {
        this.name = name;
        this.password = password;
    }

    public static void main(String[] args) {
        // Serialization
        try (ObjectOutputStream oos =
                 new ObjectOutputStream(new FileOutputStream("user.ser"))) {

            User user = new User("Alice", "secret123");
            oos.writeObject(user);

        } catch (IOException e) {
            e.printStackTrace();
        }

        // Deserialization
        try (ObjectInputStream ois =
                 new ObjectInputStream(new FileInputStream("user.ser"))) {

            User user = (User) ois.readObject();
            System.out.println(user.name);      // Prints "Alice"
            System.out.println(user.password);  // Prints null (transient)

        } catch (IOException | ClassNotFoundException e) {
            e.printStackTrace();
        }
    }
}

You can override writeObject() and readObject() within your class to add custom logic, such as encryption or validation during serialization and deserialization.
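A sketch of these hook methods (the Account class and its validation rule are invented for illustration):

```java
import java.io.*;

class Account implements Serializable {
    private static final long serialVersionUID = 1L;
    private long balance;

    Account(long balance) { this.balance = balance; }
    long balance() { return balance; }

    // Called by ObjectOutputStream in place of the default mechanism.
    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();          // still write fields the default way
    }

    // Called by ObjectInputStream: a natural place for validation.
    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        if (balance < 0) {
            throw new InvalidObjectException("balance must be non-negative");
        }
    }
}
```

Note that both methods must be private; the serialization machinery finds them reflectively.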

If a parent class implements Serializable, all of its subclasses are automatically serializable. However, if a parent class is not serializable, it must have an accessible no-argument constructor, which is invoked during deserialization to initialize the inherited fields.

Externalizable Interface: An alternative that gives you full control over the serialization process. You must explicitly implement writeExternal() and readExternal().
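A sketch of Externalizable (the Settings class is made up; note the public no-argument constructor, which the runtime calls before readExternal()):

```java
import java.io.*;

public class Settings implements Externalizable {
    private String theme;
    private int fontSize;

    public Settings() { }                  // required by Externalizable

    public Settings(String theme, int fontSize) {
        this.theme = theme;
        this.fontSize = fontSize;
    }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeUTF(theme);               // you choose exactly what is written
        out.writeInt(fontSize);
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        theme = in.readUTF();              // and must read it back in order
        fontSize = in.readInt();
    }

    public String theme() { return theme; }
    public int fontSize() { return fontSize; }
}
```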

Security Note: Deserializing untrusted data is a serious security risk and can lead to remote code execution vulnerabilities. It is recommended to use serialization filters (introduced in Java 9) or safer alternatives such as JSON or Protocol Buffers when handling untrusted data.
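A sketch of a serialization filter (JEP 290, Java 9+); the pattern used here is meant to allow only classes from the java.base module and reject everything else:

```java
import java.io.*;

public class FilterDemo {
    public static void main(String[] args) throws Exception {
        // Serialize a harmless String to a byte array.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject("hello");
        }

        try (ObjectInputStream ois = new ObjectInputStream(
                 new ByteArrayInputStream(bos.toByteArray()))) {
            // "java.base/*" allows classes in the java.base module;
            // "!*" rejects everything not matched earlier.
            ois.setObjectInputFilter(
                    ObjectInputFilter.Config.createFilter("java.base/*;!*"));
            System.out.println(ois.readObject()); // hello
        }
    }
}
```

A rejected class would cause readObject() to fail with an InvalidClassException instead of instantiating the untrusted object.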