animesh kumar

Running water never grows stale. Keep flowing!

Posts Tagged ‘Concurrent Programming’

Synchronized Java

with one comment

In the Java language, the job of coordinating threads is largely pushed onto the developer. The primary tool for managing coordination between threads in Java programs is the synchronized keyword; in its absence, the JVM is free to take a great deal of liberty in the timing and ordering of operations executing in different threads (refer to the JLS, the Java Language Specification). Most of the time this freedom is desirable, but if not administered properly, such optimizations can compromise a program’s correctness.

What is Synchronized?

Think of Java’s execution model as one where “each thread runs on its own processor with its own local memory, each talking to and synchronizing with a shared main memory.”

While the semantics of synchronized include mutual exclusion (mutex) and atomicity, in reality it is far more than this. It guarantees that only one thread has access to the protected section at a time, but it also imposes rules on the synchronizing thread’s interaction with main memory. In particular, the acquisition or release of a lock triggers a memory barrier: a forced synchronization between the thread’s local memory and main memory. When a thread exits a synchronized block, it performs a write barrier: it must flush any variables modified in that block out to main memory before releasing the lock. Similarly, when entering a synchronized block, it performs a read barrier: it is as if the local memory has been invalidated, and it must fetch any variables that will be referenced in the block from main memory.
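As a minimal sketch of this guarantee (class and field names here are illustrative, not from any particular library), both threads below synchronize on the same lock, so the write barrier on exit and the read barrier on entry ensure the reader sees the writer’s updates:

```java
// Illustrative sketch: because writer and reader synchronize on the
// same lock object, the JMM guarantees the reader observes the
// writer's changes once it acquires the lock.
public class VisibilityDemo {
    private final Object lock = new Object();
    private int value;        // guarded by lock
    private boolean ready;    // guarded by lock

    public void write(int v) {
        synchronized (lock) {
            value = v;
            ready = true;
        } // exit: write barrier -- modified fields are flushed to main memory
    }

    public int read() {
        synchronized (lock) { // entry: read barrier -- local memory invalidated
            return ready ? value : -1;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        VisibilityDemo d = new VisibilityDemo();
        Thread writer = new Thread(() -> d.write(42));
        writer.start();
        writer.join();                // writer has completed and released the lock
        System.out.println(d.read()); // prints 42
    }
}
```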

Benefits:

When threads A and B synchronize on the same object, JMM guarantees that thread B sees the changes made by thread A, and that changes made by thread A inside the synchronized block appear atomically (either the whole block executes or none of it does) to thread B. Also, JMM ensures that synchronized blocks that synchronize on the same object will appear to execute in the same order as they do in the program.

Consequences of failing:

Data corruption and race conditions (a race condition is a situation in which two or more threads are reading or writing some shared data, and the final result depends on the timing of how the threads are scheduled): these can cause programs to crash, behave unpredictably, or produce incorrect results. Worse, such conditions tend to occur only rarely and sporadically, making the problem hard to detect and reproduce.

Performance:

In tuning an application’s use of synchronization, we should try hard to reduce the amount of actual contention rather than simply trying to avoid synchronization altogether. When threads contend for a lock, there will be several thread switches and system calls, raising the performance penalty substantially. According to one source, a synchronized call to an empty method may be 20 times slower than an unsynchronized call to an empty method.

Volatile?

It is a commonly held belief that, since the JLS guarantees that 32-bit reads are atomic, you do not need to acquire a lock to simply read an object’s fields. This intuition is incorrect: unless the fields are declared volatile, the JMM guarantees no cache coherency and no sequential consistency. However, while the JMM prevents writes to volatile variables from being reordered with respect to one another and ensures that they are flushed to main memory immediately, it still permits reads and writes of volatile variables to be reordered with respect to nonvolatile reads and writes.
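A sketch of the classic stop-flag idiom illustrates what volatile buys you (the class and method names are mine, for illustration): without volatile, the worker thread is permitted to cache the flag in its local memory and loop forever.

```java
// Stop-flag sketch: the volatile keyword guarantees that the worker
// thread's read of stopRequested sees the main thread's write.
public class StopFlag {
    private volatile boolean stopRequested; // without volatile, visibility is not guaranteed

    public void requestStop() { stopRequested = true; }

    public boolean isStopRequested() { return stopRequested; }

    public static void main(String[] args) throws InterruptedException {
        StopFlag flag = new StopFlag();
        Thread worker = new Thread(() -> {
            while (!flag.isStopRequested()) {
                Thread.yield(); // busy work; each iteration re-reads the volatile flag
            }
        });
        worker.start();
        flag.requestStop();
        worker.join(); // terminates because the worker observes the volatile write
    }
}
```

Note that volatile gives visibility, not atomicity: a compound action such as `count++` on a volatile field is still a race and needs a lock.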

Techniques to write thread-safe programs:

Latency and scalability are two factors affecting a program’s performance. Latency describes how long it takes for a given task to complete, and scalability describes how a program’s performance varies under increasing load or with increased computing resources. A high degree of contention is bad for both. When multiple threads contend for the same monitor, the JVM has to maintain a queue of threads waiting for that monitor (and this queue must be synchronized across processors), which means more time spent in JVM or OS code and less time spent in your program code. Contention also impairs scalability because it forces the scheduler to serialize operations: while one thread is executing a synchronized block, any thread waiting to enter that block is stalled, and processors may sit idle.

In short, we must reduce contention for critical resources in order to be able to maintain the scalability of our program.

Idea 1: Do only what needs to be done: Make synchronized blocks as short as possible. Do any thread-safe pre-processing or post-processing outside of the synchronized block.
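As a hedged sketch of Idea 1 (the class and its fields are invented for illustration), thread-safe pre-processing that works only on local data stays outside the lock; only the shared-state mutation is synchronized:

```java
import java.util.ArrayList;
import java.util.List;

// Idea 1 sketch: keep the synchronized block as short as possible.
public class ShortCriticalSection {
    private final List<String> log = new ArrayList<>(); // guarded by this

    public void record(String raw) {
        // Pre-processing touches only local data, so it is thread-safe
        // and belongs outside the synchronized block.
        String formatted = raw.trim().toUpperCase();
        synchronized (this) {
            log.add(formatted); // only the shared mutation holds the lock
        }
    }

    public synchronized int size() { return log.size(); }
}
```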

Idea 2: Lock what needs to be safeguarded: Spread your synchronizations over more locks.
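This technique is often called lock splitting. A minimal sketch, with invented field names: two independent pieces of state each get their own lock object, so threads touching one do not contend with threads touching the other.

```java
// Idea 2 sketch (lock splitting): independent state, independent locks.
public class SplitLocks {
    private final Object userLock = new Object();
    private final Object statsLock = new Object();
    private int userCount; // guarded by userLock
    private long hitCount; // guarded by statsLock

    public void addUser() {
        synchronized (userLock) { userCount++; }
    }

    public void recordHit() {
        // Does not contend with addUser(): different lock.
        synchronized (statsLock) { hitCount++; }
    }

    public int users() { synchronized (userLock) { return userCount; } }

    public long hits() { synchronized (statsLock) { return hitCount; } }
}
```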

Idea 3: Provide a synchronized wrapper: The Collections classes are a good example of this technique; they are unsynchronized, but for each interface defined in the framework, there is a synchronized wrapper (for example, Collections.synchronizedMap()) that wraps each method with a synchronized version.
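For example, using the JDK’s own wrapper: every method call on the wrapper locks the wrapper object internally, but iteration is the one case the JDK documentation requires the caller to lock manually.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Using the Collections synchronized wrapper: the underlying HashMap
// is unsynchronized; the wrapper synchronizes each method on itself.
public class WrapperDemo {
    public static void main(String[] args) {
        Map<String, Integer> scores =
                Collections.synchronizedMap(new HashMap<>());
        scores.put("alice", 1); // the wrapper locks internally per call

        // Iteration must be guarded by the caller, per the JDK docs:
        synchronized (scores) {
            for (Map.Entry<String, Integer> e : scores.entrySet()) {
                System.out.println(e.getKey() + "=" + e.getValue());
            }
        }
    }
}
```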

Idea 4: Collapse locks: Obtain a broader lock once. In the code below, a broader lock is obtained on the vector object before the loop starts, so when elementAt() attempts to acquire the lock, the JVM sees that the current thread already holds it. This lengthens the synchronized block, which violates Idea 1, but it can be considerably faster because less time is lost to scheduling overhead.

Vector v;
...
synchronized (v) {
    for (int i = 0; i < v.size(); i++) {
        String s = (String) v.elementAt(i);
        ...
    }
}

Idea 5: Give each thread its own copy: Reduce contention by giving each thread its own copy of certain critical objects using ThreadLocal. This bypasses the complexity of determining when to synchronize, and it improves scalability because no synchronization is required at all: each thread holds a separate copy of the critical object.
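A common sketch of this idea uses SimpleDateFormat, which is not thread-safe: instead of locking one shared formatter, each thread gets its own via ThreadLocal. (ThreadLocal.withInitial is a later-added Java 8 convenience; the class names here are mine.)

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// Idea 5 sketch: SimpleDateFormat is not thread-safe, so each thread
// lazily gets its own private instance -- no synchronization needed.
public class PerThreadFormatter {
    private static final ThreadLocal<SimpleDateFormat> FORMATTER =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    public static String format(Date d) {
        // get() returns the calling thread's own copy of the formatter.
        return FORMATTER.get().format(d);
    }

    public static void main(String[] args) {
        System.out.println(format(new Date()));
    }
}
```

The trade-off is memory: one formatter per thread instead of one shared instance, which is usually cheap compared to the contention it removes.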


References:

Image courtesy Phoenix Synchronized Swimming


Written by Animesh

April 24, 2009 at 8:56 am