Why is synchronized expensive in Java?

Maybe it’s not as bad as you think

It used to be terrible (which is possibly why you read that it was “very expensive”), and these memes can take a long time to die out.

How expensive is synchronization?

Because of the rules involving cache flushing and invalidation, a synchronized block in the Java language is generally more expensive than the critical section facilities offered by many platforms, which are usually implemented with an atomic “test and set bit” machine instruction. Even when a program contains only a single thread running on a single processor, a synchronized method call is still slower than an un-synchronized method call. If the synchronization actually requires contending for the lock, the performance penalty is substantially greater, as there will be several thread switches and system calls required.
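To make the uncontended case concrete, here is a minimal, single-threaded timing sketch comparing a plain method call with a synchronized one. The class name, counter field, and iteration count are arbitrary choices for illustration, and a crude loop like this is not a rigorous benchmark (JIT warm-up, dead-code elimination, and OS noise all distort the numbers; a harness such as JMH is the right tool for real measurements).

```java
// Rough, illustrative timing sketch -- not a rigorous benchmark.
public class SyncCostSketch {
    private long counter = 0;

    // Plain increment: no monitor enter/exit.
    private void incrementPlain() {
        counter++;
    }

    // Synchronized increment: uncontended lock acquire/release on 'this'.
    private synchronized void incrementSynchronized() {
        counter++;
    }

    public static void main(String[] args) {
        SyncCostSketch s = new SyncCostSketch();
        final int iterations = 50_000_000;

        // Warm up both paths so the JIT has a chance to compile them.
        for (int i = 0; i < iterations; i++) s.incrementPlain();
        for (int i = 0; i < iterations; i++) s.incrementSynchronized();

        long t0 = System.nanoTime();
        for (int i = 0; i < iterations; i++) s.incrementPlain();
        long plainNanos = System.nanoTime() - t0;

        long t1 = System.nanoTime();
        for (int i = 0; i < iterations; i++) s.incrementSynchronized();
        long syncNanos = System.nanoTime() - t1;

        System.out.printf("plain: %d ms, synchronized: %d ms%n",
                plainNanos / 1_000_000, syncNanos / 1_000_000);
    }
}
```

On a modern JVM the gap in the uncontended case is typically far smaller than the old “50x” claim would suggest; the expensive scenario is contention, where threads actually block and the OS gets involved.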

Fortunately, continuous improvements in the JVM have both improved overall Java program performance and reduced the relative cost of synchronization with each release, and future improvements are anticipated. Further, the performance costs of synchronization are often overstated. One well-known source has cited that a synchronized method call is as much as 50 times slower than an un-synchronized method call. While this statement may be true, it is also quite misleading and has led many developers to avoid synchronizing even in cases where it is needed.

Having said that, concurrent programming can still be slow, but much less of that is purely Java’s fault now. There is a trade-off between fine-grained and coarse-grained locking: too coarse is obviously bad because unrelated work contends for the same lock, but it is possible to be too fine as well, since every lock has a non-zero cost to acquire, release, and maintain. The sketch below illustrates the trade-off.
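Here is a small sketch of that trade-off, assuming a hypothetical statistics object with two independent counters. The class and method names are made up for illustration; the point is only that the coarse version creates false contention between unrelated updates, while the fine version pays for more lock operations and more bookkeeping.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the coarse vs. fine locking trade-off with two independent counters.
public class LockGranularitySketch {

    // Coarse: one lock guards both counters, so updates to 'hits' and
    // 'misses' contend with each other even though they are unrelated.
    static class CoarseStats {
        private final Object lock = new Object();
        private long hits, misses;

        void recordHit()  { synchronized (lock) { hits++; } }
        void recordMiss() { synchronized (lock) { misses++; } }
    }

    // Fine: one lock per counter, so unrelated updates don't contend,
    // but every update still pays for a lock acquire/release, and more
    // locks mean more state to manage and more ways to get ordering wrong.
    static class FineStats {
        private final ReentrantLock hitLock = new ReentrantLock();
        private final ReentrantLock missLock = new ReentrantLock();
        private long hits, misses;

        void recordHit() {
            hitLock.lock();
            try { hits++; } finally { hitLock.unlock(); }
        }

        void recordMiss() {
            missLock.lock();
            try { misses++; } finally { missLock.unlock(); }
        }
    }
}
```

For simple counters like these, classes from java.util.concurrent.atomic (such as LongAdder) often sidestep the question entirely; the locking trade-off matters more when the critical sections do real work.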

It’s also important to consider the particular resource under contention. Mechanical hard disks are a classic example where adding more threads can make performance worse, because interleaved requests force extra seeks.
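One common response is to cap how many threads touch the slow resource at once rather than locking more finely. The sketch below does this with a Semaphore; the permit count of 2 and the read method are placeholder assumptions, and the right limit depends on the actual device and access pattern.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.Semaphore;

// Sketch: limit the number of threads hitting a spinning disk at once.
public class DiskThrottleSketch {
    // Placeholder limit; tune for the actual device.
    private static final Semaphore diskPermits = new Semaphore(2);

    static byte[] readBlock(Path path) throws Exception {
        diskPermits.acquire();   // block if too many readers are already active
        try {
            return Files.readAllBytes(path);
        } finally {
            diskPermits.release();
        }
    }
}
```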
