Why is synchronization expensive in Java?
This isn't that specific to Java. Synchronization can be considered "expensive" in any multi-threaded environment if not done correctly. Whether it's particularly bad in Java, I do not know.
It prevents threads from running concurrently if they use the same resource. But, since they do use the same resource, there's no better option (it has to be done).
The problem is that people often protect a resource with too big a scope. For example, a badly designed program may synchronize an entire array of objects rather than each individual element in the array (or even a section of the array).
This would mean that a thread trying to read element 7 must wait for a thread reading or writing element 22. That isn't necessary: if the granularity of the synchronization were at the element level instead of the array level, those two threads wouldn't interfere with each other.
Only when two threads tried to access the same element would there be resource contention. That's why the general rule is to only protect as small a resource as possible (subject to limitations on number of synchronizations, of course).
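As a rough sketch of that idea (the class and field names here are invented for the example), per-element locks let threads working on different elements proceed in parallel, while a single lock on the whole array serializes them:

```java
public class CoarseVsFineLocking {
    private final int[] data = new int[100];
    private final Object[] elementLocks = new Object[100];

    public CoarseVsFineLocking() {
        for (int i = 0; i < elementLocks.length; i++) {
            elementLocks[i] = new Object();
        }
    }

    // Coarse-grained: a thread updating element 7 blocks a thread updating element 22.
    public void incrementCoarse(int index) {
        synchronized (data) {
            data[index]++;
        }
    }

    // Fine-grained: threads only contend when they touch the same element.
    public void incrementFine(int index) {
        synchronized (elementLocks[index]) {
            data[index]++;
        }
    }
}
```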
But, to be honest, it doesn't matter how expensive it is if the alternative is data corruption due to two threads fighting over a single resource. Write your application correctly and only worry about performance problems if and when they appear ("Get it working first then get it working fast" is a favorite mantra of mine).
Maybe it's not as bad as you think
It used to be terrible (which is possibly why you read that it was "very expensive"). These memes can take a long time to die out.
How expensive is synchronization?
Because of the rules involving cache flushing and invalidation, a synchronized block in the Java language is generally more expensive than the critical section facilities offered by many platforms, which are usually implemented with an atomic "test and set bit" machine instruction. Even when a program contains only a single thread running on a single processor, a synchronized method call is still slower than an un-synchronized method call. If the synchronization actually requires contending for the lock, the performance penalty is substantially greater, as there will be several thread switches and system calls required.
Fortunately, continuous improvements in the JVM have both improved overall Java program performance and reduced the relative cost of synchronization with each release, and future improvements are anticipated. Further, the performance costs of synchronization are often overstated. One well-known source has cited that a synchronized method call is as much as 50 times slower than an un-synchronized method call. While this statement may be true, it is also quite misleading and has led many developers to avoid synchronizing even in cases where it is needed.
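If you want a feel for the uncontended cost on your own JVM, a crude single-threaded sketch like the following (all names invented for the example) will do; keep in mind that JIT warm-up and lock elision can skew naive timings like this, so treat the result as a rough indication only:

```java
public class SyncCostSketch {
    private int counter;

    private void plainIncrement() { counter++; }

    private synchronized void syncIncrement() { counter++; }

    public static void main(String[] args) {
        SyncCostSketch s = new SyncCostSketch();
        final int iterations = 10_000_000;

        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            s.plainIncrement();
        }
        long plainNanos = System.nanoTime() - start;

        start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            s.syncIncrement();  // uncontended: only one thread, so no lock contention
        }
        long syncNanos = System.nanoTime() - start;

        System.out.printf("plain: %d ms, synchronized: %d ms%n",
                plainNanos / 1_000_000, syncNanos / 1_000_000);
    }
}
```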
Having said that, concurrent programming can still be slow, but not much of it is purely Java's fault any more. There is a trade-off between fine-grained and coarse-grained locking. Too coarse is obviously bad, but it's possible to be too fine as well, since every lock has a non-zero cost.
It's important to consider the particular resource under contention. Mechanical hard disks are an example where more threads can lead to worse performance.
It is expensive because when a number of threads have to go through a synchronized section of code, only one of them can execute it at a time.
It is like a bottleneck.
It is even expensive when you use a single thread, because the thread still has to check whether it is allowed to run (i.e., acquire the lock).
If you reduce the use of synchronized sections, your threads won't have to stop to see whether they can run (provided, of course, that they don't share data).
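A minimal sketch of that bottleneck (the class and method names are made up for the example): however many threads call log(), only one of them can be inside the synchronized block at any moment, so the others queue up behind it.

```java
public class Bottleneck {
    private final StringBuilder log = new StringBuilder();

    public void log(String message) {
        synchronized (log) {                // every caller queues up here, one at a time
            log.append(message).append('\n');
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Bottleneck b = new Bottleneck();
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            final int id = t;
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 1000; i++) {
                    b.log("thread " + id + " entry " + i);
                }
            });
            threads[t].start();
        }
        for (Thread thread : threads) {
            thread.join();
        }
        // join() establishes happens-before, so reading the log here is safe
        System.out.println("lines logged: " + b.log.toString().split("\n").length);
    }
}
```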
A high-level overview of how synchronization works may be found here:
http://img20.imageshack.us/img20/2066/monitor28synchronizatioc.png
A Java-style monitor