Why do some applications run faster on a 32-bit guest VM than on the 64-bit host machine?
Since you mention Java, what is the version of your JVM and is it running in 32-bit or 64-bit mode on the host?
Compressed Oops
An "oop", or ordinary object pointer in Java Hotspot parlance, is a managed pointer to an object. An oop is normally the same size as a native machine pointer, which means 64 bits on an LP64 system. On an ILP32 system, maximum heap size is somewhat less than 4 gigabytes, which is insufficient for many applications. On an LP64 system, the heap used by a given program might have to be around 1.5 times larger than when it is run on an ILP32 system. This requirement is due to the expanded size of managed pointers. Memory is inexpensive, but these days bandwidth and cache are in short supply, so significantly increasing the size of the heap and only getting just over the 4 gigabyte limit is undesirable.
Managed pointers in the Java heap point to objects which are aligned on 8-byte address boundaries. Compressed oops represent managed pointers (in many but not all places in the JVM software) as 32-bit object offsets from the 64-bit Java heap base address. Because they're object offsets rather than byte offsets, they can be used to address up to four billion objects (not bytes), or a heap size of up to about 32 gigabytes. To use them, they must be scaled by a factor of 8 and added to the Java heap base address to find the object to which they refer. Object sizes using compressed oops are comparable to those in ILP32 mode.
The term decode is used to express the operation by which a 32-bit compressed oop is converted into a 64-bit native address into the managed heap. The inverse operation is referred to as encoding.
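As a rough sketch of the arithmetic described above (not HotSpot's actual implementation; the heap base value and the class/method names below are made up for illustration), decoding is a shift by the 3 alignment bits plus an add of the heap base, and encoding is the inverse:

    // Illustrative only -- not HotSpot source code.
    public class CompressedOopMath {
        // Hypothetical heap base; in a real JVM this is chosen by the VM.
        static final long HEAP_BASE = 0x0000_0008_0000_0000L;
        // Objects are aligned on 8-byte boundaries, so offsets are scaled
        // by 8, i.e. shifted left by 3 bits.
        static final int ALIGNMENT_SHIFT = 3;

        // Decode: 32-bit compressed oop -> 64-bit native address.
        static long decode(int compressedOop) {
            // Treat the oop as an unsigned 32-bit object offset.
            return HEAP_BASE + ((compressedOop & 0xFFFF_FFFFL) << ALIGNMENT_SHIFT);
        }

        // Encode: 64-bit native address -> 32-bit compressed oop.
        static int encode(long nativeAddress) {
            return (int) ((nativeAddress - HEAP_BASE) >>> ALIGNMENT_SHIFT);
        }

        public static void main(String[] args) {
            long address = HEAP_BASE + 0x7_FFFF_FFF8L; // near the 32 GB limit
            int oop = encode(address);
            System.out.printf("address=0x%x oop=0x%x decoded=0x%x%n",
                    address, oop, decode(oop));
        }
    }

Because 2^32 object offsets times the 8-byte alignment covers 2^35 bytes, this scheme tops out at the roughly 32 gigabyte heap limit mentioned above.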
Compressed oops is supported and enabled by default in Java SE 6u23 and later. In Java SE 7, use of compressed oops is the default for 64-bit JVM processes when -Xmx isn't specified and for values of -Xmx less than 32 gigabytes. For JDK 6 before the 6u23 release, use the -XX:+UseCompressedOops flag with the java command to enable the feature.
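If you want to verify what your own 64-bit JVM is doing, one option on HotSpot (Java 7 or later; this is just a small diagnostic sketch) is to read the flag at runtime through the com.sun.management diagnostic bean:

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    // Prints whether the running HotSpot JVM has compressed oops enabled.
    public class CheckCompressedOops {
        public static void main(String[] args) {
            HotSpotDiagnosticMXBean bean =
                    ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            System.out.println("UseCompressedOops = "
                    + bean.getVMOption("UseCompressedOops").getValue());
        }
    }

Alternatively, running java -XX:+PrintFlagsFinal -version and searching the output for UseCompressedOops gives the same answer without writing any code.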
The larger memory footprint of 64-bit JVMs can therefore have significant performance implications, mostly through the extra pressure it puts on caches and memory bandwidth.
Applications may also be faster inside a VM because of caching. Since VMs store their disks as files, the host operating system may cache those files in RAM, so disk access in the guest can be noticeably faster. The real-world difference between 32-bit and 64-bit applications themselves is only a few percent.