So really, what is the overhead of virtualization and when should I be concerned?
Solution 1:
Things which I would never put in a VM:
Anything which uses specific hardware that cannot be virtualized: typically graphics, quite a few hardware security modules, and anything with customized drivers (special-purpose network drivers, for example).
Systems with license issues. Some software charges per physical CPU or core, no matter how few you have allocated to the VM. You'd get hit in an audit if you had software licensed for a single core running in a VM on a 32-core server.
Things which I would discourage putting in a VM:
Software which already makes an effort to use all the resources of commodity hardware. Machines working as part of a "big data" effort, such as Hadoop, are typically designed to run on bare metal.
Anything which is going to be finely tuned to make use of resources. When you really begin tuning a database, VMs contending for resources will throw a wrench in the works.
Anything which already has a big bottleneck. If it already doesn't play well with itself, it is unlikely to play well with others.
There are some things which are quite awesome for putting in VMs:
Anything which spends quite a lot of time idle. Utility hosts like mail and DNS have a difficult time generating enough load on modern hardware to warrant dedicated servers (a quick check for this is sketched after the list).
Applications which do not scale well (or easily) on their own. Legacy code quite frequently falls into this category. If the app won't expand to take up the server, use lots of little virtual servers.
Projects/applications which start small but grow. It's much easier to add resources to a VM (as well as move to newer, bigger hardware) as opposed to starting on bare metal.
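To make the "spends a lot of time idle" point concrete, here is a minimal Python sketch for a Unix-like host. The 0.25-per-core threshold is an arbitrary illustration, not a rule; the idea is simply that a host whose 15-minute load average stays well below its core count is a good consolidation candidate.

```python
import os

def consolidation_candidate(threshold=0.25):
    """Flag a host as a consolidation candidate when its 15-minute load
    average stays well below its CPU count. The threshold is an assumption,
    not a hard rule."""
    one, five, fifteen = os.getloadavg()   # Unix-only
    cpus = os.cpu_count() or 1
    per_core = fifteen / cpus
    return per_core < threshold, per_core

if __name__ == "__main__":
    candidate, load = consolidation_candidate()
    print(f"15-min load per core: {load:.2f} -> "
          f"{'good VM candidate' if candidate else 'keep it on dedicated hardware'}")
```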
Also, I'm not sure if you are exaggerating about putting a huge number of VMs on a single host, but if you are aiming for a large VM-to-hardware ratio, you may want to consider ESX, Xen, or KVM instead. You'll fare much better than with VMware or VirtualBox running on top of Windows.
Solution 2:
The disk subsystem. This is usually the least shareable resource. Memory matters too, of course, but that limitation is more apparent.
Disk subsystem limitations cut both ways. If one guest uses a lot of disk I/O, the other guests slow down; and if your guest is in production, it probably needs fast responses to web queries. This can be very frustrating, and it is also a big reason not to rent virtual hardware. You can minimize the problem by using dedicated disks.
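One rough way to see whether a guest's disk latency is suffering from noisy neighbors is to time small synchronous writes. This is only a crude illustration (a real benchmark tool does this properly); the sample count and block size below are arbitrary.

```python
import os
import statistics
import tempfile
import time

def fsync_latency_ms(samples=50, block=4096):
    """Time small write+fsync operations in the current directory; high or
    highly variable numbers often point to a contended disk subsystem."""
    timings = []
    payload = os.urandom(block)
    with tempfile.NamedTemporaryFile(dir=".") as f:
        for _ in range(samples):
            start = time.perf_counter()
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())
            timings.append((time.perf_counter() - start) * 1000)
    return statistics.median(timings), max(timings)

if __name__ == "__main__":
    median, worst = fsync_latency_ms()
    print(f"fsync latency: median {median:.2f} ms, worst {worst:.2f} ms")
```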
Using only 512 MB of memory in guests pushes most of the disk caching onto the host, and that cache is not divided equally among guests.
Do not worry much about CPU. Here virtualization is very efficient, often behaving like little more than multiple processes running on the same system. I seldom see multi-Xeon systems running at 100% CPU.
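If you do want to confirm that the hypervisor isn't taking CPU away from a guest, Linux exposes "steal" time as the eighth counter on the cpu line of /proc/stat. A minimal, Linux-only sketch (the one-second sampling interval is an assumption):

```python
import time

def cpu_steal_fraction(interval=1.0):
    """Return the fraction of CPU time reported as 'steal' over the
    sampling interval (Linux only; reads /proc/stat)."""
    def read():
        with open("/proc/stat") as f:
            fields = f.readline().split()      # aggregate "cpu" line
        values = list(map(int, fields[1:]))
        steal = values[7] if len(values) > 7 else 0  # 8th counter is steal
        return steal, sum(values)

    steal_a, total_a = read()
    time.sleep(interval)
    steal_b, total_b = read()
    elapsed = total_b - total_a
    return (steal_b - steal_a) / elapsed if elapsed else 0.0

if __name__ == "__main__":
    print(f"steal time: {cpu_steal_fraction() * 100:.1f}% of CPU time")
```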
Solution 3:
There are two aspects to virtualization performance:
- shared bottlenecks
- emulation
On shared bottlenecks, who else is on the same iron? If you are co-located in a virtualized environment, you are very dependent on the hosting partner being honest with you.
I think the main question to ask for raw performance (particularly interactivity) is which parts of the virtualization system are emulated. This differs depending on the setup. Disk and network are the typical candidates. As a rule of thumb, emulation doubles the performance "cost" of performing an action, so any hardware latency figure should be counted double and any throughput figure should be halved.
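Applied as plain arithmetic, that rule of thumb looks like this (the example numbers are made up purely for illustration):

```python
def emulated_estimate(latency_ms, throughput_mbs):
    """Rule-of-thumb estimate for an emulated device: latency roughly
    doubles, throughput is roughly halved."""
    return latency_ms * 2, throughput_mbs / 2

# e.g. a disk measured at 5 ms / 200 MB/s on bare metal
lat, tput = emulated_estimate(5.0, 200.0)
print(f"expected under emulation: ~{lat:.0f} ms latency, ~{tput:.0f} MB/s")
```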
Solution 4:
Ultimately, any high-performance load shouldn't be virtualized. The performance overheads of virtualization are non-trivial. See the results of my tests here:
https://altechnative.net/virtual-performance-or-lack-thereof/
OTOH, if you are looking to consolidate a number of machines that are mostly idle all the time, virtualization is the way forward.