Tuning NFS client/server stack
Just to clarify: you're getting 50MB/sec with NFS over a single Gb Ethernet connection?
And the host server is running CentOS with VMware Server installed, which is in turn running the 7 VMs? Is there a particular reason you've gone with CentOS and VMware Server combined, rather than VMware ESXi, which is a higher-performance solution?
The 50MB/sec isn't great, but it's not much below what you'd expect over a single Gb network cable. Once you've put in the NFS tweaks people have mentioned above, you're going to be looking at maybe 70-80MB/sec. Options along the lines of:
"ro,hard,intr,retrans=2,rsize=32768,wsize=32768,nfsvers=3,tcp"
are probably reasonable for you at both ends of the system.
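As a concrete illustration, a client-side /etc/fstab entry using those options might look like the sketch below; the server name, export path, and mount point are placeholders, not details from your setup:

    # Hypothetical client-side /etc/fstab entry -- server name, export
    # path, and mount point are placeholders for your own values
    nfsserver:/export/vms  /mnt/vms  nfs  ro,hard,intr,retrans=2,rsize=32768,wsize=32768,nfsvers=3,tcp  0 0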
To get above that you're going to need to look at teaming the network cards into pairs, which should increase your throughput by about 90%. You might need a switch that supports 802.3ad to get the best performance with link aggregation.
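For reference, a minimal sketch of 802.3ad bonding on a CentOS host follows; the interface names and addressing are assumptions, so adapt them to your hardware:

    # /etc/modprobe.conf -- load the bonding driver in 802.3ad (LACP) mode
    alias bond0 bonding
    options bond0 mode=802.3ad miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0 -- the aggregate interface
    # (IP address and netmask here are assumptions)
    DEVICE=bond0
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-eth0 -- one physical slave
    # (create a matching ifcfg-eth1 for the second card)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

Remember the corresponding switch ports need to be configured for LACP as well, or the bond won't negotiate.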
One thing I'd flag, though: the IO throughput you're reporting on the OpenSolaris box sounds suspiciously high. 12 disks aren't likely to sustain 1.6GB/sec, so that figure is probably being heavily inflated by caching in Solaris + ZFS.
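One rough way to sanity-check that, assuming you can spare the space, is to push a file much larger than the box's RAM through the pool so the ZFS cache can't mask the disks; the pool path and size below are assumptions, so adjust them for your system:

    # Write then read back a file well above installed RAM (path and
    # size are assumptions -- pick a size comfortably larger than memory)
    dd if=/dev/zero of=/tank/ddtest bs=1M count=65536   # ~64GB write
    dd if=/tank/ddtest of=/dev/null bs=1M               # ~64GB read
    rm /tank/ddtest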