Get link speed of a virtio-net network adapter
Virtio is a paravirtualized driver, which means the OS and driver are aware that it's not a physical device. The driver is really an API between the guest and the hypervisor, so its speed is completely disconnected from any physical device or Ethernet standard.
This is a good thing, as it's faster than having the hypervisor emulate a physical device and impose an arbitrary "link speed" on the traffic.
The VM just dumps frames onto a bus, and it's the host's job to deal with the physical devices; the VM has no need to know or care what the link speed of the host's physical devices is.
One of the advantages of this is that packets moving between two VMs on the same host can flow as fast as the host's CPU can copy them from one region of memory to another; setting a "link speed" here would just impose an unneeded speed limit.
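You can see the absence of a link-speed concept directly in sysfs. As a quick sketch (assuming the guest interface is named eth0; yours may differ), reading the speed attribute of a virtio interface typically either fails or reports -1, depending on the kernel version:
$ cat /sys/class/net/eth0/speed      # a physical NIC would print e.g. 1000 here
cat: /sys/class/net/eth0/speed: Invalid argument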
Leaving link speed out of the picture also allows the host to do adapter teaming and spread traffic across multiple links, without every VM needing to be explicitly configured to take advantage of the full bandwidth of the setup.
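If you have access to the hypervisor, you can check for that sort of teaming there. As a sketch, assuming the host uses the Linux bonding driver with a bond named bond0 (both assumptions; your setup may differ):
host$ cat /proc/net/bonding/bond0    # lists the bonding mode, member NICs, and their link state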
If you want to know how fast you can actually transfer data from your VM to another location, you need to run actual throughput tests with a tool like iperf.
To expand a bit on this, because I too came to this recently and was also semi-confused by the lack of speed details when running ethtool on a VM:
$ ethtool eth0
Settings for eth0:
Link detected: yes
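Note that the Speed: and Duplex: lines ethtool normally prints are missing. For comparison, on a physical NIC you would typically see something like this (values illustrative, not from this system):
$ ethtool eth0 | grep -E 'Speed|Duplex'
	Speed: 1000Mb/s
	Duplex: Full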
When I looked at the lshw output:
$ lshw -class network -short
H/W path          Device     Class      Description
==========================================================
/0/100/3                     network    Virtio network device
/0/100/3/0        eth0       network    Ethernet interface
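Another quick way to confirm which driver backs an interface (again assuming eth0) is ethtool's driver query:
$ ethtool -i eth0 | head -1    # first line names the kernel driver bound to the interface
driver: virtio_net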
This tells us that the device driver being used by this VM is virtualized. In this case the VM is running on KVM, so it's using the virtio_* drivers for all of its interactions with "hardware".
$ lsmod | grep virt
virtio_rng             13019  0
virtio_balloon         13864  0
virtio_net             28096  0
virtio_console         28066  1
virtio_scsi            18453  2
virtio_pci             22913  0
virtio_ring            22746  6 virtio_net,virtio_pci,virtio_rng,virtio_balloon,virtio_console,virtio_scsi
virtio                 14959  6 virtio_net,virtio_pci,virtio_rng,virtio_balloon,virtio_console,virtio_scsi
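You can inspect any of these modules further with modinfo; for example, asking for just the description of the network driver:
$ modinfo -d virtio_net
Virtio network driver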
These drivers are available for several OSes (Linux, BSD, and Windows). With them installed in your VM, the guest kernel gets special access to the underlying hardware through the kernel running on your hypervisor.
Remember that there are two distinct types of hypervisors; ESX/vSphere is considered type-1. As a reminder:
- Type-1, native or bare-metal hypervisors
- Type-2 or hosted hypervisors
KVM is more akin to a type-2 hypervisor, but it has elements, such as the virtio_* drivers, that make it behave and perform more like a type-1, by exposing the hypervisor's underlying Linux kernel to virtualization in such a way that VMs get semi-direct access to it.
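As an aside, if you're ever unsure what hypervisor a VM is running on, there are quick checks from inside the guest (systemd-detect-virt requires systemd, and lscpu's exact wording varies by distro):
$ systemd-detect-virt    # prints the virtualization technology in use
kvm
$ lscpu | grep -i hypervisor
Hypervisor vendor:     KVM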
The speed of my NIC?
Given that you're running on a paravirtualized hypervisor, you'd have to go onto the actual hypervisor and run ethtool there to find out your NIC's theoretical speed. In lieu of that, you can only find out experimentally, by using something like iperf to benchmark the NIC under load and seeing what its speed appears to be.
For example, here I have two servers running on two different hypervisors. First, install iperf on both:
$ sudo yum install iperf
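That's for RHEL/CentOS-style guests; on Debian/Ubuntu the equivalent would be:
$ sudo apt-get install iperf
Newer distributions often ship iperf3 instead, which serves the same purpose but defaults to port 5201 rather than 5001.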
Then run it as an iperf server on the host1 VM:
host1$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
Then run it as a client from the host2 VM:
host2$ iperf -c 192.168.100.25
------------------------------------------------------------
Client connecting to 192.168.100.25, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.100.101 port 55854 connected with 192.168.100.25 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 10.0 GBytes 8.60 Gbits/sec
Back on host1, you'll see this in the server's output:
$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.100.25 port 5001 connected with 192.168.100.101 port 55854
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 10.0 GBytes 8.60 Gbits/sec
Here we can see that the NIC was able to reach 8.60 Gbits/sec, even though the guest reports no link speed at all.
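A single TCP stream won't always saturate the path, so if you want a more realistic ceiling it's worth re-running the client with several parallel streams and a longer interval (standard iperf flags):
host2$ iperf -c 192.168.100.25 -P 4 -t 30    # 4 parallel streams for 30 seconds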