KVM and virtual-to-physical CPU mapping
Solution 1:
A virtual CPU appears to the guest as a single physical core, but it is not tied to one: when your VM attempts to process something, that vCPU can run on whichever physical core happens to be available at that moment. The host scheduler handles this, and the VM is not aware of it. You can assign multiple vCPUs to a VM, which allows it to run concurrently across several cores.
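To make this concrete, here is a minimal Python sketch (Linux-only; the QEMU PID is an input you supply, e.g. from `pgrep -f qemu`) that samples which physical core each thread of a VM's QEMU process last ran on. Run it against a busy VM and you can watch the scheduler move vCPU threads between cores:

```python
#!/usr/bin/env python3
"""Sample which physical core each thread of a QEMU process last ran on.

A minimal sketch, Linux-only. Pass the PID of a running qemu-kvm
process. Field 39 of /proc/<pid>/task/<tid>/stat is the CPU the
task last executed on.
"""
import sys
import time
from pathlib import Path

def last_cpu(pid: str, tid: str) -> int:
    stat = Path(f"/proc/{pid}/task/{tid}/stat").read_text()
    # The comm field may contain spaces, so split after the closing ')'.
    fields = stat.rsplit(")", 1)[1].split()
    return int(fields[36])  # fields[0] is field 3, so field 39 is index 36

pid = sys.argv[1]
for _ in range(5):  # sample a few times to watch threads migrate
    for task in sorted(Path(f"/proc/{pid}/task").iterdir(),
                       key=lambda p: int(p.name)):
        print(f"tid {task.name}: last ran on physical CPU {last_cpu(pid, task.name)}")
    print("---")
    time.sleep(1)
```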
Cores are shared between all VMs as needed, so you could have a 4-core system with 10 VMs running on it and 2 vCPUs assigned to each; that is 20 vCPUs contending for 4 physical cores, a 5:1 overcommit ratio. The scheduler shares all the cores in your system among the VMs quite efficiently. This is one of the main benefits of virtualization: making the most of under-subscribed resources to power multiple OS instances.
If your VMs are so busy that they contend for CPU time, the outcome is that some vCPUs have to wait for a physical core to become free. Again, this is handled by the host scheduler and is largely transparent to the VM.
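That said, a paravirt-aware guest kernel can see how much it waited as "steal" time. Here is a rough sketch, run from inside a Linux guest, that reads the steal counter from /proc/stat:

```python
#!/usr/bin/env python3
"""Measure 'steal' time from inside a KVM guest.

A rough sketch: 'steal' is the 8th counter on the aggregate 'cpu' line
of /proc/stat, counting ticks this guest was runnable but the host ran
something else. Assumes a Linux guest with a paravirt-aware kernel.
"""
import os
import time

def steal_ticks() -> int:
    with open("/proc/stat") as f:
        # cpu  user nice system idle iowait irq softirq steal guest ...
        return int(f.readline().split()[8])

before = steal_ticks()
time.sleep(5)
delta = steal_ticks() - before
hz = os.sysconf("SC_CLK_TCK")  # clock ticks per second, typically 100
print(f"CPU time stolen by the host in the last 5s: {delta / hz:.2f}s")
```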
I'm not familiar with KVM specifically, but all of the above is generic behavior for most virtualization systems.
Solution 2:
A virtual CPU is a thread in the qemu-kvm process, and qemu-kvm is, of course, multithreaded.
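You can see those threads directly. The sketch below assumes a modern QEMU on Linux, where each vCPU thread is named like "CPU 0/KVM" (older builds may not name threads this way):

```python
#!/usr/bin/env python3
"""List the vCPU threads of a qemu-kvm process.

A sketch assuming a modern QEMU on Linux, which names each vCPU
thread like 'CPU 0/KVM'. Pass the QEMU PID as the only argument.
"""
import sys
from pathlib import Path

pid = sys.argv[1]
for task in sorted(Path(f"/proc/{pid}/task").iterdir(),
                   key=lambda p: int(p.name)):
    name = (task / "comm").read_text().strip()
    if "KVM" in name:  # vCPU threads, as opposed to I/O and worker threads
        print(f"vCPU thread: tid={task.name} name={name!r}")
```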
Unless you pin the threads to specific CPUs, the system scheduler will allocate them CPU time from whatever cores are available; in other words, any vCPU can end up getting CPU cycles from any physical core unless it has been pinned to specific core(s).
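Pinning is normally done with `virsh vcpupin`, `taskset -p`, or a `<cputune>` element in the libvirt domain XML. Purely to illustrate what those tools do underneath, here is a sketch using Python's os.sched_setaffinity(), which on Linux also accepts an individual thread ID:

```python
#!/usr/bin/env python3
"""Pin one vCPU thread to a set of physical cores.

Illustration only; in practice use `virsh vcpupin`, `taskset -p`,
or <cputune> in the libvirt domain XML. On Linux,
os.sched_setaffinity() accepts a thread ID, so it can pin a single
QEMU vCPU thread (run as root or as the QEMU process's owner).
"""
import os
import sys

tid = int(sys.argv[1])    # a vCPU thread ID, e.g. from the listing above
cores = {int(c) for c in sys.argv[2].split(",")}  # e.g. "2,3"

os.sched_setaffinity(tid, cores)
print(f"tid {tid} is now restricted to CPUs {sorted(os.sched_getaffinity(tid))}")
```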