The vhost target uses one thread to poll all the I/O queues, and that poll thread runs on
one of the cores in the coremask you specified.
We haven't introduced the poll group concept into vhost; the NVMe-oF and iSCSI targets already use it.
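If you want to confirm that a single thread is carrying the load, the running app's per-thread statistics can be queried over RPC (a sketch; `thread_get_stats` is available in recent SPDK releases, and `/var/tmp/spdk.sock` is the default RPC socket):

```shell
# Query per-thread busy/idle statistics from the running vhost app.
# With the behavior described above, only one thread should report
# significant busy time for the vhost pollers.
scripts/rpc.py -s /var/tmp/spdk.sock thread_get_stats
```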
From: dingbao.chen(a)outlook.com <dingbao.chen(a)outlook.com>
Sent: Sunday, November 29, 2020 4:03 PM
Subject: [SPDK] SPDK vhost mechanism only can use one CPU core for polling
even specify multiple cores
The storage target side is Lightbits' NVMe/TCP storage solution, which provides an
NVMe/TCP storage pool; users can create volumes from this pool for any specific host.
Vhost is used to present the NVMe/TCP volumes to KVM-based virtual machines.
1. Assign 16 cores to vhost
build/bin/vhost -S /var/tmp -m 0xFFFF
2. Create a vhost-blk controller with an 8-core cpumask; Nvme0n17 is the NVMe/TCP volume
attached with the SPDK initiator.
scripts/rpc.py vhost_create_blk_controller --cpumask 0xFF vhost.0 Nvme0n17
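For completeness, an NVMe/TCP bdev like Nvme0n17 would have been attached with something along these lines (the address, port, and subsystem NQN below are placeholders, not the actual Lightbits values):

```shell
# Hypothetical target address and NQN, for illustration only.
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 192.168.1.100 -s 4420 -n nqn.2016-06.io.spdk:cnode1
```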
3. Configure the virtual machine device as virtio-blk-pci with num-queues=8.
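For reference, the corresponding QEMU configuration would look roughly like this (paths and IDs are illustrative; the socket path follows from `-S /var/tmp` and the controller name `vhost.0`, and vhost-user requires shared, hugepage-backed guest memory):

```shell
# Illustrative fragment of the QEMU command line, not a complete invocation.
qemu-system-x86_64 ... \
  -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem0 \
  -chardev socket,id=spdk_vhost0,path=/var/tmp/vhost.0 \
  -device vhost-user-blk-pci,chardev=spdk_vhost0,num-queues=8
```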
4. Running fio inside the VM, the performance result is not as good as
expected. I did some investigation and suspect the bottleneck is the vhost
controller, which only uses one CPU core even though I assigned 8 to it.
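As a baseline for reproducing this, a fio job along these lines exercises all 8 queues (the device name, depth, and runtime are illustrative):

```shell
# Inside the VM; /dev/vda is assumed to be the virtio-blk device.
# 8 jobs x iodepth 32 keeps 256 I/Os outstanding across the queues.
fio --name=randread --filename=/dev/vda --direct=1 --rw=randread \
    --bs=4k --iodepth=32 --numjobs=8 --time_based --runtime=60 \
    --ioengine=libaio --group_reporting
```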
Checked with spdk_top and our LightOS lb_top; the results show that only one core
is occupied in vhost.
And according to information from the Lightbits target side, there is only one active
NVMe/TCP connection between the host server and the Lightbits storage server.
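One way to verify the connection count from the host side is to count established TCP sessions to the target's NVMe/TCP port (4420 is the default; substitute the actual Lightbits port if it differs):

```shell
# Count established TCP connections to the NVMe/TCP port.
# "tail -n +2" skips the ss header line.
ss -tn state established '( dport = :4420 )' | tail -n +2 | wc -l
```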
I checked the vhost bdev module documentation, which says:
“All the I/O polling will be pinned to the least occupied CPU core within given cpumask.”
I suspect this behavior is by design, but I still want to get confirmation. Is
there any plan to improve this?
SPDK mailing list -- spdk(a)lists.01.org
To unsubscribe send an email to spdk-leave(a)lists.01.org