I'd like to introduce pynvme to all SSD developers and users. It is an
open-source project based on SPDK. With some newly developed features and
Python wrappers, people can write test scripts for NVMe SSDs in Python
efficiently. Please give it a try, and let me know what you think.
It's here: https://github.com/cranechu/pynvme
I'm trying to get a better understanding of the zero copy support that was
added in 19.04 (I believe).
It seems that malloc is the only bdev that supports it, and bdevperf the
only driver that is actually coded to make use of zero copy on the read path.
If my understanding is correct, are there any plans to support zero copy in
the nvmf target?
Downtime announcement for SPDK Jenkins CI system.
5:00 PM May 31 - 8:00 AM June 03
Scheduled electrical maintenance.
The continuous integration system for SPDK will be unavailable during that time, and all incoming commits will be scheduled to run after power-on on June 03.
Thank you for continuously improving the patch. I have seen great improvement.
I have an item to discuss with you, and I am sending it to the mailing list first.
Please correct me if I'm wrong, or let's discuss it on Trello by creating a board for FC if this question is reasonable.
The NVMe-oF FC transport utilizes two types of WWN.
A WWPN (World Wide Port Name) is a unique ID for each FC port; each port on an FC HBA has its own WWPN.
A WWNN (World Wide Node Name) is assigned to an FC HBA as a whole.
If I understand correctly,
the FC low-level driver (LLD) reads the persistent WWPN and WWNN and reports them to the SPDK NVMe-oF transport,
and the SPDK NVMe-oF transport then configures listeners accordingly.
Besides, nvmf_fc_listen is implemented as a no-op,
so WWPN and WWNN are read-only for the SPDK NVMe-oF transport.
But it would be very desirable if we could change the WWNN and WWPN to suit our own needs.
The .INI config file has been deprecated; could you consider adding FC support to the nvmf_subsystem_add_listener RPC?
Implementation options may be, for example:
- Pass the WWNN/WWPN pair to the LLD, and the LLD changes the WWPN of the HBA whose WWNN matches.
- Use the fact that each FC port has its own PCI address: the user passes a trio of PCI address, WWNN, and WWPN, and if the PCI address is the lowest one on the FC HBA, the WWNN can be changed as well.
If the FC HBA doesn't allow changing its WWNN or WWPN, we can output an error message.
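Purely as a sketch of what such an RPC call might carry, here is a hypothetical nvmf_subsystem_add_listener payload for FC. The "nn-0x...:pn-0x..." traddr convention follows SPDK's FC transport ID format; the pci_address field does not exist in the current RPC and is only an assumption illustrating the PCI-address/WWNN/WWPN trio option above, and the NQN and WWN values are placeholders.

```python
import json

# Hypothetical nvmf_subsystem_add_listener request for an FC listener.
# traddr encodes the WWNN/WWPN pair as "nn-<WWNN>:pn-<WWPN>" per SPDK's
# FC transport ID convention. "pci_address" is NOT part of the existing
# RPC; it is included only to illustrate the trio-based proposal.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_listener",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "listen_address": {
            "trtype": "FC",
            "adrfam": "FC",
            "traddr": "nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5",
            "trsvcid": "none",
            "pci_address": "0000:5e:00.0",  # hypothetical extension
        },
    },
}
print(json.dumps(request, indent=2))
```

If the trio option were adopted, the LLD would validate pci_address against the HBA before applying the WWNN/WWPN change and return an error otherwise.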
For the last week some of us have been testing out Slack instead of IRC,
and we've been very happy with it. It solves a couple of key problems that the
community has reported, namely:
1) It's based on HTTP, so people can access it through their firewalls
2) The server automatically logs the conversation for everyone, so people don't
need to run a separate IRC bouncer like ZNC.
So we're going to make the full switch. The links on https://spdk.io will be
updated shortly. You can join us in Slack using the following link:
The Slack team is "spdk-team" if you are prompted.
We are trying to understand the SPDK NVMe driver and its functionality. For this
we are studying the code and documentation at 'https://spdk.io/doc/', but
we are having difficulty understanding the functions. Can you please point
us to any other documents besides these?
Thanks & Regards,
Mir Tanveer Islam
I was wondering if it is possible to run apps like the nvmf target as a
non-root user.
I read the sections about pagemap/IOMMU in both the DPDK and SPDK docs, and I'm
not sure from the text whether it is possible or not.
In any case, I enabled IOMMU via the grub command line and loaded vfio-pci
but I'm still getting an error:
Starting SPDK v19.04 / DPDK 19.02.0 initialization...
[ DPDK EAL parameters: nvgrid --no-shconf -c 0x1 --log-level=lib.eal:6
EAL: VFIO support initialized
EAL: Cannot obtain physical addresses: Success. Only vfio will function.
error allocating rte services array
EAL: FATAL: rte_service_init() failed
EAL: rte_service_init() failed
Failed to initialize DPDK
I'm seeing this message in dmesg after enabling the IOMMU in grub (but nothing else):
[ 0.000000] DMAR: IOMMU enabled
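For what it's worth, "DMAR: IOMMU enabled" in dmesg only confirms the kernel parsed the cmdline flag; whether vfio can actually use the IOMMU shows up as populated groups under /sys/kernel/iommu_groups. A small diagnostic sketch (standard Linux sysfs path, nothing SPDK-specific; the cmdline hint in the message assumes an Intel platform):

```python
import os

def iommu_groups_present(sysfs_root="/sys/kernel/iommu_groups"):
    """Return True if the kernel exposes at least one IOMMU group.

    vfio-pci can only provide DMA mappings (and thus let SPDK run
    without reading /proc/self/pagemap) when groups exist here.
    """
    try:
        return len(os.listdir(sysfs_root)) > 0
    except FileNotFoundError:
        return False

if __name__ == "__main__":
    if iommu_groups_present():
        print("IOMMU groups found: vfio-pci should be usable")
    else:
        print("No IOMMU groups: re-check intel_iommu=on on the kernel cmdline")
```

If no groups show up even with the flag set, BIOS VT-d settings are another common culprit.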
I'm SPDK core maintainer responsible for the vhost library.
I saw your virtio-vhost-user patch series on GerritHub. I know you've
been talking about it at an SPDK community meeting over a month ago,
although I was on holiday at that time.
I wanted to give you some background of what is currently going on
around SPDK vhost.
SPDK currently keeps an internal copy of DPDK's rte_vhost with a
couple of storage-specific changes. We tried to upstream those
changes to DPDK, but they were rejected. Although they were
critical to support vhost-scsi and vhost-blk, they also altered how
vhost-net operated, and that was DPDK's major concern. We kept the
internal rte_vhost copy but still haven't decided whether to try to
switch to DPDK's version or to completely diverge from DPDK and
maintain our own vhost library. At one point we also put together a
list of rte_vhost issues, one of which was a vhost-user specification
non-compliance that eventually made our vhost-scsi unusable with QEMU
2.12+. The amount of "fixes" that rte_vhost required was huge.
Instead, we tried to create a new, even lower-level vhost library in
DPDK. The initial API proposal was warmly welcomed, but a few
months later, after a PoC implementation was ready, the whole library
was rejected as well. [One of the concerns the new library would
address was creating an abstraction and environment for
virtio-vhost-user, but apparently the DPDK team didn't find that useful at
the time.]
We still have the rte_vhost copy in SPDK and we still haven't decided
on its future strategy, which is why we were so reluctant to review
your patches.
Just last week we seem to have finally made some progress, as a DPDK
patch that would potentially allow SPDK to use DPDK's rte_vhost
directly was approved for DPDK 19.05. Around the end of February I
believe SPDK will try to stop using its rte_vhost copy and switch to
DPDK's rte_vhost with the mentioned patch.
After that happens, I would like to ask you to rebase your patches on
latest DPDK's rte_vhost and resubmit them to DPDK. I can certainly
help with upstreaming vfio no-iommu support in SPDK and am even
willing to implement registering non-2MB-aligned memory, but rte_vhost
changes belong in DPDK.
I'm sorry for the previous lack of transparency in this matter.
Anyone who has been paying attention knows that we've had a bit of a challenge with the nightly tests, in that often failures go for days (or forever) without anyone looking at them, simply because of folks' workloads and the fact that a lot of these failures are intermittent.
We added the IRC messages a while back, hoping that the increased visibility would get some more eyes on them, but it didn't help quite as much as we'd hoped. So, although it's still everyone's job in the community to dig into those failures when you see them, we are going to add a little accountability here as well.
On a weekly basis we'll be asking each of the Intel sites (if others want to volunteer please, please let me know!) to own digging into the failures and getting them dispositioned either by fixing or opening a bug or getting others involved or whatever.
I'll be sending a note out on IRC and the distribution list at the beginning of the week, tapping someone on the shoulder. The expectation is that any failures are looked into by someone at the site and updates are provided over IRC. It can be as simple as "Last night's failure looks like a real bug, logged as issue #xyz" or whatever. We'd like real triage though, so please dig in enough to make an informed recommendation on next steps.
Hope that makes sense, if not please ask.
Gang, you're on the hook this week. Please make sure the PRC team (you can decide who) is posting something on IRC about the previous night's failures if there were any. Next week will be Poland.
PS: I can make a schedule if someone feels strongly about it but I'm not a big fan of that much overhead, I'd rather just rotate US/PRC/Poland and announce it weekly and if there's a big holiday that week in the geo or something it's easy enough to pick another. I'm open to suggestions though....