On behalf of the SPDK community I'm pleased to announce the release of SPDK 20.01.1 LTS!
SPDK 20.01.1 is a bug fix and maintenance LTS release.
The full changelog for this release is available at:
Thanks to everyone for your contributions, participation, and effort!
Is NVMe hotplug functionality, as implemented, limited to the PCIe transport, or does it also work for other transports? If it's currently PCIe-only, are there any plans to extend the support to RDMA/TCP?
Hello SPDK team,
First, thank you for the really cool work you are doing! I am working on a small satellite mission at MIT, which will use a 6-channel SDR (software-defined radio). My task is to ensure that the 6-channel SDR data is saved reliably to an SSD. I am working on the processor (PS) side. Another colleague is working on the FPGA PL (programmable logic) side. The FPGA will provide DMA (still under development).
My general idea to try to achieve zero-copy performance is:
ADC -> PL Queue
PS Determines SDRAM temporary storage
DMA from PL Queue -> SDRAM (likely using libiio)
PS Determines when a block (or other storage unit, TBD) is ready to go to SSD
-> because our data is always the same size/format, I believe we can use an analytic/deterministic equation to determine the storage location
DMA from SDRAM -> SSD (likely using SPDK)
When it's time to 'process' the data (which has to happen at a later time due to the power limits of the satellite):
PS determines (analytic equation) data to be processed
DMA from SSD -> SDRAM (w SPDK)
PS informs PL of data available
PL processes data via DMA
PL informs new 'processed data queue' ready
PS prepares location for processed data
DMA from SDRAM -> SSD (w SPDK)
The mission PI (principal investigator) has one main worry about our approach: he is concerned that SSDs can end up with 'bad sectors', like older drives, and that it's usually a big chunk of space that goes bad. We are not concerned about single-event upsets (when just one individual piece of data gets damaged), but rather about a large bad section that could cause us to lose too much data.
I understand that keeping track of 'bad sectors' on 'hard drives' is usually the task of a file system. However, for our purposes a file system appears to be too much overhead, and we have not found one that would help us with a 'zero copy' setup. But we need to be able to know when there are data errors (read data is garbage) and not slow down if there is a bad write request (if a bad write slows down the system, then we lose 'new' data that should have been saved).
I read the documentation as much as possible and did a good amount of online searching for the 'expected' response from SPDK when the SSD has errors, but I could not find any information on that. I would greatly appreciate it if anyone on the team could guide me in the right direction (maybe it's pointing to some standard that SPDK adheres to [NVMe & PCIe], but even then I was not sure how SPDK returns such errors and what their expected timing is).
Hopefully this was clear and it's in the scope of this list; if it's not, please ask me to clarify, or I would greatly appreciate it if you pointed me in the right direction.
- Trying to do zero-copy 6-channel data saving from FPGA to SSD (PCIe NVMe)
- If I don't want a full file-system, how can I handle 'bad sector' type errors in the SSD?
- Is there any spec of expectation on the timing impacts when an error occurs?
I was trying to set up SPDK on Linux kernel 5.6.4 and get started with the SPDK examples. I hope I have posted this query in the right forum thread.
I was trying to set up SPDK on Linux kernel 5.6.4. While trying to execute the NVMe arbitration example, I am hitting a "Starting I/O failed" error. I added debug prints to get the error value, and it was -22 (-EINVAL).
The attached NVMe SSD has a single controller and a single 64 MB namespace formatted with a 4K sector size. Since the basic script was failing, I tried reducing the I/O count to 1 and the I/O size to 4096 bytes to keep the I/O profile simple, but I still hit the same error.
Is there any configuration I am missing while setting up SPDK?
I have tried most of the other examples too; NVMe admin commands and queue creation work fine, but the examples are unable to submit NVM I/O commands to the controller due to the mentioned error.
Starting SPDK v20.07-pre git sha1 e69375b / DPDK 19.11.0 initialization...
Thanks in advance.
Recently, some patches on the SPDK master branch moved the compiled binaries to different locations, e.g.,
1. Application binaries in the "spdk/app" folder were moved to the "spdk/build/bin" folder,
2. Related binaries from the "spdk/examples" folder were moved to the "spdk/build/examples" folder,
3. The two fio plugin binaries were moved to the "spdk/build/fio" folder.
Please take note of these changes if you use the SPDK master branch for development or debugging work.
Is it possible to allow iSCSI initiator from any IP/subnet to connect to a target?
The JSON-RPC method `iscsi_create_initiator_group` requires initiator IPs and netmasks, and I am not sure how to configure it to allow any initiator.
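For reference, if the "ANY" wildcard that SPDK's legacy INI config accepted for InitiatorName/Netmask also works through the RPC, the request would look something like the fragment below. The "ANY" values are an assumption to verify against your SPDK version, not a confirmed answer:

```json
{
  "jsonrpc": "2.0",
  "method": "iscsi_create_initiator_group",
  "id": 1,
  "params": {
    "tag": 1,
    "initiators": ["ANY"],
    "netmasks": ["ANY"]
  }
}
```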
Please, join us for a 3-day virtual experience with technical presentations, engaging demos, and interactive sessions with industry leaders, experts, developers, and user communities.
We hope to see you there!
SPDK mailing list -- spdk(a)lists.01.org
To unsubscribe send an email to spdk-leave(a)lists.01.org
This is primarily for Shuhei but please feel free, anyone, to respond :)
Adding support for Intel's next-generation offload engine is going well. (Note: the feature is not available in HW yet; I'm using a simulator for dev/test.) Currently support exists, or is about to land on master, for:
Copy, fill, dual-cast, CRC32C, compare and the ability to submit batches of commands.
Currently these are only being used by a new tool in /examples/accel/perf, but once they all land and I've added some more tests, we'll start using them in SPDK modules - the most notable uses will be for CRC32C (iSCSI) and DIF/DIX throughout the stack. There will be other uses (compare, fill, copy, etc.) as well, but those are the big ones.
I've just now started looking at DIF/DIX and have determined that using these within SPDK won't be quite as straightforward as some of the others. I'll explain what I'm thinking after briefly summarizing the DSA DIF/DIX functions (more detail is available in the public spec at https://software.intel.com/content/www/us/en/develop/download/intel-data-...)
Note: there is no SGL support in any of these, all are single src and/or dst:
* DIF Check: The DIF Check operation computes the Data Integrity Field (DIF) on the source data and compares the computed DIF to the DIF contained in the source data.
* DIF Insert: The DIF Insert operation copies memory from the Source Address to the Destination Address, while computing the Data Integrity Field (DIF) on the source data and inserting the DIF into the output data.
* DIF Strip: The DIF Strip operation copies memory from the Source Address to the Destination Address, removing the Data Integrity Field (DIF). It optionally computes the DIF on the source data and compares the computed DIF to the DIF contained in the source data.
* DIF Update: The DIF Update operation copies memory from the Source Address to the Destination Address. It optionally computes the Data Integrity Field (DIF) on the source data and compares the computed DIF to the DIF contained in the data. It simultaneously computes the DIF on the source data using Destination DIF fields in the descriptor and inserts the computed DIF into the output data.
Upon initial review of the relatively complex DIF/DIX implementation we have in SPDK, I have the following observations that I'm hoping to get some feedback on:
* It looks like we require SGLs in most if not all cases. I can go through them one by one, but I wanted to get an initial feel, mainly from Shuhei, on how the lack of SGL support impacts our ability to use DIF/DIX offload w/DSA before I start adding support :)
* With the exception of DIF Check, all of the DSA functions include a copy (I can only assume they had in mind a use case where data is moved from a host buffer into a different memory subsystem in preparation for DMA'ing to disk). It looks like most if not all of our calculations are done on fixed buffers. I see a few copy functions in dif.c, but I don't see them used anywhere.
I'm almost thinking the DSA functions are too "simple" for our current implementation, but I wonder if there's some refactoring we can do to make use of them. I don't know if the DSA CRC32C engine calculates the exact same CRC as the DIF/DIX functions, but if so (I can verify), at a minimum maybe we can just accelerate the CRCs called from functions within dif.c.
Thoughts? We can chat in a community meeting soon too, but email might be easier to get us all on the same page first.