As discussed during the Summit last week, we believe SPDK needs support for a dynamic threading model. An RFC patch has been pushed upstream for review.
This patch is a starting point for our proposed changes. Improvements will be made in subsequent patches.
The description below is taken from https://github.com/spdk/spdk/issues/308
SPDK needs to support a dynamic threading model where reactors are NOT bound to lcores.
Many applications need SPDK to support a threading model that:
1. Does not assume a static number of threads
2. Does not bind threads to cores (this burns up cores)
3. Does not assume all threads use the same polling model
Removing these assumptions from the SPDK libraries will allow:
* Different applications to share the SPDK libraries on the same platform
* E.g. FC-NVMe, RDMA-NVMe, and NVMe
* Different platforms to support the same applications with the same libraries
* E.g. a 4 core platform and a 128 core platform, a PowerPC and NFS traffic
* Different workloads at different scales
* E.g. 1 NVMF Host with 1 Subsystem and 1 Namespace, or 16 NVMF Hosts with 100 Subsystems and 1,000 namespaces.
* In particular, in SPDK, NVMF threads need to come and go depending upon the “NVMF load”.
More Dynamic Use Cases Coming
With the advent of FC-NVMe (which uses NPIV to virtualize FC ports), NVMF Subsystem Ports and Host Ports are not static. Different Hosts and Subsystems can have a different number of Ports, and Ports can be dynamically added and removed from the configuration. This means:
* The same platform may end up having a different number of Subsystem ports at various points in its lifecycle
* The SPDK FC-NVMe application does NOT know up front how many ports it will have.
Expected Behavior
1. SPDK libraries should not assume a static number of threads
2. SPDK libraries should bind threads to cores only optionally - supporting both static and dynamic threading models
3. SPDK libraries should support a hybrid polling model (modified run to completion)
Current Behavior
1. SPDK libraries assume a static number of threads
2. SPDK libraries bind threads to cores
3. SPDK libraries assume all threads use the same polling model
Proposal to solve above Use Cases:
Use the spdk_nvmf_poll_group (PG) as the unit of threading abstraction
* Use PG as the fundamental unit on which a thread operates
* The spdk_thread will be a “virtual” thread that gets tied into a PG (1-1 relationship)
* Create PGs as and when hardware ports (and associated queue-pairs) come to life.
* No dependency between a PG and a “real” thread.
* A PG can be picked up by any “real” thread and worked upon. The PG contains everything needed for IO handling.
* The PG continues to contain an spdk_thread. The spdk_thread continues to use the same mechanisms for IO channels to different NS, etc.
* The PG contains vendor data. E.g. a “ring” for depositing asynchronous callback events from the backend OR management events that come from external modules.
* spdk_thread contains thread_context that points to a PG instead of a reactor. So messages from the library get routed to the PG “ring” instead of a thread/reactor event ring.
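As a rough illustration of the proposal (hypothetical names, not the actual SPDK API), the following sketch models a poll group that owns its own message ring and is not bound to any lcore, so any real thread can pick it up and drain it:

```python
import queue
import threading

class PollGroup:
    """Stand-in for spdk_nvmf_poll_group: owns a message ring and is not
    bound to any particular OS thread or lcore."""
    def __init__(self, name):
        self.name = name
        self.ring = queue.SimpleQueue()  # stand-in for the PG "ring"

    def send_msg(self, fn, *args):
        # In the proposal, messages from the libraries land here instead
        # of in a reactor event ring.
        self.ring.put((fn, args))

    def poll(self):
        # Any "real" thread may call this: drain all pending messages.
        drained = 0
        while True:
            try:
                fn, args = self.ring.get_nowait()
            except queue.Empty:
                return drained
            fn(*args)
            drained += 1

results = []
pg = PollGroup("pg0")
pg.send_msg(results.append, "connect qpair")
pg.send_msg(results.append, "io completion")

# A worker thread -- not the one that deposited the messages -- picks up
# the poll group and processes everything it contains.
t = threading.Thread(target=pg.poll)
t.start(); t.join()
print(results)  # -> ['connect qpair', 'io completion']
```

The point of the sketch is only the decoupling: the ring travels with the poll group, so the PG can migrate between real threads without losing pending work.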
Understanding the intent of the event library, we believe this is the place for customization. However, the current event library assumes a threading model that's part of the util library, and many of the other SPDK core libraries assume the same threading model as the util library. If the SPDK util library can be modified to support these dynamic threading use cases, all applications would be able to use the SPDK framework more effectively.
Steps to Reproduce
This is an enhancement. There is no bug.
Context (Environment including OS version, SPDK version, etc.)
Would like to provide these enhancements in V18.07.
There is a “Switch to old UI” link at the bottom of the page.
I plan to try the new UI soon, but I changed mine for now. The query for my main GerritHub view doesn’t work with the new UI yet.
On 5/23/18, 2:59 PM, "SPDK on behalf of Liu, Changpeng" <spdk-bounces(a)lists.01.org on behalf of changpeng.liu(a)intel.com> wrote:
My GerritHub review changed too :).
From: SPDK [mailto:email@example.com] On Behalf Of Meneghini, John
Sent: Wednesday, May 23, 2018 2:30 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] What is PolyGerrit?
Is it just me or did GerritHub Review get a whole new look?
I was compiling the SPDK vhost app on an Intel(R) Xeon(R) CPU E5-2650 v4 @
2.20GHz server, and then running it on another server with an Intel(R) Xeon(R) CPU
E5-2650 v2 @ 2.60GHz.
The app would immediately crash, with below kernel message:
traps: spdk_vhost trap invalid opcode ip:468569 sp:7ffee17d0ad0
error:0 in spdk_vhost[400000+a7000]
Incompatible CPU instruction set, I guess?
Some of you may have noticed that, at the suggestion of someone in a recent community meeting, we've added an issue reporting template so that when GitHub issues are reported, there are some standard sections requesting information to help everyone more efficiently understand and triage the report. We're looking for a balance between getting just enough info to get a basic understanding of what's happening (and to repro if not too complex), while avoiding asking for so much data up front that it discourages people from reporting potential bugs.
Hopefully everyone is finding it useful, but if you think there should be some small adjustments along the way, please feel free to simply submit a patch. The file is in the root of the repo and called ISSUE_TEMPLATE.md.
From: Rao, Anu H
Sent: Tuesday, May 15, 2018 8:08 AM
To: 'spdk(a)lists.01.org' <spdk(a)lists.01.org>
Subject: SPDK Summit Live streamed!
We are happy to announce we will be live streaming the summit today and tomorrow. Please tune in to the link below to watch the sessions as they are delivered. Here is the agenda for the event.
Software Product Line Manager
Data Center Group, Intel Corporation
New SPDK user here. I've successfully used SPDK to run an NVMeoF target and expose entire NVMe disks as individual subsystems. So far so good.
Now I would like to be able to carve my NVMe disks up into small volumes and expose those volumes through the NVMeoF target. It looks like I'll need to make my NVMe devices into bdevs, make lvols on those bdevs, and then I can expose those lvols through the NVMeoF target? Is that the best approach?
If that's the right tactic, can I define my lvols as part of nvmf.conf.in? I don't see this included in the documentation or sample nvmf.conf.in, so maybe not.
Assuming not, I'm trying to do it on the command line, but I have a failure trying to make a bdev from my NVMe device. What would cause this? Do I have to start SPDK somewhere before I can run this action?
rsa@tppjoe08:~/spdk/scripts$ sudo ./rpc.py construct_nvme_bdev -b spdkdev1 -t PCIe -a 0000:3d:00.0
Error while connecting to /var/tmp/spdk.sock
Error details: [Errno 2] No such file or directory
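A likely cause of that error is simply that no SPDK target application is running yet: rpc.py talks to a running app over /var/tmp/spdk.sock, so the socket does not exist until the target starts. A hedged sketch of the order of operations follows (RPC method names as of SPDK ~18.x; the sizes and names here are made-up examples, so verify everything against `./rpc.py -h` on your build):

```shell
# 1. Start the NVMe-oF target first (this creates /var/tmp/spdk.sock)
sudo app/nvmf_tgt/nvmf_tgt -c etc/spdk/nvmf.conf &

# 2. Then attach the NVMe device as a bdev (creates e.g. spdkdev1n1)
sudo ./rpc.py construct_nvme_bdev -b spdkdev1 -t PCIe -a 0000:3d:00.0

# 3. Build a logical volume store on the bdev and carve out small volumes
sudo ./rpc.py construct_lvol_store spdkdev1n1 lvs0
sudo ./rpc.py construct_lvol_bdev -l lvs0 small_vol1 1024

# 4. Expose the lvol bdevs as namespaces through NVMe-oF subsystems
#    (the nvmf subsystem RPCs vary by release; see ./rpc.py -h)
```

So yes, the NVMe bdev -> lvol store -> lvol bdev -> NVMe-oF namespace chain is the usual approach; the failure above just means step 1 hadn't happened yet.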
At the summit, Ben mentioned that during the performance benchmarking, 4 subsystems were tied to 4 individual cores. I had a query regarding that.
Please correct me if I'm wrong, but my understanding is that the IO qpairs are created on the cores on which the incoming connections arrive, regardless of which subsystem the connection is to, and this currently happens in a round robin way. All the subsystem-specific configuration changes happen on the thread on which the subsystem was created, but IO can happen on any core on which the connection came in. So how do we manage to map a subsystem to a core? We are trying to do performance benchmarking here and this will be very useful to us.
The first few minutes were missing from the YouTube link, so I might have missed the answer there.
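As a rough model of the behavior described above (hypothetical, not SPDK code): if incoming connections are handed out round robin across cores, then qpairs for a single subsystem end up spread over all cores unless the initiator's connections are steered deliberately:

```python
from collections import defaultdict
from itertools import cycle

cores = [0, 1, 2, 3]
next_core = cycle(cores)  # round-robin assignment of incoming connections

# 8 incoming connections, all to the same (made-up) subsystem NQN
qpairs_per_core = defaultdict(list)
for conn_id in range(8):
    core = next(next_core)
    qpairs_per_core[core].append(("nqn.2018-05.io.spdk:subsys1", conn_id))

for core in cores:
    print(core, qpairs_per_core[core])
# Every core ends up servicing qpairs for the same subsystem, so the
# target does not naturally "map" a subsystem to one core by itself.
```

Under this model, pinning a subsystem to a core for benchmarking would have to come from controlling which connections arrive (e.g. one initiator per subsystem with a bounded qpair count), not from the target's assignment policy.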
I am getting the error messages below. Not sure what is wrong. Can you please help?
vagrant@localhost:/spdk$ sudo examples/bdev/hello_world/hello_bdev
Starting SPDK v18.07-pre / DPDK 18.02.0 initialization...
[ DPDK EAL parameters: hello_bdev -c 0x1 --file-prefix=spdk_pid29472 ]
EAL: Detected 4 lcore(s)
EAL: Multi-process socket /var/run/.spdk_pid29472_unix
EAL: Probing VFIO support...
app.c: 521:spdk_app_start: *NOTICE*: Total cores available: 1
reactor.c: 669:spdk_reactors_init: *NOTICE*: Occupied cpu socket mask is 0x1
reactor.c: 453:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0
hello_bdev.c: 152:hello_start: *NOTICE*: Successfully started the application
hello_bdev.c: 161:hello_start: *ERROR*: Could not find the bdev: Malloc0
app.c: 605:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
hello_bdev.c: 250:main: *ERROR*: ERROR starting application
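The error above just means the app could not find a bdev named Malloc0, because no bdevs were created at startup. A minimal sketch, assuming the legacy INI config format of that era, where a [Malloc] section creates bdevs named Malloc0, Malloc1, ... (the filename bdev.conf and the 64 MB size are arbitrary examples):

```
[Malloc]
  NumberOfLuns 1
  LunSizeInMB 64
```

Saving that as bdev.conf and running `sudo examples/bdev/hello_bdev/hello_bdev -c bdev.conf` should give the example a Malloc0 to open.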
From: "Kariuki, John K" <john.k.kariuki(a)intel.com>
Date: Friday, May 11, 2018 at 2:34 PM
To: "Kariuki, John K" <john.k.kariuki(a)intel.com>
Subject: Pre-requisite for SPDK lab at the SPDK Summit
Thank you for registering to attend the SPDK hands-on lab at the SPDK Summit on May 16th. Here are the instructions to install the software required to participate in this lab. Please complete the installation before coming to the lab.
If you need further help, please send me an email.
Look forward to seeing you on Wednesday!