Getting "Not enough ioat channels found. Check that ioatdma driver is unloaded." error
by Kevin Wilson
Hi all,
I am using SPDK from git on an x86_64 platform. It builds successfully, but I get the
"Not enough ioat channels found. Check that ioatdma driver is unloaded." error
after running the ioat verify example, even though "lsmod | grep ioatdma" shows that
the ioatdma module is not loaded. Any idea what the reason for this could be?
/examples/ioat/verify/verify
Starting SPDK v18.10-pre / DPDK 18.05.0 initialization...
[ DPDK EAL parameters: verify --no-shconf -c 0x1 --legacy-mem
--file-prefix=spdk_pid21116 ]
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/spdk_pid21116/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
User configuration:
Run time: 10 seconds
Core mask: 0x1
Queue depth: 32
Not enough ioat channels found. Check that ioatdma driver is unloaded.
[email protected]:/work/src/spdk# lsmod | grep ioatdma
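(For reference, these are the additional checks I plan to run next; a sketch only, assuming SPDK's scripts/setup.sh is the right tool for binding the I/OAT DMA devices to a userspace driver:)
sudo ./scripts/setup.sh status    # show device/driver binding, including the DMA engines
sudo ./scripts/setup.sh           # bind NVMe and I/OAT devices for SPDK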
Regards,
Kevin
Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
by Isaac Otsiabah
Hi Tomasz, I got the SPDK patch. My network topology is simple, but making the network IP address accessible to the iscsi_tgt application and to VPP is not working. From my understanding, VPP is started first on the target host, and then the iscsi_tgt application is started after the network setup is done (please correct me if this is not the case).
 -------
|       |  initiator (192.168.2.10)
 -------
    |
    |
    |
 --------------------------------------------  192.168.2.0/24
    |
    |
    |  192.168.2.20
 --------------
| vpp, vppctl  |
| iscsi_tgt    |
 --------------
Both systems have a 10 GbE NIC.
(On the target server):
I set up the VPP environment variables through the sysctl command.
I unbound the kernel driver and loaded the DPDK uio_pci_generic driver for the first 10 GbE NIC (device address 0000:82:00.0).
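The unbind/bind step was roughly the following (a sketch; the dpdk-devbind.py path may differ on your system):
./usertools/dpdk-devbind.py -u 0000:82:00.0
modprobe uio_pci_generic
./usertools/dpdk-devbind.py -b uio_pci_generic 0000:82:00.0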
That worked, so I started the vpp application; from the startup output, the NIC is in use by VPP:
[[email protected] ~]# vpp -c /etc/vpp/startup.conf
vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: kubeproxy_plugin.so (kube-proxy data plane)
load_one_plugin:184: Loaded plugin: l2e_plugin.so (L2 Emulation)
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface (experimetal))
load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address Translation)
load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
load_one_plugin:184: Loaded plugin: stn_plugin.so (VPP Steals the NIC for Container integration)
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/memif_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/nat_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
vpp[4168]: dpdk_config:1240: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0000:82:00.0 --master-lcore 0 --socket-mem 64,64
EAL: No free hugepages reported in hugepages-1048576kB
EAL: VFIO support initialized
DPDK physical memory layout:
Segment 0: IOVA:0x2200000, len:2097152, virt:0x7f919c800000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: IOVA:0x3e000000, len:16777216, virt:0x7f919b600000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: IOVA:0x3fc00000, len:2097152, virt:0x7f919b200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: IOVA:0x54c00000, len:46137344, virt:0x7f917ae00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: IOVA:0x1f2e400000, len:67108864, virt:0x7f8f9c200000, socket_id:1, hugepage_sz:2097152, nchannel:0, nran
STEP1:
Then, from the vppctl command prompt, I set the IP address for the 10G interface and brought it up. From VPP, I can ping the initiator machine and vice versa, as shown below.
vpp# show int
Name Idx State Counter Count
TenGigabitEthernet82/0/0 1 down
local0 0 down
vpp# set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
vpp# set interface state TenGigabitEthernet82/0/0 up
vpp# show int
Name Idx State Counter Count
TenGigabitEthernet82/0/0 1 up
local0 0 down
vpp# show int address
TenGigabitEthernet82/0/0 (up):
192.168.2.20/24
local0 (dn):
/* ping initiator from vpp */
vpp# ping 192.168.2.10
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.0779 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0396 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0316 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0368 ms
64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0327 ms
(On the initiator):
/* ping vpp interface from initiator*/
[[email protected] ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.031 ms
STEP2:
However, when I start the iscsi_tgt server, it does not have access to the 192.168.2.x subnet above, so I ran these commands on the target server to create a veth pair and connect it to a VPP host-interface as follows:
ip link add name vpp1out type veth peer name vpp1host
ip link set dev vpp1out up
ip link set dev vpp1host up
ip addr add 192.168.2.201/24 dev vpp1host
vpp# create host-interface name vpp1out
vpp# set int state host-vpp1out up
vpp# set int ip address host-vpp1out 192.168.2.202
vpp# show int addr
TenGigabitEthernet82/0/0 (up):
192.168.2.20/24
host-vpp1out (up):
192.168.2.202/24
local0 (dn):
vpp# trace add af-packet-input 10
/* From host, ping vpp */
[[email protected] ~]# ping -c 2 192.168.2.202
PING 192.168.2.202 (192.168.2.202) 56(84) bytes of data.
64 bytes from 192.168.2.202: icmp_seq=1 ttl=64 time=0.130 ms
64 bytes from 192.168.2.202: icmp_seq=2 ttl=64 time=0.067 ms
/* From vpp, ping host */
vpp# ping 192.168.2.201
64 bytes from 192.168.2.201: icmp_seq=1 ttl=64 time=.1931 ms
64 bytes from 192.168.2.201: icmp_seq=2 ttl=64 time=.1581 ms
64 bytes from 192.168.2.201: icmp_seq=3 ttl=64 time=.1235 ms
64 bytes from 192.168.2.201: icmp_seq=4 ttl=64 time=.1032 ms
64 bytes from 192.168.2.201: icmp_seq=5 ttl=64 time=.0688 ms
Statistics: 5 sent, 5 received, 0% packet loss
From the target host, I still cannot ping the initiator (192.168.2.10); the traffic does not go through the VPP interface, so my VPP interface connection is not correct.
Please, how does one create the VPP host interface and connect it so that host applications (i.e. iscsi_tgt) can communicate on the 192.168.2 subnet? In STEP2, should I use a different subnet like 192.168.3.x, turn on IP forwarding, and add a route to the routing table?
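For example, the routed variant I have in mind would look roughly like this (a sketch only; the 192.168.3.x addresses are hypothetical, and I am not sure this is the intended approach):
/* on the target host */
ip addr add 192.168.3.1/24 dev vpp1host
ip route add 192.168.2.0/24 via 192.168.3.2
sysctl -w net.ipv4.ip_forward=1
/* in vppctl */
vpp# set int ip address host-vpp1out 192.168.3.2/24
vpp# set int state host-vpp1out up
The initiator would then also need a return route to 192.168.3.0/24 via 192.168.2.20 for replies to come back.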
Isaac
From: Zawadzki, Tomasz [mailto:[email protected]]
Sent: Thursday, April 12, 2018 12:27 AM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com>
Cc: Harris, James R <james.r.harris(a)intel.com>; Verkamp, Daniel <daniel.verkamp(a)intel.com>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
Hello Isaac,
Are you using the following patch? (I suggest cherry-picking it.)
https://review.gerrithub.io/#/c/389566/
The SPDK iSCSI target can be started without a specific interface to bind on, by not specifying any target nodes or portal groups. They can be added later via RPC: http://www.spdk.io/doc/iscsi.html#iscsi_rpc.
Please see https://github.com/spdk/spdk/blob/master/test/iscsi_tgt/lvol/iscsi.conf for an example of a minimal iSCSI config.
Suggested flow of starting up applications is:
1. Unbind interfaces from kernel
2. Start VPP and configure the interface via vppctl
3. Start SPDK
4. Configure the iSCSI target via RPC; at this point it should be possible to use the interface configured in VPP (example RPC calls are sketched below)
Please note, there is some leeway here. The only requirement is having VPP app started before SPDK app.
Interfaces in VPP can be created (like tap or veth) and configured at runtime, and are available for use in SPDK as well.
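For example, the RPC configuration could look roughly like this (a sketch only; please check the exact RPC names and arguments against ./scripts/rpc.py in your tree):
./scripts/rpc.py construct_malloc_bdev -b Malloc0 64 512
./scripts/rpc.py add_portal_group 1 192.168.2.20:3260
./scripts/rpc.py add_initiator_group 2 ANY 192.168.2.0/24
./scripts/rpc.py construct_target_node disk1 "Data Disk1" "Malloc0:0" "1:2" 64 -d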
Let me know if you have any questions.
Tomek
From: Isaac Otsiabah [mailto:[email protected]]
Sent: Wednesday, April 11, 2018 8:47 PM
To: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com>
Cc: Harris, James R <james.r.harris(a)intel.com>; Verkamp, Daniel <daniel.verkamp(a)intel.com>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com>
Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
Hi Tomasz, Daniel and Jim, I am trying to test VPP, so I built VPP on a CentOS 7.4 (x86_64) system, built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt application.
For VPP, I first unbind the NIC from the kernel and start the VPP application:
./usertools/dpdk-devbind.py -u 0000:07:00.0
vpp unix {cli-listen /run/vpp/cli.sock}
Unbinding the NIC takes down the interface; however, the ./app/iscsi_tgt/iscsi_tgt -m 0x101 application needs an interface to bind to during startup, so it fails to start. The information at:
"Running SPDK with VPP
VPP application has to be started before SPDK iSCSI target, in order to enable usage of network interfaces. After SPDK iSCSI target initialization finishes, interfaces configured within VPP will be available to be configured as portal addresses. Please refer to Configuring iSCSI Target via RPC method<http://www.spdk.io/doc/iscsi.html#iscsi_rpc>."
is not clear, because the instructions at "Configuring iSCSI Target via RPC method" suggest the iscsi_tgt server must already be running for one to be able to execute the RPC commands. But how do I get the iscsi_tgt server running without an interface to bind on during its initialization?
Please, can any of you help explain how to run the SPDK iscsi_tgt application with VPP? For instance, what should change in iscsi.conf after unbinding the NIC, how do I get the iscsi_tgt server to start without an interface to bind to, and what address should be assigned to the Portal in iscsi.conf, etc.?
I would appreciate it if anyone could help. Thank you.
Isaac
PSA: Please rebase patches on latest master.
by Howell, Seth
Hi All,
We recently added a Fedora 28 machine to the Chandler SPDK build pool. It runs scan-build with Clang 6.0 and catches quite a few cases that previous versions of the tool missed. This morning I officially enabled that test, so you may see scan-build failures if your patches are not rebased on the latest master.
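If it helps, rebasing is roughly the following (a sketch; adjust remote and branch names for your setup):
git fetch origin
git rebase origin/master
git push origin HEAD:refs/for/master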
Thank you,
Seth Howell
Question about how SPDK_BDEV_MODULE_REGISTER() gets invoked
by Zeyuan Hu
Hello,
I'm playing around with hello_bdev.c with an NVMe SSD. I got the following
correct output:
-----
Starting SPDK v18.10-pre / DPDK 18.05.0 initialization...
[ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --legacy-mem
--file-prefix=spdk_pid16492 ]
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/spdk_pid16492/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
app.c: 594:spdk_app_start: *NOTICE*: Total cores available: 1
reactor.c: 718:spdk_reactors_init: *NOTICE*: Occupied cpu socket mask is 0x1
reactor.c: 492:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on
socket 0
vbdev_passthru.c: 389:vbdev_passthru_init: *NOTICE*: conf parse matched:
Malloc1
vbdev_passthru.c: 493:vbdev_passthru_register: *NOTICE*: Match on Malloc1
vbdev_passthru.c: 527:vbdev_passthru_register: *NOTICE*: io_device created
at: 0x0x2598eb0
vbdev_passthru.c: 538:vbdev_passthru_register: *NOTICE*: bdev opened
vbdev_passthru.c: 549:vbdev_passthru_register: *NOTICE*: bdev claimed
vbdev_passthru.c: 560:vbdev_passthru_register: *NOTICE*: pt_bdev registered
vbdev_passthru.c: 561:vbdev_passthru_register: *NOTICE*: created pt_bdev
for: PT0
EAL: PCI device 0000:02:00.0 on NUMA socket 0
EAL: probe driver: 8086:953 spdk_nvme
/home/zeyuanhu/rustfs/examples/hello_nvme_bdev/hello_nvme_bdev.c:
157:hello_start: *NOTICE*: Successfully started the application
/home/zeyuanhu/rustfs/examples/hello_nvme_bdev/hello_nvme_bdev.c:
164:hello_start: *NOTICE*: bdev name: Malloc0
/home/zeyuanhu/rustfs/examples/hello_nvme_bdev/hello_nvme_bdev.c:
165:hello_start: *NOTICE*: bdev product_name: Malloc disk
/home/zeyuanhu/rustfs/examples/hello_nvme_bdev/hello_nvme_bdev.c:
166:hello_start: *NOTICE*: bdev module name: malloc
/home/zeyuanhu/rustfs/examples/hello_nvme_bdev/hello_nvme_bdev.c:
164:hello_start: *NOTICE*: bdev name: Malloc1
/home/zeyuanhu/rustfs/examples/hello_nvme_bdev/hello_nvme_bdev.c:
165:hello_start: *NOTICE*: bdev product_name: Malloc disk
/home/zeyuanhu/rustfs/examples/hello_nvme_bdev/hello_nvme_bdev.c:
166:hello_start: *NOTICE*: bdev module name: malloc
/home/zeyuanhu/rustfs/examples/hello_nvme_bdev/hello_nvme_bdev.c:
164:hello_start: *NOTICE*: bdev name: PT0
/home/zeyuanhu/rustfs/examples/hello_nvme_bdev/hello_nvme_bdev.c:
165:hello_start: *NOTICE*: bdev product_name: passthru
/home/zeyuanhu/rustfs/examples/hello_nvme_bdev/hello_nvme_bdev.c:
166:hello_start: *NOTICE*: bdev module name: passthru
/home/zeyuanhu/rustfs/examples/hello_nvme_bdev/hello_nvme_bdev.c:
164:hello_start: *NOTICE*: bdev name: Nvme0n1
/home/zeyuanhu/rustfs/examples/hello_nvme_bdev/hello_nvme_bdev.c:
165:hello_start: *NOTICE*: bdev product_name: NVMe disk
/home/zeyuanhu/rustfs/examples/hello_nvme_bdev/hello_nvme_bdev.c:
166:hello_start: *NOTICE*: bdev module name: nvme
/home/zeyuanhu/rustfs/examples/hello_nvme_bdev/hello_nvme_bdev.c:
186:hello_start: *NOTICE*: Opening the bdev Nvme0n1
/home/zeyuanhu/rustfs/examples/hello_nvme_bdev/hello_nvme_bdev.c:
195:hello_start: *NOTICE*: Opening io channel
/home/zeyuanhu/rustfs/examples/hello_nvme_bdev/hello_nvme_bdev.c:
222:hello_start: *NOTICE*: Writing to the bdev
/home/zeyuanhu/rustfs/examples/hello_nvme_bdev/hello_nvme_bdev.c:
116:write_complete: *NOTICE*: bdev io write completed successfully
/home/zeyuanhu/rustfs/examples/hello_nvme_bdev/hello_nvme_bdev.c:
131:write_complete: *NOTICE*: Reading io
/home/zeyuanhu/rustfs/examples/hello_nvme_bdev/hello_nvme_bdev.c:
86:read_complete: *NOTICE*: Read string from bdev : Hello World!
/home/zeyuanhu/rustfs/examples/hello_nvme_bdev/hello_nvme_bdev.c:
97:read_complete: *NOTICE*: Stopping app
-----
Now, I am trying to write the same program in Rust and call the SPDK C interface.
Everything seems to work until there is an error, "Could not find the bdev
Nvme0n1". The output of my program looks like:
-----
Starting SPDK v18.10-pre / DPDK 18.05.0 initialization...
[ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --legacy-mem
--file-prefix=spdk_pid16981 ]
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/spdk_pid16981/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
app.c: 594:spdk_app_start: *NOTICE*: Total cores available: 1
reactor.c: 718:spdk_reactors_init: *NOTICE*: Occupied cpu socket mask is 0x1
reactor.c: 492:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on
socket 0
Successfully started the application
Could not find the bdev Nvme0n1
app.c: 678:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
thread 'main' panicked at 'ERROR starting application', src/main.rs:178:9
note: Run with `RUST_BACKTRACE=1` for a backtrace.
-----
I use the same bdev.conf file for both my .c program and .rs program:
-----
[Passthru]
PT Malloc1 PT0
[Malloc]
NumberOfLuns 2
LunSizeInMB 16
[Nvme]
TransportID "trtype:PCIe traddr:0000:02:00.0" Nvme0
-----
By comparing the output of the two programs, I notice the following output
is missing:
-----
vbdev_passthru.c: 389:vbdev_passthru_init: *NOTICE*: conf parse matched:
Malloc1
vbdev_passthru.c: 493:vbdev_passthru_register: *NOTICE*: Match on Malloc1
vbdev_passthru.c: 527:vbdev_passthru_register: *NOTICE*: io_device created
at: 0x0x2598eb0
vbdev_passthru.c: 538:vbdev_passthru_register: *NOTICE*: bdev opened
vbdev_passthru.c: 549:vbdev_passthru_register: *NOTICE*: bdev claimed
vbdev_passthru.c: 560:vbdev_passthru_register: *NOTICE*: pt_bdev registered
vbdev_passthru.c: 561:vbdev_passthru_register: *NOTICE*: created pt_bdev
for: PT0
EAL: PCI device 0000:02:00.0 on NUMA socket 0
EAL: probe driver: 8086:953 spdk_nvme
-----
From https://lists.01.org/pipermail/spdk/2016-October/000135.html, I notice the
problem might be due to the bdev modules not being properly registered. One
possible reason, I think, might be that SPDK_BDEV_MODULE_REGISTER() from
bdev_module.h is not included in my Rust bindings (i.e. Rust requires a
function declaration in Rust format for every C interface function). However,
I'm not sure whether a function missing from the header file is the root cause
of my problem. I tried to debug the .c program to see the whole module
registration process, but I cannot see it from gdb, because once
spdk_reactors_start() gets called from spdk_app_start(), all the module
registration seems already done.
I'm wondering if anyone can shed some light on my problem. Specifically:
is SPDK_BDEV_MODULE_REGISTER() from bdev_module.h critical for module
registration if I only statically link the SPDK library as a third-party
library? Is the failure to register the bdev modules the root cause of my
problem? And what is the code path for the bdev module registration process?
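For context, my current guess (a simplified sketch with hypothetical names, not the actual SPDK source) is that the macro boils down to constructor-based registration along the following lines, which would explain the failure if the linker drops the module object files when only my Rust code pulls in the static SPDK libraries:
-----
/* Sketch of constructor-based module registration (hypothetical names). */
#include <stdio.h>

struct my_bdev_module {
    const char *name;
    struct my_bdev_module *next;
};

static struct my_bdev_module *g_module_list;

static void my_bdev_module_list_add(struct my_bdev_module *m)
{
    m->next = g_module_list;
    g_module_list = m;
}

/* The registration macro emits a constructor that runs before main(). */
#define MY_BDEV_MODULE_REGISTER(mod) \
    __attribute__((constructor)) static void mod ## _ctor(void) \
    { \
        my_bdev_module_list_add(&mod); \
    }

static struct my_bdev_module g_malloc_module = { .name = "malloc" };
MY_BDEV_MODULE_REGISTER(g_malloc_module)

int main(void)
{
    /* By the time main() starts, the constructor has already registered the module. */
    for (struct my_bdev_module *m = g_module_list; m != NULL; m = m->next) {
        printf("registered module: %s\n", m->name);
    }
    return 0;
}
-----
If that is roughly how it works, then nothing in my Rust binary references those constructors directly, so a static link might silently drop them unless the module libraries are wrapped in -Wl,--whole-archive ... -Wl,--no-whole-archive; but I may be wrong about this.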
Thanks much and sorry for my lengthy email.
Best regards,
Zack
Re: [SPDK] Announcing the 2018 SPDK Developer Meetup hosted by NetApp
by Meneghini, John
This is just a note to say the SPDK Meetup is only 6 weeks away.
If you plan to attend, please register ASAP so we can properly plan for the number of attendees and the size of the meeting room.
We currently have 28 people registered.
Location: NetApp, Building 3, Quincy Conference Room, 1345 Crossman Ave, Sunnyvale, CA 94089 <https://goo.gl/maps/v8d8axmh5jT2>
Dates: 10/16/2018 8:00am - 6:00pm
10/17/2018 8:00am - 5:00pm*
Thanks,
--
John Meneghini
Data ONTAP SCSI/NVMe-oF Target Architect
978-930-3519
johnm(a)netapp.com
From: SPDK <spdk-bounces(a)lists.01.org> on behalf of "Luse, Paul E" <paul.e.luse(a)intel.com>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
Date: Thursday, August 9, 2018 at 2:30 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: [SPDK] Announcing the 2018 SPDK Developer Meetup hosted by NetApp
Come and join the 2nd annual SPDK Developer Meetup, this year hosted by NetApp in beautiful Sunnyvale, CA! It's an excellent opportunity for networking, learning, and making forward progress on the code, and generally making the community more productive. We'll be covering all sorts of topics, but a major theme this year will be NVMe-oF (including FC support), so if you're interested in these topics you won't want to miss this meetup.
Please see the SPDK blog for complete details including a registration link:
http://www.spdk.io/blog/
Thanks!
Paul
Running bdevio without a conf file
by Luse, Paul E
I'm working on making the crypto module RPC-only and haven't done much with RPCs before. I'm trying to run the target in one window (using -I 1) and use RPCs to create an NVMe bdev with a crypto bdev on top of it. The RPCs are working, as I can confirm with both output and gdb under the SPDK target.
However, I assumed I could run bdevio in another window with the same shared memory parameter, e.g. "./bdevio -I 1", and that it should work; however, it does not see any bdevs in bdevio_construct_targets().
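To be concrete, my two-window setup is roughly this (a sketch; paths assumed, not exact):
/* window 1: the SPDK target with shared memory id 1, then bdevs created via rpc.py */
<spdk target app> -I 1
/* window 2: bdevio pointed at the same shared memory id */
./test/bdev/bdevio/bdevio -I 1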
Am I missing something high level or really simple here?
Thx
Paul