I am trying to get SPDK running on an ARM64 server machine.
After a bit of digging into the details, I figured out that I
need to port the NVMe driver to ARM64 and maybe update
a few more files.
It would be very helpful if someone could share more insight
on what else needs to be done, and whether I am on the right
path to getting SPDK working on the ARM64 architecture.
Thanks in advance!
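For what it is worth, the architecture-specific surface of a userspace NVMe driver is small: mostly memory barriers and MMIO register access, so an ARM64 port largely means supplying those. A minimal sketch of the barrier definitions such a port would need, modeled on the usual x86 equivalents; the spdk_*mb names here are illustrative assumptions, not the exact SPDK macros:

/* Illustrative barrier macros an ARM64 port would have to supply;
 * the spdk_* names are assumptions, not the actual SPDK API. */
#if defined(__aarch64__)
#define spdk_mb()   __asm volatile("dmb sy" ::: "memory") /* full barrier  */
#define spdk_rmb()  __asm volatile("dmb ld" ::: "memory") /* load barrier  */
#define spdk_wmb()  __asm volatile("dmb st" ::: "memory") /* store barrier */
#else /* x86-64 */
#define spdk_mb()   __asm volatile("mfence" ::: "memory")
#define spdk_rmb()  __asm volatile("lfence" ::: "memory")
#define spdk_wmb()  __asm volatile("sfence" ::: "memory")
#endif

As far as I know, the DPDK/EAL environment underneath SPDK already supports ARM64, so the remaining work is mainly auditing the driver code for x86-only assumptions.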
I need to run DPDK networking and access an NVMe device in the same process.
rte_eal_init() initializes the network PMD drivers normally,
but spdk_nvme_probe() triggers the network device probe a second time, and rte exits with an error.
To work around this failure, I register the NVMe controller driver as a DPDK PMD.
Now I can pass the NVMe device's PCI address to EAL initialization with the `-w' parameter, along with the network devices.
The PMD template I use is attached below.
Is there another way to run DPDK and SPDK in the same process?
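As a rough illustration (not the actual attachment), such a stub PMD might look like this, assuming the DPDK 16.11 rte_pci_driver layout and the RTE_PMD_REGISTER_PCI macro. The probe callback only claims the device so that rte_eal_init() succeeds, leaving the real controller setup to spdk_nvme_probe():

#include <rte_pci.h>

/* Example ID table: Intel DC P3700 (8086:0953); extend for other
 * controllers as needed. */
static const struct rte_pci_id nvme_stub_ids[] = {
    { RTE_PCI_DEVICE(0x8086, 0x0953) },
    { .vendor_id = 0 }, /* sentinel */
};

static int
nvme_stub_probe(struct rte_pci_driver *drv, struct rte_pci_device *dev)
{
    (void)drv;
    (void)dev;
    /* Claim the device and do nothing: SPDK takes over later. */
    return 0;
}

static struct rte_pci_driver nvme_stub_pmd = {
    .id_table = nvme_stub_ids,
    .probe    = nvme_stub_probe,
};

RTE_PMD_REGISTER_PCI(nvme_stub, nvme_stub_pmd);

With the stub registered, the NVMe device can be whitelisted alongside the NICs, e.g. passing "-w 0000:07:00.0" together with each NIC's "-w <BDF>" in the EAL arguments.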
I had two issues with SPDK. The first issue was an error that appeared when I ran your setup script (setup.sh) to bind my NVMe device to the uio_pci_generic driver (instead of nvme).
The 'dmesg' command shows the following messages:
[ 1742.763091] nvme 0000:07:00.0: Cancelling I/O 0 QID 0
[ 1742.763093] nvme 0000:07:00.0: Cancelling I/O 1 QID 0
[ 1742.763094] nvme 0000:07:00.0: Cancelling I/O 2 QID 0
[ 1742.763095] nvme 0000:07:00.0: Cancelling I/O 3 QID 0
[ 1742.779157] ------------[ cut here ]------------
[ 1742.779176] WARNING: CPU: 4 PID: 7023 at ../lib/idr.c:1073 nvme_release_instance.isra.35+0x23/0x30 [nvme]()
[ 1742.779176] ida_remove called for id=0 which is not allocated.
[ 1742.779177] Modules linked in: uio_pci_generic fuse rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd sunrpc fscache rte_kni(OEN) igb_uio(OEN) uio vtsspp(OEN) af_packet sep4_0(OEN) socperf2_0(OEN) snd_hda_codec_hdmi pax(OEN) iscsi_ibft iscsi_boot_sysfs binfmt_misc iTCO_wdt iTCO_vendor_support snd_hda_codec_realtek cpufreq_conservative cpufreq_userspace msr ext4 crc16 mbcache jbd2 hp_wmi sparse_keymap rfkill coretemp kvm crct10dif_pclmul crc32_pclmul crc32c_intel aesni_intel aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd dm_mod pcspkr serio_raw sb_edac edac_core i2c_i801 lpc_ich mfd_core ixgbe tpm_infineon snd_hda_intel mdio dca snd_hda_codec snd_hwdep snd_pcm snd_page_alloc snd_timer e1000e snd ptp wmi mei_me processor pps_core mei soundcore shpchp button xfs libcrc32c hid_generic
[ 1742.779197] usbhid raid0 md_mod sr_mod cdrom sd_mod isci(X) ehci_pci xhci_hcd libsas ehci_hcd ahci scsi_transport_sas libahci ata_generic firewire_ohci usbcore libata firewire_core nvme crc_itu_t usb_common sg scsi_mod autofs4
[ 1742.779203] Supported: No, Unsupported modules are loaded
[ 1742.779205] CPU: 4 PID: 7023 Comm: nvmesetup Tainted: G OE NX 3.12.62-60.62-default #1
[ 1742.779206] Hardware name: Hewlett-Packard HP Z620 Workstation/158A, BIOS J61 v03.65 12/19/2013
[ 1742.779207] 0000000000000000 ffffffff8151f96a ffff880fe7445da0 ffffffff8182949d
[ 1742.779210] ffffffff81058f02 ffff880fe664a840 ffff880fe7445df0 ffff880fe664a870
[ 1742.779212] ffff880fe664a880 ffff880fe664a800 ffffffff81058f8c ffffffff818294f8
[ 1742.779214] Call Trace:
[ 1742.779223] [<ffffffff8100474d>] dump_trace+0x7d/0x2d0
[ 1742.779226] [<ffffffff81004a34>] show_stack_log_lvl+0x94/0x170
[ 1742.779228] [<ffffffff81005cd1>] show_stack+0x21/0x50
[ 1742.779232] [<ffffffff8151f96a>] dump_stack+0x5d/0x78
[ 1742.779236] [<ffffffff81058f02>] warn_slowpath_common+0x82/0xc0
[ 1742.779238] [<ffffffff81058f8c>] warn_slowpath_fmt+0x4c/0x50
[ 1742.779242] [<ffffffffa0073163>] nvme_release_instance.isra.35+0x23/0x30 [nvme]
[ 1742.779246] [<ffffffffa0073234>] nvme_free_dev+0xc4/0xf0 [nvme]
[ 1742.779251] [<ffffffff812e2e13>] pci_device_remove+0x33/0xb0
[ 1742.779255] [<ffffffff813a713a>] __device_release_driver+0x7a/0xf0
[ 1742.779258] [<ffffffff813a71ce>] device_release_driver+0x1e/0x30
[ 1742.779260] [<ffffffff813a5df5>] unbind_store+0xb5/0xe0
[ 1742.779271] [<ffffffff81216bae>] sysfs_write_file+0xbe/0x140
[ 1742.779276] [<ffffffff811a86a8>] vfs_write+0xb8/0x1e0
[ 1742.779278] [<ffffffff811a90c8>] SyS_write+0x48/0xa0
[ 1742.779283] [<ffffffff8152dd09>] system_call_fastpath+0x16/0x1b
[ 1742.779286] [<00007fd09ecb3d10>] 0x7fd09ecb3d0f
[ 1742.779287] ---[ end trace 19365efcbfb255a9 ]---
The second issue: although I got the above-mentioned error, I could run your hello_world and performance examples successfully. However,
after some runs, the program suddenly fails to probe the device again, with the following error:
Initializing NVMe Controllers
Attaching to 0000:07:00.00
Initialization timed out in state 1
Sometimes the timeout is in state 3 instead of state 1. I do not know whether this problem is related to the previous one. Can you help me with that?
BTW: my NVMe device is an Intel Corporation DC P3700 SSD,
and the output of the 'uname -a' command is:
Linux lu31650764 3.12.62-60.62-default #1 SMP Thu Aug 4 09:06:08 UTC 2016 (b0e5a26) x86_64 x86_64 x86_64 GNU/Linux
To facilitate moving SPDK to a more community-based model for development, we've
posted our plans for our next release on the wiki (https://github.com/spdk/spdk/wiki).
I'd like to open up a thread here to solicit feedback and to answer any
questions. As we continue to move forward, we'll become increasingly open and
transparent about our upcoming releases, and if there is enough community
involvement we'll include community efforts on the official release roadmap as well.
I have just started using SPDK. I need to implement an NVMe over Fabrics target using SPDK, and I need to use the virtual subsystem. I see that there is support for RAM disks, and I am using it as a reference for my virtual implementation.
There is a macro called SPDK_BDEV_MODULE_REGISTER in the file lib/bdev/malloc/blockdev_malloc.c.
How is this getting called? Who calls it?
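For background, registration macros of this kind are usually implemented with a compiler constructor: the macro emits a function marked __attribute__((constructor)) that the C runtime invokes before main(), and that function adds the module to a global list the bdev layer walks during initialization, so no code ever calls it explicitly. A minimal sketch of the pattern, using illustrative names rather than SPDK's actual internals:

#include <stdio.h>

struct bdev_module {
    const char *name;
    int (*init)(void);
    struct bdev_module *next;
};

static struct bdev_module *g_bdev_modules;

static void
bdev_module_list_add(struct bdev_module *m)
{
    m->next = g_bdev_modules;
    g_bdev_modules = m;
}

/* Illustrative stand-in for SPDK_BDEV_MODULE_REGISTER: the constructor
 * runs before main(), so the module lands on the list with no explicit
 * call site anywhere in the program. */
#define BDEV_MODULE_REGISTER(mod)                                    \
    __attribute__((constructor)) static void                         \
    mod ## _register(void) { bdev_module_list_add(&(mod)); }

static int malloc_bdev_init(void) { printf("malloc bdev init\n"); return 0; }

static struct bdev_module malloc_module = {
    .name = "malloc",
    .init = malloc_bdev_init,
};
BDEV_MODULE_REGISTER(malloc_module)

int
main(void)
{
    /* Stands in for the bdev subsystem initializer: it simply walks
     * the list that the constructors populated before main() began. */
    for (struct bdev_module *m = g_bdev_modules; m != NULL; m = m->next) {
        m->init();
    }
    return 0;
}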
I saw that the NVMe over Fabrics initiator is available in userspace as of SPDK version 16.12. Does that mean this version will be released in December 2016? The previous versions have followed a naming convention denoting the release year and month.