Hi Liang,
You need to bind the NVMe device to a UIO or VFIO driver first. If you want to bind it to uio,
make sure that you disable the IOMMU in grub during boot, and that the uio_pci_generic
kernel module is loaded. Currently, setup.sh in SPDK tries to bind to VFIO first and then
UIO; you can slightly modify the script to bind to uio first. On my platform, I used uio
for the NVMe SSD device (see the sketch below).
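For reference, a rough sketch of binding one NVMe device to uio_pci_generic by hand (the
PCI address 0000:01:00.0 and the vendor/device ID "8086 0953" are only examples; check
yours with lspci -nn, and this assumes the kernel was booted with the IOMMU disabled):

    # load the generic UIO driver
    sudo modprobe uio_pci_generic
    # unbind the device from the kernel nvme (or vfio-pci) driver
    echo 0000:01:00.0 | sudo tee /sys/bus/pci/devices/0000:01:00.0/driver/unbind
    # let uio_pci_generic claim devices with this vendor/device ID
    echo "8086 0953" | sudo tee /sys/bus/pci/drivers/uio_pci_generic/new_id

After that, lspci -k -s 0000:01:00.0 should show uio_pci_generic as the driver in use.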
To make sure that you can enable Ceph + SPDK, do the following after you have bound your
NVMe SSD device to the uio driver:
0. Make sure that hugepages are allocated via scripts/setup.sh.
1. Use examples/nvme/identify/identify in the SPDK folder to find the serial number (SN) of
the NVMe device; BlueStore uses the SN to locate the NVMe SSD (an example invocation is
shown after step 3).
2. Use vstart.sh (the single-machine mode) to test enabling Ceph + SPDK, e.g. with the
following command:
MON=3 OSD=1 MDS=1 MGR=1 RGW=1 ../src/vstart.sh -n -x -l -b
You need to modify vstart.sh so that the generated ceph.conf contains the following
settings in the [osd] section:
bluestore_block_db_path = ""
bluestore_block_db_size = 0
bluestore_block_db_create = false
bluestore_block_wal_path = ""
bluestore_block_wal_size = 0
bluestore_block_wal_create = false
bluestore_spdk_mem = 2048 # amount of memory (in MB) for SPDK; you can change it.
bluestore_block_path = spdk:55cd2e404be053be # replace with the SN of your device.
3. For your cluster, the Ceph configuration needs to follow the settings described in
step 2. Currently only BlueStore can use the SPDK user-space NVMe driver.
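For step 1, this is roughly how I would look up the SN (assuming the SPDK examples have
been built; the grep pattern is just an example, check the full identify output on your
machine):

    cd src/spdk
    ./examples/nvme/identify/identify | grep -i "serial number"

The serial number printed there is the value that goes after the spdk: prefix in
bluestore_block_path.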
PS: I used Ceph commit 99c03063248b1ff137e0a8686002c203dac959e4, so the compilation may be
slightly different for you. For Ceph + SPDK issues, I suggest that you also post your
questions to the Ceph community, where more people can answer them.
Best Regards
Ziye Yang
From: SPDK [mailto:[email protected]] On Behalf Of 杨亮
Sent: Thursday, April 26, 2018 10:42 PM
To: Storage Performance Development Kit <spdk@lists.01.org>
Subject: Re: [SPDK] enable spdk on ceph
Hi Ziye,
I have run src/spdk/scripts/setup.sh as below:
[[email protected] ceph-ansible]# ../ceph/src/spdk/scripts/setup.sh
0005:01:00.0 (1179 010e): nvme -> vfio-pci
But I cannot find /dev/uio. How can I deploy an OSD on the NVMe SSD?
Thank you very much.
At 2018-04-26 17:18:36, "Yang, Ziye" <[email protected]> wrote:
Hi,
I think that you may have missed the following steps:
1. Unbind the Linux kernel NVMe driver and replace it with UIO/VFIO.
2. Allocate hugepages when enabling SPDK.
Generally, you can use the following script: src/spdk/scripts/setup.sh. If you need more
hugepage memory, you can use: sudo HUGEMEM=8192 scripts/setup.sh (this allocates 8192 MB
of hugepage memory).
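If it helps, you can verify what was actually reserved with a quick, SPDK-independent
check:

    grep Huge /proc/meminfo

HugePages_Total and HugePages_Free should reflect the amount you asked setup.sh to
allocate.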
BTW, currently SPDK in Ceph will not greatly increase performance. These days, Red Hat is
leading a task force to refactor the Ceph OSD.
Best Regards
Ziye Yang
From: SPDK [mailto:[email protected]] On Behalf Of 杨亮
Sent: Thursday, April 26, 2018 6:07 PM
To: spdk@lists.01.org
Subject: [SPDK] enable spdk on ceph
Hi,
I am trying to enable SPDK on Ceph. I got the error below. Could someone help me?
Thank you very much.
1. The SPDK code is compiled by default:
if(CMAKE_SYSTEM_PROCESSOR MATCHES "i386|i686|amd64|x86_64|AMD64|aarch64")
  option(WITH_SPDK "Enable SPDK" ON)
else()
  option(WITH_SPDK "Enable SPDK" OFF)
endif()
2. bluestore_block_path = spdk:5780A001A5KD
3. Prepare and activate the OSD:
ceph-disk prepare --zap-disk --cluster ceph --cluster-uuid $ceph_fsid --bluestore /dev/nvme0n1
ceph-disk activate /dev/nvme0n1p1
The activate step failed; the error information is below:
[[email protected] ceph-ansible-hxt-0417]# ceph-disk activate /dev/nvme0n1p1
/usr/lib/python2.7/site-packages/ceph_disk/main.py:5689: UserWarning:
*******************************************************************************
This tool is now deprecated in favor of ceph-volume.
It is recommended to use ceph-volume for OSD deployments. For details see:
http://docs.ceph.com/docs/master/ceph-volume/#migrating
*******************************************************************************
warnings.warn(DEPRECATION_WARNING)
got monmap epoch 1
2018-04-26 17:57:21.897 ffffa4090000 -1 bluestore(/var/lib/ceph/tmp/mnt.5lt4X5)
_setup_block_symlink_or_file failed to create block symlink to spdk:5780A001A5KD: (17)
File exists
2018-04-26 17:57:21.897 ffffa4090000 -1 bluestore(/var/lib/ceph/tmp/mnt.5lt4X5) mkfs
failed, (17) File exists
2018-04-26 17:57:21.897 ffffa4090000 -1 OSD::mkfs: ObjectStore::mkfs failed with error
(17) File exists
2018-04-26 17:57:21.897 ffffa4090000 -1 ** ERROR: error creating empty object store in
/var/lib/ceph/tmp/mnt.5lt4X5: (17) File exists
mount_activate: Failed to activate
Traceback (most recent call last):
  File "/sbin/ceph-disk", line 11, in <module>
    load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5772, in run
    main(sys.argv[1:])
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5718, in main
    main_catch(args.func, args)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5746, in main_catch
    func(args)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3796, in main_activate
    reactivate=args.reactivate,
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3559, in mount_activate
    (osd_id, cluster) = activate(path, activate_key_template, init)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3736, in activate
    keyring=keyring,
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3187, in mkfs
    '--setgroup', get_ceph_group(),
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 577, in command_check_call
    return subprocess.check_call(arguments)
  File "/usr/lib64/python2.7/subprocess.py", line 542, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/ceph-osd', '--no-mon-config', '--cluster', 'ceph', '--mkfs', '-i', u'0', '--monmap', '/var/lib/ceph/tmp/mnt.5lt4X5/activate.monmap', '--osd-data', '/var/lib/ceph/tmp/mnt.5lt4X5', '--osd-uuid', u'8683718d-0734-4043-827c-3d1ec4f65422', '--setuser', 'ceph', '--setgroup', 'ceph']' returned non-zero exit status 250