Hi Pawel
I just wanted your confirmation; please correct me if I am wrong:
sdc is the SSD
and sdd is the NVMe drive.
Regards
Nitin
On Thu, Oct 12, 2017 at 9:34 PM, Nitin Gupta <nitin.gupta981(a)gmail.com> wrote:
Hi PawelX
Sorry, I forgot to add the command output. Both the NVMe device and the AIO device (sdf) are now enabled.
I have worked out that sdd2 is on the NVMe drive,
but sdc also shows PCI address 00:02.0, so I am confused about whether it is the SSD.
[root@localhost ~]# ls -l /sys/block | grep host
lrwxrwxrwx 1 root root 0 Oct 12 11:54 sda -> ../devices/pci0000:00/0000:00:01.1/host1/target1:0:0/1:0:0:0/block/sda
lrwxrwxrwx 1 root root 0 Oct 12 11:54 sdb -> ../devices/pci0000:00/0000:00:02.0/virtio0/host2/target2:0:0/2:0:0:0/block/sdb
lrwxrwxrwx 1 root root 0 Oct 12 11:54 sdc -> ../devices/pci0000:00/0000:00:02.0/virtio0/host2/target2:0:1/2:0:1:0/block/sdc
lrwxrwxrwx 1 root root 0 Oct 12 11:54 sdd -> ../devices/pci0000:00/0000:00:02.0/virtio0/host2/target2:0:2/2:0:2:0/block/sdd
[root@localhost ~]# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 SCSI storage controller: Red Hat, Inc Virtio SCSI
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 8G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 7.5G 0 part
├─VolGroup-lv_root (dm-0) 253:0 0 6.7G 0 lvm /
└─VolGroup-lv_swap (dm-1) 253:1 0 816M 0 lvm [SWAP]
sdb 8:16 0 256M 0 disk
sdc 8:32 0 223.6G 0 disk
sdd 8:48 0 419.2G 0 disk
├─sdd1 8:49 0 20.2M 0 part
└─sdd2 8:50 0 419.2G 0 part
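
To double-check on my side, I could also read the SCSI model strings straight from sysfs (a sketch; device names as in the lsblk output above, and the model string should differ per backend type, as in the lsblk -S example quoted below):

cat /sys/block/sdc/device/model   # model string the vhost target reports for sdc
cat /sys/block/sdd/device/model   # model string the vhost target reports for sdd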
On Thu, Oct 12, 2017 at 7:54 PM, Wodkowski, PawelX <pawelx.wodkowski(a)intel.com> wrote:
> Steps are already provided. Just use ‘*ls -l /sys/block/ | grep host*’
> to check which SCSI target each /dev/sdX device is.
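>
> (A minimal sketch of the same check as a loop over sysfs, printing each
> disk together with its SCSI target; assumes the virtio-scsi layout shown
> in this thread:)
>
> for dev in /sys/block/sd*; do
>     # each /sys/block/sdX entry is a symlink whose path embeds targetH:C:T
>     echo "$(basename "$dev") -> $(readlink "$dev" | grep -o 'target[0-9:]*')"
> done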
>
>
>
> *From:* Nitin Gupta [mailto:nitin.gupta981@gmail.com]
> *Sent:* Thursday, October 12, 2017 3:58 PM
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>; Harris, James R <james.r.harris(a)intel.com>; Wodkowski, PawelX <pawelx.wodkowski(a)intel.com>
> *Subject:* Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim / Pawel
>
>
>
> Thanks for your response. Could you please help me find the SSD device as well?
>
> I am trying to map the SSD device too; please find the output below.
>
>
>
> 1. On my host machine, the output of lsscsi is:
>
>
>
> -bash-4.2# lsscsi
>
> [0:0:0:0] disk ATA INTEL SSDSC2BB24 0039 /dev/sda
>
> [1:0:0:0] disk ATA ST31000524NS SN11 /dev/sdb
>
> [2:0:0:0] disk ATA ST31000524NS SN12 /dev/sdc
>
> [3:0:0:0] disk ATA INTEL SSDSC2BB24 0039 /dev/sdd
>
> [5:0:0:0] disk ATA SAMSUNG MZ7WD120 103Q /dev/sde
>
> [6:0:0:0] disk ATA INTEL SSDSC2BB24 0039 /dev/sdf
>
> [7:0:0:0] disk ATA INTEL SSDSC2BB24 0039 /dev/sdg
>
> [8:0:0:0] disk ATA INTEL SSDSC2BB24 0039 /dev/sdh
>
> [9:0:0:0] disk ATA INTEL SSDSC2BB24 0039 /dev/sdi
>
>
>
> 2. The only change made in the conf file is to map the /dev/sdf device to AIO0:
>
>
>
> # Users must change this section to match the /dev/sdX devices to be
>
> # exported as vhost scsi drives. The devices are accessed using Linux AIO.
>
> [AIO]
>
> #AIO /dev/sda AIO0
>
> AIO /dev/sdf AIO0
>
>
>
> 3. Added the AIO device in the conf:
>
>
>
> # Vhost scsi controller configuration
>
> # Users should change the VhostScsi section(s) below to match the desired
>
> # vhost configuration.
>
> # Name is minimum required
>
> [VhostScsi0]
>
> # Define name for controller
>
> Name vhost.0
>
> # Assign devices from backend
>
> # Use the first malloc device
>
> Dev 0 Malloc0
>
> #Dev 1 Malloc1
>
> Dev 2 Nvme0n1
>
> #Dev 3 Malloc3
>
>
>
> # Use the first AIO device
>
> Dev 1 AIO0
>
> # Use the first Nvme device
>
> #Dev 0 Nvme0n1
>
> #Dev 0 Nvme0n1p0
>
> #Dev 1 Nvme0n1p1
>
> # Use the third partition from second Nvme device
>
> #Dev 3 Nvme1n1p2
>
>
>
> # Start the poller for this vhost controller on one of the cores in
>
> # this cpumask. By default, if not specified, it will use any core in the
>
> # SPDK process.
>
> #Cpumask 0x2
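>
> (Given this config, each Dev index should show up in the guest as the
> matching virtio-scsi target ID, per Pawel's explanation quoted below, so
> the expected layout would be:)
>
> # Dev 0  Malloc0  -> target2:0:0 -> guest sdb (256M)
> # Dev 1  AIO0     -> target2:0:1 -> guest sdc (the 223.6G /dev/sdf SSD)
> # Dev 2  Nvme0n1  -> target2:0:2 -> guest sdd (the 419.2G NVMe namespace)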
>
>
>
>
>
>
>
> 4. Running the command below:
>
>
>
> /usr/local/bin/qemu-system-x86_64 -name sl6.9 -m 1024 -object
> memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
> -nographic -no-user-config -nodefaults -serial
> mon:telnet:localhost:7704,server,nowait -monitor
> mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive
> file=/home/qemu/qcows1,format=qcow2,if=none,id=disk -device
> ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
> -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
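>
> (For reference, the parts of this command that matter for SPDK vhost are
> the shared hugepage memory backend and the vhost-user socket; a trimmed
> skeleton with the other flags elided:)
>
> qemu-system-x86_64 -m 1024 \
>   -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on \
>   -numa node,memdev=mem \
>   -chardev socket,id=char0,path=./spdk/vhost.0 \
>   -device vhost-user-scsi-pci,id=scsi0,chardev=char0
>
> (share=on is required so the vhost target process can access guest memory.)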
>
>
>
>
>
> 5. In the guest VM there is no output for lsscsi:
>
>
>
> [root@localhost ~]# lsblk
>
> NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
>
> sda 8:0 0 8G 0 disk
>
> ├─sda1 8:1 0 500M 0 part /boot
>
> └─sda2 8:2 0 7.5G 0 part
>
> ├─VolGroup-lv_root (dm-0) 253:0 0 6.7G 0 lvm /
>
> └─VolGroup-lv_swap (dm-1) 253:1 0 816M 0 lvm [SWAP]
>
> sdb 8:16 0 256M 0 disk
>
> sdc 8:32 0 223.6G 0 disk
>
> sdd 8:48 0 419.2G 0 disk
>
> ├─sdd1 8:49 0 20.2M 0 part
>
> └─sdd2 8:50 0 419.2G 0 part
>
>
>
> [root@localhost ~]# lspci
>
> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
>
> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
>
> 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
>
> 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
>
> 00:02.0 SCSI storage controller: Red Hat, Inc Virtio SCSI
>
>
>
> [root@localhost ~]# lsscsi
>
> [root@localhost ~]#
>
>
>
> Regards
>
> nitin
>
>
>
>
>
>
>
>
>
> On Wed, Oct 11, 2017 at 5:53 PM, Wodkowski, PawelX <pawelx.wodkowski(a)intel.com> wrote:
>
> Most likely, yes.
>
>
>
> Pawel
>
>
>
> *From:* SPDK [mailto:spdk-bounces@lists.01.org] *On Behalf Of *Nitin Gupta
> *Sent:* Wednesday, October 11, 2017 12:49 PM
>
>
> *To:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Pawel
>
>
>
> Thanks for your reply. Somehow lsblk -S is not working in my guest VM.
>
> Please find below the output of ls -l /sys/block/ | grep host:
>
>
>
> [root@localhost ~]# ls -l /sys/block/ | grep host
>
> lrwxrwxrwx 1 root root 0 Oct 11 06:41 sda -> ../devices/pci0000:00/0000:00:01.1/host1/target1:0:0/1:0:0:0/block/sda
>
> lrwxrwxrwx 1 root root 0 Oct 11 06:41 sdb -> ../devices/pci0000:00/0000:00:02.0/virtio0/host2/target2:0:0/2:0:0:0/block/sdb
>
> lrwxrwxrwx 1 root root 0 Oct 11 06:41 sdc -> ../devices/pci0000:00/0000:00:02.0/virtio0/host2/target2:0:2/2:0:2:0/block/sdc
>
>
>
> It looks like sdc is then the NVMe device; please correct me if I am wrong.
>
>
>
> Regards
>
> Nitin
>
>
>
>
>
> On Wed, Oct 11, 2017 at 3:22 PM, Wodkowski, PawelX <pawelx.wodkowski(a)intel.com> wrote:
>
> Consider this config:
>
>
>
> [Malloc]
>
> NumberOfLuns 8
>
> LunSizeInMb 128
>
> BlockSize 512
>
>
>
> [Split]
>
> Split Nvme0n1 8
>
>
>
> [VhostScsi0]
>
> Name ctrl0
>
> Dev 0 Nvme0n1p0
>
> Dev 1 Malloc0
>
>
>
> This is the output from my VM (for readability I filtered the devices using ‘| grep host’).
>
>
>
> # lspci
>
> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
>
> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
>
> 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
>
> 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
>
> 00:02.0 VGA compatible controller: Device 1234:1111 (rev 02)
>
> 00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 03)
>
> *00:04.0* *SCSI storage controller: Red Hat, Inc Virtio SCSI*
>
>
>
> # ll /sys/block/ | grep host
>
> lrwxrwxrwx 1 root root 0 Oct 11 15:16 sda -> ../devices/pci0000:00/0000:00:01.1/ata2/host1/target1:0:0/1:0:0:0/block/sda/
>
> lrwxrwxrwx 1 root root 0 Oct 11 15:16 sdb -> ../devices/pci0000:00/0000:*00:04.0*/virtio0/host2/*target2:0:0*/2:0:0:0/block/*sdb*/
>
> lrwxrwxrwx 1 root root 0 Oct 11 15:16 sdc -> ../devices/pci0000:00/0000:*00:04.0*/virtio0/host2/*target2:0:1*/2:0:1:0/block/*sdc*/
>
>
>
> As you can see, (in this case) the device reported as “SCSI storage
> controller: Red Hat, Inc Virtio SCSI” is the SPDK vhost device. Now find
> its PCI address and use it to figure out which device is which. In this
> case I have two targets defined in vhost.conf (one is a split of the NVMe
> device and one is a Malloc disk), and two SCSI disks, *sdb* and *sdc*, in
> the VM. I know that in vhost.conf *Dev 0* is *Nvme0n1p0*, so I know that
> target2:0:*0* is the NVMe split device mapped to *sdb*. Analogously,
> target2:0:1 is Malloc0 mapped to *sdc*. To confirm this I run the
> following command:
>
>
>
> # lsblk -S
>
> NAME HCTL TYPE VENDOR MODEL REV TRAN
>
> sda 1:0:0:0 disk ATA QEMU HARDDISK 2.5+ ata
>
> *sdb 2:0:0:0* disk INTEL *Split Disk* 0001
>
> *sdc 2:0:1:0* disk INTEL *Malloc disk* 0001
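>
> (A small sketch of the same mapping taken from sysfs, e.g. for guests
> where lsblk -S is unavailable; assumes host2 is the virtio-scsi
> controller, as above:)
>
> for d in /sys/class/scsi_disk/2:0:*; do
>     hctl=$(basename "$d")            # e.g. 2:0:1:0
>     blk=$(ls "$d/device/block")      # e.g. sdc
>     echo "Dev $(echo "$hctl" | cut -d: -f3) in vhost.conf -> /dev/$blk"
> done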
>
>
>
> Pawel
>
>
>
> *From:* SPDK [mailto:spdk-bounces@lists.01.org] *On Behalf Of *Nitin Gupta
> *Sent:* Wednesday, October 11, 2017 11:07 AM
> *To:* Harris, James R <james.r.harris(a)intel.com>
>
>
> *Cc:* Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject:* Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> I was able to update my guest VM environment, which now runs
> kernel 2.6.32-696.el6.x86_64.
>
> Please find the lspci output below; I was able to load the virtio-scsi
> module as well.
>
> Please help me understand how to identify the NVMe disk mapping.
>
> The mapping below is what we used in etc/spdk/vhost.conf.in:
>
>
>
> Question:
>
>
>
> [VhostScsi0]
>
> # Define name for controller
>
> Name vhost.0
>
> # Assign devices from backend
>
> # Use the first malloc device
>
> Dev 0 Malloc0
>
> #Dev 1 Malloc1
>
> Dev 2 Nvme0n1
>
> #Dev 3 Malloc3
>
>
>
> [root@localhost ~]# lsblk
>
> NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
>
> sda 8:0 0 8G 0 disk
>
> ├─sda1 8:1 0 500M 0 part /boot
>
> └─sda2 8:2 0 7.5G 0 part
>
> ├─VolGroup-lv_root (dm-0) 253:0 0 6.7G 0 lvm /
>
> └─VolGroup-lv_swap (dm-1) 253:1 0 816M 0 lvm [SWAP]
>
> sdb 8:16 0 256M 0 disk
>
> sdc 8:32 0 419.2G 0 disk
>
> ├─sdc1 8:33 0 20.2M 0 part
>
> └─sdc2 8:34 0 419.2G 0 part
>
>
>
> [root@localhost ~]# lspci
>
> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
>
> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
>
> 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
>
> 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
>
> 00:02.0 SCSI storage controller: Red Hat, Inc Virtio SCSI
>
>
>
>
>
> [root@localhost ~]# lsmod
>
> Module Size Used by
>
> ib_ipoib 80839 0
>
> rdma_ucm 15739 0
>
> ib_ucm 12328 0
>
> ib_uverbs 40372 2 rdma_ucm,ib_ucm
>
> ib_umad 13487 0
>
> rdma_cm 36555 1 rdma_ucm
>
> ib_cm 36900 3 ib_ipoib,ib_ucm,rdma_cm
>
> iw_cm 32976 1 rdma_cm
>
> ib_sa 24092 4 ib_ipoib,rdma_ucm,rdma_cm,ib_cm
>
> ib_mad 41340 3 ib_umad,ib_cm,ib_sa
>
> ib_core 82732 10 ib_ipoib,rdma_ucm,ib_ucm,ib_uverbs,ib_umad,rdma_cm,ib_cm,iw_cm,ib_sa,ib_mad
>
> ib_addr 8304 3 rdma_ucm,rdma_cm,ib_core
>
> ipv6 336368 14 ib_ipoib,ib_addr
>
> i2c_piix4 11232 0
>
> i2c_core 29132 1 i2c_piix4
>
> sg 29350 0
>
> ext4 381065 2
>
> jbd2 93284 1 ext4
>
> mbcache 8193 1 ext4
>
> virtio_scsi 10761 0
>
> sd_mod 37158 3
>
> crc_t10dif 1209 1 sd_mod
>
> virtio_pci 7512 0
>
> virtio_ring 8891 2 virtio_scsi,virtio_pci
>
> virtio 5639 2 virtio_scsi,virtio_pci
>
> pata_acpi 3701 0
>
> ata_generic 3837 0
>
> ata_piix 24409 2
>
> dm_mirror 14864 0
>
> dm_region_hash 12085 1 dm_mirror
>
> dm_log 9930 2 dm_mirror,dm_region_hash
>
> dm_mod 102467 8 dm_mirror,dm_log
>
>
>
>
>
> Regards
>
> Nitin
>
>
>
>
>
> On Sat, Oct 7, 2017 at 10:10 AM, Nitin Gupta <nitin.gupta981(a)gmail.com> wrote:
>
> Hi Jim
>
>
>
> Thanks, I will try to load the virtio-scsi module and update you.
>
>
>
> Regards
>
> Nitin
>
>
>
> On Sat, Oct 7, 2017 at 12:01 AM, Harris, James R <james.r.harris(a)intel.com> wrote:
>
> Hi Nitin,
>
>
>
> Can you try loading the virtio-scsi module in the guest VM?
>
>
>
> Without a virtio-scsi driver in the guest, there is no way for the guest
> to see the virtio-scsi device backend created by the SPDK vhost target.
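>
> (For example, in the guest, assuming the module is built for the running
> kernel:)
>
> modprobe virtio_scsi          # load the driver
> lsmod | grep virtio_scsi      # confirm it is loaded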
>
>
>
> Thanks,
>
>
>
> -Jim
>
>
>
>
>
> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Date: *Friday, October 6, 2017 at 1:28 AM
>
>
> *To: *James Harris <james.r.harris(a)intel.com>
> *Cc: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> Thanks for looking at the logs.
>
> Please find attached the vhost log and, below, the qemu command I am invoking:
>
>
>
> /usr/local/bin/qemu-system-x86_64 -m 1024 -object
> memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
> -nographic -no-user-config -nodefaults -serial
> mon:telnet:localhost:7704,server,nowait -monitor
> mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive
> file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device
> ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
> -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
>
>
>
> 2. It looks like there is no virtio-scsi module loaded in the guest VM.
>
>
>
> I ran the lsmod command in the guest VM; please find the output below:
>
>
>
> [root@localhost ~]# lsmod
>
> Module Size Used by
>
> ipt_REJECT 2349 2
>
> nf_conntrack_ipv4 9440 2
>
> nf_defrag_ipv4 1449 1 nf_conntrack_ipv4
>
> iptable_filter 2759 1
>
> ip_tables 17765 1 iptable_filter
>
> ip6t_REJECT 4562 2
>
> nf_conntrack_ipv6 8650 2
>
> nf_defrag_ipv6 12148 1 nf_conntrack_ipv6
>
> xt_state 1458 4
>
> nf_conntrack 79611 3 nf_conntrack_ipv4,nf_conntrack_ipv6,xt_state
>
> ip6table_filter 2855 1
>
> ip6_tables 19424 1 ip6table_filter
>
> ipv6 322291 15 ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6
>
> i2c_piix4 12574 0
>
> i2c_core 31274 1 i2c_piix4
>
> sg 30090 0
>
> ext4 359671 2
>
> mbcache 7918 1 ext4
>
> jbd2 88768 1 ext4
>
> sd_mod 38196 3
>
> crc_t10dif 1507 1 sd_mod
>
> virtio_pci 6653 0
>
> virtio_ring 7169 1 virtio_pci
>
> virtio 4824 1 virtio_pci
>
> pata_acpi 3667 0
>
> ata_generic 3611 0
>
> ata_piix 22652 2
>
> dm_mod 75539 6
>
>
>
>
>
> Please let me know if I am missing something.
>
>
>
> Regards
>
> Nitin
>
>
>
>
>
> On Fri, Oct 6, 2017 at 9:28 AM, Harris, James R <james.r.harris(a)intel.com> wrote:
>
> Thanks Nitin. I don’t see the SPDK vhost log attached though – could you add it?
>
>
>
> Can you also confirm the virtio-scsi module is loaded in your guest VM?
>
>
>
> -Jim
>
>
>
>
>
> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Date: *Thursday, October 5, 2017 at 3:49 AM
>
>
> *To: *James Harris <james.r.harris(a)intel.com>
> *Cc: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> Please find attached the VM guest boot-up log and the host dmesg log.
>
>
>
> Regards
>
> Nitin
>
>
>
> On Wed, Oct 4, 2017 at 11:36 PM, Harris, James R <james.r.harris(a)intel.com> wrote:
>
> Hi Nitin,
>
>
>
> It would be most helpful if you could get lspci working on your guest VM.
>
>
>
> Could you post dmesg contents from your VM and the SPDK vhost log after
> the VM has booted?
>
>
>
> -Jim
>
>
>
>
>
> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Date: *Wednesday, October 4, 2017 at 10:42 AM
> *To: *James Harris <james.r.harris(a)intel.com>
> *Cc: *Storage Performance Development Kit <spdk(a)lists.01.org>
>
>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> I am running this on a remote box with Linux 3.10.
>
> On the guest VM the lspci command is not working, and I am not able to
> install lspci either.
>
> Below is the lsblk -a command output; the -S option is also not available in the guest VM.
>
>
>
> NAME MAJ:MIN RM SIZE RO MOUNTPOINT
>
> ram0 1:0 0 16M 0
>
> ram1 1:1 0 16M 0
>
> ram2 1:2 0 16M 0
>
> ram3 1:3 0 16M 0
>
> ram4 1:4 0 16M 0
>
> ram5 1:5 0 16M 0
>
> ram6 1:6 0 16M 0
>
> ram7 1:7 0 16M 0
>
> ram8 1:8 0 16M 0
>
> ram9 1:9 0 16M 0
>
> ram10 1:10 0 16M 0
>
> ram11 1:11 0 16M 0
>
> ram12 1:12 0 16M 0
>
> ram13 1:13 0 16M 0
>
> ram14 1:14 0 16M 0
>
> ram15 1:15 0 16M 0
>
> loop0 7:0 0 0
>
> loop1 7:1 0 0
>
> loop2 7:2 0 0
>
> loop3 7:3 0 0
>
> loop4 7:4 0 0
>
> loop5 7:5 0 0
>
> loop6 7:6 0 0
>
> loop7 7:7 0 0
>
> sda 8:0 0 8G 0
>
> ├─sda1 8:1 0 500M 0 /boot
>
> └─sda2 8:2 0 7.5G 0
>
> ├─VolGroup-lv_root (dm-0) 253:0 0 5.6G 0 /
>
> └─VolGroup-lv_swap (dm-1) 253:1 0 2G 0 [SWAP]
>
>
>
> Regards
>
> Nitin
>
>
>
> On Wed, Oct 4, 2017 at 10:13 PM, Harris, James R <james.r.harris(a)intel.com> wrote:
>
> Hi Nitin,
>
>
>
> Are you running these commands from the host or the VM? You will only
> see the virtio-scsi controller in lspci output from the guest VM.
>
>
>
> -Jim
>
>
>
>
>
> *From: *Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Date: *Tuesday, October 3, 2017 at 12:23 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>, James Harris <james.r.harris(a)intel.com>
>
>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> One quick update: after running ./scripts/setup.sh for SPDK, the NVMe
> drive is converted to a uio generic PCI device.
>
> So the only difference I found between before and after the mapping is in
> the output of ls -l /dev/u*.
>
> Can I use /dev/uio0 as the NVMe device?
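>
> (To check which PCI device uio0 points at, a sketch, assuming the setup
> script bound the drive to uio_pci_generic:)
>
> ls -l /sys/class/uio/uio0/device        # symlink back to the bound PCI function
> cat /sys/class/uio/uio0/device/vendor   # 0x8086 would confirm the Intel NVMe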
>
> Regards
>
> Nitin
>
>
>
> On Tue, Oct 3, 2017 at 11:30 AM, Nitin Gupta <nitin.gupta981(a)gmail.com> wrote:
>
> Hi Jim
>
>
>
> It looks like sdf to sdi are the NVMe drives; please correct me if I am wrong.
>
>
>
> -bash-4.2# lsblk -S
>
> NAME HCTL TYPE VENDOR MODEL REV TRAN
>
> sda 0:0:0:0 disk ATA INTEL SSDSC2BB24 0039 sata
>
> sdb 1:0:0:0 disk ATA ST31000524NS SN11 sata
>
> sdc 2:0:0:0 disk ATA ST31000524NS SN12 sata
>
> sdd 3:0:0:0 disk ATA INTEL SSDSC2BB24 0039 sata
>
> sde 5:0:0:0 disk ATA SAMSUNG MZ7WD120 103Q sata
>
> sdf 6:0:0:0 disk ATA INTEL SSDSC2BB24 0039 sata
>
> sdg 7:0:0:0 disk ATA INTEL SSDSC2BB24 0039 sata
>
> sdh 8:0:0:0 disk ATA INTEL SSDSC2BB24 0039 sata
>
> sdi 9:0:0:0 disk ATA INTEL SSDSC2BB24 0039 sata
>
>
>
> Regards
>
> Nitin
>
>
>
> On Tue, Oct 3, 2017 at 11:21 AM, Nitin Gupta <nitin.gupta981(a)gmail.com> wrote:
>
> Hi Jim
>
>
>
> I am getting the output below from lspci for the NVMe controllers:
>
>
>
> d8:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
>
> d9:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
>
> da:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
>
> db:00.0 Non-Volatile memory controller: Intel Corporation Device 0a53 (rev 02)
>
>
>
> lsblk
>
>
>
> NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
>
> sda 8:0 0 223.6G 0 disk
>
> ├─sda1 8:1 0 6G 0 part [SWAP]
>
> ├─sda2 8:2 0 512M 0 part /bootmgr
>
> └─sda3 8:3 0 217.1G 0 part /
>
> sdb 8:16 0 931.5G 0 disk
>
> sdc 8:32 0 931.5G 0 disk
>
> sdd 8:48 0 223.6G 0 disk
>
> sde 8:64 0 111.8G 0 disk
>
> sdf 8:80 0 223.6G 0 disk
>
> sdg 8:96 0 223.6G 0 disk
>
> sdh 8:112 0 223.6G 0 disk
>
> sdi 8:128 0 223.6G 0 disk
>
>
>
>
>
> So how do I know which one is the virtio-scsi controller? Basically, I
> want to run an fio test against the NVMe-mapped device.
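>
> (The kind of fio run I have in mind once the right disk is identified;
> /dev/sdX is a placeholder, and randread is used so the disk contents are
> not overwritten:)
>
> fio --name=nvme-test --filename=/dev/sdX --rw=randread --bs=4k \
>     --ioengine=libaio --iodepth=32 --direct=1 --runtime=30 --time_based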
>
>
>
>
>
> On Mon, Oct 2, 2017 at 11:12 PM, Harris, James R <james.r.harris(a)intel.com> wrote:
>
> Hi Nitin,
>
>
>
> lspci should show you the virtio-scsi controller PCI device.
>
> lsblk -S should show you the SCSI block devices attached to that virtio-scsi controller.
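>
> (For example, in the guest; exact output will vary:)
>
> lspci | grep -i scsi    # should list the "Virtio SCSI" storage controller
> lsblk -S                # lists the SCSI disks behind it, with HCTL and MODEL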
>
>
>
> -Jim
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Monday, October 2, 2017 at 10:38 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *Re: [SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi Jim
>
>
>
> Thanks for your reply, and sorry for my late response.
>
> Could you please give one example of how to identify the virtio-scsi
> controller in Linux? I mean, in which directory or file system will it be
> present?
>
>
>
> Regards
>
> Nitin
>
>
>
> On Thu, Sep 28, 2017 at 8:30 PM, Harris, James R <james.r.harris(a)intel.com> wrote:
>
> Hi Nitin,
>
>
>
> You should see a virtio-scsi controller in the VM, not an NVMe device.
> This controller should have one LUN attached, which SPDK vhost maps to the
> NVMe device attached to the host.
>
>
>
> -Jim
>
>
>
>
>
> *From: *SPDK <spdk-bounces(a)lists.01.org> on behalf of Nitin Gupta <nitin.gupta981(a)gmail.com>
> *Reply-To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Date: *Thursday, September 28, 2017 at 4:07 AM
> *To: *Storage Performance Development Kit <spdk(a)lists.01.org>
> *Subject: *[SPDK] nvme drive not showing in vm in spdk
>
>
>
> Hi All
>
>
>
> I am new to SPDK development and am currently setting it up; I was able
> to set up the back-end storage with NVMe. After running the VM with the
> following command, there is no NVMe drive present.
>
>
>
> /usr/local/bin/qemu-system-x86_64 -m 1024 -object
> memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
> -nographic -no-user-config -nodefaults -serial
> mon:telnet:localhost:7704,server,nowait -monitor
> mon:telnet:localhost:8804,server,nowait -numa node,memdev=mem -drive
> file=/home/qemu/qcows,format=qcow2,if=none,id=disk -device
> ide-hd,drive=disk,bootindex=0 -chardev socket,id=char0,path=./spdk/vhost.0
> -device vhost-user-scsi-pci,id=scsi0,chardev=char0 --enable-kvm
>
>
>
>
>
> How do I identify which one is the NVMe drive?
>
> Is there any way to enable NVMe from the qemu command?
>
>
>
> PS: I have already specified the NVMe drive in vhost.conf.in.
>
>
>
> Regards
>
> Nitin
>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk