I see SPDK supports VPP; is that still in a development stage, or can it be used for some testing?
Is the configuration setup for nvmf_tgt the same as for iscsi_tgt?
- Unbind the network device from the kernel using the DPDK tools
- Use the same network PCI device in the VPP config
- Start VPP
- Start nvmf_tgt with VPP enabled
On the initiator side:
- Use SPDK perf with TCP enabled to do I/O.
With the above setup, perf fails even to connect to the target, but if I use kernel/POSIX TCP on both the target and the initiator, it works fine. Am I missing any configuration?
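For reference, the target-side steps above might look roughly like the following sketch. The PCI address, config file paths, and binary locations are illustrative assumptions, not verified values for this setup:

```shell
# Unbind the NIC from its kernel driver so DPDK/VPP can claim it
# (0000:04:00.0 is a placeholder PCI address; substitute your own).
./dpdk/usertools/dpdk-devbind.py --bind=uio_pci_generic 0000:04:00.0

# Start VPP with a startup config that references the same PCI device.
vpp -c /etc/vpp/startup.conf

# Start the NVMe-oF target, built with VPP support enabled,
# using a config that defines the TCP transport and subsystems.
./app/nvmf_tgt/nvmf_tgt -c nvmf.conf
```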
A gentle reminder that the bug scrub meeting info is as follows, as shown at https://spdk.io/community/:
Next Euro-Friendly Github Issue Review Meeting
Wednesday, January 9, 2019, 12:00 AM GMT+8
Next Asia-Friendly Github Issue Review Meeting
Thursday, January 10, 2019, 12:30 PM GMT+8
Hi Sasha, Evgeniy, Mellanox SPDK teams, and all,
I’m working on T10 DIF and DIX support across SPDK.
I first developed the core operations: generation and verification of DIF, and bit-flip error injection at byte granularity for SGLs.
They are reviewable now.
From this experience, I learned that there are many related parameters, and that introducing a well-designed data structure will be key to success.
I found a very interesting presentation by Tzahi Oved of Mellanox about T10 DIF offload.
He proposed a data structure and APIs in the presentation.
How about using a similar or compatible data structure in SPDK?
SPDK is not limited to RDMA, and iSCSI is also in my scope, so an exactly compatible structure may not be possible.
But I believe that aligning SPDK with this well-designed structure will be helpful for both Mellanox and SPDK, and
will accelerate Mellanox’s T10 DIF offload for SPDK NVMe-oF.
Any feedback is very appreciated.
It's the end of the week, so I wanted to give everyone a quick update on where we are:
* Jenkins didn't run my patch! We are updating the script that helps make sure we don't miss patch sets; it has been off for the last several hours. Until this is complete, your patch may not get picked up by Jenkins. If you confirm that your patch is not in the build queue (check https://ci.spdk.io/spdk-jenkins/) after 20 minutes or so, please comment on your own patch with "retrigger Jenkins didn't run".
* I've waited plenty of time and my build logs are not there! This one is a little more complicated, and we will likely continue to suffer from it until after the holidays. Build logs are stored locally by Jenkins and then are rsync'd multiple times as they find their way to the public status page. We are taking steps to simplify this. If you have a critical need to get to a log, as opposed to just running through CI again, please jump on IRC and ask one of us to help. In some cases we may be able to get them directly from the Jenkins server.
With the holidays coming up please continue to bear with us for a while longer, I'll send another update out at the end of next week.
I'm new to the SPDK framework. I'm trying to understand how to transport I/O requests from virtio-blk to an iSCSI target.
Do I have to follow the SPDK fio_plugin-style bdev transaction path into the SPDK layer to transfer those packets to the iSCSI target? What other options do we have, assuming I don't have an NVMe-oF type controller available?
On behalf of the SPDK community I'm pleased to announce the release of SPDK 18.10.1.
This release contains infrastructure for packaging SPDK into an RPM.
Special thanks to Lance Hartmann and Pawel Wodkowski for their contributions to this feature.
The full changelog for this release is available at:
All outstanding patches marked for 18.10.1 have now been merged, so the plan is
to tag the release after 08:00 AM GMT tomorrow. Please respond to this email if
there are any last minute concerns.
Thanks to everyone for your contributions. A release announcement will follow.
Unless anyone has an issue with it, I believe, Shuhei, you can just add the item to https://trello.com/b/MN8auadQ/spdk-roadmap and put your name on the card to make it clear that you're signing up for it (well, you already have), and then drive your patch review process toward closure as best you can over the holidays.