On Nov 3, 2019 9:00 AM, Or Gerlitz <gerlitz.or(a)gmail.com> wrote:
On Thu, Oct 31, 2019 at 8:54 PM Walker, Benjamin
> Here's the patch at the top of the series:
> Zero copy is getting enabled on the socket and I do see completion
> notifications, but it's always doing a deferred copy. If there were some
> description somewhere of what causes the kernel to end up doing a deferred
> copy instead of page pinning, that would be really useful.
The patch you pointed out doesn't change lib/nvmf/tcp.c, only the sock code.
Can you also point to the patch that changes nvmf?
The patches are all in a series, with the one I linked at the end. They show up in the "Relation Chain" section of Gerrit. If you click the "Download" button in Gerrit for the patch I linked, it will give you a git-fetch command that grabs the whole series.
With this message I wanted to update the SPDK community on the state of the VPP socket abstraction as of the SPDK 19.07 release.
At this time there does not seem to be a clear efficiency improvement with VPP, and no further work is planned on the SPDK and VPP integration.
As some of you may remember, the SPDK 18.04 release introduced support for alternative socket types. Along with that release, Vector Packet Processing (VPP)<https://wiki.fd.io/view/VPP> 18.01 was integrated with SPDK by expanding the socket abstraction to use the VPP Communications Library (VCL). The TCP/IP stack in VPP<https://wiki.fd.io/view/VPP/HostStack> was in its early stages back then and has seen improvements throughout the last year.
To better use VPP's capabilities, and following fruitful collaboration with the VPP team, in SPDK 19.07 this implementation was changed from VCL to the VPP Session API from VPP 19.04.2.
The VPP socket abstraction has met some challenges due to the inherent design of both projects, in particular those related to running as separate processes and the memory copies that this entails.
Seeing improvements over the original implementation was encouraging, yet when measured against the posix socket abstraction (taking the entire system into consideration, i.e. both processes), the results are comparable. In other words, at this time neither socket abstraction shows a clear benefit in terms of CPU efficiency or IOPS.
Again, this is just a status update on the socket abstraction layers as of the SPDK 19.07 release. Each SPDK release brings improvements to the abstraction and its implementations, with exciting work on more efficient use of the kernel TCP stack coming in the SPDK 19.10 and SPDK 20.01 changes.
However, there is no active involvement at this point around the VPP implementation of the socket abstraction in SPDK. Contributions in this area are always welcome. If you are interested in implementing further enhancements to the VPP and SPDK integration, feel free to reply, or use one of the many SPDK community communication channels<https://spdk.io/community/>.
I appreciate the responses from everyone to my e-mail last week. We're getting much better clarity on the intermittent failures in the SPDK CI test pool.
Thanks to Seth Howell's usual wizardry, we have some improvements to better integrate SPDK CI with GitHub and GerritHub. A description of those improvements can be found at https://spdk.io/development/#integration_false_positive. It includes a link for querying the open GitHub issues with the "Intermittent Failure" label, which should make it easier to match new failures with existing issues.
Please take advantage of these improvements and provide any feedback here on the mailing list or on Slack.
Currently, each SPDK thread has a single message ring, which is configured as MP/SC (multi-producer/single-consumer). This works for both single-producer and multi-producer senders, but it introduces high overhead even when the application only ever has a single producer. For example, in a bdevperf test against one Malloc bdev, the ring enqueue costs as much as 15% of total CPU time.
My mitigation is to add a second, SP/SC message ring to the thread. When a thread receives its first message, it reserves this SP/SC ring for that first producer. Messages from other producers are still sent to the existing MP/SC ring.
In this way, applications that were designed with SP/SC in mind get the benefits they expect. In the above example, we observed a 15% performance improvement.
Here is the code diff: https://github.com/cranechu/spdk/commit/2704ab4902b2b7743a9b269350a9e0f95...
Any comments are welcome.