Please ensure all patches that you believe need to make the 19.04 release have a
hashtag of '19.04' set in GerritHub. The current set of patches that are tagged
and need to be reviewed can be seen here:
You can also respond here if a patch needs further justification for inclusion.
The merge window will close by Friday. The merge window for 19.07 will open up
The SPDK documentation says "Before running an SPDK application, some hugepages must be allocated...". But what is the minimum amount sufficient for SPDK to run?
If I give a large HUGEMEM number while running scripts/setup.sh, I see errors like this:
# ERROR: requested 1024 hugepages but only 672 could be allocated.
# Memory might be heavily fragmented. Please try flushing the system cache, or reboot the machine.
So I usually use a much smaller HUGEMEM number, like 128, and my SPDK application can still run. But I see this message:
EAL: No free hugepages reported in hugepages-1048576Kb
What does this message imply? Is there any risk even though my application can run?
It looks like scripts/setup.sh allocates 2-MB hugepages. Why does it complain about 1-GB hugepages?
Recently I've created ports for all SPDK dependencies (DPDK, isa-l, ipsec)
for the vcpkg package manager. In short, vcpkg is a tool and infrastructure,
curated by Microsoft, for managing the consumption of C/C++ libraries by
developers all over the world. It is open source and cross-platform
(Windows/Linux/macOS).
More on vcpkg here https://github.com/Microsoft/vcpkg
A couple of weeks ago, I finally moved on to creating an SPDK port for vcpkg.
It looks like it is doing well and is close to being merged into `master`.
I want to encourage you to visit the PR page on GitHub
https://github.com/Microsoft/vcpkg/pull/5877 and give your opinion. What do
you think? Is it a good thing for the SPDK community? Will it make developers'
lives easier? If you think it is worth the effort, are there things you would
improve?
Looking forward to hearing from you.
As far as I understand, both nvmf and iscsi (and thus the multiprotocol)
targets support hot remove, cleanly handling the case when the bdev
underlying the target's LUN/namespace goes away.
My question is whether the targets currently support the reverse, that is,
hotplug: re-attaching a bdev behind a LUN/namespace that the target had
configured (by either config file or RPC) and that was previously hot-removed.
Recently I was working on issue 464.
As a proposal, patch https://review.gerrithub.io/#/c/spdk/spdk/+/431471/ has been submitted that could be a solution to this issue.
I would be grateful for any suggestions or ideas on how to solve issue 464.
I’d like to do some housecleaning on the open SPDK patches on GerritHub. I suspect a lot of older patches out there have been abandoned in their authors' minds, but not abandoned on GerritHub. Cleaning these up will make it easier for patch reviewers (especially the maintainers) to know what actually needs to be reviewed.
If you have a patch that has not been updated in the last 3 months, please do one of the following:
1. Abandon the patch in GerritHub yourself if it’s no longer relevant.
2. Rebase your patch on top of latest master and push the new revision to GerritHub. This will reset the clock and indicate the patch is still relevant and in need of review.
Any patch whose last update is 3 months old or more will be abandoned by one of the core maintainers starting 2 weeks from now, on April 11th.
Also note that any patch that is abandoned is not deleted – you still have the option to restore the patch and then push a rebased version.
While looking at the code for snapshot creation in blobstore I saw that the
cluster maps of the snapshot and original blob are copied/zeroed with
memcpy/memset (respectively) at the critical moment that io is frozen to
the original blob. This imposes two O(n) operations, where n is the number
of bytes in the cluster map(s) of snapshotted blobs. This adds a
significant delay in io to the snapshotted blob, which is quite painful if
that blob is an active lvol.
Is this on purpose? We can zero the new cluster map before the io freeze,
then swap the pointers between the blobs during the freeze. If I've missed
something please let me know.
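The suggested approach could look roughly like the sketch below. This is a toy illustration only: `struct toy_blob` and the function names are stand-ins, not the real spdk_blob structures, and the real code would also have to persist both maps. The point is that the O(n) work (zeroing the new map) happens before I/O is frozen, leaving only an O(1) pointer handoff inside the freeze window.

```c
#include <stdint.h>
#include <stdlib.h>

/* Toy stand-in for a blob; not the real spdk_blob. */
struct toy_blob {
    uint64_t *clusters;
    size_t num_clusters;
};

/* Before the freeze: allocate and zero the map the original blob will
 * use after the snapshot. This is the O(n) part, done while I/O is
 * still flowing. */
static uint64_t *prepare_zeroed_map(size_t num_clusters)
{
    return calloc(num_clusters, sizeof(uint64_t));
}

/* During the freeze: the snapshot inherits the original's cluster map,
 * and the original starts over with the pre-zeroed one. O(1). */
static void hand_off_cluster_map(struct toy_blob *orig, struct toy_blob *snap,
                                 uint64_t *zeroed_map)
{
    snap->clusters = orig->clusters;
    snap->num_clusters = orig->num_clusters;
    orig->clusters = zeroed_map;
}
```

With this shape, the time I/O spends frozen no longer scales with the size of the cluster map.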
I've submitted a patch for the blobstore:
In the SPDK nvme driver, we check for timeouts and call the registered timeout
callback function after processing completions in
nvme_pcie_qpair_process_completions. But in the following scenario, we may lose
the timeout callback call:
1. T+0: Send a command. Suppose it will complete in 5.5 seconds, and the
timeout threshold is 5 seconds.
2. T+2: Process completions. Suppose we check completions every 2 seconds.
3. T+4: Process completions. The command has not timed out yet.
4. T+5.5: The command completes, 5.5 seconds after submission.
5. T+6: Process completions. When checking timeouts, the command has
already been removed from pqpair->outstanding_tr. So the command timed
out, but the timeout callback is never called.
If we move the timeout check ahead of processing completions, we can still
find this timed-out command in the outstanding_tr list.
The fix is submitted here: https://review.gerrithub.io/c/spdk/spdk/+/451186
In this fix, the timeout callback function is simply invoked before the
timed-out command's completion is processed. If it breaks any assumptions, I
can do another fix.
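The race above can be modeled with the toy sketch below. All names here are illustrative; this is not the real nvme_pcie_qpair code, just a simulation of the T+0..T+6 schedule showing that reaping completions before checking timeouts misses the callback, while the reversed order catches it.

```c
#include <stdbool.h>

/* Toy model of one outstanding command; not the real SPDK tracker. */
struct toy_tracker {
    double submit_time;
    bool outstanding;       /* still on the (modeled) outstanding_tr list */
};

static bool timeout_cb_called;

/* Fires the modeled timeout callback for an outstanding command that
 * has exceeded the threshold. */
static void check_timeouts(struct toy_tracker *tr, double now, double threshold)
{
    if (tr->outstanding && now - tr->submit_time > threshold) {
        timeout_cb_called = true;
    }
}

/* Removes the tracker from the outstanding list once the device has
 * completed the command. */
static void reap_completions(struct toy_tracker *tr, double completed_at,
                             double now)
{
    if (tr->outstanding && now >= completed_at) {
        tr->outstanding = false;
    }
}

/* Runs the poller at T+2, T+4, T+6 for a command submitted at T+0 that
 * completes at T+5.5 with a 5 s timeout. check_first selects the fixed
 * ordering (timeout check before reaping) vs the original ordering. */
static bool run_schedule(bool check_first)
{
    struct toy_tracker tr = { .submit_time = 0.0, .outstanding = true };
    const double completed_at = 5.5, threshold = 5.0;

    timeout_cb_called = false;
    for (double now = 2.0; now <= 6.0; now += 2.0) {
        if (check_first) {
            check_timeouts(&tr, now, threshold);
            reap_completions(&tr, completed_at, now);
        } else {
            reap_completions(&tr, completed_at, now);
            check_timeouts(&tr, now, threshold);
        }
    }
    return timeout_cb_called;
}
```

In the original ordering, the T+6 poll reaps the completion first, so the timeout check never sees the command; with the check moved ahead, the same poll still finds it on the outstanding list and fires the callback.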