The SPDK Chandler-based build pool will be paused for approximately 24 hours starting at 10:00 PM UTC on Friday due to maintenance related to last week's shutdown. Please feel free to continue submitting patches over the next day, as the Jenkins build pool will still be running and able to provide results. We will resume Chandler build pool operations on Saturday to pick up the slack in the build queue in time for the work week.
Working on a vbdev that maintains some global state and underlying data
structures, I decided to explore an SPDK-ish alternative to traditional
synchronization: single-threaded semantics, where the global data
structures are maintained by a dedicated thread and requests are
served by message passing to/from that thread.
Coding such a thread against the spdk_allocate_thread() guidelines proved
quite straightforward, and I've got it up and running. At the same
time, I'm still unclear on why I should essentially be duplicating the
features of a pinned SPDK thread; my gut feeling is that the very
same single-threaded semantics could have been achieved just as easily by
pinning my execution context to a randomly or round-robin chosen pinned
thread from the configured allowed-cores mask.
If that gut feeling isn't fooling me, I would appreciate a pointer to any
docs and/or examples showing how one can pin an arbitrary execution context
inside a vbdev to a native SPDK thread (similar to app_start and friends,
I believe), and do polling on that thread and messaging to/from it,
much as a POSIX thread wrapped in an SPDK thread envelope could.
Thanks in advance,
I've been attempting to perform a build of the SPDK that relies on a different env_dpdk implementation, but am running into trouble. The documentation's porting guide is a little sparse, but between that, a fragment from an email to the list:
point SPDK at a different implementation of include/spdk/env.h through the
and some experimentation, I've not deduced exactly how this is supposed to work.
I grabbed a copy of a default-built libspdk_env_dpdk.a, placed it in my ~/src/my_own_env_dpdk directory, and ran configure with '--with-env=~/src/my_own_env_dpdk/libspdk_env_dpdk.a', but the build quickly fails because SPDK_ROOT_DIR/mk/spdk_common.mk attempts to include an env.mk therein:
SPDK_ROOT_DIR/mk/spdk_common.mk:170: ~/src/my_own_env_dpdk/libspdk_env_dpdk.a/env.mk: Not a directory
The porting guide and the behavior of the build would indicate that CONFIG_ENV needs to be a directory.
Am I missing something fundamental in my configuration setup? Or did this work a while back, and changes to the build process have since broken it? I tried fudging with it further by copying the original SPDK_ROOT_DIR/lib/env_dpdk/env.mk into my own directory (and re-running configure with '--with-env' set to just the directory, ~/src/my_own_env_dpdk) and kicking off the build. That lets the build run a lot further, but it ultimately fails later, and I would suspect we wouldn't want to impose a requirement that one's own env implementation has to provide an 'env.mk'. Also, there seem to be other parts of the build that don't consider CONFIG_ENV once you get to the linking stages.
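For reference, here is the layout the error above suggests '--with-env' wants: a directory (not the .a itself) containing both the library and an env.mk that the common makefile can include. The fragment below is a guess modeled on lib/env_dpdk/env.mk from the SPDK tree; the variable names and paths are assumptions from my experimentation, not official guidance:

```makefile
# ~/src/my_own_env_dpdk/env.mk -- hypothetical fragment; adjust for your env.
# CONFIG_ENV is set by configure to the directory passed via --with-env.
ENV_CFLAGS = -I$(CONFIG_ENV)/include
ENV_LIBS = $(CONFIG_ENV)/libspdk_env_dpdk.a
ENV_LINKER_ARGS = -L$(CONFIG_ENV) -lspdk_env_dpdk
```

With that in place, configure would be run as './configure --with-env=~/src/my_own_env_dpdk', pointing at the directory rather than the library file.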
Anyone on IRC or in recent community meetings knows we've been hit hard by spam bots here recently.
I mentioned this after the last community meeting: tonight will be the first one where logging in requires you to know the word in the image below, so give yourself a few extra minutes. None of us has tried this yet, so there may be some challenges; keep an eye on IRC and email in case something doesn't work right. The SPDK webpage http://www.spdk.io/community/ is in the process of being updated to include that key phrase, and it should be live soon. FYI, I'm pretty sure it's case-sensitive too; those are all caps. :)
Earlier today our IRC channel was updated to require registered nicknames, again to try and minimize the spam. If you are having trouble getting on IRC, below are a few links that should help you get things figured out.
For those who have commented on why I had such huge arrays of pointers for crypto operations, I thought it'd be easier to explain in email rather than in the review. Feel free to ignore this if you're not following that patch series.
Below are the calculated numbers for each pool size (crypto ops, source mbufs, dest mbufs). Pretty easy to follow: because we use the LBA as the IV, we break every IO into 512B crypto operations, each needing one element from each pool (unless it's a read, in which case it doesn't need a dest mbuf).
The API I'm using to submit requires passing in an array of these buffers (or operations) as a parameter, so there are four of these pointer arrays in the channel structure; the memory is allocated from the heap:
struct rte_crypto_op *crypto_ops[NUM_MBUFS];
struct rte_mbuf *src_mbufs[NUM_MBUFS];
struct rte_mbuf *dst_mbufs[NUM_MBUFS];
struct rte_crypto_op *dequeued_ops[NUM_MBUFS];
There are two arrays of crypto operations, one for enqueue and one for dequeue. There's also a rule on the size of these pools: they must be powers of 2. So, for example, to support 64K IOs at QD 32 I need pool sizes of 4096 (see above). That isn't sufficient, though: running bdevperf (or anything, really) will quickly reach that queue depth, and other IO from modules like LVS causes me to run out of mbufs unless I bump the size up a bit, which the power-of-2 rule doesn't allow, so the next option is 8192. Even at 8192 we have a pretty limited QD (4) at 1MB IOs.
Will discuss some more on IRC; what's in the patch now is probably too high for normal use, but I'm going to need to come up with something here that makes sense. At 8192 we're only talking about 2MB of heap memory for pointer storage in these arrays, but even then, for large IOs (>1MB) this probably isn't acceptable.
I tested the BlobFS asynchronous API by using the SPDK event framework to execute multiple tasks, each writing one file.
But it doesn't work: spdk_file_write_async() reported an error when resizing the file.
The call stack looks like this:
spdk_file_write_async() -> __readwrite() -> spdk_file_truncate_async() -> spdk_blob_resize()
The resize operation must be done on the metadata thread, the one that invoked spdk_fs_load(), so only the task dispatched to the metadata CPU core works.
That is to say, only one thread can be used to write files. That's hard to use, and performance issues may arise.
Does anyone know more about this?
Thanks very much.
While digging into the SPDK iSCSI source code, I have some doubts about the
Look at this example.
Start iscsi target on core 0-7:
app/iscsi_tgt/iscsi_tgt -L all -d 1 -m 0xff
I can see 8 polling threads.
But the acceptor runs only on core 0, right?
This means all IOs are processed on core 0, no matter how many bdevs are
I can see that sending NOP-In is a poller running on all reactors, but that
is not about read/write.
So how can I utilize the other threads to maximize IOPS and throughput?
Thanks in advance.