This serves as a performance benchmark as well as a stress test
for QHT. We can tweak quite a number of things, including the
number of resize threads and how frequently resizes are triggered.
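The shape of each benchmark thread's loop is roughly as follows
(an illustrative sketch, not the actual qht-bench source;
table_lookup(), table_update() and struct thread_info are
hypothetical stand-ins):

  /* Illustrative sketch of a qht-bench-style worker loop; not the
   * actual benchmark source. Each thread draws a random key in
   * [0, range) and performs an update with probability u/100,
   * otherwise a lookup. */
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdlib.h>

  /* hypothetical stand-ins for the hash table operations */
  void table_lookup(uint64_t key);
  void table_update(uint64_t key);

  struct thread_info {
      unsigned int seed;   /* per-thread PRNG state */
      uint64_t lookups;
      uint64_t updates;
      bool *stop;          /* set by the main thread after $duration */
  };

  static void worker(struct thread_info *ti, double update_rate,
                     uint64_t range)
  {
      while (!*ti->stop) {
          uint64_t key = (uint64_t)rand_r(&ti->seed) % range;
          double r = rand_r(&ti->seed) / (double)RAND_MAX * 100.0;

          if (r < update_rate) {
              table_update(key);   /* insert or remove the key */
              ti->updates++;
          } else {
              table_lookup(key);
              ti->lookups++;
          }
      }
  }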
A performance comparison of QHT vs CLHT[1] and ck_hs[2] using
this same benchmark program can be found here:
  http://imgur.com/a/0Bms4
The tests are run on a 64-core AMD Opteron 6376, pinning each
thread to a core and filling same-socket cores first. For each
run, qht-bench is invoked with:
  $ tests/qht-bench -d $duration -n $n -u $u -g $range
where $duration is the duration in seconds, $n is the number of
threads, $u is the update rate (0.0 to 100.0), and $range is the
number of keys.
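For example, a run with 64 threads, a 20 % update rate and 200K
keys (one of the configurations plotted below; the 10-second
duration is an arbitrary choice) would be invoked as:
  $ tests/qht-bench -d 10 -n 64 -u 20.0 -g 200000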
Note that ck_hs's performance drops significantly as writes go
up, since it requires an external lock (I used a ck_spinlock)
around every write.
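In code, that means every ck_hs insertion ends up wrapped in the
lock, along these lines (a minimal sketch under that assumption,
not taken from the benchmark; locked_insert() is a hypothetical
wrapper):

  /* Minimal sketch of serializing ck_hs writes with an external
   * ck_spinlock; locked_insert() is a hypothetical wrapper, not
   * benchmark code. Readers proceed concurrently without the lock,
   * so only the write path pays for the serialization. */
  #include <stdbool.h>
  #include <ck_hs.h>
  #include <ck_spinlock.h>

  static ck_spinlock_t write_lock = CK_SPINLOCK_INITIALIZER;

  static bool locked_insert(ck_hs_t *hs, unsigned long hash,
                            const void *key)
  {
      bool ret;

      ck_spinlock_lock(&write_lock);
      ret = ck_hs_put(hs, hash, key);   /* all writers serialize here */
      ck_spinlock_unlock(&write_lock);
      return ret;
  }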
Also note that CLHT, instead of using a seqlock, relies on an
allocator that never returns the same address during the same
read-critical section. This gives it a slight performance
advantage over QHT on read-heavy workloads, since the seqlock
writes aren't there.
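For reference, the read side of a generic seqlock looks roughly
like this (a simplified sketch, not QHT's actual per-bucket code):

  /* Generic seqlock read-side sketch; simplified, not QHT's actual
   * per-bucket code. The counter loads and the retry loop are the
   * read-side overhead that CLHT's allocator guarantee avoids. */
  #include <stdatomic.h>

  struct bucket {
      atomic_uint seq;   /* odd while a writer is in the bucket */
      void *val;
  };

  static void *bucket_read(struct bucket *b)
  {
      unsigned int start;
      void *val;

      do {
          /* wait for an even (stable) sequence number */
          do {
              start = atomic_load_explicit(&b->seq, memory_order_acquire);
          } while (start & 1);

          val = b->val;   /* speculative read; may race with a writer */

          /* retry if a writer updated the bucket in the meantime */
          atomic_thread_fence(memory_order_acquire);
      } while (atomic_load_explicit(&b->seq, memory_order_relaxed) != start);

      return val;
  }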
[1] CLHT:  https://github.com/LPD-EPFL/CLHT
           https://infoscience.epfl.ch/record/207109/files/ascy_asplos15.pdf
[2] ck_hs: http://concurrencykit.org/
           http://backtrace.io/blog/blog/2015/03/13/workload-specialization/
The key results from a few of those plots are kept in text here,
since that site might not be online forever. Throughput is in
Mops/s on the Y axis.
Results per update rate, all with 200K keys (X axis: 1 to 64
threads in every plot):

  200K keys,   0 % updates: all three tables scale with the thread
      count; ck peaks highest at 64 threads (~450 Mops/s), with qht
      and clht close behind.
  200K keys,   1 % updates: qht and clht keep scaling, reaching
      roughly 300-350 Mops/s at 64 threads; ck flattens out at or
      below ~50 Mops/s as soon as writes appear.
  200K keys,  20 % updates: qht and clht scale to roughly 225-275
      Mops/s, with clht slightly ahead; ck drops to near zero.
  200K keys, 100 % updates: qht and clht reach roughly 140-155
      Mops/s, clht slightly ahead of qht; ck stays near zero.
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1465412133-3029-13-git-send-email-cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>