author | Fam Zheng <famz@redhat.com> | 2016-04-22 21:53:56 +0800
committer | Kevin Wolf <kwolf@redhat.com> | 2016-04-22 16:44:09 +0200
commit | ab27c3b5e7408693dde0b565f050aa55c4a1bcef (patch)
tree | a6b16ffcb414941d0cd6c63d0c7c3050a3a73451
parent | 37989ced44e559dbb1edb8b238ffe221f70214b4 (diff)
mirror: Workaround for unexpected iohandler events during completion
Commit 5a7e7a0ba moved mirror_exit to a BH handler but didn't add any
protection against new requests that could sneak in just before the
BH is dispatched. For example (assuming a code base at that commit):
main_loop_wait # 1
  os_host_main_loop_wait
    g_main_context_dispatch
      aio_ctx_dispatch
        aio_dispatch
          ...
            mirror_run
              bdrv_drain
 (a)          block_job_defer_to_main_loop
  qemu_iohandler_poll
    virtio_queue_host_notifier_read
      ...
        virtio_submit_multiwrite
 (b)      blk_aio_multiwrite

main_loop_wait # 2
  <snip>
    aio_dispatch
      aio_bh_poll
 (c)      mirror_exit
At (a) we know the BDS has no pending request. However, the same
main_loop_wait call is about to dispatch iohandlers (EventNotifier
events), which may trigger new I/O from the guest. So the invariant is
already broken by the time we reach (c). Data loss.
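To make that window concrete, here is a heavily condensed sketch of the deferral pattern that 5a7e7a0ba introduced. This is not the actual block/mirror.c code: the *_sketch names, the MirrorExitData layout and the function bodies are illustrative, and the 2.6-era BlockJob layout with a bs field is assumed; only block_job_defer_to_main_loop, bdrv_drain and block_job_completed are the real QEMU APIs of that time.

    #include "qemu/osdep.h"
    #include "block/block_int.h"
    #include "block/blockjob.h"

    /* Condensed sketch of the deferred-completion pattern (illustrative). */

    typedef struct MirrorExitData {
        int ret;
    } MirrorExitData;

    static void mirror_exit_sketch(BlockJob *job, void *opaque);

    static void coroutine_fn mirror_run_sketch(void *opaque)
    {
        BlockJob *job = opaque;            /* stands in for MirrorBlockJob */
        MirrorExitData *data = g_new0(MirrorExitData, 1);

        /* ... copying has converged, the job is ready to complete ... */

        bdrv_drain(job->bs);               /* (a) no request pending *right now* */
        data->ret = 0;

        /* Completion is only queued as a BH in the main loop's AioContext,
         * so everything main_loop_wait dispatches between this call and the
         * BH (e.g. a virtio host notifier) can still submit new guest I/O. */
        block_job_defer_to_main_loop(job, mirror_exit_sketch, data);
    }

    static void mirror_exit_sketch(BlockJob *job, void *opaque)
    {
        MirrorExitData *data = opaque;

        /* (c) assumes the invariant from (a) still holds -- which (b) may
         * have broken in the meantime. */
        block_job_completed(job, data->ret);
        g_free(data);
    }

The patch below keeps this structure and merely masks the external event sources for the duration of the deferral.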
Commit f3926945c8 made the iohandler use the aio API. With that change,
the order of virtio_queue_host_notifier_read and
block_job_defer_to_main_loop within one main_loop_wait call becomes
unpredictable; even worse, if the host notifier event only arrives at
the next main_loop_wait call, the order of mirror_exit and
virtio_queue_host_notifier_read is unpredictable as well. As shown
below, this commit made the bug easier to trigger:
- Bug case 1:
main_loop_wait # 1
  os_host_main_loop_wait
    g_main_context_dispatch
      aio_ctx_dispatch (qemu_aio_context)
        ...
          mirror_run
            bdrv_drain
 (a)        block_job_defer_to_main_loop
      aio_ctx_dispatch (iohandler_ctx)
        virtio_queue_host_notifier_read
          ...
            virtio_submit_multiwrite
 (b)          blk_aio_multiwrite

main_loop_wait # 2
  ...
    aio_dispatch
      aio_bh_poll
 (c)    mirror_exit
- Bug case 2:
main_loop_wait # 1
  os_host_main_loop_wait
    g_main_context_dispatch
      aio_ctx_dispatch (qemu_aio_context)
        ...
          mirror_run
            bdrv_drain
 (a)        block_job_defer_to_main_loop

main_loop_wait # 2
  ...
    aio_ctx_dispatch (iohandler_ctx)
      virtio_queue_host_notifier_read
        ...
          virtio_submit_multiwrite
 (b)        blk_aio_multiwrite
    aio_dispatch
      aio_bh_poll
 (c)    mirror_exit
In both cases, (b) breaks the invariant wanted by (a) and (c).
Until then, the request loss had been silent. Later, commit 3f09bfbc7be
added asserts at (c) to check the invariant (in
bdrv_replace_in_backing_chain), and Max reported an assertion failure
that first became visible there, triggered by doing an active commit
while the guest was running bonnie++.
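For reference, the check at (c) has roughly the following shape. This is a hedged sketch, not the exact code added by 3f09bfbc7be; the helper bdrv_requests_pending() and the function name are assumptions based on the 2.6-era block layer.

    #include "qemu/osdep.h"
    #include "block/block_int.h"

    /* Sketch only: the real assertions in bdrv_replace_in_backing_chain()
     * may differ in detail. */
    static void replace_in_backing_chain_sketch(BlockDriverState *old,
                                                BlockDriverState *new)
    {
        assert(!bdrv_requests_pending(old));  /* fires when (b) sneaks in new I/O */
        assert(!bdrv_requests_pending(new));

        /* ... re-point parents from 'old' to 'new' ... */
    }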
QEMU 2.5 added bdrv_drained_begin at (a) to protect the dataplane case
from similar problems, but we never realized the main loop bug until now.
As a bandage, this patch temporarily disables the iohandler's external
events together with those of bs->ctx.
Launchpad Bug: 1570134
Cc: qemu-stable@nongnu.org
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-rw-r--r-- | block/mirror.c | 9
1 file changed, 9 insertions, 0 deletions
diff --git a/block/mirror.c b/block/mirror.c
index d56e30e472..039f48125e 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -495,6 +495,9 @@ out:
     block_job_completed(&s->common, data->ret);
     g_free(data);
     bdrv_drained_end(src);
+    if (qemu_get_aio_context() == bdrv_get_aio_context(src)) {
+        aio_enable_external(iohandler_get_aio_context());
+    }
     bdrv_unref(src);
 }
 
@@ -716,6 +719,12 @@ immediate_exit:
     /* Before we switch to target in mirror_exit, make sure data doesn't
      * change. */
     bdrv_drained_begin(s->common.bs);
+    if (qemu_get_aio_context() == bdrv_get_aio_context(bs)) {
+        /* FIXME: virtio host notifiers run on iohandler_ctx, therefore the
+         * above bdrv_drained_end isn't enough to quiesce it. This is ugly, we
+         * need a block layer API change to achieve this. */
+        aio_disable_external(iohandler_get_aio_context());
+    }
     block_job_defer_to_main_loop(&s->common, mirror_exit, data);
 }
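Read together, the two hunks form a disable/enable pair around the deferred completion, and the guard restricts the band-aid to the non-dataplane case: when bs runs in the main loop's AioContext, its virtio host notifiers live on iohandler_ctx, whereas with dataplane they sit in bs's own context and are already quiesced by bdrv_drained_begin. Below is a condensed sketch of that pairing with illustrative helper names, not the literal mirror.c functions; the aio_*_external, bdrv_drained_* and *_aio_context calls are the real APIs used by the patch.

    #include "qemu/osdep.h"
    #include "qemu/main-loop.h"
    #include "block/block.h"

    /* Illustrative pairing of the two hunks above (not the literal code). */

    static void quiesce_before_deferral(BlockDriverState *bs)
    {
        bdrv_drained_begin(bs);          /* masks external events on bs's context */
        if (qemu_get_aio_context() == bdrv_get_aio_context(bs)) {
            /* Non-dataplane: host notifiers run on iohandler_ctx, mask it too. */
            aio_disable_external(iohandler_get_aio_context());
        }
        /* ... block_job_defer_to_main_loop(...) follows here ... */
    }

    static void unquiesce_after_completion(BlockDriverState *src)
    {
        bdrv_drained_end(src);
        if (qemu_get_aio_context() == bdrv_get_aio_context(src)) {
            aio_enable_external(iohandler_get_aio_context());  /* matching enable */
        }
    }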