path: root/tests/qemu-iotests/223.out
Age         Commit message                                              Author
2020-05-18  iotests: Enhance 223 to cover qemu-img map improvements  (Eric Blake)

Since qemu-img map + x-dirty-bitmap remains the easiest way to read persistent bitmaps at the moment, it makes a reasonable place to add coverage to ensure we do not regress on the just-added parameters to qemu-img map.

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200513181455.295267-1-eblake@redhat.com>
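[Editor's note: for context, a sketch of the kind of invocation the test exercises; the socket path and bitmap name are placeholders, and the new options are assumed to be the --start-offset/--max-length pair added to qemu-img map in the same development cycle:

$ qemu-img map --output=json --start-offset=32768 --max-length=65536 \
    --image-opts driver=nbd,server.type=unix,server.path=/tmp/nbd.sock,x-dirty-bitmap=qemu:dirty-bitmap:b0
]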
2020-02-05  nbd: Allow description when creating NBD blockdev  (Eric Blake)

Allow blockdevs to match the feature already present in qemu-nbd -D. Enhance iotest 223 to cover it.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20191114024635.11363-5-eblake@redhat.com>
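[Editor's note: a minimal sketch of the QMP counterpart to qemu-nbd -D; the device, export name, and description strings are placeholders:

{"execute": "nbd-server-add",
 "arguments": {"device": "n", "name": "exp0",
               "description": "human-readable export description"}}
]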
2019-11-18tests: More iotest 223 improvementsEric Blake
Run the core of the test twice, once without iothreads, and again with, for more coverage of both setups. Suggested-by: Nir Soffer <nsoffer@redhat.com> Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Message-Id: <20191114213415.23499-5-eblake@redhat.com>
2019-11-18  iotests: Include QMP input in .out files  (Eric Blake)

We generally include relevant HMP input in .out files, by virtue of the fact that HMP echoes its input. But QMP does not, so we have to explicitly inject it into the output stream (appropriately filtered to keep the tests passing), in order to make it easier to read .out files to see what behavior is being tested (especially true where the output file is a sequence of {'return': {}}).

Suggested-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20191114213415.23499-4-eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
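[Editor's note: illustratively, a .out fragment then pairs each command with its reply rather than showing a bare reply; the command below is a plausible example, not a verbatim excerpt from 223.out:

{"execute": "nbd-server-stop"}
{"return": {}}
]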
2019-09-24  tests: Use iothreads during iotest 223  (Eric Blake)

Doing so catches the bugs we just fixed with NBD not properly using the correct AioContexts.

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190920220729.31801-1-eblake@redhat.com>
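[Editor's note: one plausible shape for such a setup; the iothread id and node name are placeholders, and x-blockdev-set-iothread is the experimental QMP command for moving a node into another context (the test's actual mechanics may differ):

# start QEMU with an extra iothread
$QEMU -object iothread,id=io0 ...
# move the node to be exported into that iothread before nbd-server-add
{"execute": "x-blockdev-set-iothread",
 "arguments": {"node-name": "n", "iothread": "io0"}}
]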
2019-09-05  nbd: Implement server use of NBD FAST_ZERO  (Eric Blake)

The server side is fairly straightforward: we can always advertise support for detection of fast zero, and implement it by mapping the request to the block layer BDRV_REQ_NO_FALLBACK.

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190823143726.27062-5-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[eblake: update iotests 223, 233]
2019-09-05  nbd: Improve per-export flag handling in server  (Eric Blake)

When creating a read-only image, we are still advertising support for TRIM and WRITE_ZEROES to the client, even though the client should not be issuing those commands. But seeing this requires looking across multiple functions: all callers to nbd_export_new() passed a single flag based solely on whether the export allows writes; later, we pass a constant set of flags to nbd_negotiate_options() (namely, the set of flags which we always support, at least for writable images), which is then further dynamically modified with NBD_FLAG_SEND_DF based on client requests for structured options; finally, when processing NBD_OPT_EXPORT_NAME or NBD_OPT_EXPORT_GO we bitwise-or the original caller's flag with the runtime set of flags we've built up over several functions.

Let's refactor things to instead compute a baseline of flags, shared between multiple clients, as soon as possible in nbd_export_new(), and change the callers' signature to pass in a simpler bool rather than having to figure out flags. We can then get rid of the 'myflags' parameter to various functions, and instead refer to the client for everything we need (we still have to perform a bitwise-OR for NBD_FLAG_SEND_DF during NBD_OPT_EXPORT_NAME and NBD_OPT_EXPORT_GO, but it's easier to see what is being computed).

This lets us quit advertising senseless flags for read-only images, as well as making the next patch for exposing FAST_ZERO support easier to write.

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190823143726.27062-2-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[eblake: improve commit message, update iotest 223]
2019-09-05  nbd: Advertise multi-conn for shared read-only connections  (Eric Blake)

The NBD specification defines NBD_FLAG_CAN_MULTI_CONN, which can be advertised when the server promises cache consistency between simultaneous clients (basically, rules that determine what FUA and flush from one client are able to guarantee for reads from another client). When we don't permit simultaneous clients (such as qemu-nbd without -e), the bit makes no sense; and for writable images, we probably have a lot more work before we can declare that actions from one client are cache-consistent with actions from another. But for read-only images, where flush isn't changing any data, we might as well advertise multi-conn support. What's more, advertisement of the bit makes it easier for clients to determine if 'qemu-nbd -e' was in use, where a second connection will succeed rather than hang until the first client goes away.

This patch affects qemu as server in advertising the bit. We may want to consider patches to qemu as client to attempt parallel connections for higher throughput by spreading the load over those connections when a server advertises multi-conn, but for now sticking to one connection per nbd:// BDS is okay.

See also: https://bugzilla.redhat.com/1708300

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190815185024.7010-1-eblake@redhat.com>
[eblake: tweak blockdev-nbd.c to not request shared when writable, fix iotest 233]
Reviewed-by: John Snow <jsnow@redhat.com>
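[Editor's note: a sketch of how the bit becomes observable from the command line; the socket path is a placeholder and the exact flag names printed by qemu-nbd --list can vary by version:

# shared, read-only export: the server may now advertise NBD_FLAG_CAN_MULTI_CONN
$ qemu-nbd -r -e 2 -f qcow2 -k /tmp/nbd.sock disk.qcow2 &
$ qemu-nbd -L -k /tmp/nbd.sock   # the listed export flags should include 'multi'
]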
2019-04-01  nbd/server: Advertise actual minimum block size  (Eric Blake)

Both NBD_CMD_BLOCK_STATUS and structured NBD_CMD_READ will split their reply according to bdrv_block_status() boundaries. If the block device has a request_alignment smaller than 512, but we advertise a block alignment of 512 to the client, then this can result in the server reply violating client expectations by reporting a smaller region of the export than what the client is permitted to address (although this is less of an issue for qemu 4.0 clients, given recent client patches to overlook our non-compliance at EOF). Since it's always better to be strict in what we send, it is worth advertising the actual minimum block limit rather than blindly rounding it up to 512.

Note that this patch is not foolproof - it is still possible to provoke non-compliant server behavior using:

$ qemu-nbd --image-opts driver=blkdebug,align=512,image.driver=file,image.filename=/path/to/non-aligned-file

That is arguably a bug in the blkdebug driver (it should never pass back block status smaller than its alignment, even if it has to make multiple bdrv_block_status() calls and determine the least-common-denominator status among the group to return). It may also be possible to observe issues with a backing layer with smaller alignment than the active layer, although so far I have been unable to write a reliable iotest for that scenario (but again, an issue like that could be argued to be a bug in the block layer, or something where we need a flag to bdrv_block_status() to state whether the result must be aligned to the current layer's limits or can be subdivided for accuracy when chasing backing files).

Anyways, as blkdebug is not normally used, and as this patch makes our server more interoperable with qemu 3.1 clients, it is worth applying now, even while we still work on a larger patch series for the 4.1 timeframe to have byte-accurate file lengths.

Note that the iotests output changes - for 223 and 233, we can see the server's better granularity advertisement; and for 241, the three test cases have the following effects:
- natural alignment: the server's smaller alignment is now advertised, and the hole reported at EOF is now the right result; we've gotten rid of the server's non-compliance
- forced server alignment: the server still advertises 512 bytes, but still sends a mid-sector hole. This is still a server compliance bug, which needs to be fixed in the block layer in a later patch; output does not change because the client is already being tolerant of the non-compliance
- forced client alignment: the server's smaller alignment means that the client now sees the server's status change mid-sector without any protocol violations, but the fact that the map shows an unaligned mid-sector hole is evidence of the block layer problems with aligned block status, to be fixed in a later patch

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190329042750.14704-7-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[eblake: rebase to enhanced iotest 241 coverage]
2019-03-30  nbd/client: Report offsets in bdrv_block_status  (Eric Blake)

It is desirable for 'qemu-img map' to have the same output for a file whether it is served over file or nbd protocols. However, ever since we implemented block status for NBD (2.12), the NBD protocol forgot to inform the block layer that as the final layer in the chain, the offset is valid; without an offset, the human-readable form of qemu-img map gives up with the unhelpful:

$ nbdkit -U - data data="1" size=512 --run 'qemu-img map $nbd'
Offset          Length          Mapped to       File
qemu-img: File contains external, encrypted or compressed clusters.

The --output=json form always works, because it is reporting the lower-level bdrv_block_status results directly rather than trying to filter out sparse ranges for human consumption - but now it also shows the offset member.

With this patch, the human output changes to:

Offset          Length          Mapped to       File
0               0x200           0               nbd+unix://?socket=/tmp/nbdkitOxeoLa/socket

This change is observable to several iotests.

Fixes: 78a33ab5
Reported-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190329042750.14704-4-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2019-03-06  iotests: Wait for qemu to end in 223  (Eric Blake)

When iotest 223 was first written, it didn't matter if we waited for the qemu process to clean up. But with the introduction of a later qemu-nbd process trying to reuse the same file, there is a race where even though the asynchronous qemu process has responded to "quit", it has not yet had time to unlock the file and exit, resulting in:

-[{ "start": 0, "length": 65536, "depth": 0, "zero": false, "data": false},
-{ "start": 65536, "length": 2031616, "depth": 0, "zero": false, "data": true},
-{ "start": 2097152, "length": 2097152, "depth": 0, "zero": false, "data": false}]
+qemu-nbd: Failed to blk_new_open 'tests/qemu-iotests/scratch/t.qcow2': Failed to get shared "write" lock
+Is another process using the image [tests/qemu-iotests/scratch/t.qcow2]?
+qemu-img: Could not open 'driver=nbd,server.type=unix,server.path=tests/qemu-iotests/scratch/qemu-nbd.sock,x-dirty-bitmap=qemu:dirty-bitmap:b': Failed to connect socket tests/qemu-iotests/scratch/qemu-nbd.sock: Connection refused
+./common.nbd: line 33: kill: (11122) - No such process

Fixes: ddd09448
Reported-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190305182908.13557-1-eblake@redhat.com>
Tested-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
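[Editor's note: a minimal sketch of the idea behind the fix, in the style of the common.qemu helpers; the exact calls in the test may differ:

# tell the asynchronous qemu to quit...
_send_qemu_cmd $QEMU_HANDLE '{"execute": "quit"}' 'return'
# ...then actually reap the process so the image lock is released
# before a follow-up qemu-nbd opens the same file
wait=1 _cleanup_qemu
]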
2019-01-21  iotests: Enhance 223, 233 to cover 'qemu-nbd --list'  (Eric Blake)

Any good new feature deserves some regression testing :) Coverage includes:
- 223: what happens when there are 0 or more than 1 export, proof that we can see multiple contexts including qemu:dirty-bitmap
- 233: proof that we can list over TLS, and that mix-and-match of plain/TLS listings will behave sanely

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20190117193658.16413-22-eblake@redhat.com>
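[Editor's note: for reference, the listing itself is driven roughly like this (socket path is a placeholder); pairing --list with --object tls-creds-* and --tls-creds covers the TLS side:

$ qemu-nbd --list -k /tmp/nbd.sock
]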
2019-01-14  qemu-nbd: Add --bitmap=NAME option  (Eric Blake)

Having to fire up qemu, then use QMP commands for nbd-server-start and nbd-server-add, just to expose a persistent dirty bitmap, is rather tedious. Make it possible to expose a dirty bitmap using just qemu-nbd (of course, for now this only works when qemu-nbd is visiting a BDS formatted as qcow2). Of course, any good feature also needs unit testing, so expand iotest 223 to cover it.

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190111194720.15671-9-eblake@redhat.com>
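[Editor's note: a sketch of the shorthand this enables; the image, bitmap, and socket names are placeholders:

# previously: start qemu, then QMP nbd-server-start + nbd-server-add; now just:
$ qemu-nbd -r -f qcow2 --bitmap=b0 -k /tmp/nbd.sock disk.qcow2
]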
2019-01-14  nbd: Allow bitmap export during QMP nbd-server-add  (Eric Blake)

With the experimental x-nbd-server-add-bitmap command, there was a window of time where an NBD client could see the export but not the associated dirty bitmap, which can cause a client that planned on using the dirty bitmap to be forced to treat the entire image as dirty as a safety fallback. Furthermore, if the QMP client successfully exports a disk but then fails to add the bitmap, it has to take on the burden of removing the export. Since we don't allow changing the exposed dirty bitmap (whether to a different bitmap, or removing advertisement of the bitmap), it is nicer to make the bitmap tied to the export at the time the export is created, with automatic failure to export if the bitmap is not available.

The experimental command included an optional 'bitmap-export-name' field for remapping the name exposed over NBD to be different from the bitmap name stored on disk. However, my libvirt demo code for implementing differential backups on top of persistent bitmaps did not need to take advantage of that feature (it is instead possible to create a new temporary bitmap with the desired name, use block-dirty-bitmap-merge to merge one or more persistent bitmaps into the temporary, then associate the temporary with the NBD export, if control is needed over the exported bitmap name). Hence, I'm not copying that part of the experiment over to the stable addition. For more details on the libvirt demo, see
https://www.redhat.com/archives/libvir-list/2018-October/msg01254.html,
https://kvmforum2018.sched.com/event/FzuB/facilitating-incremental-backup-eric-blake-red-hat

This patch focuses on the user interface, and reduces (but does not completely eliminate) the window where an NBD client can see the export but not the dirty bitmap, with less work to clean up after errors. Later patches will add further cleanups now that this interface is declared stable via a single QMP command, including removing the race window.

Update test 223 to use the new interface.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20190111194720.15671-6-eblake@redhat.com>
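[Editor's note: a sketch of the resulting single-command interface; device, export, and bitmap names are placeholders:

{"execute": "nbd-server-add",
 "arguments": {"device": "n", "name": "exp0", "bitmap": "b0"}}
]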
2019-01-14  nbd: Only require disabled bitmap for read-only exports  (Eric Blake)

Our initial implementation of x-nbd-server-add-bitmap put in a restriction because of incremental backups: in that usage, we are exporting one qcow2 file (the temporary overlay target of a blockdev-backup sync:none job) and a dirty bitmap owned by a second qcow2 file (the source of the blockdev-backup, which is the backing file of the temporary). While both qcow2 files are still writable (the target in order to capture copy-on-write of old contents, and the source in order to track live guest writes in the meantime), the NBD client expects to see constant data, including the dirty bitmap. An enabled bitmap in the source would be modified by guest writes, which is at odds with the NBD export being a read-only constant view, hence the initial code choice of enforcing a disabled bitmap (the intent is that the exposed bitmap was disabled in the same transaction that started the blockdev-backup job, although we don't want to track enough state to actually enforce that).

However, consider the case of a bitmap contained in a read-only node (including when the bitmap is found in a backing layer of the active image). Because the node can't be modified, the bitmap won't change due to writes, regardless of whether it is still enabled. Forbidding the export unless the bitmap is disabled is awkward, particularly since we can't change the bitmap to be disabled (because the node is read-only).

Alternatively, consider the case of live storage migration, where management directs the destination to create a writable NBD server, then performs a drive-mirror from the source to the target, prior to doing the rest of the live migration. Since storage migration can be time-consuming, it may be wise to let the destination include a dirty bitmap to track which portions it has already received, where even if the migration is interrupted and restarted, the source can query the destination block status in order to potentially minimize re-sending data that has not changed in the meantime on a second attempt. Such code has not been written, and might not be trivial (after all, a cluster being marked dirty in the bitmap does not necessarily guarantee it has the desired contents), but it makes sense that letting an active dirty bitmap be exposed and changing alongside writes may prove useful in the future.

Solve both issues by gating the restriction against a disabled bitmap to only happen when the caller has requested a read-only export, and where the BDS that owns the bitmap (whether or not it is the BDS handed to nbd_export_new() or from its backing chain) is still writable. We could drop the check altogether (if management apps are prepared to deal with a changing bitmap even on a read-only image), but for now keeping a check for the read-only case still stands a chance of preventing management errors.

Update iotest 223 to show the looser behavior by leaving a bitmap enabled the whole run; note that we have to tear down and re-export a node when handling an error.

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190111194720.15671-4-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
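[Editor's note: roughly, the remaining restriction only triggers in this combination (a sketch; names are placeholders and the error text is elided rather than guessed):

# read-only export of an enabled bitmap whose owning node is still writable: rejected
{"execute": "nbd-server-add",
 "arguments": {"device": "n", "name": "exp0", "bitmap": "b-enabled"}}
{"error": ...}
# disabling the bitmap first, or owning it in a read-only node, lets the export succeed
]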
2019-01-14  nbd: Forbid nbd-server-stop when server is not running  (Eric Blake)

Since we already forbid other nbd-server commands when not in the right state, it is unlikely that any caller was relying on a second stop to behave as a silent no-op. Update iotest 223 to show the improved behavior.

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190111194720.15671-3-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
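[Editor's note: in QMP terms the change looks roughly like this; error class and description elided rather than guessed:

# first stop succeeds while the server is running:
{"execute": "nbd-server-stop"}
{"return": {}}
# a second stop is now rejected instead of silently succeeding:
{"execute": "nbd-server-stop"}
{"error": ...}
]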
2019-01-14  nbd: Add some error case testing to iotests 223  (Eric Blake)

Testing success paths is important, but it's also nice to highlight expected failure handling, to show that we don't crash, and so that upcoming tests that change behavior can demonstrate the resulting effects on error paths.

Add the following errors:
- Attempting to export without a running server
- Attempting to start a second server
- Attempting to export a bad node name
- Attempting to export a name that is already exported
- Attempting to export an enabled bitmap
- Attempting to remove an already removed export
- Attempting to quit server a second time

All of these properly complain except for a second server-stop, which will be fixed next.

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190111194720.15671-2-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
2018-12-18  qmp: Split ShutdownCause host-qmp into quit and system-reset  (Dominik Csapak)

It is interesting to know whether the shutdown cause was 'quit' or 'reset', especially when using "--no-reboot". In that case, a management layer can now determine if the guest wanted a reboot or shutdown, and can act accordingly.

Changes the output of the reason in the iotests from 'host-qmp' to 'host-qmp-quit'. This does not break compatibility because the field was introduced in the same version.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Message-Id: <20181205110131.23049-4-d.csapak@proxmox.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Commit message tweaked]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
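[Editor's note: after the split, such an event looks roughly like this (timestamp elided; the data layout follows the SHUTDOWN event schema):

{"timestamp": {...}, "event": "SHUTDOWN",
 "data": {"guest": false, "reason": "host-qmp-quit"}}
]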
2018-12-18  qmp: Add reason to SHUTDOWN and RESET events  (Dominik Csapak)

This makes it possible to determine what the exact reason was for a RESET or a SHUTDOWN. A management layer might need the specific reason of those events to determine which cleanups or other actions it needs to do. This patch also updates the iotests to the new expected output that includes the reason.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Message-Id: <20181205110131.23049-3-d.csapak@proxmox.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Commit message tweaked]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
2018-11-22  iotests: Enhance 223 to cover multiple bitmap granularities  (Eric Blake)

Testing granularity at the same size as the cluster isn't quite as fun as what happens when it is larger or smaller. This enhancement also shows that qemu's nbd server can serve the same disk over multiple exports simultaneously.

Signed-off-by: Eric Blake <eblake@redhat.com>
Tested-by: John Snow <jsnow@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
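[Editor's note: creating bitmaps at differing granularities is one QMP command per bitmap; the node, names, and sizes here are illustrative rather than the test's exact values:

{"execute": "block-dirty-bitmap-add",
 "arguments": {"node": "n", "name": "b-small", "granularity": 32768, "persistent": true}}
{"execute": "block-dirty-bitmap-add",
 "arguments": {"node": "n", "name": "b-large", "granularity": 131072, "persistent": true}}
]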
2018-07-02  iotests: New test 223 for exporting dirty bitmap over NBD  (Eric Blake)

Although this test is NOT a full test of image fleecing (as it intentionally uses just a single block device directly exported over NBD, rather than trying to set up a blockdev-backup job with multiple BDS involved), it DOES prove that qemu as a server is able to properly expose a dirty bitmap over NBD.

When coupled with image fleecing, it is then possible for a third-party client to do an incremental backup by using qemu-img map with the x-dirty-bitmap option to learn which parts of the file are dirty (perhaps confusingly, they are the portions mapped as "data":false - which is part of the reason this is still in the x- experimental namespace), along with another normal client (perhaps 'qemu-nbd -c' to expose the server over /dev/nbd0 and then just use normal I/O on that block device) to read the dirty sections.

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180702191458.28741-3-eblake@redhat.com>
Tested-by: John Snow <jsnow@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
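[Editor's note: a sketch of the third-party query described above; socket path and bitmap name are placeholders, and dirty portions surface as "data": false in the JSON:

$ qemu-img map --output=json --image-opts \
    driver=nbd,server.type=unix,server.path=/tmp/nbd.sock,x-dirty-bitmap=qemu:dirty-bitmap:b0
]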