path: root/block
2016-12-06  Merge remote-tracking branch 'kwolf/tags/for-upstream' into staging  (Stefan Hajnoczi)

    Block layer patches for 2.8.0-rc3

    # gpg: Signature made Tue 06 Dec 2016 02:44:39 PM GMT
    # gpg: using RSA key 0x7F09B272C88F2FD6
    # gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
    # Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6

    * kwolf/tags/for-upstream:
      qcow2: Don't strand clusters near 2G intervals during commit

    Message-id: 1481037418-10239-1-git-send-email-kwolf@redhat.com
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-12-06  qcow2: Don't strand clusters near 2G intervals during commit  (Eric Blake)

    The qcow2_make_empty() function is reached during 'qemu-img commit',
    in order to clear out ALL clusters of an image. However, if the
    image cannot use the fast code path (true if the image is format
    0.10, or if the image contains a snapshot), the cluster size is
    larger than 512, and the image is larger than 2G in size, then our
    choice of sector_step causes problems: since it is not cluster
    aligned, and qcow2_discard_clusters() silently ignores an unaligned
    head or tail, we are leaving clusters allocated.

    Enhance the testsuite to expose the flaw, and patch the problem by
    ensuring our step size is aligned.

    Signed-off-by: Eric Blake <eblake@redhat.com>
    Reviewed-by: John Snow <jsnow@redhat.com>
    Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
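A minimal self-contained sketch of the aligned-step idea (ALIGN_DOWN mirrors QEMU's QEMU_ALIGN_DOWN macro; the cluster size and variable names are illustrative, not the verbatim patch):

    #include <inttypes.h>
    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative stand-in for QEMU's QEMU_ALIGN_DOWN macro. */
    #define ALIGN_DOWN(n, m) ((n) / (m) * (m))

    int main(void)
    {
        int64_t cluster_size = 1 << 20;   /* e.g. 1M clusters */
        /* Step through the image in the largest cluster-aligned chunks
         * that fit in an int; an unaligned step would leave a head or
         * tail that qcow2_discard_clusters() silently ignores. */
        int64_t step = ALIGN_DOWN(INT_MAX, cluster_size);
        printf("step = %" PRId64 " bytes (%" PRId64 " clusters)\n",
               step, step / cluster_size);
        return 0;
    }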
2016-12-05  block/nfs: fix QMP to match debug option  (Prasanna Kumar Kalever)

    The QMP definition of BlockdevOptionsNfs:

    { 'struct': 'BlockdevOptionsNfs',
      'data': { 'server': 'NFSServer',
                'path': 'str',
                '*user': 'int',
                '*group': 'int',
                '*tcp-syn-count': 'int',
                '*readahead-size': 'int',
                '*page-cache-size': 'int',
                '*debug-level': 'int' } }

    To make this consistent with other block protocols like gluster,
    let's change s/debug-level/debug/.

    Suggested-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Jeff Cody <jcody@redhat.com>
2016-12-05  block/gluster: fix QMP to match debug option  (Prasanna Kumar Kalever)

    The QMP definition of BlockdevOptionsGluster:

    { 'struct': 'BlockdevOptionsGluster',
      'data': { 'volume': 'str',
                'path': 'str',
                'server': ['GlusterServer'],
                '*debug-level': 'int',
                '*logfile': 'str' } }

    But instead of 'debug-level' we have exported 'debug' as the option
    for choosing the debug level of the gluster protocol driver. This
    patch fixes the QMP definition of BlockdevOptionsGluster:
    s/debug-level/debug/.

    Suggested-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Jeff Cody <jcody@redhat.com>
2016-11-29  Merge remote-tracking branch 'kwolf/tags/for-upstream' into staging  (Stefan Hajnoczi)

    Block layer patches for 2.8.0-rc2

    # gpg: Signature made Tue 29 Nov 2016 03:16:10 PM GMT
    # gpg: using RSA key 0x7F09B272C88F2FD6
    # gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
    # Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6

    * kwolf/tags/for-upstream:
      docs: Specify that cache-clean-interval is only supported in Linux
      qcow2: Remove stale comment
      qcow2: Allow 'cache-clean-interval' in Linux only
      qcow2: Make qcow2_cache_table_release() work only in Linux

    Message-id: 1480436227-2211-1-git-send-email-kwolf@redhat.com
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-11-25  qcow2: Remove stale comment  (Alberto Garcia)

    We haven't been using CONFIG_MADVISE since 02d0e095031b7fda77de8b.

    Signed-off-by: Alberto Garcia <berto@igalia.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-11-25  qcow2: Allow 'cache-clean-interval' in Linux only  (Alberto Garcia)

    The cache-clean-interval option of qcow2 only works on Linux.
    However, we allow setting it on other systems regardless of whether
    it works or not. On those systems this option is not simply a no-op:
    it actually invalidates perfectly valid cache tables for no good
    reason without freeing their memory. This patch forbids using that
    option on non-Linux systems.

    Signed-off-by: Alberto Garcia <berto@igalia.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-11-25  qcow2: Make qcow2_cache_table_release() work only in Linux  (Alberto Garcia)

    We are using QEMU_MADV_DONTNEED to discard the memory of individual
    L2 cache tables. The problem with this is that those semantics are
    specific to the Linux madvise() system call. Other implementations
    of madvise() (including the very Linux implementation of
    posix_madvise()) don't do that, so we cannot use them for the same
    purpose.

    This patch makes the code Linux-specific and uses madvise() directly,
    since there's no point in going through qemu_madvise() for this.

    Signed-off-by: Alberto Garcia <berto@igalia.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
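A hedged, self-contained sketch of the resulting pattern: QEMU guards this with CONFIG_LINUX, but the sketch uses __linux__ so it compiles standalone; names and the main() driver are illustrative:

    #include <stdio.h>
    #include <stddef.h>
    #ifdef __linux__
    #include <sys/mman.h>

    /* Linux madvise(MADV_DONTNEED) drops the backing pages while
     * keeping the mapping valid, which is the semantics the qcow2
     * cache wants. posix_madvise() does not guarantee this, so other
     * systems compile the call away entirely. */
    static void cache_table_release(void *table, size_t size)
    {
        madvise(table, size, MADV_DONTNEED); /* addr must be page aligned */
    }

    int main(void)
    {
        size_t len = 4096;
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf != MAP_FAILED) {
            cache_table_release(buf, len);
            munmap(buf, len);
        }
        return 0;
    }
    #else
    int main(void) { puts("cache-clean-interval: Linux only"); return 0; }
    #endif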
2016-11-23  Merge remote-tracking branch 'bonzini/tags/for-upstream' into staging  (Stefan Hajnoczi)

    Small fixes for rc1.

    # gpg: Signature made Tue 22 Nov 2016 10:26:56 PM GMT
    # gpg: using RSA key 0xBFFBD25F78C7AE83
    # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
    # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
    # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
    # Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

    * bonzini/tags/for-upstream:
      scsi/esp: do not raise an interrupt when reading the FIFO register
      nbd: Allow unmap and fua during write zeroes
      cpu_ldst.h: use correct guest address parameter

    Message-id: 1479853676-35995-1-git-send-email-pbonzini@redhat.com
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-11-22  nbd: Allow unmap and fua during write zeroes  (Eric Blake)

    Commit fa778fff wired up support to send NBD_CMD_WRITE_ZEROES, but
    forgot to inform the block layer that FUA unmapping of zeroes is
    supported. Without BDRV_REQ_MAY_UNMAP listed as a supported flag,
    the block layer will always insist on the NBD layer passing
    NBD_CMD_FLAG_NO_HOLE, resulting in the server always allocating
    things even when it was desired to let the server punch holes.
    Similarly, failing to set BDRV_REQ_FUA means that the client may
    send unnecessary NBD_CMD_FLUSH when it could have instead used the
    NBD_CMD_FLAG_FUA bit.

    CC: qemu-stable@nongnu.org
    Signed-off-by: Eric Blake <eblake@redhat.com>
    Message-Id: <1479413642-22463-2-git-send-email-eblake@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-11-22  block: Pass unaligned discard requests to drivers  (Eric Blake)

    Discard is advisory, so rounding the requests to alignment
    boundaries is never semantically wrong from the data that the guest
    sees. But at least the Dell Equallogic iSCSI SAN has an interesting
    property: its advertised discard alignment is 15M, yet it documents
    that discarding a sequence of 1M slices will eventually result in
    the 15M page being marked as discarded, and it is possible to
    observe which pages have been discarded.

    Between commits 9f1963b and b8d0a980, we converted the block layer
    to a byte-based interface that ultimately ignores any unaligned
    head or tail based on the driver's advertised discard granularity,
    which means that qemu 2.7 refuses to pass any discard request
    smaller than 15M down to the Dell Equallogic hardware. This is a
    slight regression in behavior compared to earlier qemu, where a
    guest executing discards in power-of-2 chunks used to be able to
    get every page discarded, but is now left with various pages still
    allocated because the guest requests did not align with the
    hardware's 15M pages.

    Since the SCSI specification says nothing about a minimum discard
    granularity, and only documents the preferred alignment, it is best
    if the block layer gives the driver every bit of information about
    discard requests, rather than rounding them to alignment boundaries
    early.

    Rework the block layer discard algorithm to mirror the write zero
    algorithm: always peel off any unaligned head or tail and manage
    that in isolation, then do the bulk of the request on an aligned
    boundary. The fallback when the driver returns -ENOTSUP for an
    unaligned request is to silently ignore that portion of the discard
    request; but for devices that can pass the partial request all the
    way down to hardware, this can result in the hardware coalescing
    requests and discarding aligned pages after all.

    Reported-by: Peter Lieven <pl@kamp.de>
    CC: qemu-stable@nongnu.org
    Signed-off-by: Eric Blake <eblake@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
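A self-contained sketch of the head/core/tail peeling described above (discard_fragments and its arguments are invented for illustration; the real code lives in the block layer's pdiscard path):

    #include <stdint.h>
    #include <stdio.h>

    /* Peel off an unaligned head, then the aligned bulk, then the
     * unaligned tail, handing each piece to the driver separately.
     * 'align' stands for the driver's advertised discard granularity. */
    static void discard_fragments(int64_t offset, int64_t count, int64_t align)
    {
        while (count > 0) {
            int64_t num;
            int64_t head = offset % align;
            if (head) {
                /* unaligned head: stop at the next alignment boundary */
                num = (align - head < count) ? align - head : count;
            } else if (count < align) {
                /* unaligned tail, shorter than one alignment unit */
                num = count;
            } else {
                /* aligned core: round the length down to the boundary */
                num = count - count % align;
            }
            printf("discard [%lld, +%lld)\n", (long long)offset,
                   (long long)num);
            offset += num;
            count -= num;
        }
    }

    int main(void)
    {
        /* 15M request at offset 512, 1M alignment: head, core, tail */
        discard_fragments(512, 15 * 1024 * 1024, 1024 * 1024);
        return 0;
    }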
2016-11-22  block: Return -ENOTSUP rather than assert on unaligned discards  (Eric Blake)

    Right now, the block layer rounds discard requests, so that
    individual drivers are able to assert that discard requests will
    never be unaligned. But there are some iSCSI devices that track and
    coalesce multiple unaligned requests, turning them into an actual
    discard if the requests eventually cover an entire page, which
    implies that it is better to always pass discard requests as low
    down the stack as possible.

    In isolation, this patch has no semantic effect, since the block
    layer currently never passes an unaligned request through. But the
    block layer already has code that silently ignores drivers that
    return -ENOTSUP for a discard request that cannot be honored (as
    well as drivers that return 0 even when nothing was done). The next
    patch will update the block layer to fragment discard requests, so
    that clients are guaranteed that they are either dealing with an
    unaligned head or tail, or an aligned core, making it similar to
    the block layer semantics of write zero fragmentation.

    CC: qemu-stable@nongnu.org
    Signed-off-by: Eric Blake <eblake@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-11-22  block: Let write zeroes fallback work even with small max_transfer  (Eric Blake)

    Commit 443668ca rewrote the write_zeroes logic to guarantee that an
    unaligned request never crosses a cluster boundary. But in the
    rewrite, the new code assumed that at most one iteration would be
    needed to get to an alignment boundary.

    However, it is easy to trigger an assertion failure: the Linux
    kernel limits loopback devices to advertise a max_transfer of only
    64k. Any operation that requires falling back to writes rather than
    more efficient zeroing must obey max_transfer during that fallback,
    which means an unaligned head may require multiple iterations of
    the write fallbacks before reaching the aligned boundaries, when
    layering a format with clusters larger than 64k atop the protocol
    of file access to a loopback device.

    Test case:

    $ qemu-img create -f qcow2 -o cluster_size=1M file 10M
    $ losetup /dev/loop2 /path/to/file
    $ qemu-io -f qcow2 /dev/loop2
    qemu-io> w 7m 1k
    qemu-io> w -z 8003584 2093056

    In fairness to Denis (as the original listed author of the culprit
    commit), the faulty logic for at most one iteration is probably all
    my fault in reworking his idea. But the solution is to restore what
    was in place prior to that commit: when dealing with an unaligned
    head or tail, iterate as many times as necessary while fragmenting
    the operation at max_transfer boundaries.

    Reported-by: Ed Swierk <eswierk@skyportsystems.com>
    CC: qemu-stable@nongnu.org
    CC: Denis V. Lunev <den@openvz.org>
    Signed-off-by: Eric Blake <eblake@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-11-22  qcow2: Inform block layer about discard boundaries  (Eric Blake)

    At the qcow2 layer, discard is only possible on a per-cluster
    basis; at the moment, qcow2 silently rounds any unaligned requests
    to this granularity. However, an upcoming patch will fix a
    regression in the block layer ignoring too much of an unaligned
    discard request, by changing the block layer to break up a discard
    request at alignment boundaries; for that to work, the block layer
    must know about our limits.

    However, we can't go one step further by changing
    qcow2_discard_clusters() to assert that requests are always
    aligned, since that helper function is reached on paths outside of
    the block layer.

    CC: qemu-stable@nongnu.org
    Signed-off-by: Eric Blake <eblake@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
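A sketch of what "informing the block layer" amounts to: the driver's refresh_limits hook fills in a limits struct. The struct and field here mirror QEMU's BlockLimits but are stubbed so the example stands alone; the assignment shown is the idea, not the verbatim patch:

    #include <stdint.h>
    #include <stdio.h>

    /* Stub of the relevant slice of QEMU's BlockLimits. */
    typedef struct BlockLimits {
        uint32_t pdiscard_alignment;   /* discard granularity, in bytes */
    } BlockLimits;

    int main(void)
    {
        BlockLimits bl = {0};
        uint32_t cluster_size = 64 * 1024;
        /* roughly what qcow2's refresh_limits hook would do: */
        bl.pdiscard_alignment = cluster_size;
        printf("advertised discard alignment: %u bytes\n",
               bl.pdiscard_alignment);
        return 0;
    }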
2016-11-21  gluster: Fix use after free in glfs_clear_preopened()  (Kevin Wolf)

    This fixes a use-after-free bug introduced in commit 6349c154. We
    need to use QLIST_FOREACH_SAFE() when freeing elements in the loop.

    Spotted by Coverity.

    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    Message-id: 1479378608-11962-1-git-send-email-kwolf@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
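A self-contained illustration of the bug class: a plain foreach reads node->next after free(node), while the _SAFE idiom (what QLIST_FOREACH_SAFE() does in QEMU's queue.h) caches the successor first. The list type here is invented for the example:

    #include <stdlib.h>

    struct Node {
        struct Node *next;
    };

    static void free_all(struct Node *node)
    {
        struct Node *next;        /* the cached successor */
        while (node) {
            next = node->next;    /* save BEFORE freeing */
            free(node);
            node = next;          /* never touch freed memory */
        }
    }

    int main(void)
    {
        struct Node *a = calloc(1, sizeof(*a));
        struct Node *b = calloc(1, sizeof(*b));
        a->next = b;
        free_all(a);
        return 0;
    }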
2016-11-14  mirror: do not flush every time the disks are synced  (Paolo Bonzini)

    Flushing every time the disks are synced puts a huge strain on the
    disks when there are many concurrent migrations. With this patch we
    only flush twice: just before issuing the event, and just before
    pivoting to the destination. If management completes the job close
    to the BLOCK_JOB_READY event, the cost of the second flush should
    be small anyway.

    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Message-id: 20161109162008.27287-2-pbonzini@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
2016-11-14  block/curl: Do not wait for data beyond EOF  (Max Reitz)

    libcurl will only give us as much data as there is, not more. The
    block layer will deny requests beyond the end of file for us; but
    since this block driver is still using a sector-based interface, we
    can still get in trouble if the file size is not a multiple of 512.

    While we have already made sure not to attempt transfers beyond the
    end of the file, we are currently still trying to receive data from
    there if the original request exceeds the file size. This patch
    fixes this issue and invokes qemu_iovec_memset() on the iovec's
    tail.

    Cc: qemu-stable@nongnu.org
    Signed-off-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20161025025431.24714-5-mreitz@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
2016-11-14  block/curl: Remember all sockets  (Max Reitz)

    For some connection types (like FTP, generally), more than one
    socket may be used (in FTP's case: control vs. data stream). As of
    commit 838ef602498b8d1985a231a06f5e328e2946a81d ("curl: Eliminate
    unnecessary use of curl_multi_socket_all"), we have to remember all
    of the sockets used by libcurl, but in fact we only did that for a
    single one. Since one libcurl connection may use multiple sockets,
    we have to remember them all.

    Cc: qemu-stable@nongnu.org
    Signed-off-by: Max Reitz <mreitz@redhat.com>
    Message-id: 20161025025431.24714-4-mreitz@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
2016-11-14  block/curl: Fix return value from curl_read_cb  (Max Reitz)

    While commit 38bbc0a580f9f10570b1d1b5d3e92f0e6feb2970 is correct in
    that the callback is supposed to return the number of bytes
    handled, what it does not mention is that libcurl will throw an
    error if the callback did not "handle" all of the data passed to
    it.

    Therefore, if the callback receives some data that it cannot handle
    (either because the receive buffer has not been set up yet or
    because it would not fit into the receive buffer) and we have to
    ignore it, we still have to report that the data has been handled.

    Obviously, this should not happen normally. But it does happen at
    least for FTP connections where some data (that we do not expect)
    may be generated when the connection is established.

    Cc: qemu-stable@nongnu.org
    Signed-off-by: Max Reitz <mreitz@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Message-id: 20161025025431.24714-3-mreitz@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
2016-11-14  block/curl: Use BDRV_SECTOR_SIZE  (Max Reitz)

    Currently, curl defines its own constant SECTOR_SIZE. There is no
    advantage over using the global BDRV_SECTOR_SIZE, so drop it.

    Cc: qemu-stable@nongnu.org
    Signed-off-by: Max Reitz <mreitz@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Message-id: 20161025025431.24714-2-mreitz@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
2016-11-14  block/curl: Drop TFTP "support"  (Max Reitz)

    Because TFTP does not support byte ranges, it was never usable with
    our curl block driver. Since apparently nobody has ever complained
    loudly enough for someone to take care of the issue until now, it
    seems reasonable to assume that nobody has ever actually used it.
    Therefore, it should be safe to just drop it from curl's protocol
    list.

    [Jeff Cody: Below is additional summary pulled, with some
    rewording, from followup emails between Max and Markus, to explain
    what worked and what didn't.]

    TFTP would sometimes work, to a limited extent, for images <= the
    curl "readahead" size, so long as reads started at offset zero. By
    default, that readahead size is 256KB. Reads starting at a non-zero
    offset would also have returned data from a zero offset. It can
    become more complicated still, with mixed reads at zero offset and
    non-zero offsets, due to data buffering. In short, TFTP could only
    have worked before in very specific scenarios with unrealistic
    expectations and constraints.

    Signed-off-by: Max Reitz <mreitz@redhat.com>
    Reviewed-by: Kevin Wolf <kwolf@redhat.com>
    Reviewed-by: Jeff Cody <jcody@redhat.com>
    Message-id: 20161102175539.4375-4-mreitz@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
2016-11-14  blockjob: refactor backup_start as backup_job_create  (John Snow)

    Refactor backup_start as backup_job_create, which only creates the
    job but does not automatically start it. The old interface,
    'backup_start', is not kept, in order to limit the number of
    nearly-identical interfaces that would have to be edited to keep up
    with QAPI changes in the future.

    Callers that wish to synchronously start the backup block job can
    instead just call block_job_start immediately after calling
    backup_job_create, as sketched below.

    Transactions are updated to use the new interface, calling
    block_job_start only during the .commit phase, which helps prevent
    race conditions where jobs may finish before we even finish
    building the transaction. This may happen, for instance, during
    empty block backup jobs.

    Reported-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    Signed-off-by: John Snow <jsnow@redhat.com>
    Message-id: 1478587839-9834-6-git-send-email-jsnow@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
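A self-contained analogue of the create-then-start split (QEMU's real BlockJob machinery is far richer; Job, job_create and job_start here are invented stand-ins for backup_job_create/block_job_start):

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Job {
        void (*start)(struct Job *job);   /* entrypoint, set at create */
        int started;
    } Job;

    static void backup_run(Job *job)
    {
        printf("backup running\n");
    }

    static Job *job_create(void (*fn)(Job *))
    {
        Job *job = calloc(1, sizeof(*job));
        job->start = fn;                  /* created, but not started */
        return job;
    }

    static void job_start(Job *job)
    {
        job->started = 1;
        job->start(job);                  /* only now does work begin */
    }

    int main(void)
    {
        Job *job = job_create(backup_run);
        /* a transaction could be assembled here, before any job runs */
        job_start(job);
        free(job);
        return 0;
    }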
2016-11-14  blockjob: add block_job_start  (John Snow)

    Instead of automatically starting jobs at creation time via
    backup_start et al, we'd like to return a job object pointer that
    can be started manually at a later point in time. For now, add the
    block_job_start mechanism and start the jobs automatically as we
    have been doing, with conversions coming job-by-job in later
    patches.

    Of note: cancellation of unstarted jobs will perform all the normal
    cleanup as if the job had started, particularly abort and clean.
    The only difference is that we will not emit any events, because
    the job never actually started.

    Signed-off-by: John Snow <jsnow@redhat.com>
    Message-id: 1478587839-9834-5-git-send-email-jsnow@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
2016-11-14  blockjob: add .start field  (John Snow)

    Add an explicit start field to specify the entrypoint. We already
    own the coroutine itself and manage its lifetime; let's take
    control of its creation, too. This will allow us to delay creation
    of the actual coroutine until we know we'll actually start a
    BlockJob in block_job_start. This avoids the sticky question of how
    to "un-create" a Coroutine that hasn't been started yet.

    Signed-off-by: John Snow <jsnow@redhat.com>
    Reviewed-by: Kevin Wolf <kwolf@redhat.com>
    Message-id: 1478587839-9834-4-git-send-email-jsnow@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
2016-11-14  blockjob: add .clean property  (John Snow)

    Cleaning up after we have deferred to the main thread but before
    the transaction has converged can be dangerous and result in
    deadlocks if the job cleanup invokes any BH polling loops. A job
    may attempt to begin cleaning up, but may induce another job to
    enter its cleanup routine. The second job, part of our same
    transaction, will block waiting for the first job to finish, so
    neither job may now make progress.

    To rectify this, allow jobs to register a cleanup operation that
    will always run, regardless of whether the job was in a transaction
    or not, and regardless of whether the transaction job group
    completed successfully or not.

    Move sensitive cleanup to this callback instead; it is guaranteed
    to be run only after the transaction has converged, which removes
    sensitive timing constraints from said cleanup.

    Furthermore, in future patches these cleanup operations will be
    performed regardless of whether or not we actually started the job.
    Therefore, cleanup callbacks should essentially confine themselves
    to undoing create operations, e.g. setup actions taken in what is
    now backup_start.

    Reported-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    Signed-off-by: John Snow <jsnow@redhat.com>
    Reviewed-by: Kevin Wolf <kwolf@redhat.com>
    Message-id: 1478587839-9834-3-git-send-email-jsnow@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
2016-11-14  Merge remote-tracking branch 'jsnow/tags/ide-pull-request' into staging  (Stefan Hajnoczi)

    # gpg: Signature made Mon 14 Nov 2016 04:16:48 PM GMT
    # gpg: using RSA key 0x7DEF8106AAFC390E
    # gpg: Good signature from "John Snow (John Huston) <jsnow@redhat.com>"
    # Primary key fingerprint: FAEB 9711 A12C F475 812F 18F2 88A9 064D 1835 61EB
    # Subkey fingerprint: F9B7 ABDB BCAC DF95 BE76 CBD0 7DEF 8106 AAFC 390E

    * jsnow/tags/ide-pull-request:
      ahci-test: add QMP tray test for ATAPI
      libqos/ahci: Add get_sense and test_ready
      libqos/ahci: Add ATAPI tray macros
      libqos/ahci: Support expected errors
      libqtest: add qmp_eventwait_ref
      block-backend: Always notify on blk_eject
      ahci-test: test atapi read_cd with bcl, nb_sectors = 0
      ahci-test: Create smaller test ISO images
      atapi: classify read_cd as conditionally returning data

    Message-id: 1479140746-22142-1-git-send-email-jsnow@redhat.com
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-11-14  block-backend: Always notify on blk_eject  (John Snow)

    blk_eject is only used by scsi-disk and atapi, and in both cases we
    only attempt to invoke blk_eject if we have a bona-fide change in
    tray state.

    The "issue" here is that the tray state does not generate a QMP
    event unless there is a medium/BDS attached to the device, so if
    libvirt et al are waiting for a tray event to occur from an
    empty-but-closed drive, software opening that drive will not emit
    an event and libvirt will wait forever.

    Change this by modifying blk_eject to always emit an event, instead
    of conditionally on a "real" backend eject.

    Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1373264
    Reported-by: Peter Krempa <pkrempa@redhat.com>
    Signed-off-by: John Snow <jsnow@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Reviewed-by: Kevin Wolf <kwolf@redhat.com>
    Message-id: 1478553214-497-2-git-send-email-jsnow@redhat.com
    Signed-off-by: John Snow <jsnow@redhat.com>
2016-11-11  raw-posix: Rename 'raw_s' to 'rs'  (Fam Zheng)

    It is too confusing because it sounds like a BDRVRawState variable.

    Suggested-by: Max Reitz <mreitz@redhat.com>
    Signed-off-by: Fam Zheng <famz@redhat.com>
    Message-id: 1477565117-17230-1-git-send-email-famz@redhat.com
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Max Reitz <mreitz@redhat.com>
2016-11-11  nfs: Fix memory leak in nfs_file_create()  (Kevin Wolf)

    The leak was introduced in commit 94d6a7a7.

    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-11-11  qcow2: Remove stale FIXME comment  (Alberto Garcia)

    It was from the time when none of the global functions had a qcow2_
    prefix.

    Signed-off-by: Alberto Garcia <berto@igalia.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-11-11  raw_bsd: don't check size alignment when only offset is set  (Tomáš Golembiovský)

    We make sure that the size is aligned to the sector length to
    prevent any round-ups; otherwise we could end up reading/writing
    data outside the area specified by the user. This is only needed
    when the user supplies the size option, to avoid any surprises. It
    is not necessary when only the offset is set.

    Moreover, the check made it difficult to use the offset option
    without the size option: it put an unneeded restriction on the
    offset, which had to be aligned too. Because bdrv_getlength()
    returns an aligned value, having an unaligned offset would make the
    check fail.

    Signed-off-by: Tomáš Golembiovský <tgolembi@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-11-11  raw_bsd: move check to prevent overflow  (Tomáš Golembiovský)

    When only the offset is specified, but no size, and the offset is
    greater than the real size of the containing device, an overflow
    occurs when parsing the options. This overflow is harmless because
    we do check for this exact situation a little bit later, but it
    leads to an error message with weird values. It is better to do the
    check sooner and prevent the overflow.

    Signed-off-by: Tomáš Golembiovský <tgolembi@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-11-11  block/ssh: Code cleanup for unused parameter  (Ashijeet Acharya)

    This patch drops the unused BDRVSSHState parameter of the
    ssh_config() function and cleans up the code. The unused parameter
    was introduced by commit c322712.

    Signed-off-by: Ashijeet Acharya <ashijeetacharya@gmail.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-11-11  block/nbd: Fix the leaked visitor  (Ashijeet Acharya)

    This patch fixes a visitor leak in nbd_refresh_filename() by
    calling visit_free(). The leak was introduced by commit 491d6c7.

    Signed-off-by: Ashijeet Acharya <ashijeetacharya@gmail.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-11-08  block: Don't mark node clean after failed flush  (Kevin Wolf)

    Commit 3ff2f67a changed bdrv_co_flush() so that no flush is issued
    if the image hasn't been dirtied since the last flush. This is not
    quite correct: the condition should be that the image hasn't been
    dirtied since the last _successful_ flush. This patch changes the
    logic accordingly.

    Without this fix, subsequent bdrv_co_flush() calls would return
    success without actually doing anything even though the image is
    still dirty. The difference is visible in some blkdebug test cases
    where error messages incorrectly disappeared after commit 3ff2f67a.

    Cc: qemu-stable@nongnu.org
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    Reviewed-by: Denis V. Lunev <den@openvz.org>
    Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
    Reviewed-by: John Snow <jsnow@redhat.com>
    Message-id: 1478300595-10090-1-git-send-email-kwolf@redhat.com
    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
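A self-contained sketch of the corrected logic: remember the write generation covered by the last *successful* flush, and skip a flush only when nothing changed since then (the generation-counter names are illustrative, not the verbatim patch):

    #include <stdbool.h>
    #include <stdio.h>

    static unsigned write_gen;     /* bumped on every write */
    static unsigned flushed_gen;   /* generation of last good flush */

    static int do_flush(bool fail)
    {
        unsigned current_gen = write_gen;
        if (current_gen == flushed_gen) {
            return 0;                  /* genuinely clean: no-op */
        }
        if (fail) {
            return -1;                 /* flushed_gen NOT updated */
        }
        flushed_gen = current_gen;     /* only mark clean on success */
        return 0;
    }

    int main(void)
    {
        write_gen++;                   /* image dirtied */
        printf("failed flush -> %d\n", do_flush(true));
        printf("retry flush  -> %d\n", do_flush(false)); /* still flushes */
        return 0;
    }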
2016-11-03  Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging  (Stefan Hajnoczi)

    * NBD bugfix (Changlong)
    * NBD write zeroes support (Eric)
    * Memory backend fixes (Haozhong)
    * Atomics fix (Alex)
    * New AVX512 features (Luwei)
    * "make check" logging fix (Paolo)
    * Chardev refactoring fallout (Paolo)
    * Small checkpatch improvements (Paolo, Jeff)

    # gpg: Signature made Wed 02 Nov 2016 08:31:11 AM GMT
    # gpg: using RSA key 0xBFFBD25F78C7AE83
    # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
    # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
    # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
    # Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

    * remotes/bonzini/tags/for-upstream: (30 commits)
      main-loop: Suppress I/O thread warning under qtest
      docs/rcu.txt: Fix minor typo
      vl: exit qemu on guest panic if -no-shutdown is not set
      checkpatch: allow spaces before parenthesis for 'coroutine_fn'
      x86: add AVX512_4VNNIW and AVX512_4FMAPS features
      slirp: fix CharDriver breakage
      qemu-char: do not forward events through the mux until QEMU has started
      nbd: Implement NBD_CMD_WRITE_ZEROES on client
      nbd: Implement NBD_CMD_WRITE_ZEROES on server
      nbd: Improve server handling of shutdown requests
      nbd: Refactor conversion to errno to silence checkpatch
      nbd: Support shorter handshake
      nbd: Less allocation during NBD_OPT_LIST
      nbd: Let client skip portions of server reply
      nbd: Let server know when client gives up negotiation
      nbd: Share common option-sending code in client
      nbd: Send message along with server NBD_REP_ERR errors
      nbd: Share common reply-sending code in server
      nbd: Rename struct nbd_request and nbd_reply
      nbd: Rename NbdClientSession to NBDClientSession
      ...

    Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-11-02  nbd: Implement NBD_CMD_WRITE_ZEROES on client  (Eric Blake)

    The upstream NBD protocol recently added the ability to efficiently
    write zeroes without having to send the zeroes over the wire, along
    with a flag to control whether the client wants a hole.

    The generic block code takes care of falling back to the obvious
    write of lots of zeroes if we return -ENOTSUP because the server
    does not have WRITE_ZEROES.

    Ideally, since NBD_CMD_WRITE_ZEROES does not involve any data over
    the wire, we want to support transactions that are much larger than
    the normal 32M limit imposed on NBD_CMD_WRITE. But the server may
    still have a limit smaller than UINT_MAX, so until the experimental
    NBD protocol additions for advertising various command sizes are
    finalized (see [1], [2]), for now we just stick to the same limits
    as normal writes.

    [1] https://github.com/yoe/nbd/blob/extension-info/doc/proto.md
    [2] https://sourceforge.net/p/nbd/mailman/message/35081223/

    Signed-off-by: Eric Blake <eblake@redhat.com>
    Message-Id: <1476469998-28592-17-git-send-email-eblake@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-11-02  nbd: Rename struct nbd_request and nbd_reply  (Eric Blake)

    Our coding convention prefers CamelCase names, and we already have
    other existing structs with NBDFoo naming. Let's be consistent,
    before later patches add even more structs.

    Signed-off-by: Eric Blake <eblake@redhat.com>
    Message-Id: <1476469998-28592-6-git-send-email-eblake@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-11-02  nbd: Rename NbdClientSession to NBDClientSession  (Eric Blake)

    It's better to use consistent capitalization of the namespace used
    for NBD functions; we have more instances of NBD* than Nbd*.

    Signed-off-by: Eric Blake <eblake@redhat.com>
    Message-Id: <1476469998-28592-5-git-send-email-eblake@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-11-02  nbd: Treat flags vs. command type as separate fields  (Eric Blake)

    The current upstream NBD protocol documents that requests have a
    16-bit flags field, followed by a 16-bit command type; older
    versions mentioned only a 32-bit field with masking to find the
    flags. Since the protocol is in network order (big-endian over the
    wire), the ABI is unchanged; but dealing with the flags as a
    separate field rather than masking will make it easier to add
    support for upcoming NBD extensions that increase the number of
    both flags and commands.

    Improve some comments in nbd.h based on the current upstream NBD
    protocol (https://github.com/yoe/nbd/blob/master/doc/proto.md), and
    touch some nearby code to keep checkpatch.pl happy.

    Signed-off-by: Eric Blake <eblake@redhat.com>
    Message-Id: <1476469998-28592-3-git-send-email-eblake@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
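A sketch of the resulting wire layout as a C struct (illustrative of the protocol header described above, not QEMU's actual declarations; in practice each field is serialized big-endian, one by one, rather than sent as a raw struct):

    #include <stdint.h>
    #include <stdio.h>

    struct NBDRequestHeader {
        uint32_t magic;    /* NBD_REQUEST_MAGIC */
        uint16_t flags;    /* e.g. NBD_CMD_FLAG_FUA, now its own field */
        uint16_t type;     /* e.g. NBD_CMD_WRITE, no masking needed */
        uint64_t handle;
        uint64_t from;
        uint32_t len;
    };

    int main(void)
    {
        /* 4 + 2 + 2 + 8 + 8 + 4 = 28 bytes on the wire */
        printf("header bytes on the wire: %d\n", 4 + 2 + 2 + 8 + 8 + 4);
        return 0;
    }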
2016-11-01  nbd: Use CoQueue for free_sema instead of CoMutex  (Changlong Xie)

    NBD is using the CoMutex in a way that wasn't anticipated. For
    example, if there are N (N=26, MAX_NBD_REQUESTS=16) nbd write
    requests, we will invoke nbd_client_co_pwritev N times:

    time  request  actions
       1        1  in_flight=1, Coroutine=C1
       2        2  in_flight=2, Coroutine=C2
     ...
      15       15  in_flight=15, Coroutine=C15
      16       16  in_flight=16, Coroutine=C16, free_sema->holder=C16,
                   mutex->locked=true
      17       17  in_flight=16, Coroutine=C17, queue C17 into free_sema->queue
      18       18  in_flight=16, Coroutine=C18, queue C18 into free_sema->queue
     ...
      26        N  in_flight=16, Coroutine=C26, queue C26 into free_sema->queue

    Once the nbd client receives the reply for request No. 16, we
    re-enter C16. That is fine, because C16 is equal to
    free_sema->holder:

    time  request  actions
      27       16  in_flight=15, Coroutine=C16, free_sema->holder=C16,
                   mutex->locked=false

    Then nbd_coroutine_end invokes qemu_co_mutex_unlock, which pops
    coroutines from the head of free_sema->queue and enters C17. Now
    free_sema->holder is C17:

    time  request  actions
      28       17  in_flight=16, Coroutine=C17, free_sema->holder=C17,
                   mutex->locked=true

    In the above scenario we have only received the reply for request
    No. 16. As time goes by, the nbd client will almost always receive
    the replies for requests 1 to 15 before the reply for request 17,
    which owns C17. In that case we hit the failed assertion
    "mutex->holder == self" introduced by Kevin's commit 0e438cdc
    "coroutine: Let CoMutex remember who holds it". For example, if the
    nbd client receives the reply for request No. 15, qemu stops
    unexpectedly:

    time  request         actions
      29  15 (most cases) in_flight=15, Coroutine=C15,
                          free_sema->holder=C17, mutex->locked=false

    Per Paolo's suggestion, "The simplest fix is to change it to
    CoQueue, which is like a condition variable", this patch replaces
    CoMutex with CoQueue.

    Cc: Wen Congyang <wency@cn.fujitsu.com>
    Reported-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
    Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Changlong Xie <xiecl.fnst@cn.fujitsu.com>
    Message-Id: <1476267508-19499-1-git-send-email-xiecl.fnst@cn.fujitsu.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
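A schematic sketch of the fixed pattern using the QEMU coroutine primitives named above (not the verbatim patch; the helper names and the abbreviated NBDClientSession fields are illustrative). The point is that a CoQueue has no owner, so any freed request slot can wake the head of the queue regardless of which coroutine took the slot originally:

    static void coroutine_fn nbd_co_request_begin(NBDClientSession *s)
    {
        while (s->in_flight == MAX_NBD_REQUESTS) {
            qemu_co_queue_wait(&s->free_sema);   /* no holder concept */
        }
        s->in_flight++;
    }

    static void coroutine_fn nbd_co_request_end(NBDClientSession *s)
    {
        s->in_flight--;
        qemu_co_queue_next(&s->free_sema);       /* wake one waiter */
    }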
2016-11-01  blockjobs: split interface into public/private, Part 1  (John Snow)

    To make it a little more obvious which functions are intended to be
    public interface and which are intended to be for use only by jobs
    themselves, split the interface into "public" and "private" files.

    Convert blockjobs (e.g. block/backup) to using the private
    interface. Leave blockdev and others on the public interface.

    There are remaining uses of private state by qemu-img, and several
    cases in blockdev.c and block/io.c where we grab job->blk for the
    purposes of acquiring an AioContext. These will be corrected in
    future patches.

    Signed-off-by: John Snow <jsnow@redhat.com>
    Reviewed-by: Kevin Wolf <kwolf@redhat.com>
    Reviewed-by: Jeff Cody <jcody@redhat.com>
    Message-id: 1477584421-1399-7-git-send-email-jsnow@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
2016-11-01  blockjob: centralize QMP event emissions  (John Snow)

    There's no reason to leave this to blockdev; we can do it in
    blockjobs directly and get rid of an extra callback for most users.
    All non-internal jobs, even those created outside of QMP, will now
    consistently emit events.

    Signed-off-by: John Snow <jsnow@redhat.com>
    Reviewed-by: Kevin Wolf <kwolf@redhat.com>
    Reviewed-by: Jeff Cody <jcody@redhat.com>
    Message-id: 1477584421-1399-5-git-send-email-jsnow@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
2016-11-01  Replication/Blockjobs: Create replication jobs as internal  (John Snow)

    Bubble up the internal interface to commit and backup jobs, then
    switch replication tasks over to using this methodology.

    Signed-off-by: John Snow <jsnow@redhat.com>
    Reviewed-by: Kevin Wolf <kwolf@redhat.com>
    Reviewed-by: Jeff Cody <jcody@redhat.com>
    Message-id: 1477584421-1399-4-git-send-email-jsnow@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
2016-11-01  blockjobs: Allow creating internal jobs  (John Snow)

    Add the ability to create jobs without an ID.

    Signed-off-by: John Snow <jsnow@redhat.com>
    Reviewed-by: Kevin Wolf <kwolf@redhat.com>
    Reviewed-by: Jeff Cody <jcody@redhat.com>
    Message-id: 1477584421-1399-3-git-send-email-jsnow@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
2016-11-01  block/gluster: fix port type in the QAPI options list  (Prasanna Kumar Kalever)

    After the introduction of the QAPI schema in the gluster block
    driver code, the port type is now a string, as per
    InetSocketAddress:

    { 'struct': 'InetSocketAddress',
      'data': { 'host': 'str',
                'port': 'str',
                '*to': 'uint16',
                '*ipv4': 'bool',
                '*ipv6': 'bool' } }

    But the current code still treats it as QEMU_OPT_NUMBER, hence fix
    the port option to accept QEMU_OPT_STRING.

    Suggested-by: Markus Armbruster <armbru@redhat.com>
    Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
    Reviewed-by: Jeff Cody <jcody@redhat.com>
    Signed-off-by: Jeff Cody <jcody@redhat.com>
2016-11-01  block/gluster: improve defense over string to int conversion  (Prasanna Kumar Kalever)

    Using atoi() to convert a string to an int is error prone when the
    supplied string is not purely numerical. This is not a bug in the
    existing code, because:

    static QemuOptsList runtime_tcp_opts = {
        .name = "gluster_tcp",
        .head = QTAILQ_HEAD_INITIALIZER(runtime_tcp_opts.head),
        .desc = {
            ...
            {
                .name = GLUSTER_OPT_PORT,
                .type = QEMU_OPT_NUMBER,
                .help = "port number ...",
            },
            ...
        },
    };

    The port type is QEMU_OPT_NUMBER, so before we actually reach
    atoi() the port has already been validated by
    parse_option_number(). However, it is good practice to use a
    function like parse_uint_full() instead of atoi(), to keep the port
    parsing self-defending.

    Note: now that the port string-to-int conversion has its own
    defense, and since we understand that the port argument is actually
    a string type, a follow-up patch will move the port type from
    QEMU_OPT_NUMBER to QEMU_OPT_STRING.

    [Jeff Cody: removed spurious parenthesis]

    Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
    Reviewed-by: Markus Armbruster <armbru@redhat.com>
    Signed-off-by: Jeff Cody <jcody@redhat.com>
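A self-contained analogue of the parse_uint_full() idea (parse_port and its bounds are invented for illustration; QEMU's real helper lives in util/cutils.c). Unlike atoi(), it rejects trailing junk, detects overflow, and reports errors instead of silently returning a number:

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int parse_port(const char *s, unsigned *out)
    {
        char *end;
        unsigned long v;

        errno = 0;
        v = strtoul(s, &end, 10);
        if (errno || end == s || *end != '\0' || v > 65535) {
            return -1;                 /* not a clean port number */
        }
        *out = (unsigned)v;
        return 0;
    }

    int main(void)
    {
        unsigned port;
        printf("\"24007\"  -> %d\n", parse_port("24007", &port));  /* 0 */
        /* atoi("24007x") would silently return 24007: */
        printf("\"24007x\" -> %d\n", parse_port("24007x", &port)); /* -1 */
        return 0;
    }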
2016-11-01  block: Turn on "unmap" in active commit  (Fam Zheng)

    We already specified BDRV_O_UNMAP when opening images in 'qemu-img
    commit', but didn't turn on the "unmap" option in the active commit
    job. This patch fixes that, so that zeroed clusters in the top
    image can be discarded, which is desired in the virt-sparsify use
    case, where a temporary overlay is created and fstrim'ed before
    committing back, to free space in the original image.

    This also enables it for block-commit.

    Signed-off-by: Fam Zheng <famz@redhat.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Message-id: 1474974892-5031-1-git-send-email-famz@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
2016-11-01  block/gluster: memory usage: use one glfs instance per volume  (Prasanna Kumar Kalever)

    Currently, for every drive accessed via gfapi we create a new glfs
    instance (a glfs_new() call followed by glfs_init()), which can
    consume memory in the hundreds of MB. From the table below it looks
    like ~300 MB of VSZ is consumed per instance.

    Before:
    -------
    Disks   VSZ       RSS
    1       1098728   187756
    2       1430808   198656
    3       1764932   199704
    4       2084728   202684

    This patch maintains a list of pre-opened glfs objects. On adding a
    new drive belonging to the same gluster volume, we just reuse the
    existing glfs object by updating its refcount. With this approach
    we avoid the unwanted memory consumption and the
    glfs_new()/glfs_init() calls for accessing a disk (file) that
    belongs to the same volume.

    From the table below, notice that the additional memory usage after
    adding a disk (which reuses the existing glfs object) is negligible
    compared to before.

    After:
    ------
    Disks   VSZ       RSS
    1       1101964   185768
    2       1109604   194920
    3       1114012   196036
    4       1114496   199868

    Disks: number of -drive
    VSZ: virtual memory size of the process in KiB
    RSS: resident set size, the non-swapped physical memory (in
    kilobytes)

    VSZ and RSS were analyzed using the 'ps aux' utility.

    Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
    Reviewed-by: Jeff Cody <jcody@redhat.com>
    Message-id: 1477581890-4811-1-git-send-email-prasanna.kalever@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
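A self-contained analogue of the pre-opened list: one cached connection per volume name, reused via a refcount instead of re-running the expensive glfs_new()/glfs_init() pair (conn, conn_get and the fixed-size cache are invented for illustration):

    #include <stdio.h>
    #include <string.h>

    #define MAX_VOLS 16

    struct conn {
        char volume[64];
        int refcount;        /* 0 means the slot is free */
    };

    static struct conn cache[MAX_VOLS];

    static struct conn *conn_get(const char *volume)
    {
        struct conn *free_slot = NULL;
        for (int i = 0; i < MAX_VOLS; i++) {
            if (cache[i].refcount && !strcmp(cache[i].volume, volume)) {
                cache[i].refcount++;          /* reuse: cheap */
                return &cache[i];
            }
            if (!cache[i].refcount && !free_slot) {
                free_slot = &cache[i];
            }
        }
        if (!free_slot) {
            return NULL;
        }
        /* the expensive init (glfs_new + glfs_init) would happen here */
        snprintf(free_slot->volume, sizeof(free_slot->volume), "%s", volume);
        free_slot->refcount = 1;
        return free_slot;
    }

    int main(void)
    {
        struct conn *a = conn_get("vol0");    /* initializes */
        struct conn *b = conn_get("vol0");    /* reuses, refcount 2 */
        printf("same object: %d, refcount: %d\n", a == b, a->refcount);
        return 0;
    }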
2016-11-01  block: add gluster ifdef guard checks for SEEK_DATA/SEEK_HOLE support  (Jeff Cody)

    Add checks to see if the system compiling QEMU has support for
    SEEK_HOLE/SEEK_DATA. If the system does not, we will flag that seek
    data is unsupported in gluster.

    Note: this is not a check on whether the gluster server itself
    supports SEEK_DATA (that is already done during runtime), but
    rather whether the compilation environment supports SEEK_DATA.

    Reviewed-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Jeff Cody <jcody@redhat.com>
    Tested-by: Eric Blake <eblake@redhat.com>
    Message-id: 00370bce5c98140d6c56ad5145635ec6551265cc.1475876377.git.jcody@redhat.com
    Signed-off-by: Jeff Cody <jcody@redhat.com>
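A sketch of the compile-time guard: only claim SEEK_DATA support when the build host's headers define it (the BUILD_HAS_SEEK_DATA macro is an illustrative name; the runtime check against the gluster server is separate, as noted above). On glibc, SEEK_DATA/SEEK_HOLE are exposed with _GNU_SOURCE:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>

    #if defined(SEEK_DATA) && defined(SEEK_HOLE)
    #define BUILD_HAS_SEEK_DATA 1
    #else
    #define BUILD_HAS_SEEK_DATA 0    /* flag seek-data as unsupported */
    #endif

    int main(void)
    {
        printf("compile-time SEEK_DATA support: %d\n", BUILD_HAS_SEEK_DATA);
        return 0;
    }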