|
Block layer patches
- NBD server: Fix crashes related to switching between AioContexts
- file-posix: Workaround for discard/write_zeroes on buggy filesystems
- Follow-up fixes for the reopen vs. permission changes
- quorum: Fix error handling for flush
- block-copy: Refactor copy_range handling
- docs: Describe how to use 'null-co' block driver
# gpg: Signature made Wed 02 Jun 2021 14:44:15 BST
# gpg: using RSA key DC3DEB159A9AF95D3D7456FE7F09B272C88F2FD6
# gpg: issuer "kwolf@redhat.com"
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>" [full]
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream:
docs/secure-coding-practices: Describe how to use 'null-co' block driver
block-copy: refactor copy_range handling
block-copy: fix block_copy_task_entry() progress update
nbd/server: Use drained block ops to quiesce the server
block-backend: add drained_poll
block: improve permission conflict error message
block: simplify bdrv_child_user_desc()
block/vvfat: inherit child_vvfat_qcow from child_of_bds
block: improve bdrv_child_get_parent_desc()
block-backend: improve blk_root_get_parent_desc()
block: document child argument of bdrv_attach_child_common()
block/file-posix: Try other fallbacks after invalid FALLOC_FL_ZERO_RANGE
block/file-posix: Fix problem with fallocate(PUNCH_HOLE) on GPFS
block: drop BlockBackendRootState::read_only
block: drop BlockDriverState::read_only
block: consistently use bdrv_is_read_only()
block/vvfat: fix vvfat_child_perm crash
block/vvfat: child_vvfat_qcow: add .get_parent_aio_context, fix crash
qemu-io-cmds: assert that we don't have .perm requested in no-blk case
block/quorum: Provide .bdrv_co_flush instead of .bdrv_co_flush_to_disk
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Now permissions are updated as follows:
1. do graph modifications ignoring permissions
2. do permission update
(of course, we roll back [1] if [2] fails)
So, at stage [2] we can't say which users are "old" and which are
"new", existing only since [1]. The current error message is therefore a bit
outdated. Let's improve it to make everything clear.
While here, also add a comment and some useful assertions.
iotests 283, 307, qsd-jobs outputs are updated.
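The two-stage scheme above can be sketched, for illustration only, in
Python-style pseudocode (the names are hypothetical, not QEMU's actual C API):

  def change_graph(apply_modification, refresh_permissions):
      # 1. do the graph modification, ignoring permissions; keep an undo
      undo = apply_modification()
      try:
          # 2. recompute cumulative permissions on the final graph
          refresh_permissions()
      except PermissionError:
          # roll back [1] if [2] fails
          undo()
          raise

At stage [2] only the final graph is visible, which is the limitation the
new error message has to live with.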
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210601075218.79249-7-vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
We have different types of parents: block nodes, block backends and
jobs. So, it makes sense to specify the type together with the name.
Next, this handler is used to compose an error message about a permission
conflict, and a permission conflict occurs at a specific place in the block
graph. We shouldn't report the name of the parent device (as it refers to
another place in the block graph), but exactly and only the name of the node. So,
use bdrv_get_node_name() directly.
iotest 283 output is updated.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-Id: <20210601075218.79249-4-vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
We have different types of parents: block nodes, block backends and
jobs. So, it makes sense to specify the type together with the name.
While here, also use g_autofree.
iotest 307 output is updated.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Message-Id: <20210601075218.79249-3-vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
move python/qemu/*.py to python/qemu/[machine, qmp, utils]/*.py and
update import directives across the tree.
This is done to create a PEP420 namespace package, in which we may
create subpackages. To do this, the namespace directory ("qemu") should
not have any modules in it. Those files will go into new 'machine',
'qmp' and 'utils' subpackages instead.
Implement machine/__init__.py making the top-level classes and functions
from its various modules available directly inside the package. Change
qmp.py to qmp/__init__.py similarly, such that all of the useful QMP
library classes are available directly from "qemu.qmp" instead of
"qemu.qmp.qmp".
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Cleber Rosa <crosa@redhat.com>
Message-id: 20210527211715.394144-10-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
|
|
mypy is kind of weird about how it handles imports. For legacy reasons,
it won't load PEP 420 namespaces, because of logic implemented prior to
that becoming a standard.
So, if you plan on using any, you have to pass
--namespace-packages. Alright, fine.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Cleber Rosa <crosa@redhat.com>
Message-id: 20210527211715.394144-9-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
|
|
Each instance of qemu.machine.QEMUMachine currently has a "test
directory", which may not have any relation to a "test", and it's
really a temporary directory.
Users instantiating the QEMUMachine class will be able to set the
location of the directory that will *contain* QEMUMachine's unique
temporary directory, so the parameter name has been changed from
test_dir to base_temp_dir.
A property has been added to allow users to access it without using
private attributes, and with that, the directory is created on first
use of the property.
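A minimal usage sketch, assuming the parameter described above (the property
name used here, temp_dir, is an assumption for illustration):

  from qemu.machine import QEMUMachine

  # The machine's unique temporary directory will be created *inside*
  # base_temp_dir, and only on first access of the property.
  vm = QEMUMachine('/usr/bin/qemu-system-x86_64', base_temp_dir='/var/tmp')
  print(vm.temp_dir)   # hypothetical property; creates the directory lazily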
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Message-Id: <20210211220146.2525771-3-crosa@redhat.com>
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
|
|
pylint 2.8 introduces the consider-using-with error, suggesting
the use of the 'with' block statement when possible.
Modify all subprocess.Popen calls to use the 'with' statement,
except the one in __init__ of the QemuIoInteractive class, since
it is assigned to a class field and used in other methods.
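The conversion follows the usual pattern; roughly (an illustrative sketch,
not the exact tree-wide change):

  import subprocess

  # Before: pipes and the child process had to be cleaned up by hand
  proc = subprocess.Popen(['qemu-img', 'info', 'test.qcow2'],
                          stdout=subprocess.PIPE, universal_newlines=True)
  output = proc.communicate()[0]

  # After: 'with' closes the pipes and waits for the child automatically
  with subprocess.Popen(['qemu-img', 'info', 'test.qcow2'],
                        stdout=subprocess.PIPE,
                        universal_newlines=True) as proc:
      output = proc.communicate()[0]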
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20210510190449.65948-1-eesposit@redhat.com>
[mreitz: Disable bad-option-value warning in the iotests' pylintrc, so
that disabling consider-using-with in QemuIoInteractive will
not produce a warning in pre-2.8 pylint versions]
Signed-off-by: Max Reitz <mreitz@redhat.com>
|
|
Due to a typo, in this case the SOCK_DIR was not being created.
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20210323181928.311862-6-pbonzini@redhat.com>
Message-Id: <20210503110110.476887-6-pbonzini@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
|
|
Right now there is no easy way for "check" to print a reproducer command.
Because such a reproducer command line would be huge, we can instead teach
check to start a command of our choice. This can be for example a Python
unit test with arguments to only run a specific subtest.
Move the trailing empty line to print_env(), since it always looks better
and one caller was not adding it.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Tested-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20210323181928.311862-5-pbonzini@redhat.com>
Message-Id: <20210503110110.476887-5-pbonzini@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
|
|
TestEnv
In the next patch, "check" will learn how to execute a test script without
going through TestRunner. To enable this, keep only the text output
and subprocess handling in the TestRunner; move into TestEnv the logic
to prepare for running a subprocess.
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20210323181928.311862-4-pbonzini@redhat.com>
Message-Id: <20210503110110.476887-4-pbonzini@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
|
|
Python test scripts that use unittest consist of multiple tests.
unittest.main allows selecting which tests to run, but currently this
is not possible because the iotests wrapper ignores sys.argv.
unittest.main command line options also allow the user to pick the
desired options for verbosity, failfast mode, etc. While "-d" is
currently translated to "-v", it also enables extra debug output,
and other options are not available at all.
These command line options only work if the unittest.main testRunner
argument is a type, rather than a TestRunner instance. Therefore, pass
the class name and "verbosity" argument to unittest.main, and adjust for
the different default warnings between TextTestRunner and unittest.main.
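In outline, the wrapper ends up doing something like this (a sketch under
the assumptions above; the argument values are illustrative):

  import sys
  import unittest

  def execute_unittest(debug=False):
      # Pass the runner *class*, not an instance, so that unittest.main()
      # also parses its own command line options (-v, -f, test selection).
      unittest.main(argv=sys.argv,
                    testRunner=unittest.TextTestRunner,
                    verbosity=2 if debug else 1,
                    warnings=None)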
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Tested-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20210323181928.311862-3-pbonzini@redhat.com>
Message-Id: <20210503110110.476887-3-pbonzini@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
|
|
Instead of buffering the test output into a StringIO, patch it on
the fly by wrapping sys.stdout's write method. This can be
done unconditionally, even if using -d, which makes execute_unittest
a bit simpler.
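The technique, roughly (an illustrative sketch, not the exact patch):

  import sys

  class FilteredStdout:
      # Wrap a stream and filter everything passed to write() on the fly.
      def __init__(self, stream, filters):
          self._stream = stream
          self._filters = filters

      def write(self, text):
          for f in self._filters:
              text = f(text)
          self._stream.write(text)

      def __getattr__(self, name):
          # delegate flush(), isatty(), ... to the real stream
          return getattr(self._stream, name)

  # Patch the output as the tests produce it, instead of buffering it all
  # into a StringIO and post-processing afterwards.
  sys.stdout = FilteredStdout(sys.stdout,
                              [lambda s: s.replace('\r\n', '\n')])  # example filter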
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Tested-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20210323181928.311862-2-pbonzini@redhat.com>
Message-Id: <20210503110110.476887-2-pbonzini@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
|
|
If the mirror job is READY, then cancelling it does not discard the whole
result of the operation; instead, it's a documented way to get a
point-in-time snapshot of the source disk.
So, we should not cancel any requests if the mirror is READY and
force=false. Let's fix that case.
Note that the bug we had before this commit is not critical, as the
only .bdrv_cancel_in_flight implementation is nbd_cancel_in_flight(),
and it cancels only requests waiting for reconnection, so it should be a
rare case.
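For context, the documented point-in-time-snapshot usage looks roughly like
this from a Python QMP client (a sketch; 'vm' is assumed to be a running
qemu.machine.QEMUMachine, and the device and target names are made up):

  vm.qmp('drive-mirror', device='drive0', target='/tmp/snap.qcow2',
         sync='full', format='qcow2')
  vm.event_wait('BLOCK_JOB_READY')
  # The job is READY; cancelling without force completes the copy up to
  # this point instead of discarding it, so in-flight requests must not be
  # cancelled here.
  vm.qmp('block-job-cancel', device='drive0')
  vm.event_wait('BLOCK_JOB_COMPLETED')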
Fixes: 521ff8b779b11c394dbdc43f02e158dd99df308a
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210421075858.40197-1-vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
|
|
Sometimes the parser needs to further split a token it has collected
from the token input stream. Right now, it does a cursory check to see
if the relevant characters appear in the token to determine if it should
break it down further.
However, qemu_rbd_next_tok() will escape characters as it removes tokens
from the token stream and plain strchr() won't. This can make the
initial strchr() check slightly misleading since it implies
qemu_rbd_next_tok() will find the token and split on it, except the
reality is that qemu_rbd_next_tok() will pass over it if it is escaped.
Use a custom strchr to avoid mixing escaped and unescaped string
operations. Furthermore, this code is identical to how
qemu_rbd_next_tok() seeks its next token, so incorporate this custom
strchr into the body of that function to reduce duplication.
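The distinction, sketched in Python rather than the driver's C (a
hypothetical helper for illustration, not qemu_rbd_next_tok() itself):

  def find_unescaped(s, delims, start=0):
      # Index of the first character from 'delims' that is not preceded by
      # a backslash escape, or -1 if there is none.
      i = start
      while i < len(s):
          if s[i] == '\\':
              i += 2              # skip the escaped character entirely
              continue
          if s[i] in delims:
              return i
          i += 1
      return -1

  # A plain strchr()/str.find() sees the escaped ':' in 'my\:pool' and
  # suggests a split, but the tokenizer passes over it:
  assert 'my\\:pool:image'.find(':') == 3
  assert find_unescaped('my\\:pool:image', ':') == 8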
Reported-by: Han Han <hhan@redhat.com>
Fixes: https://bugzilla.redhat.com/1873913
Signed-off-by: Connor Kuehl <ckuehl@redhat.com>
Message-Id: <20210421212343.85524-3-ckuehl@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
|
|
The deprecation message in the expected output has technically been
wrong since the wrong version of a patch was applied to it. Because of
this, the test fails. Correct the expected output so that it passes.
Signed-off-by: Connor Kuehl <ckuehl@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Message-Id: <20210421212343.85524-2-ckuehl@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
|
|
It was deprecated in commit e1c4269763, v5.2.0. See that commit
message for rationale.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20210501075747.3293186-1-armbru@redhat.com>
ACKed-by: Peter Krempa <pkrempa@redhat.com>
|
|
Now, bdrv_node_check_perm() is called only with fresh cumulative
permissions, so it's actually a "refresh_perm".
Move the permission calculation into the function. Also, drop the unreachable
error message and rewrite the remaining one to be more generic (as now
we don't know which node was added and which was already here).
Also add a Virtuozzo copyright notice, as significant work has been done at
this point.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20210428151804.439460-37-vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
We don't need this workaround anymore: bdrv_append() is already smart
enough and we can use the new bdrv_drop_filter().
This commit also effectively reverts the recent 705dde27c6c53b73, which
checked .active in the I/O path. Still, that commit said the problem
should be theoretical, and the logic of filter removal has changed anyway.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20210428151804.439460-25-vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
Split out the no-perm part of bdrv_root_attach_child() into a separate
transaction action. bdrv_root_attach_child() now moves to the new
permission update paradigm: first update graph relations, then update
permissions.
The qsd-jobs test output is updated: it seems the permission update now
happens in a different order. Still, the test comment says that we only
want to check that the command doesn't crash, and that is still the case.
The error message is a bit misleading, as it looks like the job was added
first. But in the new graph-update paradigm we can't distinguish such
things. We should update the error message, but let's not do it now.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210428151804.439460-19-vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
Rewrite bdrv_check_perm(), bdrv_abort_perm_update() and bdrv_set_perm()
to update nodes in topological sort order instead of a simple DFS. With
topologically sorted nodes, we update a node only after all of its parents
have already been updated. With DFS that's not the case.
Consider the following example:
A -+
|  |
|  v
|  B
|  |
v  |
C<-+
A is a parent of B and C, and B is a parent of C.
Obviously, to update permissions we should go in the order A B C, so that
when we update C, all parent permissions have already been updated. But with
the current approach (simple recursion) we can update in the sequence
A C B C (C is updated twice). On the first update of C, we consider the old B
permissions, so we do the wrong thing. If it succeeds, all is OK: on the
second update of C we finish with the correct graph. But if the wrong thing
fails, we break the whole process for no reason (it's possible that the
updated B permission would be less strict, but we will never get to check it).
The new approach also gives us a way to update several nodes simultaneously
and correctly: we just need to run bdrv_topological_dfs() several times
to add all nodes and their subtrees into one topologically sorted list
(the next patch will update bdrv_replace_node() in this manner).
The test test_parallel_perm_update() now passes, so move it out of the
debugging "if".
We also need to support ignore_children in
bdrv_parent_perms_conflict().
For test 283, the order of the conflicting-parents check is changed.
Note also that in bdrv_check_perm() we don't check for a parent conflict at
the root bs, as we may be in the middle of a permission update in
bdrv_reopen_multiple(). bdrv_reopen_multiple() will be updated soon.
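For the A/B/C example above, the ordering can be sketched in a few lines of
Python (an illustration of the idea, not the QEMU implementation):

  def topological_dfs(node, children, visited=None, order=None):
      # Post-order DFS; reversing the result yields a topological order,
      # so every node comes after all of its parents.
      if visited is None:
          visited, order = set(), []
      if node not in visited:
          visited.add(node)
          for child in children.get(node, []):
              topological_dfs(child, children, visited, order)
          order.append(node)
      return order

  children = {'A': ['B', 'C'], 'B': ['C'], 'C': []}
  print(list(reversed(topological_dfs('A', children))))   # ['A', 'B', 'C']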
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20210428151804.439460-14-vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
Add a test accompanying commit 53431b9086b2832ca1aeff0c55e186e9ed79bd11
("block/mirror: Fix mirror_top's permissions").
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210331122815.51491-1-mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
Just demonstrate one of the x-blockdev-reopen use cases. We can't simply
remove a persistent bitmap from an RO node (for example, from a backing
file), as we need to remove it from the image too. So, we should reopen
the node first.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210401161522.8001-1-vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
The job may or may not be ready before the 'quit' is issued. Whether it
is is irrelevant; for the purpose of the test, it only needs to still be
there. Filter the job status change and READY events from the output so
it becomes reliable.
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210401132839.139939-1-mreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
Three test cases:
(1) Adding a qcow2 (metadata) file to an existing data file, see whether
we can read the existing data through the qcow2 image.
(2) Append data to the data file, grow the qcow2 image accordingly, see
whether we can read the new data through the qcow2 image.
(3) At runtime, add a backing image to a freshly created qcow2 image
with an external data file (with data-file-raw). Reading data from
the qcow2 image must return the same result as reading data from the
data file, so everything in the backing image must be ignored.
(This did not use to be the case, because without the L2 tables
preallocated, all clusters would appear as unallocated, and so the
qcow2 driver would fall through to the backing file.)
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210326145509.163455-3-mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
|
|
Setting the qcow2 data-file-raw bit means that you can ignore the
qcow2 metadata when reading from the external data file. It does not
mean that you have to ignore it, though. Therefore, the data read must
be the same regardless of whether you interpret the metadata or whether
you ignore it, and thus the L1/L2 tables must all be present and give a
1:1 mapping.
This patch changes 244's output: First, the qcow2 file is larger right
after creation, because of metadata preallocation. Second, the qemu-img
map output changes: Everything that was not explicitly discarded or
zeroed is now a data area.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210326145509.163455-2-mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
|
|
For its concurrent requests, 046 has always filtered the offset,
probably because concurrent requests may settle in any order. However,
it did not filter the request length, and so if requests with different
lengths settle in an unexpected order (notably the longer request before
the shorter request), the test fails (for no good reason).
Filter the length, too.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200918153323.108932-1-mreitz@redhat.com>
|
|
15ce94a68ca ("block/qed: bdrv_qed_do_open: deal with errp") has improved
the qed driver's error reporting, though sadly did not add a test for
it.
The good news are: There already is such a test, namely 116.
The bad news are: Its reference output was not adjusted, and so now it
fails.
Let's fix the reference output, which has the nice side effect of
demonstrating 15ce94a68ca's improvements.
Fixes: 15ce94a68ca6730466c565c3d29971aab3087bf1
("block/qed: bdrv_qed_do_open: deal with errp")
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20210326141419.156831-1-mreitz@redhat.com>
|
|
A patch was recently applied that touched up some error messages that
pertained to key names like 'node-name'. The trouble is it only updated
tests/qemu-iotests/051.pc.out and not tests/qemu-iotests/051.out as
well.
Do that now.
Fixes: 785ec4b1b9 ("block: Clarify error messages pertaining to
'node-name'")
Signed-off-by: Connor Kuehl <ckuehl@redhat.com>
Message-Id: <20210318200949.1387703-2-ckuehl@redhat.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
|
|
There is a typo in iotest 051; correct it.
Signed-off-by: Tao Xu <tao3.xu@intel.com>
Message-Id: <20210324084321.90952-1-tao3.xu@intel.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
|
|
'remotes/stefanha-gitlab/tags/block-pull-request' into staging
Pull request
This dirty bitmap fix solves a crash that can be triggered in the destination
QEMU process during live migration.
# gpg: Signature made Wed 24 Mar 2021 14:51:31 GMT
# gpg: using RSA key 8695A8BFD3F97CDAAC35775A9CA4ABB381AB73C8
# gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>" [full]
# gpg: aka "Stefan Hajnoczi <stefanha@gmail.com>" [full]
# Primary key fingerprint: 8695 A8BF D3F9 7CDA AC35 775A 9CA4 ABB3 81AB 73C8
* remotes/stefanha-gitlab/tags/block-pull-request:
migrate-bitmaps-postcopy-test: check that we can't remove in-flight bitmaps
migration/block-dirty-bitmap: make incoming disabled bitmaps busy
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
ioeventfd is only available with virtio-scsi-pci or virtio-scsi-ccw, so
use the alias, but add a rule to require virtio-scsi-pci or virtio-scsi-ccw
for the tests that use iothreads.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210319202335.2397060-7-laurent@vivier.eu>
Message-Id: <20210323165308.15244-22-alex.bennee@linaro.org>
|
|
This makes it possible to cover the virtio tests with a 32-bit big-endian
virtio-mmio machine.
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20210319202335.2397060-6-laurent@vivier.eu>
Message-Id: <20210323165308.15244-21-alex.bennee@linaro.org>
|
|
Commit f1d5516ab583 introduced a check in some iotests to see whether
the machine is s390-ccw-virtio and, if so, to select virtio-*-ccw rather
than virtio-*-pci.
We don't need that, because QEMU already provides aliases to use the correct
virtio interface according to the machine type.
This patch replaces all virtio-*-pci and virtio-*-ccw devices with virtio-*
and removes get_virtio_scsi_device().
This also enables virtio-mmio devices (virtio-*-device).
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20210319202335.2397060-5-laurent@vivier.eu>
Message-Id: <20210323165308.15244-20-alex.bennee@linaro.org>
|
|
Check that we can't remove bitmaps that are being migrated on the
destination VM.
The new check proves that the previous commit helps.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20210322094906.5079-3-vsementsov@virtuozzo.com>
|
|
Drop the crap deprecated in commit 4a27a638e7 "fdc: Deprecate
configuring floppies with -global isa-fdc" (v5.1.0).
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 20210309161214.1402527-3-armbru@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
|
|
The image streaming block job restricts shared permissions of the nodes
it accesses. This can obviously fail when other users already got these
permissions. &error_abort is therefore wrong and can crash. Handle these
errors gracefully and just fail starting the block job.
Reported-by: Nini Gu <ngu@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20210309173451.45152-1-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
bdrv_close_all() asserts that no jobs are running any more, so we need
to cancel all jobs first to avoid failing the assertion.
Fixes: b55a3c8860b763b62b2cc2f4a6f55379977bbde5
Reported-by: Nini Gu <ngu@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20210309121814.31078-1-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
The 'host_device' and 'host_cdrom' drivers must be used instead.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
|
|
The same data is available in the 'BlockDeviceInfo' struct.
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
|
|
The same information is available via the 'recording' and 'busy' fields.
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
|
|
This has been hardcoded to "false" since 2.10.0, since secrets required
to unlock block devices are now always provided up front instead of using
interactive prompts.
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
|
|
The 'scsi-hd' and 'scsi-cd' devices provide suitable alternatives.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
|
|
The 'ide-hd' and 'ide-cd' devices provide suitable alternatives.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
|
|
The generic 'migrate_set_parameters' command handles all types of parameters.
Only the QMP commands were documented in the deprecations page, but the
rationale for deprecating applies equally to HMP, and the replacements
exist. Furthermore the HMP commands are just shims to the QMP commands,
so removing the latter breaks the former unless they get re-implemented.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
|
|
staging
nbd patches for 2021-03-09
- Add Vladimir as NBD co-maintainer
- Fix reporting of holes in NBD_CMD_BLOCK_STATUS
- Improve command-line parsing accuracy of large numbers (anything going
through qemu_strtosz), including the deprecation of hex+suffix
- Improve some error reporting in the block layer
# gpg: Signature made Tue 09 Mar 2021 15:38:10 GMT
# gpg: using RSA key 71C2CC22B1C4602927D2F3AAA7A16B4A2527436A
# gpg: Good signature from "Eric Blake <eblake@redhat.com>" [full]
# gpg: aka "Eric Blake (Free Software Programmer) <ebb9@byu.net>" [full]
# gpg: aka "[jpeg image of size 6874]" [full]
# Primary key fingerprint: 71C2 CC22 B1C4 6029 27D2 F3AA A7A1 6B4A 2527 436A
* remotes/ericb/tags/pull-nbd-2021-03-09:
block/qcow2: refactor qcow2_update_options_prepare error paths
block/qed: bdrv_qed_do_open: deal with errp
block/qcow2: simplify qcow2_co_invalidate_cache()
block/qcow2: read_cache_sizes: return status value
block/qcow2-bitmap: return status from qcow2_store_persistent_dirty_bitmaps
block/qcow2-bitmap: improve qcow2_load_dirty_bitmaps() interface
block/qcow2: qcow2_get_specific_info(): drop error propagation
blockjob: return status from block_job_set_speed()
block/mirror: drop extra error propagation in commit_active_start()
block: drop extra error propagation for bdrv_set_backing_hd
blockdev: fix drive_backup_prepare() missed error
block: check return value of bdrv_open_child and drop error propagation
utils: Deprecate hex-with-suffix sizes
utils: Improve qemu_strtosz() to have 64 bits of precision
utils: Enhance testsuite for do_strtosz()
nbd: server: Report holes for raw images
MAINTAINERS: add Vladimir as co-maintainer of NBD
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
We have multiple clients of qemu_strtosz (qemu-io, the opts visitor,
the keyval visitor), and it gets annoying that edge-case testing is
impacted by implicit rounding to 53 bits of precision due to parsing
with strtod(). As an example posted by Rich Jones:
$ nbdkit memory $(( 2**63 - 2**30 )) --run \
'build/qemu-io -f raw "$uri" -c "w -P 3 $(( 2**63 - 2**30 - 512 )) 512" '
write failed: Input/output error
because 9223372035781033472 got rounded to 0x7fffffffc0000000 which is
out of bounds.
It is also worth noting that our existing parser, by virtue of using
strtod(), accepts decimal AND hex numbers, even though test-cutils
previously lacked any coverage of the latter until the previous patch.
We do have existing clients that expect a hex parse to work (for
example, iotest 33 using qemu-io -c "write -P 0xa 0x200 0x400"), but
strtod() parses "08" as 8 rather than as an invalid octal number, so
we know there are no clients that depend on octal. Our use of
strtod() also means that "0x1.8k" would actually parse as 1536 (the
fraction is 8/16), rather than 1843 (if the fraction were 8/10); but
as this was not covered in the testsuite, I have no qualms forbidding
hex fractions as invalid, so this patch declares that the use of
fractions is only supported with decimal input, and enhances the
testsuite to document that.
Our previous use of strtod() meant that -1 parsed as a negative; now
that we parse with strtoull(), negative values can wrap around modulo
2^64, so we have to explicitly check whether the user passed in a '-';
and make it consistent to also reject '-0'. This has the minor effect
of treating negative values as EINVAL (with no change to endptr)
rather than ERANGE (with endptr advanced to what was parsed), visible
in the updated iotest output.
We also had no testsuite coverage of "1.1e0k", which happened to parse
under strtod() but is unlikely to occur in practice; as long as we are
making things more robust, it is easy enough to reject the use of
exponents in a strtod parse.
The fix is done by breaking the parse into an integer prefix (no loss
in precision), rejecting negative values (since we can no longer rely
on strtod() to do that), determining if a decimal or hexadecimal parse
was intended (with the new restriction that a fractional hex parse is
not allowed), and where appropriate, using a floating point fractional
parse (where we also scan to reject use of exponents in the fraction).
The bulk of the patch is then updates to the testsuite to match our
new precision, as well as adding new cases we reject (whether they
were rejected or inadvertently accepted before).
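In Python terms, the resulting parse is approximately the following (an
illustration of the rules above, not the C implementation; the deprecation
warning for hex-with-suffix sizes is omitted):

  import re

  UNITS = {'': 1, 'B': 1, 'K': 1024, 'M': 1024 ** 2, 'G': 1024 ** 3,
           'T': 1024 ** 4, 'P': 1024 ** 5, 'E': 1024 ** 6}

  def strtosz(text):
      text = text.strip()
      if text.startswith('-'):
          raise ValueError('negative sizes rejected')    # EINVAL, not ERANGE
      m = re.fullmatch(r'(0[xX][0-9a-fA-F]+|[0-9]+)(\.[0-9]+)?'
                       r'([bBkKmMgGtTpPeE]?)', text)
      if m is None:
          raise ValueError('invalid size')               # also rejects exponents
      integer, fraction, unit = m.groups()
      if fraction and integer.lower().startswith('0x'):
          raise ValueError('fraction only supported for decimal input')
      mult = UNITS[unit.upper()]
      value = int(integer, 16 if integer.lower().startswith('0x') else 10) * mult
      if fraction:
          value += round(float(fraction) * mult)         # only the fraction is float
      return value

  assert strtosz('%d' % (2 ** 63 - 2 ** 30)) == 2 ** 63 - 2 ** 30   # no 53-bit rounding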
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20210211204438.1184395-3-eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
|
|
When querying image extents for a raw image, qemu-nbd reports holes as
zero:
$ qemu-nbd -t -r -f raw empty-6g.raw
$ qemu-img map --output json nbd://localhost
[{ "start": 0, "length": 6442450944, "depth": 0, "zero": true, "data": true, "offset": 0}]
$ qemu-img map --output json empty-6g.raw
[{ "start": 0, "length": 6442450944, "depth": 0, "zero": true, "data": false, "offset": 0}]
It turns out that qemu-img map reports a hole based on BDRV_BLOCK_DATA, but
the NBD server reports a hole based on BDRV_BLOCK_ALLOCATED.
The NBD protocol says:
NBD_STATE_HOLE (bit 0): if set, the block represents a hole (and
future writes to that area may cause fragmentation or encounter an
NBD_ENOSPC error); if clear, the block is allocated or the server
could not otherwise determine its status.
qemu-img manual says:
whether the sectors contain actual data or not (boolean field data;
if false, the sectors are either unallocated or stored as
optimized all-zero clusters);
To me, data=false looks compatible with NBD_STATE_HOLE. From the user's
point of view, getting the same results from qemu-nbd and qemu-img is more
important than being more correct about allocation status.
Changing the NBD server to report holes using BDRV_BLOCK_DATA makes qemu-nbd
results compatible with qemu-img map:
$ qemu-img map --output json nbd://localhost
[{ "start": 0, "length": 6442450944, "depth": 0, "zero": true, "data": false, "offset": 0}]
Signed-off-by: Nir Soffer <nsoffer@redhat.com>
Message-Id: <20210219160752.1826830-1-nsoffer@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
|
|
Signed-off-by: Connor Kuehl <ckuehl@redhat.com>
Message-Id: <20210305151929.1947331-3-ckuehl@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|
|
Some error messages contain ambiguous representations of the 'node-name'
parameter. This can be particularly confusing when exchanging QMP
messages (C = client, S = server):
C: {"execute": "block_resize", "arguments": { "device": "my_file", "size": 26843545600 }}
S: {"error": {"class": "GenericError", "desc": "Cannot find device=my_file nor node_name="}}
^^^^^^^^^
This error message suggests one could send a message with a key called
'node_name':
C: {"execute": "block_resize", "arguments": { "node_name": "my_file", "size": 26843545600 }}
^^^^^^^^^
but using the underscore is actually incorrect, the parameter should be
'node-name':
S: {"error": {"class": "GenericError", "desc": "Parameter 'node_name' is unexpected"}}
This behavior was uncovered in bz1651437, but I ended up going down a
rabbit hole looking for other areas where this miscommunication might
occur and changing those accordingly as well.
Fixes: https://bugzilla.redhat.com/1651437
Signed-off-by: Connor Kuehl <ckuehl@redhat.com>
Message-Id: <20210305151929.1947331-2-ckuehl@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
|