| author | Eric Blake <eblake@redhat.com> | 2018-11-13 17:03:18 -0600 |
|---|---|---|
| committer | Kevin Wolf <kwolf@redhat.com> | 2018-11-19 12:51:40 +0100 |
| commit | 77d6a21558577fbdd35e65e0e1d03ae07214329f (patch) | |
| tree | 60abccf363c918aaf73e96d3986db697b48be829 /tests | |
| parent | d3e1a7eb4ceb9489d575c45c9518137dfbd1389d (diff) | |
qcow2: Don't allow overflow during cluster allocation
Our code was already checking that we did not attempt to
allocate more clusters than what would fit in an INT64 (the
physical maximum if we can access a full off_t's worth of
data). But this does not catch smaller limits enforced by
various spots in the qcow2 image description: L1 and normal
clusters of L2 are documented as having bits 63-56 reserved
for other purposes, capping our maximum offset at 64PB (bit
55 is the maximum bit set). And for compressed images with
2M clusters, the cap drops the maximum offset to bit 48, or
a maximum offset of 512TB. If we overflow that offset, we
would write compressed data into one place, but try to
decompress from another, which won't work.
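The two caps described above can be sketched in plain C. This is not the QEMU patch itself; helper names such as compressed_max_offset() and allocation_fits() are invented for illustration, assuming the limits stated in the message (bit 55 for standard L1/L2 offsets, and 62 - (cluster_bits - 8) usable offset bits in a compressed cluster descriptor).

```c
/*
 * Illustrative sketch only: the kind of offset cap and bounds check
 * the commit message describes, not the actual QEMU implementation.
 */
#include <assert.h>
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Standard L1/L2 entries reserve bits 63-56, so bit 55 is the highest
 * usable offset bit: the cap is 2^56 - 1, i.e. just under 64 PB. */
#define STD_MAX_OFFSET ((UINT64_C(1) << 56) - 1)

/* A compressed cluster descriptor stores the host offset in
 * 62 - (cluster_bits - 8) bits; with 2 MB clusters (cluster_bits = 21)
 * that is 49 bits, i.e. a 512 TB cap (bit 48 is the highest bit). */
static uint64_t compressed_max_offset(int cluster_bits)
{
    int offset_bits = 62 - (cluster_bits - 8);
    return (UINT64_C(1) << offset_bits) - 1;
}

/* Reject an allocation whose last byte would land beyond the cap. */
static bool allocation_fits(uint64_t offset, uint64_t size, uint64_t max)
{
    return size != 0 && size - 1 <= max && offset <= max - (size - 1);
}

int main(void)
{
    int cluster_bits = 21;                       /* 2 MB clusters */
    uint64_t cmax = compressed_max_offset(cluster_bits);

    printf("standard cap:   0x%016" PRIx64 "\n", STD_MAX_OFFSET);
    printf("compressed cap: 0x%016" PRIx64 "\n", cmax);

    /* An allocation starting past the 512 TB compressed cap must be
     * rejected, otherwise data would be written in one place but
     * decompressed from another. */
    assert(!allocation_fits(cmax + 1, UINT64_C(1) << cluster_bits, cmax));
    assert(allocation_fits(0, UINT64_C(1) << cluster_bits, cmax));
    return 0;
}
```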
It's actually possible to prove that overflow can cause image
corruption without this patch; I'll add the iotests separately
in the next commit.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Diffstat (limited to 'tests')
0 files changed, 0 insertions, 0 deletions