|
Implement a basic ASPEED VIC device model for the AST2400 SoC[1], with
enough functionality to boot an aspeed_defconfig Linux kernel. The model
implements the 'new' (revised) register set: while the hardware exposes
both the new and legacy register sets, accesses to the model's legacy
register set are not serviced (though each such access is logged).
[1] http://www.aspeedtech.com/products.php?fPath=20&rId=376
Signed-off-by: Andrew Jeffery <andrew@aj.id.au>
Message-id: 1458096317-25223-3-git-send-email-andrew@aj.id.au
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
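For illustration, a minimal sketch of the "log but don't service" behaviour
described above, using QEMU's qemu_log_mask(LOG_UNIMP, ...); the offset split
and register names are placeholders, not the real AST2400 layout:

    #include "qemu/osdep.h"
    #include "qemu/log.h"
    #include "exec/hwaddr.h"

    #define DEMO_VIC_LEGACY_LIMIT 0x80   /* hypothetical split, not the real map */

    static uint64_t demo_vic_read(void *opaque, hwaddr offset, unsigned size)
    {
        if (offset < DEMO_VIC_LEGACY_LIMIT) {
            /* Legacy bank: log the access but do not service it */
            qemu_log_mask(LOG_UNIMP,
                          "%s: legacy register 0x%" HWADDR_PRIx " not serviced\n",
                          __func__, offset);
            return 0;
        }
        /* ...decode 'offset' against the new register set and return it... */
        return 0;
    }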
|
|
Implement basic ASPEED timer functionality for the AST2400 SoC[1]: up to
8 timers can be independently configured, enabled, reset and disabled.
Some hardware features are not implemented, namely clock value matching
and pulse generation, but the implementation is enough to boot the Linux
kernel configured with aspeed_defconfig.
[1] http://www.aspeedtech.com/products.php?fPath=20&rId=376
Signed-off-by: Andrew Jeffery <andrew@aj.id.au>
Message-id: 1458096317-25223-2-git-send-email-andrew@aj.id.au
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
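A rough sketch of how one such down-counter can be modelled on top of QEMU's
timer API; the names and the fixed tick period are illustrative, not taken
from the aspeed_timer implementation:

    #include "qemu/osdep.h"
    #include "qemu/timer.h"

    typedef struct DemoTimer {
        QEMUTimer *timer;
        uint32_t reload;        /* counter reload value, in ticks */
        uint64_t tick_ns;       /* assumed fixed input-clock period */
    } DemoTimer;

    static void demo_timer_expire(void *opaque)
    {
        DemoTimer *t = opaque;
        /* raise the timer interrupt here, then re-arm for periodic mode */
        timer_mod(t->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
                            t->reload * t->tick_ns);
    }

    static void demo_timer_enable(DemoTimer *t)
    {
        t->timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, demo_timer_expire, t);
        timer_mod(t->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
                            t->reload * t->tick_ns);
    }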
|
|
Merge remote-tracking branch 'remotes/stefanha/tags/tracing-pull-request' into staging
# gpg: Signature made Mon 14 Mar 2016 11:27:01 GMT using RSA key ID 81AB73C8
# gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>"
# gpg: aka "Stefan Hajnoczi <stefanha@gmail.com>"
* remotes/stefanha/tags/tracing-pull-request:
trace: separate MMIO tracepoints from TB-access tracepoints
trace: include CPU index in trace_memory_region_*()
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Memory accesses to code which has previously been translated into a TB show up
in the MMIO path, so that they may invalidate the TB. It's extremely confusing
to mix those in with device MMIOs, so split them into their own tracepoint.
Signed-off-by: Hollis Blanchard <hollis_blanchard@mentor.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1456949575-1633-2-git-send-email-hollis_blanchard@mentor.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
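In trace-events terms, the split amounts to two distinct entries along these
lines (declarations approximated for illustration, not copied from the patch):

    # device MMIO proper
    memory_region_ops_read(void *mr, uint64_t addr, uint64_t value, unsigned size) "mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u"
    # accesses that hit previously-translated code (TB invalidation path)
    memory_region_tb_read(void *mr, uint64_t addr, uint64_t value, unsigned size) "mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u"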
|
|
Knowing which CPU performed an action is essential for understanding SMP guest
behavior.
However, cpu_physical_memory_rw() may be executed by a machine init function,
before any VCPUs are running and 'current_cpu' is NULL. In this case, store -1
in the trace record as the CPU index. Trace
analysis tools may need to be aware of this special case.
Signed-off-by: Hollis Blanchard <hollis_blanchard@mentor.com>
Message-id: 1456949575-1633-1-git-send-email-hollis_blanchard@mentor.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
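A sketch of the guard this implies at the trace call site; the surrounding
variables (mr, addr, value, size) and the exact tracepoint argument order are
assumed for illustration:

    /* -1 stands in for "no CPU" when tracing happens before vCPUs exist */
    int cpu_index = current_cpu ? current_cpu->cpu_index : -1;
    trace_memory_region_ops_read(cpu_index, mr, addr, value, size);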
|
|
Both platform and PCI vfio drivers create a "slow", I/O memory region
with one or more mmap memory regions overlaid when supported by the
device. Generalize this to a set of common helpers in the core that
pulls the region info from vfio, fills the region data, configures
slow mapping, and adds helpers for completing the mmap, enable/disable,
and teardown. This can be immediately used by the PCI MSI-X code,
which needs to mmap around the MSI-X vector table.
This also changes VFIORegion.mem to be dynamically allocated because
otherwise we don't know how the caller has allocated VFIORegion and
therefore don't know whether to unreference it to destroy the
MemoryRegion or not.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
|
|
When memory_region_ops tracepoints are enabled, calculate and record the
absolute address being accessed. Otherwise, we only get offsets into the
memory region instead of addresses.
[Fixed "offset" -> "addr" in trace event format strings.
--Stefan]
Signed-off-by: Hollis Blanchard <hollis_blanchard@mentor.com>
Message-id: 1454976185-30095-3-git-send-email-hollis_blanchard@mentor.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
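One way to do that calculation is to walk the MemoryRegion container chain,
adding the base of every enclosing region; the helper below is a sketch using
the addr/container fields from QEMU's memory internals, not the patch itself:

    #include "qemu/osdep.h"
    #include "exec/memory.h"

    /* Turn a region-relative offset into an absolute address. */
    static hwaddr demo_absolute_addr(MemoryRegion *mr, hwaddr offset)
    {
        hwaddr abs_addr = offset;

        while (mr) {
            abs_addr += mr->addr;
            mr = mr->container;
        }
        return abs_addr;
    }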
|
|
Previously, a single MMIO could trigger the memory_region_ops tracepoint twice:
once on its way into subpage ops, then later on its way into the model's ops.
Also, the fields previously called "addr" are actually offsets into the memory
region. Rename them to "offset" while we're editing the tracepoint definitions.
Signed-off-by: Hollis Blanchard <hollis_blanchard@mentor.com>
Message-id: 1454976185-30095-2-git-send-email-hollis_blanchard@mentor.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Also fix a typo in the virtio_balloon_handle_output() trace while here.
[The double-quoting was a limitation of the old tracetool.sh script.
The modern tracetool.py script does not require double-quotes at the end
of the line. See commit cf85cf8e972f3ad79f203be4edb7968d6e052293
("trace: Format strings must begin/end with double quotes").
--Stefan]
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-id: 20160111173036.24764.59878.stgit@bahia.huguette.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
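For reference, a modern trace-events declaration simply keeps the whole format
string between one opening and one closing double quote; this particular line
is illustrative, not the one touched here:

    virtio_balloon_handle_output(const char *name, const char *section) "section name: %s %s"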
|
|
The "pnum < nb_sectors" condition in deciding whether to actually copy
data is unnecessarily strict, and the qiov initialization is
unnecessary for bdrv_aio_write_zeroes and bdrv_aio_discard.
Rewrite mirror_iteration to fix both flaws.
The output of iotests 109 is updated because we now report the offset
and len slightly differently in mirroring progress.
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1454637630-10585-2-git-send-email-famz@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
|
|
Using the return value to report errors is error prone:
- xics_alloc() returns -1 on error but spapr_vio_busdev_realize() errors
on 0
- xics_alloc_block() returns the unclear value of ics->offset - 1 on error
but both rtas_ibm_change_msi() and spapr_phb_realize() error on 0
This patch adds an errp argument to xics_alloc() and xics_alloc_block() to
report errors. The return value of these functions is a valid IRQ number
if errp is NULL. It is undefined otherwise.
The corresponding error traces get promoted to error messages. Note that
the "can't allocate IRQ" error message in spapr_vio_busdev_realize() also
moves to xics_alloc(). Similar error message consolidation isn't really
applicable to xics_alloc_block() because callers have extra context (device
config address, MSI or MSI-X).
This fixes the issues mentioned above.
Based on previous work from Brian W. Hart.
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
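A caller-side sketch of the convention this introduces; the exact xics_alloc()
parameter list shown is an assumption for illustration, only the Error
propagation pattern (with errp and a local error) is the point:

    Error *local_err = NULL;
    int irq = xics_alloc(spapr->icp, 0 /* src */, 0 /* irq_hint */,
                         false /* lsi */, &local_err);

    if (local_err) {
        error_propagate(errp, local_err);
        return;
    }
    /* 'irq' is only meaningful when no error was reported */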
|
|
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
This adds the SAS1068 device, a SAS disk controller used in VMware that
is oldish but widely supported and has decent performance. Unlike
megasas, it presents itself as a SAS controller and not as a RAID
controller. The device corresponds to the mptsas kernel driver in
Linux.
A few small things in the device setup are based on Don Slutz's old
patch, but the device emulation was written from scratch based on Don's
SeaBIOS patch and on the FreeBSD and Linux drivers. It is 2400 lines
shorter than Don's patch (and roughly the same size as MegaSAS---also
because it doesn't support the similar SPI controller), implements SCSI
task management functions (with asynchronous cancellation), supports
big-endian hosts, has complete support for migration and follows the
QEMU coding standards much more closely.
To write the driver, I first split Don's patch in two parts, with
the configuration bits in one file and the rest in a separate file.
I first left mptconfig.c in place and rewrote the rest, then deleted
mptconfig.c as well. The configuration pages are still based mostly on
VirtualBox's, though not exactly the same. However, the implementation
is completely different. The contents of the pages themselves should
not be copyrightable.
Signed-off-by: Don Slutz <Don@CloudSwitch.com>
Message-Id: <1347382813-5662-1-git-send-email-Don@CloudSwitch.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
The PCI spec recommends devices use additional alignment for MSI-X
data structures to allow software to map them to separate processor
pages. One advantage of doing this is that we can emulate those data
structures without a significant performance impact to the operation
of the device. Some devices fail to implement that suggestion and
assigned device performance suffers.
One such case of this is a Mellanox MT27500 series, ConnectX-3 VF,
where the MSI-X vector table and PBA are aligned on separate 4K
pages. If PBA emulation is enabled, performance suffers. It's not
clear how much value we get from PBA emulation, but the solution here
is to only lazily enable the emulated PBA when a masked MSI-X vector
fires. We then attempt to more aggressively disable the PBA memory
region any time a vector is unmasked. The expectation is then that
a typical VM will run entirely with PBA emulation disabled, and only
when used is that emulation re-enabled.
Reported-by: Shyam Kaushik <shyam.kaushik@gmail.com>
Tested-by: Shyam Kaushik <shyam.kaushik@gmail.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
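A sketch of that policy; the region pointer and the predicate arguments are
placeholders, not the actual vfio-pci field names:

    #include "qemu/osdep.h"
    #include "exec/memory.h"

    /* Enable PBA emulation only while the guest might legitimately poll it. */
    static void demo_pba_update(MemoryRegion *pba_mr, bool masked_vector_fired,
                                bool any_vector_masked)
    {
        if (masked_vector_fired) {
            memory_region_set_enabled(pba_mr, true);    /* slow path, emulated */
        } else if (!any_vector_masked) {
            memory_region_set_enabled(pba_mr, false);   /* back to direct access */
        }
    }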
|
|
Commit c8ee0a4 introduced new events containing PRIx64 constants without
including the % prefix in the preceding string. This results in a compile
error during build if --enable-trace-backends is passed to configure.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-id: 1450566522-6003-1-git-send-email-mark.cave-ayland@ilande.co.uk
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
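The failure mode, spelled out on an illustrative format string: without the
'%', PRIx64 only appends a length-modifier string and leaves no conversion
specifier at all.

    broken:  "flushing page at 0x" PRIx64    (expands to "flushing page at 0x" "llx", no conversion specifier)
    fixed:   "flushing page at 0x%" PRIx64   (expands to "flushing page at 0x%" "llx")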
|
|
Some functions were moved from block.c to block/io.c, so the trace-events
file should reflect that change.
Signed-off-by: Qinghua Jin <qhjin_dev@163.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Add a QIOChannel subclass that is capable of performing I/O
to/from a separate process, via a pair of pipes. The command
can be used for unidirectional or bi-directional I/O.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
|
|
Add a QIOChannel subclass that can run the websocket protocol over
the top of another QIOChannel instance. This initial implementation
is only capable of acting as a websockets server. There is no support
for acting as a websockets client yet.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
|
|
Add a QIOChannel subclass that can run the TLS protocol over
the top of another QIOChannel instance. The object provides a
simplified API to perform the handshake when starting the TLS
session. The layering of TLS over the underlying channel does
not have to be setup immediately. It is possible to take an
existing QIOChannel that has done some handshake and then swap
in the QIOChannelTLS layer. This allows for use with protocols
which start TLS right away, and those which start plain text
and then negotiate TLS.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
|
|
Add a QIOChannel subclass that is capable of operating on things
that are files, such as plain files, pipes, character/block
devices, but notably not sockets.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
|
|
Implement a QIOChannel subclass that supports sockets I/O.
The implementation is able to manage a single socket file
descriptor, whether a TCP/UNIX listener, TCP/UNIX connection,
or a UDP datagram. It provides APIs which can listen and
connect either asynchronously or synchronously. Since there
is no asynchronous DNS lookup API available, it uses the
QIOTask helper for spawning a background thread to ensure
non-blocking operation.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
|
|
A number of I/O operations need to be performed asynchronously
to avoid blocking the main loop. The caller of such APIs needs
to provide a callback to be invoked on completion/error and
needs access to the error, if any. The small QIOTask class provides
a simple framework for dealing with such tasks. The API
docs inline provide an outline of how this is to be used.
Some functions don't have the ability to run asynchronously
(eg getaddrinfo always blocks), so to facilitate their use,
the task class provides a mechanism to run a blocking
function in a thread, while triggering the completion
callback in the main event loop thread. This easily allows
any synchronous function to be made asynchronous, albeit
at the cost of spawning a thread.
In this series, the QIOTask class will be used for things like
the TLS handshake, the websockets handshake and TCP connect()
progress.
The concept of QIOTask is inspired by the GAsyncResult
interface / GTask class in the GIO libraries. The min
version requirements on glib don't allow those to be
used from QEMU, so QIOTask provides a facsimile which
can be easily switched to GTask in the future if the
min version is increased.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
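The underlying pattern, sketched with plain glib primitives rather than the
QIOTask API itself (all names below are invented for illustration): run the
blocking function in a worker thread, then deliver the completion callback
back in the main loop via an idle source.

    #include <glib.h>

    typedef struct {
        void (*blocking_fn)(void *opaque);  /* e.g. something wrapping getaddrinfo() */
        void (*complete_cb)(void *opaque);
        void *opaque;
    } DemoTask;

    static gboolean demo_task_complete(gpointer data)
    {
        DemoTask *t = data;

        t->complete_cb(t->opaque);          /* runs in the main loop thread */
        g_free(t);
        return G_SOURCE_REMOVE;
    }

    static gpointer demo_task_worker(gpointer data)
    {
        DemoTask *t = data;

        t->blocking_fn(t->opaque);          /* blocks, but only this thread */
        g_idle_add(demo_task_complete, t);  /* hop back to the main context */
        return NULL;
    }

    static void demo_task_run(DemoTask *t)
    {
        g_thread_unref(g_thread_new("demo-task", demo_task_worker, t));
    }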
|
|
"Unimplemented" messages go to stderr, everything else goes to tracepoints
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Merge remote-tracking branch 'remotes/kraxel/tags/pull-fw-cfg-20151217-1' into staging
fw_cfg: doc updates, various optimizations.
# gpg: Signature made Thu 17 Dec 2015 08:59:32 GMT using RSA key ID D3E87138
# gpg: Good signature from "Gerd Hoffmann (work) <kraxel@redhat.com>"
# gpg: aka "Gerd Hoffmann <gerd@kraxel.org>"
# gpg: aka "Gerd Hoffmann (private) <kraxel@gmail.com>"
* remotes/kraxel/tags/pull-fw-cfg-20151217-1:
fw_cfg: replace ioport data read with generic method
fw_cfg: add generic non-DMA read method
fw_cfg: avoid calculating invalid current entry pointer
fw_cfg: remove offset argument from callback prototype
fw_cfg: amend callback behavior spec to once per select
fw_cfg: move internal function call docs to header file
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Introduce fw_cfg_data_read(), a generic read method which works
on all access widths (1 through 8 bytes, inclusive), and can be
used during both IOPort and MMIO read accesses.
To maintain legibility, only fw_cfg_data_mem_read() (the MMIO
data read method) is replaced by this patch. The new method
essentially unwinds the fw_cfg_data_mem_read() + fw_cfg_read()
combo, but without unnecessarily repeating all the validity
checks performed by the latter on each byte being read.
This patch also modifies the trace_fw_cfg_read prototype to
accept a 64-bit value argument, allowing it to work properly
with the new read method, but also remain backward compatible
with existing call sites.
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Marc Marí <markmb@redhat.com>
Signed-off-by: Gabriel Somlo <somlo@cmu.edu>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Message-id: 1446733972-1602-6-git-send-email-somlo@cmu.edu
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
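The core idea, sketched independently of the fw_cfg internals; the byte order
and the zero-fill for reads past the end of the data are assumptions of this
sketch, not a statement about the patch:

    #include <stddef.h>
    #include <stdint.h>

    /* Assemble a 1..8 byte value from a byte stream, one byte per loop
     * iteration, padding with zeroes once the data runs out. */
    static uint64_t demo_data_read(const uint8_t *buf, size_t avail, unsigned size)
    {
        uint64_t value = 0;
        unsigned i;

        for (i = 0; i < size; i++) {
            value = (value << 8) | (i < avail ? buf[i] : 0);
        }
        return value;
    }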
|
|
For now, we use inotify watches to track only a small number of
events, namely, add, delete and modify. Note that for delete, the kernel
already deactivates the watch for us and we just need to
take care of modifying our internal state.
inotify is a Linux-only mechanism.
Suggested-by: Gerd Hoffman <kraxel@redhat.com>
Signed-off-by: Bandan Das <bsd@redhat.com>
Message-id: 1448314625-3855-4-git-send-email-bsd@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
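A generic Linux sketch of such a watch set (path, buffer handling and error
checking simplified; this is not the QEMU code itself):

    #include <stdio.h>
    #include <sys/inotify.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        char *p = buf;
        ssize_t len;
        int fd = inotify_init1(IN_CLOEXEC);

        /* "/some/dir" is a placeholder path */
        inotify_add_watch(fd, "/some/dir", IN_CREATE | IN_DELETE | IN_MODIFY);
        len = read(fd, buf, sizeof(buf));       /* blocks until events arrive */

        while (len > 0 && p < buf + len) {
            struct inotify_event *ev = (struct inotify_event *)p;

            if (ev->mask & IN_CREATE) printf("added:    %s\n", ev->len ? ev->name : "");
            if (ev->mask & IN_DELETE) printf("deleted:  %s\n", ev->len ? ev->name : "");
            if (ev->mask & IN_MODIFY) printf("modified: %s\n", ev->len ? ev->name : "");
            p += sizeof(*ev) + ev->len;
        }
        close(fd);
        return 0;
    }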
|
|
To support addition/removal of objects, we will need to update
the object cache hierarchy we have built internally. Convert
to using a QLIST for easier management.
Signed-off-by: Bandan Das <bsd@redhat.com>
Message-id: 1448314625-3855-2-git-send-email-bsd@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
|
|
The assertion problem was noticed in 06c3916b35a, but it wasn't
completely fixed, because even though the req is not marked as
serialising, it still gets serialised by wait_serialising_requests
against other serialising requests, which could lead to the same
assertion failure.
Fix it by even more explicitly skipping the serialising for this
specific case.
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-id: 1448962590-2842-2-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Merge remote-tracking branch 'remotes/kraxel/tags/pull-vnc-20151116-1' into staging
vnc: buffer code improvements, bugfixes.
# gpg: Signature made Mon 16 Nov 2015 17:20:02 GMT using RSA key ID D3E87138
# gpg: Good signature from "Gerd Hoffmann (work) <kraxel@redhat.com>"
# gpg: aka "Gerd Hoffmann <gerd@kraxel.org>"
# gpg: aka "Gerd Hoffmann (private) <kraxel@gmail.com>"
* remotes/kraxel/tags/pull-vnc-20151116-1:
vnc: fix mismerge
buffer: allow a buffer to shrink gracefully
buffer: factor out buffer_adj_size
buffer: factor out buffer_req_size
vnc: recycle empty vs->output buffer
vnc: fix local state init
vnc: only alloc server surface with clients connected
vnc: use vnc_{width,height} in vnc_set_area_dirty
vnc: factor out vnc_update_server_surface
vnc: add vnc_width+vnc_height helpers
vnc: zap dead code
vnc-jobs: move buffer reset, use new buffer move
vnc: kill jobs queue buffer
vnc: attach names to buffers
buffer: add tracing
buffer: add buffer_shrink
buffer: add buffer_move
buffer: add buffer_move_empty
buffer: add buffer_init
buffer: make the Buffer capacity increase in powers of two
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Tweak the end of migration cleanup; we don't want to close stuff down
at the end of the main stream, since the postcopy is still sending pages
on the other thread.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
Prior to servicing userfault requests we must ensure we've not got
huge pages in the area that might include non-transferred memory,
since a hugepage could incorrectly mark the whole huge page as present.
We mark the area as non-huge page (nhp) just before we perform
discards; the discard code now tells us to discard any areas
that haven't been sent (as well as any that are redirtied);
any already formed transparent-huge-pages get fragmented
by this discard process if they contain any discards.
Transparent huge pages that have been entirely transferred
and don't contain any discards are not broken by this mechanism;
they stay as huge pages.
By starting postcopy after a full precopy pass, many of the pages
then stay as huge pages; this is important for maintaining performance
after the end of the migration.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
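In plain Linux terms the two steps described above boil down to something like
the following generic sketch (not the QEMU helper itself):

    #include <stddef.h>
    #include <sys/mman.h>

    /* Forbid huge-page collapse over the range, then discard the pages that
     * must be re-fetched so they fault on first access. */
    static int demo_mark_nhp_and_discard(void *start, size_t len)
    {
        if (madvise(start, len, MADV_NOHUGEPAGE) < 0) {
            return -1;
        }
        return madvise(start, len, MADV_DONTNEED);
    }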
|
|
Wire up more of the handlers for the commands on the destination side,
in particular loadvm_postcopy_handle_run now has enough to start the
guest running.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
The loading of a device state (during postcopy) may access guest
memory that's still on the source machine and thus might need
a page fill; split off a separate thread that handles the incoming
page data so that the original incoming migration code can finish
off the device data.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
userfaultfd is a Linux syscall that returns an fd which delivers a stream
of notifications for accesses to pages registered with it, and allows
the program to acknowledge those stalls and tell the accessing
thread to carry on.
We convert the requests from the kernel into messages back to the
source asking for the pages.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
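A generic sketch of that notification loop; creating the fd, the
UFFDIO_API/UFFDIO_REGISTER handshake and all error handling are reduced to
comments, and the translation back to a page request is only indicated:

    #include <linux/userfaultfd.h>
    #include <stdint.h>
    #include <unistd.h>

    static void demo_fault_loop(int ufd)
    {
        struct uffd_msg msg;

        /* 'ufd' was obtained with syscall(__NR_userfaultfd, O_CLOEXEC) and
         * the guest RAM range registered with the UFFDIO_REGISTER ioctl. */
        while (read(ufd, &msg, sizeof(msg)) == sizeof(msg)) {
            if (msg.event == UFFD_EVENT_PAGEFAULT) {
                uint64_t addr = msg.arg.pagefault.address;
                /* ...map 'addr' back to a RAMBlock/offset and send a page
                 * request to the source; the faulting thread resumes once
                 * the page is later placed with UFFDIO_COPY... */
                (void)addr;
            }
        }
    }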
|
|
In postcopy, the destination guest is running at the same time
as it's receiving pages; as we receive new pages we must put
them into the guest's address space atomically to avoid a running
CPU accessing a partially written page.
Use the helpers in postcopy-ram.c to map these pages.
qemu_get_buffer_in_place is used to avoid a copy out of qemu_file
in the case that postcopy is going to do a copy anyway.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
postcopy_place_page (etc) provide a way for postcopy to place a page
into guest memory atomically (using the copy ioctl on the ufd).
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
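A sketch of that copy ioctl in isolation; 'ufd' is the userfaultfd and both
addresses are assumed page aligned (the function name is invented here):

    #include <linux/userfaultfd.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>

    /* Atomically place one page of data at guest_addr via UFFDIO_COPY. */
    static int demo_place_page(int ufd, void *guest_addr, void *from, size_t pagesize)
    {
        struct uffdio_copy copy;

        memset(&copy, 0, sizeof(copy));
        copy.dst = (uint64_t)(uintptr_t)guest_addr;
        copy.src = (uint64_t)(uintptr_t)from;
        copy.len = pagesize;
        return ioctl(ufd, UFFDIO_COPY, &copy);  /* 0 on success */
    }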
|
|
When transmitting RAM pages, consume pages that have been queued by
MIG_RPCOMM_REQPAGE commands and send them ahead of normal page scanning.
Note:
a) After a queued page the linear walk carries on from after the
unqueued page; there is a reasonable chance that the destination
was about to ask for other nearby pages anyway.
b) We have to be careful of any assumptions that the page walking
code makes; in particular it takes some shortcuts on its first linear
walk that break as soon as we do a queued page.
c) We have to be careful to not break up host-page size chunks, since
this makes it harder to place the pages on the destination.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
On receiving MIG_RPCOMM_REQ_PAGES look up the address and
queue the page.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
Add MIG_RP_MSG_REQ_PAGES command on Return path for the postcopy
destination to request a page from the source.
Two versions exist:
MIG_RP_MSG_REQ_PAGES_ID that includes a RAMBlock name and start/len
MIG_RP_MSG_REQ_PAGES that just has start/len for use with the same
RAMBlock as a previous MIG_RP_MSG_REQ_PAGES_ID
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
The end of migration in postcopy is a bit different since some of
the things normally done at the end of migration have already been
done on the transition to postcopy.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
Rework the migration thread to setup and start postcopy.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
Soon we'll be in either ACTIVE or POSTCOPY_ACTIVE when we
complete migration, and we need to know which we expect to be
in to change state safely.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
'MIGRATION_STATUS_POSTCOPY_ACTIVE' is entered after migrate_start_postcopy.
'migration_in_postcopy' is provided for other sections to know if
they're in postcopy.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
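The helper is essentially a one-line predicate over the migration state; an
assumed shape, for illustration only (not copied from the patch):

    static bool demo_migration_in_postcopy(MigrationState *s)
    {
        return s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE;
    }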
|
|
Modify save_live_pending to return separate postcopiable and
non-postcopiable counts.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
MIG_CMD_PACKAGED is a migration command that wraps a chunk of migration
stream inside a package whose length can be determined purely by reading
its header. The destination guarantees that the whole MIG_CMD_PACKAGED
is read off the stream prior to parsing the contents.
This is used by postcopy to load device state (from the package)
while leaving the main stream free to receive memory pages.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
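On the destination, handling the command reduces to reading the length header
and then the full blob before anything is parsed; a sketch using QEMUFile
accessors, with the buffer-file plumbing left as a comment and the helper
name invented:

    /* Pull a complete MIG_CMD_PACKAGED payload off the wire. */
    static uint8_t *demo_read_packaged(QEMUFile *f, uint32_t *lenp)
    {
        uint32_t length = qemu_get_be32(f);      /* header: package length  */
        uint8_t *package = g_malloc(length);

        qemu_get_buffer(f, package, length);     /* whole package, up front */
        *lenp = length;
        /* The caller wraps 'package' in a buffer-backed QEMUFile and runs
         * the normal loadvm loop over it while the outer stream keeps
         * delivering RAM pages. */
        return package;
    }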
|
|
messages.
The state of the postcopy process is managed via a series of messages;
* Add wrappers and handlers for sending/receiving these messages
* Add a state variable that tracks the current state of postcopy
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
Postcopy needs to have two migration streams loading concurrently;
one from memory (with the device state) and the other from the fd
with the memory transactions.
Split the core of qemu_loadvm_state out so we can use it for both.
Allow the inner loadvm loop to quit and cause the parent loops to
exit as well.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|
|
Open a return path, and handle messages that are received upon it.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
|