author     Peter Xu <peterx@redhat.com>           2019-06-03 14:50:56 +0800
committer  Juan Quintela <quintela@redhat.com>    2019-07-15 15:39:03 +0200
commit     002cad6b16ca01d8dc8160038e139b47e5ca557e (patch)
tree       edf2fe1449cd7a6ab57d50530f0cc1a7c6c50b35 /migration/migration.h
parent     ff4aa11419242c835b03d274f08f797c129ed7ba (diff)
migration: Split log_clear() into smaller chunks
Currently we do log_clear() right after log_sync(), which mostly keeps
the old behavior from when log_clear() was still part of log_sync().
This patch further optimizes the migration log_clear() code path by
splitting huge log_clear()s into smaller chunks.
We do this by splitting the whole guest memory region into memory
chunks, whose size is decided by MigrationState.clear_bitmap_shift (an
example is given below).  With that, we no longer clear the dirty
bitmap on the remote side (e.g., KVM) when we fetch it; instead, we
explicitly clear the dirty bitmap of a memory chunk the first time we
send a page from that chunk, as sketched below.
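To make the mechanism concrete, here is a minimal sketch in C of the
lazy per-chunk clearing described above.  It is not the QEMU code:
chunk_clear_log(), chunk_needs_clear, after_log_sync() and
before_send_page() are hypothetical names used only for illustration.

/*
 * Minimal sketch of lazy per-chunk dirty-log clearing (illustration
 * only; the helpers below are hypothetical, not QEMU APIs).
 */
#include <stdbool.h>
#include <stdint.h>

#define CLEAR_SHIFT 18               /* pages per chunk = 1 << CLEAR_SHIFT */

static bool *chunk_needs_clear;      /* one flag per guest memory chunk */

/* Ask the hypervisor (e.g. KVM) to clear the dirty log of one chunk. */
static void chunk_clear_log(uint64_t chunk)
{
    /* would issue something like KVM_CLEAR_DIRTY_LOG for that range */
    (void)chunk;
}

/* After each log_sync(), every chunk's dirty log is pending a clear again. */
static void after_log_sync(uint64_t nr_chunks)
{
    for (uint64_t i = 0; i < nr_chunks; i++) {
        chunk_needs_clear[i] = true;
    }
}

/* Called right before a dirty page is sent to the destination. */
static void before_send_page(uint64_t page)
{
    uint64_t chunk = page >> CLEAR_SHIFT;

    if (chunk_needs_clear[chunk]) {
        /* First page sent from this chunk since the last sync: clear now. */
        chunk_clear_log(chunk);
        chunk_needs_clear[chunk] = false;
    }
}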
Here is an example.

Assume the guest has 64G of memory.  Before this patch, the KVM ioctl
KVM_CLEAR_DIRTY_LOG would be issued once, covering the whole 64G.
After the patch, assuming a clear bitmap shift of 18, the memory chunk
size on x86_64 will be (1UL << 18) * 4K = 1G.  So instead of issuing
one big 64G ioctl, we issue 64 small ioctls, each covering 1G of guest
memory.  Each of those 64 small ioctls is only issued if a page in its
chunk is about to be sent right away.
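The chunk arithmetic above can be double-checked with a few lines of C
(assuming 4K pages and the 64G guest from the example; this is just a
worked calculation, not part of the patch):

#include <stdio.h>

int main(void)
{
    unsigned long long page_size  = 4ULL << 10;                /* 4K pages */
    unsigned long long chunk_size = (1ULL << 18) * page_size;  /* 1G per chunk */
    unsigned long long guest_mem  = 64ULL << 30;                /* 64G guest */

    printf("chunk size: %llu MB\n", chunk_size >> 20);         /* prints 1024 */
    printf("ioctls:     %llu\n", guest_mem / chunk_size);      /* prints 64 */
    return 0;
}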
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190603065056.25211-12-peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Diffstat (limited to 'migration/migration.h')
-rw-r--r-- | migration/migration.h | 27 |
1 file changed, 27 insertions, 0 deletions
diff --git a/migration/migration.h b/migration/migration.h
index 5e8f09c6db..1fdd7b21fd 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -26,6 +26,23 @@ struct PostcopyBlocktimeContext;
 
 #define MIGRATION_RESUME_ACK_VALUE    (1)
 
+/*
+ * 1<<6=64 pages -> 256K chunk when page size is 4K.  This gives us
+ * the benefit that all the chunks are 64 pages aligned then the
+ * bitmaps are always aligned to LONG.
+ */
+#define CLEAR_BITMAP_SHIFT_MIN            6
+/*
+ * 1<<18=256K pages -> 1G chunk when page size is 4K.  This is the
+ * default value to use if no one specified.
+ */
+#define CLEAR_BITMAP_SHIFT_DEFAULT       18
+/*
+ * 1<<31=2G pages -> 8T chunk when page size is 4K.  This should be
+ * big enough and make sure we won't overflow easily.
+ */
+#define CLEAR_BITMAP_SHIFT_MAX           31
+
 /* State for the incoming migration */
 struct MigrationIncomingState {
     QEMUFile *from_src_file;
@@ -232,6 +249,16 @@ struct MigrationState
      * do not trigger spurious decompression errors.
      */
     bool decompress_error_check;
+
+    /*
+     * This decides the size of guest memory chunk that will be used
+     * to track dirty bitmap clearing.  The size of memory chunk will
+     * be GUEST_PAGE_SIZE << N.  Say, N=0 means we will clear dirty
+     * bitmap for each page to send (1<<0=1); N=10 means we will clear
+     * dirty bitmap only once for 1<<10=1K continuous guest pages
+     * (which is in 4M chunk).
+     */
+    uint8_t clear_bitmap_shift;
 };
 
 void migrate_set_state(int *state, int old_state, int new_state);
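As a rough illustration of how the new field and the three bounds above
fit together, a caller could clamp the configured shift and derive the
chunk size in bytes like this (a sketch only; chunk_size_for() is a
hypothetical helper and not part of the patch):

#include <stdint.h>

#define CLEAR_BITMAP_SHIFT_MIN      6
#define CLEAR_BITMAP_SHIFT_DEFAULT  18
#define CLEAR_BITMAP_SHIFT_MAX      31

/* Clamp the configured shift and turn it into a chunk size in bytes. */
static uint64_t chunk_size_for(uint8_t shift, uint64_t page_size)
{
    if (shift < CLEAR_BITMAP_SHIFT_MIN) {
        shift = CLEAR_BITMAP_SHIFT_MIN;
    } else if (shift > CLEAR_BITMAP_SHIFT_MAX) {
        shift = CLEAR_BITMAP_SHIFT_MAX;
    }
    return page_size << shift;   /* e.g. 4K << 18 = 1G, 4K << 31 = 8T */
}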