Age | Commit message | Author |
|
Split the bits that require it to exec/log.h.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-id: 1452174932-28657-8-git-send-email-den@openvz.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Clean up includes so that osdep.h is included first and headers
which it implies are not included manually.
This commit was created with scripts/clean-includes.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1453832250-766-11-git-send-email-peter.maydell@linaro.org
|
|
Merge remote-tracking branch 'remotes/ehabkost/tags/x86-pull-request' into staging
X86 queue, 2016-01-21
# gpg: Signature made Thu 21 Jan 2016 15:08:40 GMT using RSA key ID 984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
* remotes/ehabkost/tags/x86-pull-request:
target-i386: Add PKU and OSPKE support
target-i386: Add support to migrate vcpu's TSC rate
target-i386: Reorganize TSC rate setting code
target-i386: Fallback vcpu's TSC rate to value returned by KVM
target-i386: Add suffixes to MMReg struct fields
target-i386: Define MMREG_UNION macro
target-i386: Define MMXReg._d field
target-i386: Rename XMM_[BWLSDQ] helpers to ZMM_*
target-i386: Rename struct XMMReg to ZMMReg
target-i386: Use a _q array on MMXReg too
target-i386/ops_sse.h: Use MMX_Q macro
target-i386: Rename optimize_flags_init()
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Add PKU and OSPKE CPUID features, including xsave state and
migration support.
Signed-off-by: Huaitong Han <huaitong.han@intel.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
[ehabkost: squashed 3 patches together, edited patch description]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
This patch enables migrating vcpu's TSC rate. If KVM on the
destination machine supports TSC scaling, guest programs will
observe a consistent TSC rate across the migration.
If TSC scaling is not supported on the destination machine, the
migration will not be aborted and QEMU on the destination will
not set vcpu's TSC rate to the migrated value.
If vcpu's TSC rate specified by CPU option 'tsc-freq' on the
destination machine is inconsistent with the migrated TSC rate,
the migration will be aborted.
For backwards compatibility, the migration of vcpu's TSC rate is
disabled on pc-*-2.5 and older machine types.
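A minimal sketch of the destination-side policy described above (the
identifiers are illustrative placeholders, not the actual QEMU code):

    #include <stdbool.h>
    #include <stdio.h>

    /* user_tsc_khz: from -cpu ...,tsc-freq= (0 if unset)
     * migrated_tsc_khz: value carried in the migration stream
     * have_tsc_scaling: destination KVM supports TSC scaling */
    static int apply_migrated_tsc(long user_tsc_khz, long migrated_tsc_khz,
                                  bool have_tsc_scaling)
    {
        if (user_tsc_khz && migrated_tsc_khz &&
            user_tsc_khz != migrated_tsc_khz) {
            fprintf(stderr, "tsc-freq mismatch: %ld kHz vs migrated %ld kHz\n",
                    user_tsc_khz, migrated_tsc_khz);
            return -1;               /* abort the migration */
        }
        if (!have_tsc_scaling) {
            return 0;                /* continue, but keep the host TSC rate */
        }
        /* ...program migrated_tsc_khz into the vcpu via KVM_SET_TSC_KHZ... */
        return 0;
    }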
Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
[ehabkost: Rewrote comment at kvm_arch_put_registers()]
[ehabkost: Moved compat code to pc-2.5]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
The following changes are made to the TSC rate setting code in
kvm_arch_init_vcpu():
* The code is moved to a new function, kvm_arch_set_tsc_khz().
* kvm_arch_init_vcpu() now fails if kvm_arch_set_tsc_khz() fails,
i.e. if both of the following conditions are satisfied:
* KVM does not support TSC scaling, or it fails to set the vcpu's
TSC rate via KVM_SET_TSC_KHZ, and
* the TSC rate to be set differs from the value currently used
by KVM.
Previously,
* the lack of TSC scaling never failed kvm_arch_init_vcpu(), and
* a failure of KVM_SET_TSC_KHZ failed kvm_arch_init_vcpu()
unconditionally, even when the TSC rate to be set was identical
to the value currently used by KVM.
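A minimal sketch of that policy (raw ioctls on a KVM vcpu fd; QEMU
itself wraps these in kvm_vcpu_ioctl() and checks KVM_CAP_TSC_CONTROL
via kvm_check_extension()):

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    /* Returns 0 on success; a negative value makes vcpu init fail. */
    static int set_tsc_khz(int vcpu_fd, int tsc_khz, int have_tsc_control)
    {
        int r = have_tsc_control ? ioctl(vcpu_fd, KVM_SET_TSC_KHZ, tsc_khz) : -1;

        if (r < 0) {
            /* Setting failed or is unsupported: accept the request only
             * if KVM already runs at the desired rate. */
            int cur = ioctl(vcpu_fd, KVM_GET_TSC_KHZ, 0);
            if (cur <= 0 || cur != tsc_khz) {
                return -1;
            }
        }
        return 0;
    }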
Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
If no user-specified TSC rate is present, we will try to set
env->tsc_khz to the value returned by KVM_GET_TSC_KHZ. This patch
does not change the current functionality of QEMU and just
prepares for later patches to enable migrating vcpu's TSC rate.
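A minimal sketch of that preparation step, using a raw KVM_GET_TSC_KHZ
ioctl (QEMU wraps this in kvm_vcpu_ioctl(); env_tsc_khz stands in for
env->tsc_khz):

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    static void init_tsc_khz(int vcpu_fd, int *env_tsc_khz)
    {
        if (*env_tsc_khz == 0) {                        /* no user-specified tsc-freq */
            int r = ioctl(vcpu_fd, KVM_GET_TSC_KHZ, 0); /* rate in kHz, or < 0 on error */
            *env_tsc_khz = r > 0 ? r : 0;
        }
    }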
Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
This will ensure we never use the MMX_* and ZMM_* macros with the
wrong struct type.
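A sketch of the idea, combined with the MMREG_UNION macro defined in a
companion patch below (the real definitions live in target-i386/cpu.h
and also carry float fields and big-endian index handling):

    #include <stdint.h>

    #define MMREG_UNION(n, bits)            \
        union n {                           \
            uint8_t  _b_##n[(bits) / 8];    \
            uint16_t _w_##n[(bits) / 16];   \
            uint32_t _l_##n[(bits) / 32];   \
            uint64_t _q_##n[(bits) / 64];   \
        }

    typedef MMREG_UNION(ZMMReg, 512) ZMMReg;  /* 512-bit vector register */
    typedef MMREG_UNION(MMXReg, 64)  MMXReg;  /* 64-bit MMX register */

    /* Because the struct name is part of every field name, using a
     * ZMM_* accessor on an MMXReg (or vice versa) fails to compile. */
    #define ZMM_B(n) _b_ZMMReg[n]
    #define MMX_B(n) _b_MMXReg[n]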
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
This will simplify the definitions of ZMMReg and MMXReg.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
Add a new field and reorder MMXReg fields, to make MMXReg and
ZMMReg field lists look the same (except for the array sizes).
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
They are helpers for the ZMMReg fields, so name them accordingly.
This is just a global search+replace, no other changes are being
introduced.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
The struct represents a 512-bit register, so name it accordingly.
This is just a global search+replace, no other changes are being
introduced.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
Make MMXReg use the same field names used on XMMReg, so we can
try to reuse macros and other code later.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
We have a MMX_Q macro in addition to MMX_{B,W,L}. Use it.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
Rename the function so that the reason for its existence is
clearer: it does x86-specific initialization of TCG structures.
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
Allow multiple calls to cpu_address_space_init(); each
call adds an entry to the cpu->ases array at the specified
index. It is up to the target-specific CPU code to actually use
these extra address spaces.
Since this multiple AddressSpace support won't work with
KVM, add an assertion to avoid confusing failures.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
|
|
Rather than setting cpu->as unconditionally in cpu_exec_init
(and then having target-i386 override this later), don't set
it until the first call to cpu_address_space_init.
This requires us to initialise the address space for
both TCG and KVM (KVM doesn't need the AS listener but
it does require cpu->as to be set).
For target CPUs which don't set up any address spaces (currently
everything except i386), add the default address_space_memory
in qemu_init_vcpu().
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
|
|
x86_cpu_handle_mmu_fault is currently checking twice for writability
and executability of pages; the first time to decide whether to
trigger a page fault, the second time to compute the "prot" argument
to tlb_set_page_with_attrs.
Reorganize the code so that "prot" is computed first, then used to
check whether to raise a page fault, and finally PROT_WRITE is
removed if the D bit will have to be set.
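A hedged sketch of the new ordering (placeholder names and flags, not
the actual QEMU code):

    #include <stdbool.h>

    enum { PROT_R = 1, PROT_W = 2, PROT_X = 4 };

    static int handle_fault(bool pte_writable, bool pte_executable,
                            bool is_write, bool is_exec, bool dirty_clear,
                            int *prot_out)
    {
        int prot = PROT_R;
        if (pte_writable)   prot |= PROT_W;
        if (pte_executable) prot |= PROT_X;

        /* 1. decide whether to raise #PF from the computed protection */
        if ((is_write && !(prot & PROT_W)) || (is_exec && !(prot & PROT_X))) {
            return -1;                                   /* page fault */
        }
        /* 2. only afterwards restrict the TLB entry: while D is clear, a
         * later write must fault again so the D bit can be set */
        if (dirty_clear && !is_write) {
            prot &= ~PROT_W;
        }
        *prot_out = prot;  /* would be passed to tlb_set_page_with_attrs() */
        return 0;
    }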
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
This commit fixes migration of a QEMU/KVM guest from kernel >= v3.9 to
kernel <= v3.7 (e.g. from RHEL 7 to RHEL 6). Without this commit a guest
migrated across these kernel versions fails to resume on the target host
as its segment descriptors are invalid.
Two separate kernel commits combined to result in this bug:
commit f0495f9b9992f80f82b14306946444b287193390
Author: Avi Kivity <avi@redhat.com>
Date: Thu Jun 7 17:06:10 2012 +0300
KVM: VMX: Relax check on unusable segment
Some userspace (e.g. QEMU 1.1) munge the d and g bits of segment
descriptors, causing us not to recognize them as unusable segments
with emulate_invalid_guest_state=1. Relax the check by testing for
segment not present (a non-present segment cannot be usable).
Signed-off-by: Avi Kivity <avi@redhat.com>
commit 25391454e73e3156202264eb3c473825afe4bc94
Author: Gleb Natapov <gleb@redhat.com>
Date: Mon Jan 21 15:36:46 2013 +0200
KVM: VMX: don't clobber segment AR of unusable segments.
Usability is returned in unusable field, so not need to clobber entire
AR. Callers have to know how to deal with unusable segments already
since if emulate_invalid_guest_state=true AR is not zeroed.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
The first commit changed the KVM_SET_SREGS ioctl so that it did not treat
segment flags == 0 as an unusable segment, instead only looking at the
"present" flag.
The second commit changed KVM_GET_SREGS so that it did not clear the
flags of an unusable segment.
Since QEMU does not itself maintain the "unusable" flag across a
migration, the end result is that unusable segments read from a kernel
with these commits and loaded into a kernel without these commits are
not properly recognised as being unusable.
This commit updates both get_seg and set_seg so that the problem is
avoided even when migrating to or migrating from a QEMU without this
commit. In get_seg, we clear the segment flags if the segment is marked
unusable. In set_seg, we mark the segment unusable if the segment's
"present" flag is not set.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
Message-Id: <1449464047-17467-1-git-send-email-mike@very.puzzling.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
This patch adds support for split IRQ chip mode. When
KVM_CAP_SPLIT_IRQCHIP is enabled:
1.) The PIC, PIT, and IOAPIC are implemented in userspace while
the LAPIC is implemented by KVM.
2.) The software IOAPIC delivers interrupts to the KVM LAPIC via
kvm_set_irq. Interrupt delivery is configured via the MSI routing
table, for which routes are reserved in target-i386/kvm.c and then
configured in hw/intc/ioapic.c.
3.) KVM delivers IOAPIC EOIs via a new exit, KVM_EXIT_IOAPIC_EOI,
which is handled in target-i386/kvm.c and relayed to the software
IOAPIC via ioapic_eoi_broadcast (see the sketch below).
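A sketch of how point 3 is handled in the vcpu exit loop (the real
code lives in kvm_arch_handle_exit(); assumes a linux/kvm.h recent
enough to define KVM_EXIT_IOAPIC_EOI):

    #include <linux/kvm.h>

    void ioapic_eoi_broadcast(int vector);  /* userspace IOAPIC model */

    static int handle_exit(struct kvm_run *run)
    {
        switch (run->exit_reason) {
        case KVM_EXIT_IOAPIC_EOI:
            /* KVM's LAPIC saw an EOI for a level-triggered interrupt;
             * forward it to the userspace IOAPIC. */
            ioapic_eoi_broadcast(run->eoi.vector);
            return 0;
        default:
            return -1;   /* not handled here */
        }
    }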
Signed-off-by: Matt Gingell <gingell@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Hyper-V SynIC timers are host timers that the guest configures
through the corresponding MSRs (HV_X64_MSR_STIMER*). The guest sets
them up and uses the host-fired events (a SynIC interrupt and the
corresponding timer expiration message) as guest clock events.
The state of the Hyper-V SynIC timers is stored in the corresponding
MSRs. This patch series implements support for these MSRs and their
migration.
Signed-off-by: Andrey Smetanin <asmetanin@virtuozzo.com>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Richard Henderson <rth@twiddle.net>
CC: Eduardo Habkost <ehabkost@redhat.com>
CC: "Andreas Färber" <afaerber@suse.de>
CC: Marcelo Tosatti <mtosatti@redhat.com>
CC: Denis V. Lunev <den@openvz.org>
CC: Roman Kagan <rkagan@virtuozzo.com>
CC: kvm@vger.kernel.org
Message-Id: <1448464885-8300-3-git-send-email-asmetanin@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Add Hyper-V SynIC (synthetic interrupt controller) helpers for
Hyper-V SynIC irq routing setup, irq injection, irq ack
notifications, and event/message page change tracking, for future use.
Signed-off-by: Andrey Smetanin <asmetanin@virtuozzo.com>
Reviewed-by: Roman Kagan <rkagan@virtuozzo.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Richard Henderson <rth@twiddle.net>
CC: Eduardo Habkost <ehabkost@redhat.com>
CC: "Andreas Färber" <afaerber@suse.de>
CC: Marcelo Tosatti <mtosatti@redhat.com>
CC: Roman Kagan <rkagan@virtuozzo.com>
CC: Denis V. Lunev <den@openvz.org>
CC: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
This patch adds support for the Hyper-V Synthetic Interrupt
Controller (Hyper-V SynIC) MSRs and their migration. Hyper-V SynIC
is enabled by the CPU's 'hv-synic' option.
This patch disallows CPU creation if the 'hv-synic' option is
specified but the kernel doesn't support Hyper-V SynIC.
Changes in v3:
* removed 'msr_hv_synic_version' migration because its value is
always the same
* moved SynIC MSR initialization into kvm_arch_init_vcpu
Signed-off-by: Andrey Smetanin <asmetanin@virtuozzo.com>
Reviewed-by: Roman Kagan <rkagan@virtuozzo.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Richard Henderson <rth@twiddle.net>
CC: Eduardo Habkost <ehabkost@redhat.com>
CC: "Andreas Färber" <afaerber@suse.de>
CC: Marcelo Tosatti <mtosatti@redhat.com>
CC: Roman Kagan <rkagan@virtuozzo.com>
CC: Denis V. Lunev <den@openvz.org>
CC: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Instead of silently clearing mcg_cap bits when the host doesn't
support them, print a warning when doing that.
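A minimal sketch of the check (placeholder names; QEMU itself reports
this through error_report(), without a trailing newline):

    #include <inttypes.h>
    #include <stdio.h>

    static uint64_t filter_mcg_cap(uint64_t wanted, uint64_t host_supported)
    {
        uint64_t unsupported = wanted & ~host_supported;

        if (unsupported) {
            fprintf(stderr, "warning: Unsupported MCG_CAP bits: 0x%" PRIx64 "\n",
                    unsupported);
        }
        return wanted & host_supported;  /* bits are still cleared, but loudly */
    }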
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
[Avoid \n at end of error_report. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1448471956-66873-10-git-send-email-pbonzini@redhat.com>
|
|
When setting up MCE, instead of using the MCE_*_DEF macros
directly, just filter the existing env->mcg_cap value.
As env->mcg_cap is already initialized as
MCE_CAP_DEF|MCE_BANKS_DEF at target-i386/cpu.c:mce_init(), this
doesn't change any behavior. But it will allow us to change
mce_init() in the future, to implement different defaults
depending on CPU model, machine-type or command-line parameters.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1448471956-66873-9-git-send-email-pbonzini@redhat.com>
|
|
Instead of silently changing the number of banks in mcg_cap based
on kvm_get_mce_cap_supported(), abort initialization if the host
doesn't support MCE_BANKS_DEF banks.
Note that MCE_BANKS_DEF has always been 10 since it was introduced in
QEMU, and Linux has always returned 32 for KVM_CAP_MCE since
KVM_CAP_MCE was introduced, so no behavior is being changed and
the error can't be triggered by any Linux version. The point of
the new check is to ensure we won't silently change the bank
count if we change MCE_BANKS_DEF or make the bank count
configurable in the future.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
[Avoid Yoda condition and \n at end of error_report. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1448471956-66873-8-git-send-email-pbonzini@redhat.com>
|
|
KVM can't virtualize rdtscp on AMD CPUs yet, so there's no point
in enabling it by default on AMD CPU models, as all we are
getting are confused users because of the "host doesn't support
requested feature" warnings.
Disable rdtscp on Opteron_G* models, but keep compatibility on
pc-*-2.4 and older (just in case there are people doing funny
stuff using AMD CPU models on Intel hosts).
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
The Intel specification clearly indicates that the low part
of the result is written first and the high part of the result
is written second; thus if ModRM:reg and VEX.vvvv are identical,
the final result should be the high part of the result.
At present, TCG may either produce incorrect results or crash
with --enable-checking.
Reported-by: Toni Nedialkov <farmdve@gmail.com>
Reported-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
Now these instructions are handled by TCG and can be added to the
TCG_7_0_EBX_FEATURES macro.
Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
Detect the clflushopt and pcommit instructions and check their
corresponding feature flags, instead of checking CPUID_SSE and
CPUID_CLFLUSH.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
Accept the clwb instruction (66 0F AE /6) if its corresponding feature
flag is enabled on CPUID[7].
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
These instructions are used by NVDIMM drivers and the specification is
located at:
https://software.intel.com/sites/default/files/managed/0d/53/319433-022.pdf
These instructions are available on Skylake Server.
Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
POPCNT is not available on Penryn and older and on Opteron_G2 and older,
and we want to make the default CPU runnable in most hosts, so it won't
be enabled by default in KVM mode.
We should eventually have all features supported by TCG enabled by
default in TCG mode, but as we don't have a good mechanism today to
ensure we have different defaults in KVM and TCG mode, disable POPCNT in
the qemu64 and qemu32 CPU models entirely.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
ABM is not available on Sandy Bridge and older, and we want to make the
default CPU runnable in most hosts, so it won't be enabled by default in
KVM mode.
We should eventually have all features supported by TCG enabled by
default in TCG mode, but as we don't have a good mechanism today to
ensure we have different defaults in KVM and TCG mode, disable ABM in
the qemu64 CPU model entirely.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
SSE4a is not available in any Intel CPU, and we want to make the default
CPU runnable in most hosts, so it doesn't make sense to enable it by
default in KVM mode.
We should eventually have all features supported by TCG enabled by
default in TCG mode, but as we don't have a good mechanism today to
ensure we have different defaults in KVM and TCG mode, disable SSE4a in
the qemu64 CPU model entirely.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
Commit 317b0a6d8 fixed an issue caused by an outdated env->tsc value,
but the fix led to 'cpu_synchronize_all_states()' being called twice
during live migration. 'cpu_synchronize_all_states()' takes about
130us for a VM with 4 vcpus, which is a bit expensive.
Synchronizing the whole CPU context just to update env->tsc is
wasteful, so this patch uses a new function to update env->tsc.
Compared to 'cpu_synchronize_all_states()', it only takes about 20us.
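A hedged sketch of reading only the TSC MSR through KVM_GET_MSRS
instead of synchronizing all vcpu state (QEMU wraps the ioctl and runs
this on each vcpu thread; the wrapper struct mirrors QEMU's usual
kvm_msrs pattern):

    #include <linux/kvm.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>

    #define MSR_IA32_TSC 0x10   /* architectural TSC MSR */

    static int read_guest_tsc(int vcpu_fd, uint64_t *tsc)
    {
        struct {
            struct kvm_msrs info;
            struct kvm_msr_entry entries[1];
        } msr_data;

        memset(&msr_data, 0, sizeof(msr_data));
        msr_data.info.nmsrs = 1;
        msr_data.entries[0].index = MSR_IA32_TSC;

        if (ioctl(vcpu_fd, KVM_GET_MSRS, &msr_data) < 0) {
            return -1;
        }
        *tsc = msr_data.entries[0].data;  /* only the TSC, nothing else */
        return 0;
    }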
Signed-off-by: Liang Li <liang.z.li@intel.com>
Message-Id: <1446695464-27116-2-git-send-email-liang.z.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
This makes the purpose of the function clearer: it is not about the
version of QEMU that's running, but the version string exposed in the
emulated hardware.
Cc: Andrzej Zaborowski <balrogg@gmail.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: John Snow <jsnow@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <1446233769-7892-3-git-send-email-ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
In this mode, referring to an invalid element of the source forces
the result to false (table 4-7, last column), but referring to an
invalid element of the destination forces the result to true, so the
outer loop should still be run even if some elements of the
destination are invalid. They are avoided in the inner loop, which
correctly bounds "i" to validd, but they still contribute to a
positive outcome of the search.
This fixes tst_strstr in glibc 2.17.
Reported-by: Florian Weimer <fweimer@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Some targets already had this within their logic, but make sure
it's present for all targets.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
|
|
QEMU's current default behavior is to silently disable features that
are not supported by the host when a CPU model is requested on the
command line. This means that, in addition to risking breaking guest
ABI by default, we are silent about it.
I would like to enable "enforce" by default, but this can easily break
existing production systems because of the way libvirt makes
assumptions about CPU models today (this will change in the future,
once QEMU provides a proper interface for checking whether a CPU model
is runnable).
But there's no reason we should be silent about it. So, change
target-i386 to enable "check" mode by default so that at least we have
a warning printed to stderr (and hopefully logged somewhere) when QEMU
disables a feature that is not supported by the host system.
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
Left shift of negative values is undefined behavior. Detected by clang:
qemu/target-i386/translate.c:2423:26: runtime error:
left shift of negative value -8
This changes the code to reverse the sign after the left shift.
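A tiny example of the pattern behind the fix:

    #include <stdio.h>

    int main(void)
    {
        int ot = 1;
        /* int bad = -8 << ot;     undefined: left shift of a negative value */
        int good = -(8 << ot);  /* well defined: shift first, then negate */
        printf("%d\n", good);   /* prints -16 */
        return 0;
    }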
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
Fix undefined behavior detected by clang runtime check:
qemu/target-i386/cpu.c:1494:15: runtime error:
left shift of 1 by 31 places cannot be represented in type 'int'
While doing that, add extra parentheses for clarity.
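A tiny example of the pattern behind the fix:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int bit = 31;
        /* uint32_t bad = 1 << bit;      undefined: signed int overflows at bit 31 */
        uint32_t good = (1U << bit);  /* unsigned literal, well defined */
        printf("0x%" PRIx32 "\n", good);  /* prints 80000000 */
        return 0;
    }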
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
Now that DE is supported by TCG, it can be enabled in the CPUID bits.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
Bits 4-11 and 16-31 of DR6 are documented as always 1, so ensure they
can't be cleared by software.
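A minimal sketch of the invariant (the mask value follows from the
bits named above; the actual QEMU constant and helper names may differ):

    #include <stdint.h>

    #define DR6_FIXED_1 0xffff0ff0u   /* bits 16-31 and 4-11 read as 1 */

    static uint32_t write_dr6(uint32_t new_val)
    {
        return new_val | DR6_FIXED_1;   /* software can never clear these bits */
    }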
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
Introduce helper_get_dr so that we don't have to put CR4[DE]
into the scarce HFLAGS resource. At the same time, rename
helper_movl_drN_T0 to helper_set_dr and set the helper flags.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
If the debug register is not enabled, we need
do nothing besides update the register.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
They're only used from bpt_helper.c now.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
Bit 10 of DR7 is documented as always set to 1, so ensure that's
always the case.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|
|
Before the last patch, we had an efficient loop that disabled
local breakpoints on task switch. Re-add that, but in a more
general way that handles changes to the global enable bits too.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
|