path: root/target
Age  Commit message  Author
2020-03-17  Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging  (Peter Maydell)

* Bugfixes all over the place
* get/set_uint cleanups (Felipe)
* Lock guard support (Stefan)
* MemoryRegion ownership cleanup (Philippe)
* AVX512 optimization for buffer_is_zero (Robert)

# gpg: Signature made Tue 17 Mar 2020 15:01:54 GMT
# gpg:                using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

* remotes/bonzini/tags/for-upstream: (62 commits)
  hw/arm: Let devices own the MemoryRegion they create
  hw/arm: Remove unnecessary memory_region_set_readonly() on ROM alias
  hw/ppc/ppc405: Use memory_region_init_rom() with read-only regions
  hw/arm/stm32: Use memory_region_init_rom() with read-only regions
  hw/char: Let devices own the MemoryRegion they create
  hw/riscv: Let devices own the MemoryRegion they create
  hw/dma: Let devices own the MemoryRegion they create
  hw/display: Let devices own the MemoryRegion they create
  hw/core: Let devices own the MemoryRegion they create
  scripts/cocci: Patch to let devices own their MemoryRegions
  scripts/cocci: Patch to remove unnecessary memory_region_set_readonly()
  scripts/cocci: Patch to detect potential use of memory_region_init_rom
  hw/sparc: Use memory_region_init_rom() with read-only regions
  hw/sh4: Use memory_region_init_rom() with read-only regions
  hw/riscv: Use memory_region_init_rom() with read-only regions
  hw/ppc: Use memory_region_init_rom() with read-only regions
  hw/pci-host: Use memory_region_init_rom() with read-only regions
  hw/net: Use memory_region_init_rom() with read-only regions
  hw/m68k: Use memory_region_init_rom() with read-only regions
  hw/display: Use memory_region_init_rom() with read-only regions
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-16  target/riscv: Fix VS mode interrupts forwarding.  (Rajnesh Kanwal)

Currently riscv_cpu_local_irq_pending is used to find the pending interrupt, and VS mode interrupts are shifted to represent S mode interrupts in this function. So when the cause returned by this function is passed to riscv_cpu_do_interrupt to actually forward the interrupt, the VS mode forwarding check does not work as intended and the interrupt is actually forwarded to the hypervisor. This patch fixes this issue.

Signed-off-by: Rajnesh Kanwal <rajnesh.kanwal49@gmail.com>
Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
2020-03-16  target/riscv: Correctly implement TSR trap  (Alistair Francis)

As reported in https://bugs.launchpad.net/qemu/+bug/1851939, we weren't correctly handling illegal instructions based on the value of MSTATUS_TSR and the current privilege level. This patch fixes the issue raised in the bug by raising an illegal instruction exception if TSR is set and we are in S-Mode.

Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Jonathan Behrens <jonathan@fintelia.io>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
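A minimal sketch of the check this commit describes, not the actual QEMU code; the constants are stand-ins for the real definitions (in RISC-V, mstatus.TSR is bit 22 and S-mode is privilege level 1):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-ins for the real target/riscv definitions. */
#define PRV_S        1u            /* supervisor privilege level */
#define MSTATUS_TSR  (1u << 22)    /* mstatus.TSR: "Trap SRET" */

/* An SRET executed in S-mode while mstatus.TSR is set must raise an
 * illegal-instruction exception instead of returning, so M-mode
 * software can intercept it. */
bool sret_traps_as_illegal(uint32_t priv, uint64_t mstatus)
{
    return priv == PRV_S && (mstatus & MSTATUS_TSR);
}
```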
2020-03-16  WHPX: Use proper synchronization primitives while processing  (Sunil Muthuswamy)

WHPX wasn't using the proper synchronization primitives while processing async events, which can cause issues with SMP.

Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16  i386: Fix GCC warning with snprintf when HAX is enabled  (Julio Faracco)

When HAX is enabled (--enable-hax), GCC 9.2.1 reports issues with snprintf(). Replacing the old snprintf() calls with g_strdup_printf() fixes the problem with boundary checks of vm_id and vcpu_id and silences the warnings produced by GCC. One example of the warning:

  CC      i386-softmmu/target/i386/hax-posix.o
qemu/target/i386/hax-posix.c: In function ‘hax_host_open_vm’:
qemu/target/i386/hax-posix.c:124:56: error: ‘%02d’ directive output may be truncated writing between 2 and 11 bytes into a region of size 3 [-Werror=format-truncation=]
  124 |     snprintf(name, sizeof HAX_VM_DEVFS, "/dev/hax_vm/vm%02d", vm_id);
      |                                                        ^~~~
qemu/target/i386/hax-posix.c:124:41: note: directive argument in the range [-2147483648, 64]
  124 |     snprintf(name, sizeof HAX_VM_DEVFS, "/dev/hax_vm/vm%02d", vm_id);
      |                                         ^~~~~~~~~~~~~~~~~~~~
In file included from /usr/include/stdio.h:867,
                 from qemu/include/qemu/osdep.h:99,
                 from qemu/target/i386/hax-posix.c:14:
/usr/include/bits/stdio2.h:67:10: note: ‘__builtin___snprintf_chk’ output between 17 and 26 bytes into a destination of size 17
   67 |   return __builtin___snprintf_chk (__s, __n, __USE_FORTIFY_LEVEL - 1,
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   68 |        __bos (__s), __fmt, __va_arg_pack ());
      |        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
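A minimal sketch of the pattern described above (not the actual hax-posix.c code; the buffer size and vm_id value are illustrative): a fixed-size snprintf() buffer that can truncate versus g_strdup_printf(), which allocates exactly the space the formatted string needs.

```c
#include <glib.h>
#include <stdio.h>

int main(void)
{
    int vm_id = 3;

    /* Old pattern: a fixed-size buffer sized to a template string.
     * GCC's -Wformat-truncation warns because a large vm_id needs
     * more digits than the buffer can hold. */
    char fixed[sizeof "/dev/hax_vm/vm00"];
    snprintf(fixed, sizeof fixed, "/dev/hax_vm/vm%02d", vm_id);

    /* New pattern: g_strdup_printf() allocates a buffer of exactly
     * the required size, so truncation cannot happen. */
    char *name = g_strdup_printf("/dev/hax_vm/vm%02d", vm_id);
    printf("%s\n", name);
    g_free(name);

    return 0;
}
```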
2020-03-16  qom/object: Use common get/set uint helpers  (Felipe Franciosi)

Several objects implemented their own uint property getters and setters, despite them being straightforward (without any checks/validations on the values themselves) and identical across objects. This makes use of an enhanced API for object_property_add_uintXX_ptr() which offers default setters.

Some of these setters used to update the value even if the type visit failed (e.g. because the value being set overflowed the given type). The new setter introduces a check for these errors, not updating the value if an error occurred. The error is propagated.

Signed-off-by: Felipe Franciosi <felipe@nutanix.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16  WHPX: Use QEMU values for trapped CPUID  (Sunil Muthuswamy)

Currently, WHPX is using some default values for the trapped CPUID functions. These were not in sync with the QEMU values because the CPUID values were never set with WHPX during VCPU initialization. Additionally, at the moment, WHPX doesn't support setting CPUID values in the hypervisor at runtime (i.e. after the partition has been set up), which would be needed to set the CPUID values in the hypervisor during VCPU init. Until that support arrives, use the QEMU values for the trapped CPUIDs.

Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Message-Id: <SN4PR2101MB0880A8323EAD0CD0E8E2F423C0EB0@SN4PR2101MB0880.namprd21.prod.outlook.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16  WHPX: TSC get and set should be dependent on VM state  (Sunil Muthuswamy)

Currently, TSC is set as part of the VM runtime state. Setting TSC at runtime is heavy and can additionally have side effects on the guest, which is not very resilient to variances in the TSC. This patch uses the VM state to determine whether to set TSC or not, and makes some minor enhancements so that getting TSC values also considers the VM state. Additionally, while setting the TSC, the partition is suspended to reduce the variance in the TSC value across vCPUs.

Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Message-Id: <SN4PR2101MB08804D23439166E81FF151F7C0EA0@SN4PR2101MB0880.namprd21.prod.outlook.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-16  misc: Replace zero-length arrays with flexible array member (manual)  (Philippe Mathieu-Daudé)

Description copied from the Linux kernel commit from Gustavo A. R. Silva (see [3]):

--v-- description start --v--

  The current codebase makes use of the zero-length array language extension to the C90 standard, but the preferred mechanism to declare variable-length types such as these ones is a flexible array member [1], introduced in C99:

  struct foo {
      int stuff;
      struct boo array[];
  };

  By making use of the mechanism above, we will get a compiler warning in case the flexible array does not occur last in the structure, which will help us prevent some kind of undefined behavior bugs from being inadvertently introduced [2] to the Linux codebase from now on.

--^-- description end --^--

Do similar housekeeping in the QEMU codebase (which uses C99 since commit 7be41675f7cb).

All these instances of code were found with the help of the following command (followed by manual analysis, without modifying structures that only have a single flexible array member, such as QEDTable in block/qed.h):

  git grep -F '[0];'

[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=76497732932f
[3] https://git.kernel.org/pub/scm/linux/kernel/git/gustavoars/linux.git/commit/?id=17642a2fbd2c1

Inspired-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
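A short before/after sketch of the conversion described above; the struct and allocator are made up for illustration, not taken from QEMU:

```c
#include <stdlib.h>

/* Before: GNU C zero-length array ("struct hack"). */
struct msg_old {
    size_t len;
    unsigned char data[0];   /* zero-length array extension */
};

/* After: C99 flexible array member.  The compiler can now warn if the
 * array is not the last member, catching a class of layout bugs. */
struct msg_new {
    size_t len;
    unsigned char data[];    /* flexible array member */
};

/* Allocation looks the same either way: the trailing array storage is
 * added to the base size of the struct. */
struct msg_new *msg_alloc(size_t len)
{
    struct msg_new *m = malloc(sizeof(*m) + len);
    if (m) {
        m->len = len;
    }
    return m;
}
```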
2020-03-12  target/arm: kvm: Inject events at the last stage of sync  (Beata Michalska)

KVM_SET_VCPU_EVENTS might actually lead to vcpu registers being modified. As such this should be the last step of sync to avoid potential overwriting of whatever changes KVM might have done.

Signed-off-by: Beata Michalska <beata.michalska@linaro.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Message-id: 20200312003401.29017-2-beata.michalska@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-12  target/arm/kvm: Let kvm_arm_vgic_probe() return a bitmap  (Eric Auger)

Convert kvm_arm_vgic_probe() so that it returns a bitmap of supported in-kernel emulation VGIC versions instead of the maximum version: at the moment the values can be v2 and v3. This makes it possible to expose the case where the host GICv3 also supports GICv2 emulation, which will be useful for choosing the default version in KVM accelerated mode.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200311131618.7187-5-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-12  target/arm: Disable clean_data_tbi for system mode  (Richard Henderson)

We must include the tag in the FAR_ELx register when raising an addressing exception, which means that we should not clear out the tag during translation. We cannot at present comply with this for user mode, so we retain the clean_data_tbi function for the moment, though it no longer does what it says on the tin for system mode. This function is to be replaced with MTE, so don't worry about the slight misnaming.

Buglink: https://bugs.launchpad.net/qemu/+bug/1867072
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200308012946.16303-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-12  target/arm: Check addresses for disabled regimes  (Richard Henderson)

We fail to validate the upper bits of a virtual address on a translation disabled regime, as per AArch64.TranslateAddressS1Off.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200308012946.16303-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-12  target/arm: Fix some comment typos  (Peter Maydell)

Fix a couple of comment typos.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200303174950.3298-5-peter.maydell@linaro.org
2020-03-12  target/arm: Recalculate hflags correctly after writes to CONTROL  (Peter Maydell)

A write to the CONTROL register can change our current EL (by writing to the nPRIV bit). That means that we can't assume that s->current_el is still valid in trans_MSR_v7m() when we try to rebuild the hflags.

Add a new helper rebuild_hflags_m32_newel() which, like the existing rebuild_hflags_a32_newel(), recalculates the current EL from scratch, and use it in trans_MSR_v7m().

This fixes an assertion about an hflags mismatch when the guest changes privilege by writing to CONTROL.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200303174950.3298-4-peter.maydell@linaro.org
2020-03-12  target/arm: Update hflags in trans_CPS_v7m()  (Peter Maydell)

For M-profile CPUs, the FAULTMASK value affects the CPU's MMU index (it changes the NegPri bit). We update the hflags after calls to the v7m_msr helper in trans_MSR_v7m() but forgot to do so in trans_CPS_v7m().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200303174950.3298-3-peter.maydell@linaro.org
2020-03-10  s390x: ipl: Consolidate iplb validity check into one function  (Janosch Frank)

It's nicer to just call one function than calling a function for each possible iplb type.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20200310090950.61172-1-frankja@linux.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2020-03-05  RISC-V: Add a missing "," in riscv_excp_names  (Palmer Dabbelt)

This would almost certainly cause the exception names to be reported incorrectly. Coverity found the issue (CID 1420223). As per Peter's suggestion, I've also added a comma at the end of the list to avoid the issue reappearing in the future.

Fixes: ab67a1d07a ("target/riscv: Add support for the new execption numbers")
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
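An illustration of why a missing comma silently corrupts such a table (the array below is a made-up stand-in, not the real riscv_excp_names): C concatenates adjacent string literals, so two entries fuse into one and every later name shifts to the wrong index.

```c
#include <stdio.h>

static const char *const excp_names_buggy[] = {
    "misaligned_fetch",
    "fault_fetch",
    "illegal_instruction",
    "breakpoint"        /* missing comma here...            */
    "misaligned_load",  /* ...fuses these two string literals */
    "fault_load",       /* trailing comma guards the last entry */
};

int main(void)
{
    /* Index 3 prints "breakpointmisaligned_load" instead of
     * "breakpoint", and "fault_load" has shifted from index 5 to 4,
     * so exception numbers are reported with the wrong names. */
    printf("%s\n", excp_names_buggy[3]);
    return 0;
}
```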
2020-03-05  target/arm: Clean address for DC ZVA  (Richard Henderson)

This data access was forgotten when we added support for cleaning addresses of TBI information.

Fixes: 3a471103ac1823ba
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200302175829.2183-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Use DEF_HELPER_FLAGS for helper_dc_zva  (Richard Henderson)

The function does not write registers, and only reads them by implication via the exception path.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200302175829.2183-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Move helper_dc_zva to helper-a64.c  (Richard Henderson)

This is an aarch64-only function. Move it out of the shared file. This patch is code movement only.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200302175829.2183-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Apply TBI to ESR_ELx in helper_exception_return  (Richard Henderson)

We missed this case within AArch64.ExceptionReturn.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200302175829.2183-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Introduce core_to_aa64_mmu_idx  (Richard Henderson)

If by context we know that we're in AArch64 mode, we need not test for M-profile when reconstructing the full ARMMMUIdx.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200302175829.2183-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Optimize cpu_mmu_index  (Richard Henderson)

We now cache the core mmu_idx in env->hflags. Rather than recompute from scratch, extract the field. All of the uses of cpu_mmu_index within target/arm are within helpers, and env->hflags is always stable within a translation block from whence helpers are called.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200302175829.2183-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Replicate TBI/TBID bits for single range regimes  (Richard Henderson)

Replicate the single TBI bit from TCR_EL2 and TCR_EL3 so that we can unconditionally use pointer bit 55 to index into our composite TBI1:TBI0 field.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200302175829.2183-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
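A rough sketch of the replication idea, not the QEMU implementation (the bit positions follow the ARM ARM: TCR_EL1 has TBI0/TBI1 at bits 37/38, the single-range TCR_EL2/TCR_EL3 have one TBI bit at bit 20; the hflags packing here is invented):

```c
#include <stdint.h>

/* Build a 2-bit TBI1:TBI0 value for every regime, so that pointer
 * bit 55 can always be used to select the relevant bit. */
unsigned composite_tbi(uint64_t tcr, int single_range_regime)
{
    if (single_range_regime) {
        /* TCR_EL2/TCR_EL3: one TBI bit -- replicate into both slots. */
        return (tcr >> 20) & 1 ? 3 : 0;
    }
    /* TCR_EL1: distinct TBI0 (bit 37) and TBI1 (bit 38). */
    return (tcr >> 37) & 3;
}

/* Selecting by virtual-address bit 55 then needs no special casing. */
unsigned tbi_for_address(unsigned tbi2, uint64_t va)
{
    return (tbi2 >> ((va >> 55) & 1)) & 1;
}
```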
2020-03-05  target/arm: Honor the HCR_EL2.TTLB bit  (Richard Henderson)

This bit traps EL1 access to TLB maintenance insns.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200229012811.24129-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Honor the HCR_EL2.TPU bit  (Richard Henderson)

This bit traps EL1 access to cache maintenance insns that operate to the point of unification. There are no longer any references to plain aa64_cacheop_access, so remove it.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200229012811.24129-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Honor the HCR_EL2.TPCP bit  (Richard Henderson)

This bit traps EL1 access to cache maintenance insns that operate to the point of coherency or persistence.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200229012811.24129-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Honor the HCR_EL2.TACR bit  (Richard Henderson)

This bit traps EL1 access to the auxiliary control registers.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200229012811.24129-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Honor the HCR_EL2.TSW bit  (Richard Henderson)

These bits trap EL1 access to set/way cache maintenance insns.

Buglink: https://bugs.launchpad.net/bugs/1863685
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200229012811.24129-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Honor the HCR_EL2.{TVM,TRVM} bits  (Richard Henderson)

These bits trap EL1 access to various virtual memory controls.

Buglink: https://bugs.launchpad.net/bugs/1855072
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200229012811.24129-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Improve masking in arm_hcr_el2_eff  (Richard Henderson)

Update the {TGE,E2H} == '11' masking to ARMv8.6. If EL2 is configured for aarch32, disable all of the bits that are RES0 in aarch32 mode.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200229012811.24129-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Remove EL2 and EL3 setup from user-only  (Richard Henderson)

We have disabled EL2 and EL3 for user-only, which means that these registers "don't exist" and should not be set.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200229012811.24129-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Disable has_el2 and has_el3 for user-only  (Richard Henderson)

In arm_cpu_reset, we configure many system registers so that user-only behaves as it should with a minimum of ifdefs. However, we do not set all of the system registers as required for a cpu with EL2 and EL3.

Disabling EL2 and EL3 means that we will not look at those registers, which in turn means that we don't have to worry about configuring them.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200229012811.24129-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Add HCR_EL2 bit definitions from ARMv8.6  (Richard Henderson)

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200229012811.24129-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Improve masking of HCR/HCR2 RES0 bits  (Richard Henderson)

Don't merely start with v8.0; handle v7VE as well. Ensure that writes from aarch32 mode do not change bits in the other half of the register. Protect reads of aa64 id registers with ARM_FEATURE_AARCH64.

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200229012811.24129-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-03-05  target/arm: Implement (trivially) ARMv8.2-TTCNP  (Peter Maydell)

The ARMv8.2-TTCNP extension allows an implementation to optimize by sharing TLB entries between multiple cores, provided that software declares that it's ready to deal with this by setting a CnP bit in the TTBRn_ELx. It is mandatory from ARMv8.2 onward.

For QEMU's TLB implementation, sharing TLB entries between different cores would not really benefit us and would be a lot of work to implement. So we implement this extension in the "trivial" manner: we allow the guest to set and read back the CnP bit, but don't change our behaviour (this is an architecturally valid implementation choice).

The only code path which looks at the TTBRn_ELx values for the long-descriptor format where the CnP bit is defined is already doing enough masking to not get confused when the CnP bit at the bottom of the register is set, so we can simply add a comment noting why we're relying on that mask.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200225193822.18874-1-peter.maydell@linaro.org
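A sketch of why the existing masking suffices (illustrative only; the function and the low-bound value are not QEMU code): CnP is bit [0] of TTBRn_ELx, below the translation table base address, so extracting the base with a mask that clears the low bits also clears CnP whether or not the guest set it.

```c
#include <stdint.h>

/* base_lsb is the lowest address bit of the table base, which depends
 * on the granule and region size (12 is just an example value); CnP at
 * bit 0 always falls below it and is masked away with the other low
 * bits. */
uint64_t ttbr_base_address(uint64_t ttbr, unsigned base_lsb)
{
    return ttbr & 0x0000ffffffffffffULL & ~((1ULL << base_lsb) - 1);
}
```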
2020-03-03  Merge remote-tracking branch 'remotes/palmer/tags/riscv-for-master-5.0-sf3' into staging  (Peter Maydell)

RISC-V Patches for the 5.0 Soft Freeze, Part 3

This pull request is almost entirely an implementation of the draft hypervisor extension. This extension is still in draft and is expected to have incompatible changes before being frozen, but we've had good luck managing other RISC-V draft extensions in QEMU so far. Additionally, there's a fix to PCI addressing and some improvements to the M-mode timer.

This boots linux and passes make check for me.

# gpg: Signature made Tue 03 Mar 2020 00:23:20 GMT
# gpg:                using RSA key 2B3C3747446843B24A943A7A2E1319F35FBB1889
# gpg:                issuer "palmer@dabbelt.com"
# gpg: Good signature from "Palmer Dabbelt <palmer@dabbelt.com>" [unknown]
# gpg:                 aka "Palmer Dabbelt <palmer@sifive.com>" [unknown]
# gpg:                 aka "Palmer Dabbelt <palmerdabbelt@google.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 00CE 76D1 8349 60DF CE88 6DF8 EF4C A150 2CCB AB41
#      Subkey fingerprint: 2B3C 3747 4468 43B2 4A94 3A7A 2E13 19F3 5FBB 1889

* remotes/palmer/tags/riscv-for-master-5.0-sf3: (38 commits)
  hw/riscv: Provide rdtime callback for TCG in CLINT emulation
  target/riscv: Emulate TIME CSRs for privileged mode
  riscv: virt: Allow PCI address 0
  target/riscv: Allow enabling the Hypervisor extension
  target/riscv: Add the MSTATUS_MPV_ISSET helper macro
  target/riscv: Add support for the 32-bit MSTATUSH CSR
  target/riscv: Set htval and mtval2 on execptions
  target/riscv: Raise the new execptions when 2nd stage translation fails
  target/riscv: Implement second stage MMU
  target/riscv: Allow specifying MMU stage
  target/riscv: Respect MPRV and SPRV for floating point ops
  target/riscv: Mark both sstatus and msstatus_hs as dirty
  target/riscv: Disable guest FP support based on virtual status
  target/riscv: Only set TB flags with FP status if enabled
  target/riscv: Remove the hret instruction
  target/riscv: Add hfence instructions
  target/riscv: Add Hypervisor trap return support
  target/riscv: Add hypvervisor trap support
  target/riscv: Generate illegal instruction on WFI when V=1
  target/ricsv: Flush the TLB on virtulisation mode changes
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-28  target/arm: Implement ARMv8.3-CCIDX  (Peter Maydell)

The ARMv8.3-CCIDX extension makes the CCSIDR_EL1 system ID registers have a format that uses the full 64 bit width of the register, and adds a new CCSIDR2 register so AArch32 can get at the high 32 bits.

QEMU doesn't implement caches, so we just treat these ID registers as opaque values that are set to the correct constant values for each CPU. The only thing we need to do is allow 64-bit values in our ccsidr[] array and provide the CCSIDR2 accessors.

We don't set the CCIDX field in our 'max' CPU because the CCSIDR constant values we use are the same as the ones used by the Cortex-A57 and they are in the old 32-bit format. This means that the extra regdef added here is unused currently, but it means that whenever in the future we add a CPU that does need the new 64-bit format it will just work when we set the ccsidr values and the ID registers for it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200224182626.29252-1-peter.maydell@linaro.org
2020-02-28  target/arm: Implement v8.4-RCPC  (Peter Maydell)

The v8.4-RCPC extension implements some new instructions:
 * LDAPUR, LDAPURB, LDAPURH, LDAPRSB, LDAPRSH, LDAPRSW
 * STLUR, STLURB, STLURH

These are all in a new subgroup of encodings that sits below the top-level "Loads and Stores" group in the Arm ARM.

The STLUR* instructions have standard store-release semantics; the LDAPUR* have Load-AcquirePC semantics, but (as with LDAPR*) we choose to implement them as the slightly stronger Load-Acquire.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200224172846.13053-4-peter.maydell@linaro.org
2020-02-28  target/arm: Implement v8.3-RCPC  (Peter Maydell)

The v8.3-RCPC extension implements three new load instructions which provide slightly weaker consistency guarantees than the existing load-acquire operations. For QEMU we choose to simply implement them with a full LDAQ barrier.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200224172846.13053-3-peter.maydell@linaro.org
2020-02-28  target/arm: Fix wrong use of FIELD_EX32 on ID_AA64DFR0  (Peter Maydell)

We missed an instance of using FIELD_EX32 on a 64-bit ID register, in isar_feature_aa64_pmu_8_4(). Fix it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200224172846.13053-2-peter.maydell@linaro.org
2020-02-28  target/arm: Split VMINMAXNM decode  (Richard Henderson)

Passing the raw op field from the manual is less instructive than it might be. Do the full decode and use the existing helpers to perform the expansion. Since these are v8 insns, VECLEN+VECSTRIDE are already RES0.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200224222232.13807-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-28  target/arm: Split VFM decode  (Richard Henderson)

Passing the raw o1 and o2 fields from the manual is less instructive than it might be. Do the full decode and let the trans_* functions pass in booleans to a helper.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200224222232.13807-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-28  target/arm: Add formats for some vfp 2 and 3-register insns  (Richard Henderson)

Those vfp instructions without extra opcode fields can share a common @format for brevity.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200224222232.13807-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-28  target/arm: Remove ARM_FEATURE_VFP*  (Richard Henderson)

We have converted all tests against these features to ISAR tests.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200224222232.13807-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-28  target/arm: Move the vfp decodetree calls next to the base isa  (Richard Henderson)

Have the calls adjacent as an intermediate step toward actually merging the decodes.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200224222232.13807-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-28  target/arm: Move VLLDM and VLSTM to vfp.decode  (Richard Henderson)

Now that we no longer have an early check for ARM_FEATURE_VFP, we can use the proper ISA check in trans_VLLDM_VLSTM.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200224222232.13807-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-28  target/arm: Remove ARM_FEATURE_VFP check from disas_vfp_insn  (Richard Henderson)

We now have proper ISA checks within each trans_* function.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200224222232.13807-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-02-28  target/arm: Replace ARM_FEATURE_VFP4 with isar_feature_aa32_simdfmac  (Richard Henderson)

All remaining tests for VFP4 are for fused multiply-add insns.

Since the MVFR1 field is used for both VFP and NEON, move its adjustment from the !has_neon block to the (!has_vfp && !has_neon) block.

Test for vfp of the appropriate width alongside the test for simdfmac within translate-vfp.inc.c. Within disas_neon_data_insn, we have already tested for ARM_FEATURE_NEON.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20200224222232.13807-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>