path: root/accel

2021-06-03  hvf: Simplify post reset/init/loadvm hooks  (Alexander Graf)

The hooks we have that call us after reset, init and loadvm really all just want to say "the reference copy of all register state is in the QEMU vcpu struct, please push it". We already have a working push mechanism for that, cpu->vcpu_dirty, so we can simply reuse it for all of the above and sync the state properly the next time we actually execute a vCPU.

This fixes PSCI resets on ARM, as they modify CPU state even after the post init call has completed, but before we execute the vCPU again.

To also make the scheme work for x86, we have to make sure we don't move stale eflags into our env when the vcpu state is dirty.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
Tested-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-13-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

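The mechanism boils down to each hook simply marking the vCPU dirty; roughly (a sketch of the idea, not necessarily the exact code that was merged):

    static void do_hvf_cpu_synchronize_set_dirty(CPUState *cpu,
                                                 run_on_cpu_data arg)
    {
        /* QEMU's copy of the register state is authoritative from here on;
         * it gets pushed before the vCPU executes next. */
        cpu->vcpu_dirty = true;
    }

    static void hvf_cpu_synchronize_post_reset(CPUState *cpu)
    {
        run_on_cpu(cpu, do_hvf_cpu_synchronize_set_dirty, RUN_ON_CPU_NULL);
    }
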
2021-06-03hvf: Introduce hvf vcpu structAlexander Graf
We will need more than a single field for hvf going forward. To keep the global vcpu struct uncluttered, let's allocate a special hvf vcpu struct, similar to how hax does it. Signed-off-by: Alexander Graf <agraf@csgraf.de> Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com> Tested-by: Roman Bolshakov <r.bolshakov@yadro.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Sergio Lopez <slp@redhat.com> Message-id: 20210519202253.76782-12-agraf@csgraf.de Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
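In shape, this is a container hanging off CPUState; a minimal sketch with illustrative field names:

    struct hvf_vcpu_state {
        hv_vcpuid_t fd;   /* the Hypervisor.framework vCPU handle */
        /* further hvf-only per-vCPU fields go here, not in CPUState */
    };

    /* CPUState then carries a single pointer:  struct hvf_vcpu_state *hvf; */
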
2021-06-03  hvf: Remove hvf-accel-ops.h  (Alexander Graf)

We can move the definition of hvf_vcpu_exec() into our internal hvf header, obsoleting the need for hvf-accel-ops.h.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-11-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2021-06-03  hvf: Make synchronize functions static  (Alexander Graf)

The hvf accel synchronize functions are only used as input for local callback functions, so we can make them static.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-10-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2021-06-03  hvf: Use cpu_synchronize_state()  (Alexander Graf)

There is no reason to call the hvf specific hvf_cpu_synchronize_state() when we can just use the generic cpu_synchronize_state() instead. This reduces our dependency on internal function definitions and lets us make hvf_cpu_synchronize_state() static.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-9-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2021-06-03  hvf: Split out common code on vcpu init and destroy  (Alexander Graf)

Until now, Hypervisor.framework has only been available on x86_64 systems. With Apple Silicon shipping now, it extends its reach to aarch64. To prepare for support for multiple architectures, let's start moving common code out into its own accel directory.

This patch splits the vcpu init and destroy functions into a generic and an architecture specific portion. This also allows us to move the generic functions into the generic hvf code, removing exported functions.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-8-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

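The resulting split looks roughly like this (a sketch; the arch hook name and body are illustrative):

    /* accel/hvf/ - generic part, shared by all architectures */
    int hvf_init_vcpu(CPUState *cpu)
    {
        cpu->hvf = g_new0(struct hvf_vcpu_state, 1);
        /* common setup: signal handling, cpu->vcpu_dirty, hv_vcpu_create(), ... */
        return hvf_arch_init_vcpu(cpu);   /* x86_64- or aarch64-specific part */
    }
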
2021-06-03  hvf: Remove use of hv_uvaddr_t and hv_gpaddr_t  (Alexander Graf)

The ARM version of Hypervisor.framework no longer defines these two types, so let's just revert to standard ones.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-7-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2021-06-03  hvf: Make hvf_set_phys_mem() static  (Alexander Graf)

The hvf_set_phys_mem() function is only called within the same file. Make it static.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-6-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2021-06-03  hvf: Move cpu functions into common directory  (Alexander Graf)

Until now, Hypervisor.framework has only been available on x86_64 systems. With Apple Silicon shipping now, it extends its reach to aarch64. To prepare for support for multiple architectures, let's start moving common code out into its own accel directory.

This patch moves CPU and memory operations over. While at it, make sure the code is consumable on non-i386 systems.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-4-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2021-06-03  hvf: Move vcpu thread functions into common directory  (Alexander Graf)

Until now, Hypervisor.framework has only been available on x86_64 systems. With Apple Silicon shipping now, it extends its reach to aarch64. To prepare for support for multiple architectures, let's start moving common code out into its own accel directory.

This patch moves the vCPU thread loop over.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-3-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2021-06-03  hvf: Move assert_hvf_ok() into common directory  (Alexander Graf)

Until now, Hypervisor.framework has only been available on x86_64 systems. With Apple Silicon shipping now, it extends its reach to aarch64. To prepare for support for multiple architectures, let's start moving common code out into its own accel directory.

This patch moves assert_hvf_ok() and introduces generic build infrastructure.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-2-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2021-06-02  docs: fix references to docs/devel/tracing.rst  (Stefano Garzarella)

Commit e50caf4a5c ("tracing: convert documentation to rST") converted docs/devel/tracing.txt to docs/devel/tracing.rst. We still have several references to the old file, so let's fix them with the following command:

    sed -i s/tracing.txt/tracing.rst/ $(git grep -l docs/devel/tracing.txt)

Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20210517151702.109066-2-sgarzare@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>

2021-05-28  Merge remote-tracking branch 'remotes/rth-gitlab/tags/pull-tcg-20210526' into staging  (Peter Maydell)

Adjust types for some memory access functions. Reduce inclusion of tcg headers. Fix watchpoints vs replay. Fix tcg/aarch64 roli expansion. Introduce SysemuCPUOps structure.

# gpg: Signature made Thu 27 May 2021 00:43:54 BST
# gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg: issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F

* remotes/rth-gitlab/tags/pull-tcg-20210526: (31 commits)
  hw/core: Constify TCGCPUOps
  target/mips: Fold jazz behaviour into mips_cpu_do_transaction_failed
  cpu: Move CPUClass::get_paging_enabled to SysemuCPUOps
  cpu: Move CPUClass::get_memory_mapping to SysemuCPUOps
  cpu: Move CPUClass::get_phys_page_debug to SysemuCPUOps
  cpu: Move CPUClass::asidx_from_attrs to SysemuCPUOps
  cpu: Move CPUClass::write_elf* to SysemuCPUOps
  cpu: Move CPUClass::get_crash_info to SysemuCPUOps
  cpu: Move CPUClass::virtio_is_big_endian to SysemuCPUOps
  cpu: Move CPUClass::vmsd to SysemuCPUOps
  cpu: Introduce SysemuCPUOps structure
  cpu: Move AVR target vmsd field from CPUClass to DeviceClass
  cpu: Rename CPUClass vmsd -> legacy_vmsd
  cpu: Assert DeviceClass::vmsd is NULL on user emulation
  cpu: Directly use get_memory_mapping() fallback handlers in place
  cpu: Directly use get_paging_enabled() fallback handlers in place
  cpu: Directly use cpu_write_elf*() fallback handlers in place
  cpu: Introduce cpu_virtio_is_big_endian()
  cpu: Un-inline cpu_get_phys_page_debug and cpu_asidx_from_attrs
  cpu: Split as cpu-common / cpu-sysemu
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2021-05-26  accel/tcg: Keep TranslationBlock headers local to TCG  (Philippe Mathieu-Daudé)

Only the TCG accelerator uses the TranslationBlock API. Move the tb-context.h / tb-hash.h / tb-lookup.h headers from the global namespace to the TCG one (in accel/tcg).

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210524170453.3791436-3-f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2021-05-26  accel/tcg: Reduce 'exec/tb-context.h' inclusion  (Philippe Mathieu-Daudé)

Only 2 headers require "exec/tb-context.h". Instead of having all files including "exec/exec-all.h" also including it, directly include it where it is required:
- accel/tcg/cpu-exec.c
- accel/tcg/translate-all.c

For plugins/plugin.h, we were implicitly relying on exec/exec-all.h -> exec/tb-context.h -> qemu/qht.h, which is now included directly.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210524170453.3791436-2-f4bug@amsat.org>
[rth: Fix plugins/plugin.h compilation]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2021-05-26  KVM: Dirty ring support  (Peter Xu)

The KVM dirty ring is a new interface to pass dirty bits from the kernel to userspace. Instead of using a bitmap for each memory region, the dirty ring contains an array of dirtied GPAs to fetch (in the form of offsets within slots). For each vcpu there will be one dirty ring that binds to it.

kvm_dirty_ring_reap() is the major function to collect dirty rings. It can be called either by a standalone reaper thread that runs in the background, collecting dirty pages for the whole VM, or directly by any thread that has the BQL taken.

Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20210506160549.130416-11-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

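Conceptually, reaping one vCPU's ring looks like the following (a simplified sketch; helper and field names are illustrative, not the exact merged code):

    static uint32_t dirty_ring_reap_one_vcpu(KVMState *s, CPUState *cpu)
    {
        struct kvm_dirty_gfn *ring = cpu->kvm_dirty_gfns;   /* mmap()ed from KVM */
        uint32_t count = 0;

        for (;;) {
            struct kvm_dirty_gfn *e =
                &ring[cpu->kvm_fetch_index % s->kvm_dirty_ring_size];

            if (!dirty_gfn_is_dirtied(e)) {
                break;                           /* ring drained */
            }
            /* e->slot identifies the memslot, e->offset the page inside it */
            mark_page_dirty_in_slot(s, e->slot, e->offset);
            dirty_gfn_set_collected(e);          /* hand the entry back to KVM */
            cpu->kvm_fetch_index++;
            count++;
        }
        return count;
    }
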
2021-05-26  KVM: Disable manual dirty log when dirty ring enabled  (Peter Xu)

KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 is for KVM_CLEAR_DIRTY_LOG, which is only useful for KVM_GET_DIRTY_LOG. Skip enabling it for kvm dirty ring.

More importantly, KVM_DIRTY_LOG_INITIALLY_SET will not wr-protect all the pages initially, which is against how kvm dirty ring is used - there's no way for kvm dirty ring to re-protect a page before it's notified as being written first with a GFN entry in the ring! So when KVM_DIRTY_LOG_INITIALLY_SET is enabled with dirty ring, we'll see silent data loss after migration.

Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20210506160549.130416-10-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2021-05-26  KVM: Add dirty-ring-size property  (Peter Xu)

Add a parameter for the dirty gfn count of the dirty rings. If zero, the dirty ring is disabled; otherwise the dirty ring will be enabled with the per-vcpu gfn count as specified. If the dirty ring cannot be enabled due to an unsupported kernel or an illegal parameter, it'll fall back to dirty logging.

By default, the dirty ring is not enabled (dirty-gfn-count defaults to 0).

Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20210506160549.130416-9-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

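Assuming the property is wired up like other accelerator options, enabling a 4096-entry per-vCPU ring would look something like:

    qemu-system-x86_64 -accel kvm,dirty-ring-size=4096 ...
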
2021-05-26  KVM: Cache kvm slot dirty bitmap size  (Peter Xu)

Cache it too because we'll reference it more frequently in the future.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20210506160549.130416-8-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2021-05-26  KVM: Simplify dirty log sync in kvm_set_phys_mem  (Peter Xu)

kvm_physical_sync_dirty_bitmap() on the whole section is inaccurate, because the section can be a superset of the memslot that we're working on. The result is that if the section covers multiple kvm memslots, we could end up doing the synchronization multiple times, once for each kvm memslot in the section.

With the two helpers that we just introduced, it's now easy to do it right by simply calling them.

Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20210506160549.130416-7-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2021-05-26  KVM: Provide helper to sync dirty bitmap from slot to ramblock  (Peter Xu)

kvm_physical_sync_dirty_bitmap() calculates the ramblock offset in an awkward way from the MemoryRegionSection that is passed in by the caller. The truth is that for each KVMSlot the ramblock offset never changes during its lifecycle, so cache the ramblock offset in the structure when the KVMSlot is created.

With that, we can further simplify kvm_physical_sync_dirty_bitmap() with a helper that syncs a KVMSlot's dirty bitmap into the ramblock dirty bitmap of that specific KVMSlot.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20210506160549.130416-6-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2021-05-26  KVM: Provide helper to get kvm dirty log  (Peter Xu)

Provide a helper kvm_slot_get_dirty_log() to make kvm_physical_sync_dirty_bitmap() clearer. We can even cache the as_id in the KVMSlot when it is created, so that we don't need to pass it down every time.

While at it, remove the return value of kvm_physical_sync_dirty_bitmap() because it should never fail.

Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20210506160549.130416-5-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2021-05-26  KVM: Create the KVMSlot dirty bitmap on flag changes  (Peter Xu)

Previously we had two places that would create the per-KVMSlot dirty bitmap:

1. When a newly created KVMSlot has dirty logging enabled,
2. When the first log_sync() happens for a memory slot.

The 2nd case is lazy-init, while the 1st case is not (it is a fix for what the 2nd case missed). To initialize dirty bitmaps explicitly, what we're missing is creating the dirty bitmap when the slot changes from not-dirty-tracked to dirty-tracked. Do that in kvm_slot_update_flags(). With that, we can safely remove the 2nd lazy-init.

This change will be needed for the kvm dirty ring because the kvm dirty ring does not use the log_sync() interface at all.

Also move all the pre-checks into kvm_slot_init_dirty_bitmap().

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20210506160549.130416-4-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2021-05-26  KVM: Use a big lock to replace per-kml slots_lock  (Peter Xu)

The per-kml slots_lock brings some trouble if we want to take the slots_lock of all the KMLs, especially in a context where we may already hold some of the KML slots_locks - then we'd have to figure out which ones we've taken and which ones we still need to take. Make this simple by merging all KML slots_locks into a single slots lock.

The per-kml slots_lock isn't that helpful anyway - so far only x86 has two address spaces (so, two slots_locks). All the other architectures always have a single address space, which means there is effectively only one slots_lock there, so for them nothing changes.

Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20210506160549.130416-3-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2021-05-26  KVM: do not allow setting properties at runtime  (Paolo Bonzini)

Only allow accelerator properties to be set when the accelerator is being created.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2021-05-25  accel/tlb: Rename tlb_flush_[page_bits > range]_by_mmuidx_async_[2 > 1]  (Richard Henderson)

Rename to match tlb_flush_range_locked.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-9-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2021-05-25  accel/tcg: Rename tlb_flush_[page_bits -> range]_by_mmuidx_async_0  (Richard Henderson)

Rename to match tlb_flush_range_locked.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-8-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2021-05-25  accel/tlb: Add tlb_flush_range_by_mmuidx_all_cpus_synced()  (Richard Henderson)

Forward tlb_flush_page_bits_by_mmuidx_all_cpus_synced to tlb_flush_range_by_mmuidx_all_cpus_synced, passing TARGET_PAGE_SIZE.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-7-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2021-05-25  accel/tcg: Add tlb_flush_range_by_mmuidx_all_cpus()  (Richard Henderson)

Forward tlb_flush_page_bits_by_mmuidx_all_cpus to tlb_flush_range_by_mmuidx_all_cpus, passing TARGET_PAGE_SIZE.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-6-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2021-05-25  accel/tcg: Add tlb_flush_range_by_mmuidx()  (Richard Henderson)

Forward tlb_flush_page_bits_by_mmuidx to tlb_flush_range_by_mmuidx, passing TARGET_PAGE_SIZE.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-5-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

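Taken together, the old page-granular entry points become thin wrappers; roughly (a sketch, with signatures as they stood at this point in the tree):

    void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, target_ulong addr,
                                       uint16_t idxmap, unsigned bits)
    {
        /* a one-page flush is just a range flush of TARGET_PAGE_SIZE */
        tlb_flush_range_by_mmuidx(cpu, addr, TARGET_PAGE_SIZE, idxmap, bits);
    }
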
2021-05-25  accel/tcg: Remove {encode,decode}_pbm_to_runon  (Richard Henderson)

We will not be able to fit address + length into a 64-bit packet. Drop this optimization before re-organizing this code.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-10-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[PMM: Moved patch earlier in the series]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2021-05-25  accel/tlb: Rename TLBFlushPageBitsByMMUIdxData -> TLBFlushRangeData  (Richard Henderson)

Rename the structure to match the rename of tlb_flush_range_locked.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-4-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2021-05-25  accel/tcg: Pass length argument to tlb_flush_range_locked()  (Richard Henderson)

Rename tlb_flush_page_bits_locked() -> tlb_flush_range_locked(), and have callers pass a length argument (currently TARGET_PAGE_SIZE) via the TLBFlushPageBitsByMMUIdxData structure.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-3-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2021-05-25  accel/tcg: Replace g_new() + memcpy() by g_memdup()  (Richard Henderson)

Using g_memdup is a bit more compact than g_new + memcpy.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-2-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

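The transformation is mechanical; for a heap copy of a local TLBFlushRangeData it is roughly:

    /* before */
    TLBFlushRangeData *p = g_new(TLBFlushRangeData, 1);
    memcpy(p, &d, sizeof(d));

    /* after */
    TLBFlushRangeData *p = g_memdup(&d, sizeof(d));
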
2021-05-20  accel/tcg: Assert that tb->size != 0 after translation  (Ilya Leoshkevich)

If arch-specific code generates a translation block of size 0, tb_gen_code() may generate a spurious exception. Add an assertion in order to catch such situations early.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210416154939.32404-5-iii@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>

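Conceptually the guard amounts to a single check right after the translator loop in tb_gen_code() (sketch):

    assert(tb->size != 0);   /* a zero-sized TB indicates a target translator bug */
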
2021-05-18  Merge remote-tracking branch 'remotes/rth-gitlab/tags/pull-tcg-20210516' into staging  (Peter Maydell)

Minor MAINTAINERS update. Tweak to includes. Add tcg_constant_tl. Improve constant pool dump.

# gpg: Signature made Sun 16 May 2021 15:08:42 BST
# gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg: issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F

* remotes/rth-gitlab/tags/pull-tcg-20210516:
  accel/tcg: Align data dumped at end of TB
  tcg: Add tcg_constant_tl
  exec/gen-icount.h: Add missing "exec/exec-all.h" include
  MAINTAINERS: Add include/exec/gen-icount.h to 'Main Loop' section

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2021-05-16  accel/tcg: Align data dumped at end of TB  (Philippe Mathieu-Daudé)

To better visualize the data dumped at the end of a TB, left-align it (padding it with 0). Print ".long" instead of ".quad" on 32-bit hosts.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210515104202.241504-1-f4bug@amsat.org>
[rth: Split the qemu_log and print .long for 32-bit hosts.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2021-05-16  accel/tcg: Use add/sub overflow routines in tcg-runtime-gvec.c  (Richard Henderson)

Obvious uses of the new functions.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2021-05-10  accel: add init_accel_cpu for adapting accel behavior to CPU type  (Claudio Fontana)

While on x86 all CPU classes can use the same set of TCGCPUOps, on ARM the right accel behavior depends on the type of the CPU, so we need a way to specialize the accel behavior according to the CPU. Therefore, add a second initialization, after accel_cpu->cpu_class_init, that allows us to do this.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210322132800.7470-24-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2021-05-10  accel-cpu: make cpu_realizefn return a bool  (Claudio Fontana)

Overall, all devices' realize functions take an Error **errp, but return void. The hw/core/qdev.c code, which realizes devices, therefore does:

    local_err = NULL;
    dc->realize(dev, &local_err);
    if (local_err != NULL) {
        goto fail;
    }

However, we can improve at least accel_cpu to return a meaningful bool value.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210322132800.7470-9-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

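With that change the caller can test the result directly; a minimal sketch of the shape (assuming the accel_cpu_realizefn()/cpu_exec_realizefn() names used elsewhere in this series):

    bool accel_cpu_realizefn(CPUState *cpu, Error **errp);

    void cpu_exec_realizefn(CPUState *cpu, Error **errp)
    {
        if (!accel_cpu_realizefn(cpu, errp)) {
            return;   /* errp is already set, no local_err dance needed */
        }
        /* ... rest of the realize path ... */
    }
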
2021-05-10  accel: introduce new accessor functions  (Claudio Fontana)

Avoid open coding the accesses to the cpu->accel_cpu interfaces, and instead introduce accel_cpu_instance_init and accel_cpu_realizefn, to be used by the targets/ initfn code and by cpu_exec_realizefn respectively.

Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210322132800.7470-7-cfontana@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2021-05-06  Merge remote-tracking branch 'remotes/bonzini-gitlab/tags/for-upstream' into staging  (Peter Maydell)

* NetBSD NVMM support
* RateLimit mutex
* Prepare for Meson 0.57 upgrade

# gpg: Signature made Tue 04 May 2021 13:15:37 BST
# gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg: issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

* remotes/bonzini-gitlab/tags/for-upstream:
  glib-compat: accept G_TEST_SLOW environment variable
  gitlab-ci: use --meson=internal for CFI jobs
  configure: handle meson options that have changed type
  configure: reindent meson invocation
  slirp: add configure option to disable smbd
  ratelimit: protect with a mutex
  Add NVMM Accelerator: add maintainers for NetBSD/NVMM
  Add NVMM accelerator: acceleration enlightenments
  Add NVMM accelerator: x86 CPU support
  Add NVMM accelerator: configure and build logic
  oslib-win32: do not rely on macro to get redefined function name

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2021-05-04  Add NVMM accelerator: configure and build logic  (Reinoud Zandijk)

Signed-off-by: Kamil Rytarowski <kamil@NetBSD.org>
Signed-off-by: Reinoud Zandijk <reinoud@NetBSD.org>
Message-Id: <20210402202535.11550-2-reinoud@NetBSD.org>
[Check for nvmm_vcpu_stop. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2021-05-02  Do not include exec/address-spaces.h if it's not really necessary  (Thomas Huth)

Stop including exec/address-spaces.h in files that don't need it.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210416171314.2074665-5-thuth@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>

2021-05-02  Do not include cpu.h if it's not really necessary  (Thomas Huth)

Stop including cpu.h in files that don't need it.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210416171314.2074665-4-thuth@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>

2021-05-02  Do not include hw/boards.h if it's not really necessary  (Thomas Huth)

Stop including hw/boards.h in files that don't need it.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210416171314.2074665-3-thuth@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>

2021-05-02  Do not include sysemu/sysemu.h if it's not really necessary  (Thomas Huth)

Stop including sysemu/sysemu.h in files that don't need it.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210416171314.2074665-2-thuth@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>

2021-05-02  accel: kvm: clarify that extra exit data is hexadecimal  (David Edmondson)

When dumping the extra exit data provided by KVM, make it clear that the data is hexadecimal. At the same time, zero-pad the output.

Signed-off-by: David Edmondson <david.edmondson@oracle.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20210428142431.266879-1-david.edmondson@oracle.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>

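The resulting dump looks something like the following (a sketch of the intent, not necessarily the exact string used):

    for (i = 0; i < run->internal.ndata; ++i) {
        /* "0x" prefix plus zero-padded 16-digit hex per 64-bit word */
        fprintf(stderr, "extra data[%d]: 0x%016" PRIx64 "\n",
                i, (uint64_t)run->internal.data[i]);
    }
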
2021-04-17  accel/tcg: avoid re-translating one-shot instructions  (Alex Bennée)

By definition a single instruction is capable of being an IO instruction. This avoids a problem of triggering a cpu_io_recompile on a non-recorded translation which then fails because it expects tcg_tb_lookup() to succeed unconditionally.

The normal use case requires a TB to be able to resolve machine state. The other users of tcg_tb_lookup() are able to tolerate a missing TB if the machine state has been resolved by other means - which in the single-shot case is always true because machine state is synced at the start of a block.

Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210415162454.22056-1-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2021-04-12  accel/tcg: Preserve PAGE_ANON when changing page permissions  (Richard Henderson)

Using mprotect() to change PROT_* does not change the MAP_ANON previously set with mmap(). Our linux-user version of MTE only works with MAP_ANON pages, so losing PAGE_ANON caused MTE to stop working.

Reported-by: Stephen Long <steplong@quicinc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

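The underlying POSIX behaviour this works around, in a minimal standalone form:

    #include <sys/mman.h>

    void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);   /* anonymous mapping */
    mprotect(p, 4096, PROT_READ);   /* changes PROT_* only; the mapping stays
                                       anonymous, so QEMU's own page flags must
                                       keep PAGE_ANON across the permission change */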