path: root/include/exec/cpu-defs.h
2021-02-05  exec/cpu-defs: Remove TCG backends dependency  (Philippe Mathieu-Daudé)
"exec/cpu-defs.h" contains generic CPU definitions for the TCG frontends (mostly related to TLB). TCG backends definitions aren't relevant here. See tcg/README description: 4) Backend tcg-target.h contains the target specific definitions. tcg-target.c.inc contains the target specific code; it is #included by tcg/tcg.c, rather than being a standalone C file. So far only "tcg/tcg.h" requires these headers. Remove the "target-tcg.h" header dependency on TCG frontends, so we don't have to rebuild all frontends when hacking a single backend. Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20210204191423.1754158-1-f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2020-11-15  overall/alpha tcg cpus|hppa: Fix Lesser GPL version number  (Chetan Pant)
There is no "version 2" of the "Lesser" General Public License. It is either "GPL version 2.0" or "Lesser GPL version 2.1". This patch replaces all occurrences of "Lesser GPL version 2" with "Lesser GPL version 2.1" in comment section. Signed-off-by: Chetan Pant <chetan4windows@gmail.com> Message-Id: <20201023123353.19796-1-chetan4windows@gmail.com> Reviewed-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com>
2020-06-26  osdep: Make MIN/MAX evaluate arguments only once  (Eric Blake)
I'm not aware of any immediate bugs in qemu where a second runtime evaluation of the arguments to MIN() or MAX() causes a problem, but proactively preventing such abuse is easier than falling prey to an unintended case down the road. At any rate, here's the conversation that sparked the current patch: https://lists.gnu.org/archive/html/qemu-devel/2018-12/msg05718.html

Update the MIN/MAX macros to only evaluate their arguments once at runtime; this uses typeof(1 ? (a) : (b)) to ensure that we are promoting the temporaries to the same type as the final comparison (we have to trigger type promotion, as typeof(bitfield) won't compile; and we can't use typeof((a) + (b)) or even typeof((a) + 0), as some of our uses of MAX are on void* pointers where such addition is undefined).

However, we are unable to work around gcc refusing to compile ({}) in a constant context (such as the array length of a static variable), even when only used in the dead branch of a __builtin_choose_expr(), so we have to provide a second macro pair MIN_CONST and MAX_CONST for use when both arguments are known to be compile-time constants and where the result must also be usable as a constant; this second form evaluates arguments multiple times but that doesn't matter for constants. By using a void expression as the expansion if a non-constant is presented to this second form, we can enlist the compiler to ensure the double evaluation is not attempted on non-constants.

Alas, as both macros now rely on compiler intrinsics, they are no longer usable in preprocessor #if conditions; those will just have to be open-coded or the logic rewritten into #define or runtime 'if' conditions (but where the compiler dead-code-elimination will probably still apply).

I tested that both gcc 10.1.1 and clang 10.0.0 produce errors for all forms of macro mis-use. As the errors can sometimes be cryptic, I'm demonstrating the gcc output.

Use of MIN when MIN_CONST is needed:

  In file included from /home/eblake/qemu/qemu-img.c:25:
  /home/eblake/qemu/include/qemu/osdep.h:249:5: error: braced-group within expression allowed only inside a function
    249 | ({ \
        | ^
  /home/eblake/qemu/qemu-img.c:92:12: note: in expansion of macro ‘MIN’
     92 | char array[MIN(1, 2)] = "";
        | ^~~

Use of MIN_CONST when MIN is needed:

  /home/eblake/qemu/qemu-img.c: In function ‘is_allocated_sectors’:
  /home/eblake/qemu/qemu-img.c:1225:15: error: void value not ignored as it ought to be
    1225 | i = MIN_CONST(i, n);
        | ^

Use of MIN in the preprocessor:

  In file included from /home/eblake/qemu/accel/tcg/translate-all.c:20:
  /home/eblake/qemu/accel/tcg/translate-all.c: In function ‘page_check_range’:
  /home/eblake/qemu/include/qemu/osdep.h:249:6: error: token "{" is not valid in preprocessor expressions
    249 | ({ \
        | ^

Fix the resulting callsites that used #if or computed a compile-time constant min or max to use the new macros. cpu-defs.h is interesting, as CPU_TLB_DYN_MAX_BITS is sometimes used as a constant and sometimes dynamic.

It may be worth improving glib's MIN/MAX definitions to be saner, but that is a task for another day.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200625162602.700741-1-eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
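As a hedged illustration of the macro shapes described above (the _SKETCH names flag this as a reconstruction of the technique, not the osdep.h text):

    /* Single runtime evaluation: typeof(1 ? (a) : (b)) forces the usual
     * arithmetic conversions, so both temporaries share the promoted
     * type used by the final comparison. */
    #define MIN_SKETCH(a, b)                                \
        ({                                                  \
            typeof(1 ? (a) : (b)) _a = (a), _b = (b);       \
            _a < _b ? _a : _b;                              \
        })

    /* Constant-context variant: double evaluation is harmless for
     * compile-time constants; the void expression in the dead branch
     * turns misuse on non-constants into a compile error. */
    #define MIN_CONST_SKETCH(a, b)                                  \
        __builtin_choose_expr(                                      \
            __builtin_constant_p(a) && __builtin_constant_p(b),     \
            (a) < (b) ? (a) : (b),                                  \
            ((void)0))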
2019-10-28  tcg: let plugins instrument virtual memory accesses  (Emilio G. Cota)
To capture all memory accesses we need hook into all the various helper functions that are involved in memory operations as well as the injected inline helper calls. A later commit will allow us to resolve the actual guest HW addresses by replaying the lookup. Signed-off-by: Emilio G. Cota <cota@braap.org> [AJB: drop haddr handling, just deal in vaddr] Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03  include/exec/cpu-defs.h: fix typo  (Alex Bennée)
Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190828165307.18321-10-alex.bennee@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-21  hw/core: Move cpu.c, cpu.h from qom/ to hw/core/  (Markus Armbruster)
Suggested-by: Daniel P. Berrangé <berrange@redhat.com> Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20190709152053.16670-2-armbru@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> [Rebased onto merge commit 95a9457fd44; missed instances of qom/cpu.h in comments replaced]
2019-08-16  Include qemu/queue.h slightly less  (Markus Armbruster)
Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20190812052359.30071-20-armbru@redhat.com>
2019-06-10  cpu: Remove CPU_COMMON  (Richard Henderson)
This macro is now always empty, so remove it. This leaves the entire contents of CPUArchState under the control of the guest architecture. Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-10  cpu: Move the softmmu tlb to CPUNegativeOffsetState  (Richard Henderson)
We have for some time had code within the tcg backends to handle large positive offsets from env. This move makes sure that need not happen. Indeed, we are able to assert at build time that simple offsets suffice for all hosts. Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
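A hedged sketch of the layout idea: with the negative-offset state placed immediately before env in each target's ArchCPU, TCG can reach the TLB with small negative displacements from the env pointer (TCG_AREG0). The field placement and the assertion bound here are illustrative only:

    /* Illustrative layout; the real struct is defined per target. */
    typedef struct ArchCPU {
        CPUState parent_obj;          /* common state, first member */
        CPUNegativeOffsetState neg;   /* e.g. the softmmu TLB, just below env */
        CPUArchState env;             /* architecture-specific registers */
    } ArchCPU;

    /* A build-time check in this spirit keeps the displacement small. */
    QEMU_BUILD_BUG_ON(sizeof(CPUNegativeOffsetState) > 0x10000);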
2019-06-10  cpu: Move icount_decr to CPUNegativeOffsetState  (Richard Henderson)
Amusingly, we had already ignored the comment to keep this value at the end of CPUState. This restores the minimum negative offset from TCG_AREG0 for code generation. For the couple of uses within qom/cpu.c, without NEED_CPU_H, add a pointer from the CPUState object to the IcountDecr object within CPUNegativeOffsetState. Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-10  cpu: Introduce CPUNegativeOffsetState  (Richard Henderson)
Nothing in there so far, but all of the plumbing done within the target ArchCPU state. Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-06-10  tcg: Create struct CPUTLB  (Richard Henderson)
Move all softmmu tlb data into this structure. Arrange the members so that we are able to place mask+table together and at a smaller absolute offset from ENV. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Acked-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
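A hedged sketch of the arrangement (member names follow the commit message; details may differ from the tree):

    typedef struct CPUTLBDescFast {
        uintptr_t mask;       /* (n_entries - 1) << entry-size shift */
        CPUTLBEntry *table;   /* mask and table sit together, per mmu_idx */
    } CPUTLBDescFast;

    typedef struct CPUTLB {
        CPUTLBDescFast f[NB_MMU_MODES];  /* hot part: small offset from env */
        CPUTLBDesc d[NB_MMU_MODES];      /* per-mmu_idx bookkeeping */
    } CPUTLB;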
2019-06-10  tcg: Split out target/arch/cpu-param.h  (Richard Henderson)
For all targets, into this new file move TARGET_LONG_BITS, TARGET_PAGE_BITS, TARGET_PHYS_ADDR_SPACE_BITS, TARGET_VIRT_ADDR_SPACE_BITS, and NB_MMU_MODES. Include this new file from exec/cpu-defs.h. This now removes the somewhat odd requirement that target/arch/cpu.h defines TARGET_LONG_BITS before including exec/cpu-defs.h, so push the bulk of the includes within target/arch/cpu.h to the top. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Acked-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
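For a hypothetical target "foo", the new header would look roughly like this (the values are made up for illustration):

    /* target/foo/cpu-param.h -- hypothetical example target */
    #ifndef FOO_CPU_PARAM_H
    #define FOO_CPU_PARAM_H

    #define TARGET_LONG_BITS            64
    #define TARGET_PAGE_BITS            12
    #define TARGET_PHYS_ADDR_SPACE_BITS 48
    #define TARGET_VIRT_ADDR_SPACE_BITS 48
    #define NB_MMU_MODES                4

    #endif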
2019-06-10  tcg: Fold CPUTLBWindow into CPUTLBDesc  (Richard Henderson)
Both structures are allocated once per mmu_idx. There is no reason for them to be separate. Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-01-28  cputlb: Remove static tlb sizing  (Richard Henderson)
Now that all tcg backends support TCG_TARGET_IMPLEMENTS_DYN_TLB, remove the define and the old code. Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-01-28  tcg: introduce dynamic TLB sizing  (Emilio G. Cota)
Disabled in all TCG backends for now. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Message-Id: <20190116170114.26802-3-cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-10-31  cputlb: Remove tlb_c.pending_flushes  (Richard Henderson)
This is essentially redundant with tlb_c.dirty. Tested-by: Emilio G. Cota <cota@braap.org> Reviewed-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-10-31  cputlb: Filter flushes on already clean tlbs  (Richard Henderson)
Especially for guests with large numbers of tlbs, like ARM or PPC, we may well not use all of them in between flush operations. Remember which tlbs have been used since the last flush, and avoid any useless flushing. Tested-by: Emilio G. Cota <cota@braap.org> Reviewed-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
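A hedged sketch of the filtering, assuming a per-TLB dirty bitmask like the tlb_c.dirty field mentioned in the entry above (field and helper names are illustrative):

    /* Skip any mmu_idx whose TLB saw no insertions since the last
     * flush; otherwise clear its dirty bit and wipe the table. */
    static void tlb_flush_one_mmuidx_locked(CPUArchState *env, int mmu_idx)
    {
        uint16_t bit = 1 << mmu_idx;

        if (!(env->tlb_c.dirty & bit)) {
            return;                          /* already clean: elide */
        }
        env->tlb_c.dirty &= ~bit;
        /* -1 makes every entry's addresses invalid. */
        memset(env->tlb_table[mmu_idx], -1, sizeof(env->tlb_table[0]));
    }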
2018-10-31  cputlb: Count "partial" and "elided" tlb flushes  (Richard Henderson)
Our only statistic so far was "full" tlb flushes, where all mmu_idx are flushed at the same time. Now count "partial" tlb flushes where sets of mmu_idx are flushed, but the set is not maximal. Account one per mmu_idx flushed, as that is the unit of work performed. We don't actually count elided flushes yet, but go ahead and change the interface presented to the monitor all at once. Tested-by: Emilio G. Cota <cota@braap.org> Reviewed-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-10-31cputlb: Move env->vtlb_index to env->tlb_d.vindexRichard Henderson
The rest of the tlb victim cache is per-tlb, the next use index should be as well. Tested-by: Emilio G. Cota <cota@braap.org> Reviewed-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-10-31  cputlb: Split large page tracking per mmu_idx  (Richard Henderson)
The set of large pages in the kernel is probably not the same as the set of large pages in the application. Forcing one range to cover both will flush more often than necessary. This allows tlb_flush_page_async_work to flush just the one mmu_idx implicated, which in turn allows us to remove tlb_check_page_and_flush_by_mmuidx_async_work. Tested-by: Emilio G. Cota <cota@braap.org> Reviewed-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-10-31cputlb: Move cpu->pending_tlb_flush to env->tlb_c.pending_flushRichard Henderson
Protect it with the tlb_lock instead of using atomics. The move puts it in or near the same cacheline as the lock; using the lock means we don't need a second atomic operation in order to perform the update. Which makes it cheap to also update pending_flush in tlb_flush_by_mmuidx_async_work. Tested-by: Emilio G. Cota <cota@braap.org> Reviewed-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-10-31  cputlb: Move tlb_lock to CPUTLBCommon  (Richard Henderson)
This is the first of several moves to reduce the size of the CPU_COMMON_TLB macro and improve some locality of reference. Tested-by: Emilio G. Cota <cota@braap.org> Reviewed-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-10-18  cputlb: serialize tlb updates with env->tlb_lock  (Emilio G. Cota)
Currently we rely on atomic operations for cross-CPU invalidations. There are two cases that these atomics miss: cross-CPU invalidations can race with either (1) vCPU threads flushing their TLB, which happens via memset, or (2) vCPUs calling tlb_reset_dirty on their TLB, which updates .addr_write with a regular store. This results in undefined behaviour, since we're mixing regular and atomic ops on concurrent accesses.

Fix it by using tlb_lock, a per-vCPU lock. All updaters of tlb_table and the corresponding victim cache now hold the lock. The readers that do not hold tlb_lock must use atomic reads when reading .addr_write, since this field can be updated by other threads; the conversion to atomic reads is done in the next patch.

Note that an alternative fix would be to expand the use of atomic ops. However, in the case of TLB flushes this would have a huge performance impact, since (1) TLB flushes can happen very frequently and (2) we currently use a full memory barrier to flush each TLB entry, and a TLB has many entries. Instead, acquiring the lock is barely slower than a full memory barrier since it is uncontended, and with a single lock acquisition we can flush the entire TLB.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <20181009174557.16125-6-cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
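A hedged sketch of the resulting discipline (helper names are illustrative): writers take the per-vCPU lock and use plain stores; lock-free readers load .addr_write atomically:

    /* Writer side: flushes, refills and tlb_reset_dirty hold tlb_lock. */
    static void tlb_set_entry_locked(CPUArchState *env, int mmu_idx,
                                     size_t index, const CPUTLBEntry *ent)
    {
        qemu_spin_lock(&env->tlb_lock);
        env->tlb_table[mmu_idx][index] = *ent;   /* plain stores, lock held */
        qemu_spin_unlock(&env->tlb_lock);
    }

    /* Reader side: the fast path holds no lock, so .addr_write (which
     * other threads may update concurrently) must be read atomically. */
    static target_ulong tlb_addr_write(const CPUTLBEntry *entry)
    {
        return atomic_read(&entry->addr_write);
    }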
2018-06-15  cpu-defs.h: Document CPUIOTLBEntry 'addr' field  (Peter Maydell)
The 'addr' field in the CPUIOTLBEntry struct has a rather non-obvious use; add a comment documenting it (reverse-engineered from what the code that sets it is doing). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20180611125633.32755-2-peter.maydell@linaro.org
2017-10-10  cputlb: bring back tlb_flush_count under !TLB_DEBUG  (Emilio G. Cota)
Commit f0aff0f124 ("cputlb: add assert_cpu_is_self checks") buried the increment of tlb_flush_count under TLB_DEBUG. This results in "info jit" always (mis)reporting 0 TLB flushes when !TLB_DEBUG.

Besides, under MTTCG tlb_flush_count is updated by several threads, so in order not to lose counts we'd either have to use atomic ops or distribute the counter, which is more scalable. This patch does the latter by embedding tlb_flush_count in CPUArchState. The global count is then easily obtained by iterating over the CPU list.

Note that this change also requires updating the accessors to tlb_flush_count to use atomic_read/set whenever there may be conflicting accesses (as defined in C11) to it.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
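The distributed counter amounts to something like the following sketch, which follows the shape described above (treat it as an illustration, not a verbatim quote):

    /* Sum the per-vCPU counters; atomic_read avoids C11 data races with
     * the vCPU threads that increment their own copies. */
    size_t tlb_flush_count(void)
    {
        CPUState *cpu;
        size_t count = 0;

        CPU_FOREACH(cpu) {
            CPUArchState *env = cpu->env_ptr;

            count += atomic_read(&env->tlb_flush_count);
        }
        return count;
    }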
2017-07-05  tcg: add CONFIG_TCG guards in headers  (Yang Zhong)
Add CONFIG_TCG around TLB-related functions and structure declarations. Some of these functions are defined in ./accel/tcg/cputlb.c, which will not be linked in if TCG is disabled, and have no stubs; therefore, their callers will also be compiled out for --disable-tcg. Signed-off-by: Yang Zhong <yang.zhong@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
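The guard pattern looks like this (declarations abridged for illustration):

    #ifdef CONFIG_TCG
    /* Implemented in accel/tcg/cputlb.c, which is not linked with
     * --disable-tcg and has no stubs, so guard the declarations too. */
    void tlb_flush(CPUState *cpu);
    void tlb_set_page(CPUState *cpu, target_ulong vaddr, hwaddr paddr,
                      int prot, int mmu_idx, target_ulong size);
    #endif /* CONFIG_TCG */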
2016-05-19  qemu-common: stop including qemu/host-utils.h from qemu-common.h  (Paolo Bonzini)
Move it to the actual users. There are some inclusions of qemu/host-utils.h in headers, but they are all necessary. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-02-23  include: Clean up includes  (Peter Maydell)
Clean up includes so that osdep.h is included first and headers which it implies are not included manually. This commit was created with scripts/clean-includes. NB: If this commit breaks compilation for your out-of-tree patchseries or fork, then you need to make sure you add #include "qemu/osdep.h" to any new .c files that you have. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Eric Blake <eblake@redhat.com>
2015-08-06  cpu_defs: Simplify CPUTLB padding logic  (Peter Crosthwaite)
There was some complicated subtractive arithmetic for determining the padding on the CPUTLBEntry structure. Simplify this with a union. Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com> Message-Id: <1436130533-18565-1-git-send-email-crosthwaite.peter@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
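A hedged sketch of the union trick: the dummy array fixes the entry size at a power of two, with no subtractive size arithmetic at all:

    typedef struct CPUTLBEntry {
        union {
            struct {
                target_ulong addr_read;
                target_ulong addr_write;
                target_ulong addr_code;
                uintptr_t addend;   /* host-virtual addend for RAM accesses */
            };
            /* Pad the whole entry to 1 << CPU_TLB_ENTRY_BITS bytes. */
            uint8_t dummy[1 << CPU_TLB_ENTRY_BITS];
        };
    } CPUTLBEntry;

    QEMU_BUILD_BUG_ON(sizeof(CPUTLBEntry) != (1 << CPU_TLB_ENTRY_BITS));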
2015-06-26  cpu-defs: Move out TB_JMP defines  (Peter Crosthwaite)
These are not architecture specific in any way, so move them out of cpu-defs.h. tb-hash.h is an appropriate home, as it is a leading user and the defines relate strongly to TB hashing and caching. Reviewed-by: Richard Henderson <rth@redhat.com> Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com> Message-Id: <43ceca65a3fa240efac49aa0bf604ad0442e1710.1433052532.git.crosthwaite.peter@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-26  include/exec: Move standard exceptions to cpu-all.h  (Peter Crosthwaite)
These exception indices are generic and don't have any reliance on the per-arch cpu.h defs. Move them to cpu-all.h so they can be used by core code that does not have access to cpu-defs.h. Reviewed-by: Richard Henderson <rth@redhat.com> Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com> Message-Id: <dbebd3062c7cd4332240891a3564e73f374ddfcd.1433052532.git.crosthwaite.peter@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-26  cpu-defs: Move CPU_TEMP_BUF_NLONGS to tcg  (Peter Crosthwaite)
The usages of this define are pure TCG and there is no architecture specific variation of the value. Localise it to the TCG engine to remove another architecture agnostic piece from cpu-defs.h. This follows on from a28177820a868eafda8fab007561cc19f41941f4, where temp_buf was moved out of CPU_COMMON, obsoleting the need for the super-early definition. Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com> Message-Id: <498e8e5325c1a1aff79e5bcfc28cb760ef6b214e.1433052532.git.crosthwaite.peter@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-03  softmmu: support up to 12 MMU modes  (Paolo Bonzini)
At 8k per TLB (for a 64-bit host or target), 8 or more modes make the TLBs bigger than 64k, and some RISC TCG backends do not like that. On the affected hosts, cut the TLB size in half; there is still a measurable speedup on PPC with the next patch. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <1424436345-37924-3-git-send-email-pbonzini@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Alexander Graf <agraf@suse.de>
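The arithmetic, spelled out under assumed values (32 bytes per entry on a 64-bit host and target, 256 entries per mode):

    #define TLB_ENTRY_SIZE 32     /* bytes per CPUTLBEntry (assumed) */
    #define TLB_ENTRIES    256    /* entries per mmu mode, before halving */

    /* 8 modes * 256 entries * 32 bytes = 64 KiB, beyond the displacement
     * range of some RISC backends; halving TLB_ENTRIES on those hosts
     * brings the total back to 32 KiB. */
    QEMU_BUILD_BUG_ON(8 * TLB_ENTRIES * TLB_ENTRY_SIZE != 64 * 1024);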
2015-04-26  Add MemTxAttrs to the IOTLB  (Peter Maydell)
Add a MemTxAttrs field to the IOTLB, and allow target-specific code to set it via a new tlb_set_page_with_attrs() function; pass the attributes through to the device when making IO accesses. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2015-04-26  Make CPU iotlb a structure rather than a plain hwaddr  (Peter Maydell)
Make the CPU iotlb a structure rather than a plain hwaddr; this will allow us to add transaction attributes to it. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
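Taken together with the MemTxAttrs commit above, the sketched result is (a plausible reconstruction, not a verbatim quote):

    /* Was a bare hwaddr; now a struct that can also carry attributes. */
    typedef struct CPUIOTLBEntry {
        hwaddr addr;
        MemTxAttrs attrs;   /* handed to the device on IO accesses */
    } CPUIOTLBEntry;

    /* Targets that care set the attributes via the new entry point: */
    void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
                                 hwaddr paddr, MemTxAttrs attrs,
                                 int prot, int mmu_idx, target_ulong size);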
2014-09-01  implementing victim TLB for QEMU system emulated TLB  (Xin Tong)
QEMU system mode page table walks are expensive. Measured by running qemu-system-x86_64 under Intel PIN, a TLB miss followed by a walk of the guest Linux 4-level page tables takes ~450 x86 instructions on average.

The QEMU system mode TLB is implemented using a directly-mapped hashtable. This structure suffers from conflict misses. Increasing the associativity of the TLB may not be the solution to conflict misses, as all the ways may have to be walked in serial.

A victim TLB is a TLB used to hold translations evicted from the primary TLB upon replacement. The victim TLB lies between the main TLB and its refill path, and is of greater associativity (fully associative in this patch). It takes longer to look up the victim TLB, but that is likely still better than a full page table walk.

The memory translation path is changed as follows:

Before Victim TLB:
  1. Inline TLB lookup
  2. Exit code cache on TLB miss.
  3. Check for unaligned, IO accesses
  4. TLB refill.
  5. Do the memory access.
  6. Return to code cache.

After Victim TLB:
  1. Inline TLB lookup
  2. Exit code cache on TLB miss.
  3. Check for unaligned, IO accesses
  4. Victim TLB lookup.
  5. If victim TLB misses, TLB refill
  6. Do the memory access.
  7. Return to code cache

The advantage is that a victim TLB can offer more associativity to a directly mapped TLB, and thus potentially fewer page table walks, while still keeping the time taken to flush within reasonable limits. However, placing a victim TLB before the refill path lengthens the TLB refill path, as the victim TLB is consulted before the TLB refill. The performance results demonstrate that the pros outweigh the cons.

Some performance results, taken on SPECINT2006 train datasets, a kernel boot, and the qemu configure script on an Intel(R) Xeon(R) CPU E5620 @ 2.40GHz Linux machine, are shown in the Google Doc link below: https://docs.google.com/spreadsheets/d/1eiItzekZwNQOal_h-5iJmC4tMDi051m9qidi5_nwvH4/edit?usp=sharing

In summary, the victim TLB improves the performance of qemu-system-x86_64 by 11% on average on SPECINT2006, kernel boot and qemu configure script, with the highest improvement of 26% in 456.hmmer; it does not cause a performance degradation in any of the measured benchmarks. Furthermore, the implemented victim TLB is architecture independent and is expected to benefit other architectures in QEMU as well. Although there are measurement fluctuations, the performance improvement is very significant and by no means within the range of noise.

Signed-off-by: Xin Tong <trent.tong@gmail.com>
Message-id: 1407202523-23553-1-git-send-email-trent.tong@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
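A hedged sketch of the probe on the refill path (the real code parameterizes which addr_* field is compared; this version checks reads only):

    #define CPU_VTLB_SIZE 8   /* ways in the fully associative victim TLB */

    static bool victim_tlb_hit(CPUArchState *env, int mmu_idx, size_t index,
                               target_ulong page)
    {
        size_t vidx;

        for (vidx = 0; vidx < CPU_VTLB_SIZE; ++vidx) {
            CPUTLBEntry *vtlb = &env->tlb_v_table[mmu_idx][vidx];

            if (vtlb->addr_read == page) {
                /* Swap: the evicted primary entry becomes the new victim. */
                CPUTLBEntry tmp = env->tlb_table[mmu_idx][index];

                env->tlb_table[mmu_idx][index] = *vtlb;
                *vtlb = tmp;
                return true;
            }
        }
        return false;   /* miss: fall through to the full page table walk */
    }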
2014-03-13  cpu: Move breakpoints field from CPU_COMMON to CPUState  (Andreas Färber)
Most targets were using offsetof(CPUFooState, breakpoints) to determine how much of CPUFooState to clear on reset. Use the next field after CPU_COMMON instead, if any, or sizeof(CPUFooState) otherwise. Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13  cpu: Move watchpoint fields from CPU_COMMON to CPUState  (Andreas Färber)
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13  cpu: Move opaque field from CPU_COMMON to CPUState  (Andreas Färber)
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13  cpu: Move exception_index field from CPU_COMMON to CPUState  (Andreas Färber)
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13  cpu: Move jmp_env field from CPU_COMMON to CPUState  (Andreas Färber)
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13  cpu: Move tb_jmp_cache field from CPU_COMMON to CPUState  (Andreas Färber)
Clear it on reset. Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13  cpu: Move icount_decr field from CPU_COMMON to CPUState  (Andreas Färber)
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13  cpu: Move icount_extra field from CPU_COMMON to CPUState  (Andreas Färber)
Reset it. Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13  cpu: Move can_do_io field from CPU_COMMON to CPUState  (Andreas Färber)
Rename can_do_io() to cpu_can_do_io() and change argument to CPUState. Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-13  cpu: Move mem_io_{pc,vaddr} fields from CPU_COMMON to CPUState  (Andreas Färber)
Reset them. Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-03-10  target-arm: Implement WFE as a yield operation  (Peter Maydell)
Implement WFE to yield our timeslice to the next CPU. This avoids slowdowns in multicore configurations caused by one core busy-waiting on a spinlock which can't possibly be unlocked until the other core has an opportunity to run. This speeds up my test case A15 dual-core boot by a factor of three (though it is still four or five times slower than a single-core boot). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1393339545-22111-1-git-send-email-peter.maydell@linaro.org Reviewed-by: Richard Henderson <rth@twiddle.net> Tested-by: Rob Herring <rob.herring@linaro.org>
2013-10-07  cpu: Drop cpu_model_str from CPU_COMMON  (Andreas Färber)
Since this is only read in cpu_copy() and linux-user has a global cpu_model, drop the field from generic code. Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-07-26  Merge remote-tracking branch 'rth/tcg-next' into staging  (Anthony Liguori)
# By Claudio Fontana (1) and others
# Via Richard Henderson
* rth/tcg-next:
  tcg: Remove temp_buf
  tcg/aarch64: Implement tlb lookup fast path
  tcg/aarch64: implement ldst 12bit scaled uimm offset

Message-id: 1373919944-8521-1-git-send-email-rth@twiddle.net
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>