path: root/target-i386
2013-03-12  cpu: Replace do_interrupt() by CPUClass::do_interrupt method  (Andreas Färber)
This removes a global per-target function and thus takes us one step closer to compiling multiple targets into one executable. It will also allow overriding the interrupt handling for certain CPU families. Signed-off-by: Andreas Färber <afaerber@suse.de>
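For orientation, a minimal sketch of the direction this takes: the interrupt hook lives on the CPU class instead of being a free per-target function, so each family can install its own handler and generic code never needs the target-specific symbol. The declarations below are abridged and illustrative of the QOM pattern, not copied from the tree.

    typedef struct CPUState CPUState;

    typedef struct CPUClass {
        /* ... other class members elided ... */
        void (*do_interrupt)(CPUState *cpu);   /* family-specific handler */
    } CPUClass;

    static void x86_cpu_do_interrupt(CPUState *cs)
    {
        /* x86-specific exception/interrupt delivery would go here */
    }

    static void x86_cpu_class_init_sketch(CPUClass *cc)
    {
        cc->do_interrupt = x86_cpu_do_interrupt;   /* override for x86 */
    }

    /* Generic code calls cc->do_interrupt(cs) instead of a global
     * do_interrupt(), which is what lets several targets be linked into
     * one executable. */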
2013-03-12  cpu: Pass CPUState to cpu_interrupt()  (Andreas Färber)
Move it to qom/cpu.h to avoid issues with include order. Change pc_acpi_smi_interrupt() opaque to X86CPU. Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-03-12  cpu: Move halted and interrupt_request fields to CPUState  (Andreas Färber)
Both fields are used in VMState, thus need to be moved together. Explicitly zero them on reset since they were located before breakpoints. Pass PowerPCCPU to kvmppc_handle_halt(). Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-03-12  target-i386: Update VMStateDescription to X86CPU  (Andreas Färber)
Expose vmstate_cpu as vmstate_x86_cpu and hook it up to CPUClass::vmsd. Adapt opaques and VMState fields to X86CPU. Drop cpu_{save,load}(). Reviewed-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Andreas Färber <afaerber@suse.de>
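A hedged sketch of the pattern this describes: the migration layout is declared once, relative to X86CPU, and common CPU code picks it up through CPUClass::vmsd. The field list, version numbers, and class_init name below are abridged and illustrative, not copied from the commit.

    static const VMStateDescription vmstate_x86_cpu = {
        .name = "cpu",
        .version_id = 12,                 /* illustrative version numbers */
        .minimum_version_id = 3,
        .fields = (VMStateField[]) {
            VMSTATE_UINTTL_ARRAY(env.regs, X86CPU, CPU_NB_REGS),
            VMSTATE_UINTTL(env.eip, X86CPU),
            /* ... remaining architectural state, now X86CPU-relative ... */
            VMSTATE_END_OF_LIST()
        }
    };

    static void x86_cpu_class_init_sketch(ObjectClass *oc, void *data)
    {
        CPUClass *cc = CPU_CLASS(oc);

        /* Common code migrates the CPU through cc->vmsd, which is what
         * lets cpu_save()/cpu_load() go away. */
        cc->vmsd = &vmstate_x86_cpu;
    }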
2013-03-04  Merge remote-tracking branch 'mst/tags/for_anthony' into staging  (Anthony Liguori)
virtio,vhost,pci,e1000

Mostly bugfixes, but also some ICH work by Laszlo.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

# gpg: Signature made Thu 28 Feb 2013 07:13:56 AM CST using RSA key ID D28D5469
# gpg: Can't check signature: public key not found
# By Michael S. Tsirkin (2) and others
# Via Michael S. Tsirkin
* mst/tags/for_anthony:
  Set virtio-serial device to have a default of 2 MSI vectors.
  ICH9 LPC: Reset Control Register, basic implementation
  Fix guest OS hang when 64bit PCI bar present
  e1000: unbreak the guest network migration to 1.3
  vhost: memory sync fixes
2013-03-03  gen-icount.h: Rename gen_icount_start/end to gen_tb_start/end  (Peter Maydell)
The gen_icount_start/end functions are now somewhat misnamed since they are useful for generic "start/end of TB" code, used for more than just icount. Rename them to gen_tb_start/end. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-03-03  cpu: Introduce ENV_OFFSET macros  (Andreas Färber)
Introduce ENV_OFFSET macros which can be used in non-target-specific code that needs to generate TCG instructions referencing CPUState fields, given the cpu_env register that TCG targets set up with a pointer to the CPUArchState struct. Signed-off-by: Andreas Färber <afaerber@suse.de> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
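A rough self-contained sketch of the idea, with the structs cut down to the essentials (layout and field names are illustrative):

    #include <stddef.h>
    #include <stdint.h>

    typedef struct CPUState {           /* common, target-independent part */
        uint32_t halted;
        uint32_t interrupt_request;
    } CPUState;

    typedef struct CPUX86State {        /* target-specific "env"           */
        int placeholder;                /* registers, eflags, ...          */
    } CPUX86State;

    typedef struct X86CPU {
        CPUState parent_obj;            /* common part comes first         */
        CPUX86State env;                /* what cpu_env points at          */
    } X86CPU;

    /* Defined per target; generic code only ever sees the macro. */
    #define ENV_OFFSET offsetof(X86CPU, env)

    /* Generic TCG code holding cpu_env can then reach a CPUState field at a
     * constant (negative) offset, e.g.
     *     tcg_gen_ld_i32(t, cpu_env,
     *                    offsetof(CPUState, halted) - ENV_OFFSET);
     * without being compiled once per target. */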
2013-02-27  target-i386: Use mulu2 and muls2  (Richard Henderson)
These correspond very closely to the insns that we're emulating. Signed-off-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
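The correspondence in question: x86 MUL/IMUL produce a double-width product split across EDX:EAX (or RDX:RAX), which is exactly the low/high pair a single mulu2/muls2 op returns. A self-contained C model of the unsigned case, using the compiler's __int128 purely to show the intended result:

    #include <stdint.h>

    /* RDX:RAX = RAX * src, as one double-width multiply (what mulu2 emits). */
    static void mulu64_model(uint64_t a, uint64_t b, uint64_t *lo, uint64_t *hi)
    {
        unsigned __int128 p = (unsigned __int128)a * b;
        *lo = (uint64_t)p;          /* RAX */
        *hi = (uint64_t)(p >> 64);  /* RDX */
    }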
2013-02-27  Fix guest OS hang when 64bit PCI bar present  (Alexey Korolev)
This patch addresses the issue fully described here: http://lists.nongnu.org/archive/html/qemu-devel/2013-02/msg01804.html Linux kernels prior to 2.6.36 do not disable the PCI device during the enumeration process. Since the lower and higher parts of a 64bit BAR are programmed separately, this leads to qemu receiving a request to occupy a completely wrong address region for a short period of time. We have found that the boot process screws up completely if the kvm-apic range is overlapped even for a short period of time (it is fine for other regions though). This patch raises the priority of the kvm-apic memory region, so it is never pushed out by PCI devices. The patch is quite safe as it does not touch the memory manager. Signed-off-by: Alexey Korolev <akorolex@gmail.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
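The mechanism relied on, sketched with QEMU's memory API (variable names such as system_memory, apic_base, kvm_apic_region and pci_bar_region are placeholders, and this is an illustration of subregion priorities in general, not the literal patch):

    /* When subregions overlap, the one registered with the higher priority
     * wins, so a transiently misprogrammed 64-bit BAR can no longer shadow
     * the kvm-apic range during guest PCI enumeration. */
    memory_region_add_subregion_overlap(system_memory, apic_base,
                                        kvm_apic_region, 1 /* priority */);
    memory_region_add_subregion_overlap(system_memory, bar_addr,
                                        pci_bar_region, 0 /* default */);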
2013-02-23  target-i386: Use add2 to implement the ADX extension  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2013-02-19  target-i386: Use movcond to implement shiftd.  (Richard Henderson)
With this being all straight-line code, it can get deleted when the cc variables die. Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-19  target-i386: Discard CC_OP computation in set_cc_op also  (Richard Henderson)
The shift and rotate insns use movcond to set CC_OP, and thus achieve a conditional EFLAGS setting. By discarding CC_OP in a later flags setting insn, we can discard that movcond. Signed-off-by: Richard Henderson <rth@twiddle.net>
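Roughly how the two pieces fit together at the TCG level (a sketch with invented temporaries count32, zero32, old_op32 and new_op32, not the actual translate.c code; the lazy cc_op bookkeeping is simplified):

    /* A shift by a possibly-zero count may leave EFLAGS untouched, so the
     * new CC_OP is selected conditionally: cc_op = (count != 0) ? new : old */
    tcg_gen_movcond_i32(TCG_COND_NE, cpu_cc_op, count32, zero32,
                        new_op32, old_op32);

    /* When a later instruction sets the flags unconditionally, set_cc_op()
     * can discard the stored CC_OP first; the optimizer then deletes the
     * movcond above (and the values feeding it) as dead code. */
    tcg_gen_discard_i32(cpu_cc_op);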
2013-02-19  target-i386: Use movcond to implement rotate flags.  (Richard Henderson)
With this being all straight-line code, it can get deleted when the cc variables die. Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-19  target-i386: Use movcond to implement shift flags.  (Richard Henderson)
With this being all straight-line code, it can get deleted when the cc variables die. Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-19  target-i386: Add CC_OP_CLR  (Richard Henderson)
Special case xor with self. We need not even store the known zero into cc_src. Signed-off-by: Richard Henderson <rth@twiddle.net>
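Put differently, "xor r,r" has a known result and known flags (ZF=PF=1, CF=OF=SF=0, AF undefined), so nothing needs to be stored for the lazy-flags machinery. A hedged sketch of the decode special case (control flow simplified; identifiers follow translate.c conventions):

    if (op == OP_XORL && rm == reg) {
        /* xor with self: the value is known to be zero ... */
        tcg_gen_movi_tl(cpu_regs[reg], 0);
        /* ... and the flags are fully determined, so CC_OP_CLR stands in
         * for them without writing cc_src or cc_dst. */
        set_cc_op(s, CC_OP_CLR);
    }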
2013-02-19  target-i386: Implement tzcnt and fix lzcnt  (Richard Henderson)
We weren't computing flags for lzcnt at all. At the same time, adjust the implementation of bsf/bsr to avoid the local branch, using movcond instead. Signed-off-by: Richard Henderson <rth@twiddle.net>
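For reference, the architectural difference being handled, as self-contained C (builtins only model the count; flag behaviour follows the Intel SDM):

    #include <stdbool.h>
    #include <stdint.h>

    /* BSF: destination is undefined when src == 0 (the translation keeps it
     * unchanged, hence the movcond); ZF is set iff src == 0. */
    static uint64_t bsf64(uint64_t src, uint64_t old_dst, bool *zf)
    {
        *zf = (src == 0);
        return src ? (uint64_t)__builtin_ctzll(src) : old_dst;
    }

    /* TZCNT: defined for src == 0 (returns the operand size);
     * CF is set iff src == 0, ZF iff the result is 0. */
    static uint64_t tzcnt64(uint64_t src, bool *cf, bool *zf)
    {
        uint64_t r = src ? (uint64_t)__builtin_ctzll(src) : 64;
        *cf = (src == 0);
        *zf = (r == 0);
        return r;
    }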
2013-02-19  target-i386: Use clz/ctz for bsf/bsr helpers  (Richard Henderson)
And mark the helpers as NO_RWG_SE. Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-19  target-i386: Implement ADX extension  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: Implement RORX  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: Implement SHLX, SARX, SHRX  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: Implement PDEP, PEXT  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net>
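Since the entry is terse, a reference model of the two insns may help; this is the textbook bit-by-bit definition in self-contained C, not QEMU's helper:

    #include <stdint.h>

    /* PDEP: deposit the low bits of src into the bit positions set in mask. */
    static uint64_t pdep64(uint64_t src, uint64_t mask)
    {
        uint64_t dst = 0;
        for (uint64_t bit = 1; mask; mask >>= 1, bit <<= 1) {
            if (mask & 1) {
                if (src & 1) {
                    dst |= bit;
                }
                src >>= 1;
            }
        }
        return dst;
    }

    /* PEXT: gather the bits of src selected by mask and pack them at the bottom. */
    static uint64_t pext64(uint64_t src, uint64_t mask)
    {
        uint64_t dst = 0, out = 1;
        for (; mask; mask >>= 1, src >>= 1) {
            if (mask & 1) {
                if (src & 1) {
                    dst |= out;
                }
                out <<= 1;
            }
        }
        return dst;
    }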
2013-02-18  target-i386: Implement MULX  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: Implement BZHI  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: Implement BLSR, BLSMSK, BLSI  (Richard Henderson)
Do all of group 17 at one time for ease. Signed-off-by: Richard Henderson <rth@twiddle.net>
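The three "group 17" insns are one-line bit tricks over the same x-1 pattern, which is why handling them together is natural. Reference semantics in plain C (flag effects omitted):

    #include <stdint.h>

    static uint64_t blsr(uint64_t x)   { return x & (x - 1); } /* clear lowest set bit      */
    static uint64_t blsmsk(uint64_t x) { return x ^ (x - 1); } /* mask up to lowest set bit */
    static uint64_t blsi(uint64_t x)   { return x & -x; }      /* isolate lowest set bit    */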
2013-02-18  target-i386: Implement BEXTR  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: Implement ANDN  (Richard Henderson)
As this is the first of the BMI insns to be implemented, this carries quite a bit more baggage than normal. Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: Implement MOVBE  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: Decode the VEX prefixes  (Richard Henderson)
No actual required uses of these encodings yet. Signed-off-by: Richard Henderson <rth@twiddle.net>
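For context, the fields a decoder has to pull out of the two VEX forms (layout per the Intel SDM; the extraction below is a self-contained illustration, not QEMU's parser):

    #include <stdint.h>

    /* 2-byte VEX: C5 [R' vvvv L pp]                (implied 0F map, W = 0)
     * 3-byte VEX: C4 [R' X' B' mmmmm] [W vvvv L pp]
     * R, X, B and vvvv are stored inverted in the encoding. */
    typedef struct {
        int r, x, b, w;   /* REX-like bits                   */
        int vvvv;         /* extra (non-destructive) operand */
        int l;            /* 0 = 128-bit, 1 = 256-bit        */
        int pp;           /* implied 66/F3/F2 prefix         */
        int mmmmm;        /* opcode map: 0F, 0F38, 0F3A      */
    } VexFields;

    static VexFields decode_vex3(uint8_t byte1, uint8_t byte2)
    {
        VexFields v;
        v.r     = !(byte1 & 0x80);
        v.x     = !(byte1 & 0x40);
        v.b     = !(byte1 & 0x20);
        v.mmmmm =  byte1 & 0x1f;
        v.w     = (byte2 >> 7) & 1;
        v.vvvv  = (~byte2 >> 3) & 0xf;
        v.l     = (byte2 >> 2) & 1;
        v.pp    =  byte2 & 3;
        return v;
    }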
2013-02-18  target-i386: Tidy prefix parsing  (Richard Henderson)
Avoid duplicating switch statement between 32 and 64-bit modes. Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: Use CC_SRC2 for ADC and SBB  (Richard Henderson)
Add another slot in ENV and store two of the three inputs. This lets us do less work when carry-out is not needed, and avoids the unpredictable CC_OP after translating these insns. Signed-off-by: Richard Henderson <rth@twiddle.net>
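Why two stored inputs are enough: given the result and one addend, carry-out of an add-with-carry is recoverable with a single comparison, so the third value never needs to be kept. A self-contained check of that identity (naming is mine, not the helper's):

    #include <stdbool.h>
    #include <stdint.h>

    /* result = a + b + cin (mod 2^64); carry-out without knowing 'a'. */
    static bool adc_carry_out(uint64_t result, uint64_t b, uint64_t cin)
    {
        return cin ? result <= b : result < b;
    }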
2013-02-18  target-i386: Make helper_cc_compute_{all,c} const  (Richard Henderson)
Pass the data in explicitly, rather than indirectly via env. This avoids all sorts of unnecessary register spillage. Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: Don't reference ENV through most of cc helpers  (Richard Henderson)
In preparation for making this a const helper. By using the proper types in the parameters to the helper functions, we get to avoid quite a lot of subsequent casting. Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: optimize flags checking after sub using CC_SRCT  (Richard Henderson)
After a comparison or subtraction, the original value of the LHS will currently be reconstructed using an addition. However, in most cases it is already available: store it in a temp-local variable and save 1 or 2 TCG ops (2 if the result of the addition needs to be extended). The temp-local can be declared dead as soon as the cc_op changes again, or also before the translation block ends because gen_prepare_cc will always make a copy before returning it. All this magic, plus copy propagation and dead-code elimination, ensures that the temp local will (almost) never be spilled.

Example (cmp $0x21,%rax + jbe):

 Before                                After
----------------------------------------------------------------------------
 movi_i64 tmp1,$0x21                   movi_i64 tmp1,$0x21
 movi_i64 cc_src,$0x21                 movi_i64 cc_src,$0x21
 sub_i64 cc_dst,rax,tmp1               sub_i64 cc_dst,rax,tmp1
 add_i64 tmp7,cc_dst,cc_src
 movi_i32 cc_op,$0x11                  movi_i32 cc_op,$0x11
 brcond_i64 tmp7,cc_src,leu,$0x0       discard loc11
                                       brcond_i64 rax,cc_src,leu,$0x0

 Before                                After
----------------------------------------------------------------------------
 mov    (%r14),%rbp                    mov    (%r14),%rbp
 mov    %rbp,%rbx                      mov    %rbp,%rbx
 sub    $0x21,%rbx                     sub    $0x21,%rbx
 lea    0x21(%rbx),%r12
 movl   $0x11,0xa0(%r14)               movl   $0x11,0xa0(%r14)
 movq   $0x21,0x90(%r14)               movq   $0x21,0x90(%r14)
 mov    %rbx,0x98(%r14)                mov    %rbx,0x98(%r14)
 cmp    $0x21,%r12                  |  cmp    $0x21,%rbp
 jbe    ...                            jbe    ...

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: Update cc_op before TCG branches  (Richard Henderson)
Placing the CC_OP_DYNAMIC at the join is less effective than before the branch, as the branch will have forced global registers to their home locations. This way we have a chance to discard CC_SRC2 before it gets stored. Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: introduce gen_jcc1_noeob  (Richard Henderson)
A jump that ends a basic block or otherwise falls back to CC_OP_DYNAMIC will always have to call gen_op_set_cc_op. However, not all jumps end a basic block, so introduce a variant that does not do this. This was partially undone earlier (i386: drop cc_op argument of gen_jcc1), redo it now also to prepare for the introduction of src2. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: use gen_op for cmps/scas  (Richard Henderson)
Replace low-level ops with a higher-level "cmp %al, (A0)" in the case of scas, and "cmp T0, (A0)" in the case of cmps. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: kill cpu_T3  (Paolo Bonzini)
It is almost unused, and it is simpler to pass a TCG value directly to gen_shiftd_rm_T1_T3. This value is then written to t2 without going through a temporary register. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: expand cmov via movcond  (Richard Henderson)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: introduce gen_cmovcc1  (Paolo Bonzini)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: cleanup temporary macros for CCPrepare  (Paolo Bonzini)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: inline gen_prepare_cc_slow  (Richard Henderson)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: use CCPrepare to generate conditional jumps  (Paolo Bonzini)
This simplifies all the jump generation code. CCPrepare allows the code to create an efficient brcond always, so there is no need to duplicate the setcc and jcc code. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: introduce gen_prepare_cc  (Richard Henderson)
This makes the i386 front-end able to create CCPrepare structs for all conditions, not just those that come from a single flag. In particular, JCC_L and JCC_LE can be optimized because gen_prepare_cc is not forced to return a result in bit 0 (unlike gen_setcc_slow). However, for now the slow jcc operations will still go through CC computation in a single-bit temporary, followed by a brcond if the temporary is nonzero. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: introduce CCPrepare  (Richard Henderson)
Introduce a struct that describes how to build a *cond operation that checks for a given x86 condition code. For now, just change gen_compute_eflags_* to return the new struct, generate code for the CCPrepare struct, and go on as before. [rth: Use ctz with the proper width rather than ffs.] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
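A sketch of what such a descriptor can carry (field names chosen for illustration; the in-tree struct may differ in detail):

    /* Everything needed to later emit a setcond/brcond/movcond for one x86
     * condition code, without generating any TCG ops yet. */
    typedef struct CCPrepare {
        TCGCond cond;         /* comparison to apply                       */
        TCGv reg;             /* first operand                             */
        TCGv reg2;            /* second operand, when use_reg2 is set ...  */
        target_ulong imm;     /* ... or an immediate otherwise             */
        target_ulong mask;    /* optional AND-mask to apply to reg first   */
        bool use_reg2;
        bool no_setcond;      /* result is a condition, not a clean 0/1    */
    } CCPrepare;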
2013-02-18  target-i386: optimize setcc instructions  (Paolo Bonzini)
Reconstruct the arguments for complex conditions involving CC_OP_SUBx (BE, L, LE). In the others do it via setcond and gen_setcc_slow (which is not that slow in many cases). Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
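A concrete instance of the reconstruction: after a subtraction the translator holds the result (cc_dst) and the subtrahend (cc_src), so the original left-hand operand is cc_dst + cc_src and "BE" (CF|ZF) collapses to one unsigned compare. Sketch only, with the surrounding plumbing omitted:

    /* setbe: T0 = (lhs <= rhs) unsigned, where lhs = cc_dst + cc_src */
    TCGv lhs = tcg_temp_new();
    tcg_gen_add_tl(lhs, cpu_cc_dst, cpu_cc_src);
    tcg_gen_setcond_tl(TCG_COND_LEU, cpu_T[0], lhs, cpu_cc_src);
    tcg_temp_free(lhs);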
2013-02-18  target-i386: optimize setle  (Richard Henderson)
And allow gen_setcc_slow to operate on cpu_cc_src. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: optimize setbe  (Richard Henderson)
This is looking at EFLAGS, but it can do so more efficiently with setcond. Reviewed-by: Blue Swirl <blauwirbel@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: change gen_setcc_slow_T0 to gen_setcc_slow  (Paolo Bonzini)
Do not hard code the destination register. Reviewed-by: Blue Swirl <blauwirbel@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2013-02-18  target-i386: convert gen_compute_eflags_c to TCG  (Richard Henderson)
Do the switch at translation time, converting the helper templates to TCG opcodes. In some cases CF can be computed with a single setcond, though in others it may require a little more work. In the CC_OP_DYNAMIC case, compute the whole EFLAGS, same as for ZF/SF/PF. Reviewed-by: Blue Swirl <blauwirbel@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
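One example of the conversion: after an ADD, the stored values are the result (cc_dst) and the addend (cc_src), and carry-out is simply an unsigned compare, so the helper call becomes a single setcond. Sketch for that one CC_OP case only (dst_reg is a placeholder for wherever the flag is wanted):

    /* CC_OP_ADD*: CF = (result < addend) unsigned */
    tcg_gen_setcond_tl(TCG_COND_LTU, dst_reg, cpu_cc_dst, cpu_cc_src);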
2013-02-18  target-i386: use inverted setcond when computing NS or NZ  (Richard Henderson)
Make gen_compute_eflags_z and gen_compute_eflags_s able to compute the inverted condition, and use this in gen_setcc_slow_T0. We cannot do it yet in gen_compute_eflags_c, but prepare the code for it anyway. It is not worthwhile for PF, as usual. shr+and+xor could be replaced by and+setcond. I'm not doing it yet. Reviewed-by: Blue Swirl <blauwirbel@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>