author    | bellard <bellard@c046a42c-6fe2-441c-8c8c-71466251a162> | 2008-02-01 10:05:41 +0000
committer | bellard <bellard@c046a42c-6fe2-441c-8c8c-71466251a162> | 2008-02-01 10:05:41 +0000
commit    | c896fe29d6c8ae6cde3917727812ced3f2e536a4 (patch)
tree      | a8536077171bbe49b59f00734500627032289364
parent    | 56abbcfff3f0073bf616be7af20106e51e78bd22 (diff)
download  | qemu-c896fe29d6c8ae6cde3917727812ced3f2e536a4.zip
TCG code generator
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@3943 c046a42c-6fe2-441c-8c8c-71466251a162
-rw-r--r-- | tcg/LICENSE             |    3
-rw-r--r-- | tcg/README              |  420
-rw-r--r-- | tcg/TODO                |   32
-rw-r--r-- | tcg/i386/tcg-target.c   | 1163
-rw-r--r-- | tcg/i386/tcg-target.h   |   54
-rw-r--r-- | tcg/tcg-dyngen.c        |  482
-rw-r--r-- | tcg/tcg-op.h            | 1175
-rw-r--r-- | tcg/tcg-opc.h           |  228
-rw-r--r-- | tcg/tcg-runtime.c       |   68
-rw-r--r-- | tcg/tcg.c               | 1781
-rw-r--r-- | tcg/tcg.h               |  324
-rw-r--r-- | tcg/x86_64/tcg-target.c | 1227
-rw-r--r-- | tcg/x86_64/tcg-target.h |   69
13 files changed, 7026 insertions, 0 deletions
diff --git a/tcg/LICENSE b/tcg/LICENSE
new file mode 100644
index 0000000000..be817fa162
--- /dev/null
+++ b/tcg/LICENSE
@@ -0,0 +1,3 @@
+All the files in this directory and subdirectories are released under
+a BSD-like license (see header in each file). No other license is
+accepted.
diff --git a/tcg/README b/tcg/README
new file mode 100644
index 0000000000..f5b1280913
--- /dev/null
+++ b/tcg/README
@@ -0,0 +1,420 @@
+Tiny Code Generator - Fabrice Bellard.
+
+1) Introduction
+
+TCG (Tiny Code Generator) began as a generic backend for a C
+compiler. It was simplified to be used in QEMU. It also has its roots
+in the QOP code generator written by Paul Brook.
+
+2) Definitions
+
+The TCG "target" is the architecture for which we generate the
+code. It is of course not the same as the "target" of QEMU, which is
+the emulated architecture. As TCG started as a generic C backend used
+for cross compiling, it is assumed that the TCG target is different
+from the host, although for QEMU the TCG target is in fact always the
+host.
+
+A TCG "function" corresponds to a QEMU Translated Block (TB).
+
+A TCG "temporary" is a variable live only in a given
+function. Temporaries are allocated explicitly in each function.
+
+A TCG "global" is a variable which is live in all functions. Globals
+are defined before the functions are defined. A TCG global can be a
+memory location (e.g. a QEMU CPU register), a fixed host register
+(e.g. the QEMU CPU state pointer) or a memory location which is stored
+in a register outside QEMU TBs (not implemented yet).
+
+A TCG "basic block" corresponds to a list of instructions terminated
+by a branch instruction.
+
+3) Intermediate representation
+
+3.1) Introduction
+
+TCG instructions operate on variables which are temporaries or
+globals. TCG instructions and variables are strongly typed. Two types
+are supported: 32 bit integers and 64 bit integers. Pointers are
+defined as an alias to 32 bit or 64 bit integers depending on the TCG
+target word size.
+
+Each instruction has a fixed number of output variable operands, input
+variable operands and constant operands.
+
+The notable exception is the call instruction, which has a variable
+number of outputs and inputs.
+
+In the textual form, output operands come first, followed by input
+operands, followed by constant operands. The output type is included
+in the instruction name. Constants are prefixed with a '$'.
+
+add_i32 t0, t1, t2 (t0 <- t1 + t2)
+
+sub_i64 t2, t3, $4 (t2 <- t3 - 4)
+
+3.2) Assumptions
+
+* Basic blocks
+
+- Basic blocks end after branches (e.g. the brcond_i32 instruction),
+  goto_tb and exit_tb instructions.
+- Basic blocks end before legacy dyngen operations.
+- Basic blocks start after the end of a previous basic block, at a
+  set_label instruction or after a legacy dyngen operation.
+
+After the end of a basic block, temporaries are destroyed and globals
+are stored back to their initial storage (register or memory,
+depending on their declaration).
+
+* Floating point types are not supported yet
+
+* Pointers: depending on the TCG target, pointer size is 32 bit or 64
+  bit. The type TCG_TYPE_PTR is an alias to TCG_TYPE_I32 or
+  TCG_TYPE_I64.
+
+* Helpers:
+
+Using the tcg_gen_helper_x_y functions, it is possible to call any
+function taking i32, i64 or pointer types. Before calling a helper,
+all globals are stored at their canonical location and it is assumed
+that the function can modify them. In the future, function modifiers
+will make it possible to declare that a helper does not read or write
+some globals.
+
+On some TCG targets (e.g. x86), several calling conventions are
+supported.
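To make the helper mechanism concrete, here is roughly how a translator
would emit a call with the tcg_gen_helper_x_y wrappers defined in
tcg/tcg-op.h below. This is only a sketch: helper_muladd() and the
temporaries t0, t1 and t2 are invented names for the illustration and
are not part of this commit.

    /* a helper is a plain C function on i32/i64/pointer arguments */
    int32_t helper_muladd(int32_t a, int32_t b)
    {
        return a * b + 1;
    }

    /* in the translator: t0 <- helper_muladd(t1, t2); all globals are
       stored at their canonical location around the call, as described
       above */
    tcg_gen_helper_1_2((void *)helper_muladd, t0, t1, t2);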
+
+* Branches:
+
+Use the instruction 'br' to jump to a label. Use 'jmp' to jump to an
+explicit address. Conditional branches can only jump to labels.
+
+3.3) Code Optimizations
+
+When generating instructions, you can count on at least the following
+optimizations:
+
+- Single instructions are simplified, e.g.
+
+  and_i32 t0, t0, $0xffffffff
+
+  is suppressed.
+
+- A liveness analysis is done at the basic block level. The
+  information is used to suppress moves from a dead temporary to
+  another one. It is also used to remove instructions which compute
+  dead results. The latter is especially useful for condition code
+  optimization in QEMU.
+
+  In the following example:
+
+  add_i32 t0, t1, t2
+  add_i32 t0, t0, $1
+  mov_i32 t0, $1
+
+  only the last instruction is kept.
+
+- A macro system is supported (it may get closer to function inlining
+  some day). It is useful if the liveness analysis is likely to prove
+  that some results of a computation are indeed not useful. With the
+  macro system, the user can provide several alternative
+  implementations which are selected depending on which results are
+  used. It is especially useful for condition code optimization in
+  QEMU.
+
+  Here is an example:
+
+  macro_2 t0, t1, $1
+  mov_i32 t0, $0x1234
+
+  The macro identified by the ID "$1" normally returns the values t0
+  and t1. Suppose its implementation is:
+
+  macro_start
+  brcond_i32 t2, $0, $TCG_COND_EQ, $1
+  mov_i32 t0, $2
+  br $2
+  set_label $1
+  mov_i32 t0, $3
+  set_label $2
+  add_i32 t1, t3, t4
+  macro_end
+
+  If t0 is not used after the macro, the user can provide a simpler
+  implementation:
+
+  macro_start
+  add_i32 t1, t2, t4
+  macro_end
+
+  TCG automatically chooses the right implementation depending on
+  which macro outputs are used after it.
+
+  Note that if TCG did more expensive optimizations, macros would be
+  less useful. In the previous example a macro is useful because the
+  liveness analysis is done on each basic block separately. Hence TCG
+  cannot remove the code computing 't0' even if it is not used after
+  the first macro implementation.
+
+3.4) Instruction Reference
+
+********* Function call
+
+* call <ret> <params> ptr
+
+call function 'ptr' (pointer type)
+
+<ret> optional 32 bit or 64 bit return value
+<params> optional 32 bit or 64 bit parameters
+
+********* Jumps/Labels
+
+* jmp t0
+
+Absolute jump to address t0 (pointer type).
+
+* set_label $label
+
+Define label 'label' at the current program point.
+
+* br $label
+
+Jump to label.
+
+* brcond_i32/i64 cond, t0, t1, label
+
+Conditional jump if t0 cond t1 is true. cond can be:
+    TCG_COND_EQ
+    TCG_COND_NE
+    TCG_COND_LT /* signed */
+    TCG_COND_GE /* signed */
+    TCG_COND_LE /* signed */
+    TCG_COND_GT /* signed */
+    TCG_COND_LTU /* unsigned */
+    TCG_COND_GEU /* unsigned */
+    TCG_COND_LEU /* unsigned */
+    TCG_COND_GTU /* unsigned */
+
+********* Arithmetic
+
+* add_i32/i64 t0, t1, t2
+
+t0=t1+t2
+
+* sub_i32/i64 t0, t1, t2
+
+t0=t1-t2
+
+* mul_i32/i64 t0, t1, t2
+
+t0=t1*t2
+
+* div_i32/i64 t0, t1, t2
+
+t0=t1/t2 (signed). Undefined behavior if division by zero or overflow.
+
+* divu_i32/i64 t0, t1, t2
+
+t0=t1/t2 (unsigned). Undefined behavior if division by zero.
+
+* rem_i32/i64 t0, t1, t2
+
+t0=t1%t2 (signed). Undefined behavior if division by zero or overflow.
+
+* remu_i32/i64 t0, t1, t2
+
+t0=t1%t2 (unsigned). Undefined behavior if division by zero.
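As a worked example of the notation above, the value (t1 + t2) * (t1 - t2)
could be computed as follows (t0 to t3 are assumed to be 32 bit
temporaries; the parenthesized comments follow the convention of
section 3.1):

    add_i32 t3, t1, t2 (t3 <- t1 + t2)
    sub_i32 t0, t1, t2 (t0 <- t1 - t2)
    mul_i32 t0, t3, t0 (t0 <- (t1 + t2) * (t1 - t2))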
+
+********* Logical
+
+* and_i32/i64 t0, t1, t2
+
+t0=t1&t2
+
+* or_i32/i64 t0, t1, t2
+
+t0=t1|t2
+
+* xor_i32/i64 t0, t1, t2
+
+t0=t1^t2
+
+********* Shifts
+
+* shl_i32/i64 t0, t1, t2
+
+t0=t1 << t2. Undefined behavior if t2 < 0 or t2 >= 32 (resp 64)
+
+* shr_i32/i64 t0, t1, t2
+
+t0=t1 >> t2 (unsigned). Undefined behavior if t2 < 0 or t2 >= 32 (resp 64)
+
+* sar_i32/i64 t0, t1, t2
+
+t0=t1 >> t2 (signed). Undefined behavior if t2 < 0 or t2 >= 32 (resp 64)
+
+********* Misc
+
+* mov_i32/i64 t0, t1
+
+t0 = t1
+
+Move t1 to t0 (both operands must have the same type).
+
+* ext8s_i32/i64 t0, t1
+ext16s_i32/i64 t0, t1
+ext32s_i64 t0, t1
+
+8, 16 or 32 bit sign extension (both operands must have the same type)
+
+* bswap16_i32 t0, t1
+
+16 bit byte swap on a 32 bit value. The two high order bytes must be set
+to zero.
+
+* bswap_i32 t0, t1
+
+32 bit byte swap
+
+* bswap_i64 t0, t1
+
+64 bit byte swap
+
+********* Type conversions
+
+* ext_i32_i64 t0, t1
+Convert t1 (32 bit) to t0 (64 bit) with sign extension
+
+* extu_i32_i64 t0, t1
+Convert t1 (32 bit) to t0 (64 bit) with zero extension
+
+* trunc_i64_i32 t0, t1
+Truncate t1 (64 bit) to t0 (32 bit)
+
+********* Load/Store
+
+* ld_i32/i64 t0, t1, offset
+ld8s_i32/i64 t0, t1, offset
+ld8u_i32/i64 t0, t1, offset
+ld16s_i32/i64 t0, t1, offset
+ld16u_i32/i64 t0, t1, offset
+ld32s_i64 t0, t1, offset
+ld32u_i64 t0, t1, offset
+
+t0 = read(t1 + offset)
+Load 8, 16, 32 or 64 bits with or without sign extension from host memory.
+offset must be a constant.
+
+* st_i32/i64 t0, t1, offset
+st8_i32/i64 t0, t1, offset
+st16_i32/i64 t0, t1, offset
+st32_i64 t0, t1, offset
+
+write(t0, t1 + offset)
+Write 8, 16, 32 or 64 bits to host memory.
+
+********* QEMU specific operations
+
+* exit_tb t0
+
+Exit the current TB and return the value t0 (word type).
+
+* goto_tb index
+
+Exit the current TB and jump to the TB at index 'index' (constant) if
+the current TB was linked to that TB. Otherwise execute the next
+instructions.
+
+* qemu_ld_i32/i64 t0, t1, flags
+qemu_ld8u_i32/i64 t0, t1, flags
+qemu_ld8s_i32/i64 t0, t1, flags
+qemu_ld16u_i32/i64 t0, t1, flags
+qemu_ld16s_i32/i64 t0, t1, flags
+qemu_ld32u_i64 t0, t1, flags
+qemu_ld32s_i64 t0, t1, flags
+
+Load data at the QEMU CPU address t1 into t0. t1 has the QEMU CPU
+address type. 'flags' contains, for example, the QEMU memory index
+(which selects user or kernel access).
+
+* qemu_st_i32/i64 t0, t1, flags
+qemu_st8_i32/i64 t0, t1, flags
+qemu_st16_i32/i64 t0, t1, flags
+qemu_st32_i64 t0, t1, flags
+
+Store the data t0 at the QEMU CPU address t1. t1 has the QEMU CPU
+address type. 'flags' contains, for example, the QEMU memory index
+(which selects user or kernel access).
+
+Note 1: Some shortcuts are defined when the last operand is known to be
+a constant (e.g. addi for add, movi for mov).
+
+Note 2: When using TCG, the opcodes must never be generated directly
+as some of them may not be available as "real" opcodes. Always use the
+function tcg_gen_xxx(args).
+
+4) Backend
+
+tcg-target.h contains the target specific definitions. tcg-target.c
+contains the target specific code.
+
+4.1) Assumptions
+
+The target word size (TCG_TARGET_REG_BITS) is expected to be 32 bit or
+64 bit. It is expected that the pointer has the same size as the word.
+
+On a 32 bit target, all 64 bit operations are converted to 32 bits. A
+few specific operations must be implemented to allow it (see add2_i32,
+sub2_i32, brcond2_i32).
+
+Floating point operations are not supported in this version. A
+previous incarnation of the code generator had full support for them,
+but it is better to concentrate on integer operations first.
+
+On a 64 bit target, no assumption is made in TCG about the storage of
+the 32 bit values in 64 bit registers.
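As an illustration of the 32 bit decomposition, a 64 bit addition such
as

    add_i64 t0, t1, t2

would be expanded on a 32 bit target into a single add2_i32 operating
on low/high register pairs. The _low/_high names are invented for the
example; the operand order (output pair first, then the two input
pairs) matches the add2_i32 entry in the x86_op_defs table of
tcg/i386/tcg-target.c below:

    add2_i32 t0_low, t0_high, t1_low, t1_high, t2_low, t2_high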
+
+4.2) Constraints
+
+GCC-like constraints are used to define the constraints of every
+instruction. Memory constraints are not supported in this
+version. Aliases are specified in the input operands as for GCC.
+
+A target can define specific register or constant constraints. If an
+operation uses a constant input constraint which does not allow all
+constants, it must also accept registers in order to have a fallback.
+
+The movi_i32 and movi_i64 operations must accept any constants.
+
+The mov_i32 and mov_i64 operations must accept any registers of the
+same type.
+
+The ld/st instructions must accept signed 32 bit constant offsets. This
+can be implemented by reserving a specific register in which to compute
+the address if the offset is too big.
+
+The ld/st instructions must accept any destination (ld) or source (st)
+register.
+
+4.3) Function call assumptions
+
+- The only supported types for parameters and return value are: 32 and
+  64 bit integers and pointer.
+- The stack grows downwards.
+- The first N parameters are passed in registers.
+- The next parameters are passed on the stack by storing them as words.
+- Some registers are clobbered during the call.
+- The function can return 0 or 1 value in registers. On a 32 bit
+  target, functions must be able to return 2 values in registers for
+  a 64 bit return type.
+
+5) Migration from dyngen to TCG
+
+TCG is backward compatible with QEMU "dyngen" operations. This means
+that TCG instructions can be freely mixed with dyngen operations. It
+is expected that QEMU targets will be progressively fully converted to
+TCG. Once a target is fully converted to TCG, it will be possible
+to apply more optimizations because more registers will be free for
+the generated code.
+
+The exception model is the same as the dyngen one.
diff --git a/tcg/TODO b/tcg/TODO
new file mode 100644
index 0000000000..9189926106
--- /dev/null
+++ b/tcg/TODO
@@ -0,0 +1,32 @@
+- test macro system
+
+- test conditional jumps
+
+- test mul, div, ext8s, ext16s, bswap
+
+- generate a global TB prologue and epilogue to save/restore registers
+  to/from the CPU state and to reserve a stack frame to optimize
+  helper calls. Modify cpu-exec.c so that it does not use global
+  register variables (except maybe for 'env').
+
+- fully convert the x86 target. The minimal amount of work includes:
+  - add cc_src, cc_dst and cc_op as globals
+  - disable its eflags optimization (the liveness analysis should
+    suffice)
+  - move complicated operations to helpers (in particular FPU, SSE, MMX).
+
+- optimize the x86 target:
+  - move some or all the registers as globals
+  - use the TB prologue and epilogue to have QEMU target registers in
+    preassigned host registers.
+
+Ideas:
+
+- Move the slow part of the qemu_ld/st ops after the end of the TB.
+
+- Experiment: change instruction storage to simplify macro handling
+  and to handle dynamic allocation and see if the translation speed is
+  OK.
+
+- change exception syntax to get closer to the QOP system (exception
+  parameters given with a specific instruction).
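The constraint strings of section 4.2 appear concretely in the
x86_op_defs[] table near the end of the following file. A minimal
sketch of such a table (the row selection and the comments are
illustrative only; see the full table below for the real definitions):

    static const TCGTargetOpDef example_op_defs[] = {
        /* output: any register ("r"); first input aliased to the
           output ("0"); second input register or immediate ("ri") */
        { INDEX_op_add_i32, { "r", "0", "ri" } },
        /* variable shift counts must live in %ecx ("c"); constants
           are also accepted ("i") */
        { INDEX_op_shl_i32, { "r", "0", "ci" } },
        { -1 }, /* table terminator */
    };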
diff --git a/tcg/i386/tcg-target.c b/tcg/i386/tcg-target.c new file mode 100644 index 0000000000..20822a4bc1 --- /dev/null +++ b/tcg/i386/tcg-target.c @@ -0,0 +1,1163 @@ +/* + * Tiny Code Generator for QEMU + * + * Copyright (c) 2008 Fabrice Bellard + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN + * THE SOFTWARE. + */ +const char *tcg_target_reg_names[TCG_TARGET_NB_REGS] = { + "%eax", + "%ecx", + "%edx", + "%ebx", + "%esp", + "%ebp", + "%esi", + "%edi", +}; + +int tcg_target_reg_alloc_order[TCG_TARGET_NB_REGS] = { + TCG_REG_EAX, + TCG_REG_EDX, + TCG_REG_ECX, + TCG_REG_EBX, + TCG_REG_ESI, + TCG_REG_EDI, + TCG_REG_EBP, + TCG_REG_ESP, +}; + +const int tcg_target_call_iarg_regs[3] = { TCG_REG_EAX, TCG_REG_EDX, TCG_REG_ECX }; +const int tcg_target_call_oarg_regs[2] = { TCG_REG_EAX, TCG_REG_EDX }; + +static void patch_reloc(uint8_t *code_ptr, int type, + tcg_target_long value) +{ + switch(type) { + case R_386_32: + *(uint32_t *)code_ptr = value; + break; + case R_386_PC32: + *(uint32_t *)code_ptr = value - (long)code_ptr; + break; + default: + tcg_abort(); + } +} + +/* maximum number of register used for input function arguments */ +static inline int tcg_target_get_call_iarg_regs_count(int flags) +{ + flags &= TCG_CALL_TYPE_MASK; + switch(flags) { + case TCG_CALL_TYPE_STD: + return 0; + case TCG_CALL_TYPE_REGPARM_1: + case TCG_CALL_TYPE_REGPARM_2: + case TCG_CALL_TYPE_REGPARM: + return flags - TCG_CALL_TYPE_REGPARM_1 + 1; + default: + tcg_abort(); + } +} + +/* parse target specific constraints */ +int target_parse_constraint(TCGArgConstraint *ct, const char **pct_str) +{ + const char *ct_str; + + ct_str = *pct_str; + switch(ct_str[0]) { + case 'a': + ct->ct |= TCG_CT_REG; + tcg_regset_set_reg(ct->u.regs, TCG_REG_EAX); + break; + case 'b': + ct->ct |= TCG_CT_REG; + tcg_regset_set_reg(ct->u.regs, TCG_REG_EBX); + break; + case 'c': + ct->ct |= TCG_CT_REG; + tcg_regset_set_reg(ct->u.regs, TCG_REG_ECX); + break; + case 'd': + ct->ct |= TCG_CT_REG; + tcg_regset_set_reg(ct->u.regs, TCG_REG_EDX); + break; + case 'S': + ct->ct |= TCG_CT_REG; + tcg_regset_set_reg(ct->u.regs, TCG_REG_ESI); + break; + case 'D': + ct->ct |= TCG_CT_REG; + tcg_regset_set_reg(ct->u.regs, TCG_REG_EDI); + break; + case 'q': + ct->ct |= TCG_CT_REG; + tcg_regset_set32(ct->u.regs, 0, 0xf); + break; + case 'r': + ct->ct |= TCG_CT_REG; + tcg_regset_set32(ct->u.regs, 0, 0xff); + break; + + /* qemu_ld/st address constraint */ + case 'L': + ct->ct |= TCG_CT_REG; + tcg_regset_set32(ct->u.regs, 0, 0xff); + 
tcg_regset_reset_reg(ct->u.regs, TCG_REG_EAX); + tcg_regset_reset_reg(ct->u.regs, TCG_REG_EDX); + break; + default: + return -1; + } + ct_str++; + *pct_str = ct_str; + return 0; +} + +/* test if a constant matches the constraint */ +static inline int tcg_target_const_match(tcg_target_long val, + const TCGArgConstraint *arg_ct) +{ + int ct; + ct = arg_ct->ct; + if (ct & TCG_CT_CONST) + return 1; + else + return 0; +} + +#define ARITH_ADD 0 +#define ARITH_OR 1 +#define ARITH_ADC 2 +#define ARITH_SBB 3 +#define ARITH_AND 4 +#define ARITH_SUB 5 +#define ARITH_XOR 6 +#define ARITH_CMP 7 + +#define SHIFT_SHL 4 +#define SHIFT_SHR 5 +#define SHIFT_SAR 7 + +#define JCC_JMP (-1) +#define JCC_JO 0x0 +#define JCC_JNO 0x1 +#define JCC_JB 0x2 +#define JCC_JAE 0x3 +#define JCC_JE 0x4 +#define JCC_JNE 0x5 +#define JCC_JBE 0x6 +#define JCC_JA 0x7 +#define JCC_JS 0x8 +#define JCC_JNS 0x9 +#define JCC_JP 0xa +#define JCC_JNP 0xb +#define JCC_JL 0xc +#define JCC_JGE 0xd +#define JCC_JLE 0xe +#define JCC_JG 0xf + +#define P_EXT 0x100 /* 0x0f opcode prefix */ + +static const uint8_t tcg_cond_to_jcc[10] = { + [TCG_COND_EQ] = JCC_JE, + [TCG_COND_NE] = JCC_JNE, + [TCG_COND_LT] = JCC_JL, + [TCG_COND_GE] = JCC_JGE, + [TCG_COND_LE] = JCC_JLE, + [TCG_COND_GT] = JCC_JG, + [TCG_COND_LTU] = JCC_JB, + [TCG_COND_GEU] = JCC_JAE, + [TCG_COND_LEU] = JCC_JBE, + [TCG_COND_GTU] = JCC_JA, +}; + +static inline void tcg_out_opc(TCGContext *s, int opc) +{ + if (opc & P_EXT) + tcg_out8(s, 0x0f); + tcg_out8(s, opc); +} + +static inline void tcg_out_modrm(TCGContext *s, int opc, int r, int rm) +{ + tcg_out_opc(s, opc); + tcg_out8(s, 0xc0 | (r << 3) | rm); +} + +/* rm == -1 means no register index */ +static inline void tcg_out_modrm_offset(TCGContext *s, int opc, int r, int rm, + int32_t offset) +{ + tcg_out_opc(s, opc); + if (rm == -1) { + tcg_out8(s, 0x05 | (r << 3)); + tcg_out32(s, offset); + } else if (offset == 0 && rm != TCG_REG_EBP) { + if (rm == TCG_REG_ESP) { + tcg_out8(s, 0x04 | (r << 3)); + tcg_out8(s, 0x24); + } else { + tcg_out8(s, 0x00 | (r << 3) | rm); + } + } else if ((int8_t)offset == offset) { + if (rm == TCG_REG_ESP) { + tcg_out8(s, 0x44 | (r << 3)); + tcg_out8(s, 0x24); + } else { + tcg_out8(s, 0x40 | (r << 3) | rm); + } + tcg_out8(s, offset); + } else { + if (rm == TCG_REG_ESP) { + tcg_out8(s, 0x84 | (r << 3)); + tcg_out8(s, 0x24); + } else { + tcg_out8(s, 0x80 | (r << 3) | rm); + } + tcg_out32(s, offset); + } +} + +static inline void tcg_out_mov(TCGContext *s, int ret, int arg) +{ + if (arg != ret) + tcg_out_modrm(s, 0x8b, ret, arg); +} + +static inline void tcg_out_movi(TCGContext *s, TCGType type, + int ret, int32_t arg) +{ + if (arg == 0) { + /* xor r0,r0 */ + tcg_out_modrm(s, 0x01 | (ARITH_XOR << 3), ret, ret); + } else { + tcg_out8(s, 0xb8 + ret); + tcg_out32(s, arg); + } +} + +static inline void tcg_out_ld(TCGContext *s, int ret, + int arg1, int32_t arg2) +{ + /* movl */ + tcg_out_modrm_offset(s, 0x8b, ret, arg1, arg2); +} + +static inline void tcg_out_st(TCGContext *s, int arg, + int arg1, int32_t arg2) +{ + /* movl */ + tcg_out_modrm_offset(s, 0x89, arg, arg1, arg2); +} + +static inline void tgen_arithi(TCGContext *s, int c, int r0, int32_t val) +{ + if (val == (int8_t)val) { + tcg_out_modrm(s, 0x83, c, r0); + tcg_out8(s, val); + } else { + tcg_out_modrm(s, 0x81, c, r0); + tcg_out32(s, val); + } +} + +void tcg_out_addi(TCGContext *s, int reg, tcg_target_long val) +{ + if (val != 0) + tgen_arithi(s, ARITH_ADD, reg, val); +} + +static void tcg_out_jxx(TCGContext *s, int opc, int label_index) +{ + int32_t 
val, val1; + TCGLabel *l = &s->labels[label_index]; + + if (l->has_value) { + val = l->u.value - (tcg_target_long)s->code_ptr; + val1 = val - 2; + if ((int8_t)val1 == val1) { + if (opc == -1) + tcg_out8(s, 0xeb); + else + tcg_out8(s, 0x70 + opc); + tcg_out8(s, val1); + } else { + if (opc == -1) { + tcg_out8(s, 0xe9); + tcg_out32(s, val - 5); + } else { + tcg_out8(s, 0x0f); + tcg_out8(s, 0x80 + opc); + tcg_out32(s, val - 6); + } + } + } else { + if (opc == -1) { + tcg_out8(s, 0xe9); + } else { + tcg_out8(s, 0x0f); + tcg_out8(s, 0x80 + opc); + } + tcg_out_reloc(s, s->code_ptr, R_386_PC32, label_index, -4); + tcg_out32(s, -4); + } +} + +static void tcg_out_brcond(TCGContext *s, int cond, + TCGArg arg1, TCGArg arg2, int const_arg2, + int label_index) +{ + int c; + if (const_arg2) { + if (arg2 == 0) { + /* use test */ + switch(cond) { + case TCG_COND_EQ: + c = JCC_JE; + break; + case TCG_COND_NE: + c = JCC_JNE; + break; + case TCG_COND_LT: + c = JCC_JS; + break; + case TCG_COND_GE: + c = JCC_JNS; + break; + default: + goto do_cmpi; + } + /* test r, r */ + tcg_out_modrm(s, 0x85, arg1, arg1); + tcg_out_jxx(s, c, label_index); + } else { + do_cmpi: + tgen_arithi(s, ARITH_CMP, arg1, arg2); + tcg_out_jxx(s, tcg_cond_to_jcc[cond], label_index); + } + } else { + tcg_out_modrm(s, 0x01 | (ARITH_CMP << 3), arg1, arg2); + tcg_out_jxx(s, tcg_cond_to_jcc[cond], label_index); + } +} + +/* XXX: we implement it at the target level to avoid having to + handle cross basic blocks temporaries */ +static void tcg_out_brcond2(TCGContext *s, + const TCGArg *args, const int *const_args) +{ + int label_next; + label_next = gen_new_label(); + switch(args[4]) { + case TCG_COND_EQ: + tcg_out_brcond(s, TCG_COND_NE, args[0], args[2], const_args[2], label_next); + tcg_out_brcond(s, TCG_COND_EQ, args[1], args[3], const_args[3], args[5]); + break; + case TCG_COND_NE: + tcg_out_brcond(s, TCG_COND_NE, args[0], args[2], const_args[2], args[5]); + tcg_out_brcond(s, TCG_COND_NE, args[1], args[3], const_args[3], args[5]); + break; + case TCG_COND_LT: + tcg_out_brcond(s, TCG_COND_LT, args[1], args[3], const_args[3], args[5]); + tcg_out_brcond(s, TCG_COND_NE, args[1], args[3], const_args[3], label_next); + tcg_out_brcond(s, TCG_COND_LTU, args[0], args[2], const_args[2], args[5]); + break; + case TCG_COND_LE: + tcg_out_brcond(s, TCG_COND_LT, args[1], args[3], const_args[3], args[5]); + tcg_out_brcond(s, TCG_COND_NE, args[1], args[3], const_args[3], label_next); + tcg_out_brcond(s, TCG_COND_LEU, args[0], args[2], const_args[2], args[5]); + break; + case TCG_COND_GT: + tcg_out_brcond(s, TCG_COND_GT, args[1], args[3], const_args[3], args[5]); + tcg_out_brcond(s, TCG_COND_NE, args[1], args[3], const_args[3], label_next); + tcg_out_brcond(s, TCG_COND_GTU, args[0], args[2], const_args[2], args[5]); + break; + case TCG_COND_GE: + tcg_out_brcond(s, TCG_COND_GT, args[1], args[3], const_args[3], args[5]); + tcg_out_brcond(s, TCG_COND_NE, args[1], args[3], const_args[3], label_next); + tcg_out_brcond(s, TCG_COND_GEU, args[0], args[2], const_args[2], args[5]); + break; + case TCG_COND_LTU: + tcg_out_brcond(s, TCG_COND_LTU, args[1], args[3], const_args[3], args[5]); + tcg_out_brcond(s, TCG_COND_NE, args[1], args[3], const_args[3], label_next); + tcg_out_brcond(s, TCG_COND_LTU, args[0], args[2], const_args[2], args[5]); + break; + case TCG_COND_LEU: + tcg_out_brcond(s, TCG_COND_LTU, args[1], args[3], const_args[3], args[5]); + tcg_out_brcond(s, TCG_COND_NE, args[1], args[3], const_args[3], label_next); + tcg_out_brcond(s, TCG_COND_LEU, args[0],
args[2], const_args[2], args[5]); + break; + case TCG_COND_GTU: + tcg_out_brcond(s, TCG_COND_GTU, args[1], args[3], const_args[3], args[5]); + tcg_out_brcond(s, TCG_COND_NE, args[1], args[3], const_args[3], label_next); + tcg_out_brcond(s, TCG_COND_GTU, args[0], args[2], const_args[2], args[5]); + break; + case TCG_COND_GEU: + tcg_out_brcond(s, TCG_COND_GTU, args[1], args[3], const_args[3], args[5]); + tcg_out_brcond(s, TCG_COND_NE, args[1], args[3], const_args[3], label_next); + tcg_out_brcond(s, TCG_COND_GEU, args[0], args[2], const_args[2], args[5]); + break; + default: + tcg_abort(); + } + tcg_out_label(s, label_next, (tcg_target_long)s->code_ptr); +} + +#if defined(CONFIG_SOFTMMU) +extern void __ldb_mmu(void); +extern void __ldw_mmu(void); +extern void __ldl_mmu(void); +extern void __ldq_mmu(void); + +extern void __stb_mmu(void); +extern void __stw_mmu(void); +extern void __stl_mmu(void); +extern void __stq_mmu(void); + +static void *qemu_ld_helpers[4] = { + __ldb_mmu, + __ldw_mmu, + __ldl_mmu, + __ldq_mmu, +}; + +static void *qemu_st_helpers[4] = { + __stb_mmu, + __stw_mmu, + __stl_mmu, + __stq_mmu, +}; +#endif + +/* XXX: qemu_ld and qemu_st could be modified to clobber only EDX and + EAX. It will be useful once fixed registers globals are less + common. */ +static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, + int opc) +{ + int addr_reg, data_reg, data_reg2, r0, r1, mem_index, s_bits, bswap; +#if defined(CONFIG_SOFTMMU) + uint8_t *label1_ptr, *label2_ptr; +#endif +#if TARGET_LONG_BITS == 64 +#if defined(CONFIG_SOFTMMU) + uint8_t *label3_ptr; +#endif + int addr_reg2; +#endif + + data_reg = *args++; + if (opc == 3) + data_reg2 = *args++; + else + data_reg2 = 0; + addr_reg = *args++; +#if TARGET_LONG_BITS == 64 + addr_reg2 = *args++; +#endif + mem_index = *args; + s_bits = opc & 3; + + r0 = TCG_REG_EAX; + r1 = TCG_REG_EDX; + +#if defined(CONFIG_SOFTMMU) + tcg_out_mov(s, r1, addr_reg); + + tcg_out_mov(s, r0, addr_reg); + + tcg_out_modrm(s, 0xc1, 5, r1); /* shr $x, r1 */ + tcg_out8(s, TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS); + + tcg_out_modrm(s, 0x81, 4, r0); /* andl $x, r0 */ + tcg_out32(s, TARGET_PAGE_MASK | ((1 << s_bits) - 1)); + + tcg_out_modrm(s, 0x81, 4, r1); /* andl $x, r1 */ + tcg_out32(s, (CPU_TLB_SIZE - 1) << CPU_TLB_ENTRY_BITS); + + tcg_out_opc(s, 0x8d); /* lea offset(r1, %ebp), r1 */ + tcg_out8(s, 0x80 | (r1 << 3) | 0x04); + tcg_out8(s, (5 << 3) | r1); + tcg_out32(s, offsetof(CPUState, tlb_table[mem_index][0].addr_read)); + + /* cmp 0(r1), r0 */ + tcg_out_modrm_offset(s, 0x3b, r0, r1, 0); + + tcg_out_mov(s, r0, addr_reg); + +#if TARGET_LONG_BITS == 32 + /* je label1 */ + tcg_out8(s, 0x70 + JCC_JE); + label1_ptr = s->code_ptr; + s->code_ptr++; +#else + /* jne label3 */ + tcg_out8(s, 0x70 + JCC_JNE); + label3_ptr = s->code_ptr; + s->code_ptr++; + + /* cmp 4(r1), addr_reg2 */ + tcg_out_modrm_offset(s, 0x3b, addr_reg2, r1, 4); + + /* je label1 */ + tcg_out8(s, 0x70 + JCC_JE); + label1_ptr = s->code_ptr; + s->code_ptr++; + + /* label3: */ + *label3_ptr = s->code_ptr - label3_ptr - 1; +#endif + + /* XXX: move that code at the end of the TB */ +#if TARGET_LONG_BITS == 32 + tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_EDX, mem_index); +#else + tcg_out_mov(s, TCG_REG_EDX, addr_reg2); + tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_ECX, mem_index); +#endif + tcg_out8(s, 0xe8); + tcg_out32(s, (tcg_target_long)qemu_ld_helpers[s_bits] - + (tcg_target_long)s->code_ptr - 4); + + switch(opc) { + case 0 | 4: + /* movsbl */ + tcg_out_modrm(s, 0xbe | P_EXT, data_reg, TCG_REG_EAX); + break; + case 1 
| 4: + /* movswl */ + tcg_out_modrm(s, 0xbf | P_EXT, data_reg, TCG_REG_EAX); + break; + case 0: + case 1: + case 2: + default: + tcg_out_mov(s, data_reg, TCG_REG_EAX); + break; + case 3: + if (data_reg == TCG_REG_EDX) { + tcg_out_opc(s, 0x90 + TCG_REG_EDX); /* xchg %edx, %eax */ + tcg_out_mov(s, data_reg2, TCG_REG_EAX); + } else { + tcg_out_mov(s, data_reg, TCG_REG_EAX); + tcg_out_mov(s, data_reg2, TCG_REG_EDX); + } + break; + } + + /* jmp label2 */ + tcg_out8(s, 0xeb); + label2_ptr = s->code_ptr; + s->code_ptr++; + + /* label1: */ + *label1_ptr = s->code_ptr - label1_ptr - 1; + + /* add x(r1), r0 */ + tcg_out_modrm_offset(s, 0x03, r0, r1, offsetof(CPUTLBEntry, addend) - + offsetof(CPUTLBEntry, addr_read)); +#else + r0 = addr_reg; +#endif + +#ifdef TARGET_WORDS_BIGENDIAN + bswap = 1; +#else + bswap = 0; +#endif + switch(opc) { + case 0: + /* movzbl */ + tcg_out_modrm_offset(s, 0xb6 | P_EXT, data_reg, r0, 0); + break; + case 0 | 4: + /* movsbl */ + tcg_out_modrm_offset(s, 0xbe | P_EXT, data_reg, r0, 0); + break; + case 1: + /* movzwl */ + tcg_out_modrm_offset(s, 0xb7 | P_EXT, data_reg, r0, 0); + if (bswap) { + /* rolw $8, data_reg */ + tcg_out8(s, 0x66); + tcg_out_modrm(s, 0xc1, 0, data_reg); + tcg_out8(s, 8); + } + break; + case 1 | 4: + /* movswl */ + tcg_out_modrm_offset(s, 0xbf | P_EXT, data_reg, r0, 0); + if (bswap) { + /* rolw $8, data_reg */ + tcg_out8(s, 0x66); + tcg_out_modrm(s, 0xc1, 0, data_reg); + tcg_out8(s, 8); + + /* movswl data_reg, data_reg */ + tcg_out_modrm(s, 0xbf | P_EXT, data_reg, data_reg); + } + break; + case 2: + /* movl (r0), data_reg */ + tcg_out_modrm_offset(s, 0x8b, data_reg, r0, 0); + if (bswap) { + /* bswap */ + tcg_out_opc(s, (0xc8 + data_reg) | P_EXT); + } + break; + case 3: + /* XXX: could be nicer */ + if (r0 == data_reg) { + r1 = TCG_REG_EDX; + if (r1 == data_reg) + r1 = TCG_REG_EAX; + tcg_out_mov(s, r1, r0); + r0 = r1; + } + if (!bswap) { + tcg_out_modrm_offset(s, 0x8b, data_reg, r0, 0); + tcg_out_modrm_offset(s, 0x8b, data_reg2, r0, 4); + } else { + tcg_out_modrm_offset(s, 0x8b, data_reg, r0, 4); + tcg_out_opc(s, (0xc8 + data_reg) | P_EXT); + + tcg_out_modrm_offset(s, 0x8b, data_reg2, r0, 0); + /* bswap */ + tcg_out_opc(s, (0xc8 + data_reg2) | P_EXT); + } + break; + default: + tcg_abort(); + } + +#if defined(CONFIG_SOFTMMU) + /* label2: */ + *label2_ptr = s->code_ptr - label2_ptr - 1; +#endif +} + + +static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, + int opc) +{ + int addr_reg, data_reg, data_reg2, r0, r1, mem_index, s_bits, bswap; +#if defined(CONFIG_SOFTMMU) + uint8_t *label1_ptr, *label2_ptr; +#endif +#if TARGET_LONG_BITS == 64 +#if defined(CONFIG_SOFTMMU) + uint8_t *label3_ptr; +#endif + int addr_reg2; +#endif + + data_reg = *args++; + if (opc == 3) + data_reg2 = *args++; + else + data_reg2 = 0; + addr_reg = *args++; +#if TARGET_LONG_BITS == 64 + addr_reg2 = *args++; +#endif + mem_index = *args; + + s_bits = opc; + + r0 = TCG_REG_EAX; + r1 = TCG_REG_EDX; + +#if defined(CONFIG_SOFTMMU) + tcg_out_mov(s, r1, addr_reg); + + tcg_out_mov(s, r0, addr_reg); + + tcg_out_modrm(s, 0xc1, 5, r1); /* shr $x, r1 */ + tcg_out8(s, TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS); + + tcg_out_modrm(s, 0x81, 4, r0); /* andl $x, r0 */ + tcg_out32(s, TARGET_PAGE_MASK | ((1 << s_bits) - 1)); + + tcg_out_modrm(s, 0x81, 4, r1); /* andl $x, r1 */ + tcg_out32(s, (CPU_TLB_SIZE - 1) << CPU_TLB_ENTRY_BITS); + + tcg_out_opc(s, 0x8d); /* lea offset(r1, %ebp), r1 */ + tcg_out8(s, 0x80 | (r1 << 3) | 0x04); + tcg_out8(s, (5 << 3) | r1); + tcg_out32(s, offsetof(CPUState, 
tlb_table[mem_index][0].addr_write)); + + /* cmp 0(r1), r0 */ + tcg_out_modrm_offset(s, 0x3b, r0, r1, 0); + + tcg_out_mov(s, r0, addr_reg); + +#if TARGET_LONG_BITS == 32 + /* je label1 */ + tcg_out8(s, 0x70 + JCC_JE); + label1_ptr = s->code_ptr; + s->code_ptr++; +#else + /* jne label3 */ + tcg_out8(s, 0x70 + JCC_JNE); + label3_ptr = s->code_ptr; + s->code_ptr++; + + /* cmp 4(r1), addr_reg2 */ + tcg_out_modrm_offset(s, 0x3b, addr_reg2, r1, 4); + + /* je label1 */ + tcg_out8(s, 0x70 + JCC_JE); + label1_ptr = s->code_ptr; + s->code_ptr++; + + /* label3: */ + *label3_ptr = s->code_ptr - label3_ptr - 1; +#endif + + /* XXX: move that code at the end of the TB */ +#if TARGET_LONG_BITS == 32 + if (opc == 3) { + tcg_out_mov(s, TCG_REG_EDX, data_reg); + tcg_out_mov(s, TCG_REG_ECX, data_reg2); + tcg_out8(s, 0x6a); /* push Ib */ + tcg_out8(s, mem_index); + tcg_out8(s, 0xe8); + tcg_out32(s, (tcg_target_long)qemu_st_helpers[s_bits] - + (tcg_target_long)s->code_ptr - 4); + tcg_out_addi(s, TCG_REG_ESP, 4); + } else { + switch(opc) { + case 0: + /* movzbl */ + tcg_out_modrm(s, 0xb6 | P_EXT, TCG_REG_EDX, data_reg); + break; + case 1: + /* movzwl */ + tcg_out_modrm(s, 0xb7 | P_EXT, TCG_REG_EDX, data_reg); + break; + case 2: + tcg_out_mov(s, TCG_REG_EDX, data_reg); + break; + } + tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_ECX, mem_index); + tcg_out8(s, 0xe8); + tcg_out32(s, (tcg_target_long)qemu_st_helpers[s_bits] - + (tcg_target_long)s->code_ptr - 4); + } +#else + if (opc == 3) { + tcg_out_mov(s, TCG_REG_EDX, addr_reg2); + tcg_out8(s, 0x6a); /* push Ib */ + tcg_out8(s, mem_index); + tcg_out_opc(s, 0x50 + data_reg2); /* push */ + tcg_out_opc(s, 0x50 + data_reg); /* push */ + tcg_out8(s, 0xe8); + tcg_out32(s, (tcg_target_long)qemu_st_helpers[s_bits] - + (tcg_target_long)s->code_ptr - 4); + tcg_out_addi(s, TCG_REG_ESP, 12); + } else { + tcg_out_mov(s, TCG_REG_EDX, addr_reg2); + switch(opc) { + case 0: + /* movzbl */ + tcg_out_modrm(s, 0xb6 | P_EXT, TCG_REG_ECX, data_reg); + break; + case 1: + /* movzwl */ + tcg_out_modrm(s, 0xb7 | P_EXT, TCG_REG_ECX, data_reg); + break; + case 2: + tcg_out_mov(s, TCG_REG_ECX, data_reg); + break; + } + tcg_out8(s, 0x6a); /* push Ib */ + tcg_out8(s, mem_index); + tcg_out8(s, 0xe8); + tcg_out32(s, (tcg_target_long)qemu_st_helpers[s_bits] - + (tcg_target_long)s->code_ptr - 4); + tcg_out_addi(s, TCG_REG_ESP, 4); + } +#endif + + /* jmp label2 */ + tcg_out8(s, 0xeb); + label2_ptr = s->code_ptr; + s->code_ptr++; + + /* label1: */ + *label1_ptr = s->code_ptr - label1_ptr - 1; + + /* add x(r1), r0 */ + tcg_out_modrm_offset(s, 0x03, r0, r1, offsetof(CPUTLBEntry, addend) - + offsetof(CPUTLBEntry, addr_write)); +#else + r0 = addr_reg; +#endif + +#ifdef TARGET_WORDS_BIGENDIAN + bswap = 1; +#else + bswap = 0; +#endif + switch(opc) { + case 0: + /* movb */ + tcg_out_modrm_offset(s, 0x88, data_reg, r0, 0); + break; + case 1: + if (bswap) { + tcg_out_mov(s, r1, data_reg); + tcg_out8(s, 0x66); /* rolw $8, %ecx */ + tcg_out_modrm(s, 0xc1, 0, r1); + tcg_out8(s, 8); + data_reg = r1; + } + /* movw */ + tcg_out8(s, 0x66); + tcg_out_modrm_offset(s, 0x89, data_reg, r0, 0); + break; + case 2: + if (bswap) { + tcg_out_mov(s, r1, data_reg); + /* bswap data_reg */ + tcg_out_opc(s, (0xc8 + r1) | P_EXT); + data_reg = r1; + } + /* movl */ + tcg_out_modrm_offset(s, 0x89, data_reg, r0, 0); + break; + case 3: + if (bswap) { + tcg_out_mov(s, r1, data_reg2); + /* bswap data_reg */ + tcg_out_opc(s, (0xc8 + r1) | P_EXT); + tcg_out_modrm_offset(s, 0x89, r1, r0, 0); + tcg_out_mov(s, r1, data_reg); + /* bswap data_reg */ + 
tcg_out_opc(s, (0xc8 + r1) | P_EXT); + tcg_out_modrm_offset(s, 0x89, r1, r0, 4); + } else { + tcg_out_modrm_offset(s, 0x89, data_reg, r0, 0); + tcg_out_modrm_offset(s, 0x89, data_reg2, r0, 4); + } + break; + default: + tcg_abort(); + } + +#if defined(CONFIG_SOFTMMU) + /* label2: */ + *label2_ptr = s->code_ptr - label2_ptr - 1; +#endif +} + +static inline void tcg_out_op(TCGContext *s, int opc, + const TCGArg *args, const int *const_args) +{ + int c; + + switch(opc) { + case INDEX_op_exit_tb: + tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_EAX, args[0]); + tcg_out8(s, 0xc3); /* ret */ + break; + case INDEX_op_goto_tb: + if (s->tb_jmp_offset) { + /* direct jump method */ + tcg_out8(s, 0xe9); /* jmp im */ + s->tb_jmp_offset[args[0]] = s->code_ptr - s->code_buf; + tcg_out32(s, 0); + } else { + /* indirect jump method */ + /* jmp Ev */ + tcg_out_modrm_offset(s, 0xff, 4, -1, + (tcg_target_long)(s->tb_next + args[0])); + } + s->tb_next_offset[args[0]] = s->code_ptr - s->code_buf; + break; + case INDEX_op_call: + if (const_args[0]) { + tcg_out8(s, 0xe8); + tcg_out32(s, args[0] - (tcg_target_long)s->code_ptr - 4); + } else { + tcg_out_modrm(s, 0xff, 2, args[0]); + } + break; + case INDEX_op_jmp: + if (const_args[0]) { + tcg_out8(s, 0xe9); + tcg_out32(s, args[0] - (tcg_target_long)s->code_ptr - 4); + } else { + tcg_out_modrm(s, 0xff, 4, args[0]); + } + break; + case INDEX_op_br: + tcg_out_jxx(s, JCC_JMP, args[0]); + break; + case INDEX_op_movi_i32: + tcg_out_movi(s, TCG_TYPE_I32, args[0], args[1]); + break; + case INDEX_op_ld8u_i32: + /* movzbl */ + tcg_out_modrm_offset(s, 0xb6 | P_EXT, args[0], args[1], args[2]); + break; + case INDEX_op_ld8s_i32: + /* movsbl */ + tcg_out_modrm_offset(s, 0xbe | P_EXT, args[0], args[1], args[2]); + break; + case INDEX_op_ld16u_i32: + /* movzwl */ + tcg_out_modrm_offset(s, 0xb7 | P_EXT, args[0], args[1], args[2]); + break; + case INDEX_op_ld16s_i32: + /* movswl */ + tcg_out_modrm_offset(s, 0xbf | P_EXT, args[0], args[1], args[2]); + break; + case INDEX_op_ld_i32: + /* movl */ + tcg_out_modrm_offset(s, 0x8b, args[0], args[1], args[2]); + break; + case INDEX_op_st8_i32: + /* movb */ + tcg_out_modrm_offset(s, 0x88, args[0], args[1], args[2]); + break; + case INDEX_op_st16_i32: + /* movw */ + tcg_out8(s, 0x66); + tcg_out_modrm_offset(s, 0x89, args[0], args[1], args[2]); + break; + case INDEX_op_st_i32: + /* movl */ + tcg_out_modrm_offset(s, 0x89, args[0], args[1], args[2]); + break; + case INDEX_op_sub_i32: + c = ARITH_SUB; + goto gen_arith; + case INDEX_op_and_i32: + c = ARITH_AND; + goto gen_arith; + case INDEX_op_or_i32: + c = ARITH_OR; + goto gen_arith; + case INDEX_op_xor_i32: + c = ARITH_XOR; + goto gen_arith; + case INDEX_op_add_i32: + c = ARITH_ADD; + gen_arith: + if (const_args[2]) { + tgen_arithi(s, c, args[0], args[2]); + } else { + tcg_out_modrm(s, 0x01 | (c << 3), args[2], args[0]); + } + break; + case INDEX_op_mul_i32: + if (const_args[2]) { + int32_t val; + val = args[2]; + if (val == (int8_t)val) { + tcg_out_modrm(s, 0x6b, args[0], args[0]); + tcg_out8(s, val); + } else { + tcg_out_modrm(s, 0x69, args[0], args[0]); + tcg_out32(s, val); + } + } else { + tcg_out_modrm(s, 0xaf | P_EXT, args[0], args[2]); + } + break; + case INDEX_op_mulu2_i32: + tcg_out_modrm(s, 0xf7, 4, args[3]); + break; + case INDEX_op_div2_i32: + tcg_out_modrm(s, 0xf7, 7, args[4]); + break; + case INDEX_op_divu2_i32: + tcg_out_modrm(s, 0xf7, 6, args[4]); + break; + case INDEX_op_shl_i32: + c = SHIFT_SHL; + gen_shift32: + if (const_args[2]) { + if (args[2] == 1) { + tcg_out_modrm(s, 0xd1, c, 
args[0]); + } else { + tcg_out_modrm(s, 0xc1, c, args[0]); + tcg_out8(s, args[2]); + } + } else { + tcg_out_modrm(s, 0xd3, c, args[0]); + } + break; + case INDEX_op_shr_i32: + c = SHIFT_SHR; + goto gen_shift32; + case INDEX_op_sar_i32: + c = SHIFT_SAR; + goto gen_shift32; + + case INDEX_op_add2_i32: + if (const_args[4]) + tgen_arithi(s, ARITH_ADD, args[0], args[4]); + else + tcg_out_modrm(s, 0x01 | (ARITH_ADD << 3), args[4], args[0]); + if (const_args[5]) + tgen_arithi(s, ARITH_ADC, args[1], args[5]); + else + tcg_out_modrm(s, 0x01 | (ARITH_ADC << 3), args[5], args[1]); + break; + case INDEX_op_sub2_i32: + if (const_args[4]) + tgen_arithi(s, ARITH_SUB, args[0], args[4]); + else + tcg_out_modrm(s, 0x01 | (ARITH_SUB << 3), args[4], args[0]); + if (const_args[5]) + tgen_arithi(s, ARITH_SBB, args[1], args[5]); + else + tcg_out_modrm(s, 0x01 | (ARITH_SBB << 3), args[5], args[1]); + break; + case INDEX_op_brcond_i32: + tcg_out_brcond(s, args[2], args[0], args[1], const_args[1], args[3]); + break; + case INDEX_op_brcond2_i32: + tcg_out_brcond2(s, args, const_args); + break; + + case INDEX_op_qemu_ld8u: + tcg_out_qemu_ld(s, args, 0); + break; + case INDEX_op_qemu_ld8s: + tcg_out_qemu_ld(s, args, 0 | 4); + break; + case INDEX_op_qemu_ld16u: + tcg_out_qemu_ld(s, args, 1); + break; + case INDEX_op_qemu_ld16s: + tcg_out_qemu_ld(s, args, 1 | 4); + break; + case INDEX_op_qemu_ld32u: + tcg_out_qemu_ld(s, args, 2); + break; + case INDEX_op_qemu_ld64: + tcg_out_qemu_ld(s, args, 3); + break; + + case INDEX_op_qemu_st8: + tcg_out_qemu_st(s, args, 0); + break; + case INDEX_op_qemu_st16: + tcg_out_qemu_st(s, args, 1); + break; + case INDEX_op_qemu_st32: + tcg_out_qemu_st(s, args, 2); + break; + case INDEX_op_qemu_st64: + tcg_out_qemu_st(s, args, 3); + break; + + default: + tcg_abort(); + } +} + +static const TCGTargetOpDef x86_op_defs[] = { + { INDEX_op_exit_tb, { } }, + { INDEX_op_goto_tb, { } }, + { INDEX_op_call, { "ri" } }, + { INDEX_op_jmp, { "ri" } }, + { INDEX_op_br, { } }, + { INDEX_op_mov_i32, { "r", "r" } }, + { INDEX_op_movi_i32, { "r" } }, + { INDEX_op_ld8u_i32, { "r", "r" } }, + { INDEX_op_ld8s_i32, { "r", "r" } }, + { INDEX_op_ld16u_i32, { "r", "r" } }, + { INDEX_op_ld16s_i32, { "r", "r" } }, + { INDEX_op_ld_i32, { "r", "r" } }, + { INDEX_op_st8_i32, { "q", "r" } }, + { INDEX_op_st16_i32, { "r", "r" } }, + { INDEX_op_st_i32, { "r", "r" } }, + + { INDEX_op_add_i32, { "r", "0", "ri" } }, + { INDEX_op_sub_i32, { "r", "0", "ri" } }, + { INDEX_op_mul_i32, { "r", "0", "ri" } }, + { INDEX_op_mulu2_i32, { "a", "d", "a", "r" } }, + { INDEX_op_div2_i32, { "a", "d", "0", "1", "r" } }, + { INDEX_op_divu2_i32, { "a", "d", "0", "1", "r" } }, + { INDEX_op_and_i32, { "r", "0", "ri" } }, + { INDEX_op_or_i32, { "r", "0", "ri" } }, + { INDEX_op_xor_i32, { "r", "0", "ri" } }, + + { INDEX_op_shl_i32, { "r", "0", "ci" } }, + { INDEX_op_shr_i32, { "r", "0", "ci" } }, + { INDEX_op_sar_i32, { "r", "0", "ci" } }, + + { INDEX_op_brcond_i32, { "r", "ri" } }, + + { INDEX_op_add2_i32, { "r", "r", "0", "1", "ri", "ri" } }, + { INDEX_op_sub2_i32, { "r", "r", "0", "1", "ri", "ri" } }, + { INDEX_op_brcond2_i32, { "r", "r", "ri", "ri" } }, + +#if TARGET_LONG_BITS == 32 + { INDEX_op_qemu_ld8u, { "r", "L" } }, + { INDEX_op_qemu_ld8s, { "r", "L" } }, + { INDEX_op_qemu_ld16u, { "r", "L" } }, + { INDEX_op_qemu_ld16s, { "r", "L" } }, + { INDEX_op_qemu_ld32u, { "r", "L" } }, + { INDEX_op_qemu_ld64, { "r", "r", "L" } }, + + { INDEX_op_qemu_st8, { "cb", "L" } }, + { INDEX_op_qemu_st16, { "L", "L" } }, + { INDEX_op_qemu_st32, { "L", "L" } 
}, + { INDEX_op_qemu_st64, { "L", "L", "L" } }, +#else + { INDEX_op_qemu_ld8u, { "r", "L", "L" } }, + { INDEX_op_qemu_ld8s, { "r", "L", "L" } }, + { INDEX_op_qemu_ld16u, { "r", "L", "L" } }, + { INDEX_op_qemu_ld16s, { "r", "L", "L" } }, + { INDEX_op_qemu_ld32u, { "r", "L", "L" } }, + { INDEX_op_qemu_ld64, { "r", "r", "L", "L" } }, + + { INDEX_op_qemu_st8, { "cb", "L", "L" } }, + { INDEX_op_qemu_st16, { "L", "L", "L" } }, + { INDEX_op_qemu_st32, { "L", "L", "L" } }, + { INDEX_op_qemu_st64, { "L", "L", "L", "L" } }, +#endif + { -1 }, +}; + +void tcg_target_init(TCGContext *s) +{ + /* fail safe */ + if ((1 << CPU_TLB_ENTRY_BITS) != sizeof(CPUTLBEntry)) + tcg_abort(); + + tcg_regset_set32(tcg_target_available_regs[TCG_TYPE_I32], 0, 0xff); + tcg_regset_set32(tcg_target_call_clobber_regs, 0, + (1 << TCG_REG_EAX) | + (1 << TCG_REG_EDX) | + (1 << TCG_REG_ECX)); + + tcg_regset_clear(s->reserved_regs); + tcg_regset_set_reg(s->reserved_regs, TCG_REG_ESP); + + tcg_add_target_add_op_defs(x86_op_defs); +} diff --git a/tcg/i386/tcg-target.h b/tcg/i386/tcg-target.h new file mode 100644 index 0000000000..468e2d503e --- /dev/null +++ b/tcg/i386/tcg-target.h @@ -0,0 +1,54 @@ +/* + * Tiny Code Generator for QEMU + * + * Copyright (c) 2008 Fabrice Bellard + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN + * THE SOFTWARE. 
+ */ +#define TCG_TARGET_I386 1 + +#define TCG_TARGET_REG_BITS 32 +//#define TCG_TARGET_WORDS_BIGENDIAN + +#define TCG_TARGET_NB_REGS 8 + +enum { + TCG_REG_EAX = 0, + TCG_REG_ECX, + TCG_REG_EDX, + TCG_REG_EBX, + TCG_REG_ESP, + TCG_REG_EBP, + TCG_REG_ESI, + TCG_REG_EDI, +}; + +/* used for function call generation */ +#define TCG_REG_CALL_STACK TCG_REG_ESP +#define TCG_TARGET_STACK_ALIGN 16 + +/* Note: must be synced with dyngen-exec.h */ +#define TCG_AREG0 TCG_REG_EBP +#define TCG_AREG1 TCG_REG_EBX +#define TCG_AREG2 TCG_REG_ESI +#define TCG_AREG3 TCG_REG_EDI + +static inline void flush_icache_range(unsigned long start, unsigned long stop) +{ +} diff --git a/tcg/tcg-dyngen.c b/tcg/tcg-dyngen.c new file mode 100644 index 0000000000..1cc084e6ef --- /dev/null +++ b/tcg/tcg-dyngen.c @@ -0,0 +1,482 @@ +/* + * Tiny Code Generator for QEMU + * + * Copyright (c) 2008 Fabrice Bellard + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN + * THE SOFTWARE. 
+ */ +#include <assert.h> +#include <stdarg.h> +#include <stdlib.h> +#include <stdio.h> +#include <string.h> +#include <inttypes.h> + +#include "config.h" +#include "osdep.h" + +#include "tcg.h" + +int __op_param1, __op_param2, __op_param3; +#if defined(__sparc__) || defined(__arm__) + void __op_gen_label1(){} + void __op_gen_label2(){} + void __op_gen_label3(){} +#else + int __op_gen_label1, __op_gen_label2, __op_gen_label3; +#endif +int __op_jmp0, __op_jmp1, __op_jmp2, __op_jmp3; + +#if 0 +#if defined(__s390__) +static inline void flush_icache_range(unsigned long start, unsigned long stop) +{ +} +#elif defined(__ia64__) +static inline void flush_icache_range(unsigned long start, unsigned long stop) +{ + while (start < stop) { + asm volatile ("fc %0" :: "r"(start)); + start += 32; + } + asm volatile (";;sync.i;;srlz.i;;"); +} +#elif defined(__powerpc__) + +#define MIN_CACHE_LINE_SIZE 8 /* conservative value */ + +static inline void flush_icache_range(unsigned long start, unsigned long stop) +{ + unsigned long p; + + start &= ~(MIN_CACHE_LINE_SIZE - 1); + stop = (stop + MIN_CACHE_LINE_SIZE - 1) & ~(MIN_CACHE_LINE_SIZE - 1); + + for (p = start; p < stop; p += MIN_CACHE_LINE_SIZE) { + asm volatile ("dcbst 0,%0" : : "r"(p) : "memory"); + } + asm volatile ("sync" : : : "memory"); + for (p = start; p < stop; p += MIN_CACHE_LINE_SIZE) { + asm volatile ("icbi 0,%0" : : "r"(p) : "memory"); + } + asm volatile ("sync" : : : "memory"); + asm volatile ("isync" : : : "memory"); +} +#elif defined(__alpha__) +static inline void flush_icache_range(unsigned long start, unsigned long stop) +{ + asm ("imb"); +} +#elif defined(__sparc__) +static inline void flush_icache_range(unsigned long start, unsigned long stop) +{ + unsigned long p; + + p = start & ~(8UL - 1UL); + stop = (stop + (8UL - 1UL)) & ~(8UL - 1UL); + + for (; p < stop; p += 8) + __asm__ __volatile__("flush\t%0" : : "r" (p)); +} +#elif defined(__arm__) +static inline void flush_icache_range(unsigned long start, unsigned long stop) +{ + register unsigned long _beg __asm ("a1") = start; + register unsigned long _end __asm ("a2") = stop; + register unsigned long _flg __asm ("a3") = 0; + __asm __volatile__ ("swi 0x9f0002" : : "r" (_beg), "r" (_end), "r" (_flg)); +} +#elif defined(__mc68000) + +# include <asm/cachectl.h> +static inline void flush_icache_range(unsigned long start, unsigned long stop) +{ + cacheflush(start,FLUSH_SCOPE_LINE,FLUSH_CACHE_BOTH,stop-start+16); +} +#elif defined(__mips__) + +#include <sys/cachectl.h> +static inline void flush_icache_range(unsigned long start, unsigned long stop) +{ + _flush_cache ((void *)start, stop - start, BCACHE); +} +#else +#error unsupported CPU +#endif + +#ifdef __alpha__ + +register int gp asm("$29"); + +static inline void immediate_ldah(void *p, int val) { + uint32_t *dest = p; + long high = ((val >> 16) + ((val >> 15) & 1)) & 0xffff; + + *dest &= ~0xffff; + *dest |= high; + *dest |= 31 << 16; +} +static inline void immediate_lda(void *dest, int val) { + *(uint16_t *) dest = val; +} +void fix_bsr(void *p, int offset) { + uint32_t *dest = p; + *dest &= ~((1 << 21) - 1); + *dest |= (offset >> 2) & ((1 << 21) - 1); +} + +#endif /* __alpha__ */ + +#ifdef __arm__ + +#define ARM_LDR_TABLE_SIZE 1024 + +typedef struct LDREntry { + uint8_t *ptr; + uint32_t *data_ptr; + unsigned type:2; +} LDREntry; + +static LDREntry arm_ldr_table[1024]; +static uint32_t arm_data_table[ARM_LDR_TABLE_SIZE]; + +extern char exec_loop; + +static inline void arm_reloc_pc24(uint32_t *ptr, uint32_t insn, int val) +{ + *ptr = (insn & 
~0xffffff) | ((insn + ((val - (int)ptr) >> 2)) & 0xffffff); +} + +static uint8_t *arm_flush_ldr(uint8_t *gen_code_ptr, + LDREntry *ldr_start, LDREntry *ldr_end, + uint32_t *data_start, uint32_t *data_end, + int gen_jmp) +{ + LDREntry *le; + uint32_t *ptr; + int offset, data_size, target; + uint8_t *data_ptr; + uint32_t insn; + uint32_t mask; + + data_size = (data_end - data_start) << 2; + + if (gen_jmp) { + /* generate branch to skip the data */ + if (data_size == 0) + return gen_code_ptr; + target = (long)gen_code_ptr + data_size + 4; + arm_reloc_pc24((uint32_t *)gen_code_ptr, 0xeafffffe, target); + gen_code_ptr += 4; + } + + /* copy the data */ + data_ptr = gen_code_ptr; + memcpy(gen_code_ptr, data_start, data_size); + gen_code_ptr += data_size; + + /* patch the ldr to point to the data */ + for(le = ldr_start; le < ldr_end; le++) { + ptr = (uint32_t *)le->ptr; + offset = ((unsigned long)(le->data_ptr) - (unsigned long)data_start) + + (unsigned long)data_ptr - + (unsigned long)ptr - 8; + if (offset < 0) { + fprintf(stderr, "Negative constant pool offset\n"); + tcg_abort(); + } + switch (le->type) { + case 0: /* ldr */ + mask = ~0x00800fff; + if (offset >= 4096) { + fprintf(stderr, "Bad ldr offset\n"); + tcg_abort(); + } + break; + case 1: /* ldc */ + mask = ~0x008000ff; + if (offset >= 1024 ) { + fprintf(stderr, "Bad ldc offset\n"); + tcg_abort(); + } + break; + case 2: /* add */ + mask = ~0xfff; + if (offset >= 1024 ) { + fprintf(stderr, "Bad add offset\n"); + tcg_abort(); + } + break; + default: + fprintf(stderr, "Bad pc relative fixup\n"); + tcg_abort(); + } + insn = *ptr & mask; + switch (le->type) { + case 0: /* ldr */ + insn |= offset | 0x00800000; + break; + case 1: /* ldc */ + insn |= (offset >> 2) | 0x00800000; + break; + case 2: /* add */ + insn |= (offset >> 2) | 0xf00; + break; + } + *ptr = insn; + } + return gen_code_ptr; +} + +#endif /* __arm__ */ + +#ifdef __ia64 + +/* Patch instruction with "val" where "mask" has 1 bits. */ +static inline void ia64_patch (uint64_t insn_addr, uint64_t mask, uint64_t val) +{ + uint64_t m0, m1, v0, v1, b0, b1, *b = (uint64_t *) (insn_addr & -16); +# define insn_mask ((1UL << 41) - 1) + unsigned long shift; + + b0 = b[0]; b1 = b[1]; + shift = 5 + 41 * (insn_addr % 16); /* 5 template, 3 x 41-bit insns */ + if (shift >= 64) { + m1 = mask << (shift - 64); + v1 = val << (shift - 64); + } else { + m0 = mask << shift; m1 = mask >> (64 - shift); + v0 = val << shift; v1 = val >> (64 - shift); + b[0] = (b0 & ~m0) | (v0 & m0); + } + b[1] = (b1 & ~m1) | (v1 & m1); +} + +static inline void ia64_patch_imm60 (uint64_t insn_addr, uint64_t val) +{ + ia64_patch(insn_addr, + 0x011ffffe000UL, + ( ((val & 0x0800000000000000UL) >> 23) /* bit 59 -> 36 */ + | ((val & 0x00000000000fffffUL) << 13) /* bit 0 -> 13 */)); + ia64_patch(insn_addr - 1, 0x1fffffffffcUL, val >> 18); +} + +static inline void ia64_imm64 (void *insn, uint64_t val) +{ + /* Ignore the slot number of the relocation; GCC and Intel + toolchains differed for some time on whether IMM64 relocs are + against slot 1 (Intel) or slot 2 (GCC). 
*/ + uint64_t insn_addr = (uint64_t) insn & ~3UL; + + ia64_patch(insn_addr + 2, + 0x01fffefe000UL, + ( ((val & 0x8000000000000000UL) >> 27) /* bit 63 -> 36 */ + | ((val & 0x0000000000200000UL) << 0) /* bit 21 -> 21 */ + | ((val & 0x00000000001f0000UL) << 6) /* bit 16 -> 22 */ + | ((val & 0x000000000000ff80UL) << 20) /* bit 7 -> 27 */ + | ((val & 0x000000000000007fUL) << 13) /* bit 0 -> 13 */) + ); + ia64_patch(insn_addr + 1, 0x1ffffffffffUL, val >> 22); +} + +static inline void ia64_imm60b (void *insn, uint64_t val) +{ + /* Ignore the slot number of the relocation; GCC and Intel + toolchains differed for some time on whether IMM64 relocs are + against slot 1 (Intel) or slot 2 (GCC). */ + uint64_t insn_addr = (uint64_t) insn & ~3UL; + + if (val + ((uint64_t) 1 << 59) >= (1UL << 60)) + fprintf(stderr, "%s: value %ld out of IMM60 range\n", + __FUNCTION__, (int64_t) val); + ia64_patch_imm60(insn_addr + 2, val); +} + +static inline void ia64_imm22 (void *insn, uint64_t val) +{ + if (val + (1 << 21) >= (1 << 22)) + fprintf(stderr, "%s: value %li out of IMM22 range\n", + __FUNCTION__, (int64_t)val); + ia64_patch((uint64_t) insn, 0x01fffcfe000UL, + ( ((val & 0x200000UL) << 15) /* bit 21 -> 36 */ + | ((val & 0x1f0000UL) << 6) /* bit 16 -> 22 */ + | ((val & 0x00ff80UL) << 20) /* bit 7 -> 27 */ + | ((val & 0x00007fUL) << 13) /* bit 0 -> 13 */)); +} + +/* Like ia64_imm22(), but also clear bits 20-21. For addl, this has + the effect of turning "addl rX=imm22,rY" into "addl + rX=imm22,r0". */ +static inline void ia64_imm22_r0 (void *insn, uint64_t val) +{ + if (val + (1 << 21) >= (1 << 22)) + fprintf(stderr, "%s: value %li out of IMM22 range\n", + __FUNCTION__, (int64_t)val); + ia64_patch((uint64_t) insn, 0x01fffcfe000UL | (0x3UL << 20), + ( ((val & 0x200000UL) << 15) /* bit 21 -> 36 */ + | ((val & 0x1f0000UL) << 6) /* bit 16 -> 22 */ + | ((val & 0x00ff80UL) << 20) /* bit 7 -> 27 */ + | ((val & 0x00007fUL) << 13) /* bit 0 -> 13 */)); +} + +static inline void ia64_imm21b (void *insn, uint64_t val) +{ + if (val + (1 << 20) >= (1 << 21)) + fprintf(stderr, "%s: value %li out of IMM21b range\n", + __FUNCTION__, (int64_t)val); + ia64_patch((uint64_t) insn, 0x11ffffe000UL, + ( ((val & 0x100000UL) << 16) /* bit 20 -> 36 */ + | ((val & 0x0fffffUL) << 13) /* bit 0 -> 13 */)); +} + +static inline void ia64_nop_b (void *insn) +{ + ia64_patch((uint64_t) insn, (1UL << 41) - 1, 2UL << 37); +} + +static inline void ia64_ldxmov(void *insn, uint64_t val) +{ + if (val + (1 << 21) < (1 << 22)) + ia64_patch((uint64_t) insn, 0x1fff80fe000UL, 8UL << 37); +} + +static inline int ia64_patch_ltoff(void *insn, uint64_t val, + int relaxable) +{ + if (relaxable && (val + (1 << 21) < (1 << 22))) { + ia64_imm22_r0(insn, val); + return 0; + } + return 1; +} + +struct ia64_fixup { + struct ia64_fixup *next; + void *addr; /* address that needs to be patched */ + long value; +}; + +#define IA64_PLT(insn, plt_index) \ +do { \ + struct ia64_fixup *fixup = alloca(sizeof(*fixup)); \ + fixup->next = plt_fixes; \ + plt_fixes = fixup; \ + fixup->addr = (insn); \ + fixup->value = (plt_index); \ + plt_offset[(plt_index)] = 1; \ +} while (0) + +#define IA64_LTOFF(insn, val, relaxable) \ +do { \ + if (ia64_patch_ltoff(insn, val, relaxable)) { \ + struct ia64_fixup *fixup = alloca(sizeof(*fixup)); \ + fixup->next = ltoff_fixes; \ + ltoff_fixes = fixup; \ + fixup->addr = (insn); \ + fixup->value = (val); \ + } \ +} while (0) + +static inline void ia64_apply_fixes (uint8_t **gen_code_pp, + struct ia64_fixup *ltoff_fixes, + uint64_t gp, + struct 
ia64_fixup *plt_fixes, + int num_plts, + unsigned long *plt_target, + unsigned int *plt_offset) +{ + static const uint8_t plt_bundle[] = { + 0x04, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, /* nop 0; movl r1=GP */ + 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, 0x60, + + 0x05, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, /* nop 0; brl IP */ + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xc0 + }; + uint8_t *gen_code_ptr = *gen_code_pp, *plt_start, *got_start; + uint64_t *vp; + struct ia64_fixup *fixup; + unsigned int offset = 0; + struct fdesc { + long ip; + long gp; + } *fdesc; + int i; + + if (plt_fixes) { + plt_start = gen_code_ptr; + + for (i = 0; i < num_plts; ++i) { + if (plt_offset[i]) { + plt_offset[i] = offset; + offset += sizeof(plt_bundle); + + fdesc = (struct fdesc *) plt_target[i]; + memcpy(gen_code_ptr, plt_bundle, sizeof(plt_bundle)); + ia64_imm64 (gen_code_ptr + 0x02, fdesc->gp); + ia64_imm60b(gen_code_ptr + 0x12, + (fdesc->ip - (long) (gen_code_ptr + 0x10)) >> 4); + gen_code_ptr += sizeof(plt_bundle); + } + } + + for (fixup = plt_fixes; fixup; fixup = fixup->next) + ia64_imm21b(fixup->addr, + ((long) plt_start + plt_offset[fixup->value] + - ((long) fixup->addr & ~0xf)) >> 4); + } + + got_start = gen_code_ptr; + + /* First, create the GOT: */ + for (fixup = ltoff_fixes; fixup; fixup = fixup->next) { + /* first check if we already have this value in the GOT: */ + for (vp = (uint64_t *) got_start; vp < (uint64_t *) gen_code_ptr; ++vp) + if (*vp == fixup->value) + break; + if (vp == (uint64_t *) gen_code_ptr) { + /* Nope, we need to put the value in the GOT: */ + *vp = fixup->value; + gen_code_ptr += 8; + } + ia64_imm22(fixup->addr, (long) vp - gp); + } + /* Keep code ptr aligned. */ + if ((long) gen_code_ptr & 15) + gen_code_ptr += 8; + *gen_code_pp = gen_code_ptr; +} +#endif +#endif + +const TCGArg *dyngen_op(TCGContext *s, int opc, const TCGArg *opparam_ptr) +{ + uint8_t *gen_code_ptr; + + gen_code_ptr = s->code_ptr; + switch(opc) { + +/* op.h is dynamically generated by dyngen.c from op.c */ +#include "op.h" + + default: + tcg_abort(); + } + s->code_ptr = gen_code_ptr; + return opparam_ptr; +} diff --git a/tcg/tcg-op.h b/tcg/tcg-op.h new file mode 100644 index 0000000000..705e0df971 --- /dev/null +++ b/tcg/tcg-op.h @@ -0,0 +1,1175 @@ +/* + * Tiny Code Generator for QEMU + * + * Copyright (c) 2008 Fabrice Bellard + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN + * THE SOFTWARE. 
+ */ +#include "tcg.h" + +/* legacy dyngen operations */ +#include "gen-op.h" + +int gen_new_label(void); + +static inline void tcg_gen_op1(int opc, TCGArg arg1) +{ + *gen_opc_ptr++ = opc; + *gen_opparam_ptr++ = arg1; +} + +static inline void tcg_gen_op2(int opc, TCGArg arg1, TCGArg arg2) +{ + *gen_opc_ptr++ = opc; + *gen_opparam_ptr++ = arg1; + *gen_opparam_ptr++ = arg2; +} + +static inline void tcg_gen_op3(int opc, TCGArg arg1, TCGArg arg2, TCGArg arg3) +{ + *gen_opc_ptr++ = opc; + *gen_opparam_ptr++ = arg1; + *gen_opparam_ptr++ = arg2; + *gen_opparam_ptr++ = arg3; +} + +static inline void tcg_gen_op4(int opc, TCGArg arg1, TCGArg arg2, TCGArg arg3, + TCGArg arg4) +{ + *gen_opc_ptr++ = opc; + *gen_opparam_ptr++ = arg1; + *gen_opparam_ptr++ = arg2; + *gen_opparam_ptr++ = arg3; + *gen_opparam_ptr++ = arg4; +} + +static inline void tcg_gen_op5(int opc, TCGArg arg1, TCGArg arg2, + TCGArg arg3, TCGArg arg4, + TCGArg arg5) +{ + *gen_opc_ptr++ = opc; + *gen_opparam_ptr++ = arg1; + *gen_opparam_ptr++ = arg2; + *gen_opparam_ptr++ = arg3; + *gen_opparam_ptr++ = arg4; + *gen_opparam_ptr++ = arg5; +} + +static inline void tcg_gen_op6(int opc, TCGArg arg1, TCGArg arg2, + TCGArg arg3, TCGArg arg4, + TCGArg arg5, TCGArg arg6) +{ + *gen_opc_ptr++ = opc; + *gen_opparam_ptr++ = arg1; + *gen_opparam_ptr++ = arg2; + *gen_opparam_ptr++ = arg3; + *gen_opparam_ptr++ = arg4; + *gen_opparam_ptr++ = arg5; + *gen_opparam_ptr++ = arg6; +} + +static inline void gen_set_label(int n) +{ + tcg_gen_op1(INDEX_op_set_label, n); +} + +static inline void tcg_gen_mov_i32(int ret, int arg) +{ + tcg_gen_op2(INDEX_op_mov_i32, ret, arg); +} + +static inline void tcg_gen_movi_i32(int ret, int32_t arg) +{ + tcg_gen_op2(INDEX_op_movi_i32, ret, arg); +} + +/* helper calls */ +#define TCG_HELPER_CALL_FLAGS 0 + +static inline void tcg_gen_helper_0_0(void *func) +{ + tcg_gen_call(&tcg_ctx, + tcg_const_ptr((tcg_target_long)func), TCG_HELPER_CALL_FLAGS, + 0, NULL, 0, NULL); +} + +static inline void tcg_gen_helper_0_1(void *func, TCGArg arg) +{ + tcg_gen_call(&tcg_ctx, + tcg_const_ptr((tcg_target_long)func), TCG_HELPER_CALL_FLAGS, + 0, NULL, 1, &arg); +} + +static inline void tcg_gen_helper_0_2(void *func, TCGArg arg1, TCGArg arg2) +{ + TCGArg args[2]; + args[0] = arg1; + args[1] = arg2; + tcg_gen_call(&tcg_ctx, + tcg_const_ptr((tcg_target_long)func), TCG_HELPER_CALL_FLAGS, + 0, NULL, 2, args); +} + +static inline void tcg_gen_helper_1_2(void *func, TCGArg ret, + TCGArg arg1, TCGArg arg2) +{ + TCGArg args[2]; + args[0] = arg1; + args[1] = arg2; + tcg_gen_call(&tcg_ctx, + tcg_const_ptr((tcg_target_long)func), TCG_HELPER_CALL_FLAGS, + 1, &ret, 2, args); +} + +/* 32 bit ops */ + +static inline void tcg_gen_ld8u_i32(int ret, int arg2, tcg_target_long offset) +{ + tcg_gen_op3(INDEX_op_ld8u_i32, ret, arg2, offset); +} + +static inline void tcg_gen_ld8s_i32(int ret, int arg2, tcg_target_long offset) +{ + tcg_gen_op3(INDEX_op_ld8s_i32, ret, arg2, offset); +} + +static inline void tcg_gen_ld16u_i32(int ret, int arg2, tcg_target_long offset) +{ + tcg_gen_op3(INDEX_op_ld16u_i32, ret, arg2, offset); +} + +static inline void tcg_gen_ld16s_i32(int ret, int arg2, tcg_target_long offset) +{ + tcg_gen_op3(INDEX_op_ld16s_i32, ret, arg2, offset); +} + +static inline void tcg_gen_ld_i32(int ret, int arg2, tcg_target_long offset) +{ + tcg_gen_op3(INDEX_op_ld_i32, ret, arg2, offset); +} + +static inline void tcg_gen_st8_i32(int arg1, int arg2, tcg_target_long offset) +{ + tcg_gen_op3(INDEX_op_st8_i32, arg1, arg2, offset); +} + +static inline void 
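+/* An illustrative sketch (t0..t2 are hypothetical temporaries):
+   tcg_gen_op3(INDEX_op_add_i32, t0, t1, t2) appends one opcode to
+   gen_opc_buf and three TCGArg values to gen_opparam_buf; the two
+   streams are later walked in lockstep by the code generator. */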
tcg_gen_st16_i32(int arg1, int arg2, tcg_target_long offset)
+{
+    tcg_gen_op3(INDEX_op_st16_i32, arg1, arg2, offset);
+}
+
+static inline void tcg_gen_st_i32(int arg1, int arg2, tcg_target_long offset)
+{
+    tcg_gen_op3(INDEX_op_st_i32, arg1, arg2, offset);
+}
+
+static inline void tcg_gen_add_i32(int ret, int arg1, int arg2)
+{
+    tcg_gen_op3(INDEX_op_add_i32, ret, arg1, arg2);
+}
+
+static inline void tcg_gen_addi_i32(int ret, int arg1, int32_t arg2)
+{
+    tcg_gen_add_i32(ret, arg1, tcg_const_i32(arg2));
+}
+
+static inline void tcg_gen_sub_i32(int ret, int arg1, int arg2)
+{
+    tcg_gen_op3(INDEX_op_sub_i32, ret, arg1, arg2);
+}
+
+static inline void tcg_gen_subi_i32(int ret, int arg1, int32_t arg2)
+{
+    tcg_gen_sub_i32(ret, arg1, tcg_const_i32(arg2));
+}
+
+static inline void tcg_gen_and_i32(int ret, int arg1, int arg2)
+{
+    tcg_gen_op3(INDEX_op_and_i32, ret, arg1, arg2);
+}
+
+static inline void tcg_gen_andi_i32(int ret, int arg1, int32_t arg2)
+{
+    /* some cases can be optimized here */
+    if (arg2 == 0) {
+        tcg_gen_movi_i32(ret, 0);
+    } else if (arg2 == 0xffffffff) {
+        tcg_gen_mov_i32(ret, arg1);
+    } else {
+        tcg_gen_and_i32(ret, arg1, tcg_const_i32(arg2));
+    }
+}
+
+static inline void tcg_gen_or_i32(int ret, int arg1, int arg2)
+{
+    tcg_gen_op3(INDEX_op_or_i32, ret, arg1, arg2);
+}
+
+static inline void tcg_gen_ori_i32(int ret, int arg1, int32_t arg2)
+{
+    /* some cases can be optimized here: or with all ones sets every bit */
+    if (arg2 == 0xffffffff) {
+        tcg_gen_movi_i32(ret, 0xffffffff);
+    } else if (arg2 == 0) {
+        tcg_gen_mov_i32(ret, arg1);
+    } else {
+        tcg_gen_or_i32(ret, arg1, tcg_const_i32(arg2));
+    }
+}
+
+static inline void tcg_gen_xor_i32(int ret, int arg1, int arg2)
+{
+    tcg_gen_op3(INDEX_op_xor_i32, ret, arg1, arg2);
+}
+
+static inline void tcg_gen_xori_i32(int ret, int arg1, int32_t arg2)
+{
+    /* some cases can be optimized here */
+    if (arg2 == 0) {
+        tcg_gen_mov_i32(ret, arg1);
+    } else {
+        tcg_gen_xor_i32(ret, arg1, tcg_const_i32(arg2));
+    }
+}
+
+static inline void tcg_gen_shl_i32(int ret, int arg1, int arg2)
+{
+    tcg_gen_op3(INDEX_op_shl_i32, ret, arg1, arg2);
+}
+
+static inline void tcg_gen_shli_i32(int ret, int arg1, int32_t arg2)
+{
+    tcg_gen_shl_i32(ret, arg1, tcg_const_i32(arg2));
+}
+
+static inline void tcg_gen_shr_i32(int ret, int arg1, int arg2)
+{
+    tcg_gen_op3(INDEX_op_shr_i32, ret, arg1, arg2);
+}
+
+static inline void tcg_gen_shri_i32(int ret, int arg1, int32_t arg2)
+{
+    tcg_gen_shr_i32(ret, arg1, tcg_const_i32(arg2));
+}
+
+static inline void tcg_gen_sar_i32(int ret, int arg1, int arg2)
+{
+    tcg_gen_op3(INDEX_op_sar_i32, ret, arg1, arg2);
+}
+
+static inline void tcg_gen_sari_i32(int ret, int arg1, int32_t arg2)
+{
+    tcg_gen_sar_i32(ret, arg1, tcg_const_i32(arg2));
+}
+
+static inline void tcg_gen_brcond_i32(int cond, TCGArg arg1, TCGArg arg2,
+                                      int label_index)
+{
+    tcg_gen_op4(INDEX_op_brcond_i32, arg1, arg2, cond, label_index);
+}
+
+static inline void tcg_gen_mul_i32(int ret, int arg1, int arg2)
+{
+    tcg_gen_op3(INDEX_op_mul_i32, ret, arg1, arg2);
+}
+
+#ifdef TCG_TARGET_HAS_div_i32
+static inline void tcg_gen_div_i32(int ret, int arg1, int arg2)
+{
+    tcg_gen_op3(INDEX_op_div_i32, ret, arg1, arg2);
+}
+
+static inline void tcg_gen_rem_i32(int ret, int arg1, int arg2)
+{
+    tcg_gen_op3(INDEX_op_rem_i32, ret, arg1, arg2);
+}
+
+static inline void tcg_gen_divu_i32(int ret, int arg1, int arg2)
+{
+    tcg_gen_op3(INDEX_op_divu_i32, ret, arg1, arg2);
+}
+
+static inline void tcg_gen_remu_i32(int ret, int arg1, int arg2)
+{
+    tcg_gen_op3(INDEX_op_remu_i32, ret, arg1, arg2);
+}
+#else
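+/* Fallback when the target lacks a direct division opcode: div2 takes
+   the dividend as a (high, low) register pair plus the divisor and
+   produces quotient and remainder, so the wrappers below first extend
+   the dividend into t0 (sign bits for signed ops, zero for unsigned). */
+static inline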
void tcg_gen_div_i32(int ret, int arg1, int arg2) +{ + int t0; + t0 = tcg_temp_new(TCG_TYPE_I32); + tcg_gen_sari_i32(t0, arg1, 31); + tcg_gen_op5(INDEX_op_div2_i32, ret, t0, arg1, t0, arg2); +} + +static inline void tcg_gen_rem_i32(int ret, int arg1, int arg2) +{ + int t0; + t0 = tcg_temp_new(TCG_TYPE_I32); + tcg_gen_sari_i32(t0, arg1, 31); + tcg_gen_op5(INDEX_op_div2_i32, t0, ret, arg1, t0, arg2); +} + +static inline void tcg_gen_divu_i32(int ret, int arg1, int arg2) +{ + int t0; + t0 = tcg_temp_new(TCG_TYPE_I32); + tcg_gen_movi_i32(t0, 0); + tcg_gen_op5(INDEX_op_divu2_i32, ret, t0, arg1, t0, arg2); +} + +static inline void tcg_gen_remu_i32(int ret, int arg1, int arg2) +{ + int t0; + t0 = tcg_temp_new(TCG_TYPE_I32); + tcg_gen_movi_i32(t0, 0); + tcg_gen_op5(INDEX_op_divu2_i32, t0, ret, arg1, t0, arg2); +} +#endif + +#if TCG_TARGET_REG_BITS == 32 + +static inline void tcg_gen_mov_i64(int ret, int arg) +{ + tcg_gen_mov_i32(ret, arg); + tcg_gen_mov_i32(ret + 1, arg + 1); +} + +static inline void tcg_gen_movi_i64(int ret, int64_t arg) +{ + tcg_gen_movi_i32(ret, arg); + tcg_gen_movi_i32(ret + 1, arg >> 32); +} + +static inline void tcg_gen_ld8u_i64(int ret, int arg2, tcg_target_long offset) +{ + tcg_gen_ld8u_i32(ret, arg2, offset); + tcg_gen_movi_i32(ret + 1, 0); +} + +static inline void tcg_gen_ld8s_i64(int ret, int arg2, tcg_target_long offset) +{ + tcg_gen_ld8s_i32(ret, arg2, offset); + tcg_gen_sari_i32(ret + 1, ret, 31); +} + +static inline void tcg_gen_ld16u_i64(int ret, int arg2, tcg_target_long offset) +{ + tcg_gen_ld16u_i32(ret, arg2, offset); + tcg_gen_movi_i32(ret + 1, 0); +} + +static inline void tcg_gen_ld16s_i64(int ret, int arg2, tcg_target_long offset) +{ + tcg_gen_ld16s_i32(ret, arg2, offset); + tcg_gen_sari_i32(ret + 1, ret, 31); +} + +static inline void tcg_gen_ld32u_i64(int ret, int arg2, tcg_target_long offset) +{ + tcg_gen_ld_i32(ret, arg2, offset); + tcg_gen_movi_i32(ret + 1, 0); +} + +static inline void tcg_gen_ld32s_i64(int ret, int arg2, tcg_target_long offset) +{ + tcg_gen_ld_i32(ret, arg2, offset); + tcg_gen_sari_i32(ret + 1, ret, 31); +} + +static inline void tcg_gen_ld_i64(int ret, int arg2, tcg_target_long offset) +{ + /* since arg2 and ret have different types, they cannot be the + same temporary */ +#ifdef TCG_TARGET_WORDS_BIGENDIAN + tcg_gen_ld_i32(ret + 1, arg2, offset); + tcg_gen_ld_i32(ret, arg2, offset + 4); +#else + tcg_gen_ld_i32(ret, arg2, offset); + tcg_gen_ld_i32(ret + 1, arg2, offset + 4); +#endif +} + +static inline void tcg_gen_st8_i64(int arg1, int arg2, tcg_target_long offset) +{ + tcg_gen_st8_i32(arg1, arg2, offset); +} + +static inline void tcg_gen_st16_i64(int arg1, int arg2, tcg_target_long offset) +{ + tcg_gen_st16_i32(arg1, arg2, offset); +} + +static inline void tcg_gen_st32_i64(int arg1, int arg2, tcg_target_long offset) +{ + tcg_gen_st_i32(arg1, arg2, offset); +} + +static inline void tcg_gen_st_i64(int arg1, int arg2, tcg_target_long offset) +{ +#ifdef TCG_TARGET_WORDS_BIGENDIAN + tcg_gen_st_i32(arg1 + 1, arg2, offset); + tcg_gen_st_i32(arg1, arg2, offset + 4); +#else + tcg_gen_st_i32(arg1, arg2, offset); + tcg_gen_st_i32(arg1 + 1, arg2, offset + 4); +#endif +} + +static inline void tcg_gen_add_i64(int ret, int arg1, int arg2) +{ + tcg_gen_op6(INDEX_op_add2_i32, ret, ret + 1, + arg1, arg1 + 1, arg2, arg2 + 1); +} + +static inline void tcg_gen_addi_i64(int ret, int arg1, int64_t arg2) +{ + tcg_gen_add_i64(ret, arg1, tcg_const_i64(arg2)); +} + +static inline void tcg_gen_sub_i64(int ret, int arg1, int arg2) +{ + 
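+    /* on a 32-bit host an i64 value lives in a pair of adjacent i32
+       temporaries, so the 64-bit subtract maps onto a single sub2_i32
+       op consuming both halves of each operand, mirroring add_i64
+       above */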
tcg_gen_op6(INDEX_op_sub2_i32, ret, ret + 1, + arg1, arg1 + 1, arg2, arg2 + 1); +} + +static inline void tcg_gen_subi_i64(int ret, int arg1, int64_t arg2) +{ + tcg_gen_sub_i64(ret, arg1, tcg_const_i64(arg2)); +} + +static inline void tcg_gen_and_i64(int ret, int arg1, int arg2) +{ + tcg_gen_and_i32(ret, arg1, arg2); + tcg_gen_and_i32(ret + 1, arg1 + 1, arg2 + 1); +} + +static inline void tcg_gen_andi_i64(int ret, int arg1, int64_t arg2) +{ + tcg_gen_andi_i32(ret, arg1, arg2); + tcg_gen_andi_i32(ret + 1, arg1 + 1, arg2 >> 32); +} + +static inline void tcg_gen_or_i64(int ret, int arg1, int arg2) +{ + tcg_gen_or_i32(ret, arg1, arg2); + tcg_gen_or_i32(ret + 1, arg1 + 1, arg2 + 1); +} + +static inline void tcg_gen_ori_i64(int ret, int arg1, int64_t arg2) +{ + tcg_gen_ori_i32(ret, arg1, arg2); + tcg_gen_ori_i32(ret + 1, arg1 + 1, arg2 >> 32); +} + +static inline void tcg_gen_xor_i64(int ret, int arg1, int arg2) +{ + tcg_gen_xor_i32(ret, arg1, arg2); + tcg_gen_xor_i32(ret + 1, arg1 + 1, arg2 + 1); +} + +static inline void tcg_gen_xori_i64(int ret, int arg1, int64_t arg2) +{ + tcg_gen_xori_i32(ret, arg1, arg2); + tcg_gen_xori_i32(ret + 1, arg1 + 1, arg2 >> 32); +} + +/* XXX: use generic code when basic block handling is OK or CPU + specific code (x86) */ +static inline void tcg_gen_shl_i64(int ret, int arg1, int64_t arg2) +{ + tcg_gen_helper_1_2(tcg_helper_shl_i64, ret, arg1, arg2); +} + +static inline void tcg_gen_shli_i64(int ret, int arg1, int64_t arg2) +{ + tcg_gen_shifti_i64(ret, arg1, arg2, 0, 0); +} + +static inline void tcg_gen_shr_i64(int ret, int arg1, int64_t arg2) +{ + tcg_gen_helper_1_2(tcg_helper_shr_i64, ret, arg1, arg2); +} + +static inline void tcg_gen_shri_i64(int ret, int arg1, int64_t arg2) +{ + tcg_gen_shifti_i64(ret, arg1, arg2, 1, 0); +} + +static inline void tcg_gen_sar_i64(int ret, int arg1, int64_t arg2) +{ + tcg_gen_helper_1_2(tcg_helper_sar_i64, ret, arg1, arg2); +} + +static inline void tcg_gen_sari_i64(int ret, int arg1, int64_t arg2) +{ + tcg_gen_shifti_i64(ret, arg1, arg2, 1, 1); +} + +static inline void tcg_gen_brcond_i64(int cond, TCGArg arg1, TCGArg arg2, + int label_index) +{ + tcg_gen_op6(INDEX_op_brcond2_i32, + arg1, arg1 + 1, arg2, arg2 + 1, cond, label_index); +} + +static inline void tcg_gen_mul_i64(int ret, int arg1, int arg2) +{ + int t0, t1; + + t0 = tcg_temp_new(TCG_TYPE_I64); + t1 = tcg_temp_new(TCG_TYPE_I32); + + tcg_gen_op4(INDEX_op_mulu2_i32, t0, t0 + 1, arg1, arg2); + + tcg_gen_mul_i32(t1, arg1, arg2 + 1); + tcg_gen_add_i32(t0 + 1, t0 + 1, t1); + tcg_gen_mul_i32(t1, arg1 + 1, arg2); + tcg_gen_add_i32(t0 + 1, t0 + 1, t1); + + tcg_gen_mov_i64(ret, t0); +} + +static inline void tcg_gen_div_i64(int ret, int arg1, int64_t arg2) +{ + tcg_gen_helper_1_2(tcg_helper_div_i64, ret, arg1, arg2); +} + +static inline void tcg_gen_rem_i64(int ret, int arg1, int64_t arg2) +{ + tcg_gen_helper_1_2(tcg_helper_rem_i64, ret, arg1, arg2); +} + +static inline void tcg_gen_divu_i64(int ret, int arg1, int64_t arg2) +{ + tcg_gen_helper_1_2(tcg_helper_divu_i64, ret, arg1, arg2); +} + +static inline void tcg_gen_remu_i64(int ret, int arg1, int64_t arg2) +{ + tcg_gen_helper_1_2(tcg_helper_remu_i64, ret, arg1, arg2); +} + +#else + +static inline void tcg_gen_mov_i64(int ret, int arg) +{ + tcg_gen_op2(INDEX_op_mov_i64, ret, arg); +} + +static inline void tcg_gen_movi_i64(int ret, int64_t arg) +{ + tcg_gen_op2(INDEX_op_movi_i64, ret, arg); +} + +static inline void tcg_gen_ld8u_i64(int ret, int arg2, tcg_target_long offset) +{ + tcg_gen_op3(INDEX_op_ld8u_i64, ret, arg2, offset); 
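+    /* on a 64-bit host each i64 op, like this load, maps 1:1 onto a
+       single opcode; no pair splitting is needed */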
+} + +static inline void tcg_gen_ld8s_i64(int ret, int arg2, tcg_target_long offset) +{ + tcg_gen_op3(INDEX_op_ld8s_i64, ret, arg2, offset); +} + +static inline void tcg_gen_ld16u_i64(int ret, int arg2, tcg_target_long offset) +{ + tcg_gen_op3(INDEX_op_ld16u_i64, ret, arg2, offset); +} + +static inline void tcg_gen_ld16s_i64(int ret, int arg2, tcg_target_long offset) +{ + tcg_gen_op3(INDEX_op_ld16s_i64, ret, arg2, offset); +} + +static inline void tcg_gen_ld32u_i64(int ret, int arg2, tcg_target_long offset) +{ + tcg_gen_op3(INDEX_op_ld32u_i64, ret, arg2, offset); +} + +static inline void tcg_gen_ld32s_i64(int ret, int arg2, tcg_target_long offset) +{ + tcg_gen_op3(INDEX_op_ld32s_i64, ret, arg2, offset); +} + +static inline void tcg_gen_ld_i64(int ret, int arg2, tcg_target_long offset) +{ + tcg_gen_op3(INDEX_op_ld_i64, ret, arg2, offset); +} + +static inline void tcg_gen_st8_i64(int arg1, int arg2, tcg_target_long offset) +{ + tcg_gen_op3(INDEX_op_st8_i64, arg1, arg2, offset); +} + +static inline void tcg_gen_st16_i64(int arg1, int arg2, tcg_target_long offset) +{ + tcg_gen_op3(INDEX_op_st16_i64, arg1, arg2, offset); +} + +static inline void tcg_gen_st32_i64(int arg1, int arg2, tcg_target_long offset) +{ + tcg_gen_op3(INDEX_op_st32_i64, arg1, arg2, offset); +} + +static inline void tcg_gen_st_i64(int arg1, int arg2, tcg_target_long offset) +{ + tcg_gen_op3(INDEX_op_st_i64, arg1, arg2, offset); +} + +static inline void tcg_gen_add_i64(int ret, int arg1, int arg2) +{ + tcg_gen_op3(INDEX_op_add_i64, ret, arg1, arg2); +} + +static inline void tcg_gen_addi_i64(int ret, int arg1, int64_t arg2) +{ + tcg_gen_add_i64(ret, arg1, tcg_const_i64(arg2)); +} + +static inline void tcg_gen_sub_i64(int ret, int arg1, int arg2) +{ + tcg_gen_op3(INDEX_op_sub_i64, ret, arg1, arg2); +} + +static inline void tcg_gen_subi_i64(int ret, int arg1, int64_t arg2) +{ + tcg_gen_sub_i64(ret, arg1, tcg_const_i64(arg2)); +} + +static inline void tcg_gen_and_i64(int ret, int arg1, int arg2) +{ + tcg_gen_op3(INDEX_op_and_i64, ret, arg1, arg2); +} + +static inline void tcg_gen_andi_i64(int ret, int arg1, int64_t arg2) +{ + tcg_gen_and_i64(ret, arg1, tcg_const_i64(arg2)); +} + +static inline void tcg_gen_or_i64(int ret, int arg1, int arg2) +{ + tcg_gen_op3(INDEX_op_or_i64, ret, arg1, arg2); +} + +static inline void tcg_gen_ori_i64(int ret, int arg1, int64_t arg2) +{ + tcg_gen_or_i64(ret, arg1, tcg_const_i64(arg2)); +} + +static inline void tcg_gen_xor_i64(int ret, int arg1, int arg2) +{ + tcg_gen_op3(INDEX_op_xor_i64, ret, arg1, arg2); +} + +static inline void tcg_gen_xori_i64(int ret, int arg1, int64_t arg2) +{ + tcg_gen_xor_i64(ret, arg1, tcg_const_i64(arg2)); +} + +static inline void tcg_gen_shl_i64(int ret, int arg1, int arg2) +{ + tcg_gen_op3(INDEX_op_shl_i64, ret, arg1, arg2); +} + +static inline void tcg_gen_shli_i64(int ret, int arg1, int64_t arg2) +{ + tcg_gen_shl_i64(ret, arg1, tcg_const_i64(arg2)); +} + +static inline void tcg_gen_shr_i64(int ret, int arg1, int arg2) +{ + tcg_gen_op3(INDEX_op_shr_i64, ret, arg1, arg2); +} + +static inline void tcg_gen_shri_i64(int ret, int arg1, int64_t arg2) +{ + tcg_gen_shr_i64(ret, arg1, tcg_const_i64(arg2)); +} + +static inline void tcg_gen_sar_i64(int ret, int arg1, int arg2) +{ + tcg_gen_op3(INDEX_op_sar_i64, ret, arg1, arg2); +} + +static inline void tcg_gen_sari_i64(int ret, int arg1, int64_t arg2) +{ + tcg_gen_sar_i64(ret, arg1, tcg_const_i64(arg2)); +} + +static inline void tcg_gen_brcond_i64(int cond, TCGArg arg1, TCGArg arg2, + int label_index) +{ + 
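+    /* brcond is flagged TCG_OPF_BB_END in tcg-opc.h: it ends the
+       basic block, so temporary values do not survive past it */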
tcg_gen_op4(INDEX_op_brcond_i64, arg1, arg2, cond, label_index);
+}
+
+static inline void tcg_gen_mul_i64(int ret, int arg1, int arg2)
+{
+    tcg_gen_op3(INDEX_op_mul_i64, ret, arg1, arg2);
+}
+
+#ifdef TCG_TARGET_HAS_div_i64
+static inline void tcg_gen_div_i64(int ret, int arg1, int arg2)
+{
+    tcg_gen_op3(INDEX_op_div_i64, ret, arg1, arg2);
+}
+
+static inline void tcg_gen_rem_i64(int ret, int arg1, int arg2)
+{
+    tcg_gen_op3(INDEX_op_rem_i64, ret, arg1, arg2);
+}
+
+static inline void tcg_gen_divu_i64(int ret, int arg1, int arg2)
+{
+    tcg_gen_op3(INDEX_op_divu_i64, ret, arg1, arg2);
+}
+
+static inline void tcg_gen_remu_i64(int ret, int arg1, int arg2)
+{
+    tcg_gen_op3(INDEX_op_remu_i64, ret, arg1, arg2);
+}
+#else
+static inline void tcg_gen_div_i64(int ret, int arg1, int arg2)
+{
+    int t0;
+    t0 = tcg_temp_new(TCG_TYPE_I64);
+    tcg_gen_sari_i64(t0, arg1, 63);
+    tcg_gen_op5(INDEX_op_div2_i64, ret, t0, arg1, t0, arg2);
+}
+
+static inline void tcg_gen_rem_i64(int ret, int arg1, int arg2)
+{
+    int t0;
+    t0 = tcg_temp_new(TCG_TYPE_I64);
+    tcg_gen_sari_i64(t0, arg1, 63);
+    tcg_gen_op5(INDEX_op_div2_i64, t0, ret, arg1, t0, arg2);
+}
+
+static inline void tcg_gen_divu_i64(int ret, int arg1, int arg2)
+{
+    int t0;
+    t0 = tcg_temp_new(TCG_TYPE_I64);
+    tcg_gen_movi_i64(t0, 0);
+    tcg_gen_op5(INDEX_op_divu2_i64, ret, t0, arg1, t0, arg2);
+}
+
+static inline void tcg_gen_remu_i64(int ret, int arg1, int arg2)
+{
+    int t0;
+    t0 = tcg_temp_new(TCG_TYPE_I64);
+    tcg_gen_movi_i64(t0, 0);
+    tcg_gen_op5(INDEX_op_divu2_i64, t0, ret, arg1, t0, arg2);
+}
+#endif
+
+#endif
+
+/***************************************/
+/* optional operations */
+
+static inline void tcg_gen_ext8s_i32(int ret, int arg)
+{
+#ifdef TCG_TARGET_HAS_ext8s_i32
+    tcg_gen_op2(INDEX_op_ext8s_i32, ret, arg);
+#else
+    tcg_gen_shli_i32(ret, arg, 24);
+    tcg_gen_sari_i32(ret, ret, 24);
+#endif
+}
+
+static inline void tcg_gen_ext16s_i32(int ret, int arg)
+{
+#ifdef TCG_TARGET_HAS_ext16s_i32
+    tcg_gen_op2(INDEX_op_ext16s_i32, ret, arg);
+#else
+    tcg_gen_shli_i32(ret, arg, 16);
+    tcg_gen_sari_i32(ret, ret, 16);
+#endif
+}
+
+/* Note: we assume the two high bytes are set to zero */
+static inline void tcg_gen_bswap16_i32(TCGArg ret, TCGArg arg)
+{
+#ifdef TCG_TARGET_HAS_bswap16_i32
+    tcg_gen_op2(INDEX_op_bswap16_i32, ret, arg);
+#else
+    TCGArg t0, t1;
+    t0 = tcg_temp_new(TCG_TYPE_I32);
+    t1 = tcg_temp_new(TCG_TYPE_I32);
+
+    tcg_gen_shri_i32(t0, arg, 8);
+    tcg_gen_andi_i32(t1, arg, 0x000000ff);
+    tcg_gen_shli_i32(t1, t1, 8);
+    tcg_gen_or_i32(ret, t0, t1);
+#endif
+}
+
+static inline void tcg_gen_bswap_i32(TCGArg ret, TCGArg arg)
+{
+#ifdef TCG_TARGET_HAS_bswap_i32
+    tcg_gen_op2(INDEX_op_bswap_i32, ret, arg);
+#else
+    TCGArg t0, t1;
+    t0 = tcg_temp_new(TCG_TYPE_I32);
+    t1 = tcg_temp_new(TCG_TYPE_I32);
+
+    tcg_gen_shli_i32(t0, arg, 24);
+
+    tcg_gen_andi_i32(t1, arg, 0x0000ff00);
+    tcg_gen_shli_i32(t1, t1, 8);
+    tcg_gen_or_i32(t0, t0, t1);
+
+    tcg_gen_shri_i32(t1, arg, 8);
+    tcg_gen_andi_i32(t1, t1, 0x0000ff00);
+    tcg_gen_or_i32(t0, t0, t1);
+
+    tcg_gen_shri_i32(t1, arg, 24);
+    tcg_gen_or_i32(ret, t0, t1);
+#endif
+}
+
+#if TCG_TARGET_REG_BITS == 32
+static inline void tcg_gen_ext8s_i64(int ret, int arg)
+{
+    tcg_gen_ext8s_i32(ret, arg);
+    tcg_gen_sari_i32(ret + 1, ret, 31);
+}
+
+static inline void tcg_gen_ext16s_i64(int ret, int arg)
+{
+    tcg_gen_ext16s_i32(ret, arg);
+    tcg_gen_sari_i32(ret + 1, ret, 31);
+}
+
+static inline void tcg_gen_ext32s_i64(int ret, int arg)
+{
+    tcg_gen_mov_i32(ret, arg);
+    tcg_gen_sari_i32(ret + 1, ret, 31);
+}
+
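+/* A worked example of the shift-pair fallback used above: for ext16s,
+   shifting left by 16 then arithmetic-shifting right by 16 replicates
+   bit 15 into the upper half, which is exactly 16->32 sign
+   extension.  */
+static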
inline void tcg_gen_trunc_i64_i32(int ret, int arg)
+{
+    tcg_gen_mov_i32(ret, arg);
+}
+
+static inline void tcg_gen_extu_i32_i64(int ret, int arg)
+{
+    tcg_gen_mov_i32(ret, arg);
+    tcg_gen_movi_i32(ret + 1, 0);
+}
+
+static inline void tcg_gen_ext_i32_i64(int ret, int arg)
+{
+    tcg_gen_mov_i32(ret, arg);
+    tcg_gen_sari_i32(ret + 1, ret, 31);
+}
+
+static inline void tcg_gen_bswap_i64(int ret, int arg)
+{
+    int t0, t1;
+    t0 = tcg_temp_new(TCG_TYPE_I32);
+    t1 = tcg_temp_new(TCG_TYPE_I32);
+
+    tcg_gen_bswap_i32(t0, arg);
+    tcg_gen_bswap_i32(t1, arg + 1);
+    tcg_gen_mov_i32(ret, t1);
+    tcg_gen_mov_i32(ret + 1, t0);
+}
+#else
+
+static inline void tcg_gen_ext8s_i64(int ret, int arg)
+{
+#ifdef TCG_TARGET_HAS_ext8s_i64
+    tcg_gen_op2(INDEX_op_ext8s_i64, ret, arg);
+#else
+    tcg_gen_shli_i64(ret, arg, 56);
+    tcg_gen_sari_i64(ret, ret, 56);
+#endif
+}
+
+static inline void tcg_gen_ext16s_i64(int ret, int arg)
+{
+#ifdef TCG_TARGET_HAS_ext16s_i64
+    tcg_gen_op2(INDEX_op_ext16s_i64, ret, arg);
+#else
+    tcg_gen_shli_i64(ret, arg, 48);
+    tcg_gen_sari_i64(ret, ret, 48);
+#endif
+}
+
+static inline void tcg_gen_ext32s_i64(int ret, int arg)
+{
+#ifdef TCG_TARGET_HAS_ext32s_i64
+    tcg_gen_op2(INDEX_op_ext32s_i64, ret, arg);
+#else
+    tcg_gen_shli_i64(ret, arg, 32);
+    tcg_gen_sari_i64(ret, ret, 32);
+#endif
+}
+
+/* Note: we assume the target supports move between 32 and 64 bit
+   registers */
+static inline void tcg_gen_trunc_i64_i32(int ret, int arg)
+{
+    tcg_gen_mov_i32(ret, arg);
+}
+
+/* Note: we assume the target supports move between 32 and 64 bit
+   registers */
+static inline void tcg_gen_extu_i32_i64(int ret, int arg)
+{
+    tcg_gen_andi_i64(ret, arg, 0xffffffff);
+}
+
+/* Note: we assume the target supports move between 32 and 64 bit
+   registers */
+static inline void tcg_gen_ext_i32_i64(int ret, int arg)
+{
+    tcg_gen_ext32s_i64(ret, arg);
+}
+
+static inline void tcg_gen_bswap_i64(TCGArg ret, TCGArg arg)
+{
+#ifdef TCG_TARGET_HAS_bswap_i64
+    tcg_gen_op2(INDEX_op_bswap_i64, ret, arg);
+#else
+    TCGArg t0, t1;
+    t0 = tcg_temp_new(TCG_TYPE_I64);
+    t1 = tcg_temp_new(TCG_TYPE_I64);
+
+    tcg_gen_shli_i64(t0, arg, 56);
+
+    tcg_gen_andi_i64(t1, arg, 0x0000ff00);
+    tcg_gen_shli_i64(t1, t1, 40);
+    tcg_gen_or_i64(t0, t0, t1);
+
+    tcg_gen_andi_i64(t1, arg, 0x00ff0000);
+    tcg_gen_shli_i64(t1, t1, 24);
+    tcg_gen_or_i64(t0, t0, t1);
+
+    tcg_gen_andi_i64(t1, arg, 0xff000000);
+    tcg_gen_shli_i64(t1, t1, 8);
+    tcg_gen_or_i64(t0, t0, t1);
+
+    tcg_gen_shri_i64(t1, arg, 8);
+    tcg_gen_andi_i64(t1, t1, 0xff000000);
+    tcg_gen_or_i64(t0, t0, t1);
+
+    tcg_gen_shri_i64(t1, arg, 24);
+    tcg_gen_andi_i64(t1, t1, 0x00ff0000);
+    tcg_gen_or_i64(t0, t0, t1);
+
+    tcg_gen_shri_i64(t1, arg, 40);
+    tcg_gen_andi_i64(t1, t1, 0x0000ff00);
+    tcg_gen_or_i64(t0, t0, t1);
+
+    tcg_gen_shri_i64(t1, arg, 56);
+    tcg_gen_or_i64(ret, t0, t1);
+#endif
+}
+
+#endif
+
+/***************************************/
+static inline void tcg_gen_macro_2(int ret0, int ret1, int macro_id)
+{
+    tcg_gen_op3(INDEX_op_macro_2, ret0, ret1, macro_id);
+}
+
+/***************************************/
+/* QEMU specific operations. Their types depend on the QEMU CPU
+   type.
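+   The qemu_ld/st ops below take a mem_index argument selecting the
+   guest MMU mode; their operand count varies with TARGET_LONG_BITS
+   and the host register width (see tcg-opc.h).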
+ */
+#ifndef TARGET_LONG_BITS
+#error must include QEMU headers
+#endif
+
+static inline void tcg_gen_exit_tb(tcg_target_long val)
+{
+    tcg_gen_op1(INDEX_op_exit_tb, val);
+}
+
+static inline void tcg_gen_goto_tb(int idx)
+{
+    tcg_gen_op1(INDEX_op_goto_tb, idx);
+}
+
+#if TCG_TARGET_REG_BITS == 32
+static inline void tcg_gen_qemu_ld8u(int ret, int addr, int mem_index)
+{
+#if TARGET_LONG_BITS == 32
+    tcg_gen_op3(INDEX_op_qemu_ld8u, ret, addr, mem_index);
+#else
+    tcg_gen_op4(INDEX_op_qemu_ld8u, ret, addr, addr + 1, mem_index);
+    tcg_gen_movi_i32(ret + 1, 0);
+#endif
+}
+
+static inline void tcg_gen_qemu_ld8s(int ret, int addr, int mem_index)
+{
+#if TARGET_LONG_BITS == 32
+    tcg_gen_op3(INDEX_op_qemu_ld8s, ret, addr, mem_index);
+#else
+    tcg_gen_op4(INDEX_op_qemu_ld8s, ret, addr, addr + 1, mem_index);
+    tcg_gen_sari_i32(ret + 1, ret, 31);
+#endif
+}
+
+static inline void tcg_gen_qemu_ld16u(int ret, int addr, int mem_index)
+{
+#if TARGET_LONG_BITS == 32
+    tcg_gen_op3(INDEX_op_qemu_ld16u, ret, addr, mem_index);
+#else
+    tcg_gen_op4(INDEX_op_qemu_ld16u, ret, addr, addr + 1, mem_index);
+    tcg_gen_movi_i32(ret + 1, 0);
+#endif
+}
+
+static inline void tcg_gen_qemu_ld16s(int ret, int addr, int mem_index)
+{
+#if TARGET_LONG_BITS == 32
+    tcg_gen_op3(INDEX_op_qemu_ld16s, ret, addr, mem_index);
+#else
+    tcg_gen_op4(INDEX_op_qemu_ld16s, ret, addr, addr + 1, mem_index);
+    tcg_gen_sari_i32(ret + 1, ret, 31);
+#endif
+}
+
+static inline void tcg_gen_qemu_ld32u(int ret, int addr, int mem_index)
+{
+#if TARGET_LONG_BITS == 32
+    tcg_gen_op3(INDEX_op_qemu_ld32u, ret, addr, mem_index);
+#else
+    tcg_gen_op4(INDEX_op_qemu_ld32u, ret, addr, addr + 1, mem_index);
+    tcg_gen_movi_i32(ret + 1, 0);
+#endif
+}
+
+static inline void tcg_gen_qemu_ld32s(int ret, int addr, int mem_index)
+{
+#if TARGET_LONG_BITS == 32
+    /* a 32 bit load into a 32 bit destination needs no sign
+       extension, so the unsigned op is used */
+    tcg_gen_op3(INDEX_op_qemu_ld32u, ret, addr, mem_index);
+#else
+    tcg_gen_op4(INDEX_op_qemu_ld32u, ret, addr, addr + 1, mem_index);
+    tcg_gen_sari_i32(ret + 1, ret, 31);
+#endif
+}
+
+static inline void tcg_gen_qemu_ld64(int ret, int addr, int mem_index)
+{
+#if TARGET_LONG_BITS == 32
+    tcg_gen_op4(INDEX_op_qemu_ld64, ret, ret + 1, addr, mem_index);
+#else
+    tcg_gen_op5(INDEX_op_qemu_ld64, ret, ret + 1, addr, addr + 1, mem_index);
+#endif
+}
+
+static inline void tcg_gen_qemu_st8(int arg, int addr, int mem_index)
+{
+#if TARGET_LONG_BITS == 32
+    tcg_gen_op3(INDEX_op_qemu_st8, arg, addr, mem_index);
+#else
+    tcg_gen_op4(INDEX_op_qemu_st8, arg, addr, addr + 1, mem_index);
+#endif
+}
+
+static inline void tcg_gen_qemu_st16(int arg, int addr, int mem_index)
+{
+#if TARGET_LONG_BITS == 32
+    tcg_gen_op3(INDEX_op_qemu_st16, arg, addr, mem_index);
+#else
+    tcg_gen_op4(INDEX_op_qemu_st16, arg, addr, addr + 1, mem_index);
+#endif
+}
+
+static inline void tcg_gen_qemu_st32(int arg, int addr, int mem_index)
+{
+#if TARGET_LONG_BITS == 32
+    tcg_gen_op3(INDEX_op_qemu_st32, arg, addr, mem_index);
+#else
+    tcg_gen_op4(INDEX_op_qemu_st32, arg, addr, addr + 1, mem_index);
+#endif
+}
+
+static inline void tcg_gen_qemu_st64(int arg, int addr, int mem_index)
+{
+#if TARGET_LONG_BITS == 32
+    tcg_gen_op4(INDEX_op_qemu_st64, arg, arg + 1, addr, mem_index);
+#else
+    tcg_gen_op5(INDEX_op_qemu_st64, arg, arg + 1, addr, addr + 1, mem_index);
+#endif
+}
+
+#else /* TCG_TARGET_REG_BITS == 32 */
+
+static inline void tcg_gen_qemu_ld8u(int ret, int addr, int mem_index)
+{
+    tcg_gen_op3(INDEX_op_qemu_ld8u, ret, addr, mem_index);
+}
+
+static inline void tcg_gen_qemu_ld8s(int ret, int addr, int mem_index)
+{
+    tcg_gen_op3(INDEX_op_qemu_ld8s,
ret, addr, mem_index); +} + +static inline void tcg_gen_qemu_ld16u(int ret, int addr, int mem_index) +{ + tcg_gen_op3(INDEX_op_qemu_ld16u, ret, addr, mem_index); +} + +static inline void tcg_gen_qemu_ld16s(int ret, int addr, int mem_index) +{ + tcg_gen_op3(INDEX_op_qemu_ld16s, ret, addr, mem_index); +} + +static inline void tcg_gen_qemu_ld32u(int ret, int addr, int mem_index) +{ + tcg_gen_op3(INDEX_op_qemu_ld32u, ret, addr, mem_index); +} + +static inline void tcg_gen_qemu_ld32s(int ret, int addr, int mem_index) +{ + tcg_gen_op3(INDEX_op_qemu_ld32s, ret, addr, mem_index); +} + +static inline void tcg_gen_qemu_ld64(int ret, int addr, int mem_index) +{ + tcg_gen_op3(INDEX_op_qemu_ld64, ret, addr, mem_index); +} + +static inline void tcg_gen_qemu_st8(int arg, int addr, int mem_index) +{ + tcg_gen_op3(INDEX_op_qemu_st8, arg, addr, mem_index); +} + +static inline void tcg_gen_qemu_st16(int arg, int addr, int mem_index) +{ + tcg_gen_op3(INDEX_op_qemu_st16, arg, addr, mem_index); +} + +static inline void tcg_gen_qemu_st32(int arg, int addr, int mem_index) +{ + tcg_gen_op3(INDEX_op_qemu_st32, arg, addr, mem_index); +} + +static inline void tcg_gen_qemu_st64(int arg, int addr, int mem_index) +{ + tcg_gen_op3(INDEX_op_qemu_st64, arg, addr, mem_index); +} + +#endif /* TCG_TARGET_REG_BITS != 32 */ diff --git a/tcg/tcg-opc.h b/tcg/tcg-opc.h new file mode 100644 index 0000000000..ebf4f82fb8 --- /dev/null +++ b/tcg/tcg-opc.h @@ -0,0 +1,228 @@ +/* + * Tiny Code Generator for QEMU + * + * Copyright (c) 2008 Fabrice Bellard + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN + * THE SOFTWARE. 
+ */ +#include "dyngen-opc.h" + +#ifndef DEF2 +#define DEF2(name, oargs, iargs, cargs, flags) DEF(name, oargs + iargs + cargs, 0) +#endif + +/* predefined ops */ +DEF2(end, 0, 0, 0, 0) /* must be kept first */ +DEF2(nop, 0, 0, 0, 0) +DEF2(nop1, 0, 0, 1, 0) +DEF2(nop2, 0, 0, 2, 0) +DEF2(nop3, 0, 0, 3, 0) +DEF2(nopn, 0, 0, 1, 0) /* variable number of parameters */ +/* macro handling */ +DEF2(macro_2, 2, 0, 1, 0) +DEF2(macro_start, 0, 0, 2, 0) +DEF2(macro_end, 0, 0, 2, 0) +DEF2(macro_goto, 0, 0, 3, 0) + +DEF2(set_label, 0, 0, 1, 0) +DEF2(call, 0, 1, 2, 0) /* variable number of parameters */ +DEF2(jmp, 0, 1, 0, TCG_OPF_BB_END) +DEF2(br, 0, 0, 1, TCG_OPF_BB_END) + +DEF2(mov_i32, 1, 1, 0, 0) +DEF2(movi_i32, 1, 0, 1, 0) +/* load/store */ +DEF2(ld8u_i32, 1, 1, 1, 0) +DEF2(ld8s_i32, 1, 1, 1, 0) +DEF2(ld16u_i32, 1, 1, 1, 0) +DEF2(ld16s_i32, 1, 1, 1, 0) +DEF2(ld_i32, 1, 1, 1, 0) +DEF2(st8_i32, 0, 2, 1, 0) +DEF2(st16_i32, 0, 2, 1, 0) +DEF2(st_i32, 0, 2, 1, 0) +/* arith */ +DEF2(add_i32, 1, 2, 0, 0) +DEF2(sub_i32, 1, 2, 0, 0) +DEF2(mul_i32, 1, 2, 0, 0) +#ifdef TCG_TARGET_HAS_div_i32 +DEF2(div_i32, 1, 2, 0, 0) +DEF2(divu_i32, 1, 2, 0, 0) +DEF2(rem_i32, 1, 2, 0, 0) +DEF2(remu_i32, 1, 2, 0, 0) +#else +DEF2(div2_i32, 2, 3, 0, 0) +DEF2(divu2_i32, 2, 3, 0, 0) +#endif +DEF2(and_i32, 1, 2, 0, 0) +DEF2(or_i32, 1, 2, 0, 0) +DEF2(xor_i32, 1, 2, 0, 0) +/* shifts */ +DEF2(shl_i32, 1, 2, 0, 0) +DEF2(shr_i32, 1, 2, 0, 0) +DEF2(sar_i32, 1, 2, 0, 0) + +DEF2(brcond_i32, 0, 2, 2, TCG_OPF_BB_END) +#if TCG_TARGET_REG_BITS == 32 +DEF2(add2_i32, 2, 4, 0, 0) +DEF2(sub2_i32, 2, 4, 0, 0) +DEF2(brcond2_i32, 0, 4, 2, TCG_OPF_BB_END) +DEF2(mulu2_i32, 2, 2, 0, 0) +#endif +#ifdef TCG_TARGET_HAS_ext8s_i32 +DEF2(ext8s_i32, 1, 1, 0, 0) +#endif +#ifdef TCG_TARGET_HAS_ext16s_i32 +DEF2(ext16s_i32, 1, 1, 0, 0) +#endif +#ifdef TCG_TARGET_HAS_bswap_i32 +DEF2(bswap_i32, 1, 1, 0, 0) +#endif + +#if TCG_TARGET_REG_BITS == 64 +DEF2(mov_i64, 1, 1, 0, 0) +DEF2(movi_i64, 1, 0, 1, 0) +/* load/store */ +DEF2(ld8u_i64, 1, 1, 1, 0) +DEF2(ld8s_i64, 1, 1, 1, 0) +DEF2(ld16u_i64, 1, 1, 1, 0) +DEF2(ld16s_i64, 1, 1, 1, 0) +DEF2(ld32u_i64, 1, 1, 1, 0) +DEF2(ld32s_i64, 1, 1, 1, 0) +DEF2(ld_i64, 1, 1, 1, 0) +DEF2(st8_i64, 0, 2, 1, 0) +DEF2(st16_i64, 0, 2, 1, 0) +DEF2(st32_i64, 0, 2, 1, 0) +DEF2(st_i64, 0, 2, 1, 0) +/* arith */ +DEF2(add_i64, 1, 2, 0, 0) +DEF2(sub_i64, 1, 2, 0, 0) +DEF2(mul_i64, 1, 2, 0, 0) +#ifdef TCG_TARGET_HAS_div_i64 +DEF2(div_i64, 1, 2, 0, 0) +DEF2(divu_i64, 1, 2, 0, 0) +DEF2(rem_i64, 1, 2, 0, 0) +DEF2(remu_i64, 1, 2, 0, 0) +#else +DEF2(div2_i64, 2, 3, 0, 0) +DEF2(divu2_i64, 2, 3, 0, 0) +#endif +DEF2(and_i64, 1, 2, 0, 0) +DEF2(or_i64, 1, 2, 0, 0) +DEF2(xor_i64, 1, 2, 0, 0) +/* shifts */ +DEF2(shl_i64, 1, 2, 0, 0) +DEF2(shr_i64, 1, 2, 0, 0) +DEF2(sar_i64, 1, 2, 0, 0) + +DEF2(brcond_i64, 0, 2, 2, TCG_OPF_BB_END) +#ifdef TCG_TARGET_HAS_ext8s_i64 +DEF2(ext8s_i64, 1, 1, 0, 0) +#endif +#ifdef TCG_TARGET_HAS_ext16s_i64 +DEF2(ext16s_i64, 1, 1, 0, 0) +#endif +#ifdef TCG_TARGET_HAS_ext32s_i64 +DEF2(ext32s_i64, 1, 1, 0, 0) +#endif +#ifdef TCG_TARGET_HAS_bswap_i64 +DEF2(bswap_i64, 1, 1, 0, 0) +#endif +#endif + +/* QEMU specific */ +DEF2(exit_tb, 0, 0, 1, TCG_OPF_BB_END) +DEF2(goto_tb, 0, 0, 1, TCG_OPF_BB_END) +/* Note: even if TARGET_LONG_BITS is not defined, the INDEX_op + constants must be defined */ +#if TCG_TARGET_REG_BITS == 32 +#if TARGET_LONG_BITS == 32 +DEF2(qemu_ld8u, 1, 1, 1, TCG_OPF_CALL_CLOBBER) +#else +DEF2(qemu_ld8u, 1, 2, 1, TCG_OPF_CALL_CLOBBER) +#endif +#if TARGET_LONG_BITS == 32 +DEF2(qemu_ld8s, 1, 1, 1, TCG_OPF_CALL_CLOBBER) +#else 
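+/* a 64-bit guest address on a 32-bit host occupies two input
+   arguments, hence the iargs count of 2 in these #else branches */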
+DEF2(qemu_ld8s, 1, 2, 1, TCG_OPF_CALL_CLOBBER) +#endif +#if TARGET_LONG_BITS == 32 +DEF2(qemu_ld16u, 1, 1, 1, TCG_OPF_CALL_CLOBBER) +#else +DEF2(qemu_ld16u, 1, 2, 1, TCG_OPF_CALL_CLOBBER) +#endif +#if TARGET_LONG_BITS == 32 +DEF2(qemu_ld16s, 1, 1, 1, TCG_OPF_CALL_CLOBBER) +#else +DEF2(qemu_ld16s, 1, 2, 1, TCG_OPF_CALL_CLOBBER) +#endif +#if TARGET_LONG_BITS == 32 +DEF2(qemu_ld32u, 1, 1, 1, TCG_OPF_CALL_CLOBBER) +#else +DEF2(qemu_ld32u, 1, 2, 1, TCG_OPF_CALL_CLOBBER) +#endif +#if TARGET_LONG_BITS == 32 +DEF2(qemu_ld32s, 1, 1, 1, TCG_OPF_CALL_CLOBBER) +#else +DEF2(qemu_ld32s, 1, 2, 1, TCG_OPF_CALL_CLOBBER) +#endif +#if TARGET_LONG_BITS == 32 +DEF2(qemu_ld64, 2, 1, 1, TCG_OPF_CALL_CLOBBER) +#else +DEF2(qemu_ld64, 2, 2, 1, TCG_OPF_CALL_CLOBBER) +#endif + +#if TARGET_LONG_BITS == 32 +DEF2(qemu_st8, 0, 2, 1, TCG_OPF_CALL_CLOBBER) +#else +DEF2(qemu_st8, 0, 3, 1, TCG_OPF_CALL_CLOBBER) +#endif +#if TARGET_LONG_BITS == 32 +DEF2(qemu_st16, 0, 2, 1, TCG_OPF_CALL_CLOBBER) +#else +DEF2(qemu_st16, 0, 3, 1, TCG_OPF_CALL_CLOBBER) +#endif +#if TARGET_LONG_BITS == 32 +DEF2(qemu_st32, 0, 2, 1, TCG_OPF_CALL_CLOBBER) +#else +DEF2(qemu_st32, 0, 3, 1, TCG_OPF_CALL_CLOBBER) +#endif +#if TARGET_LONG_BITS == 32 +DEF2(qemu_st64, 0, 3, 1, TCG_OPF_CALL_CLOBBER) +#else +DEF2(qemu_st64, 0, 4, 1, TCG_OPF_CALL_CLOBBER) +#endif + +#else /* TCG_TARGET_REG_BITS == 32 */ + +DEF2(qemu_ld8u, 1, 1, 1, TCG_OPF_CALL_CLOBBER) +DEF2(qemu_ld8s, 1, 1, 1, TCG_OPF_CALL_CLOBBER) +DEF2(qemu_ld16u, 1, 1, 1, TCG_OPF_CALL_CLOBBER) +DEF2(qemu_ld16s, 1, 1, 1, TCG_OPF_CALL_CLOBBER) +DEF2(qemu_ld32u, 1, 1, 1, TCG_OPF_CALL_CLOBBER) +DEF2(qemu_ld32s, 1, 1, 1, TCG_OPF_CALL_CLOBBER) +DEF2(qemu_ld64, 1, 1, 1, TCG_OPF_CALL_CLOBBER) + +DEF2(qemu_st8, 0, 2, 1, TCG_OPF_CALL_CLOBBER) +DEF2(qemu_st16, 0, 2, 1, TCG_OPF_CALL_CLOBBER) +DEF2(qemu_st32, 0, 2, 1, TCG_OPF_CALL_CLOBBER) +DEF2(qemu_st64, 0, 2, 1, TCG_OPF_CALL_CLOBBER) + +#endif /* TCG_TARGET_REG_BITS != 32 */ + +#undef DEF2 diff --git a/tcg/tcg-runtime.c b/tcg/tcg-runtime.c new file mode 100644 index 0000000000..f41703ec8b --- /dev/null +++ b/tcg/tcg-runtime.c @@ -0,0 +1,68 @@ +/* + * Tiny Code Generator for QEMU + * + * Copyright (c) 2008 Fabrice Bellard + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN + * THE SOFTWARE. 
+ */
+#include <stdarg.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#include <inttypes.h>
+
+#include "config.h"
+#include "osdep.h"
+#include "tcg.h"
+
+int64_t tcg_helper_shl_i64(int64_t arg1, int64_t arg2)
+{
+    return arg1 << arg2;
+}
+
+int64_t tcg_helper_shr_i64(int64_t arg1, int64_t arg2)
+{
+    return (uint64_t)arg1 >> arg2;
+}
+
+int64_t tcg_helper_sar_i64(int64_t arg1, int64_t arg2)
+{
+    return arg1 >> arg2;
+}
+
+int64_t tcg_helper_div_i64(int64_t arg1, int64_t arg2)
+{
+    return arg1 / arg2;
+}
+
+int64_t tcg_helper_rem_i64(int64_t arg1, int64_t arg2)
+{
+    return arg1 % arg2;
+}
+
+uint64_t tcg_helper_divu_i64(uint64_t arg1, uint64_t arg2)
+{
+    return arg1 / arg2;
+}
+
+uint64_t tcg_helper_remu_i64(uint64_t arg1, uint64_t arg2)
+{
+    return arg1 % arg2;
+}
+
diff --git a/tcg/tcg.c b/tcg/tcg.c
new file mode 100644
index 0000000000..211e7fc987
--- /dev/null
+++ b/tcg/tcg.c
@@ -0,0 +1,1781 @@
+/*
+ * Tiny Code Generator for QEMU
+ *
+ * Copyright (c) 2008 Fabrice Bellard
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+/* define it to suppress various consistency checks (faster) */
+#define NDEBUG
+
+/* define it to use liveness analysis (better code) */
+#define USE_LIVENESS_ANALYSIS
+
+#include <assert.h>
+#include <stdarg.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#include <inttypes.h>
+
+#include "config.h"
+#include "osdep.h"
+
+/* Note: the long term plan is to reduce the dependencies on the QEMU
+   CPU definitions.
Currently they are used for qemu_ld/st + instructions */ +#define NO_CPU_IO_DEFS +#include "cpu.h" +#include "exec-all.h" + +#include "tcg-op.h" +#include "elf.h" + + +static void patch_reloc(uint8_t *code_ptr, int type, + tcg_target_long value); + +TCGOpDef tcg_op_defs[] = { +#define DEF(s, n, copy_size) { #s, 0, 0, n, n, 0, copy_size }, +#define DEF2(s, iargs, oargs, cargs, flags) { #s, iargs, oargs, cargs, iargs + oargs + cargs, flags, 0 }, +#include "tcg-opc.h" +#undef DEF +#undef DEF2 +}; + +TCGRegSet tcg_target_available_regs[2]; +TCGRegSet tcg_target_call_clobber_regs; + +/* XXX: move that inside the context */ +uint16_t *gen_opc_ptr; +TCGArg *gen_opparam_ptr; + +static inline void tcg_out8(TCGContext *s, uint8_t v) +{ + *s->code_ptr++ = v; +} + +static inline void tcg_out16(TCGContext *s, uint16_t v) +{ + *(uint16_t *)s->code_ptr = v; + s->code_ptr += 2; +} + +static inline void tcg_out32(TCGContext *s, uint32_t v) +{ + *(uint32_t *)s->code_ptr = v; + s->code_ptr += 4; +} + +/* label relocation processing */ + +void tcg_out_reloc(TCGContext *s, uint8_t *code_ptr, int type, + int label_index, long addend) +{ + TCGLabel *l; + TCGRelocation *r; + + l = &s->labels[label_index]; + if (l->has_value) { + patch_reloc(code_ptr, type, l->u.value + addend); + } else { + /* add a new relocation entry */ + r = tcg_malloc(sizeof(TCGRelocation)); + r->type = type; + r->ptr = code_ptr; + r->addend = addend; + r->next = l->u.first_reloc; + l->u.first_reloc = r; + } +} + +static void tcg_out_label(TCGContext *s, int label_index, + tcg_target_long value) +{ + TCGLabel *l; + TCGRelocation *r; + + l = &s->labels[label_index]; + if (l->has_value) + tcg_abort(); + r = l->u.first_reloc; + while (r != NULL) { + patch_reloc(r->ptr, r->type, value + r->addend); + r = r->next; + } + l->has_value = 1; + l->u.value = value; +} + +int gen_new_label(void) +{ + TCGContext *s = &tcg_ctx; + int idx; + TCGLabel *l; + + if (s->nb_labels >= TCG_MAX_LABELS) + tcg_abort(); + idx = s->nb_labels++; + l = &s->labels[idx]; + l->has_value = 0; + l->u.first_reloc = NULL; + return idx; +} + +#include "tcg-target.c" + +/* XXX: factorize */ +static void pstrcpy(char *buf, int buf_size, const char *str) +{ + int c; + char *q = buf; + + if (buf_size <= 0) + return; + + for(;;) { + c = *str++; + if (c == 0 || q >= buf + buf_size - 1) + break; + *q++ = c; + } + *q = '\0'; +} + +#if TCG_TARGET_REG_BITS == 32 +/* strcat and truncate. 
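+   Like pstrcpy above, it never writes more than buf_size bytes,
+   including the trailing '\0'.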
*/ +static char *pstrcat(char *buf, int buf_size, const char *s) +{ + int len; + len = strlen(buf); + if (len < buf_size) + pstrcpy(buf + len, buf_size - len, s); + return buf; +} +#endif + +/* pool based memory allocation */ +void *tcg_malloc_internal(TCGContext *s, int size) +{ + TCGPool *p; + int pool_size; + + if (size > TCG_POOL_CHUNK_SIZE) { + /* big malloc: insert a new pool (XXX: could optimize) */ + p = qemu_malloc(sizeof(TCGPool) + size); + p->size = size; + if (s->pool_current) + s->pool_current->next = p; + else + s->pool_first = p; + p->next = s->pool_current; + } else { + p = s->pool_current; + if (!p) { + p = s->pool_first; + if (!p) + goto new_pool; + } else { + if (!p->next) { + new_pool: + pool_size = TCG_POOL_CHUNK_SIZE; + p = qemu_malloc(sizeof(TCGPool) + pool_size); + p->size = pool_size; + p->next = NULL; + if (s->pool_current) + s->pool_current->next = p; + else + s->pool_first = p; + } else { + p = p->next; + } + } + } + s->pool_current = p; + s->pool_cur = p->data + size; + s->pool_end = p->data + p->size; + return p->data; +} + +void tcg_pool_reset(TCGContext *s) +{ + s->pool_cur = s->pool_end = NULL; + s->pool_current = NULL; +} + +/* free all the pool */ +void tcg_pool_free(TCGContext *s) +{ + TCGPool *p, *p1; + + for(p = s->pool_first; p != NULL; p = p1) { + p1 = p->next; + qemu_free(p); + } + s->pool_first = NULL; + s->pool_cur = s->pool_end = NULL; +} + +void tcg_context_init(TCGContext *s) +{ + int op, total_args, n; + TCGOpDef *def; + TCGArgConstraint *args_ct; + int *sorted_args; + + memset(s, 0, sizeof(*s)); + s->temps = s->static_temps; + s->nb_globals = 0; + + /* Count total number of arguments and allocate the corresponding + space */ + total_args = 0; + for(op = 0; op < NB_OPS; op++) { + def = &tcg_op_defs[op]; + n = def->nb_iargs + def->nb_oargs; + total_args += n; + } + + args_ct = qemu_malloc(sizeof(TCGArgConstraint) * total_args); + sorted_args = qemu_malloc(sizeof(int) * total_args); + + for(op = 0; op < NB_OPS; op++) { + def = &tcg_op_defs[op]; + def->args_ct = args_ct; + def->sorted_args = sorted_args; + n = def->nb_iargs + def->nb_oargs; + sorted_args += n; + args_ct += n; + } + + tcg_target_init(s); +} + +void tcg_set_frame(TCGContext *s, int reg, + tcg_target_long start, tcg_target_long size) +{ + s->frame_start = start; + s->frame_end = start + size; + s->frame_reg = reg; +} + +void tcg_set_macro_func(TCGContext *s, TCGMacroFunc *func) +{ + s->macro_func = func; +} + +void tcg_func_start(TCGContext *s) +{ + tcg_pool_reset(s); + s->nb_temps = s->nb_globals; + s->labels = tcg_malloc(sizeof(TCGLabel) * TCG_MAX_LABELS); + s->nb_labels = 0; + s->current_frame_offset = s->frame_start; + + gen_opc_ptr = gen_opc_buf; + gen_opparam_ptr = gen_opparam_buf; +} + +static inline void tcg_temp_alloc(TCGContext *s, int n) +{ + if (n > TCG_MAX_TEMPS) + tcg_abort(); +} + +int tcg_global_reg_new(TCGType type, int reg, const char *name) +{ + TCGContext *s = &tcg_ctx; + TCGTemp *ts; + int idx; + +#if TCG_TARGET_REG_BITS == 32 + if (type != TCG_TYPE_I32) + tcg_abort(); +#endif + if (tcg_regset_test_reg(s->reserved_regs, reg)) + tcg_abort(); + idx = s->nb_globals; + tcg_temp_alloc(s, s->nb_globals + 1); + ts = &s->temps[s->nb_globals]; + ts->base_type = type; + ts->type = type; + ts->fixed_reg = 1; + ts->reg = reg; + ts->val_type = TEMP_VAL_REG; + ts->name = name; + s->nb_globals++; + tcg_regset_set_reg(s->reserved_regs, reg); + return idx; +} + +int tcg_global_mem_new(TCGType type, int reg, tcg_target_long offset, + const char *name) +{ + TCGContext *s = 
&tcg_ctx; + TCGTemp *ts; + int idx; + + idx = s->nb_globals; +#if TCG_TARGET_REG_BITS == 32 + if (type == TCG_TYPE_I64) { + char buf[64]; + tcg_temp_alloc(s, s->nb_globals + 1); + ts = &s->temps[s->nb_globals]; + ts->base_type = type; + ts->type = TCG_TYPE_I32; + ts->fixed_reg = 0; + ts->mem_allocated = 1; + ts->mem_reg = reg; +#ifdef TCG_TARGET_WORDS_BIGENDIAN + ts->mem_offset = offset + 4; +#else + ts->mem_offset = offset; +#endif + ts->val_type = TEMP_VAL_MEM; + pstrcpy(buf, sizeof(buf), name); + pstrcat(buf, sizeof(buf), "_0"); + ts->name = strdup(buf); + ts++; + + ts->base_type = type; + ts->type = TCG_TYPE_I32; + ts->fixed_reg = 0; + ts->mem_allocated = 1; + ts->mem_reg = reg; +#ifdef TCG_TARGET_WORDS_BIGENDIAN + ts->mem_offset = offset; +#else + ts->mem_offset = offset + 4; +#endif + ts->val_type = TEMP_VAL_MEM; + pstrcpy(buf, sizeof(buf), name); + pstrcat(buf, sizeof(buf), "_1"); + ts->name = strdup(buf); + + s->nb_globals += 2; + } else +#endif + { + tcg_temp_alloc(s, s->nb_globals + 1); + ts = &s->temps[s->nb_globals]; + ts->base_type = type; + ts->type = type; + ts->fixed_reg = 0; + ts->mem_allocated = 1; + ts->mem_reg = reg; + ts->mem_offset = offset; + ts->val_type = TEMP_VAL_MEM; + ts->name = name; + s->nb_globals++; + } + return idx; +} + +int tcg_temp_new(TCGType type) +{ + TCGContext *s = &tcg_ctx; + TCGTemp *ts; + int idx; + + idx = s->nb_temps; +#if TCG_TARGET_REG_BITS == 32 + if (type == TCG_TYPE_I64) { + tcg_temp_alloc(s, s->nb_temps + 1); + ts = &s->temps[s->nb_temps]; + ts->base_type = type; + ts->type = TCG_TYPE_I32; + ts->val_type = TEMP_VAL_DEAD; + ts->mem_allocated = 0; + ts->name = NULL; + ts++; + ts->base_type = TCG_TYPE_I32; + ts->type = TCG_TYPE_I32; + ts->val_type = TEMP_VAL_DEAD; + ts->mem_allocated = 0; + ts->name = NULL; + s->nb_temps += 2; + } else +#endif + { + tcg_temp_alloc(s, s->nb_temps + 1); + ts = &s->temps[s->nb_temps]; + ts->base_type = type; + ts->type = type; + ts->val_type = TEMP_VAL_DEAD; + ts->mem_allocated = 0; + ts->name = NULL; + s->nb_temps++; + } + return idx; +} + +int tcg_const_i32(int32_t val) +{ + TCGContext *s = &tcg_ctx; + TCGTemp *ts; + int idx; + + idx = s->nb_temps; + tcg_temp_alloc(s, idx + 1); + ts = &s->temps[idx]; + ts->base_type = ts->type = TCG_TYPE_I32; + ts->val_type = TEMP_VAL_CONST; + ts->name = NULL; + ts->val = val; + s->nb_temps++; + return idx; +} + +int tcg_const_i64(int64_t val) +{ + TCGContext *s = &tcg_ctx; + TCGTemp *ts; + int idx; + + idx = s->nb_temps; +#if TCG_TARGET_REG_BITS == 32 + tcg_temp_alloc(s, idx + 2); + ts = &s->temps[idx]; + ts->base_type = TCG_TYPE_I64; + ts->type = TCG_TYPE_I32; + ts->val_type = TEMP_VAL_CONST; + ts->name = NULL; + ts->val = val; + ts++; + ts->base_type = TCG_TYPE_I32; + ts->type = TCG_TYPE_I32; + ts->val_type = TEMP_VAL_CONST; + ts->name = NULL; + ts->val = val >> 32; + s->nb_temps += 2; +#else + tcg_temp_alloc(s, idx + 1); + ts = &s->temps[idx]; + ts->base_type = ts->type = TCG_TYPE_I64; + ts->val_type = TEMP_VAL_CONST; + ts->name = NULL; + ts->val = val; + s->nb_temps++; +#endif + return idx; +} + +void tcg_register_helper(void *func, const char *name) +{ + TCGContext *s = &tcg_ctx; + int n; + if ((s->nb_helpers + 1) > s->allocated_helpers) { + n = s->allocated_helpers; + if (n == 0) { + n = 4; + } else { + n *= 2; + } + s->helpers = realloc(s->helpers, n * sizeof(TCGHelperInfo)); + s->allocated_helpers = n; + } + s->helpers[s->nb_helpers].func = func; + s->helpers[s->nb_helpers].name = name; + s->nb_helpers++; +} + +const char *tcg_helper_get_name(TCGContext *s, void 
*func) +{ + int i; + + for(i = 0; i < s->nb_helpers; i++) { + if (s->helpers[i].func == func) + return s->helpers[i].name; + } + return NULL; +} + +static inline TCGType tcg_get_base_type(TCGContext *s, TCGArg arg) +{ + return s->temps[arg].base_type; +} + +static void tcg_gen_call_internal(TCGContext *s, TCGArg func, + unsigned int flags, + unsigned int nb_rets, const TCGArg *rets, + unsigned int nb_params, const TCGArg *params) +{ + int i; + *gen_opc_ptr++ = INDEX_op_call; + *gen_opparam_ptr++ = (nb_rets << 16) | (nb_params + 1); + for(i = 0; i < nb_rets; i++) { + *gen_opparam_ptr++ = rets[i]; + } + for(i = 0; i < nb_params; i++) { + *gen_opparam_ptr++ = params[i]; + } + *gen_opparam_ptr++ = func; + + *gen_opparam_ptr++ = flags; + /* total parameters, needed to go backward in the instruction stream */ + *gen_opparam_ptr++ = 1 + nb_rets + nb_params + 3; +} + + +#if TCG_TARGET_REG_BITS < 64 +/* Note: we convert the 64 bit args to 32 bit */ +void tcg_gen_call(TCGContext *s, TCGArg func, unsigned int flags, + unsigned int nb_rets, const TCGArg *rets, + unsigned int nb_params, const TCGArg *args1) +{ + TCGArg ret, *args2, rets_2[2], arg; + int j, i, call_type; + + if (nb_rets == 1) { + ret = rets[0]; + if (tcg_get_base_type(s, ret) == TCG_TYPE_I64) { + nb_rets = 2; + rets_2[0] = ret; + rets_2[1] = ret + 1; + rets = rets_2; + } + } + args2 = alloca((nb_params * 2) * sizeof(TCGArg)); + j = 0; + call_type = (flags & TCG_CALL_TYPE_MASK); + for(i = 0; i < nb_params; i++) { + arg = args1[i]; + if (tcg_get_base_type(s, arg) == TCG_TYPE_I64) { +#ifdef TCG_TARGET_I386 + /* REGPARM case: if the third parameter is 64 bit, it is + allocated on the stack */ + if (j == 2 && call_type == TCG_CALL_TYPE_REGPARM) { + call_type = TCG_CALL_TYPE_REGPARM_2; + flags = (flags & ~TCG_CALL_TYPE_MASK) | call_type; + } + args2[j++] = arg; + args2[j++] = arg + 1; +#else +#ifdef TCG_TARGET_WORDS_BIGENDIAN + args2[j++] = arg + 1; + args2[j++] = arg; +#else + args2[j++] = arg; + args2[j++] = arg + 1; +#endif +#endif + } else { + args2[j++] = arg; + } + } + tcg_gen_call_internal(s, func, flags, + nb_rets, rets, j, args2); +} +#else +void tcg_gen_call(TCGContext *s, TCGArg func, unsigned int flags, + unsigned int nb_rets, const TCGArg *rets, + unsigned int nb_params, const TCGArg *args1) +{ + tcg_gen_call_internal(s, func, flags, + nb_rets, rets, nb_params, args1); +} +#endif + +void tcg_gen_shifti_i64(TCGArg ret, TCGArg arg1, + int c, int right, int arith) +{ + if (c == 0) + return; + if (c >= 32) { + c -= 32; + if (right) { + if (arith) { + tcg_gen_sari_i32(ret, arg1 + 1, c); + tcg_gen_sari_i32(ret + 1, arg1 + 1, 31); + } else { + tcg_gen_shri_i32(ret, arg1 + 1, c); + tcg_gen_movi_i32(ret + 1, 0); + } + } else { + tcg_gen_shli_i32(ret + 1, arg1, c); + tcg_gen_movi_i32(ret, 0); + } + } else { + int t0, t1; + + t0 = tcg_temp_new(TCG_TYPE_I32); + t1 = tcg_temp_new(TCG_TYPE_I32); + if (right) { + tcg_gen_shli_i32(t0, arg1 + 1, 32 - c); + if (arith) + tcg_gen_sari_i32(t1, arg1 + 1, c); + else + tcg_gen_shri_i32(t1, arg1 + 1, c); + tcg_gen_shri_i32(ret, arg1, c); + tcg_gen_or_i32(ret, ret, t0); + tcg_gen_mov_i32(ret + 1, t1); + } else { + tcg_gen_shri_i32(t0, arg1, 32 - c); + /* Note: ret can be the same as arg1, so we use t1 */ + tcg_gen_shli_i32(t1, arg1, c); + tcg_gen_shli_i32(ret + 1, arg1 + 1, c); + tcg_gen_or_i32(ret + 1, ret + 1, t0); + tcg_gen_mov_i32(ret, t1); + } + } +} + +void tcg_reg_alloc_start(TCGContext *s) +{ + int i; + TCGTemp *ts; + for(i = 0; i < s->nb_globals; i++) { + ts = &s->temps[i]; + if (ts->fixed_reg) { 
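+            /* a global tied to a fixed host register always lives in
+               that register; all other globals start out in their
+               canonical memory slot */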
+ ts->val_type = TEMP_VAL_REG; + } else { + ts->val_type = TEMP_VAL_MEM; + } + } + for(i = 0; i < TCG_TARGET_NB_REGS; i++) { + s->reg_to_temp[i] = -1; + } +} + +char *tcg_get_arg_str(TCGContext *s, char *buf, int buf_size, TCGArg arg) +{ + TCGTemp *ts; + if (arg < s->nb_globals) { + pstrcpy(buf, buf_size, s->temps[arg].name); + } else { + ts = &s->temps[arg]; + if (ts->val_type == TEMP_VAL_CONST) { + snprintf(buf, buf_size, "$0x%" TCG_PRIlx , ts->val); + } else { + snprintf(buf, buf_size, "tmp%d", (int)arg - s->nb_globals); + } + } + return buf; +} + +void tcg_dump_ops(TCGContext *s, FILE *outfile) +{ + const uint16_t *opc_ptr; + const TCGArg *args; + TCGArg arg; + int c, i, k, nb_oargs, nb_iargs, nb_cargs; + const TCGOpDef *def; + char buf[128]; + + opc_ptr = gen_opc_buf; + args = gen_opparam_buf; + while (opc_ptr < gen_opc_ptr) { + c = *opc_ptr++; + def = &tcg_op_defs[c]; + fprintf(outfile, " %s ", def->name); + if (c == INDEX_op_call) { + TCGArg arg; + /* variable number of arguments */ + arg = *args++; + nb_oargs = arg >> 16; + nb_iargs = arg & 0xffff; + nb_cargs = def->nb_cargs; + } else if (c == INDEX_op_nopn) { + /* variable number of arguments */ + nb_cargs = *args; + nb_oargs = 0; + nb_iargs = 0; + } else { + nb_oargs = def->nb_oargs; + nb_iargs = def->nb_iargs; + nb_cargs = def->nb_cargs; + } + + k = 0; + for(i = 0; i < nb_oargs; i++) { + if (k != 0) + fprintf(outfile, ","); + fprintf(outfile, "%s", tcg_get_arg_str(s, buf, sizeof(buf), args[k++])); + } + for(i = 0; i < nb_iargs; i++) { + if (k != 0) + fprintf(outfile, ","); + /* XXX: dump helper name for call */ + fprintf(outfile, "%s", tcg_get_arg_str(s, buf, sizeof(buf), args[k++])); + } + for(i = 0; i < nb_cargs; i++) { + if (k != 0) + fprintf(outfile, ","); + arg = args[k++]; + fprintf(outfile, "$0x%" TCG_PRIlx, arg); + } + fprintf(outfile, "\n"); + args += nb_iargs + nb_oargs + nb_cargs; + } +} + +/* we give more priority to constraints with less registers */ +static int get_constraint_priority(const TCGOpDef *def, int k) +{ + const TCGArgConstraint *arg_ct; + + int i, n; + arg_ct = &def->args_ct[k]; + if (arg_ct->ct & TCG_CT_ALIAS) { + /* an alias is equivalent to a single register */ + n = 1; + } else { + if (!(arg_ct->ct & TCG_CT_REG)) + return 0; + n = 0; + for(i = 0; i < TCG_TARGET_NB_REGS; i++) { + if (tcg_regset_test_reg(arg_ct->u.regs, i)) + n++; + } + } + return TCG_TARGET_NB_REGS - n + 1; +} + +/* sort from highest priority to lowest */ +static void sort_constraints(TCGOpDef *def, int start, int n) +{ + int i, j, p1, p2, tmp; + + for(i = 0; i < n; i++) + def->sorted_args[start + i] = start + i; + if (n <= 1) + return; + for(i = 0; i < n - 1; i++) { + for(j = i + 1; j < n; j++) { + p1 = get_constraint_priority(def, def->sorted_args[start + i]); + p2 = get_constraint_priority(def, def->sorted_args[start + j]); + if (p1 < p2) { + tmp = def->sorted_args[start + i]; + def->sorted_args[start + i] = def->sorted_args[start + j]; + def->sorted_args[start + j] = tmp; + } + } + } +} + +void tcg_add_target_add_op_defs(const TCGTargetOpDef *tdefs) +{ + int op; + TCGOpDef *def; + const char *ct_str; + int i, nb_args; + + for(;;) { + if (tdefs->op < 0) + break; + op = tdefs->op; + assert(op >= 0 && op < NB_OPS); + def = &tcg_op_defs[op]; + nb_args = def->nb_iargs + def->nb_oargs; + for(i = 0; i < nb_args; i++) { + ct_str = tdefs->args_ct_str[i]; + tcg_regset_clear(def->args_ct[i].u.regs); + def->args_ct[i].ct = 0; + if (ct_str[0] >= '0' && ct_str[0] <= '9') { + int oarg; + oarg = ct_str[0] - '0'; + assert(oarg < def->nb_oargs); + 
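+                /* a digit constraint aliases this input to the output
+                   argument with that index, which must itself accept a
+                   register */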
assert(def->args_ct[oarg].ct & TCG_CT_REG); + /* TCG_CT_ALIAS is for the output arguments. The input + argument is tagged with TCG_CT_IALIAS for + informative purposes. */ + def->args_ct[i] = def->args_ct[oarg]; + def->args_ct[oarg].ct = i | TCG_CT_ALIAS; + def->args_ct[i].ct |= TCG_CT_IALIAS; + } else { + for(;;) { + if (*ct_str == '\0') + break; + switch(*ct_str) { + case 'i': + def->args_ct[i].ct |= TCG_CT_CONST; + ct_str++; + break; + default: + if (target_parse_constraint(&def->args_ct[i], &ct_str) < 0) { + fprintf(stderr, "Invalid constraint '%s' for arg %d of operation '%s'\n", + ct_str, i, def->name); + exit(1); + } + } + } + } + } + + /* sort the constraints (XXX: this is just an heuristic) */ + sort_constraints(def, 0, def->nb_oargs); + sort_constraints(def, def->nb_oargs, def->nb_iargs); + +#if 0 + { + int i; + + printf("%s: sorted=", def->name); + for(i = 0; i < def->nb_oargs + def->nb_iargs; i++) + printf(" %d", def->sorted_args[i]); + printf("\n"); + } +#endif + tdefs++; + } + +} + +#ifdef USE_LIVENESS_ANALYSIS + +/* set a nop for an operation using 'nb_args' */ +static inline void tcg_set_nop(TCGContext *s, uint16_t *opc_ptr, + TCGArg *args, int nb_args) +{ + if (nb_args == 0) { + *opc_ptr = INDEX_op_nop; + } else { + *opc_ptr = INDEX_op_nopn; + args[0] = nb_args; + args[nb_args - 1] = nb_args; + } +} + +/* liveness analysis: end of basic block: globals are live, temps are dead */ +static inline void tcg_la_bb_end(TCGContext *s, uint8_t *dead_temps) +{ + memset(dead_temps, 0, s->nb_globals); + memset(dead_temps + s->nb_globals, 1, s->nb_temps - s->nb_globals); +} + +/* Liveness analysis : update the opc_dead_iargs array to tell if a + given input arguments is dead. Instructions updating dead + temporaries are removed. */ +void tcg_liveness_analysis(TCGContext *s) +{ + int i, op_index, op, nb_args, nb_iargs, nb_oargs, arg, nb_ops; + TCGArg *args; + const TCGOpDef *def; + uint8_t *dead_temps; + unsigned int dead_iargs; + + gen_opc_ptr++; /* skip end */ + + nb_ops = gen_opc_ptr - gen_opc_buf; + + /* XXX: make it really dynamic */ + s->op_dead_iargs = tcg_malloc(OPC_BUF_SIZE * sizeof(uint16_t)); + + dead_temps = tcg_malloc(s->nb_temps); + memset(dead_temps, 1, s->nb_temps); + + args = gen_opparam_ptr; + op_index = nb_ops - 1; + while (op_index >= 0) { + op = gen_opc_buf[op_index]; + def = &tcg_op_defs[op]; + switch(op) { + case INDEX_op_call: + nb_args = args[-1]; + args -= nb_args; + nb_iargs = args[0] & 0xffff; + nb_oargs = args[0] >> 16; + args++; + + /* output args are dead */ + for(i = 0; i < nb_oargs; i++) { + arg = args[i]; + dead_temps[arg] = 1; + } + + /* globals are live (they may be used by the call) */ + memset(dead_temps, 0, s->nb_globals); + + /* input args are live */ + dead_iargs = 0; + for(i = 0; i < nb_iargs; i++) { + arg = args[i + nb_oargs]; + if (dead_temps[arg]) { + dead_iargs |= (1 << i); + } + dead_temps[arg] = 0; + } + s->op_dead_iargs[op_index] = dead_iargs; + args--; + break; + case INDEX_op_set_label: + args--; + /* mark end of basic block */ + tcg_la_bb_end(s, dead_temps); + break; + case INDEX_op_nopn: + nb_args = args[-1]; + args -= nb_args; + break; + case INDEX_op_macro_2: + { + int dead_args[2], macro_id; + int saved_op_index, saved_arg_index; + int macro_op_index, macro_arg_index; + int macro_end_op_index, macro_end_arg_index; + int last_nb_temps; + + nb_args = 3; + args -= nb_args; + dead_args[0] = dead_temps[args[0]]; + dead_args[1] = dead_temps[args[1]]; + macro_id = args[2]; + + /* call the macro function which generate code + depending on 
the live outputs */ + saved_op_index = op_index; + saved_arg_index = args - gen_opparam_buf; + + /* add a macro start instruction */ + *gen_opc_ptr++ = INDEX_op_macro_start; + *gen_opparam_ptr++ = saved_op_index; + *gen_opparam_ptr++ = saved_arg_index; + + macro_op_index = gen_opc_ptr - gen_opc_buf; + macro_arg_index = gen_opparam_ptr - gen_opparam_buf; + + last_nb_temps = s->nb_temps; + + s->macro_func(s, macro_id, dead_args); + + /* realloc temp info (XXX: make it faster) */ + if (s->nb_temps > last_nb_temps) { + uint8_t *new_dead_temps; + + new_dead_temps = tcg_malloc(s->nb_temps); + memcpy(new_dead_temps, dead_temps, last_nb_temps); + memset(new_dead_temps + last_nb_temps, 1, + s->nb_temps - last_nb_temps); + dead_temps = new_dead_temps; + } + + macro_end_op_index = gen_opc_ptr - gen_opc_buf; + macro_end_arg_index = gen_opparam_ptr - gen_opparam_buf; + + /* end of macro: add a goto to the next instruction */ + *gen_opc_ptr++ = INDEX_op_macro_end; + *gen_opparam_ptr++ = op_index + 1; + *gen_opparam_ptr++ = saved_arg_index + nb_args; + + /* modify the macro operation to be a macro_goto */ + gen_opc_buf[op_index] = INDEX_op_macro_goto; + args[0] = macro_op_index; + args[1] = macro_arg_index; + args[2] = 0; /* dummy third arg to match the + macro parameters */ + + /* set the next instruction to the end of the macro */ + op_index = macro_end_op_index; + args = macro_end_arg_index + gen_opparam_buf; + } + break; + case INDEX_op_macro_start: + args -= 2; + op_index = args[0]; + args = gen_opparam_buf + args[1]; + break; + case INDEX_op_macro_goto: + case INDEX_op_macro_end: + tcg_abort(); /* should never happen in liveness analysis */ + case INDEX_op_end: + break; + /* XXX: optimize by hardcoding common cases (e.g. triadic ops) */ + default: + if (op > INDEX_op_end) { + args -= def->nb_args; + nb_iargs = def->nb_iargs; + nb_oargs = def->nb_oargs; + + /* Test if the operation can be removed because all + its outputs are dead. We may add a flag to + explicitely tell if the op has side + effects. 
Currently we assume that if nb_oargs == 0 + or OPF_BB_END is set, the operation has side + effects and cannot be removed */ + if (nb_oargs != 0 && !(def->flags & TCG_OPF_BB_END)) { + for(i = 0; i < nb_oargs; i++) { + arg = args[i]; + if (!dead_temps[arg]) + goto do_not_remove; + } + tcg_set_nop(s, gen_opc_buf + op_index, args, def->nb_args); +#ifdef CONFIG_PROFILER + { + extern int64_t dyngen_tcg_del_op_count; + dyngen_tcg_del_op_count++; + } +#endif + } else { + do_not_remove: + + /* output args are dead */ + for(i = 0; i < nb_oargs; i++) { + arg = args[i]; + dead_temps[arg] = 1; + } + + /* if end of basic block, update */ + if (def->flags & TCG_OPF_BB_END) { + tcg_la_bb_end(s, dead_temps); + } + + /* input args are live */ + dead_iargs = 0; + for(i = 0; i < nb_iargs; i++) { + arg = args[i + nb_oargs]; + if (dead_temps[arg]) { + dead_iargs |= (1 << i); + } + dead_temps[arg] = 0; + } + s->op_dead_iargs[op_index] = dead_iargs; + } + } else { + /* legacy dyngen operations */ + args -= def->nb_args; + /* mark end of basic block */ + tcg_la_bb_end(s, dead_temps); + } + break; + } + op_index--; + } + + if (args != gen_opparam_buf) + tcg_abort(); +} +#else +/* dummy liveness analysis */ +void tcg_liveness_analysis(TCGContext *s) +{ + int nb_ops; + nb_ops = gen_opc_ptr - gen_opc_buf; + + s->op_dead_iargs = tcg_malloc(nb_ops * sizeof(uint16_t)); + memset(s->op_dead_iargs, 0, nb_ops * sizeof(uint16_t)); +} +#endif + +#ifndef NDEBUG +static void dump_regs(TCGContext *s) +{ + TCGTemp *ts; + int i; + char buf[64]; + + for(i = 0; i < s->nb_temps; i++) { + ts = &s->temps[i]; + printf(" %10s: ", tcg_get_arg_str(s, buf, sizeof(buf), i)); + switch(ts->val_type) { + case TEMP_VAL_REG: + printf("%s", tcg_target_reg_names[ts->reg]); + break; + case TEMP_VAL_MEM: + printf("%d(%s)", (int)ts->mem_offset, tcg_target_reg_names[ts->mem_reg]); + break; + case TEMP_VAL_CONST: + printf("$0x%" TCG_PRIlx, ts->val); + break; + case TEMP_VAL_DEAD: + printf("D"); + break; + default: + printf("???"); + break; + } + printf("\n"); + } + + for(i = 0; i < TCG_TARGET_NB_REGS; i++) { + if (s->reg_to_temp[i] >= 0) { + printf("%s: %s\n", + tcg_target_reg_names[i], + tcg_get_arg_str(s, buf, sizeof(buf), s->reg_to_temp[i])); + } + } +} + +static void check_regs(TCGContext *s) +{ + int reg, k; + TCGTemp *ts; + char buf[64]; + + for(reg = 0; reg < TCG_TARGET_NB_REGS; reg++) { + k = s->reg_to_temp[reg]; + if (k >= 0) { + ts = &s->temps[k]; + if (ts->val_type != TEMP_VAL_REG || + ts->reg != reg) { + printf("Inconsistency for register %s:\n", + tcg_target_reg_names[reg]); + printf("reg state:\n"); + dump_regs(s); + tcg_abort(); + } + } + } + for(k = 0; k < s->nb_temps; k++) { + ts = &s->temps[k]; + if (ts->val_type == TEMP_VAL_REG && + !ts->fixed_reg && + s->reg_to_temp[ts->reg] != k) { + printf("Inconsistency for temp %s:\n", + tcg_get_arg_str(s, buf, sizeof(buf), k)); + printf("reg state:\n"); + dump_regs(s); + tcg_abort(); + } + } +} +#endif + +static void temp_allocate_frame(TCGContext *s, int temp) +{ + TCGTemp *ts; + ts = &s->temps[temp]; + s->current_frame_offset = (s->current_frame_offset + sizeof(tcg_target_long) - 1) & ~(sizeof(tcg_target_long) - 1); + if (s->current_frame_offset + sizeof(tcg_target_long) > s->frame_end) + abort(); + ts->mem_offset = s->current_frame_offset; + ts->mem_reg = s->frame_reg; + ts->mem_allocated = 1; + s->current_frame_offset += sizeof(tcg_target_long); +} + +/* free register 'reg' by spilling the corresponding temporary if necessary */ +static void tcg_reg_free(TCGContext *s, int reg) +{ + TCGTemp 
*ts; + int temp; + + temp = s->reg_to_temp[reg]; + if (temp != -1) { + ts = &s->temps[temp]; + assert(ts->val_type == TEMP_VAL_REG); + if (!ts->mem_coherent) { + if (!ts->mem_allocated) + temp_allocate_frame(s, temp); + tcg_out_st(s, reg, ts->mem_reg, ts->mem_offset); + } + ts->val_type = TEMP_VAL_MEM; + s->reg_to_temp[reg] = -1; + } +} + +/* Allocate a register belonging to reg1 & ~reg2 */ +static int tcg_reg_alloc(TCGContext *s, TCGRegSet reg1, TCGRegSet reg2) +{ + int i, reg; + TCGRegSet reg_ct; + + tcg_regset_andnot(reg_ct, reg1, reg2); + + /* first try free registers */ + for(i = 0; i < TCG_TARGET_NB_REGS; i++) { + reg = tcg_target_reg_alloc_order[i]; + if (tcg_regset_test_reg(reg_ct, reg) && s->reg_to_temp[reg] == -1) + return reg; + } + + /* XXX: do better spill choice */ + for(i = 0; i < TCG_TARGET_NB_REGS; i++) { + reg = tcg_target_reg_alloc_order[i]; + if (tcg_regset_test_reg(reg_ct, reg)) { + tcg_reg_free(s, reg); + return reg; + } + } + + tcg_abort(); +} + +/* at the end of a basic block, we assume all temporaries are dead and + all globals are stored at their canonical location */ +/* XXX: optimize by handling constants in another array ? */ +void tcg_reg_alloc_bb_end(TCGContext *s) +{ + TCGTemp *ts; + int i; + + for(i = 0; i < s->nb_globals; i++) { + ts = &s->temps[i]; + if (!ts->fixed_reg) { + if (ts->val_type == TEMP_VAL_REG) { + tcg_reg_free(s, ts->reg); + } + } + } + + for(i = s->nb_globals; i < s->nb_temps; i++) { + ts = &s->temps[i]; + if (ts->val_type != TEMP_VAL_CONST) { + if (ts->val_type == TEMP_VAL_REG) { + s->reg_to_temp[ts->reg] = -1; + } + ts->val_type = TEMP_VAL_DEAD; + } + } +} + +#define IS_DEAD_IARG(n) ((dead_iargs >> (n)) & 1) + +static void tcg_reg_alloc_mov(TCGContext *s, const TCGOpDef *def, + const TCGArg *args, + unsigned int dead_iargs) +{ + TCGTemp *ts, *ots; + int reg; + const TCGArgConstraint *arg_ct; + + ots = &s->temps[args[0]]; + ts = &s->temps[args[1]]; + arg_ct = &def->args_ct[0]; + + if (ts->val_type == TEMP_VAL_REG) { + if (IS_DEAD_IARG(0) && !ts->fixed_reg && !ots->fixed_reg) { + /* the mov can be suppressed */ + if (ots->val_type == TEMP_VAL_REG) + s->reg_to_temp[ots->reg] = -1; + reg = ts->reg; + s->reg_to_temp[reg] = -1; + ts->val_type = TEMP_VAL_DEAD; + } else { + if (ots->val_type == TEMP_VAL_REG) { + reg = ots->reg; + } else { + reg = tcg_reg_alloc(s, arg_ct->u.regs, s->reserved_regs); + } + if (ts->reg != reg) { + tcg_out_mov(s, reg, ts->reg); + } + } + } else if (ts->val_type == TEMP_VAL_MEM) { + if (ots->val_type == TEMP_VAL_REG) { + reg = ots->reg; + } else { + reg = tcg_reg_alloc(s, arg_ct->u.regs, s->reserved_regs); + } + tcg_out_ld(s, reg, ts->mem_reg, ts->mem_offset); + } else if (ts->val_type == TEMP_VAL_CONST) { + if (ots->val_type == TEMP_VAL_REG) { + reg = ots->reg; + } else { + reg = tcg_reg_alloc(s, arg_ct->u.regs, s->reserved_regs); + } + tcg_out_movi(s, ots->type, reg, ts->val); + } else { + tcg_abort(); + } + s->reg_to_temp[reg] = args[0]; + ots->reg = reg; + ots->val_type = TEMP_VAL_REG; + ots->mem_coherent = 0; +} + +static void tcg_reg_alloc_op(TCGContext *s, + const TCGOpDef *def, int opc, + const TCGArg *args, + unsigned int dead_iargs) +{ + TCGRegSet allocated_regs; + int i, k, nb_iargs, nb_oargs, reg; + TCGArg arg; + const TCGArgConstraint *arg_ct; + TCGTemp *ts; + TCGArg new_args[TCG_MAX_OP_ARGS]; + int const_args[TCG_MAX_OP_ARGS]; + + nb_oargs = def->nb_oargs; + nb_iargs = def->nb_iargs; + + /* copy constants */ + memcpy(new_args + nb_oargs + nb_iargs, + args + nb_oargs + nb_iargs, + sizeof(TCGArg) * 
def->nb_cargs); + + /* satisfy input constraints */ + tcg_regset_set(allocated_regs, s->reserved_regs); + for(k = 0; k < nb_iargs; k++) { + i = def->sorted_args[nb_oargs + k]; + arg = args[i]; + arg_ct = &def->args_ct[i]; + ts = &s->temps[arg]; + if (ts->val_type == TEMP_VAL_MEM) { + reg = tcg_reg_alloc(s, arg_ct->u.regs, allocated_regs); + tcg_out_ld(s, reg, ts->mem_reg, ts->mem_offset); + ts->val_type = TEMP_VAL_REG; + ts->reg = reg; + ts->mem_coherent = 1; + s->reg_to_temp[reg] = arg; + } else if (ts->val_type == TEMP_VAL_CONST) { + if (tcg_target_const_match(ts->val, arg_ct)) { + /* constant is OK for instruction */ + const_args[i] = 1; + new_args[i] = ts->val; + goto iarg_end; + } else { + /* need to move to a register */ + reg = tcg_reg_alloc(s, arg_ct->u.regs, allocated_regs); + tcg_out_movi(s, ts->type, reg, ts->val); + goto iarg_end1; + } + } + assert(ts->val_type == TEMP_VAL_REG); + if ((arg_ct->ct & TCG_CT_IALIAS) && + !IS_DEAD_IARG(i - nb_oargs)) { + /* if the input is aliased to an output and if it is + not dead after the instruction, we must allocate + a new register and move it */ + goto allocate_in_reg; + } + reg = ts->reg; + if (tcg_regset_test_reg(arg_ct->u.regs, reg)) { + /* nothing to do: the constraint is satisfied */ + } else { + allocate_in_reg: + /* allocate a new register matching the constraint + and move the temporary register into it */ + reg = tcg_reg_alloc(s, arg_ct->u.regs, allocated_regs); + tcg_out_mov(s, reg, ts->reg); + } + iarg_end1: + new_args[i] = reg; + const_args[i] = 0; + tcg_regset_set_reg(allocated_regs, reg); + iarg_end: ; + } + + /* mark dead temporaries and free the associated registers */ + for(i = 0; i < nb_iargs; i++) { + arg = args[nb_oargs + i]; + if (IS_DEAD_IARG(i)) { + ts = &s->temps[arg]; + if (ts->val_type != TEMP_VAL_CONST && !ts->fixed_reg) { + if (ts->val_type == TEMP_VAL_REG) + s->reg_to_temp[ts->reg] = -1; + ts->val_type = TEMP_VAL_DEAD; + } + } + } + + /* XXX: permit generic clobber register list ? 
*/ + if (def->flags & TCG_OPF_CALL_CLOBBER) { + for(reg = 0; reg < TCG_TARGET_NB_REGS; reg++) { + if (tcg_regset_test_reg(tcg_target_call_clobber_regs, reg)) { + tcg_reg_free(s, reg); + } + } + } + + /* satisfy the output constraints */ + tcg_regset_set(allocated_regs, s->reserved_regs); + for(k = 0; k < nb_oargs; k++) { + i = def->sorted_args[k]; + arg = args[i]; + arg_ct = &def->args_ct[i]; + ts = &s->temps[arg]; + if (arg_ct->ct & TCG_CT_ALIAS) { + reg = new_args[arg_ct->ct & ~TCG_CT_ALIAS]; + } else { + /* if fixed register, we try to use it */ + reg = ts->reg; + if (ts->fixed_reg && + tcg_regset_test_reg(arg_ct->u.regs, reg)) { + goto oarg_end; + } + reg = tcg_reg_alloc(s, arg_ct->u.regs, allocated_regs); + } + tcg_regset_set_reg(allocated_regs, reg); + /* if a fixed register is used, then a move will be done afterwards */ + if (!ts->fixed_reg) { + if (ts->val_type == TEMP_VAL_REG) + s->reg_to_temp[ts->reg] = -1; + ts->val_type = TEMP_VAL_REG; + ts->reg = reg; + /* temp value is modified, so the value kept in memory is + potentially not the same */ + ts->mem_coherent = 0; + s->reg_to_temp[reg] = arg; + } + oarg_end: + new_args[i] = reg; + } + + if (def->flags & TCG_OPF_BB_END) + tcg_reg_alloc_bb_end(s); + + /* emit instruction */ + tcg_out_op(s, opc, new_args, const_args); + + /* move the outputs in the correct register if needed */ + for(i = 0; i < nb_oargs; i++) { + ts = &s->temps[args[i]]; + reg = new_args[i]; + if (ts->fixed_reg && ts->reg != reg) { + tcg_out_mov(s, ts->reg, reg); + } + } +} + +static int tcg_reg_alloc_call(TCGContext *s, const TCGOpDef *def, + int opc, const TCGArg *args, + unsigned int dead_iargs) +{ + int nb_iargs, nb_oargs, flags, nb_regs, i, reg, nb_params; + TCGArg arg, func_arg; + TCGTemp *ts; + tcg_target_long stack_offset, call_stack_size; + int const_func_arg; + TCGRegSet allocated_regs; + const TCGArgConstraint *arg_ct; + + arg = *args++; + + nb_oargs = arg >> 16; + nb_iargs = arg & 0xffff; + nb_params = nb_iargs - 1; + + flags = args[nb_oargs + nb_iargs]; + + nb_regs = tcg_target_get_call_iarg_regs_count(flags); + if (nb_regs > nb_params) + nb_regs = nb_params; + + /* assign stack slots first */ + /* XXX: preallocate call stack */ + call_stack_size = (nb_params - nb_regs) * sizeof(tcg_target_long); + call_stack_size = (call_stack_size + TCG_TARGET_STACK_ALIGN - 1) & + ~(TCG_TARGET_STACK_ALIGN - 1); + tcg_out_addi(s, TCG_REG_CALL_STACK, -call_stack_size); + + stack_offset = 0; + for(i = nb_regs; i < nb_params; i++) { + arg = args[nb_oargs + i]; + ts = &s->temps[arg]; + if (ts->val_type == TEMP_VAL_REG) { + tcg_out_st(s, ts->reg, TCG_REG_CALL_STACK, stack_offset); + } else if (ts->val_type == TEMP_VAL_MEM) { + reg = tcg_reg_alloc(s, tcg_target_available_regs[ts->type], + s->reserved_regs); + /* XXX: not correct if reading values from the stack */ + tcg_out_ld(s, reg, ts->mem_reg, ts->mem_offset); + tcg_out_st(s, reg, TCG_REG_CALL_STACK, stack_offset); + } else if (ts->val_type == TEMP_VAL_CONST) { + reg = tcg_reg_alloc(s, tcg_target_available_regs[ts->type], + s->reserved_regs); + /* XXX: sign extend may be needed on some targets */ + tcg_out_movi(s, ts->type, reg, ts->val); + tcg_out_st(s, reg, TCG_REG_CALL_STACK, stack_offset); + } else { + tcg_abort(); + } + stack_offset += sizeof(tcg_target_long); + } + + /* assign input registers */ + tcg_regset_set(allocated_regs, s->reserved_regs); + for(i = 0; i < nb_regs; i++) { + arg = args[nb_oargs + i]; + ts = &s->temps[arg]; + reg = tcg_target_call_iarg_regs[i]; + tcg_reg_free(s, reg); + if (ts->val_type == 
TEMP_VAL_REG) { + if (ts->reg != reg) { + tcg_out_mov(s, reg, ts->reg); + } + } else if (ts->val_type == TEMP_VAL_MEM) { + tcg_out_ld(s, reg, ts->mem_reg, ts->mem_offset); + } else if (ts->val_type == TEMP_VAL_CONST) { + /* XXX: sign extend ? */ + tcg_out_movi(s, ts->type, reg, ts->val); + } else { + tcg_abort(); + } + tcg_regset_set_reg(allocated_regs, reg); + } + + /* assign function address */ + func_arg = args[nb_oargs + nb_iargs - 1]; + arg_ct = &def->args_ct[0]; + ts = &s->temps[func_arg]; + const_func_arg = 0; + if (ts->val_type == TEMP_VAL_MEM) { + reg = tcg_reg_alloc(s, arg_ct->u.regs, allocated_regs); + tcg_out_ld(s, reg, ts->mem_reg, ts->mem_offset); + func_arg = reg; + } else if (ts->val_type == TEMP_VAL_REG) { + reg = ts->reg; + if (!tcg_regset_test_reg(arg_ct->u.regs, reg)) { + reg = tcg_reg_alloc(s, arg_ct->u.regs, allocated_regs); + tcg_out_mov(s, reg, ts->reg); + } + func_arg = reg; + } else if (ts->val_type == TEMP_VAL_CONST) { + if (tcg_target_const_match(ts->val, arg_ct)) { + const_func_arg = 1; + func_arg = ts->val; + } else { + reg = tcg_reg_alloc(s, arg_ct->u.regs, allocated_regs); + tcg_out_movi(s, ts->type, reg, ts->val); + func_arg = reg; + } + } else { + tcg_abort(); + } + + /* mark dead temporaries and free the associated registers */ + for(i = 0; i < nb_params; i++) { + arg = args[nb_oargs + i]; + if (IS_DEAD_IARG(i)) { + ts = &s->temps[arg]; + if (ts->val_type != TEMP_VAL_CONST && !ts->fixed_reg) { + if (ts->val_type == TEMP_VAL_REG) + s->reg_to_temp[ts->reg] = -1; + ts->val_type = TEMP_VAL_DEAD; + } + } + } + + /* clobber call registers */ + for(reg = 0; reg < TCG_TARGET_NB_REGS; reg++) { + if (tcg_regset_test_reg(tcg_target_call_clobber_regs, reg)) { + tcg_reg_free(s, reg); + } + } + + /* store globals and free associated registers (we assume the call + can modify any global). 
*/ + for(i = 0; i < s->nb_globals; i++) { + ts = &s->temps[i]; + if (!ts->fixed_reg) { + if (ts->val_type == TEMP_VAL_REG) { + tcg_reg_free(s, ts->reg); + } + } + } + + tcg_out_op(s, opc, &func_arg, &const_func_arg); + + tcg_out_addi(s, TCG_REG_CALL_STACK, call_stack_size); + + /* assign output registers and emit moves if needed */ + for(i = 0; i < nb_oargs; i++) { + arg = args[i]; + ts = &s->temps[arg]; + reg = tcg_target_call_oarg_regs[i]; + tcg_reg_free(s, reg); + if (ts->fixed_reg) { + if (ts->reg != reg) { + tcg_out_mov(s, ts->reg, reg); + } + } else { + if (ts->val_type == TEMP_VAL_REG) + s->reg_to_temp[ts->reg] = -1; + ts->val_type = TEMP_VAL_REG; + ts->reg = reg; + ts->mem_coherent = 0; + s->reg_to_temp[reg] = arg; + } + } + + return nb_iargs + nb_oargs + def->nb_cargs + 1; +} + +#ifdef CONFIG_PROFILER + +static int64_t dyngen_table_op_count[NB_OPS]; + +void dump_op_count(void) +{ + int i; + FILE *f; + f = fopen("/tmp/op1.log", "w"); + for(i = 0; i < INDEX_op_end; i++) { + fprintf(f, "%s %" PRId64 "\n", tcg_op_defs[i].name, dyngen_table_op_count[i]); + } + fclose(f); + f = fopen("/tmp/op2.log", "w"); + for(i = INDEX_op_end; i < NB_OPS; i++) { + fprintf(f, "%s %" PRId64 "\n", tcg_op_defs[i].name, dyngen_table_op_count[i]); + } + fclose(f); +} +#endif + + +static inline int tcg_gen_code_common(TCGContext *s, uint8_t *gen_code_buf, + int do_search_pc, + const uint8_t *searched_pc) +{ + int opc, op_index, macro_op_index; + const TCGOpDef *def; + unsigned int dead_iargs; + const TCGArg *args; + +#ifdef DEBUG_DISAS + if (unlikely(loglevel & CPU_LOG_TB_OP)) { + fprintf(logfile, "OP:\n"); + tcg_dump_ops(s, logfile); + fprintf(logfile, "\n"); + } +#endif + + tcg_liveness_analysis(s); + +#ifdef DEBUG_DISAS + if (unlikely(loglevel & CPU_LOG_TB_OP_OPT)) { + fprintf(logfile, "OP after la:\n"); + tcg_dump_ops(s, logfile); + fprintf(logfile, "\n"); + } +#endif + + tcg_reg_alloc_start(s); + + s->code_buf = gen_code_buf; + s->code_ptr = gen_code_buf; + + macro_op_index = -1; + args = gen_opparam_buf; + op_index = 0; + for(;;) { + opc = gen_opc_buf[op_index]; +#ifdef CONFIG_PROFILER + dyngen_table_op_count[opc]++; +#endif + def = &tcg_op_defs[opc]; +#if 0 + printf("%s: %d %d %d\n", def->name, + def->nb_oargs, def->nb_iargs, def->nb_cargs); + // dump_regs(s); +#endif + switch(opc) { + case INDEX_op_mov_i32: +#if TCG_TARGET_REG_BITS == 64 + case INDEX_op_mov_i64: +#endif + dead_iargs = s->op_dead_iargs[op_index]; + tcg_reg_alloc_mov(s, def, args, dead_iargs); + break; + case INDEX_op_nop: + case INDEX_op_nop1: + case INDEX_op_nop2: + case INDEX_op_nop3: + break; + case INDEX_op_nopn: + args += args[0]; + goto next; + case INDEX_op_macro_goto: + macro_op_index = op_index; /* only used for exceptions */ + op_index = args[0] - 1; + args = gen_opparam_buf + args[1]; + goto next; + case INDEX_op_macro_end: + macro_op_index = -1; /* only used for exceptions */ + op_index = args[0] - 1; + args = gen_opparam_buf + args[1]; + goto next; + case INDEX_op_macro_start: + /* must never happen here */ + tcg_abort(); + case INDEX_op_set_label: + tcg_reg_alloc_bb_end(s); + tcg_out_label(s, args[0], (long)s->code_ptr); + break; + case INDEX_op_call: + dead_iargs = s->op_dead_iargs[op_index]; + args += tcg_reg_alloc_call(s, def, opc, args, dead_iargs); + goto next; + case INDEX_op_end: + goto the_end; + case 0 ... 
INDEX_op_end - 1: + /* legacy dyngen ops */ +#ifdef CONFIG_PROFILER + { + extern int64_t dyngen_old_op_count; + dyngen_old_op_count++; + } +#endif + tcg_reg_alloc_bb_end(s); + if (do_search_pc) { + s->code_ptr += def->copy_size; + args += def->nb_args; + } else { + args = dyngen_op(s, opc, args); + } + goto next; + default: + /* Note: in order to speed up the code, it would be much + faster to have specialized register allocator functions for + some common argument patterns */ + dead_iargs = s->op_dead_iargs[op_index]; + tcg_reg_alloc_op(s, def, opc, args, dead_iargs); + break; + } + args += def->nb_args; + next: ; + if (do_search_pc) { + if (searched_pc < s->code_ptr) { + if (macro_op_index >= 0) + return macro_op_index; + else + return op_index; + } + } + op_index++; +#ifndef NDEBUG + check_regs(s); +#endif + } + the_end: + return -1; +} + +int dyngen_code(TCGContext *s, uint8_t *gen_code_buf) +{ +#ifdef CONFIG_PROFILER + { + extern int64_t dyngen_op_count; + extern int dyngen_op_count_max; + int n; + n = (gen_opc_ptr - gen_opc_buf); + dyngen_op_count += n; + if (n > dyngen_op_count_max) + dyngen_op_count_max = n; + } +#endif + + tcg_gen_code_common(s, gen_code_buf, 0, NULL); + + /* flush instruction cache */ + flush_icache_range((unsigned long)gen_code_buf, + (unsigned long)s->code_ptr); + return s->code_ptr - gen_code_buf; +} + +/* Return the index of the micro operation such that the pc after it is < + searched_pc. Note: gen_code_buf is accessed during the operation, but + its content should not be modified. Return -1 if not found. */ +int dyngen_code_search_pc(TCGContext *s, uint8_t *gen_code_buf, + const uint8_t *searched_pc) +{ + return tcg_gen_code_common(s, gen_code_buf, 1, searched_pc); +} diff --git a/tcg/tcg.h b/tcg/tcg.h new file mode 100644 index 0000000000..e57e5deb01 --- /dev/null +++ b/tcg/tcg.h @@ -0,0 +1,324 @@ +/* + * Tiny Code Generator for QEMU + * + * Copyright (c) 2008 Fabrice Bellard + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN + * THE SOFTWARE. 
+ */ +#include "tcg-target.h" + +#if TCG_TARGET_REG_BITS == 32 +typedef int32_t tcg_target_long; +typedef uint32_t tcg_target_ulong; +#define TCG_PRIlx PRIx32 +#define TCG_PRIld PRId32 +#elif TCG_TARGET_REG_BITS == 64 +typedef int64_t tcg_target_long; +typedef uint64_t tcg_target_ulong; +#define TCG_PRIlx PRIx64 +#define TCG_PRIld PRId64 +#else +#error unsupported +#endif + +#if TCG_TARGET_NB_REGS <= 32 +typedef uint32_t TCGRegSet; +#elif TCG_TARGET_NB_REGS <= 64 +typedef uint64_t TCGRegSet; +#else +#error unsupported +#endif + +enum { +#define DEF(s, n, copy_size) INDEX_op_ ## s, +#include "tcg-opc.h" +#undef DEF + NB_OPS, +}; + +#define tcg_regset_clear(d) (d) = 0 +#define tcg_regset_set(d, s) (d) = (s) +#define tcg_regset_set32(d, reg, val32) (d) |= (val32) << (reg) +#define tcg_regset_set_reg(d, r) (d) |= 1 << (r) +#define tcg_regset_reset_reg(d, r) (d) &= ~(1 << (r)) +#define tcg_regset_test_reg(d, r) (((d) >> (r)) & 1) +#define tcg_regset_or(d, a, b) (d) = (a) | (b) +#define tcg_regset_and(d, a, b) (d) = (a) & (b) +#define tcg_regset_andnot(d, a, b) (d) = (a) & ~(b) +#define tcg_regset_not(d, a) (d) = ~(a) + +typedef struct TCGRelocation { + struct TCGRelocation *next; + int type; + uint8_t *ptr; + tcg_target_long addend; +} TCGRelocation; + +typedef struct TCGLabel { + int has_value; + union { + tcg_target_ulong value; + TCGRelocation *first_reloc; + } u; +} TCGLabel; + +typedef struct TCGPool { + struct TCGPool *next; + int size; + uint8_t data[0]; +} TCGPool; + +#define TCG_POOL_CHUNK_SIZE 32768 + +#define TCG_MAX_LABELS 512 + +#define TCG_MAX_TEMPS 256 + +typedef int TCGType; + +#define TCG_TYPE_I32 0 +#define TCG_TYPE_I64 1 + +#if TCG_TARGET_REG_BITS == 32 +#define TCG_TYPE_PTR TCG_TYPE_I32 +#else +#define TCG_TYPE_PTR TCG_TYPE_I64 +#endif + +typedef tcg_target_ulong TCGArg; + +/* call flags */ +#define TCG_CALL_TYPE_MASK 0x000f +#define TCG_CALL_TYPE_STD 0x0000 /* standard C call */ +#define TCG_CALL_TYPE_REGPARM_1 0x0001 /* i386 style regparm call (1 reg) */ +#define TCG_CALL_TYPE_REGPARM_2 0x0002 /* i386 style regparm call (2 regs) */ +#define TCG_CALL_TYPE_REGPARM 0x0003 /* i386 style regparm call (3 regs) */ + +typedef enum { + TCG_COND_EQ, + TCG_COND_NE, + TCG_COND_LT, + TCG_COND_GE, + TCG_COND_LE, + TCG_COND_GT, + /* unsigned */ + TCG_COND_LTU, + TCG_COND_GEU, + TCG_COND_LEU, + TCG_COND_GTU, +} TCGCond; + +#define TEMP_VAL_DEAD 0 +#define TEMP_VAL_REG 1 +#define TEMP_VAL_MEM 2 +#define TEMP_VAL_CONST 3 + +/* XXX: optimize memory layout */ +typedef struct TCGTemp { + TCGType base_type; + TCGType type; + int val_type; + int reg; + tcg_target_long val; + int mem_reg; + tcg_target_long mem_offset; + unsigned int fixed_reg:1; + unsigned int mem_coherent:1; + unsigned int mem_allocated:1; + const char *name; +} TCGTemp; + +typedef struct TCGHelperInfo { + void *func; + const char *name; +} TCGHelperInfo; + +typedef struct TCGContext TCGContext; + +typedef void TCGMacroFunc(TCGContext *s, int macro_id, const int *dead_args); + +struct TCGContext { + uint8_t *pool_cur, *pool_end; + TCGPool *pool_first, *pool_current; + TCGLabel *labels; + int nb_labels; + TCGTemp *temps; /* globals first, temps after */ + int nb_globals; + int nb_temps; + /* constant indexes (end of temp array) */ + int const_start; + int const_end; + + /* goto_tb support */ + uint8_t *code_buf; + unsigned long *tb_next; + uint16_t *tb_next_offset; + uint16_t *tb_jmp_offset; /* != NULL if USE_DIRECT_JUMP */ + + uint16_t *op_dead_iargs; /* for each operation, each bit tells if the + corresponding input argument is 
dead */ + /* tells in which temporary a given register is. It does not take + into account fixed registers */ + int reg_to_temp[TCG_TARGET_NB_REGS]; + TCGRegSet reserved_regs; + tcg_target_long current_frame_offset; + tcg_target_long frame_start; + tcg_target_long frame_end; + int frame_reg; + + uint8_t *code_ptr; + TCGTemp static_temps[TCG_MAX_TEMPS]; + + TCGMacroFunc *macro_func; + TCGHelperInfo *helpers; + int nb_helpers; + int allocated_helpers; +}; + +extern TCGContext tcg_ctx; +extern uint16_t *gen_opc_ptr; +extern TCGArg *gen_opparam_ptr; +extern uint16_t gen_opc_buf[]; +extern TCGArg gen_opparam_buf[]; + +/* pool based memory allocation */ + +void *tcg_malloc_internal(TCGContext *s, int size); +void tcg_pool_reset(TCGContext *s); +void tcg_pool_delete(TCGContext *s); + +static inline void *tcg_malloc(int size) +{ + TCGContext *s = &tcg_ctx; + uint8_t *ptr, *ptr_end; + size = (size + sizeof(long) - 1) & ~(sizeof(long) - 1); + ptr = s->pool_cur; + ptr_end = ptr + size; + if (unlikely(ptr_end > s->pool_end)) { + return tcg_malloc_internal(&tcg_ctx, size); + } else { + s->pool_cur = ptr_end; + return ptr; + } +} + +void tcg_context_init(TCGContext *s); +void tcg_func_start(TCGContext *s); + +int dyngen_code(TCGContext *s, uint8_t *gen_code_buf); +int dyngen_code_search_pc(TCGContext *s, uint8_t *gen_code_buf, + const uint8_t *searched_pc); + +void tcg_set_frame(TCGContext *s, int reg, + tcg_target_long start, tcg_target_long size); +void tcg_set_macro_func(TCGContext *s, TCGMacroFunc *func); +int tcg_global_reg_new(TCGType type, int reg, const char *name); +int tcg_global_mem_new(TCGType type, int reg, tcg_target_long offset, + const char *name); +int tcg_temp_new(TCGType type); +char *tcg_get_arg_str(TCGContext *s, char *buf, int buf_size, TCGArg arg); + +#define TCG_CT_ALIAS 0x80 +#define TCG_CT_IALIAS 0x40 +#define TCG_CT_REG 0x01 +#define TCG_CT_CONST 0x02 /* any constant of register size */ + +typedef struct TCGArgConstraint { + uint32_t ct; + union { + TCGRegSet regs; + } u; +} TCGArgConstraint; + +#define TCG_MAX_OP_ARGS 16 + +#define TCG_OPF_BB_END 0x01 /* instruction defines the end of a basic + block */ +#define TCG_OPF_CALL_CLOBBER 0x02 /* instruction clobbers call registers */ + +typedef struct TCGOpDef { + const char *name; + uint8_t nb_oargs, nb_iargs, nb_cargs, nb_args; + uint8_t flags; + uint16_t copy_size; + TCGArgConstraint *args_ct; + int *sorted_args; +} TCGOpDef; + +typedef struct TCGTargetOpDef { + int op; + const char *args_ct_str[TCG_MAX_OP_ARGS]; +} TCGTargetOpDef; + +extern TCGOpDef tcg_op_defs[]; + +void tcg_target_init(TCGContext *s); + +#define tcg_abort() \ +do {\ + fprintf(stderr, "%s:%d: tcg fatal error\n", __FILE__, __LINE__);\ + abort();\ +} while (0) + +void tcg_add_target_add_op_defs(const TCGTargetOpDef *tdefs); + +void tcg_gen_call(TCGContext *s, TCGArg func, unsigned int flags, + unsigned int nb_rets, const TCGArg *rets, + unsigned int nb_params, const TCGArg *args1); +void tcg_gen_shifti_i64(TCGArg ret, TCGArg arg1, + int c, int right, int arith); + +/* only used for debugging purposes */ +void tcg_register_helper(void *func, const char *name); +#define TCG_HELPER(func) tcg_register_helper(func, #func) +const char *tcg_helper_get_name(TCGContext *s, void *func); +void tcg_dump_ops(TCGContext *s, FILE *outfile); + +void dump_ops(const uint16_t *opc_buf, const TCGArg *opparam_buf); +int tcg_const_i32(int32_t val); +int tcg_const_i64(int64_t val); + +#if TCG_TARGET_REG_BITS == 32 +#define tcg_const_ptr tcg_const_i32 +#define tcg_add_ptr tcg_add_i32 
+#define tcg_sub_ptr tcg_sub_i32 +#else +#define tcg_const_ptr tcg_const_i64 +#define tcg_add_ptr tcg_add_i64 +#define tcg_sub_ptr tcg_sub_i64 +#endif + +void tcg_out_reloc(TCGContext *s, uint8_t *code_ptr, int type, + int label_index, long addend); +void tcg_reg_alloc_start(TCGContext *s); +void tcg_reg_alloc_bb_end(TCGContext *s); +void tcg_liveness_analysis(TCGContext *s); +const TCGArg *tcg_gen_code_op(TCGContext *s, int opc, const TCGArg *args1, + unsigned int dead_iargs); + +const TCGArg *dyngen_op(TCGContext *s, int opc, const TCGArg *opparam_ptr); + +/* tcg-runtime.c */ +int64_t tcg_helper_shl_i64(int64_t arg1, int64_t arg2); +int64_t tcg_helper_shr_i64(int64_t arg1, int64_t arg2); +int64_t tcg_helper_sar_i64(int64_t arg1, int64_t arg2); +int64_t tcg_helper_div_i64(int64_t arg1, int64_t arg2); +int64_t tcg_helper_rem_i64(int64_t arg1, int64_t arg2); +uint64_t tcg_helper_divu_i64(uint64_t arg1, uint64_t arg2); +uint64_t tcg_helper_remu_i64(uint64_t arg1, uint64_t arg2); diff --git a/tcg/x86_64/tcg-target.c b/tcg/x86_64/tcg-target.c new file mode 100644 index 0000000000..a2f0e4c743 --- /dev/null +++ b/tcg/x86_64/tcg-target.c @@ -0,0 +1,1227 @@ +/* + * Tiny Code Generator for QEMU + * + * Copyright (c) 2008 Fabrice Bellard + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN + * THE SOFTWARE. 
+ */ +const char *tcg_target_reg_names[TCG_TARGET_NB_REGS] = { + "%rax", + "%rcx", + "%rdx", + "%rbx", + "%rsp", + "%rbp", + "%rsi", + "%rdi", + "%r8", + "%r9", + "%r10", + "%r11", + "%r12", + "%r13", + "%r14", + "%r15", +}; + +int tcg_target_reg_alloc_order[TCG_TARGET_NB_REGS] = { + TCG_REG_RDI, + TCG_REG_RSI, + TCG_REG_RDX, + TCG_REG_RCX, + TCG_REG_R8, + TCG_REG_R9, + TCG_REG_RAX, + TCG_REG_R10, + TCG_REG_R11, + + TCG_REG_RBP, + TCG_REG_RBX, + TCG_REG_R12, + TCG_REG_R13, + TCG_REG_R14, + TCG_REG_R15, +}; + +const int tcg_target_call_iarg_regs[6] = { + TCG_REG_RDI, + TCG_REG_RSI, + TCG_REG_RDX, + TCG_REG_RCX, + TCG_REG_R8, + TCG_REG_R9, +}; + +const int tcg_target_call_oarg_regs[2] = { + TCG_REG_RAX, + TCG_REG_RDX +}; + +static void patch_reloc(uint8_t *code_ptr, int type, + tcg_target_long value) +{ + switch(type) { + case R_X86_64_32: + if (value != (uint32_t)value) + tcg_abort(); + *(uint32_t *)code_ptr = value; + break; + case R_X86_64_32S: + if (value != (int32_t)value) + tcg_abort(); + *(uint32_t *)code_ptr = value; + break; + case R_386_PC32: + value -= (long)code_ptr; + if (value != (int32_t)value) + tcg_abort(); + *(uint32_t *)code_ptr = value; + break; + default: + tcg_abort(); + } +} + +/* maximum number of registers used for input function arguments */ +static inline int tcg_target_get_call_iarg_regs_count(int flags) +{ + return 6; +} + +/* parse target specific constraints */ +int target_parse_constraint(TCGArgConstraint *ct, const char **pct_str) +{ + const char *ct_str; + + ct_str = *pct_str; + switch(ct_str[0]) { + case 'a': + ct->ct |= TCG_CT_REG; + tcg_regset_set_reg(ct->u.regs, TCG_REG_RAX); + break; + case 'b': + ct->ct |= TCG_CT_REG; + tcg_regset_set_reg(ct->u.regs, TCG_REG_RBX); + break; + case 'c': + ct->ct |= TCG_CT_REG; + tcg_regset_set_reg(ct->u.regs, TCG_REG_RCX); + break; + case 'd': + ct->ct |= TCG_CT_REG; + tcg_regset_set_reg(ct->u.regs, TCG_REG_RDX); + break; + case 'S': + ct->ct |= TCG_CT_REG; + tcg_regset_set_reg(ct->u.regs, TCG_REG_RSI); + break; + case 'D': + ct->ct |= TCG_CT_REG; + tcg_regset_set_reg(ct->u.regs, TCG_REG_RDI); + break; + case 'q': + ct->ct |= TCG_CT_REG; + tcg_regset_set32(ct->u.regs, 0, 0xf); + break; + case 'r': + ct->ct |= TCG_CT_REG; + tcg_regset_set32(ct->u.regs, 0, 0xffff); + break; + case 'L': /* qemu_ld/st constraint */ + ct->ct |= TCG_CT_REG; + tcg_regset_set32(ct->u.regs, 0, 0xffff); + tcg_regset_reset_reg(ct->u.regs, TCG_REG_RSI); + tcg_regset_reset_reg(ct->u.regs, TCG_REG_RDI); + break; + case 'e': + ct->ct |= TCG_CT_CONST_S32; + break; + case 'Z': + ct->ct |= TCG_CT_CONST_U32; + break; + default: + return -1; + } + ct_str++; + *pct_str = ct_str; + return 0; +} + +/* test if a constant matches the constraint */ +static inline int tcg_target_const_match(tcg_target_long val, + const TCGArgConstraint *arg_ct) +{ + int ct; + ct = arg_ct->ct; + if (ct & TCG_CT_CONST) + return 1; + else if ((ct & TCG_CT_CONST_S32) && val == (int32_t)val) + return 1; + else if ((ct & TCG_CT_CONST_U32) && val == (uint32_t)val) + return 1; + else + return 0; +} + +#define ARITH_ADD 0 +#define ARITH_OR 1 +#define ARITH_ADC 2 +#define ARITH_SBB 3 +#define ARITH_AND 4 +#define ARITH_SUB 5 +#define ARITH_XOR 6 +#define ARITH_CMP 7 + +#define SHIFT_SHL 4 +#define SHIFT_SHR 5 +#define SHIFT_SAR 7 + +#define JCC_JMP (-1) +#define JCC_JO 0x0 +#define JCC_JNO 0x1 +#define JCC_JB 0x2 +#define JCC_JAE 0x3 +#define JCC_JE 0x4 +#define JCC_JNE 0x5 +#define JCC_JBE 0x6 +#define JCC_JA 0x7 +#define JCC_JS 0x8 +#define JCC_JNS 0x9 +#define JCC_JP 0xa +#define JCC_JNP 
0xb +#define JCC_JL 0xc +#define JCC_JGE 0xd +#define JCC_JLE 0xe +#define JCC_JG 0xf + +#define P_EXT 0x100 /* 0x0f opcode prefix */ +#define P_REXW 0x200 /* set rex.w = 1 */ +#define P_REX 0x400 /* force rex usage */ + +static const uint8_t tcg_cond_to_jcc[10] = { + [TCG_COND_EQ] = JCC_JE, + [TCG_COND_NE] = JCC_JNE, + [TCG_COND_LT] = JCC_JL, + [TCG_COND_GE] = JCC_JGE, + [TCG_COND_LE] = JCC_JLE, + [TCG_COND_GT] = JCC_JG, + [TCG_COND_LTU] = JCC_JB, + [TCG_COND_GEU] = JCC_JAE, + [TCG_COND_LEU] = JCC_JBE, + [TCG_COND_GTU] = JCC_JA, +}; + +static inline void tcg_out_opc(TCGContext *s, int opc, int r, int rm, int x) +{ + int rex; + rex = ((opc >> 6) & 0x8) | ((r >> 1) & 0x4) | + ((x >> 2) & 2) | ((rm >> 3) & 1); + if (rex || (opc & P_REX)) { + tcg_out8(s, rex | 0x40); + } + if (opc & P_EXT) + tcg_out8(s, 0x0f); + tcg_out8(s, opc); +} + +static inline void tcg_out_modrm(TCGContext *s, int opc, int r, int rm) +{ + tcg_out_opc(s, opc, r, rm, 0); + tcg_out8(s, 0xc0 | ((r & 7) << 3) | (rm & 7)); +} + +/* rm < 0 means no register index plus (-rm - 1 immediate bytes) */ +static inline void tcg_out_modrm_offset(TCGContext *s, int opc, int r, int rm, + tcg_target_long offset) +{ + if (rm < 0) { + tcg_target_long val; + tcg_out_opc(s, opc, r, 0, 0); + val = offset - ((tcg_target_long)s->code_ptr + 5 + (-rm - 1)); + if (val == (int32_t)val) { + /* eip relative */ + tcg_out8(s, 0x05 | ((r & 7) << 3)); + tcg_out32(s, val); + } else if (offset == (int32_t)offset) { + tcg_out8(s, 0x04 | ((r & 7) << 3)); + tcg_out8(s, 0x25); /* sib */ + tcg_out32(s, offset); + } else { + tcg_abort(); + } + } else if (offset == 0 && (rm & 7) != TCG_REG_RBP) { + tcg_out_opc(s, opc, r, rm, 0); + if ((rm & 7) == TCG_REG_RSP) { + tcg_out8(s, 0x04 | ((r & 7) << 3)); + tcg_out8(s, 0x24); + } else { + tcg_out8(s, 0x00 | ((r & 7) << 3) | (rm & 7)); + } + } else if ((int8_t)offset == offset) { + tcg_out_opc(s, opc, r, rm, 0); + if ((rm & 7) == TCG_REG_RSP) { + tcg_out8(s, 0x44 | ((r & 7) << 3)); + tcg_out8(s, 0x24); + } else { + tcg_out8(s, 0x40 | ((r & 7) << 3) | (rm & 7)); + } + tcg_out8(s, offset); + } else { + tcg_out_opc(s, opc, r, rm, 0); + if ((rm & 7) == TCG_REG_RSP) { + tcg_out8(s, 0x84 | ((r & 7) << 3)); + tcg_out8(s, 0x24); + } else { + tcg_out8(s, 0x80 | ((r & 7) << 3) | (rm & 7)); + } + tcg_out32(s, offset); + } +} + +/* XXX: incomplete. 
index must be different from ESP */ +static void tcg_out_modrm_offset2(TCGContext *s, int opc, int r, int rm, + int index, int shift, + tcg_target_long offset) +{ + int mod; + if (rm == -1) + tcg_abort(); + if (offset == 0 && (rm & 7) != TCG_REG_RBP) { + mod = 0; + } else if (offset == (int8_t)offset) { + mod = 0x40; + } else if (offset == (int32_t)offset) { + mod = 0x80; + } else { + tcg_abort(); + } + if (index == -1) { + tcg_out_opc(s, opc, r, rm, 0); + if ((rm & 7) == TCG_REG_RSP) { + tcg_out8(s, mod | ((r & 7) << 3) | 0x04); + tcg_out8(s, 0x04 | (rm & 7)); + } else { + tcg_out8(s, mod | ((r & 7) << 3) | (rm & 7)); + } + } else { + tcg_out_opc(s, opc, r, rm, index); + tcg_out8(s, mod | ((r & 7) << 3) | 0x04); + tcg_out8(s, (shift << 6) | ((index & 7) << 3) | (rm & 7)); + } + if (mod == 0x40) { + tcg_out8(s, offset); + } else if (mod == 0x80) { + tcg_out32(s, offset); + } +} + +static inline void tcg_out_mov(TCGContext *s, int ret, int arg) +{ + tcg_out_modrm(s, 0x8b | P_REXW, ret, arg); +} + +static inline void tcg_out_movi(TCGContext *s, TCGType type, + int ret, tcg_target_long arg) +{ + if (arg == 0) { + tcg_out_modrm(s, 0x01 | (ARITH_XOR << 3), ret, ret); /* xor r0,r0 */ + } else if (arg == (uint32_t)arg || type == TCG_TYPE_I32) { + tcg_out_opc(s, 0xb8 + (ret & 7), 0, ret, 0); + tcg_out32(s, arg); + } else if (arg == (int32_t)arg) { + tcg_out_modrm(s, 0xc7 | P_REXW, 0, ret); + tcg_out32(s, arg); + } else { + tcg_out_opc(s, (0xb8 + (ret & 7)) | P_REXW, 0, ret, 0); + tcg_out32(s, arg); + tcg_out32(s, arg >> 32); + } +} + +static inline void tcg_out_ld(TCGContext *s, int ret, + int arg1, tcg_target_long arg2) +{ + tcg_out_modrm_offset(s, 0x8b | P_REXW, ret, arg1, arg2); /* movq */ +} + +static inline void tcg_out_st(TCGContext *s, int arg, + int arg1, tcg_target_long arg2) +{ + tcg_out_modrm_offset(s, 0x89 | P_REXW, arg, arg1, arg2); /* movq */ +} + +static inline void tgen_arithi32(TCGContext *s, int c, int r0, int32_t val) +{ + if (val == (int8_t)val) { + tcg_out_modrm(s, 0x83, c, r0); + tcg_out8(s, val); + } else { + tcg_out_modrm(s, 0x81, c, r0); + tcg_out32(s, val); + } +} + +static inline void tgen_arithi64(TCGContext *s, int c, int r0, int64_t val) +{ + if (val == (int8_t)val) { + tcg_out_modrm(s, 0x83 | P_REXW, c, r0); + tcg_out8(s, val); + } else if (val == (int32_t)val) { + tcg_out_modrm(s, 0x81 | P_REXW, c, r0); + tcg_out32(s, val); + } else if (c == ARITH_AND && val == (uint32_t)val) { + tcg_out_modrm(s, 0x81, c, r0); + tcg_out32(s, val); + } else { + tcg_abort(); + } +} + +void tcg_out_addi(TCGContext *s, int reg, tcg_target_long val) +{ + if (val != 0) + tgen_arithi64(s, ARITH_ADD, reg, val); +} + +static void tcg_out_jxx(TCGContext *s, int opc, int label_index) +{ + int32_t val, val1; + TCGLabel *l = &s->labels[label_index]; + + if (l->has_value) { + val = l->u.value - (tcg_target_long)s->code_ptr; + val1 = val - 2; + if ((int8_t)val1 == val1) { + if (opc == -1) + tcg_out8(s, 0xeb); + else + tcg_out8(s, 0x70 + opc); + tcg_out8(s, val1); + } else { + if (opc == -1) { + tcg_out8(s, 0xe9); + tcg_out32(s, val - 5); + } else { + tcg_out8(s, 0x0f); + tcg_out8(s, 0x80 + opc); + tcg_out32(s, val - 6); + } + } + } else { + if (opc == -1) { + tcg_out8(s, 0xe9); + } else { + tcg_out8(s, 0x0f); + tcg_out8(s, 0x80 + opc); + } + tcg_out_reloc(s, s->code_ptr, R_386_PC32, label_index, -4); + tcg_out32(s, -4); + } +} + +static void tcg_out_brcond(TCGContext *s, int cond, + TCGArg arg1, TCGArg arg2, int const_arg2, + int label_index, int rexw) +{ + int c; + if (const_arg2) { + if (arg2 == 
0) { + /* use test */ + switch(cond) { + case TCG_COND_EQ: + c = JCC_JE; + break; + case TCG_COND_NE: + c = JCC_JNE; + break; + case TCG_COND_LT: + c = JCC_JS; + break; + case TCG_COND_GE: + c = JCC_JNS; + break; + default: + goto do_cmpi; + } + /* test r, r */ + tcg_out_modrm(s, 0x85 | rexw, arg1, arg1); + tcg_out_jxx(s, c, label_index); + } else { + do_cmpi: + if (rexw) + tgen_arithi64(s, ARITH_CMP, arg1, arg2); + else + tgen_arithi32(s, ARITH_CMP, arg1, arg2); + tcg_out_jxx(s, tcg_cond_to_jcc[cond], label_index); + } + } else { + tcg_out_modrm(s, 0x01 | (ARITH_CMP << 3) | rexw, arg1, arg2); + tcg_out_jxx(s, tcg_cond_to_jcc[cond], label_index); + } +} + +#if defined(CONFIG_SOFTMMU) +extern void __ldb_mmu(void); +extern void __ldw_mmu(void); +extern void __ldl_mmu(void); +extern void __ldq_mmu(void); + +extern void __stb_mmu(void); +extern void __stw_mmu(void); +extern void __stl_mmu(void); +extern void __stq_mmu(void); + + +static void *qemu_ld_helpers[4] = { + __ldb_mmu, + __ldw_mmu, + __ldl_mmu, + __ldq_mmu, +}; + +static void *qemu_st_helpers[4] = { + __stb_mmu, + __stw_mmu, + __stl_mmu, + __stq_mmu, +}; +#endif + +static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, + int opc) +{ + int addr_reg, data_reg, r0, r1, mem_index, s_bits, bswap, rexw; +#if defined(CONFIG_SOFTMMU) + uint8_t *label1_ptr, *label2_ptr; +#endif + + data_reg = *args++; + addr_reg = *args++; + mem_index = *args; + s_bits = opc & 3; + + r0 = TCG_REG_RDI; + r1 = TCG_REG_RSI; + +#if TARGET_LONG_BITS == 32 + rexw = 0; +#else + rexw = P_REXW; +#endif +#if defined(CONFIG_SOFTMMU) + /* mov */ + tcg_out_modrm(s, 0x8b | rexw, r1, addr_reg); + + /* mov */ + tcg_out_modrm(s, 0x8b | rexw, r0, addr_reg); + + tcg_out_modrm(s, 0xc1 | rexw, 5, r1); /* shr $x, r1 */ + tcg_out8(s, TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS); + + tcg_out_modrm(s, 0x81 | rexw, 4, r0); /* andl $x, r0 */ + tcg_out32(s, TARGET_PAGE_MASK | ((1 << s_bits) - 1)); + + tcg_out_modrm(s, 0x81, 4, r1); /* andl $x, r1 */ + tcg_out32(s, (CPU_TLB_SIZE - 1) << CPU_TLB_ENTRY_BITS); + + /* lea offset(r1, env), r1 */ + tcg_out_modrm_offset2(s, 0x8d | P_REXW, r1, r1, TCG_AREG0, 0, + offsetof(CPUState, tlb_table[mem_index][0].addr_read)); + + /* cmp 0(r1), r0 */ + tcg_out_modrm_offset(s, 0x3b | rexw, r0, r1, 0); + + /* mov */ + tcg_out_modrm(s, 0x8b | rexw, r0, addr_reg); + + /* je label1 */ + tcg_out8(s, 0x70 + JCC_JE); + label1_ptr = s->code_ptr; + s->code_ptr++; + + /* XXX: move that code to the end of the TB */ + tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_RSI, mem_index); + tcg_out8(s, 0xe8); + tcg_out32(s, (tcg_target_long)qemu_ld_helpers[s_bits] - + (tcg_target_long)s->code_ptr - 4); + + switch(opc) { + case 0 | 4: + /* movsbq */ + tcg_out_modrm(s, 0xbe | P_EXT | P_REXW, data_reg, TCG_REG_RAX); + break; + case 1 | 4: + /* movswq */ + tcg_out_modrm(s, 0xbf | P_EXT | P_REXW, data_reg, TCG_REG_RAX); + break; + case 2 | 4: + /* movslq */ + tcg_out_modrm(s, 0x63 | P_REXW, data_reg, TCG_REG_RAX); + break; + case 0: + case 1: + case 2: + default: + /* movl */ + tcg_out_modrm(s, 0x8b, data_reg, TCG_REG_RAX); + break; + case 3: + tcg_out_mov(s, data_reg, TCG_REG_RAX); + break; + } + + /* jmp label2 */ + tcg_out8(s, 0xeb); + label2_ptr = s->code_ptr; + s->code_ptr++; + + /* label1: */ + *label1_ptr = s->code_ptr - label1_ptr - 1; + + /* add x(r1), r0 */ + tcg_out_modrm_offset(s, 0x03 | P_REXW, r0, r1, offsetof(CPUTLBEntry, addend) - + offsetof(CPUTLBEntry, addr_read)); +#else + r0 = addr_reg; +#endif + +#ifdef TARGET_WORDS_BIGENDIAN + bswap = 1; +#else + bswap = 0; +#endif 
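+ /* At this point r0 holds a host address: the guest address plus the + TLB addend on the softmmu fast path, or the guest address itself + in user mode. The switch below emits the actual load: the low two + bits of opc give the log2 of the access size, bit 2 requests sign + extension (e.g. opc = 1 | 4 is a sign-extended 16-bit load) and + bswap selects a byte-swap fixup for big-endian guests. */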
+ switch(opc) { + case 0: + /* movzbl */ + tcg_out_modrm_offset(s, 0xb6 | P_EXT, data_reg, r0, 0); + break; + case 0 | 4: + /* movsbX */ + tcg_out_modrm_offset(s, 0xbe | P_EXT | rexw, data_reg, r0, 0); + break; + case 1: + /* movzwl */ + tcg_out_modrm_offset(s, 0xb7 | P_EXT, data_reg, r0, 0); + if (bswap) { + /* rolw $8, data_reg */ + tcg_out8(s, 0x66); + tcg_out_modrm(s, 0xc1, 0, data_reg); + tcg_out8(s, 8); + } + break; + case 1 | 4: + if (bswap) { + /* movzwl */ + tcg_out_modrm_offset(s, 0xb7 | P_EXT, data_reg, r0, 0); + /* rolw $8, data_reg */ + tcg_out8(s, 0x66); + tcg_out_modrm(s, 0xc1, 0, data_reg); + tcg_out8(s, 8); + + /* movswX data_reg, data_reg */ + tcg_out_modrm(s, 0xbf | P_EXT | rexw, data_reg, data_reg); + } else { + /* movswX */ + tcg_out_modrm_offset(s, 0xbf | P_EXT | rexw, data_reg, r0, 0); + } + break; + case 2: + /* movl (r0), data_reg */ + tcg_out_modrm_offset(s, 0x8b, data_reg, r0, 0); + if (bswap) { + /* bswap */ + tcg_out_opc(s, (0xc8 + (data_reg & 7)) | P_EXT, 0, data_reg, 0); + } + break; + case 2 | 4: + if (bswap) { + /* movl (r0), data_reg */ + tcg_out_modrm_offset(s, 0x8b, data_reg, r0, 0); + /* bswap */ + tcg_out_opc(s, (0xc8 + (data_reg & 7)) | P_EXT, 0, data_reg, 0); + /* movslq */ + tcg_out_modrm(s, 0x63 | P_REXW, data_reg, data_reg); + } else { + /* movslq */ + tcg_out_modrm_offset(s, 0x63 | P_REXW, data_reg, r0, 0); + } + break; + case 3: + /* movq (r0), data_reg */ + tcg_out_modrm_offset(s, 0x8b | P_REXW, data_reg, r0, 0); + if (bswap) { + /* bswap */ + tcg_out_opc(s, (0xc8 + (data_reg & 7)) | P_EXT | P_REXW, 0, data_reg, 0); + } + break; + default: + tcg_abort(); + } + +#if defined(CONFIG_SOFTMMU) + /* label2: */ + *label2_ptr = s->code_ptr - label2_ptr - 1; +#endif +} + +static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, + int opc) +{ + int addr_reg, data_reg, r0, r1, mem_index, s_bits, bswap, rexw; +#if defined(CONFIG_SOFTMMU) + uint8_t *label1_ptr, *label2_ptr; +#endif + + data_reg = *args++; + addr_reg = *args++; + mem_index = *args; + + s_bits = opc; + + r0 = TCG_REG_RDI; + r1 = TCG_REG_RSI; + +#if TARGET_LONG_BITS == 32 + rexw = 0; +#else + rexw = P_REXW; +#endif +#if defined(CONFIG_SOFTMMU) + /* mov */ + tcg_out_modrm(s, 0x8b | rexw, r1, addr_reg); + + /* mov */ + tcg_out_modrm(s, 0x8b | rexw, r0, addr_reg); + + tcg_out_modrm(s, 0xc1 | rexw, 5, r1); /* shr $x, r1 */ + tcg_out8(s, TARGET_PAGE_BITS - CPU_TLB_ENTRY_BITS); + + tcg_out_modrm(s, 0x81 | rexw, 4, r0); /* andl $x, r0 */ + tcg_out32(s, TARGET_PAGE_MASK | ((1 << s_bits) - 1)); + + tcg_out_modrm(s, 0x81, 4, r1); /* andl $x, r1 */ + tcg_out32(s, (CPU_TLB_SIZE - 1) << CPU_TLB_ENTRY_BITS); + + /* lea offset(r1, env), r1 */ + tcg_out_modrm_offset2(s, 0x8d | P_REXW, r1, r1, TCG_AREG0, 0, + offsetof(CPUState, tlb_table[mem_index][0].addr_write)); + + /* cmp 0(r1), r0 */ + tcg_out_modrm_offset(s, 0x3b | rexw, r0, r1, 0); + + /* mov */ + tcg_out_modrm(s, 0x8b | rexw, r0, addr_reg); + + /* je label1 */ + tcg_out8(s, 0x70 + JCC_JE); + label1_ptr = s->code_ptr; + s->code_ptr++; + + /* XXX: move that code at the end of the TB */ + switch(opc) { + case 0: + /* movzbl */ + tcg_out_modrm(s, 0xb6 | P_EXT, TCG_REG_RSI, data_reg); + break; + case 1: + /* movzwl */ + tcg_out_modrm(s, 0xb7 | P_EXT, TCG_REG_RSI, data_reg); + break; + case 2: + /* movl */ + tcg_out_modrm(s, 0x8b, TCG_REG_RSI, data_reg); + break; + default: + case 3: + tcg_out_mov(s, TCG_REG_RSI, data_reg); + break; + } + tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_RDX, mem_index); + tcg_out8(s, 0xe8); + tcg_out32(s, 
(tcg_target_long)qemu_st_helpers[s_bits] - + (tcg_target_long)s->code_ptr - 4); + + /* jmp label2 */ + tcg_out8(s, 0xeb); + label2_ptr = s->code_ptr; + s->code_ptr++; + + /* label1: */ + *label1_ptr = s->code_ptr - label1_ptr - 1; + + /* add x(r1), r0 */ + tcg_out_modrm_offset(s, 0x03 | P_REXW, r0, r1, offsetof(CPUTLBEntry, addend) - + offsetof(CPUTLBEntry, addr_write)); +#else + r0 = addr_reg; +#endif + +#ifdef TARGET_WORDS_BIGENDIAN + bswap = 1; +#else + bswap = 0; +#endif + switch(opc) { + case 0: + /* movb */ + tcg_out_modrm_offset(s, 0x88 | P_REX, data_reg, r0, 0); + break; + case 1: + if (bswap) { + tcg_out_modrm(s, 0x8b, r1, data_reg); /* movl */ + tcg_out8(s, 0x66); /* rolw $8, %ecx */ + tcg_out_modrm(s, 0xc1, 0, r1); + tcg_out8(s, 8); + data_reg = r1; + } + /* movw */ + tcg_out8(s, 0x66); + tcg_out_modrm_offset(s, 0x89, data_reg, r0, 0); + break; + case 2: + if (bswap) { + tcg_out_modrm(s, 0x8b, r1, data_reg); /* movl */ + /* bswap data_reg */ + tcg_out_opc(s, (0xc8 + r1) | P_EXT, 0, r1, 0); + data_reg = r1; + } + /* movl */ + tcg_out_modrm_offset(s, 0x89, data_reg, r0, 0); + break; + case 3: + if (bswap) { + tcg_out_mov(s, r1, data_reg); + /* bswap data_reg */ + tcg_out_opc(s, (0xc8 + r1) | P_EXT | P_REXW, 0, r1, 0); + data_reg = r1; + } + /* movq */ + tcg_out_modrm_offset(s, 0x89 | P_REXW, data_reg, r0, 0); + break; + default: + tcg_abort(); + } + +#if defined(CONFIG_SOFTMMU) + /* label2: */ + *label2_ptr = s->code_ptr - label2_ptr - 1; +#endif +} + +static inline void tcg_out_op(TCGContext *s, int opc, const TCGArg *args, + const int *const_args) +{ + int c; + + switch(opc) { + case INDEX_op_exit_tb: + tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_RAX, args[0]); + tcg_out8(s, 0xc3); /* ret */ + break; + case INDEX_op_goto_tb: + if (s->tb_jmp_offset) { + /* direct jump method */ + tcg_out8(s, 0xe9); /* jmp im */ + s->tb_jmp_offset[args[0]] = s->code_ptr - s->code_buf; + tcg_out32(s, 0); + } else { + /* indirect jump method */ + /* jmp Ev */ + tcg_out_modrm_offset(s, 0xff, 4, -1, + (tcg_target_long)(s->tb_next + + args[0])); + } + s->tb_next_offset[args[0]] = s->code_ptr - s->code_buf; + break; + case INDEX_op_call: + if (const_args[0]) { + tcg_out8(s, 0xe8); + tcg_out32(s, args[0] - (tcg_target_long)s->code_ptr - 4); + } else { + tcg_out_modrm(s, 0xff, 2, args[0]); + } + break; + case INDEX_op_jmp: + if (const_args[0]) { + tcg_out8(s, 0xe9); + tcg_out32(s, args[0] - (tcg_target_long)s->code_ptr - 4); + } else { + tcg_out_modrm(s, 0xff, 4, args[0]); + } + break; + case INDEX_op_br: + tcg_out_jxx(s, JCC_JMP, args[0]); + break; + case INDEX_op_movi_i32: + tcg_out_movi(s, TCG_TYPE_I32, args[0], (uint32_t)args[1]); + break; + case INDEX_op_movi_i64: + tcg_out_movi(s, TCG_TYPE_I64, args[0], args[1]); + break; + case INDEX_op_ld8u_i32: + case INDEX_op_ld8u_i64: + /* movzbl */ + tcg_out_modrm_offset(s, 0xb6 | P_EXT, args[0], args[1], args[2]); + break; + case INDEX_op_ld8s_i32: + /* movsbl */ + tcg_out_modrm_offset(s, 0xbe | P_EXT, args[0], args[1], args[2]); + break; + case INDEX_op_ld8s_i64: + /* movsbq */ + tcg_out_modrm_offset(s, 0xbe | P_EXT | P_REXW, args[0], args[1], args[2]); + break; + case INDEX_op_ld16u_i32: + case INDEX_op_ld16u_i64: + /* movzwl */ + tcg_out_modrm_offset(s, 0xb7 | P_EXT, args[0], args[1], args[2]); + break; + case INDEX_op_ld16s_i32: + /* movswl */ + tcg_out_modrm_offset(s, 0xbf | P_EXT, args[0], args[1], args[2]); + break; + case INDEX_op_ld16s_i64: + /* movswq */ + tcg_out_modrm_offset(s, 0xbf | P_EXT | P_REXW, args[0], args[1], args[2]); + break; + case 
INDEX_op_ld_i32: + case INDEX_op_ld32u_i64: + /* movl */ + tcg_out_modrm_offset(s, 0x8b, args[0], args[1], args[2]); + break; + case INDEX_op_ld32s_i64: + /* movslq */ + tcg_out_modrm_offset(s, 0x63 | P_REXW, args[0], args[1], args[2]); + break; + case INDEX_op_ld_i64: + /* movq */ + tcg_out_modrm_offset(s, 0x8b | P_REXW, args[0], args[1], args[2]); + break; + + case INDEX_op_st8_i32: + case INDEX_op_st8_i64: + /* movb */ + tcg_out_modrm_offset(s, 0x88 | P_REX, args[0], args[1], args[2]); + break; + case INDEX_op_st16_i32: + case INDEX_op_st16_i64: + /* movw */ + tcg_out8(s, 0x66); + tcg_out_modrm_offset(s, 0x89, args[0], args[1], args[2]); + break; + case INDEX_op_st_i32: + case INDEX_op_st32_i64: + /* movl */ + tcg_out_modrm_offset(s, 0x89, args[0], args[1], args[2]); + break; + case INDEX_op_st_i64: + /* movq */ + tcg_out_modrm_offset(s, 0x89 | P_REXW, args[0], args[1], args[2]); + break; + + case INDEX_op_sub_i32: + c = ARITH_SUB; + goto gen_arith32; + case INDEX_op_and_i32: + c = ARITH_AND; + goto gen_arith32; + case INDEX_op_or_i32: + c = ARITH_OR; + goto gen_arith32; + case INDEX_op_xor_i32: + c = ARITH_XOR; + goto gen_arith32; + case INDEX_op_add_i32: + c = ARITH_ADD; + gen_arith32: + if (const_args[2]) { + tgen_arithi32(s, c, args[0], args[2]); + } else { + tcg_out_modrm(s, 0x01 | (c << 3), args[2], args[0]); + } + break; + + case INDEX_op_sub_i64: + c = ARITH_SUB; + goto gen_arith64; + case INDEX_op_and_i64: + c = ARITH_AND; + goto gen_arith64; + case INDEX_op_or_i64: + c = ARITH_OR; + goto gen_arith64; + case INDEX_op_xor_i64: + c = ARITH_XOR; + goto gen_arith64; + case INDEX_op_add_i64: + c = ARITH_ADD; + gen_arith64: + if (const_args[2]) { + tgen_arithi64(s, c, args[0], args[2]); + } else { + tcg_out_modrm(s, 0x01 | (c << 3) | P_REXW, args[2], args[0]); + } + break; + + case INDEX_op_mul_i32: + if (const_args[2]) { + int32_t val; + val = args[2]; + if (val == (int8_t)val) { + tcg_out_modrm(s, 0x6b, args[0], args[0]); + tcg_out8(s, val); + } else { + tcg_out_modrm(s, 0x69, args[0], args[0]); + tcg_out32(s, val); + } + } else { + tcg_out_modrm(s, 0xaf | P_EXT, args[0], args[2]); + } + break; + case INDEX_op_mul_i64: + if (const_args[2]) { + int32_t val; + val = args[2]; + if (val == (int8_t)val) { + tcg_out_modrm(s, 0x6b | P_REXW, args[0], args[0]); + tcg_out8(s, val); + } else { + tcg_out_modrm(s, 0x69 | P_REXW, args[0], args[0]); + tcg_out32(s, val); + } + } else { + tcg_out_modrm(s, 0xaf | P_EXT | P_REXW, args[0], args[2]); + } + break; + case INDEX_op_div2_i32: + tcg_out_modrm(s, 0xf7, 7, args[4]); + break; + case INDEX_op_divu2_i32: + tcg_out_modrm(s, 0xf7, 6, args[4]); + break; + case INDEX_op_div2_i64: + tcg_out_modrm(s, 0xf7 | P_REXW, 7, args[4]); + break; + case INDEX_op_divu2_i64: + tcg_out_modrm(s, 0xf7 | P_REXW, 6, args[4]); + break; + + case INDEX_op_shl_i32: + c = SHIFT_SHL; + gen_shift32: + if (const_args[2]) { + if (args[2] == 1) { + tcg_out_modrm(s, 0xd1, c, args[0]); + } else { + tcg_out_modrm(s, 0xc1, c, args[0]); + tcg_out8(s, args[2]); + } + } else { + tcg_out_modrm(s, 0xd3, c, args[0]); + } + break; + case INDEX_op_shr_i32: + c = SHIFT_SHR; + goto gen_shift32; + case INDEX_op_sar_i32: + c = SHIFT_SAR; + goto gen_shift32; + + case INDEX_op_shl_i64: + c = SHIFT_SHL; + gen_shift64: + if (const_args[2]) { + if (args[2] == 1) { + tcg_out_modrm(s, 0xd1 | P_REXW, c, args[0]); + } else { + tcg_out_modrm(s, 0xc1 | P_REXW, c, args[0]); + tcg_out8(s, args[2]); + } + } else { + tcg_out_modrm(s, 0xd3 | P_REXW, c, args[0]); + } + break; + case INDEX_op_shr_i64: + c = 
+    case INDEX_op_shr_i64:
+        c = SHIFT_SHR;
+        goto gen_shift64;
+    case INDEX_op_sar_i64:
+        c = SHIFT_SAR;
+        goto gen_shift64;
+
+    case INDEX_op_brcond_i32:
+        tcg_out_brcond(s, args[2], args[0], args[1], const_args[1],
+                       args[3], 0);
+        break;
+    case INDEX_op_brcond_i64:
+        tcg_out_brcond(s, args[2], args[0], args[1], const_args[1],
+                       args[3], P_REXW);
+        break;
+
+    case INDEX_op_bswap_i32:
+        tcg_out_opc(s, (0xc8 + (args[0] & 7)) | P_EXT, 0, args[0], 0);
+        break;
+    case INDEX_op_bswap_i64:
+        tcg_out_opc(s, (0xc8 + (args[0] & 7)) | P_EXT | P_REXW, 0, args[0], 0);
+        break;
+
+    case INDEX_op_qemu_ld8u:
+        tcg_out_qemu_ld(s, args, 0);
+        break;
+    case INDEX_op_qemu_ld8s:
+        tcg_out_qemu_ld(s, args, 0 | 4);
+        break;
+    case INDEX_op_qemu_ld16u:
+        tcg_out_qemu_ld(s, args, 1);
+        break;
+    case INDEX_op_qemu_ld16s:
+        tcg_out_qemu_ld(s, args, 1 | 4);
+        break;
+    case INDEX_op_qemu_ld32u:
+        tcg_out_qemu_ld(s, args, 2);
+        break;
+    case INDEX_op_qemu_ld32s:
+        tcg_out_qemu_ld(s, args, 2 | 4);
+        break;
+    case INDEX_op_qemu_ld64:
+        tcg_out_qemu_ld(s, args, 3);
+        break;
+
+    case INDEX_op_qemu_st8:
+        tcg_out_qemu_st(s, args, 0);
+        break;
+    case INDEX_op_qemu_st16:
+        tcg_out_qemu_st(s, args, 1);
+        break;
+    case INDEX_op_qemu_st32:
+        tcg_out_qemu_st(s, args, 2);
+        break;
+    case INDEX_op_qemu_st64:
+        tcg_out_qemu_st(s, args, 3);
+        break;
+
+    default:
+        tcg_abort();
+    }
+}
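+
+/*
+ * A key for the constraint table below.  "r" is any register; "0"/"1"
+ * tie an input to output operand 0/1 (the x86 two-operand form); "i" is
+ * any immediate; "c" forces a shift count into %rcx; "a" and "d" pin the
+ * div2 operands to %rax/%rdx.  The x86_64-specific letters are assumed
+ * from the constraint parser earlier in this file: "e" accepts a
+ * sign-extended 32 bit constant (TCG_CT_CONST_S32), "Z" a zero-extended
+ * one (TCG_CT_CONST_U32), and "L" restricts qemu_ld/st operands to
+ * registers not clobbered by the softmmu slow path.  Reading one entry:
+ * { INDEX_op_add_i64, { "r", "0", "re" } } says the output may be any
+ * register, the first input must be allocated to the same register as
+ * the output, and the second may be a register or a sign-extended imm32.
+ */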
+
+static const TCGTargetOpDef x86_64_op_defs[] = {
+    { INDEX_op_exit_tb, { } },
+    { INDEX_op_goto_tb, { } },
+    { INDEX_op_call, { "ri" } }, /* XXX: might need a specific constant constraint */
+    { INDEX_op_jmp, { "ri" } }, /* XXX: might need a specific constant constraint */
+    { INDEX_op_br, { } },
+
+    { INDEX_op_mov_i32, { "r", "r" } },
+    { INDEX_op_movi_i32, { "r" } },
+    { INDEX_op_ld8u_i32, { "r", "r" } },
+    { INDEX_op_ld8s_i32, { "r", "r" } },
+    { INDEX_op_ld16u_i32, { "r", "r" } },
+    { INDEX_op_ld16s_i32, { "r", "r" } },
+    { INDEX_op_ld_i32, { "r", "r" } },
+    { INDEX_op_st8_i32, { "r", "r" } },
+    { INDEX_op_st16_i32, { "r", "r" } },
+    { INDEX_op_st_i32, { "r", "r" } },
+
+    { INDEX_op_add_i32, { "r", "0", "ri" } },
+    { INDEX_op_mul_i32, { "r", "0", "ri" } },
+    { INDEX_op_div2_i32, { "a", "d", "0", "1", "r" } },
+    { INDEX_op_divu2_i32, { "a", "d", "0", "1", "r" } },
+    { INDEX_op_sub_i32, { "r", "0", "ri" } },
+    { INDEX_op_and_i32, { "r", "0", "ri" } },
+    { INDEX_op_or_i32, { "r", "0", "ri" } },
+    { INDEX_op_xor_i32, { "r", "0", "ri" } },
+
+    { INDEX_op_shl_i32, { "r", "0", "ci" } },
+    { INDEX_op_shr_i32, { "r", "0", "ci" } },
+    { INDEX_op_sar_i32, { "r", "0", "ci" } },
+
+    { INDEX_op_brcond_i32, { "r", "ri" } },
+
+    { INDEX_op_mov_i64, { "r", "r" } },
+    { INDEX_op_movi_i64, { "r" } },
+    { INDEX_op_ld8u_i64, { "r", "r" } },
+    { INDEX_op_ld8s_i64, { "r", "r" } },
+    { INDEX_op_ld16u_i64, { "r", "r" } },
+    { INDEX_op_ld16s_i64, { "r", "r" } },
+    { INDEX_op_ld32u_i64, { "r", "r" } },
+    { INDEX_op_ld32s_i64, { "r", "r" } },
+    { INDEX_op_ld_i64, { "r", "r" } },
+    { INDEX_op_st8_i64, { "r", "r" } },
+    { INDEX_op_st16_i64, { "r", "r" } },
+    { INDEX_op_st32_i64, { "r", "r" } },
+    { INDEX_op_st_i64, { "r", "r" } },
+
+    { INDEX_op_add_i64, { "r", "0", "re" } },
+    { INDEX_op_mul_i64, { "r", "0", "re" } },
+    { INDEX_op_div2_i64, { "a", "d", "0", "1", "r" } },
+    { INDEX_op_divu2_i64, { "a", "d", "0", "1", "r" } },
+    { INDEX_op_sub_i64, { "r", "0", "re" } },
+    { INDEX_op_and_i64, { "r", "0", "reZ" } },
+    { INDEX_op_or_i64, { "r", "0", "re" } },
+    { INDEX_op_xor_i64, { "r", "0", "re" } },
+
+    { INDEX_op_shl_i64, { "r", "0", "ci" } },
+    { INDEX_op_shr_i64, { "r", "0", "ci" } },
+    { INDEX_op_sar_i64, { "r", "0", "ci" } },
+
+    { INDEX_op_brcond_i64, { "r", "re" } },
+
+    { INDEX_op_bswap_i32, { "r", "0" } },
+    { INDEX_op_bswap_i64, { "r", "0" } },
+
+    { INDEX_op_qemu_ld8u, { "r", "L" } },
+    { INDEX_op_qemu_ld8s, { "r", "L" } },
+    { INDEX_op_qemu_ld16u, { "r", "L" } },
+    { INDEX_op_qemu_ld16s, { "r", "L" } },
+    { INDEX_op_qemu_ld32u, { "r", "L" } },
+    { INDEX_op_qemu_ld32s, { "r", "L" } },
+    { INDEX_op_qemu_ld64, { "r", "L" } },
+
+    { INDEX_op_qemu_st8, { "L", "L" } },
+    { INDEX_op_qemu_st16, { "L", "L" } },
+    { INDEX_op_qemu_st32, { "L", "L" } },
+    { INDEX_op_qemu_st64, { "L", "L", "L" } },
+
+    { -1 },
+};
+
+void tcg_target_init(TCGContext *s)
+{
+    tcg_regset_set32(tcg_target_available_regs[TCG_TYPE_I32], 0, 0xffff);
+    tcg_regset_set32(tcg_target_available_regs[TCG_TYPE_I64], 0, 0xffff);
+    tcg_regset_set32(tcg_target_call_clobber_regs, 0,
+                     (1 << TCG_REG_RDI) |
+                     (1 << TCG_REG_RSI) |
+                     (1 << TCG_REG_RDX) |
+                     (1 << TCG_REG_RCX) |
+                     (1 << TCG_REG_R8) |
+                     (1 << TCG_REG_R9) |
+                     (1 << TCG_REG_RAX) |
+                     (1 << TCG_REG_R10) |
+                     (1 << TCG_REG_R11));
+
+    tcg_regset_clear(s->reserved_regs);
+    tcg_regset_set_reg(s->reserved_regs, TCG_REG_RSP);
+    /* XXX: will be suppressed when proper global TB entry code is
+       generated */
+    tcg_regset_set_reg(s->reserved_regs, TCG_REG_RBX);
+    tcg_regset_set_reg(s->reserved_regs, TCG_REG_RBP);
+
+    tcg_add_target_add_op_defs(x86_64_op_defs);
+}
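+
+/*
+ * ABI note, a sketch assuming the System V AMD64 calling convention used
+ * on x86_64 hosts: the call-clobber mask above is exactly the
+ * caller-saved set %rax, %rcx, %rdx, %rsi, %rdi, %r8-%r11.  The
+ * callee-saved %rbx, %rbp, %r12-%r15 are left out, which is what allows
+ * the TCG_AREG* globals in tcg-target.h to live in %r12-%r15 across
+ * helper calls.  A register set here is a plain 32 bit mask with bit n
+ * set when register n is usable, so:
+ *
+ *     tcg_regset_set32(set, 0, 0xffff);  // make all of RAX..R15 available
+ *     tcg_regset_set_reg(s->reserved_regs, TCG_REG_RSP); // keep RSP out of allocation
+ */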
diff --git a/tcg/x86_64/tcg-target.h b/tcg/x86_64/tcg-target.h
new file mode 100644
index 0000000000..c8421f7f03
--- /dev/null
+++ b/tcg/x86_64/tcg-target.h
@@ -0,0 +1,69 @@
+/*
+ * Tiny Code Generator for QEMU
+ *
+ * Copyright (c) 2008 Fabrice Bellard
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+#define TCG_TARGET_X86_64 1
+
+#define TCG_TARGET_REG_BITS 64
+//#define TCG_TARGET_WORDS_BIGENDIAN
+
+#define TCG_TARGET_NB_REGS 16
+
+enum {
+    TCG_REG_RAX = 0,
+    TCG_REG_RCX,
+    TCG_REG_RDX,
+    TCG_REG_RBX,
+    TCG_REG_RSP,
+    TCG_REG_RBP,
+    TCG_REG_RSI,
+    TCG_REG_RDI,
+    TCG_REG_R8,
+    TCG_REG_R9,
+    TCG_REG_R10,
+    TCG_REG_R11,
+    TCG_REG_R12,
+    TCG_REG_R13,
+    TCG_REG_R14,
+    TCG_REG_R15,
+};
+
+#define TCG_CT_CONST_S32 0x100
+#define TCG_CT_CONST_U32 0x200
+
+/* used for function call generation */
+#define TCG_REG_CALL_STACK TCG_REG_RSP
+#define TCG_TARGET_STACK_ALIGN 16
+
+/* optional instructions */
+#define TCG_TARGET_HAS_bswap_i32
+#define TCG_TARGET_HAS_bswap_i64
+
+/* Note: must be synced with dyngen-exec.h */
+#define TCG_AREG0 TCG_REG_R14
+#define TCG_AREG1 TCG_REG_R15
+#define TCG_AREG2 TCG_REG_R12
+#define TCG_AREG3 TCG_REG_R13
+
+static inline void flush_icache_range(unsigned long start, unsigned long stop)
+{
+}
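+
+/*
+ * flush_icache_range() can be a no-op here because x86 keeps the
+ * instruction cache coherent with data writes; hosts such as ARM or
+ * PowerPC need real cache maintenance in this hook.  Note also that the
+ * TCG_REG_* enum above follows the hardware numbering (RAX=0 .. R15=15):
+ * the low three bits of an index go into the ModRM byte and the fourth
+ * bit into a REX prefix, which is presumably why the opcode emitters in
+ * tcg-target.c mask with "& 7", e.g. "bswap %r9" encodes as 49 0f c9
+ * (REX.W+B, then 0f c8 + (9 & 7)).
+ */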