path: root/memory_ldst.inc.c
Age        Commit message        Author
2019-09-03  memory: Single byte swap along the I/O path  (Tony Nguyen)
Now that MemOp has been pushed down into the memory API, and callers are encoding endianness, we can collapse byte swaps along the I/O path into the accelerator- and target-independent adjust_endianness.

Collapsing byte swaps along the I/O path enables additional endian inversion logic, e.g. the SPARC64 Invert Endian TTE bit, with redundant byte swaps cancelling out.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
Message-Id: <911ff31af11922a9afba9b7ce128af8b8b80f316.1566466906.git.tony.nguyen@bt.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
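A minimal sketch of the idea, assuming simplified MO_* flag values (the MO_* names follow QEMU's MemOp, but this helper and its signature are illustrative, not QEMU's actual adjust_endianness):

    #include <stdint.h>

    typedef unsigned MemOp;
    enum { MO_16 = 1, MO_32 = 2, MO_64 = 3, MO_SIZE = 3, MO_BSWAP = 8 };

    /* Swap only when the endianness encoded in the access disagrees with
     * the device's; any swap-then-swap-back pair elsewhere on the path
     * cancels out and can be dropped. */
    static uint64_t adjust_endianness(uint64_t val, MemOp op, MemOp devend)
    {
        if ((op & MO_BSWAP) != (devend & MO_BSWAP)) {
            switch (op & MO_SIZE) {
            case MO_16: return __builtin_bswap16((uint16_t)val);
            case MO_32: return __builtin_bswap32((uint32_t)val);
            case MO_64: return __builtin_bswap64(val);
            }
        }
        return val;
    }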
2019-09-03  memory: Access MemoryRegion with endianness  (Tony Nguyen)
Preparation for collapsing the two byte swaps, adjust_endianness and handle_bswap, into the former.

Call memory_region_dispatch_{read|write} with endianness encoded into the "MemOp op" operand.

This patch does not change any behaviour, as memory_region_dispatch_{read|write} does not yet handle the endianness. Once it does, callers with byte swaps can collapse them into adjust_endianness.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
Message-Id: <8066ab3eb037c0388dfadfe53c5118429dd1de3a.1566466906.git.tony.nguyen@bt.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
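A sketch of what an encoded call site looks like (memory_region_dispatch_read, MO_32, MO_BE and MemTxAttrs are real QEMU names; the wrapper itself is hypothetical, assumes QEMU's internal headers, and is not buildable standalone):

    #include "exec/memory.h"   /* memory_region_dispatch_read() */
    #include "exec/memop.h"    /* MO_32, MO_BE */

    /* Hypothetical helper: a 4-byte big-endian read, with the byte order
     * carried in the MemOp rather than a swap bolted on afterwards. */
    static uint64_t read_be32(MemoryRegion *mr, hwaddr addr, MemTxAttrs attrs)
    {
        uint64_t val = 0;
        memory_region_dispatch_read(mr, addr, &val, MO_32 | MO_BE, attrs);
        return val;
    }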
2019-09-03  exec: Hard code size with MO_{8|16|32|64}  (Tony Nguyen)
The temporarily no-op size_memop was introduced to aid the conversion of the memory_region_dispatch_{read|write} operand "unsigned size" into "MemOp op". Now that size_memop is implemented, hard code the size again, but with MO_{8|16|32|64}. This is more expressive and avoids the size_memop calls.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <99f69701cad294db638f84abebc58115e1b9de9a.1566466906.git.tony.nguyen@bt.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
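The shape of the change at a call site, sketched (size_memop and MO_32 are the real names; the lines themselves are illustrative):

    /* Before: byte count routed through the transitional helper. */
    memory_region_dispatch_read(mr, addr, &val, size_memop(4), attrs);

    /* After: the same 4-byte access spelled directly as a MemOp. */
    memory_region_dispatch_read(mr, addr, &val, MO_32, attrs);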
2019-09-03  exec: Access MemoryRegion with MemOp  (Tony Nguyen)
The memory_region_dispatch_{read|write} operand "unsigned size" is being converted into a "MemOp op". Convert the interfaces by using the no-op size_memop. After all interfaces are converted, size_memop will be implemented and the memory_region_dispatch_{read|write} operand "unsigned size" will be converted into a "MemOp op".

As size_memop is a no-op, this patch does not change any behaviour.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <3b042deef0a60dd49ae2320ece92120ba6027f2b.1566466906.git.tony.nguyen@bt.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
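A sketch of the two stages of size_memop described here: first a transitional pass-through, later the real byte-count-to-MemOp mapping (distinct names are used so the sketch compiles on its own; __builtin_ctz stands in for QEMU's ctz32 helper, and both bodies are my reading of the series, not quotes):

    typedef unsigned MemOp;

    /* Stage 1 (this patch): a no-op pass-through, so call sites can
     * adopt the size_memop() spelling with zero behavioural change. */
    static inline MemOp size_memop_noop(unsigned size)
    {
        return size;
    }

    /* Stage 2 (later in the series): log2 of the byte count, mapping
     * sizes 1/2/4/8 onto MO_8/MO_16/MO_32/MO_64. */
    static inline MemOp size_memop_log2(unsigned size)
    {
        return (MemOp)__builtin_ctz(size);
    }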
2018-06-28  exec: Fix MAP_RAM for cached access  (Eric Auger)
When an IOMMUMemoryRegion is in front of a virtio device, address_space_cache_init does not set cache->ptr, as the memory region is not RAM. However, when the device performs an access, we end up in glue(), which performs the translation and then uses MAP_RAM. The latter uses the unset ptr and returns a wrong value, which leads to a SIGSEGV in address_space_lduw_internal_cached_slow, for instance.

In the slow path, cache->ptr is NULL and MAP_RAM must redirect to qemu_map_ram_ptr((mr)->ram_block, ofs).

As MAP_RAM, IS_DIRECT and INVALIDATE are the same in _cached_slow and non-cached mode, let's remove those macros.

This fixes the use cases featuring a vIOMMU (Intel and ARM SMMU) which led to a SIGSEGV.

Fixes: 48564041a73a ("exec: reintroduce MemoryRegion caching")
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Message-Id: <1528895946-28677-1-git-send-email-eric.auger@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
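Roughly what the offending macro looked like, sketched from the commit description (the bodies are a reconstruction, not a quote from the patch; qemu_map_ram_ptr is the real helper):

    /* Cached-slow variant before the fix (reconstruction): assumes
     * cache->ptr was set at init, which is false behind an IOMMU. */
    #define MAP_RAM(mr, ofs)  ((cache)->ptr + ((ofs) - (cache)->xlat))

    /* After the fix: cached and non-cached slow paths share the
     * non-cached definition, which always resolves via the RAM block. */
    #undef MAP_RAM
    #define MAP_RAM(mr, ofs)  qemu_map_ram_ptr((mr)->ram_block, ofs)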
2018-05-31  Make address_space_translate{,_cached}() take a MemTxAttrs argument  (Peter Maydell)
As part of plumbing MemTxAttrs down to the IOMMU translate method, add MemTxAttrs as an argument to address_space_translate() and address_space_translate_cached(). Callers either have an attrs value to hand, or don't care and can use MEMTXATTRS_UNSPECIFIED.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180521140402.23318-4-peter.maydell@linaro.org
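A sketch of a call site that has no attributes to hand (address_space_translate and MEMTXATTRS_UNSPECIFIED are the real names; the fragment around them is hypothetical):

    /* Hypothetical caller: translate a 4-byte read window, passing the
     * new trailing MemTxAttrs argument as "don't care". */
    hwaddr xlat, len = 4;
    MemoryRegion *mr = address_space_translate(as, addr, &xlat, &len,
                                               false /* is_write */,
                                               MEMTXATTRS_UNSPECIFIED);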
2018-05-09  exec: move memory access declarations to a common header, inline *_phys functions  (Paolo Bonzini)

For now, this reduces the text size very slightly due to the newly-added inlining:

    text size before: 9301965
    text size after:  9300645

Later, however, the declarations in include/exec/memory_ldst.inc.h will be reused for the MemoryRegionCache slow path functions.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
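Roughly what one of the new inline *_phys wrappers looks like (a sketch patterned on QEMU's ldl_phys; the exact body is illustrative):

    /* Sketch: the *_phys convenience form becomes a static inline shim
     * over the address_space_* accessor, with unspecified attributes
     * and the MemTxResult discarded. */
    static inline uint32_t ldl_phys(AddressSpace *as, hwaddr addr)
    {
        return address_space_ldl(as, addr, MEMTXATTRS_UNSPECIFIED, NULL);
    }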
2016-12-22  exec: introduce memory_ldst.inc.c  (Paolo Bonzini)
Templatize the address_space_* and *_phys functions, so that we can add similar functions in the next patch that work with a lightweight, cache-like version of address_space_map/unmap.

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
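The templating pattern in question: define the parameter macros, then include the .inc.c once per variant; a trimmed sketch (SUFFIX, ARG1, ARG1_DECL and TRANSLATE follow the file's actual parameters, but the fragment is abbreviated):

    /* In exec.c (sketch): instantiate the plain AddressSpace variant. */
    #define ARG1_DECL       AddressSpace *as
    #define ARG1            as
    #define SUFFIX          /* empty: generates address_space_ldl(), etc. */
    #define TRANSLATE(...)  address_space_translate(as, __VA_ARGS__)
    #include "memory_ldst.inc.c"

    /* The next patch re-includes the same template with SUFFIX set to
     * _cached and a MemoryRegionCache-based TRANSLATE. */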