Searched refs:ARM64 (Results 1 – 19 of 19) sorted by relevance
1 Test negated bitwise operations simplification on ARM64.
1 Regression test for an issue with VIXL ARM64 veneer pools (b/34850123).
1 Regression test on pattern that caused double removal of AND by ARM64 simplifier.
1 Regression test for the read barrier implementation in ARM64,
1 Regression test checking that the ARM64 scratch register pool is not
24 ARM64("arm64"), enumConstant
24 super("ARM64 Interpreter", 30, listener, Architecture.ARM64, device, in Arm64InterpreterExecutor()
24 super("ARM64 Optimizing Backend", 5, listener, Architecture.ARM64, device, in Arm64OptimizingBackendExecutor()
1 Regression test checking that the VIXL ARM64 scratch register pool is
1 Regression test for the ARM64 Baker's read barrier fast path compiler
3 constant destination position, on ARM64, with read barriers
19 # ARM64 simplifier doing "forward" removals (b/27851582).
32 ## CHECK-START-ARM64: int SmaliTests.operations() instruction_simplifier_arm64 (before)
38 ## CHECK-START-ARM64: int SmaliTests.operations() instruction_simplifier_arm64 (after)
1 # Valgrind does not recognize the ashmem ioctl() calls on ARM64, so it assumes that a size
12 # It seems that on ARM64 Valgrind considers the canary value used by the Clang stack protector to
58 Note that if you wanted to test both ARM and ARM64 on an ARM64 device, you can use
60 ARM Optimizing Backend vs. ARM64 Optimizing Backend.
110 (ARM/ARM64), and the divergences align with different architectures,
76 /// CHECK-START-ARM64: int MyClass.MyMethod() constant_folding (after)
85 /// CHECK-START-{MIPS,ARM,ARM64}: int MyClass.MyMethod() constant_folding (after) (Checker directive format is sketched after the results list)
32 ## CHECK-START-ARM64: void Smali.stencilSubInt(int[], int[], int) loop_optimization (after)
85 ## CHECK-START-ARM64: void Smali.stencilAddInt(int[], int[], int) loop_optimization (after)
3056 UNIMPLEMENTED_INTRINSIC(ARM64, ReferenceGetReferent)
3058 UNIMPLEMENTED_INTRINSIC(ARM64, StringStringIndexOf);
3059 UNIMPLEMENTED_INTRINSIC(ARM64, StringStringIndexOfAfter);
3060 UNIMPLEMENTED_INTRINSIC(ARM64, StringBufferAppend);
3061 UNIMPLEMENTED_INTRINSIC(ARM64, StringBufferLength);
3062 UNIMPLEMENTED_INTRINSIC(ARM64, StringBufferToString);
3063 UNIMPLEMENTED_INTRINSIC(ARM64, StringBuilderAppend);
3064 UNIMPLEMENTED_INTRINSIC(ARM64, StringBuilderLength);
3065 UNIMPLEMENTED_INTRINSIC(ARM64, StringBuilderToString);
3068 UNIMPLEMENTED_INTRINSIC(ARM64, UnsafeGetAndAddInt)
[all …]
240 // VIXL assembly support for ARM64 targets.
288 // VIXL assembly support for ARM64 targets.
17 # Configuration for ARM64
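
The CHECK-START-ARM64 lines in the results above are ART Checker directives: comments in a test source file that name a method, an optimization pass, and a point (before/after) at which the compiler's intermediate representation is matched against the CHECK patterns that follow. Below is a minimal sketch of such a test, modeled on the constant_folding hits; the class name MyTest, the method addConstants(), and the specific IR patterns are illustrative assumptions, not copied from the files this search found.

public class MyTest {

  /// CHECK-START-ARM64: int MyTest.addConstants() constant_folding (before)
  /// CHECK-DAG: <<Const3:i\d+>> IntConstant 3
  /// CHECK-DAG: <<Const4:i\d+>> IntConstant 4
  /// CHECK-DAG: <<Add:i\d+>>    Add [<<Const3>>,<<Const4>>]
  /// CHECK-DAG:                 Return [<<Add>>]

  /// CHECK-START-ARM64: int MyTest.addConstants() constant_folding (after)
  /// CHECK-DAG: <<Const7:i\d+>> IntConstant 7
  /// CHECK-DAG:                 Return [<<Const7>>]

  /// CHECK-START-ARM64: int MyTest.addConstants() constant_folding (after)
  /// CHECK-NOT:                 Add

  // Written so that javac does not fold the addition itself; the ART
  // constant_folding pass is expected to reduce it to the constant 7.
  public static int addConstants() {
    int a = 3;
    int b = 4;
    return a + b;
  }

  public static void main(String[] args) {
    System.out.println(addConstants());
  }
}

The ## prefix seen in the Smali hits plays the same role as /// here; it is simply the comment syntax of the host file. The CHECK-START-{MIPS,ARM,ARM64} form in one of the hits applies a single directive block to several target architectures at once.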
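
The Architecture enum constant and the two super(...) calls in the results come from the comparison framework referred to by the documentation hit ("ARM Optimizing Backend vs. ARM64 Optimizing Backend"): each backend is wrapped in an executor carrying a display name, a timeout, and a target architecture. The self-contained sketch below reconstructs only that shape; the Executor base class, the reading of the numeric argument as a timeout in seconds, the constructor arguments the search output truncates, and the dalvikvm command lines are assumptions for illustration, not the framework's real API.

public class ExecutorSketch {

  // String-carrying enum, as suggested by the `ARM64("arm64")` hit.
  enum Architecture {
    ARM("arm"),
    ARM64("arm64");

    private final String archString;

    Architecture(String archString) {
      this.archString = archString;
    }

    String archString() {
      return archString;
    }
  }

  // Hypothetical base class: display name, timeout, target architecture.
  abstract static class Executor {
    final String name;
    final int timeoutSeconds;
    final Architecture architecture;

    Executor(String name, int timeoutSeconds, Architecture architecture) {
      this.name = name;
      this.timeoutSeconds = timeoutSeconds;
      this.architecture = architecture;
    }

    // Command used to run one test program; placeholder flags only.
    abstract String command(String programName);
  }

  // Mirrors the Arm64InterpreterExecutor hit: run the program interpreted.
  static class Arm64InterpreterExecutor extends Executor {
    Arm64InterpreterExecutor() {
      super("ARM64 Interpreter", 30, Architecture.ARM64);
    }

    @Override
    String command(String programName) {
      return "dalvikvm64 -Xint " + programName;  // -Xint forces interpretation
    }
  }

  // Mirrors the Arm64OptimizingBackendExecutor hit: run compiled code.
  static class Arm64OptimizingBackendExecutor extends Executor {
    Arm64OptimizingBackendExecutor() {
      super("ARM64 Optimizing Backend", 5, Architecture.ARM64);
    }

    @Override
    String command(String programName) {
      // The real compiler flags are not visible in the search output.
      return "dalvikvm64 " + programName;
    }
  }

  public static void main(String[] args) {
    Executor[] executors = {
        new Arm64InterpreterExecutor(),
        new Arm64OptimizingBackendExecutor(),
    };
    for (Executor e : executors) {
      System.out.println(e.name + " (" + e.architecture.archString() + ", "
          + e.timeoutSeconds + "s): " + e.command("Test.dex"));
    }
  }
}

Running two such executors over the same program and diffing their output is presumably how the divergence checking mentioned in the documentation hit works: a result that differs between the interpreter and the Optimizing Backend, or between the ARM and ARM64 backends, points at a compiler bug on the diverging configuration.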