# Speculative Load Hardening

### A Spectre Variant #1 Mitigation Technique

Author: Chandler Carruth - [chandlerc@google.com](mailto:chandlerc@google.com)

## Problem Statement

Recently, Google Project Zero and other researchers have found information leak
vulnerabilities by exploiting speculative execution in modern CPUs. These
exploits are currently broken down into three variants:
* GPZ Variant #1 (a.k.a. Spectre Variant #1): Bounds check (or predicate) bypass
* GPZ Variant #2 (a.k.a. Spectre Variant #2): Branch target injection
* GPZ Variant #3 (a.k.a. Meltdown): Rogue data cache load

For more details, see the Google Project Zero blog post and the Spectre research
paper:
* https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html
* https://spectreattack.com/spectre.pdf

The core problem of GPZ Variant #1 is that speculative execution uses branch
prediction to select the path of instructions speculatively executed. This path
is speculatively executed with the available data, and may load from memory and
leak the loaded values through various side channels that survive even when the
speculative execution is unwound due to being incorrect. Mispredicted paths can
cause code to be executed with data inputs that never occur in correct
executions, making checks against malicious inputs ineffective and allowing
attackers to use malicious data inputs to leak secret data. Here is an example,
extracted and simplified from the Project Zero paper:
```
struct array {
  unsigned long length;
  unsigned char data[];
};
struct array *arr1 = ...; // small array
struct array *arr2 = ...; // array of size 0x400
unsigned long untrusted_offset_from_caller = ...;
if (untrusted_offset_from_caller < arr1->length) {
  unsigned char value = arr1->data[untrusted_offset_from_caller];
  unsigned long index2 = ((value&1)*0x100)+0x200;
  unsigned char value2 = arr2->data[index2];
}
```

The key to the attack is to call this with an `untrusted_offset_from_caller`
that is far outside of the bounds, at a point when the branch predictor
predicts that it will be in-bounds. In that case, the body of the `if` will be
executed speculatively, and may read secret data into `value` and leak it via a
cache-timing side channel when a dependent access is made to populate `value2`.

## High Level Mitigation Approach

While several approaches are being actively pursued to mitigate specific
branches and/or loads inside especially risky software (most notably various OS
kernels), these approaches require manual and/or static analysis aided auditing
of code and explicit source changes to apply the mitigation. They are unlikely
to scale well to large applications. We are proposing a comprehensive
mitigation approach that would apply automatically across an entire program
rather than through manual changes to the code. While this is likely to have a
high performance cost, some applications may be in a good position to take this
performance / security tradeoff.

The specific technique we propose is to cause loads to be checked using
branchless code to ensure that they are executing along a valid control flow
path. Consider the following C-pseudo-code representing the core idea of a
predicate guarding potentially invalid loads:
```
void leak(int data);
void example(int* pointer1, int* pointer2) {
  if (condition) {
    // ... lots of code ...
    leak(*pointer1);
  } else {
    // ... more code ...
    leak(*pointer2);
  }
}
```

This would get transformed into something resembling the following:
```
uintptr_t all_ones_mask = std::numeric_limits<uintptr_t>::max();
uintptr_t all_zeros_mask = 0;
void leak(int data);
void example(int* pointer1, int* pointer2) {
  uintptr_t predicate_state = all_ones_mask;
  if (condition) {
    // Assuming ?: is implemented using branchless logic...
    predicate_state = !condition ? all_zeros_mask : predicate_state;
    // ... lots of code ...
    //
    // Harden the pointer so it can't be loaded
    pointer1 &= predicate_state;
    leak(*pointer1);
  } else {
    predicate_state = condition ? all_zeros_mask : predicate_state;
    // ... more code ...
    //
    // Alternative: Harden the loaded value
    int value2 = *pointer2 & predicate_state;
    leak(value2);
  }
}
```

The result should be that if the `if (condition) {` branch is mis-predicted,
there is a *data* dependency on the condition used to zero out any pointers
prior to loading through them or to zero out all of the loaded bits. Even
though this code pattern may still execute speculatively, *invalid* speculative
executions are prevented from leaking secret data from memory (but note that
this data might still be loaded in safe ways, and some regions of memory are
required to not hold secrets, see below for detailed limitations). This
approach requires only that the underlying hardware have a way to implement a
branchless and unpredicted conditional update of a register's value. All modern
architectures have support for this, and in fact such support is necessary to
correctly implement constant-time cryptographic primitives.
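
For concreteness, here is a minimal, compilable C++ sketch of the same
transformation (the names and explicit casts are illustrative only; the real
implementation operates on machine IR, where the compiler cannot simply fold
away the seemingly redundant conditional updates):
```
#include <cstdint>

void leak(int data);

void example(int *pointer1, int *pointer2, bool condition) {
  // All-ones while execution is known-correct, all-zeros once misspeculated.
  uintptr_t predicate_state = ~uintptr_t{0};
  if (condition) {
    // Branchless update: a misspeculated entry into this block zeroes the state.
    predicate_state = !condition ? uintptr_t{0} : predicate_state;
    // Harden the address: on a misspeculated path the pointer collapses to null
    // and the load cannot observe secret memory.
    pointer1 = reinterpret_cast<int *>(
        reinterpret_cast<uintptr_t>(pointer1) & predicate_state);
    leak(*pointer1);
  } else {
    predicate_state = condition ? uintptr_t{0} : predicate_state;
    // Alternative: harden the loaded value (the mask is ~0 or 0, so this keeps
    // the value on correct paths and forces it to zero when misspeculated).
    int value2 = static_cast<int>(static_cast<uintptr_t>(*pointer2) & predicate_state);
    leak(value2);
  }
}
```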

Crucial properties of this approach:
* It is not preventing any particular side-channel from working. This is
  important as there are an unknown number of potential side channels and we
  expect to continue discovering more. Instead, it prevents the observation of
  secret data in the first place.
* It accumulates the predicate state, protecting even in the face of nested
  *correctly* predicted control flows.
* It passes this predicate state across function boundaries to provide
  [interprocedural protection](#interprocedural-checking).
* When hardening the address of a load, it uses a *destructive* or
  *non-reversible* modification of the address to prevent an attacker from
  reversing the check using attacker-controlled inputs.
* It does not completely block speculative execution, and merely prevents
  *mis*-speculated paths from leaking secrets from memory (and stalls
  speculation until this can be determined).
* It is completely general and makes no fundamental assumptions about the
  underlying architecture other than the ability to do branchless conditional
  data updates and a lack of value prediction.
* It does not require programmers to identify all possible secret data using
  static source code annotations or code vulnerable to a variant #1 style
  attack.

Limitations of this approach:
* It requires re-compiling source code to insert hardening instruction
  sequences. Only software compiled in this mode is protected.
* The performance is heavily dependent on a particular architecture's
  implementation strategy. We outline a potential x86 implementation below and
  characterize its performance.
* It does not defend against secret data already loaded from memory and
  residing in registers or leaked through other side-channels in
  non-speculative execution. Code dealing with this, e.g. cryptographic
  routines, already uses constant-time algorithms and code to prevent
  side-channels. Such code should also scrub registers of secret data following
  [these
  guidelines](https://github.com/HACS-workshop/spectre-mitigations/blob/master/crypto_guidelines.md).
* To achieve reasonable performance, many loads may not be checked, such as
  those with compile-time fixed addresses. This primarily consists of accesses
  at compile-time constant offsets of global and local variables. Code which
  needs this protection and intentionally stores secret data must ensure the
  memory regions used for secret data are necessarily dynamic mappings or heap
  allocations. This is an area which can be tuned to provide more comprehensive
  protection at the cost of performance.
* [Hardened loads](#hardening-the-address-of-the-load) may still load data from
  _valid_ addresses if not _attacker-controlled_ addresses. To prevent these
  from reading secret data, the low 2gb of the address space and 2gb above and
  below any executable pages should be protected.

Credit:
* The core idea of tracing misspeculation through data and marking pointers to
  block misspeculated loads was developed as part of a HACS 2018 discussion
  between Chandler Carruth, Paul Kocher, Thomas Pornin, and several other
  individuals.
* Core idea of masking out loaded bits was part of the original mitigation
  suggested by Jann Horn when these attacks were reported.


### Indirect Branches, Calls, and Returns

It is possible to attack control flow other than conditional branches with
variant #1 style mispredictions.
* A prediction towards a hot call target of a virtual method can lead to it
  being speculatively executed when an expected type is used (often called
  "type confusion").
* A hot case may be speculatively executed due to prediction instead of the
  correct case for a switch statement implemented as a jump table.
* A hot common return address may be predicted incorrectly when returning from
  a function.

These code patterns are also vulnerable to Spectre variant #2, and as such are
best mitigated with a
[retpoline](https://support.google.com/faqs/answer/7625886) on x86 platforms.
When a mitigation technique like retpoline is used, speculation simply cannot
proceed through an indirect control flow edge (or it cannot be mispredicted in
the case of a filled RSB) and so it is also protected from variant #1 style
attacks. However, some architectures, micro-architectures, or vendors do not
employ the retpoline mitigation, and on future x86 hardware (both Intel and
AMD) it is expected to become unnecessary due to hardware-based mitigation.

When not using a retpoline, these edges will need independent protection from
variant #1 style attacks. The analogous approach to that used for conditional
control flow should work:
```
uintptr_t all_ones_mask = std::numeric_limits<uintptr_t>::max();
uintptr_t all_zeros_mask = 0;
void leak(int data);
void example(int* pointer1, int* pointer2) {
  uintptr_t predicate_state = all_ones_mask;
  switch (condition) {
  case 0:
    // Assuming ?: is implemented using branchless logic...
    predicate_state = (condition != 0) ? all_zeros_mask : predicate_state;
    // ... lots of code ...
    //
    // Harden the pointer so it can't be loaded
    pointer1 &= predicate_state;
    leak(*pointer1);
    break;

  case 1:
    predicate_state = (condition != 1) ? all_zeros_mask : predicate_state;
    // ... more code ...
    //
    // Alternative: Harden the loaded value
    int value2 = *pointer2 & predicate_state;
    leak(value2);
    break;

    // ...
  }
}
```

The core idea remains the same: validate the control flow using data-flow and
use that validation to check that loads cannot leak information along
misspeculated paths. Typically this involves passing the desired target of such
control flow across the edge and checking that it is correct afterwards. Note
that while it is tempting to think that this mitigates variant #2 attacks, it
does not. Those attacks go to arbitrary gadgets that don't include the checks.
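
As a source-level sketch of that idea (all names here are illustrative; the
real implementation tracks the state in registers rather than globals), the
caller records the intended target before the indirect call, and each possible
target branchlessly checks it on arrival:
```
#include <cstdint>

// Illustrative globals standing in for register state.
uintptr_t predicate_state = ~uintptr_t{0};
void (*intended_target)() = nullptr;

void handler_a() {
  // On the far side of the indirect control flow edge, check that this
  // function really was the intended target. A misspeculated arrival here
  // zeroes the predicate state so later hardened loads cannot leak.
  predicate_state =
      (intended_target != &handler_a) ? uintptr_t{0} : predicate_state;
  // ... hardened loads use predicate_state as before ...
}

void dispatch(void (*target)()) {
  intended_target = target;  // Pass the desired target across the edge.
  target();                  // The indirect call that may be mispredicted.
}
```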


### Variant #1.1 and #1.2 attacks: "Bounds Check Bypass Store"

Beyond the core variant #1 attack, there are techniques to extend this attack.
The primary technique is known as "Bounds Check Bypass Store" and is discussed
in this research paper: https://people.csail.mit.edu/vlk/spectre11.pdf

We will analyze these two variants independently. First, variant #1.1 works by
speculatively storing over the return address after a bounds check bypass. This
speculative store then ends up being used by the CPU during speculative
execution of the return, potentially directing speculative execution to
arbitrary gadgets in the binary. Let's look at an example.
```
unsigned char local_buffer[4];
unsigned char *untrusted_data_from_caller = ...;
unsigned long untrusted_size_from_caller = ...;
if (untrusted_size_from_caller < sizeof(local_buffer)) {
  // Speculative execution enters here with a too-large size.
  memcpy(local_buffer, untrusted_data_from_caller,
         untrusted_size_from_caller);
  // The stack has now been smashed, writing an attacker-controlled
  // address over the return address.
  minor_processing(local_buffer);
  return;
  // Control will speculate to the attacker-written address.
}
```

However, this can be mitigated by hardening the load of the return address just
like any other load. This is sometimes complicated because, for example, x86
*implicitly* loads the return address off the stack. However, the
implementation technique below is specifically designed to mitigate this
implicit load by using the stack pointer to communicate misspeculation between
functions. This additionally causes a misspeculation to have an invalid stack
pointer and never be able to read the speculatively stored return address. See
the detailed discussion below.

For variant #1.2, the attacker speculatively stores into the vtable or jump
table used to implement an indirect call or indirect jump. Because this is
speculative, this will often be possible even when these are stored in
read-only pages. For example:
```
class FancyObject : public BaseObject {
public:
  void DoSomething() override;
};
void f(unsigned long attacker_offset, unsigned long attacker_data) {
  FancyObject object = getMyObject();
  unsigned long *arr[4] = getFourDataPointers();
  if (attacker_offset < 4) {
    // We have bypassed the bounds check speculatively.
    unsigned long *data = arr[attacker_offset];
    // Now we have computed a pointer inside of `object`, the vptr.
    *data = attacker_data;
    // The vptr points to the virtual table and we speculatively clobber that.
    g(object); // Hand the object to some other routine.
  }
}
// In another file, we call a method on the object.
void g(BaseObject &object) {
  object.DoSomething();
  // This speculatively calls the address stored over the vtable.
}
```

Mitigating this requires hardening loads from these locations, or mitigating
the indirect call or indirect jump. Any of these are sufficient to block the
call or jump from using a speculatively stored value that has been read back.

For both of these, using retpolines would be equally sufficient. One possible
hybrid approach is to use retpolines for indirect call and jump, while relying
on SLH to mitigate returns.

Another approach that is sufficient for both of these is to harden all of the
speculative stores. However, as most stores aren't interesting and don't
inherently leak data, this is expected to be prohibitively expensive given the
attack it is defending against.
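
For completeness, hardening a store would mirror the address hardening used for
loads later in this document: mask the destination address with the predicate
state so that a misspeculated store cannot land on the return address or a
vtable slot. A rough C++-level sketch (illustrative only, using the 0 /
all-ones state convention of the x86 sequences below):
```
#include <cstdint>

void hardened_store(unsigned long *dest, unsigned long value,
                    uintptr_t predicate_state) {
  // predicate_state is 0 on correct paths and all-ones when misspeculating.
  // OR-ing it into the destination sends a misspeculated store to an invalid,
  // very high address instead of the attacked memory.
  dest = reinterpret_cast<unsigned long *>(
      reinterpret_cast<uintptr_t>(dest) | predicate_state);
  *dest = value;
}
```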


## Implementation Details

There are a number of complex details impacting the implementation of this
technique, both on a particular architecture and within a particular compiler.
We discuss proposed implementation techniques for the x86 architecture and the
LLVM compiler. These are primarily to serve as an example; other
implementation techniques are entirely possible.


### x86 Implementation Details

On the x86 platform we break down the implementation into three core
components: accumulating the predicate state through the control flow graph,
checking the loads, and checking control transfers between procedures.


#### Accumulating Predicate State

Consider baseline x86 instructions like the following, which test three
conditions and, if all pass, load data from memory and potentially leak it
through some side channel:
```
# %bb.0:                                # %entry
        pushq   %rax
        testl   %edi, %edi
        jne     .LBB0_4
# %bb.1:                                # %then1
        testl   %esi, %esi
        jne     .LBB0_4
# %bb.2:                                # %then2
        testl   %edx, %edx
        je      .LBB0_3
.LBB0_4:                                # %exit
        popq    %rax
        retq
.LBB0_3:                                # %danger
        movl    (%rcx), %edi
        callq   leak
        popq    %rax
        retq
```

When we go to speculatively execute the load, we want to know whether any of
the dynamically executed predicates have been misspeculated. To track that,
along each conditional edge, we need to track the data which would allow that
edge to be taken. On x86, this data is stored in the flags register used by the
conditional jump instruction. Along both edges after this fork in control flow,
the flags register remains live and contains data that we can use to build up
our accumulated predicate state. We accumulate it using the x86 conditional
move instruction, which also reads the flags register where the state resides.
These conditional move instructions are known to not be predicted on any x86
processors, making them immune to misprediction that could reintroduce the
vulnerability. When we insert the conditional moves, the code ends up looking
like the following:
```
# %bb.0:                                # %entry
        pushq   %rax
        xorl    %eax, %eax              # Zero out initial predicate state.
        movq    $-1, %r8                # Put all-ones mask into a register.
        testl   %edi, %edi
        jne     .LBB0_1
# %bb.2:                                # %then1
        cmovneq %r8, %rax               # Conditionally update predicate state.
        testl   %esi, %esi
        jne     .LBB0_1
# %bb.3:                                # %then2
        cmovneq %r8, %rax               # Conditionally update predicate state.
        testl   %edx, %edx
        je      .LBB0_4
.LBB0_1:
        cmoveq  %r8, %rax               # Conditionally update predicate state.
        popq    %rax
        retq
.LBB0_4:                                # %danger
        cmovneq %r8, %rax               # Conditionally update predicate state.
        ...
```

Here we create the "empty" or "correct execution" predicate state by zeroing
`%rax`, and we create a constant "incorrect execution" predicate value by
putting `-1` into `%r8`. Then, along each edge coming out of a conditional
branch we do a conditional move that in a correct execution will be a no-op,
but if misspeculated, will replace the `%rax` with the value of `%r8`.
Misspeculating any one of the three predicates will cause `%rax` to hold the
"incorrect execution" value from `%r8` as we preserve incoming values when
execution is correct rather than overwriting them.

We now have a value in `%rax` in each basic block that indicates if at some
point previously a predicate was mispredicted. And we have arranged for that
value to be particularly effective when used below to harden loads.
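
A C++-level analogue of this accumulation may help make the pattern concrete
(purely illustrative; the actual transform works on machine IR and the flags
register, and uses `cmov` rather than `?:`):
```
#include <cstdint>

void leak(int data);

void example(int a, int b, int c, int *ptr) {
  uintptr_t state = 0;                  // 0 == "correct execution"
  const uintptr_t bad = ~uintptr_t{0};  // all-ones == "misspeculated"
  if (a == 0) {
    state = (a != 0) ? bad : state;     // mirrors cmovneq on the fall-through edge
    if (b == 0) {
      state = (b != 0) ? bad : state;   // any earlier misprediction sticks
      if (c == 0) {
        state = (c != 0) ? bad : state;
        // The "%danger" block: the state now reflects all three predicates, so
        // hardening the pointer blocks a load on any misspeculated path.
        int *hardened = reinterpret_cast<int *>(
            reinterpret_cast<uintptr_t>(ptr) | state);
        leak(*hardened);
      }
    }
  }
}
```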


##### Indirect Call, Branch, and Return Predicates

(Not yet implemented.)

There is no analogous flag to use when tracing indirect calls, branches, and
returns. The predicate state must be accumulated through some other means.
Fundamentally, this is the reverse of the problem posed in CFI: we need to
check where we came from rather than where we are going. For function-local
jump tables, this is easily arranged by testing the input to the jump table
within each destination:
```
        pushq   %rax
        xorl    %eax, %eax              # Zero out initial predicate state.
        movq    $-1, %r8                # Put all-ones mask into a register.
        jmpq    *.LJTI0_0(,%rdi,8)      # Indirect jump through table.
.LBB0_2:                                # %sw.bb
        cmpq    $0, %rdi                # Validate index used for jump table.
        cmovneq %r8, %rax               # Conditionally update predicate state.
        ...
        jmp     _Z4leaki                # TAILCALL

.LBB0_3:                                # %sw.bb1
        cmpq    $1, %rdi                # Validate index used for jump table.
        cmovneq %r8, %rax               # Conditionally update predicate state.
        ...
        jmp     _Z4leaki                # TAILCALL

.LBB0_5:                                # %sw.bb10
        cmpq    $2, %rdi                # Validate index used for jump table.
        cmovneq %r8, %rax               # Conditionally update predicate state.
        ...
        jmp     _Z4leaki                # TAILCALL
        ...

        .section        .rodata,"a",@progbits
        .p2align        3
.LJTI0_0:
        .quad   .LBB0_2
        .quad   .LBB0_3
        .quad   .LBB0_5
        ...
```

Returns have a simple mitigation technique on x86-64 (or other ABIs which have
what is called a "red zone" region beyond the end of the stack). This region is
guaranteed to be preserved across interrupts and context switches, so the
return address used in returning to the current code remains on the stack and
valid to read. We can emit code in the caller to verify that a return edge was
not mispredicted:
```
        callq   other_function
return_addr:
        cmpq    $return_addr, -8(%rsp)  # Validate return address.
        cmovneq %r8, %rax               # Update predicate state.
```

For an ABI without a "red zone" (and thus unable to read the return address
from the stack), mitigating returns faces similar problems to the calls below.

Indirect calls (and returns in the absence of a red zone ABI) pose the most
significant challenge for propagating the predicate state. The simplest
technique would be to define a new ABI such that the intended call target is
passed into the called function and checked on entry. Unfortunately, new ABIs
are quite expensive to deploy in C and C++. While the target function could be
passed in TLS, we would still require complex logic to handle a mixture of
functions compiled with and without this extra logic (essentially, making the
ABI backwards compatible). Currently, we suggest using retpolines here and will
continue to investigate ways of mitigating this.


##### Optimizations, Alternatives, and Tradeoffs

Merely accumulating predicate state involves significant cost. There are
several key optimizations we employ to minimize this and various alternatives
that present different tradeoffs in the generated code.

First, we work to reduce the number of instructions used to track the state:
* Rather than inserting a `cmovCC` instruction along every conditional edge in
  the original program, we track each set of condition flags we need to capture
  prior to entering each basic block and reuse a common `cmovCC` sequence for
  those.
  * We could further reuse suffixes when there are multiple `cmovCC`
    instructions required to capture the set of flags. Currently this is
    believed to not be worth the cost as paired flags are relatively rare and
    suffixes of them are exceedingly rare.
* A common pattern in x86 is to have multiple conditional jump instructions
  that use the same flags but handle different conditions. Naively, we could
  consider each fallthrough between them an "edge" but this causes a much more
  complex control flow graph. Instead, we accumulate the set of conditions
  necessary for fallthrough and use a sequence of `cmovCC` instructions in a
  single fallthrough edge to track it.

Second, we trade register pressure for simpler `cmovCC` instructions by
allocating a register for the "bad" state. We could read that value from memory
as part of the conditional move instruction, however, this creates more
micro-ops and requires the load-store unit to be involved. Currently, we place
the value into a virtual register and allow the register allocator to decide
when the register pressure is sufficient to make it worth spilling to memory
and reloading.


#### Hardening Loads

Once we have the predicate accumulated into a special value for correct vs.
misspeculated, we need to apply this to loads in a way that ensures they do not
leak secret data. There are two primary techniques for this: we can either
harden the loaded value to prevent observation, or we can harden the address
itself to prevent the load from occurring. These have significantly different
performance tradeoffs.
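
In source-level terms (using the 0 / all-ones state convention of the x86
sequences, with illustrative names), the two options look roughly like this:
```
#include <cstdint>

extern uintptr_t predicate_state;  // 0 when correct, all-ones when misspeculated

unsigned harden_value(const unsigned *p) {
  // Option 1: let the load happen, then force every loaded bit to one on
  // misspeculated paths; correct paths OR with zero and are unchanged.
  return *p | static_cast<unsigned>(predicate_state);
}

unsigned harden_address(const unsigned *p) {
  // Option 2: poison the address itself so a misspeculated load targets an
  // invalid, very high address and cannot observe the memory at all.
  p = reinterpret_cast<const unsigned *>(
      reinterpret_cast<uintptr_t>(p) | predicate_state);
  return *p;
}
```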


##### Hardening loaded values

The most appealing way to harden loads is to mask out all of the bits loaded.
The key requirement is that, along a misspeculated path, each loaded bit is
always fixed at either 0 or 1 regardless of its actual value. The most obvious
implementation uses either an `and` instruction with an all-zeros mask along
misspeculated paths and an all-ones mask along correct paths, or an `or`
instruction with an all-ones mask along misspeculated paths and an all-zeros
mask along correct paths. Other options, such as multiplying by zero or using
multiple shift instructions, are less appealing. For reasons we elaborate on
below, we end up suggesting the use of `or` with an all-ones mask, making the
x86 instruction sequence look like the following:
```
        ...

.LBB0_4:                                # %danger
        cmovneq %r8, %rax               # Conditionally update predicate state.
        movl    (%rsi), %edi            # Load potentially secret data from %rsi.
        orl     %eax, %edi
```

Another useful pattern may be to fold the load into the `or` instruction itself
at the cost of a register-to-register copy.

There are some challenges with deploying this approach:
1. Many loads on x86 are folded into other instructions. Separating them would
   add significant, costly register pressure with a prohibitive performance
   cost.
1. Loads may not target a general purpose register, requiring extra
   instructions to map the state value into the correct register class, and
   potentially more expensive instructions to mask the value in some way.
1. The flags register on x86 is very likely to be live, and challenging to
   preserve cheaply.
1. There are many more values loaded than pointers & indices used for loads. As
   a consequence, hardening the result of a load requires substantially more
   instructions than hardening the address of the load (see below).

Despite these challenges, hardening the result of the load critically allows
the load to proceed and thus has dramatically less impact on the total
speculative / out-of-order potential of the execution. There are also several
interesting techniques to try and mitigate these challenges and make hardening
the results of loads viable in at least some cases. However, when hardening the
loaded value is unprofitable, we generally expect to fall back to the next
approach: hardening the address itself.


###### Loads folded into data-invariant operations can be hardened after the operation

The first key to making this feasible is to recognize that many operations on
x86 are "data-invariant". That is, they have no (known) observable behavior
differences due to the particular input data. These instructions are often used
when implementing cryptographic primitives dealing with private key data
because they are not believed to provide any side-channels. Similarly, we can
defer hardening until after them as they will not, in and of themselves,
introduce a speculative execution side-channel. This results in code sequences
that look like:
```
        ...

.LBB0_4:                                # %danger
        cmovneq %r8, %rax               # Conditionally update predicate state.
        addl    (%rsi), %edi            # Load and accumulate without leaking.
        orl     %eax, %edi
```

While an addition happens to the loaded (potentially secret) value, that
doesn't leak any data and we then immediately harden it.


###### Hardening of loaded values deferred down the data-invariant expression graph

We can generalize the previous idea and sink the hardening down the expression
graph across as many data-invariant operations as desirable. This can use very
conservative rules for whether something is data-invariant. The primary goal
should be to handle multiple loads with a single hardening instruction:
```
        ...

.LBB0_4:                                # %danger
        cmovneq %r8, %rax               # Conditionally update predicate state.
        addl    (%rsi), %edi            # Load and accumulate without leaking.
        addl    4(%rsi), %edi           # Continue without leaking.
        addl    8(%rsi), %edi
        orl     %eax, %edi              # Mask out bits from all three loads.
```


###### Preserving the flags while hardening loaded values on Haswell, Zen, and newer processors

Sadly, there are no useful instructions on x86 that apply a mask to all 64 bits
without touching the flags register. However, we can harden loaded values that
are narrower than a word (fewer than 32 bits on 32-bit systems and fewer than
64 bits on 64-bit systems) by zero-extending the value to the full word size
and then shifting right by at least the number of original bits using the BMI2
`shrx` instruction:
```
        ...

.LBB0_4:                                # %danger
        cmovneq %r8, %rax               # Conditionally update predicate state.
        addl    (%rsi), %edi            # Load and accumulate 32 bits of data.
        shrxq   %rax, %rdi, %rdi        # Shift out all 32 bits loaded.
```

Because on x86 the zero-extend is free, this can efficiently harden the loaded
value.
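
The reason this works can be seen in C++ terms: the shift count is the
predicate state itself, which the hardware reduces modulo 64, so a state of
`-1` becomes a 63-bit shift that discards every bit of a sub-word value
(illustrative sketch; the `& 63` makes the modulo explicit):
```
#include <cstdint>

extern uint64_t predicate_state;  // 0 when correct, all-ones when misspeculated

uint32_t harden_narrow_load(const uint32_t *p) {
  uint64_t wide = *p;  // Zero-extend the 32-bit load to 64 bits (free on x86).
  // Correct paths shift by zero (a no-op); misspeculated paths shift by 63,
  // leaving nothing of the 32 loaded bits behind.
  wide >>= (predicate_state & 63);
  return static_cast<uint32_t>(wide);
}
```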


##### Hardening the address of the load

When hardening the loaded value is inapplicable, most often because the
instruction directly leaks information (like `cmp` or `jmpq`), we switch to
hardening the _address_ of the load instead of the loaded value. This avoids
increasing register pressure by unfolding the load or paying some other high
cost.

To understand how this works in practice, we need to examine the exact
semantics of the x86 addressing modes which, in their fully general form, look
like `offset(%base,%index,scale)`. Here `%base` and `%index` are 64-bit
registers that can potentially be any value, and may be attacker controlled,
and `scale` and `offset` are fixed immediate values. `scale` must be `1`, `2`,
`4`, or `8`, and `offset` can be any 32-bit sign extended value. The exact
computation performed to find the address is then: `%base + (scale * %index) +
offset` under 64-bit 2's complement modular arithmetic.

One issue with this approach is that, after hardening, the `%base + (scale *
%index)` subexpression will compute a value near zero (`-1 + (scale * -1)`) and
then a large, positive `offset` will index into memory within the first two
gigabytes of address space. While these offsets are not attacker controlled,
the attacker could choose to attack a load which happens to have the desired
offset and then successfully read memory in that region. This significantly
raises the burden on the attacker and limits the scope of attack but does not
eliminate it. To fully close the attack we must work with the operating system
to preclude mapping memory in the low two gigabytes of address space.
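
Concretely, with `%base` and `%index` both forced to `-1`, the address the
hardware computes is pinned just below the (non-attacker-controlled) `offset`.
A small worked example of the arithmetic (the constants are illustrative):
```
#include <cstdint>
#include <cstdio>

int main() {
  // Hardened (misspeculated) values of the attacker-influenced registers.
  uint64_t base = ~uint64_t{0};   // -1
  uint64_t index = ~uint64_t{0};  // -1
  uint64_t scale = 8;             // immediate: 1, 2, 4, or 8
  uint64_t offset = 0x12345678;   // immediate: 32-bit sign-extended constant

  // 64-bit modular arithmetic, exactly as the addressing mode computes it.
  // base + scale * index is -9, so the result is offset - 9: within the low
  // 2gb for any positive offset (or just wrapping below zero for tiny ones).
  uint64_t address = base + scale * index + offset;
  std::printf("0x%llx\n", static_cast<unsigned long long>(address));
  return 0;
}
```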


###### 64-bit load checking instructions

We can use the following instruction sequences to check loads. We set up `%r8`
in these examples to hold the special value of `-1` which will be `cmov`ed over
`%rax` in misspeculated paths.

Single register addressing mode:
```
        ...

.LBB0_4:                                # %danger
        cmovneq %r8, %rax               # Conditionally update predicate state.
        orq     %rax, %rsi              # Mask the pointer if misspeculating.
        movl    (%rsi), %edi
```

Two register addressing mode:
```
        ...

.LBB0_4:                                # %danger
        cmovneq %r8, %rax               # Conditionally update predicate state.
        orq     %rax, %rsi              # Mask the pointer if misspeculating.
        orq     %rax, %rcx              # Mask the index if misspeculating.
        movl    (%rsi,%rcx), %edi
```

This will result in a negative address near zero or in `offset` wrapping the
address space back to a small positive address. Small, negative addresses will
fault in user-mode for most operating systems, but targets which need the high
address space to be user accessible may need to adjust the exact sequence used
above. Additionally, the low addresses will need to be marked unreadable by the
OS to fully harden the load.


###### RIP-relative addressing is even easier to break

There is a common addressing mode idiom that is substantially harder to check:
addressing relative to the instruction pointer. We cannot change the value of
the instruction pointer register and so we have the harder problem of forcing
`%base + scale * %index + offset` to be an invalid address, by *only* changing
`%index`. The only advantage we have is that the attacker also cannot modify
`%base`. If we use the fast instruction sequence above, but only apply it to
the index, we will always access `%rip + (scale * -1) + offset`. If the
attacker can find a load which, with this address, happens to point at secret
data, then they can reach it. However, the loader and base libraries can also
simply refuse to map the heap, data segments, or stack within 2gb of any of the
text in the program, much like they can reserve the low 2gb of address space.


###### The flag registers again make everything hard

Unfortunately, the technique of using `orq` instructions has a serious flaw on
x86. The very thing that makes it easy to accumulate state, the flags register
containing predicates, causes serious problems here because it may be live and
used by the loading instruction or subsequent instructions. On x86, the `orq`
instruction **sets** the flags and will override anything already there. This
makes inserting these instructions into the instruction stream very hazardous.
Unfortunately, unlike when hardening the loaded value, we have no fallback here
and so we must have a fully general approach available.

The first thing we must do when generating these sequences is try to analyze
the surrounding code to prove that the flags are not in fact live or being
used. Typically, the flags have been set by some other instruction which just
happens to clobber the flags register (much like ours!) with nothing actually
depending on them. In those cases, it is safe to directly insert these
instructions. Alternatively, we may be able to move them earlier to avoid
clobbering the used value.

However, this may ultimately be impossible. In that case, we need to preserve
the flags around these instructions:
```
        ...

.LBB0_4:                                # %danger
        cmovneq %r8, %rax               # Conditionally update predicate state.
        pushfq
        orq     %rax, %rcx              # Mask the pointer if misspeculating.
        orq     %rax, %rdx              # Mask the index if misspeculating.
        popfq
        movl    (%rcx,%rdx), %edi
```

Using the `pushf` and `popf` instructions saves the flags register around our
inserted code, but comes at a high cost. First, we must store the flags to the
stack and reload them. Second, this causes the stack pointer to be adjusted
dynamically, requiring a frame pointer be used for referring to temporaries
spilled to the stack, etc.

On newer x86 processors we can use the `lahf` and `sahf` instructions to save
all of the flags besides the overflow flag in a register rather than on the
stack. We can then use `seto` and `add` to save and restore the overflow flag
in a register. Combined, this will save and restore flags in the same manner as
above but using two registers rather than the stack. That is still quite
expensive, though slightly less so than `pushf` and `popf` in most cases.


###### A flag-less alternative on Haswell, Zen and newer processors

Starting with the BMI2 x86 instruction set extensions available on Haswell and
Zen processors, there is an instruction for shifting that does not set any
flags: `shrx`. We can use this and the `lea` instruction to implement analogous
code sequences to the above ones. However, these are still very marginally
slower, as there are fewer ports able to dispatch shift instructions in most
modern x86 processors than there are for `or` instructions.

Fast, single register addressing mode:
```
        ...

.LBB0_4:                                # %danger
        cmovneq %r8, %rax               # Conditionally update predicate state.
        shrxq   %rax, %rsi, %rsi        # Shift away bits if misspeculating.
        movl    (%rsi), %edi
```

This will collapse the register to zero or one, causing everything but the
offset in the addressing mode to be less than or equal to 9. This means the
full address can only be guaranteed to be less than `(1 << 31) + 9`. The OS may
wish to protect an extra page of the low address space to account for this.


##### Optimizations

A very large portion of the cost for this approach comes from checking loads in
this way, so it is important to work to optimize this. However, beyond making
the instruction sequences to *apply* the checks efficient (for example by
avoiding `pushfq` and `popfq` sequences), the only significant optimization is
to check fewer loads without introducing a vulnerability. We apply several
techniques to accomplish that.


###### Don't check loads from compile-time constant stack offsets

We implement this optimization on x86 by skipping the checking of loads which
use a fixed frame pointer offset.

The result of this optimization is that patterns like reloading a spilled
register or accessing a global field don't get checked. This is a very
significant performance win.


###### Don't check dependent loads

A core part of why this mitigation strategy works is that it establishes a
data-flow check on the loaded address. However, this means that if the address
itself was already loaded using a checked load, there is no need to check a
dependent load provided it is within the same basic block as the checked load,
and therefore has no additional predicates guarding it. Consider code like the
following:
```
        ...

.LBB0_4:                                # %danger
        movq    (%rcx), %rdi
        movl    (%rdi), %edx
```

This will get transformed into:
```
        ...

.LBB0_4:                                # %danger
        cmovneq %r8, %rax               # Conditionally update predicate state.
        orq     %rax, %rcx              # Mask the pointer if misspeculating.
        movq    (%rcx), %rdi            # Hardened load.
        movl    (%rdi), %edx            # Unhardened load due to dependent addr.
```

This doesn't check the load through `%rdi` as that pointer is dependent on a
checked load already.


###### Protect large, load-heavy blocks with a single lfence

It may be worth using a single `lfence` instruction at the start of a block
which begins with a (very) large number of loads that require independent
protection *and* which require hardening the address of the load. However, this
is unlikely to be profitable in practice. The latency hit of the hardening
would need to exceed that of an `lfence` when *correctly* speculatively
executed. But in that case, the `lfence` cost is a complete loss of speculative
execution (at a minimum). So far, the evidence we have of the performance cost
of using `lfence` indicates few if any hot code patterns where this trade off
would make sense.


###### Tempting optimizations that break the security model

Several optimizations were considered which didn't pan out due to failure to
uphold the security model. One in particular is worth discussing as many others
will reduce to it.

We wondered whether only the *first* load in a basic block could be checked. If
the check works as intended, it forms an invalid pointer that doesn't even
virtual-address translate in the hardware. It should fault very early on in its
processing. Maybe that would stop things in time for the misspeculated path to
fail to leak any secrets. This doesn't end up working because the processor is
fundamentally out-of-order, even in its speculative domain. As a consequence,
the attacker could cause the initial address computation itself to stall and
allow an arbitrary number of unrelated loads (including attacked loads of
secret data) to pass through.


#### Interprocedural Checking

Modern x86 processors may speculate into called functions and out of functions
to their return address. As a consequence, we need a way to check loads that
occur after a misspeculated predicate but where the load and the misspeculated
predicate are in different functions. In essence, we need some interprocedural
generalization of the predicate state tracking. A primary challenge to passing
the predicate state between functions is that we would like to not require a
change to the ABI or calling convention in order to make this mitigation more
deployable, and further would like code mitigated in this way to be easily
mixed with code not mitigated in this way, without completely losing the value
of the mitigation.


##### Embed the predicate state into the high bit(s) of the stack pointer

We can use the same technique that allows hardening pointers to pass the
predicate state into and out of functions. The stack pointer is trivially
passed between functions and we can test for it having the high bits set to
detect when it has been marked due to misspeculation. The callsite instruction
sequence looks like (assuming a misspeculated state value of `-1`):
```
        ...

.LBB0_4:                                # %danger
        cmovneq %r8, %rax               # Conditionally update predicate state.
        shlq    $47, %rax
        orq     %rax, %rsp
        callq   other_function
        movq    %rsp, %rax
        sarq    $63, %rax               # Sign extend the high bit to all bits.
```

This first puts the predicate state into the high bits of `%rsp` before calling
the function and then reads it back out of high bits of `%rsp` afterward. When
correctly executing (speculatively or not), these are all no-ops. When
misspeculating, the stack pointer will end up negative. We arrange for it to
remain a canonical address, but otherwise leave the low bits alone to allow
stack adjustments to proceed normally without disrupting this. Within the
called function, we can extract this predicate state and then reset it on
return:
```
other_function:
        # prolog
        movq    %rsp, %rax
        sarq    $63, %rax               # Sign extend the high bit to all bits.
        # ...

.LBB0_N:
        cmovneq %r8, %rax               # Conditionally update predicate state.
        shlq    $47, %rax
        orq     %rax, %rsp
        retq
```

This approach is effective when all code is mitigated in this fashion, and can
even survive very limited reaches into unmitigated code (the state will
round-trip in and back out of an unmitigated function, it just won't be
updated). But it does have some limitations. There is a cost to merging the
state into `%rsp` and it doesn't insulate mitigated code from misspeculation in
an unmitigated caller.
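
The `shlq $47` / `sarq $63` round trip can be sketched with ordinary integer
arithmetic (illustrative helper names; the real code manipulates `%rsp`
directly and assumes the usual two's complement arithmetic right shift):
```
#include <cstdint>

// Merge the predicate state (0 or all-ones) into the high bits of a
// stack-pointer-sized value before a call, keeping it a canonical address.
uint64_t merge_state_into_sp(uint64_t sp, uint64_t state) {
  return sp | (state << 47);
}

// Recover the state after a call (or at function entry) by sign-extending the
// top bit back across all 64 bits: 0 stays 0, a poisoned pointer yields -1.
uint64_t extract_state_from_sp(uint64_t sp) {
  return static_cast<uint64_t>(static_cast<int64_t>(sp) >> 63);
}
```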

There is also an advantage to using this form of interprocedural mitigation: by
forming these invalid stack pointer addresses we can prevent speculative
returns from successfully reading speculatively written values off the actual
stack. This works first by forming a data dependency between computing the
address of the return address on the stack and our predicate state. And even
once that dependency is satisfied, if a misprediction has caused the state to
be poisoned, the resulting stack pointer will be invalid.


##### Rewrite API of internal functions to directly propagate predicate state

(Not yet implemented.)

We have the option with internal functions to directly adjust their API to
accept the predicate as an argument and return it. This is likely to be
marginally cheaper than embedding into `%rsp` for entering functions.
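
For an internal function this would look roughly like the following sketch
(illustrative signature only; nothing here is an implemented interface):
```
#include <cstdint>

void consume(int data);  // Hypothetical sink for the loaded value.

// The predicate state (0 when correct, all-ones when misspeculated) rides
// along as an extra argument and return value rather than being packed into
// the stack pointer.
uintptr_t internal_helper(int *data, uintptr_t predicate_state) {
  int *p = reinterpret_cast<int *>(
      reinterpret_cast<uintptr_t>(data) | predicate_state);
  consume(*p);             // Hardened load using the caller's state.
  return predicate_state;  // Hand the state back so the caller keeps tracking it.
}
```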


##### Use `lfence` to guard function transitions

An `lfence` instruction can be used to prevent subsequent loads from
speculatively executing until all prior mispredicted predicates have resolved.
We can use this as a broader barrier against speculative loads executing
between functions. We emit it in the entry block to handle calls, and prior to
each return. This approach also has the advantage of providing the strongest
degree of mitigation when mixed with unmitigated code by halting all
misspeculation entering a function which is mitigated, regardless of what
occurred in the caller. However, such a mixture is inherently more risky.
Whether this kind of mixture is a sufficient mitigation requires careful
analysis.

Unfortunately, experimental results indicate that the performance overhead of
this approach is very high for certain patterns of code. A classic example is
any form of recursive evaluation engine. The hot, rapid call and return
sequences exhibit dramatic performance loss when mitigated with `lfence`. This
component alone can regress performance by 2x or more, making it an unpleasant
tradeoff even when only used in a mixture of code.


##### Use an internal TLS location to pass predicate state

We can define a special thread-local value to hold the predicate state between
functions. This avoids direct ABI implications by using a side channel between
callers and callees to communicate the predicate state. It also allows implicit
zero-initialization of the state, which allows non-checked code to be the first
code executed.

However, this requires a load from TLS in the entry block, a store to TLS
before every call and every ret, and a load from TLS after every call. As a
consequence it is expected to be substantially more expensive even than using
`%rsp` and potentially `lfence` within the function entry block.
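
A sketch of what the TLS variant would look like (the thread-local name and the
surrounding shape are purely illustrative):
```
#include <cstdint>

// Thread-local slot holding the predicate state between functions. It is
// zero-initialized, so unmitigated code calling in starts out "correct".
thread_local uintptr_t slh_predicate_state = 0;

void callee();

void caller(uintptr_t predicate_state) {
  slh_predicate_state = predicate_state;  // Store before every call...
  callee();
  predicate_state = slh_predicate_state;  // ...and reload after it returns.
  // The callee symmetrically loads the state on entry and stores it before
  // returning, which is where the extra cost comes from.
}
```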


##### Define a new ABI and/or calling convention

We could define a new ABI and/or calling convention to explicitly pass the
predicate state in and out of functions. This may be interesting if none of the
alternatives have adequate performance, but it makes deployment and adoption
dramatically more complex, and potentially infeasible.


## High-Level Alternative Mitigation Strategies

There are completely different alternative approaches to mitigating variant 1
attacks. [Most](https://lwn.net/Articles/743265/)
[discussion](https://lwn.net/Articles/744287/) so far focuses on mitigating
specific known attackable components in the Linux kernel (or other kernels) by
manually rewriting the code to contain an instruction sequence that is not
vulnerable. For x86 systems this is done by either injecting an `lfence`
instruction along the code path which would leak data if executed speculatively
or by rewriting memory accesses to have branchless masking to a known safe
region. On Intel systems, `lfence` [will prevent the speculative load of secret
data](https://newsroom.intel.com/wp-content/uploads/sites/11/2018/01/Intel-Analysis-of-Speculative-Execution-Side-Channels.pdf).
On AMD systems `lfence` is currently a no-op, but can be made
dispatch-serializing by setting an MSR, and thus preclude misspeculation of the
code path ([mitigation G-2 +
V1-1](https://developer.amd.com/wp-content/resources/Managing-Speculation-on-AMD-Processors.pdf)).

However, this relies on finding and enumerating all possible points in code
which could be attacked to leak information. While in some cases static
analysis is effective at doing this at scale, in many cases it still relies on
human judgement to evaluate whether code might be vulnerable. Especially for
software systems which receive less detailed scrutiny but remain sensitive to
these attacks, this seems like an impractical security model. We need an
automatic and systematic mitigation strategy.


### Automatic `lfence` on Conditional Edges

A natural way to scale up the existing hand-coded mitigations is simply to
inject an `lfence` instruction into both the target and fallthrough
destinations of every conditional branch. This ensures that no predicate or
bounds check can be bypassed speculatively. However, the performance overhead
of this approach is, simply put, catastrophic. Yet it remains the only truly
"secure by default" approach known prior to this effort and serves as the
baseline for performance.
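
For reference, this is roughly what such a barrier looks like at the source
level for the bounds-check example from the problem statement (using the
`_mm_lfence` intrinsic; the struct shape here is simplified for illustration):
```
#include <emmintrin.h>  // _mm_lfence

struct Array {
  unsigned long length;
  unsigned char *data;
};

unsigned char read_element(const Array *arr, unsigned long untrusted_offset) {
  if (untrusted_offset < arr->length) {
    // Speculation barrier: the load below cannot execute until the bounds
    // check above has actually resolved, at a significant performance cost.
    _mm_lfence();
    return arr->data[untrusted_offset];
  }
  return 0;
}
```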

One attempt to address the performance overhead of this and make it more
realistic to deploy is [MSVC's /Qspectre
switch](https://blogs.msdn.microsoft.com/vcblog/2018/01/15/spectre-mitigations-in-msvc/).
Their technique is to use static analysis within the compiler to only insert
`lfence` instructions into conditional edges at risk of attack. However,
[initial](https://arstechnica.com/gadgets/2018/02/microsofts-compiler-level-spectre-fix-shows-how-hard-this-problem-will-be-to-solve/)
[analysis](https://www.paulkocher.com/doc/MicrosoftCompilerSpectreMitigation.html)
has shown that this approach is incomplete and only catches a small and limited
subset of attackable patterns which happen to resemble very closely the initial
proofs of concept. As such, while its performance is acceptable, it does not
appear to be an adequate systematic mitigation.


## Performance Overhead

The performance overhead of this style of comprehensive mitigation is very
high. However, it compares very favorably with previously recommended
approaches such as the `lfence` instruction. Just as users can restrict the
scope of `lfence` to control its performance impact, this mitigation technique
could be restricted in scope as well.

However, it is important to understand what it would cost to get a fully
mitigated baseline. Here we assume targeting a Haswell (or newer) processor and
using all of the tricks to improve performance (so the low 2gb of the address
space and +/- 2gb surrounding any PC in the program are left unprotected). We
ran both Google's microbenchmark suite and a large highly-tuned server built
using ThinLTO and PGO. All were built with `-march=haswell` to give access to
BMI2 instructions, and benchmarks were run on large Haswell servers. We
collected data both with an `lfence`-based mitigation and with load hardening
as presented here. The summary is that mitigating with load hardening is 1.77x
faster than mitigating with `lfence`, and the overhead of load hardening
compared to a normal program is likely between a 10% overhead and a 50%
overhead, with most large applications seeing a 30% overhead or less.

| Benchmark                              | `lfence` | Load Hardening | Mitigated Speedup |
| -------------------------------------- | -------: | -------------: | ----------------: |
| Google microbenchmark suite            |   -74.8% |         -36.4% |          **2.5x** |
| Large server QPS (using ThinLTO & PGO) |   -62%   |         -29%   |          **1.8x** |

Below is a visualization of the microbenchmark suite results which helps show
the distribution of results that is somewhat lost in the summary. The y-axis is
a log-scale speedup ratio of load hardening relative to `lfence` (up -> faster
-> better). Each box-and-whiskers represents one microbenchmark which may have
many different metrics measured. The red line marks the median, the box marks
the first and third quartiles, and the whiskers mark the min and max.

![Microbenchmark result visualization](speculative_load_hardening_microbenchmarks.png)

We don't yet have benchmark data on SPEC or the LLVM test suite, but we can
work on getting that. Still, the above should give a pretty clear
characterization of the performance, and specific benchmarks are unlikely to
reveal especially interesting properties.


### Future Work: Fine Grained Control and API-Integration

The performance overhead of this technique is likely to be very significant and
something users wish to control or reduce. There are interesting options here
that impact the implementation strategy used.

One particularly appealing option is to allow both opt-in and opt-out of this
mitigation at reasonably fine granularity such as on a per-function basis,
including intelligent handling of inlining decisions -- protected code can be
prevented from inlining into unprotected code, and unprotected code will become
protected when inlined into protected code. For systems where only a limited
set of code is reachable by externally controlled inputs, it may be possible to
limit the scope of mitigation through such mechanisms without compromising the
application's overall security. The performance impact may also be focused in a
few key functions that can be hand-mitigated in ways that have lower
performance overhead while the remainder of the application receives automatic
protection.

For both limiting the scope of mitigation and manually mitigating hot
functions, there needs to be some support for mixing mitigated and unmitigated
code without completely defeating the mitigation. For the first use case, it
would be particularly desirable that mitigated code remains safe when called
during misspeculation from unmitigated code.

For the second use case, it may be important to connect the automatic
mitigation technique to explicit mitigation APIs such as what is described in
http://wg21.link/p0928 (or any other eventual API) so that there is a clean way
to switch from automatic to manual mitigation without immediately exposing a
hole. However, the design for how to do this is hard to come up with until the
APIs are better established. We will revisit this as those APIs mature.
