=====================================
Performance Tips for Frontend Authors
=====================================

.. contents::
   :local:
   :depth: 2

Abstract
========

The intended audience of this document is developers of language frontends
targeting LLVM IR. This document is home to a collection of tips on how to
generate IR that optimizes well.  As with any optimizer, LLVM has its strengths
and weaknesses.  In some cases, surprisingly small changes in the source IR
can have a large effect on the generated code.

Avoid loads and stores of large aggregate types
================================================

LLVM currently does not optimize loads and stores of large :ref:`aggregate
types <t_aggregate>` (i.e. structs and arrays) well.  As an alternative,
consider loading individual fields from memory.

Aggregates that are smaller than the largest (performant) load or store
instruction supported by the targeted hardware are well supported.  These can
be an effective way to represent collections of small packed fields.
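
For example, rather than loading an entire struct to read one field, a
frontend might address and load just that field.  The sketch below assumes a
hypothetical two-field ``%struct.point`` type:

.. code-block:: llvm

   %struct.point = type { i32, i32 }

   define i32 @get_y(%struct.point* %p) {
     ; Discouraged: load the whole aggregate, then extract one field.
     ;   %agg = load %struct.point, %struct.point* %p
     ;   %y   = extractvalue %struct.point %agg, 1
     ; Preferred: compute the field address and load only what is needed.
     %y.addr = getelementptr inbounds %struct.point, %struct.point* %p, i64 0, i32 1
     %y = load i32, i32* %y.addr
     ret i32 %y
   }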

Prefer zext over sext when legal
==================================

On some architectures (X86_64 is one), sign extension can involve an extra
instruction whereas zero extension can be folded into a load.  LLVM will try to
replace a sext with a zext when it can be proven safe, but if you have
information in your source language about the range of an integer value, it can
be profitable to use a zext rather than a sext.

Alternatively, you can :ref:`specify the range of the value using metadata
<range-metadata>` and LLVM can do the sext to zext conversion for you.
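
As a sketch, assuming the frontend knows the loaded value always falls in a
small non-negative range (the function names below are hypothetical), either
form conveys that fact:

.. code-block:: llvm

   define i32 @widen_nonnegative(i8 %v) {
     ; The source language guarantees %v >= 0, so prefer zext over sext.
     %w = zext i8 %v to i32
     ret i32 %w
   }

   define i32 @widen_with_range(i8* %p) {
     ; Alternatively, attach a range to the load; the value is known to be
     ; in [0, 127), so LLVM may rewrite the sext as a zext.
     %v = load i8, i8* %p, !range !0
     %w = sext i8 %v to i32
     ret i32 %w
   }

   !0 = !{i8 0, i8 127}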

Zext GEP indices to machine register width
============================================

Internally, LLVM often promotes the width of GEP indices to machine register
width.  When it does so, it will default to using sign extension (sext)
operations for safety.  If your source language provides information about
the range of the index, you may wish to manually extend indices to machine
register width using a zext instruction.
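
A minimal sketch, assuming a 64-bit target and an index the frontend knows is
non-negative:

.. code-block:: llvm

   define i32 @at(i32* %base, i32 %idx) {
     ; Widen the index explicitly with zext; otherwise LLVM may widen it
     ; with a more conservative sext.  The gep is also marked inbounds,
     ; as recommended in the list below.
     %idx.ext = zext i32 %idx to i64
     %addr = getelementptr inbounds i32, i32* %base, i64 %idx.ext
     %val = load i32, i32* %addr
     ret i32 %val
   }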

Other things to consider
=========================

#. Make sure that a DataLayout is provided (this will likely become required in
   the near future, but is certainly important for optimization).

#. Add nsw/nuw flags as appropriate.  Reasoning about overflow is
   generally hard for an optimizer, so providing these facts from the frontend
   can be very impactful.  For languages which need overflow semantics,
   consider using the :ref:`overflow intrinsics <int_overflow>` (see the
   overflow sketch after this list).

#. Use fast-math flags on floating point operations if legal.  If you don't
   need strict IEEE floating point semantics, there are a number of additional
   optimizations that can be performed.  This can be highly impactful for
   floating point intensive computations (see the fast-math sketch after this
   list).

#. Use inbounds on geps.  This can help to disambiguate some aliasing queries.

#. Add noalias/align/dereferenceable/nonnull to function arguments and return
   values as appropriate (see the attribute sketch after this list).

#. Mark functions as readnone/readonly or noreturn/nounwind when known.  The
   optimizer will try to infer these flags, but may not always be able to.
   Manual annotations are particularly important for external functions that
   the optimizer cannot analyze (also covered by the attribute sketch after
   this list).

#. Use ptrtoint/inttoptr sparingly (they interfere with pointer aliasing
   analysis); prefer GEPs.

#. Use the lifetime.start/lifetime.end and invariant.start/invariant.end
   intrinsics where possible.  Common profitable uses are for stack-like data
   structures (thus allowing dead store elimination) and for describing
   lifetimes of allocas (thus allowing smaller stack sizes).

#. Use pointer aliasing metadata, especially tbaa metadata, to communicate
   otherwise-non-deducible pointer aliasing facts.

#. Use the "most-private" possible linkage types for the functions being defined
   (private, internal or linkonce_odr preferably).

#. Mark invariant locations using !invariant.load and TBAA's constant flags.

#. Prefer globals over inttoptr of a constant address - this gives you
   dereferenceability information.  In MCJIT, use getSymbolAddress to provide
   the actual address.

#. Be wary of ordered and atomic memory operations.  They are hard to optimize
   and may not be well optimized by the current optimizer.  Depending on your
   source language, you may consider using fences instead.

#. If calling a function which is known to throw an exception (unwind), use
   an invoke with a normal destination which contains an unreachable
   instruction.  This form conveys to the optimizer that the call returns
   abnormally (see the invoke sketch after this list).  For an invoke which
   neither returns normally nor requires unwind code in the current function,
   you can use a noreturn call instruction if desired.  This is generally not
   required because the optimizer will convert an invoke with an unreachable
   unwind destination to a call instruction.

#. If your language uses range checks, consider using the IRCE pass.  It is not
   currently part of the standard pass order.

#. For languages with numerous rarely executed guard conditions (e.g. null
   checks, type checks, range checks) consider adding an extra execution or
   two of LoopUnswitch and LICM to your pass order.  The standard pass order,
   which is tuned for C and C++ applications, may not be sufficient to remove
   all dischargeable checks from loops.

#. Use profile metadata to indicate statically known cold paths, even if
   dynamic profiling information is not available.  This can make a large
   difference in code placement and thus the performance of tight loops (see
   the profile metadata sketch after this list).

#. When generating code for loops, try to avoid terminating the header block of
   the loop earlier than necessary.  If the terminator of the loop header
   block is a loop exiting conditional branch, the effectiveness of LICM will
   be limited for loads not in the header.  (This is due to the fact that LLVM
   may not know such a load is safe to speculatively execute and thus can't
   lift an otherwise loop invariant load unless it can prove the exiting
   condition is not taken.)  It can be profitable, in some cases, to emit such
   instructions into the header even if they are not used along a rarely
   executed path that exits the loop.  This guidance specifically does not
   apply if the condition which terminates the loop header is itself invariant,
   or can be easily discharged by inspecting the loop index variables.

#. In hot loops, consider duplicating instructions from small basic blocks
   which end in highly predictable terminators into their successor blocks.
   If a hot successor block contains instructions which can be vectorized
   with the duplicated ones, this can provide a noticeable throughput
   improvement.  Note that this is not always profitable and does involve a
   potentially large increase in code size.

#. Avoid high in-degree basic blocks (e.g. basic blocks with dozens or hundreds
   of predecessors).  Among other issues, the register allocator is known to
   perform badly when confronted with such structures.  The only exception to
   this guidance is that a unified return block with high in-degree is fine.
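
The overflow sketch referenced above; the function names and the
trap-on-overflow policy are purely illustrative:

.. code-block:: llvm

   define i32 @add_no_overflow(i32 %a, i32 %b) {
     ; The frontend knows signed overflow cannot occur here, so it says so.
     %sum = add nsw i32 %a, %b
     ret i32 %sum
   }

   define i32 @add_checked(i32 %a, i32 %b) {
     ; For languages that must detect overflow, use the intrinsic rather than
     ; re-implementing the check with wider arithmetic.
     %res = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %a, i32 %b)
     %ov = extractvalue { i32, i1 } %res, 1
     br i1 %ov, label %trap, label %ok

   ok:
     %sum = extractvalue { i32, i1 } %res, 0
     ret i32 %sum

   trap:
     call void @llvm.trap()
     unreachable
   }

   declare { i32, i1 } @llvm.sadd.with.overflow.i32(i32, i32)
   declare void @llvm.trap()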
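
The fast-math sketch: each flagged operation individually opts in to the
IEEE-relaxing optimizations (the computation itself is arbitrary):

.. code-block:: llvm

   define float @scaled_sum(float %a, float %b, float %c) {
     ; 'fast' enables reassociation and related transforms that assume,
     ; among other things, no NaNs and no infinities.
     %t = fadd fast float %a, %b
     %r = fmul fast float %t, %c
     ret float %r
   }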
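
The attribute sketch: hypothetical declarations whose attributes record facts
the frontend knows but the optimizer cannot infer on its own:

.. code-block:: llvm

   ; The callee only reads memory, does not unwind, and its first argument is
   ; a non-null, 4-byte-aligned pointer to at least 64 dereferenceable bytes
   ; that does not alias any other pointer visible to it.
   declare i32 @sum_array(i32* noalias nonnull align 4 dereferenceable(64),
                          i32) readonly nounwind

   ; An allocator whose result never aliases other pointers, and a diagnostic
   ; routine which never returns.
   declare noalias i8* @my_alloc(i64) nounwind
   declare void @fatal_error(i8*) noreturn nounwind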
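
The invoke sketch: ``@throw_error`` is assumed to always unwind, and the C++
personality function is used purely as an example:

.. code-block:: llvm

   define void @raise_error(i8* %msg)
       personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*) {
   entry:
     ; The normal destination holds only an unreachable, telling the
     ; optimizer this call never returns normally.
     invoke void @throw_error(i8* %msg)
             to label %never unwind label %lpad

   never:
     unreachable

   lpad:
     ; Cleanup code for the current function would go here.
     %lp = landingpad { i8*, i32 }
             cleanup
     resume { i8*, i32 } %lp
   }

   declare void @throw_error(i8*)
   declare i32 @__gxx_personality_v0(...)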
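
The profile metadata sketch: the branch weights below are made up, but they
mark the error path as statically cold:

.. code-block:: llvm

   define void @check(i32 %x) {
   entry:
     %is_err = icmp eq i32 %x, 0
     ; Weight the error branch far below the common path even though no
     ; profile data was collected.
     br i1 %is_err, label %error, label %cont, !prof !0

   error:
     call void @handle_error()
     ret void

   cont:
     ret void
   }

   declare void @handle_error()

   !0 = !{!"branch_weights", i32 1, i32 2000}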

p.s. If you want to help improve this document, patches expanding any of the
above items into standalone sections of their own with a more complete
discussion would be very welcome.


Adding to this document
=======================

If you run across a case that you feel deserves to be covered here, please send
a patch to `llvm-commits
<http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits>`_ for review.

If you have questions on these items, please direct them to `llvmdev
<http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev>`_.  The more relevant
context you are able to give to your question, the more likely it is to be
answered.