Reactor Documentation
=====================

Reactor is an embedded language for C++ to facilitate dynamic code generation and specialization.

Introduction
------------

To generate the code for an expression such as
```C++
float y = 1 - x;
```
using the LLVM compiler framework, one needs to execute
```C++
Value *valueY = BinaryOperator::CreateFSub(ConstantFP::get(Type::getFloatTy(Context), 1.0f), valueX, "y", basicBlock);
```

For large expressions this quickly becomes hard to read and tedious to write and modify.

With Reactor, it becomes as simple as writing
```C++
Float y = 1 - x;
```
Note the capital letter for the type. This is not the code that performs the calculation; it is the code that, when executed, records the calculation to be performed.

This is possible through the use of C++ operator overloading. Reactor also supports control flow constructs and pointer arithmetic with C-like syntax.

Motivation
----------

Just-in-time (JIT) compiled code has the potential to be faster than statically compiled code, through [run-time specialization](http://en.wikipedia.org/wiki/Run-time_algorithm_specialisation). However, this is rarely achieved in practice.

Specialization in general is the use of a more optimal routine that is specific to a certain set of conditions. For example, when sorting two numbers it is faster to swap them if they are out of order than to call a generic quicksort function. Specialization can be done statically, by explicitly writing each variant or by using metaprogramming to generate multiple variants at compile time, or dynamically, by examining the parameters at run-time and generating a specialized path.

Because specialization can be done statically, sometimes aided by metaprogramming, the ability of a JIT-compiler to do it at run-time is often disregarded. Specialized benchmarks show no advantage of JIT code over static code. However, a specialized benchmark does not take into account that a typical real-world application deals with many unpredictable conditions. Systems can have one core or several dozen cores, and many different ISA extensions. This alone can make it impractical to write fully specialized routines manually, and even with the help of metaprogramming it results in code bloat. Worse yet, any non-trivial application has a layered architecture in which lower layers (e.g. framework APIs) know very little or nothing about how they are used by higher layers. Various parameters also depend on user input.

Run-time specialization has access to the full context in which each routine executes, and although the optimization gained from specializing for a single parameter is small, the combined speedup can be huge. As an extreme example, interpreters can execute any kind of program in any language, but by specializing for a specific program you get a compiled version of that program. You don't need a full-blown language to observe the huge difference between interpretation and specialization through compilation, though: most applications process some form of command list in an interpreted fashion, and even a series of calls into a framework API can be compiled into a more efficient whole at run-time.

While the benefit of run-time specialization should now be apparent, JIT-compiled languages lack many of the practical advantages of static compilation. JIT-compilers are very constrained in how much time they can spend on compiling bytecode into machine code. This limits their ability to even reach parity with static compilation, let alone exceed it by performing run-time specialization. And even if compilation time were not so constrained, they can't specialize at every opportunity, because that would result in explosive growth of the amount of generated code. They need to be very selective, specializing only the hotspots for frequently recurring conditions, and to manage a cache of the different variants. Even just deciding on the set of variables that forms the entire condition to specialize for can get immensely complicated.

Clearly we need a manageable way to benefit from run-time specialization where it helps significantly, while still resorting to static compilation for everything else. A crucial observation is that the developer has expectations about the application's behavior, and this is valuable information that can be exploited to choose between static and JIT-compilation. One way to do that is to use an API which JIT-compiles the commands provided by the application developer. An example is an advanced DBMS which compiles each query into an optimized sequence of routines, each specialized for the data types involved, the sizes of the CPU caches, etc. Another example is a modern graphics API, which takes shaders (routines executed per pixel or other element) and a set of parameters which affect their execution, and compiles them into GPU-specific code. However, these examples have a very hard divide between what goes on inside and outside the API. You can't exchange data between the statically compiled outside world and the JIT-compiled routines except through the API, and they have very different execution models. In other words, they are highly domain-specific and not generic ways to exploit run-time specialization in arbitrary code.

This is becoming especially problematic for GPUs, as they are now just as programmable as CPUs yet can still only be commanded through an API. Attempts to disguise this by using a single language, such as C++ AMP and SYCL, still have difficulty expressing how data is exchanged, don't provide real control over the specialization, have hidden overhead, and have unpredictable performance characteristics across devices. Meanwhile CPUs gain ever more cores and wider SIMD vector units, but statically compiled languages don't readily exploit this and can't deal with the many code paths required to extract optimal performance. A different language and framework are required.

Concepts and Syntax
-------------------

### Routine and Function<>

Reactor allows you to create new functions at run-time. Their generation happens in C++, and after materializing them, they can be called during the execution of the same C++ program. We call these dynamically generated functions "routines", to distinguish them from statically compiled functions and methods. Reactor's ```Routine``` class encapsulates a routine. Deleting a ```Routine``` object also frees the memory used to store the routine.

To declare the function signature of a routine, use the ```Function<>``` template. The template argument is the signature of a function, using Reactor variable types. Here's a complete definition of a routine taking no arguments and returning an integer:

```C++
Function<Int(Void)> function;
{
    Return(1);
}
```

The braces are superfluous. They just make the syntax look more like regular C++, and they offer a new scope for Reactor variables.

The ```Routine``` is obtained and materialized by "calling" the ```Function<>``` object to give it a name:

```C++
Routine *routine = function("one");
```

Finally, we can obtain the function pointer to the entry point of the routine, and call it:

```C++
int (*callable)() = (int(*)())routine->getEntry();

int result = callable();
assert(result == 1);
```

Note that ```Function<>``` objects are relatively heavyweight, since they have the entire JIT-compiler behind them, while ```Routine``` objects are lightweight and merely provide storage and lifetime management of generated routines. So we typically allow the ```Function<>``` object to be destroyed (by going out of scope), while the ```Routine``` object is retained until we no longer need to call the routine. Hence the distinction between them and the need for a couple of lines of boilerplate code.
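
For illustration, here is a minimal sketch of that lifetime pattern, based on the rules above. The helper function ```buildOne()``` is hypothetical and only serves to scope the ```Function<>``` object:

```C++
Routine *buildOne()
{
    // The heavyweight Function<> object only lives inside this scope.
    Function<Int(Void)> function;
    {
        Return(1);
    }

    // Materialize the routine; the Function<> is destroyed when it goes
    // out of scope, while the lightweight Routine remains usable.
    return function("one");
}

// Later, at any point during the program's execution:
Routine *routine = buildOne();
int (*callable)() = (int(*)())routine->getEntry();
int result = callable();   // result == 1

delete routine;   // frees the memory used to store the generated routine
```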

### Arguments and Expressions

Routines can take various arguments. The following example illustrates the syntax for accessing the arguments of a routine which takes two integer arguments and returns their sum:

```C++
Function<Int(Int, Int)> function;
{
    Int x = function.Arg<0>();
    Int y = function.Arg<1>();

    Int sum = x + y;

    Return(sum);
}
```

Reactor supports various types which correspond to C++ types:

| Class name    | C++ equivalent |
| ------------- | -------------- |
| Int           | int32_t        |
| UInt          | uint32_t       |
| Short         | int16_t        |
| UShort        | uint16_t       |
| Byte          | uint8_t        |
| SByte         | int8_t         |
| Long          | int64_t        |
| ULong         | uint64_t       |
| Float         | float          |

Note that bytes are unsigned unless prefixed with S, while larger integers are signed unless prefixed with U.

These scalar types support all of the C++ arithmetic operations.

Reactor also supports several vector types. For example ```Float4``` is a vector of four floats. They support a select number of C++ operators, as well as several "intrinsic" functions, such as ```Max()``` to compute the element-wise maximum and element-wise comparisons which return a bit mask. Check [Reactor.hpp](../src/Reactor/Reactor.hpp) for all the types, operators and intrinsics.
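
As a brief sketch (assuming ```Float4``` constructors that take four scalars or a single broadcast value; see Reactor.hpp for the exact set), vector operations read like regular arithmetic:

```C++
// Inside a Function<> body, with hypothetical values:
Float4 a(1.0f, 2.0f, 3.0f, 4.0f);   // four individual elements
Float4 b(0.5f);                     // broadcast a single value to all elements

Float4 sum = a + b;                 // element-wise addition
Float4 m = Max(a, b);               // element-wise maximum
```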

### Casting and Reinterpreting

Types can be cast using the constructor-style syntax:

```C++
Function<Int(Float)> function;
{
    Float x = function.Arg<0>();

    Int cast = Int(x);

    Return(cast);
}
```

You can reinterpret-cast a variable using ```As<>```:

```C++
Function<Int(Float)> function;
{
    Float x = function.Arg<0>();

    Int reinterpret = As<Int>(x);

    Return(reinterpret);
}
```

Note that this is a bitwise cast. Unlike C++'s ```reinterpret_cast<>```, it does not allow casting between types of different sizes. Think of it as storing the value in memory and then loading from that same address into the new type.
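
For example, reinterpreting the bits of ```1.0f``` yields its IEEE-754 single-precision encoding, ```0x3F800000```. A sketch reusing the signature from the first example:

```C++
Function<Int(Void)> function;
{
    Float one = 1.0f;

    // Sign 0, biased exponent 127 (0x7F), mantissa 0: 0x3F800000.
    Int bits = As<Int>(one);

    Return(bits);
}
```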

### Pointers

Pointers also use a template class:

```C++
Function<Int(Pointer<Int>)> function;
{
    Pointer<Int> x = function.Arg<0>();

    Int dereference = *x;

    Return(dereference);
}
```

Pointer arithmetic is only supported on ```Pointer<Byte>```, and can be used to access structure fields:

```C++
struct S
{
    int x;
    int y;
};

Function<Int(Pointer<Byte>)> function;
{
    Pointer<Byte> s = function.Arg<0>();

    Int y = *Pointer<Int>(s + offsetof(S, y));

    Return(y);
}
```

Reactor also defines an ```OFFSET()``` macro equivalent to the standard ```offsetof()``` macro.
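
Assuming ```OFFSET()``` takes the same arguments as ```offsetof()```, the previous example could equivalently be written as follows (a sketch, not taken verbatim from Reactor's sources):

```C++
Function<Int(Pointer<Byte>)> function;
{
    Pointer<Byte> s = function.Arg<0>();

    // OFFSET(S, y) computes the byte offset of member y within S,
    // just like offsetof(S, y).
    Int y = *Pointer<Int>(s + OFFSET(S, y));

    Return(y);
}
```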

### Conditionals

To generate, for example, the [unit step](https://en.wikipedia.org/wiki/Heaviside_step_function) function:

```C++
Function<Float(Float)> function;
{
    Float x = function.Arg<0>();

    If(x > 0.0f)
    {
        Return(1.0f);
    }
    Else If(x < 0.0f)
    {
        Return(0.0f);
    }
    Else
    {
        Return(0.5f);
    }
}
```

There's also an ```IfThenElse()``` intrinsic function which corresponds to the C++ ```?:``` operator.
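
For instance, a sign-like test could be written as in the sketch below. It assumes ```IfThenElse(condition, ifTrue, ifFalse)``` is available for ```Int``` operands; check Reactor.hpp for the exact overloads:

```C++
Function<Int(Int)> function;
{
    Int x = function.Arg<0>();

    // Equivalent to the C++ expression: (x > 0) ? 1 : 0
    Return(IfThenElse(x > 0, Int(1), Int(0)));
}
```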

### Loops

Loops also have a syntax similar to C++:

```C++
Function<Int(Pointer<Int>, Int)> function;
{
    Pointer<Int> p = function.Arg<0>();
    Int n = function.Arg<1>();
    Int total = 0;

    For(Int i = 0, i < n, i++)
    {
        total += p[i];
    }

    Return(total);
}
```

Note the use of commas instead of semicolons to separate the loop expressions.

```While(expr) {}``` also works as expected, but there is no ```Do {} While(expr)``` equivalent, because a trailing ```While(expr)``` could not be distinguished from the start of a new while loop. Instead, there's a ```Do {} Until(expr)``` construct, where you use the inverse condition to exit the loop.
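
For example, a loop that always executes its body at least once might look like this sketch (assuming the ```Do {} Until(expr)``` form exactly as written above):

```C++
Function<Int(Int)> function;
{
    Int n = function.Arg<0>();
    Int i = 0;

    // The body runs at least once; the loop exits when the condition holds.
    Do
    {
        i += 1;
    }
    Until(i >= n)

    Return(i);
}
```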

Specialization
--------------

The above examples don't illustrate anything that can't be written as a regular C++ function. The real power of Reactor is to generate routines that are specialized for a certain set of conditions, or "state".

```C++
Function<Int(Pointer<Int>, Int)> function;
{
    Pointer<Int> p = function.Arg<0>();
    Int n = function.Arg<1>();
    Int total = 0;

    For(Int i = 0, i < n, i++)
    {
        if(state.operation == ADD)
        {
            total += p[i];
        }
        else if(state.operation == SUBTRACT)
        {
            total -= p[i];
        }
        else if(state.operation == AND)
        {
            total &= p[i];
        }
        else if(...)
        {
            ...
        }
    }

    Return(total);
}
```

Note that this example uses regular C++ ```if``` and ```else``` constructs. They only determine which code ends up in the generated routine, and don't end up in the generated code themselves. Thus the generated routine contains a loop with just one arithmetic or logical operation, making it more efficient than if this were written in regular C++.

Of course one could write an equally efficient function in regular C++ like this:

```C++
int function(int *p, int n)
{
    int total = 0;

    if(state.operation == ADD)
    {
        for(int i = 0; i < n; i++)
        {
            total += p[i];
        }
    }
    else if(state.operation == SUBTRACT)
    {
        for(int i = 0; i < n; i++)
        {
            total -= p[i];
        }
    }
    else if(state.operation == AND)
    {
        for(int i = 0; i < n; i++)
        {
            total &= p[i];
        }
    }
    else if(...)
    {
        ...
    }

    return total;
}
```

But now there's a lot of repeated code. It could be made more manageable using macros or templates, but that doesn't help reduce the binary size of the statically compiled code. That's fine when there are only a handful of state conditions to specialize for, but when you have multiple state variables with many possible values each, the total number of combinations can be prohibitive.

This is especially the case when implementing APIs which offer a broad set of features, of which developers are likely to use only a select subset. The quintessential example is graphics processing, where there are long pipelines of optional operations and both fixed-function and programmable stages. Applications configure the state of these stages between each draw call.

With Reactor, we can write the code for such pipelines in a syntax that is as easy to read as a naive unoptimized implementation, while at the same time specializing the code for exactly the operations required by the pipeline configuration.
