…that is, producing optimized code -- so that the complexity of Eigen, which we'll explain here, is …
The problem is that if we make a naive C++ library where the VectorXf class has an operator+ return…
Traversing the arrays twice instead of once is terrible for performance, as it means that we do man…
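To make this concrete, here is a minimal sketch (invented class, not Eigen code) of such a naive vector class: with it, u = v + w first runs a loop inside operator+ to produce a temporary, then a second loop to copy that temporary into u.

\code
// A deliberately naive vector class, for illustration only (not Eigen code).
#include <cstddef>
#include <vector>

class NaiveVectorXf
{
  std::vector<float> m_data;
public:
  explicit NaiveVectorXf(std::size_t size) : m_data(size) {}
  std::size_t size() const { return m_data.size(); }
  float  operator[](std::size_t i) const { return m_data[i]; }
  float& operator[](std::size_t i)       { return m_data[i]; }

  // Each operator+ call allocates a temporary result and runs its own loop.
  friend NaiveVectorXf operator+(const NaiveVectorXf& a, const NaiveVectorXf& b)
  {
    NaiveVectorXf result(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
      result[i] = a[i] + b[i];
    return result;
  }
};
// With this class, u = v + w traverses the arrays twice: once inside operator+
// to fill the temporary, and once again to copy the temporary into u.
\endcode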
…. Notice that Eigen also supports AltiVec, and that all the discussion here applies al…
…we have chosen size=50, so our vectors consist of 50 floats, and 50 is not a multiple of 4. This …
When we do
… be stored as a pointer to a dynamically-allocated array. Because of this, we need to abstract sto…
…ensions are Dynamic or fixed at compile-time. The partial specialization that we are looking at is:
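Since the quoted code is not reproduced in this excerpt, here is a simplified sketch of the shape of such a partial specialization for a float column vector. All names here are invented, including the internal_aligned_new helper, and the real DenseStorage has more template parameters.

\code
// Simplified sketch, not the actual Eigen source: storage for a matrix whose
// number of rows is dynamic and whose number of columns is fixed to 1.
#include <cstddef>

const int DynamicSize = -1;  // stand-in for Eigen's Dynamic

// Hypothetical stand-ins for Eigen's internal aligned allocation helpers.
template<typename T> T*   internal_aligned_new(std::size_t n) { return new T[n]; }
template<typename T> void internal_aligned_delete(T* p)       { delete[] p; }

template<typename T, int Rows, int Cols> class DenseStorageSketch;

template<typename T>
class DenseStorageSketch<T, DynamicSize, 1>
{
  T*  m_data;  // pointer to the dynamically allocated (and aligned) array
  int m_rows;  // run-time number of rows
  // No m_columns member: the number of columns is a compile-time constant (1).
public:
  explicit DenseStorageSketch(int rows)
    : m_data(internal_aligned_new<T>(rows)), m_rows(rows) {}
  ~DenseStorageSketch() { internal_aligned_delete(m_data); }
  int rows() const { return m_rows; }
  int cols() const { return 1; }
  T* data() { return m_data; }
};
\endcode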
…amically allocated. Rather than calling new[] or malloc(), as you can see, we have our own interna…
… m_columns member: indeed, in this partial specialization of DenseStorage, we know the number of c…
Here, v and w are of type VectorXf, which is a typedef for a specialization of Matrix (as we explai…
…we said, the operator+ doesn't by itself perform any computation; it just returns an abstract "sum…
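As a rough, self-contained illustration of this idea (invented names, not the actual CwiseBinaryOp), an expression-template operator+ only records references to its operands and computes coefficients on demand:

\code
// Simplified illustration of the expression-template idea behind operator+;
// the names are invented and this is not the actual Eigen code.
#include <vector>

struct VecSketch
{
  std::vector<float> data;
  float operator[](int i) const { return data[i]; }
};

// The "sum of two vectors" expression: building it performs no arithmetic,
// it merely remembers references to its two operands.
struct SumExprSketch
{
  const VecSketch& lhs;
  const VecSketch& rhs;
  SumExprSketch(const VecSketch& a, const VecSketch& b) : lhs(a), rhs(b) {}
  // Coefficients are computed on demand, typically inside the single loop
  // run by the destination vector's assignment operator.
  float operator[](int i) const { return lhs[i] + rhs[i]; }
};

inline SumExprSketch operator+(const VecSketch& a, const VecSketch& b)
{
  return SumExprSketch(a, b);  // no loop, no temporary array of floats
}
\endcode

The destination vector's operator= can then read coefficients through operator[] and evaluate the whole right-hand side in one single loop.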
Now you might ask, what if we did something like
…s to compile, we'd also need to define an operator+ in the class CwiseBinaryOp... at this point it…
…n C++ is by means of virtual functions. This is dynamic polymorphism. Here we don't want dynamic p…
Here, what we want is to have a single class MatrixBase as the base of many subclasses, in such a w…
…we define a subclass Subclass, we actually make Subclass inherit MatrixBase\<Subclass\>. The point…
This means that we can put almost all the methods and operators in the base class MatrixBase, and h…
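Here is a minimal, self-contained illustration of this "curiously recurring template pattern" (CRTP); the names are invented and the real MatrixBase is of course far richer:

\code
// Minimal illustration of the Curiously Recurring Template Pattern (CRTP);
// the class names are invented, this is not Eigen's actual hierarchy.
template<typename Derived>
class MatrixBaseSketch
{
public:
  // The base class can recover the actual subclass object without any
  // virtual function: the static type is passed as a template parameter.
  Derived& derived() { return *static_cast<Derived*>(this); }
  const Derived& derived() const { return *static_cast<const Derived*>(this); }

  // A method written once in the base class, dispatching statically to the
  // subclass implementation; the compiler can inline everything.
  int size() const { return derived().rows() * derived().cols(); }
};

class VectorSketch : public MatrixBaseSketch<VectorSketch>
{
  int m_rows;
public:
  explicit VectorSketch(int rows) : m_rows(rows) {}
  int rows() const { return m_rows; }
  int cols() const { return 1; }
};
\endcode

A call such as VectorSketch(50).size() is dispatched entirely at compile time: no virtual table, no runtime polymorphism, and nothing prevents the compiler from inlining across the base/subclass boundary.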
So let's end this digression and come back to the piece of code from our example program that we we…
…we said, CwiseBinaryOp is also used for other operations such as subtraction, so it takes another …
…we want to pass the scalar type (a.k.a. numeric type) of VectorXf, which is \c float. How do we de…
…we can't do that here, as the compiler would complain that the type Derived hasn't yet been define…
…we define a partial specialization of internal::traits for T=Matrix\<any template parameters\>. In…
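The trick can be sketched as follows, with invented names standing in for internal::traits, Matrix and MatrixBase. The point is that the traits class template can be specialized for Matrix while Matrix is still an incomplete type, so the base class may query it freely:

\code
// Simplified sketch of the traits idea; the names are invented and the real
// internal::traits carries much more information than just the scalar type.
template<typename T> struct traits_sketch;

// A forward declaration is enough to specialize the traits for it.
template<typename Scalar_, int Rows, int Cols> class MatrixSketch;

// Partial specialization: for any Matrix-like type, the scalar type is simply
// its first template parameter.
template<typename Scalar_, int Rows, int Cols>
struct traits_sketch<MatrixSketch<Scalar_, Rows, Cols> >
{
  typedef Scalar_ Scalar;
};

template<typename Derived>
class DenseBaseSketch
{
public:
  // Legal even though Derived is still an incomplete type at this point:
  // only the traits specialization needs to be visible.
  typedef typename traits_sketch<Derived>::Scalar Scalar;
};

template<typename Scalar_, int Rows, int Cols>
class MatrixSketch : public DenseBaseSketch<MatrixSketch<Scalar_, Rows, Cols> >
{
  // the Scalar typedef is inherited from the base class
};
\endcode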
Anyway, we have declared our operator+. In our case, where \a Derived and \a OtherDerived are Vecto…
we now enter the operator=.
…ss VectorXf, i.e. Matrix. In src/Core/Matrix.h, inside the definition of class Matrix, we see this:
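That code is not reproduced in this excerpt. Its point, developed in the following lines, is that Matrix's own operator= does essentially no work and forwards to the assignment machinery of its base class. A self-contained sketch of that pattern, with invented names:

\code
// Self-contained illustration (invented names, not the Eigen source) of the
// pattern at work here: the concrete vector class defines an operator=, but it
// does no work itself and forwards to its base class, where the actual
// assignment loop lives.
#include <vector>

template<typename Derived>
struct VecBaseSketch
{
  Derived& derived() { return *static_cast<Derived*>(this); }

  // The generic assignment: one single loop that reads the right-hand-side
  // expression coefficient by coefficient. (Resizing is omitted in this sketch.)
  template<typename OtherDerived>
  Derived& assign(const OtherDerived& other)
  {
    for (int i = 0; i < derived().size(); ++i)
      derived()[i] = other[i];
    return derived();
  }
};

struct PlainVecSketch : VecBaseSketch<PlainVecSketch>
{
  std::vector<float> data;

  explicit PlainVecSketch(int n) : data(n) {}
  int size() const { return static_cast<int>(data.size()); }
  float& operator[](int i) { return data[i]; }

  // operator= just forwards; the real logic is in the base class.
  template<typename OtherDerived>
  PlainVecSketch& operator=(const OtherDerived& other) { return assign(other); }
};
\endcode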
What we can see there is:
…naryOp expression doesn't have the EvalBeforeAssigningBit: we said from the beginning that we did…
… allow this as a special exception to the general rule that in assignments we require the dimension…
So, here we are in the partial specialization:
What do we see here? Some assertions, and then the only interesting line is:
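That line is not quoted in this excerpt; it is essentially a dispatch into the assignment helper, along these lines (paraphrased, so treat the exact template arguments as approximate):

\code
// Paraphrased, not a verbatim quote: the one interesting line dispatches to the
// static run() function of internal::assign_impl, whose specialization was
// selected at compile time (traversal strategy, unrolling strategy).
internal::assign_impl<Derived, OtherDerived>::run(derived(), other.derived());
\endcode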
OK, so now we want to know what is inside internal::assign_impl.
So the partial specialization of internal::assign_impl that we're looking at is:
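The quoted specialization is not reproduced in this excerpt either, so here is a strongly simplified, illustrative sketch of what its run() function does in the vectorized, non-unrolled case. The names are invented; the real code computes the alignment bounds through helper traits and handles several more cases.

\code
// Strongly simplified sketch (illustrative names, not the real internal::assign_impl)
// of a vectorized, non-unrolled assignment: peel off an unaligned prefix one
// coefficient at a time, run the main loop one packet of 4 floats at a time,
// then finish the leftover coefficients one at a time.
struct linear_vectorized_assign_sketch
{
  template<typename Dst, typename Src>
  static void run(Dst& dst, const Src& src)
  {
    // Placeholder values: Eigen defines its own Aligned enum and derives
    // packetSize from packet_traits<Scalar>.
    enum { Aligned = 1, packetSize = 4 };

    const int size         = dst.size();
    const int alignedStart = 0;   // our destination array is 128-bit aligned
    const int alignedEnd   = alignedStart
                           + ((size - alignedStart) / packetSize) * packetSize;

    // Unaligned prefix, one coefficient at a time (empty in our example).
    for (int i = 0; i < alignedStart; ++i)
      dst.coeffRef(i) = src.coeff(i);

    // Main vectorized loop: one aligned packet of 4 floats per iteration.
    for (int i = alignedStart; i < alignedEnd; i += packetSize)
      dst.template writePacket<Aligned>(i, src.template packet<Aligned>(i));

    // Leftover coefficients (here, the last 2 of our 50), scalar again.
    for (int i = alignedEnd; i < size; ++i)
      dst.coeffRef(i) = src.coeff(i);
  }
};
\endcode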
As we said at the beginning, vectorization works with blocks of 4 floats. Here, \a PacketSize is 4.
There are two potential problems that we need to deal with:
…we want to group these coefficients by packets of 4 such that each of these packets is 128-bit-ali…
…etSize. Here, there are 50 coefficients to copy and \a packetSize is 4. So we'll have to copy the …
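With our numbers, the bookkeeping from the sketch above works out as follows:

\code
// Quick check with our numbers: size = 50, packetSize = 4, aligned destination.
const int size         = 50;
const int packetSize   = 4;
const int alignedStart = 0;                                                // no unaligned prefix
const int alignedEnd   = (size - alignedStart) / packetSize * packetSize;  // = 48
const int fullPackets  = (alignedEnd - alignedStart) / packetSize;         // = 12 packets of 4 floats
const int leftover     = size - alignedEnd;                                // = 2 coefficients, copied one by one
\endcode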
First, writePacket() here is a method on the left-hand side VectorXf. So we go to src/Core/Matrix.h…
…we are doing a 128-bit-aligned write access, \a PacketScalar is a type representing an "SSE packet …
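For \c float with SSE, the write that this performs can be sketched as follows (illustrative free function; the real member function goes through Eigen's pstore wrappers and uses the Matrix's own data pointer):

\code
// Hedged sketch of what such an aligned packet write boils down to for float
// with SSE; not the real Eigen code.
#include <xmmintrin.h>   // SSE intrinsics; __m128 plays the role of the "packet of 4 floats"

// Store one aligned packet of 4 floats at position 'index' of the destination array.
inline void write_packet_sketch(float* data, int index, __m128 packet)
{
  _mm_store_ps(data + index, packet);   // single aligned 128-bit store instruction
}
\endcode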
OK, that explains how writePacket() works. Now let's look into the packet() call. Remember that we …
…on here is Matrix::packet(). The template parameter \a LoadMode is \a #Aligned. So we're looking at
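That code is not reproduced here; for \c float with SSE it boils down to an aligned load, which can be sketched as (illustrative free function; the real member function goes through Eigen's pload wrappers):

\code
// Hedged sketch of what an aligned packet read boils down to for float with
// SSE; not the real Eigen code.
#include <xmmintrin.h>

// Load one aligned packet of 4 floats starting at position 'index' of the array.
inline __m128 packet_sketch(const float* data, int index)
{
  return _mm_load_ps(data + index);   // single aligned 128-bit load instruction
}
\endcode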
…Op() on them. What is m_functor? Here we must remember what particular template specialization of …
…of the empty class internal::scalar_sum_op<float>. As we mentioned above, don't worry about why we…
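A sketch of such a functor for \c float with SSE (illustrative; the real internal::scalar_sum_op is templated on the scalar type and goes through Eigen's padd() wrapper rather than the raw intrinsic):

\code
// Hedged sketch of a scalar_sum_op-like functor for float with SSE;
// not the real Eigen code.
#include <xmmintrin.h>

struct scalar_sum_op_sketch   // an empty class: it carries no data, only code
{
  // Scalar path: add two floats.
  float operator()(float a, float b) const { return a + b; }

  // Packet path: add two packets of 4 floats with a single SSE instruction.
  __m128 packetOp(__m128 a, __m128 b) const { return _mm_add_ps(a, b); }
};
\endcode

Because the class is empty and both methods are trivially inlined, the packetOp() call in the vectorized loop compiles down to a single SSE addition instruction.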
However, it works just like the one we just explained; it is just simpler because there is no SSE v…
…we are indeed precisely controlling which assembly instructions we emit. Such is the beauty of C++…