Date: Fri, 1 Jun 2001 17:08:44 -0500 (CDT)
From: Chris Lattner <sabre@nondot.org>
To: Vikram S. Adve <vadve@cs.uiuc.edu>
Subject: RE: Interesting: GCC passes

> That is very interesting. I agree that some of these could be done on LLVM
> at link-time, but it is the extra time required that concerns me. Link-time
> optimization is severely time-constrained.

If we were to reimplement any of these optimizations, I assume that we
could do them a translation unit at a time, just as GCC does now. This
would lead to a pipeline like this:

Static optimizations, one translation unit at a time:
.c --GCC--> .llvm --llvmopt--> .llvm

Link-time optimizations:
.llvm --llvm-ld--> .llvm --llvm-link-opt--> .llvm

Of course, many optimizations could be shared between llvmopt and
llvm-link-opt, but they wouldn't need to be shared... Thus compile time
could be faster, because we are using a "smarter" IR (SSA based).

> BTW, about SGI, "borrowing" SSA-based optimizations from one compiler and
> putting it into another is not necessarily easier than re-doing it.
> Optimization code is usually heavily tied in to the specific IR they use.

Understood. The only reason I brought this up is that SGI's IR is
more similar to LLVM than it is different in many respects (SSA based,
relatively low level, etc.), and could be easily adapted. Also, their
optimizations are written in C++ and are actually somewhat
structured... Of course it would be no walk in the park, but it would be
much less time-consuming to adapt, say, SSA-PRE than to rewrite it.

> But your larger point is valid that adding SSA-based optimizations is
> feasible and should be fun. (Again, link-time cost is the issue.)

Assuming link-time cost weren't an issue, the question is:
does using GCC's backend buy us anything?

> It also occurs to me that GCC is probably doing quite a bit of back-end
> optimization (step 16 in your list).
> Do you have a breakdown of that?

Not really. The irritating part of GCC is that it mixes it all up and
doesn't have a clean separation of concerns. A lot of the "back-end
optimization" happens right along with other data optimizations (i.e., CSE
of machine-specific things).

As far as REAL back-end optimizations go, it looks something like this:

1. Instruction combination: try to form CISCy instructions, if available
2. Register movement: try to get registers in the right places for the
   architecture, to avoid register-to-register moves. For example, try to
   get the first argument of a function to land naturally in %o0 on SPARC.
3. Instruction scheduling: 'nuff said :)
4. Register class preferencing: ??
5. Local register allocation
6. Global register allocation
7. Spilling
8. Local register allocation
9. Jump optimization
10. Delay slot scheduling
11. Branch shortening for CISC machines
12. Instruction selection & peephole optimization
13. Debug info output

But none of this would be usable for LLVM anyway, unless we were using
GCC as a static compiler.

-Chris
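
[Editorial aside: the claim above that an SSA-based IR makes optimizations
cheaper is easy to see in miniature with CSE, one of the passes the email
mentions. The sketch below is purely illustrative -- the `local_cse`
function, the tuple encoding of instructions, and the SSA assumption are
all hypothetical choices, not GCC's or LLVM's actual code.]

```python
def local_cse(block):
    """Local common-subexpression elimination over a toy three-address IR.

    Each instruction is a tuple (dest, op, arg1, arg2). Because we assume
    the block is in SSA form (every dest is defined exactly once), a cached
    expression can never become stale, so no invalidation bookkeeping is
    needed -- the whole pass is one dictionary lookup per instruction.
    """
    available = {}  # (op, arg1, arg2) -> register already holding the value
    out = []
    for dest, op, a1, a2 in block:
        key = (op, a1, a2)
        if key in available:
            # The value was already computed: replace with a register copy.
            out.append((dest, "copy", available[key], None))
        else:
            available[key] = dest
            out.append((dest, op, a1, a2))
    return out
```

In a non-SSA IR the same pass would have to invalidate every cached
expression whenever one of its operand registers is redefined, which is the
kind of bookkeeping the "smarter IR" argument is about.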