<html><head><title>The design of toybox</title></head>
<!--#include file="header.html" -->

<b><h2>Design goals</h2></b>

<p>Toybox should be simple, small, fast, and full featured. Often these
things need to be balanced against each other. In general, keeping the
code simple is the most important (and hardest) goal, and small is slightly
more important than fast. Features are the reason we write code in the first
place, but this has all been implemented before, so if we can't do a better
job why bother? It should be possible to get 80% of the way to each goal
before they really start to fight.</p>

<p>Here they are in reverse order of importance:</p>

<b><h3>Features</h3></b>

<p>The <a href=roadmap.html>roadmap</a> has the list of features we're
trying to implement, and the reasons for them. After the 1.0 release
some of that material may get moved here.</p>

<p>Some things are simply outside the scope of the project: even though
posix defines commands for compiling and linking, we're not going to include
a compiler or linker (and support for a potentially infinite number of hardware
targets). And until somebody comes up with a ~30k ssh implementation, we're
going to point you at dropbear or polarssl.</p>

<p>Environmental dependencies are a type of complexity, so needing other
packages to build or run is a big downside. For example, we don't use curses
when we can simply output ansi escape sequences and trust all terminal
programs written in the past 30 years to be able to support them. (A common
use case is to download a statically linked toybox binary to an arbitrary
Linux system, and use it in an otherwise unknown environment; being
self-contained helps support this.)</p>

<b><h3>Speed</h3></b>

<p>It's easy to say lots about optimizing for speed (which is why this section
is so long), but at the same time it's the optimization we care the least
about. The essence of speed is being as efficient as possible, which means
doing as little work as possible. A design that's small and simple gets you
90% of the way there, and most of the rest is either fine-tuning or more
trouble than it's worth (and often actually counterproductive). Still, here's
some advice:</p>

<p>First, understand the darn problem you're trying to solve. You'd think
I wouldn't have to say this, but I do. Trying to find a faster sorting
algorithm is no substitute for figuring out a way to skip the sorting step
entirely. The fastest way to do anything is not to have to do it at all,
and _all_ optimization boils down to avoiding unnecessary work.</p>

<p>Speed is easy to measure; there are dozens of profiling tools for Linux
(although personally I find the "time" command a good starting place).
Don't waste too much time trying to optimize something you can't measure,
and there's not much point speeding up things you don't spend much time doing
anyway.</p>

<p>Understand the difference between throughput and latency. Faster
processors improve throughput, but don't always do much for latency.
After 30 years of Moore's Law, most of the remaining problems are latency,
not throughput. (There are of course a few exceptions, like data compression
code, encryption, rsync...) Worry about throughput inside long-running
loops, and worry about latency everywhere else. (And don't worry too much
about avoiding system calls or function calls or anything else in the name
of speed unless you are in the middle of a tight loop that you've already
proven isn't running fast enough.)</p>

<p>"Locality of reference" is generally nice, in all sorts of contexts.
It's obvious that waiting for disk access is 1000x slower than doing stuff in
RAM (and making the disk seek is 10x slower than sequential reads/writes),
but it's just as true that a loop which stays in L1 cache is many times faster
than a loop that has to wait for a DRAM fetch on each iteration. Don't worry
about whether "&" is faster than "%" until your executable loop stays in L1
cache and the data access is fetching cache lines intelligently. (To
understand DRAM, L1, and L2 cache, read Hannibal's marvelous ram guide at Ars
Technica:
<a href=http://arstechnica.com/paedia/r/ram_guide/ram_guide.part1-2.html>part one</a>,
<a href=http://arstechnica.com/paedia/r/ram_guide/ram_guide.part2-1.html>part two</a>,
<a href=http://arstechnica.com/paedia/r/ram_guide/ram_guide.part3-1.html>part three</a>,
plus this
<a href=http://arstechnica.com/articles/paedia/cpu/caching.ars/1>article on
caching</a>, and this one on
<a href=http://arstechnica.com/articles/paedia/cpu/bandwidth-latency.ars>bandwidth
and latency</a>.
And there's <a href=http://arstechnica.com/paedia/index.html>more where that came from</a>.)
Code running from L1 cache can execute one instruction per clock cycle, going
to L2 cache costs a dozen or so clock cycles, and waiting for a worst case DRAM
fetch (round trip latency with a bank switch) can cost thousands of
clock cycles. (Historically, this disparity has gotten worse with time,
just like the speed hit for swapping to disk. These days, a _big_ L1 cache
is 128k and a big L2 cache is a couple of megabytes. A cheap low-power
embedded processor may have 8k of L1 cache and no L2.)</p>
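<p>To make "locality of reference" concrete, here's a small standalone
sketch (an illustration, not code from the toybox tree) that reads the same
array in two different orders. The row-major loop walks memory sequentially
and uses each cache line it fetches; the column-major loop strides through
memory and lands on a different cache line every iteration:</p>

<blockquote><pre>
#include &lt;stdio.h>
#include &lt;time.h>

#define ROWS 2048
#define COLS 2048

static int grid[ROWS][COLS];

int main(void)
{
  long sum = 0;
  clock_t start;
  int i, j;

  // Sequential: each cache line is fetched once and fully used.
  start = clock();
  for (i = 0; i &lt; ROWS; i++)
    for (j = 0; j &lt; COLS; j++) sum += grid[i][j];
  printf("row major:    %.3f seconds\n", (clock()-start)/(double)CLOCKS_PER_SEC);

  // Strided: same work, but every access touches a new cache line.
  start = clock();
  for (j = 0; j &lt; COLS; j++)
    for (i = 0; i &lt; ROWS; i++) sum += grid[i][j];
  printf("column major: %.3f seconds\n", (clock()-start)/(double)CLOCKS_PER_SEC);

  // Print the total so the compiler can't discard the loops.
  printf("sum=%ld\n", sum);

  return 0;
}
</pre></blockquote>

<p>Same instructions, same data, same result; the only difference is the
order memory gets touched in, and on most hardware the second loop is
several times slower.</p>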
<p>Learn how <a href=http://nommu.org/memory-faq.txt>virtual memory and
memory management units work</a>. Don't touch
memory you don't have to. Even just reading memory evicts stuff from L1 and L2
cache, which may have to be read back in later. Writing memory can force the
operating system to break copy-on-write, which allocates more memory. (The
memory returned by malloc() is only a virtual allocation, filled with lots of
copy-on-write mappings of the zero page. Actual physical pages get allocated
when the copy-on-write gets broken by writing to the virtual page. This
is why checking the return value of malloc() isn't very useful anymore: it
only detects running out of virtual memory, not physical memory. Unless
you're using a <a href=http://nommu.org>NOMMU system</a>, where all bets are off.)</p>
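<p>You can watch that copy-on-write allocation happen from userspace. This
sketch (again just an illustration, and Linux-specific because it reads
/proc) does a large malloc() and reports the process's resident set size
before and after dirtying the pages:</p>

<blockquote><pre>
#include &lt;stdio.h>
#include &lt;stdlib.h>
#include &lt;unistd.h>

// Resident set size in pages, from the second field of /proc/self/statm.
static long rss_pages(void)
{
  long size = 0, rss = 0;
  FILE *fp = fopen("/proc/self/statm", "r");

  if (fp) {
    fscanf(fp, "%ld %ld", &amp;size, &amp;rss);
    fclose(fp);
  }

  return rss;
}

int main(void)
{
  long i, pagesize = sysconf(_SC_PAGESIZE), len = 256*1024*1024;
  char *big = malloc(len);

  if (!big) return 1;
  printf("after malloc: %ld resident pages\n", rss_pages());

  // Writing one byte per page breaks copy-on-write: now it's real memory.
  for (i = 0; i &lt; len; i += pagesize) big[i] = 1;
  printf("after writes: %ld resident pages\n", rss_pages());

  free(big);

  return 0;
}
</pre></blockquote>

<p>The malloc() succeeds without the resident size moving much; it's the
writes that consume 256 megabytes of physical memory.</p>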
<p>Don't think that just because you don't have a swap file the system can't
start swap thrashing: any file backed page (ala mmap) can be evicted, and
there's a reason all running programs require an executable file (they're
mmaped, and can be flushed back to disk when memory is short). And long
before that, disk cache gets reclaimed and has to be read back in. When the
operating system really can't free up any more pages it triggers the out of
memory killer to free up pages by killing processes (the alternative is the
entire OS freezing solid). Modern operating systems seldom run out of
memory gracefully.</p>

<p>Also, it's better to be simple than clever. Many people think that mmap()
is faster than read() because it avoids a copy, but twiddling with the memory
management is itself slow, and can cause unnecessary CPU cache flushes. And
if a read faults in dozens of pages sequentially, but your mmap iterates
backwards through a file (causing lots of seeks, each of which your program
blocks waiting for), the read can be many times faster. On the other hand, the
mmap can sometimes use less memory, since the memory provided by mmap
comes from the page cache (allocated anyway), and it can be faster if you're
doing a lot of different updates to the same area. The moral? Measure, then
try to speed things up, and measure again to confirm it actually _did_ speed
things up rather than making things worse. (And understanding what's really
going on underneath is a big help to making it happen faster.)</p>
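<p>Here's what the two approaches look like side by side, as a hypothetical
sketch (summing the bytes of a file, error handling kept minimal). Which
version wins depends on file size, access pattern, and what's already in the
page cache, which is exactly why you measure:</p>

<blockquote><pre>
#include &lt;fcntl.h>
#include &lt;stdio.h>
#include &lt;sys/mman.h>
#include &lt;sys/stat.h>
#include &lt;unistd.h>

int main(int argc, char *argv[])
{
  struct stat st;
  char buf[65536], *map;
  unsigned long sum = 0;
  long len, i;
  int fd;

  if (argc != 2 || (fd = open(argv[1], O_RDONLY)) &lt; 0) return 1;
  if (fstat(fd, &amp;st) || !st.st_size) return 1;

  // Version 1: read() through a fixed buffer. Sequential and predictable,
  // and the kernel's readahead is good at exactly this.
  while ((len = read(fd, buf, sizeof(buf))) > 0)
    for (i = 0; i &lt; len; i++) sum += (unsigned char)buf[i];
  printf("read: %lu\n", sum);

  // Version 2: mmap() the file and index it like an array. No copy, but
  // the first touch of each page can fault.
  map = mmap(0, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
  if (map != MAP_FAILED) {
    for (sum = 0, i = 0; i &lt; st.st_size; i++) sum += (unsigned char)map[i];
    printf("mmap: %lu\n", sum);
    munmap(map, st.st_size);
  }
  close(fd);

  return 0;
}
</pre></blockquote>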
<p>In general, being simple is better than being clever. Optimization
strategies change with time. For example, decades ago precalculating a table
of results (for things like isdigit() or cosine(int degrees)) was clearly
faster because processors were so slow. Then processors got faster and grew
math coprocessors, and calculating the value each time became faster than
the table lookup (because the calculation fit in L1 cache but the lookup
had to go out to DRAM). Then cache sizes got bigger (the Pentium M has
2 megabytes of L2 cache) and the table fit in cache, so the table became
fast again... Predicting how changes in hardware will affect your algorithm
is difficult, and using ten year old optimization advice can produce
laughably bad results. But being simple and efficient is always going to
give at least a reasonable result.</p>

<p>The famous quote from Ken Thompson, "When in doubt, use brute force",
applies to toybox. Do the simple thing first, do as little of it as possible,
and make sure it's right. You can always speed it up later.</p>

<b><h3>Size</h3></b>
<p>Again, simple gives you most of this. An algorithm that does less work
is generally smaller. Understand the problem, treat size as a cost, and
get a good bang for the byte.</p>

<p>Understand the difference between binary size, heap size, and stack size.
Your binary is the executable file on disk, your heap is where malloc() memory
lives, and your stack is where local variables (and function call return
addresses) live. Optimizing for binary size is generally good: executing
fewer instructions makes your program run faster (and fits more of it in
cache). On embedded systems, binary size is especially precious because
flash is expensive (and its successor, MRAM, even more so). Small stack size
is important for nommu systems because they have to preallocate their stack
and can't make it bigger via page fault. And everybody likes a small heap.</p>

<p>Measure the right things. Especially with modern optimizers, expecting
something to be smaller is no guarantee it will be after the compiler's done
with it. Binary size isn't the most accurate indicator of the impact of a
given change, because lots of things get combined and rounded during
compilation and linking. Matt Mackall's bloat-o-meter is a python script
which compares two versions of a program, and shows size changes in each
symbol (using the "nm" command behind the scenes). To use this, run
"make baseline" to build a baseline version to compare against, and
then "make bloatometer" to compare that baseline version against the current
code.</p>

<p>Avoid special cases. Whenever you see similar chunks of code in more than
one place, it might be possible to combine them and have the users call shared
code. (This is the most commonly cited trick, which doesn't make it easy. If
seeing two lines of code do the same thing makes you slightly uncomfortable,
you've got the right mindset.)</p>

<p>Some specific advice: using a char in place of an int when doing math
produces significantly larger code on some platforms (notably arm),
because each time the compiler has to emit code to convert it to int, do the
math, and convert it back. Bitfields have this problem on most platforms.
Because of this, using char to index a for() loop is probably not a net win,
although using char (or a bitfield) to store a value in a structure that's
repeated hundreds of times can be a good tradeoff of binary size for heap
space.</p>
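<p>For example (a hypothetical sketch, not code from the tree): keep loop
indexes as int so the compiler doesn't have to truncate and sign-extend on
every iteration, but shrink the fields of a structure that gets allocated
thousands of times:</p>

<blockquote><pre>
// One of these per file: at 10,000 entries, char vs int for "type"
// saves real heap, and the slightly larger access code only exists
// once in the binary.
struct entry {
  long size;
  char type;                 // 1 byte instead of 4: good tradeoff here
  unsigned seen:1, dirty:1;  // bitfields: smaller still, slower to access
};

long total(struct entry *list, int len)
{
  long sum = 0;
  int i;  // int index: no char-to-int conversion code inside the loop

  for (i = 0; i &lt; len; i++) if (list[i].type == 'f') sum += list[i].size;

  return sum;
}
</pre></blockquote>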
<b><h3>Simple</h3></b>

<p>Complexity is a cost, just like code size or runtime speed. Treat it as
a cost, and spend your complexity budget wisely. (Sometimes this means you
can't afford a feature because it complicates the code too much to be
worth it.)</p>

<p>Simplicity has lots of benefits. Simple code is easy to maintain, easy to
port to new processors, easy to audit for security holes, and easy to
understand.</p>

<p>Simplicity itself can have subtle non-obvious aspects requiring a tradeoff
between one kind of simplicity and another: simple for the computer to
execute and simple for a human reader to understand aren't always the
same thing. A compact and clever algorithm that does very little work may
not be as easy to explain or understand as a larger more explicit version
requiring more code, memory, and CPU time. When balancing these, err on the
side of doing less work, but add comments describing how you
could be more explicit.</p>

<p>In general, comments are not a substitute for good code (or well chosen
variable or function names). Commenting "x += y;" with "/* add y to x */"
can actually detract from the program's readability. If you need to describe
what the code is doing (rather than _why_ it's doing it), that means the
code itself isn't very clear.</p>

<p>Prioritizing simplicity tends to serve our other goals: simplifying code
generally reduces its size (both in terms of binary size and runtime memory
usage), and avoiding unnecessary work makes code run faster. Smaller code
also tends to run faster on modern hardware due to CPU caching: fitting your
code into L1 cache is great, and staying in L2 cache is still pretty good.</p>

<p><a href=http://www.joelonsoftware.com/articles/fog0000000069.html>Joel
Spolsky argues against throwing code out and starting over</a>, and he has
good points: an existing debugged codebase contains a huge amount of baked
in knowledge about strange real-world use cases that the designers didn't
know about until users hit the bugs, and most of this knowledge is never
explicitly stated anywhere except in the source code.</p>

<p>That said, the Mythical Man-Month's "build one to throw away" advice points
out that until you've solved the problem you don't properly understand it, and
about the time you finish your first version is when you've finally figured
out what you _should_ have done. (The corollary is that if you build one
expecting to throw it away, you'll actually wind up throwing away two. You
don't understand the problem until you _have_ solved it.)</p>

<p>Joel is talking about what closed source software can afford to do: code
that works and has been paid for is a corporate asset not lightly abandoned.
Open source software can afford to re-implement code that works, over and
over from scratch, for incremental gains. Before toybox, the unix command line
had already been reimplemented from scratch several times (the
original AT&T Unix command line in assembly and then in C, the BSD
versions, Coherent was the first full from-scratch Unix clone in 1980,
Minix was another clone which Linux was inspired by and developed under,
the GNU tools were yet another rewrite intended for use in the stillborn
"Hurd" project, BusyBox was still another rewrite, and more versions
were written in Plan 9, uclinux, klibc, sash, sbase, s6, and of course
android toolbox...) but maybe toybox can do a better job. :)</p>

<p>P.S. How could I resist linking to an article about
<a href=http://blog.outer-court.com/archive/2005-08-24-n14.html>why
programmers should strive to be lazy and dumb</a>?</p>

<b><h2>Portability issues</h2></b>

<b><h3>Platforms</h3></b>
<p>Toybox should run on Android (all commands with musl-libc, as large a subset
as practical with bionic), and every other hardware platform Linux runs on.
Other posix/susv4 environments (perhaps MacOS X or newlib+libgloss) are vaguely
interesting but only if they're easy to support; I'm not going to spend much
effort on them.</p>

<p>I don't do windows.</p>

<p>We depend on C99 and posix-2008 libc features such as the openat() family of
functions. We also assume certain "modern" linux kernel behavior such
as large environment sizes (linux commit b6a2fea39318, went into 2.6.22
released July 2007). In theory this shouldn't prevent us from working on
older kernels or other implementations (ala BSD), but we don't police their
corner cases.</p>

<b><h3>32/64 bit</h3></b>
<p>Toybox should work on both 32 bit and 64 bit systems. By the end of 2008,
64 bit hardware had become the new desktop standard, but 32 bit hardware will
continue to be important in embedded devices for years to come.</p>

<p>Toybox relies on the fact that on any Unix-like platform, pointer and long
are always the same size (on both 32 and 64 bit). Pointer and int are _not_
the same size on 64 bit systems, but pointer and long are.</p>
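<p>A quick sanity check you can compile yourself (just an illustration;
toybox simply relies on this behavior): a pointer round-trips through long
intact, while int would truncate it on 64 bit hardware:</p>

<blockquote><pre>
#include &lt;stdio.h>

int main(void)
{
  int x = 42;
  long stash = (long)&amp;x;  // safe: sizeof(long) == sizeof(void *)
  int *back = (int *)stash;

  printf("int=%zu long=%zu pointer=%zu\n",
    sizeof(int), sizeof(long), sizeof(void *));
  printf("round trip: %d\n", *back);  // prints 42

  return 0;
}
</pre></blockquote>

<p>On a 64 bit Linux system that prints "int=4 long=8 pointer=8"; an (int)
cast in place of the (long) would throw away the top 32 bits of the
address.</p>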
<p>This is guaranteed by the LP64 memory model, a Unix standard (which Linux
and MacOS X both implement, and which modern 64 bit processors such as
x86-64 were <a href=http://www.pagetable.com/?p=6>designed for</a>). See
<a href=http://www.unix.org/whitepapers/64bit.html>the LP64 standard</a> and
<a href=http://www.unix.org/version2/whatsnew/lp64_wp.html>the LP64
rationale</a> for details.</p>

<p>Note that Windows doesn't work like this, and I don't care.
<a href=http://blogs.msdn.com/oldnewthing/archive/2005/01/31/363790.aspx>The
insane legacy reasons why this is broken on Windows are explained here.</a></p>

<b><h3>Signedness of char</h3></b>
<p>On platforms like x86, variables of type char default to signed. On
platforms like arm, char defaults to unsigned. This difference can lead to
subtle portability bugs, and to avoid them we specify which one we want by
feeding the compiler -funsigned-char.</p>

<p>The reason to pick "unsigned" is that way we're 8-bit clean by default.</p>
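<p>The classic bug looks something like this hypothetical loop, which
stores getchar()'s result in a char instead of an int:</p>

<blockquote><pre>
#include &lt;stdio.h>

int main(void)
{
  char c;  // BUG: should be int, getchar() returns 257 distinct values

  // With signed char, a 0xff byte (common in UTF-8 text) compares equal
  // to EOF and ends the loop early. With unsigned char, c can never
  // equal EOF and the loop never ends. Both are wrong, differently,
  // depending on the platform's default.
  while ((c = getchar()) != EOF) putchar(c);

  return 0;
}
</pre></blockquote>

<p>Pinning down signedness with -funsigned-char means code that stores bytes
in char at least misbehaves the same way everywhere, so this kind of bug
shows up in testing instead of surfacing on whichever architecture you
didn't try.</p>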
<p><h3>Error messages and internationalization:</h3></p>

<p>Error messages are extremely terse not just to save bytes, but because we
don't use any sort of _("string") translation infrastructure. (We're not
translating the command names themselves, so we must expect a certain amount
of english knowledge from our users, but let's keep it to a minimum.)</p>

<p>Thus "bad -A '%c'" is
preferable to "Unrecognized address base '%c'", because a non-english speaker
can see that -A was the problem (giving back the command line argument they
supplied). A user with a ~20 word english vocabulary is
more likely to know (or guess) "bad" than the longer message, and you can
use "bad" in place of "invalid", "inappropriate", "unrecognized"...
Similarly, when atolx_range() complains about range constraints with
"4 < 17" or "12 > 5", it's intentional: those don't need to be translated.</p>

<p>The strerror() messages produced by perror_exit() and friends should be
localized by libc, and our error functions also prepend the command name
(which non-english speakers can presumably recognize already). Keep the
explanation in between to a minimum, and where possible feed back the values
they passed in to identify _what_ we couldn't process.
If you say perror_exit("setsockopt"), you've identified the action you
were trying to take, and the perror gives a translated error message (from
libc) explaining _why_ it couldn't do it, so you probably don't need to add
english words like "failed" or "couldn't assign".</p>

<p>All commands should be 8-bit clean, with explicit
<a href=http://yarchive.net/comp/linux/utf8.html>UTF-8</a> support where
necessary. Assume all input data might be utf8, and at least preserve
it and pass it through. (For this reason, our build is -funsigned-char on
all architectures; "char" is unsigned unless you stick "signed" in front
of it.)</p>

<p>Locale support isn't currently a goal; that's a presentation layer issue
(I.E. a GUI problem).</p>

<a name="codestyle" />
<h2>Coding style</h2>

<p>The real coding style holy wars are over things that don't matter
(whitespace, indentation, curly bracket placement...) and thus have no
obviously correct answer. As in academia, "the fighting is so vicious because
the stakes are so small". That said, being consistent makes the code readable,
so here's how to make toybox code look like other toybox code.</p>

<p>Toybox source uses two spaces per indentation level, and wraps at 80
columns. (Indentation of continuation lines is awkward no matter what
you do: sometimes two spaces looks better, sometimes indenting to the
contents of the parentheses looks better.)</p>

<p>I'm aware this indentation style creeps some people out, so here's
the sed invocation to convert groups of two leading spaces to tabs:</p>
<blockquote><pre>
sed -i ':loop;s/^\( *\)  /\1\t/;t loop' filename
</pre></blockquote>

<p>And here's the sed invocation to convert leading tabs to two spaces each:</p>
<blockquote><pre>
sed -i ':loop;s/^\( *\)\t/\1  /;t loop' filename
</pre></blockquote>

<p>There's a space after C flow control statements that look like functions, so
"if (blah)" instead of "if(blah)". (Note that sizeof is actually an
operator, so we don't give it a space for the same reason ++ doesn't get
one. Yeah, it doesn't need the parentheses either, but it gets them.
These rules are mostly to make the code look consistent, and thus easier
to read.) We also put a space around assignment operators (on both sides),
so "int x = 0;".</p>

<p>Blank lines (vertical whitespace) go between thoughts. "We were doing that,
now we're doing this." (Not a hard and fast rule about _where_ it goes,
but there should be some for the same reason writing has paragraph breaks.)</p>

<p>Variable declarations go at the start of blocks, with a blank line between
them and other code. Yes, c99 allows you to put them anywhere, but they're
harder to find if you do that. If there's a large enough distance between
the declaration and the code using it to make you uncomfortable, maybe the
function's too big, or maybe there's an if statement or something you can
use as an excuse to start a new, closer block.</p>

<p>If statements with a single line body go on the same line if the result
fits in 80 columns, on a second line if it doesn't. We usually only use
curly brackets if we need to, either because the body is multiple lines or
because we need to distinguish which if an else binds to. Curly brackets go
on the same line as the test/loop statement. The exception to both cases is
if the test part of an if statement is long enough to split into multiple
lines, then we put the curly bracket on its own line afterwards (so it doesn't
get lost in the multiple line variably indented mess), and we put it there
even if it's only grouping one line (because the indentation level is not
providing clear information in that case).</p>

<p>I.E.</p>

<blockquote>
<pre>
if (thingy) thingy;
else thingy;

if (thingy) {
  thingy;
  thingy;
} else thingy;

if (blah blah blah...
    && blah blah blah)
{
  thingy;
}
</pre></blockquote>

<p>Gotos are allowed for error handling, and for breaking out of
nested loops. In general, a goto should only jump forward (not back), and
should either jump to the end of an outer loop, or to error handling code
at the end of the function. Goto labels are never indented: they override the
block structure of the file. Putting them at the left edge makes them easy
to spot as overrides to the normal flow of control, which they are.</p>
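<p>A hypothetical sketch of that shape (not from the tree; the function name
and behavior are made up for illustration):</p>

<blockquote><pre>
#include &lt;fcntl.h>
#include &lt;unistd.h>

// One cleanup path at the end, reached by forward gotos. The label is
// flush left so it stands out as an override of the block structure.
int copy_file(char *from, char *to)
{
  char buf[4096];
  long len;
  int in = -1, out = -1, rc = -1;

  if ((in = open(from, O_RDONLY)) &lt; 0) goto done;
  if ((out = creat(to, 0644)) &lt; 0) goto done;
  while ((len = read(in, buf, sizeof(buf))) > 0)
    if (write(out, buf, len) != len) goto done;
  rc = len;  // 0 on clean EOF, -1 if read failed

done:
  if (in >= 0) close(in);
  if (out >= 0) close(out);

  return rc;
}
</pre></blockquote>

<p>The alternative (nested if/else staircases, or duplicating the close()
calls at each failure point) is longer and easier to get wrong.</p>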
<p>When there's a shorter way to say something, we tend to do that for
consistency. For example, we tend to say "*blah" instead of "blah[0]" unless
we're referring to more than one element of blah. Similarly, NULL is
really just 0 (and C will automatically typecast 0 to anything, except in
varargs), "if (function() != NULL)" is the same as "if (function())",
"x = (blah == NULL);" is "x = !blah;", and so on.</p>

<p>The goal is to be
concise, not cryptic: if you're worried about the code being hard to
understand, splitting it into multiple steps on multiple lines is
better than a NOP operation like "!= NULL". A common sign of trying too
hard is nesting ? : three levels deep; sometimes an if/else and a temporary
variable are just plain easier to read. If you think you need a comment,
you may be right.</p>

<p>Comments are nice, but don't overdo it. Comments should explain _why_,
not how. If the code doesn't make the how part obvious, that's a problem with
the code. Sometimes choosing a better variable name is more revealing than a
comment. Comments on their own line are better than comments on the end of
lines, and they usually have a blank line before them. Most of toybox's
comments are c99 style // single line comments, even when there's more than
one of them. The /* multiline */ style is used at the start for the metadata,
but not so much in the code itself. They don't nest cleanly, are easy to leave
accidentally unterminated, need extra nonfunctional * characters to look
right, and if you need _that_ much explanation maybe what you really need is
a URL citation linking to a standards document? Long comments can fall out of
sync with what the code is doing. Comments do not get regression tested.
There's no such thing as self-documenting code (if nothing else, code with
_no_ comments is a bit unfriendly to new readers), but "chocolate sauce isn't
the answer to bad cooking" either. Don't use comments as a crutch to explain
unclear code if the code can be fixed.</p>

<!--#include file="footer.html" -->