# Chrome OS Update Process

[TOC]

System updates in modern operating systems like Chrome OS and Android are
called A/B updates, over-the-air ([OTA]) updates, seamless updates, or simply
auto updates. In contrast to traditional system updates (as in Windows or
macOS), where the system boots into a special mode to overwrite the system
partitions, a process that may take minutes or hours, A/B updates have several
advantages, including but not limited to:

*   Updates maintain a workable system on disk during and after an update,
    reducing the likelihood of corrupting a device into an unusable state and
    the need to reflash devices manually or at repair and warranty centers.
*   Updates can happen while the system is running (normally with minimal
    overhead) without interrupting the user. The only downside for users is a
    required reboot (or, in Chrome OS, a sign out, which automatically causes a
    reboot if an update was applied). The reboot takes about 10 seconds and is
    no different from a normal reboot.
*   The user does not need to request an update (although they can); update
    checks happen periodically in the background.
*   If the update fails to apply, the user is not affected. The user continues
    on the old version of the system, and the system attempts to apply the
    update again at a later time.
*   If the update applies correctly but fails to boot, the system rolls back
    to the old partition and the user can still use the system as usual.
*   The user does not need to reserve disk space for the update. The system
    has already reserved enough space in the form of two copies (A and B) of
    each partition. The system does not even need any cache space on the disk;
    everything happens seamlessly from network to memory to the inactive
    partitions.

## Life of an A/B Update

In A/B-capable systems, each partition, such as the kernel or root (or other
artifacts like [DLC]), has two copies. We call these two copies active (A) and
inactive (B). The system boots from the active partition (the copy with the
higher priority at boot time), and when a new update is available, it is
written into the inactive partition. After a successful reboot, the previously
inactive partition becomes active and the old active partition becomes
inactive.

But everything starts with generating update payloads on (Google) servers for
each new system image. Once the update payloads are generated, they are signed
with specific keys and stored in a location known to the update server
(Omaha).

When the updater client initiates an update check (either periodically or on
user request), it first consults the device policies to see whether the check
is allowed. For example, device policies can prevent update checks during
certain times of the day, or require that update check times be scattered
randomly throughout the day.

Once the policies allow the update check, the updater client sends a request
to the update server (all of this communication happens over HTTPS),
identifying parameters like its application ID, hardware ID, version, board,
etc. If the update server decides to serve an update payload, it responds with
all the parameters needed to perform the update, like the URLs to download the
payloads, the metadata signatures, the payload size and hash, etc. The updater
client continues communicating with the update server after different state
changes, for example reporting that it started downloading the payload or
finished the update, or reporting that the update failed with a specific error
code.

Each payload consists of two main sections: metadata and extra data. The
metadata is essentially a list of operations to perform for the update. The
extra data contains the data blobs needed by some of these operations. The
updater client first downloads the metadata and cryptographically verifies it
using the signatures provided in the update server’s response. Once the
metadata is verified as valid, the rest of the payload can easily be verified
cryptographically (mostly through SHA256 hashes).

Next, the updater client marks the inactive partition as unbootable (because
it needs to write the new update into it). From this point on, the system can
no longer roll back to the inactive partition.

Then the updater client performs the operations defined in the metadata (in
the order they appear there), and the rest of the payload is downloaded
gradually as the operations require their data. Once an operation finishes,
its data is discarded. This eliminates the need to cache the entire payload
before applying it. During this process, the updater client periodically
checkpoints the last operation performed, so in the event of a failure or
system shutdown it can resume from where it left off instead of redoing all
operations from the beginning.
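
The checkpoint-and-resume behavior can be sketched with a small shell
simulation (purely illustrative; the operation count, file name, and failure
point are made up and this is not update_engine's real persistence format):

```shell
#!/bin/bash
# Toy simulation of checkpointed operation application: persist the index of
# the last completed operation so a restart resumes instead of starting over.
ckpt=$(mktemp)

apply_ops() {
  local fail_at=$1 start=0 i
  # Resume after the last checkpointed operation, if any.
  [[ -s $ckpt ]] && start=$(( $(cat "$ckpt") + 1 ))
  for (( i = start; i < 5; i++ )); do
    (( i == fail_at )) && return 1       # simulate a crash mid-update
    echo "applied op $i"
    echo "$i" > "$ckpt"                  # checkpoint after each operation
  done
}

apply_ops 3 || true   # first attempt: applies ops 0-2, then "crashes"
apply_ops -1          # retry: resumes at op 3, not op 0
```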

During the download, the updater client hashes the downloaded bytes, and when
the download finishes it verifies the payload signature (located at the end of
the payload). If the signature cannot be verified, the update is rejected.

After the inactive partition is updated, the entire partition is re-read,
hashed, and compared against a hash value passed in the metadata to make sure
the update was written to the partition successfully.

In the next step, the [Postinstall] process (if any) is called. Postinstall
reconstructs the dm-verity hash tree of the root partition and writes it at
the end of the partition (after the last block of the file system). It can
also perform any board-specific tasks or firmware updates necessary. If
postinstall fails, the entire update is considered failed.

Then the updater client enters a state indicating that the update has
completed and the user needs to reboot the system. From this point, until the
user reboots (or signs out), the updater client will not perform any more
system updates, even if newer updates are available. However, it continues to
perform periodic update checks so we can gather statistics on the number of
active devices in the field.

Once the update has succeeded, the inactive partition is given a higher
priority (at boot, the partition with the higher priority is booted first).
When the user reboots the system, it boots into the updated partition, which
is then marked as active. At this point, after the reboot, the updater client
calls the [`chromeos-setgoodkernel`] program. The program verifies the
integrity of the system partitions using dm-verity and marks the active
partition as healthy. At this point the system has been updated successfully.
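
The slot-selection rule described above (boot the highest-priority bootable
copy, fall back if the new copy never marks itself good) can be sketched as
follows. This is a simplified model of the GPT kernel attributes (priority,
tries, successful) that tools like `cgpt show` display; the function, inputs,
and values here are hypothetical, and real firmware logic differs in details:

```shell
#!/bin/bash
# Decide which slot to boot from (priority, tries-remaining, successful)
# triples for slots A and B. Simplified illustrative model.
pick_slot() {
  local a_prio=$1 a_tries=$2 a_ok=$3
  local b_prio=$4 b_tries=$5 b_ok=$6
  local a_bootable=0 b_bootable=0
  # A slot is bootable if it previously booted successfully or still has
  # boot attempts left.
  (( a_ok == 1 || a_tries > 0 )) && a_bootable=1
  (( b_ok == 1 || b_tries > 0 )) && b_bootable=1
  if (( a_bootable == 1 && ( b_bootable == 0 || a_prio >= b_prio ) )); then
    echo A
  elif (( b_bootable == 1 )); then
    echo B
  else
    echo RECOVERY
  fi
}

# Fresh update written to B: higher priority, one try left, not yet marked
# successful -- B is tried first.
pick_slot 1 0 1  2 1 0   # -> B
# B exhausted its tries without ever being marked successful -- fall back to A.
pick_slot 1 0 1  2 0 0   # -> A
```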

## Update Engine Daemon

The `update_engine` daemon is a single-threaded process that runs all the
time. It is the heart of auto updates. It runs at a lower priority in the
background and is one of the last processes to start after system boot.
Different clients (like Chrome or other services) can send update check
requests to the update engine. How requests are passed to the update engine is
system dependent; in Chrome OS it is D-Bus. Look at the [D-Bus interface] for
a list of all available methods.

There are many resiliency features embedded in the update engine that make
auto updates robust, including but not limited to:

*   If the update engine crashes, it restarts automatically.
*   During an active update, it periodically checkpoints the state of the
    update; if it fails to continue the update or crashes in the middle, it
    resumes from the last checkpoint.
*   It retries failed network communication.
*   If it fails to apply a delta payload a few times (due to bit changes on
    the active partition), it switches to a full payload.

The updater client writes its active preferences in
`/var/lib/update_engine/prefs`. These preferences help track changes during
the lifetime of the updater client and allow the update process to continue
properly after failed attempts or crashes.

The core update engine code base in a Chromium OS checkout is located in
`src/aosp/system/update_engine`, fetched from [this repository].

### Policy Management

In Chrome OS, devices can accept different policies from their managing
organizations. Some of these policies affect how and when updates should be
performed. For example, an organization may want to scatter update checks over
certain times of the day so as not to interfere with normal business. Within
the update engine daemon, [UpdateManager] is responsible for loading such
policies and making different decisions based on them. For example, some
policies may allow checking for updates while preventing the download of the
update payload, and some policies disallow update checks within certain time
frames. Anything related to the Chrome OS update policies should be contained
within the [update_manager] directory in the source code.

### Rollback vs. Enterprise Rollback

Chrome OS defines a concept of rollback: whenever a newly updated system does
not work as intended, under certain circumstances the device can be rolled
back to a previously working version. There are two types of rollback
supported in Chrome OS: a (legacy, original) rollback and an enterprise
rollback (admittedly, the naming is confusing).

A normal rollback, which has existed for as long as Chrome OS has had an auto
updater, is performed by switching the currently inactive partition to active
and rebooting into it. It is as simple as running a successful postinstall on
the inactive partition and rebooting the device. It is a feature used by
Chrome under certain circumstances. Of course, a rollback cannot happen if the
inactive partition has been tampered with or has been wiped by the updater
client to install an even newer update. Normally a rollback is followed by a
Powerwash, which clobbers the stateful partition.

Enterprise rollback is a newer feature that allows enterprise users to
downgrade the installed image to an older version. It is very similar to a
normal system update, except that an older update payload is downloaded and
installed. There is no direct API for triggering an enterprise rollback; it is
managed through enterprise device policies only.

Developers should be careful when touching any rollback-related feature and
make sure they know exactly which of these two features they are adapting.

### Interactive vs. Non-Interactive vs. Forced Updates

Non-interactive updates are updates scheduled periodically by the update
engine that happen in the background. Interactive updates, on the other hand,
happen when a user specifically requests an update check (e.g. by clicking the
“Check For Update” button on Chrome OS’s About page). Depending on the update
server’s policies, interactive updates have higher priority than
non-interactive updates (signaled by marker hints in the request). The server
may decide not to provide an update under heavy load, etc. There are other
internal differences between these two types of updates too; for example,
interactive updates try to install the update faster.

Forced updates are similar to interactive updates (initiated by some kind of
user action), but they can also be configured to act as non-interactive. Since
non-interactive updates happen periodically, a forced non-interactive update
causes a non-interactive update at the moment of the request, not at a later
time. A forced non-interactive update can be started with:

```bash
update_engine_client --interactive=false --check_for_update
```

### P2P Updates

Many organizations may not have the external bandwidth required to deliver
system updates to all of their devices. To help with this, a Chrome OS device
can act as a payload server for other client devices on the same network
subnet. This is basically a peer-to-peer update system that allows devices to
download update payloads from other devices on the network. It has to be
enabled explicitly in the organization through device policies, and specific
network configuration is required for P2P updates to work. Regardless of where
the update payloads come from, all update requests still go to the update
servers over HTTPS.

Check out the [P2P update related code] for both the server and the client
side.

### Network

The updater client can download payloads over Ethernet, WiFi, or cellular
networks, depending on which one the device is connected to. Downloading over
cellular networks prompts the user for permission, as it can consume a
considerable amount of data.

### Logs

In Chrome OS, the `update_engine` logs are located in the
`/var/log/update_engine` directory. Whenever `update_engine` starts, it starts
a new log file with the current date-time in its name
(`update_engine.log-DATE-TIME`). Many log files accumulate in
`/var/log/update_engine` after a few restarts of the update engine or after
system reboots. The latest active log is symlinked at
`/var/log/update_engine.log`.
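
The naming and symlink scheme can be illustrated with a self-contained sketch
(using a temporary directory as a stand-in for `/var/log`; the date-time
values are made up):

```shell
#!/bin/bash
# Illustrate the update_engine log layout using a temp dir instead of /var/log.
logroot=$(mktemp -d)
mkdir "$logroot/update_engine"

# Each daemon start creates a new date-time-stamped log file...
touch "$logroot/update_engine/update_engine.log-20240101-080000"
touch "$logroot/update_engine/update_engine.log-20240102-093000"

# ...and update_engine.log is a symlink to the newest one.
latest=$(ls "$logroot/update_engine" | sort | tail -n1)
ln -sf "update_engine/$latest" "$logroot/update_engine.log"

readlink "$logroot/update_engine.log"
# -> update_engine/update_engine.log-20240102-093000
```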

## Update Payload Generation

Update payload generation is the process of converting a set of
partitions/files into a format that is both understandable by the updater
client (especially much older versions of it) and securely verifiable. This
process involves breaking the input partitions into smaller components and
compressing them to reduce the network bandwidth needed when downloading the
payloads.

For each generated payload, there is a corresponding properties file
containing the payload’s metadata in JSON format. Normally this file sits in
the same location as the generated payload, named after the payload file plus
a `.json` suffix, e.g. `/path/to/payload.bin` and `/path/to/payload.bin.json`.
This properties file is necessary for any kind of auto update in [`cros
flash`], AU autotests, etc. Similarly, the update server uses this file to
dispatch the payload properties to the updater clients.

Once update payloads are generated, their original images cannot be changed
anymore; otherwise the update payloads may fail to apply.

`delta_generator` is a tool with a wide range of options for generating
different types of update payloads. Its code is located in
`update_engine/payload_generator`. This directory contains all the source code
related to the mechanics of generating an update payload. None of the files in
this directory should be included or used in any library/executable other than
`delta_generator`, which means this directory is not compiled into the rest of
the update engine tools.

However, using `delta_generator` directly is not recommended. To generate
payloads manually with less effort, use [`cros_generate_update_payloads`].
Most of the higher-level policies and tools for generating payloads live as a
library in [`chromite/lib/paygen`]. Whenever calls to the update payload
generation API are needed, this library should be used instead.

### Update Payload File Specification

Each update payload file has a specific structure, defined in the table below:

|Field|Size (bytes)|Type|Description|
|-----|------------|----|-----------|
|Magic Number|4|char[4]|Magic string "CrAU" identifying this as an update payload.|
|Major Version|8|uint64|Payload major version number.|
|Manifest Size|8|uint64|Manifest size in bytes.|
|Manifest Signature Size|4|uint32|Manifest signature blob size in bytes (only in major version 2).|
|Manifest|Varies|[DeltaArchiveManifest]|The list of operations to be performed.|
|Manifest Signature|Varies|[Signatures]|The signature of the first five fields. There could be multiple signatures if the key has changed.|
|Payload Data|Varies|List of raw or compressed data blobs|The list of binary blobs used by operations in the metadata.|
|Payload Signature Size|Varies|uint64|The size of the payload signature.|
|Payload Signature|Varies|[Signatures]|The signature of the entire payload except the metadata signature. There could be multiple signatures if the key has changed.|
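
To see the fixed header fields in practice, the sketch below builds a tiny
fake header (hypothetical field values, not a real payload) and parses the
magic, major version, and manifest size. The multi-byte integer fields are
stored big-endian:

```shell
#!/bin/bash
# Build a fake payload header (illustrative only -- not a real payload).
payload=$(mktemp)
printf 'CrAU' > "$payload"                              # magic, 4 bytes
printf '\x00\x00\x00\x00\x00\x00\x00\x02' >> "$payload" # major version 2
printf '\x00\x00\x00\x00\x00\x00\x01\x00' >> "$payload" # manifest size = 256

magic=$(head -c4 "$payload")

# Read 8 bytes at the given offset and decode as a big-endian unsigned int.
read_be64() {
  local hex
  hex=$(dd if="$1" bs=1 skip="$2" count=8 2>/dev/null | od -An -tx1 | tr -d ' \n')
  echo $((16#$hex))
}
major=$(read_be64 "$payload" 4)
manifest_size=$(read_be64 "$payload" 12)

echo "magic=$magic major=$major manifest_size=$manifest_size"
# -> magic=CrAU major=2 manifest_size=256
```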

### Delta vs. Full Update Payloads

There are two types of payloads: full and delta. A full payload is generated
solely from the target image (the image we want to update to) and has all the
data necessary to update the inactive partition; hence, full payloads can be
quite large. A delta payload, on the other hand, is a differential update
generated by comparing the source image (the active partitions) and the target
image and producing the diffs between the two. It is basically a differential
update similar to the output of tools like `diff` or `bsdiff`. Hence, applying
a delta payload requires the system to read parts of the active partition in
order to update the inactive partition (or reconstruct the target partition).
Delta payloads are significantly smaller than full payloads. The structure of
the payload is the same for both types.

Payload generation is quite resource intensive, and its tools are implemented
with high parallelism.

#### Generating Full Payloads

A full payload is generated by breaking the partition into 2 MiB
(configurable) chunks and either compressing each chunk with bzip2 or XZ or
keeping it as raw data, whichever produces the smaller result. Full payloads
are much larger than delta payloads and hence require longer download times
when network bandwidth is limited. On the other hand, full payloads are a bit
faster to apply because the system does not need to read data from the source
partition.

#### Generating Delta Payloads

Delta payloads are generated by looking at the source and target images on a
file and metadata basis (more precisely, at the file system level of each
appropriate partition). Delta payloads are possible because Chrome OS
partitions are read-only, so with high certainty we can assume the active
partitions on the client’s device are bit-by-bit equal to the original
partitions produced in the image generation/signing phase. The process for
generating a delta payload is roughly as follows:

1.  Find all the zero-filled blocks on the target partition and produce `ZERO`
    operations for them. A `ZERO` operation basically discards the associated
    blocks (depending on the implementation).
2.  Find all the blocks that have not changed between the source and target
    partitions by directly comparing source and target blocks one-to-one, and
    produce `SOURCE_COPY` operations for them.
3.  List all the files (and their associated blocks) in the source and target
    partitions, and remove the blocks (and files) for which operations were
    already generated in the last two steps. Treat each partition’s remaining
    metadata (inodes, etc.) as a file.
4.  If a file is new, generate a `REPLACE`, `REPLACE_XZ`, or `REPLACE_BZ`
    operation for its data blocks, depending on which one generates the
    smaller data blob.
5.  For each remaining file, compare the source and target blocks and produce
    either a `SOURCE_BSDIFF` or a `PUFFDIFF` operation, depending on which one
    generates the smaller data blob. These two operations produce binary diffs
    between a source and a target data blob. (Look at [bsdiff] and [puffin]
    for the details of these binary differencing programs.)
6.  Sort the operations based on their target partitions’ block offsets.
7.  Optionally merge adjacent identical or similar operations into larger
    operations for better efficiency and potentially smaller payloads.

Full payloads can contain only `REPLACE`, `REPLACE_BZ`, and `REPLACE_XZ`
operations. Delta payloads can contain any operation.
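
Steps 1 and 2 of the delta generation process above can be sketched with a toy
block classifier. Here a "block" is 4 characters and a zero-filled block is
represented by `0000` (all values are made up for illustration; real blocks
are raw bytes of a fixed size):

```shell
#!/bin/bash
# Classify each target block as ZERO, SOURCE_COPY, or left for the later
# diff/replace steps, by comparing fake source and target "partitions".
src="AAAA BBBB CCCC DDDD"
tgt="AAAA XXXX 0000 DDDD"

classify() {
  local i t s ops=()
  read -ra S <<< "$src"
  read -ra T <<< "$tgt"
  for i in "${!T[@]}"; do
    t=${T[$i]} s=${S[$i]}
    if [[ $t == "0000" ]]; then
      ops+=("$i:ZERO")              # step 1: zero-filled target block
    elif [[ $t == "$s" ]]; then
      ops+=("$i:SOURCE_COPY")       # step 2: unchanged block
    else
      ops+=("$i:DIFF")              # left for the later diff/replace steps
    fi
  done
  echo "${ops[@]}"
}

classify   # -> 0:SOURCE_COPY 1:DIFF 2:ZERO 3:SOURCE_COPY
```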

### Major and Minor Versions

The major and minor versions specify, respectively, the update payload file
format and the capability of the updater client to accept certain types of
update payloads. These numbers are [hard coded] in the updater client.

The major version is basically the update payload file version specified in
the [update payload file specification] above (the second field). Each updater
client supports a range of major versions. Currently there are only two major
versions, 1 and 2, and both Chrome OS and Android are on major version 2
(major version 1 is being deprecated). Whenever there are new additions that
cannot fit into the [Manifest protobuf], we need to uprev the major version.
Upreving the major version should be done with utmost care, because older
clients do not know how to handle newer versions. Any major version uprev in
Chrome OS should be associated with a GoldenEye stepping stone.

The minor version defines the capability of the updater client to accept
certain operations or perform certain actions. Each updater client supports a
range of minor versions. For example, an updater client with minor version 4
(or less) does not know how to handle a `PUFFDIFF` operation, so when
generating a delta payload for an image whose updater client has minor version
4 (or less), we cannot produce `PUFFDIFF` operations. The payload generation
process looks at the source image’s minor version to decide which operations
are supported and produces only payloads that conform to those restrictions.
Similarly, if there is a bug in a client with a specific minor version, a
minor version uprev helps avoid generating payloads that would trigger that
bug. However, upreving minor versions is also quite expensive in terms of
maintainability and can be error prone, so one should exercise caution when
making such a change.

Minor versions are irrelevant for full payloads. Full payloads should always
be applicable to very old clients, because updater clients may not send their
current version; if we had different flavors of full payloads, we would not
know which version to serve to a given client.

### Signed vs. Unsigned Payloads

Update payloads can be signed (with private/public key pairs) for use in
production, or kept unsigned for use in testing. Tools like `delta_generator`
help with generating metadata and payload hashes, or with signing the payloads
given private keys.

## update_payload Scripts

[update_payload] contains a set of Python scripts used mostly to validate
payload generation and application. We normally test update payloads using an
actual device (live tests). The [`brillo_update_payload`] script can be used
to generate a payload and test applying it on a host machine. These tests can
be viewed as dynamic tests without the need for an actual device. Other
`update_payload` scripts (like [`check_update_payload`]) can be used to
statically check that a payload is in the correct state and that it applies
correctly. These scripts apply the payload statically, without running the
code in `payload_consumer`.

## Postinstall

[Postinstall] is a process called after the updater client writes the new
image artifacts to the inactive partitions. One of postinstall’s main
responsibilities is to recreate the dm-verity hash tree at the end of the root
partition. Among other things, it also installs new firmware updates and runs
any board-specific processes. Postinstall runs in a separate chroot inside the
newly installed partition, so it is quite isolated from the rest of the
running system. Anything that needs to happen after an update and before the
device is rebooted should be implemented in postinstall.

## Building Update Engine

You can build `update_engine` the same way as other platform applications:

```bash
(chroot) $ emerge-${BOARD} update_engine
```

or, to build without the source copy:

```bash
(chroot) $ cros_workon_make --board=${BOARD} update_engine
```

After a change in the `update_engine` daemon, either build an image and
install it on the device using `cros flash`, etc., or use `cros deploy` to
install only the `update_engine` service on the device:

```bash
(chroot) $ cros deploy update_engine
```

You need to restart the `update_engine` daemon in order for the changes to
take effect:

```bash
# SSH into the device.
restart update-engine # With a dash, not an underscore.
```

Other payload generation tools like `delta_generator` are board agnostic and
are only available in the SDK. So, in order to make changes to
`delta_generator`, you should build the SDK:

```bash
# Do this only once, to start building the 9999 ebuild from ToT.
(chroot) $ cros_workon --host start update_engine

(chroot) $ sudo emerge update_engine
```

If you make any changes to the D-Bus interface, make sure the `system_api`,
`update_engine-client`, and `update_engine` packages are marked to build from
the 9999 ebuild, and then build the packages in that order:

```bash
(chroot) $ emerge-${BOARD} system_api update_engine-client update_engine
```

If you make any changes to the [`update_engine` protobufs] in `system_api`,
build the `system_api` package first.

## Running Unit Tests

[Running unit tests similar to other platforms]:

```bash
(chroot) $ FEATURES=test emerge-<board> update_engine
```

or

```bash
(chroot) $ cros_workon_make --board=<board> --test update_engine
```

or

```bash
(chroot) $ cros_run_unit_tests --board ${BOARD} --packages update_engine
```

The above commands run all the unit tests, but the `update_engine` package is
quite large and running all of them takes a long time. To run all the unit
tests in one test class, run:

```bash
(chroot) $ FEATURES=test \
    P2_TEST_FILTER="*OmahaRequestActionTest.*-*RunAsRoot*" \
    emerge-amd64-generic update_engine
```

To run one exact unit test fixture (e.g. `MultiAppUpdateTest`), run:

```bash
(chroot) $ FEATURES=test \
    P2_TEST_FILTER="*OmahaRequestActionTest.MultiAppUpdateTest-*RunAsRoot*" \
    emerge-amd64-generic update_engine
```

To run the `update_payload` unit tests, enter the `update_engine/scripts`
directory and run the desired `unittest.py` files.

## Initiating a Configured Update

There are different ways to initiate an update:

*   Click the “Check For Update” button on the settings About page. There is
    no way to configure this kind of update check.
*   Use the [`update_engine_client`] program. It offers a few configuration
    options.
*   Call `autest` in crosh. This is mainly used by the QA team and is not
    intended to be used by other teams.
*   Use [`cros flash`]. It internally uses the update engine to flash a device
    with a given image.
*   Run one of the many auto update autotests.
*   Start a [Dev Server] on your host machine and send a specific HTTP request
    (look at the `cros_au` API in the Dev Server code) containing information
    like the IP address of your Chromebook and the location of the update
    payloads to the Dev Server to start an update on your device (**Warning:**
    complicated to do; not recommended).

`update_engine_client` is a client application that can initiate an update or
report the status of the updater client. It has several options, like
initiating an interactive vs. non-interactive update, changing channels,
getting the current status of the update process, doing a rollback, changing
the Omaha URL to download the payload from (the most important one), etc.

The `update_engine` daemon reads the `/etc/lsb-release` file on the device to
identify different update parameters, like the update server (Omaha) URL, the
current channel, etc. To override any of these parameters, create the file
`/mnt/stateful_partition/etc/lsb-release` with the desired customized
parameters. For example, this can be used to point at a developer version of
the update server so that `update_engine` schedules periodic updates against
that specific server.
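
For example, an override file like the following redirects update checks to a
development update server (the server address below is a hypothetical
placeholder; `CHROMEOS_AUSERVER` is the key used for the Omaha URL):

```shell
# Contents of /mnt/stateful_partition/etc/lsb-release (example override).
# The server address below is a hypothetical placeholder.
CHROMEOS_AUSERVER=http://10.0.0.2:8080/update
```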

If you have changes to the protocol that communicates with Omaha that are not
yet deployed on the update server, or you have specific payloads that do not
exist on the production update server, you can use [Nebraska] to help perform
an update.

## Note to Developers and Maintainers

When changing the update engine source code, be extra careful about the
following:

### Do NOT Break Backward Compatibility

At each release cycle we should be able to generate full and delta payloads
that can be applied correctly on older devices running older versions of the
updater client. So, for example, removing or no longer passing arguments in
the metadata proto file might break older clients, as will emitting operations
that older clients do not understand. Whenever you change anything in the
payload generation process, ask yourself: Would it work on older clients? If
not, do I need to guard it with minor versions or some other means?

Especially regarding enterprise rollback, a newer updater client should be
able to accept an older update payload. Normally this happens using a full
payload, but care should be taken not to break this compatibility.

### Think About The Future

When creating a change in the update engine, think about five years from now:

*   How can the change be implemented so that older clients don’t break five
    years from now?
*   How will it be maintained five years from now?
*   How can it make future changes easier without breaking older clients or
    incurring heavy maintenance costs?

### Prefer Not To Implement Your Feature In The Updater Client

If a feature can be implemented on the server side, do NOT implement it in the
updater client. The updater client can be fragile at points, and small
mistakes can have catastrophic consequences. For example, if a bug introduced
in the updater client causes it to crash right before checking for an update,
and we fail to catch the bug early in the release process, then production
devices that have already moved to the new, buggy version may no longer
receive automatic updates at all. So always ask whether the feature being
implemented can be done on the server side (with potentially minimal changes
to the updater client), or whether it can be moved to another service with a
minimal interface to the updater client. Answering these questions will pay
off greatly in the future.

### Be Respectful Of Other Code Bases

The current update engine code base is used in other projects, like Android.
We sync the code base between these two projects frequently. Try not to break
Android or other systems that share the update engine code. Whenever landing a
change, always think about whether Android needs that change:

*   How will it affect Android?
*   Can the change be moved behind an interface, with stub implementations, so
    as not to affect Android?
*   Can Chrome OS or Android specific code be guarded by macros?

As a basic measure, if you add/remove/rename code, make sure to change both
`BUILD.gn` and `Android.bp`. Do not bring Chrome OS specific code (for
example, libraries that live in `system_api` or `dlcservice`) into the common
code of update_engine. Try to separate these concerns using best software
engineering practices.

### Merging from Android (or other code bases)

Chrome OS tracks the Android code as an [upstream branch]. To merge the
Android code into Chrome OS (or vice versa), do a `git merge` of that branch
into Chrome OS, test it using whatever means are appropriate, and upload a
merge commit.

```bash
repo start merge-aosp
git merge --no-ff --strategy=recursive -X patience cros/upstream
repo upload --cbr --no-verify .
```

[Postinstall]: #postinstall
[update payload file specification]: #update-payload-file-specification
[OTA]: https://source.android.com/devices/tech/ota
[DLC]: https://chromium.googlesource.com/chromiumos/platform2/+/master/dlcservice
[`chromeos-setgoodkernel`]: https://chromium.googlesource.com/chromiumos/platform2/+/master/installer/chromeos-setgoodkernel
[D-Bus interface]: /dbus_bindings/org.chromium.UpdateEngineInterface.dbus-xml
[this repository]: /
[UpdateManager]: /update_manager/update_manager.cc
[update_manager]: /update_manager/
[P2P update related code]: https://chromium.googlesource.com/chromiumos/platform2/+/master/p2p/
[`cros_generate_update_payloads`]: https://chromium.googlesource.com/chromiumos/chromite/+/master/scripts/cros_generate_update_payload.py
[`chromite/lib/paygen`]: https://chromium.googlesource.com/chromiumos/chromite/+/master/lib/paygen/
[DeltaArchiveManifest]: /update_metadata.proto#302
[Signatures]: /update_metadata.proto#122
[hard coded]: /update_engine.conf
[Manifest protobuf]: /update_metadata.proto
[update_payload]: /scripts/
[Postinstall]: https://chromium.googlesource.com/chromiumos/platform2/+/master/installer/chromeos-postinst
[`update_engine` protobufs]: https://chromium.googlesource.com/chromiumos/platform2/+/master/system_api/dbus/update_engine/
[Running unit tests similar to other platforms]: https://chromium.googlesource.com/chromiumos/docs/+/master/testing/running_unit_tests.md
[Nebraska]: https://chromium.googlesource.com/chromiumos/platform/dev-util/+/master/nebraska/
[upstream branch]: https://chromium.googlesource.com/aosp/platform/system/update_engine/+/upstream
[`cros flash`]: https://chromium.googlesource.com/chromiumos/docs/+/master/cros_flash.md
[bsdiff]: https://android.googlesource.com/platform/external/bsdiff/+/master
[puffin]: https://android.googlesource.com/platform/external/puffin/+/master
[`update_engine_client`]: /update_engine_client.cc
[`brillo_update_payload`]: /scripts/brillo_update_payload
[`check_update_payload`]: /scripts/paycheck.py
[Dev Server]: https://chromium.googlesource.com/chromiumos/chromite/+/master/docs/devserver.md