page.title=Implementing Security
@jd:body

Introduction

The Android Security Team regularly receives requests for more information about how to prevent potential security issues on Android devices. We also occasionally perform spot-checks of devices and let OEMs and affected partners know of potential issues.

This document provides OEMs and other partners with a number of security best practices based on our own experiences. It extends the Designing for Security documentation provided for developers with best practices that are unique to those building or installing system-level software on devices.

Where possible, the Android Security Team will incorporate tests into the Android Compatibility Test Suite (CTS) and Android Lint to facilitate adoption of these best practices. We encourage partners to contribute tests that can help other Android users. A partial list of security-related tests can be found at: root/cts/tests/tests/security/src/android/security/cts

Development process

Source code security review

Source code review can detect a broad range of security issues, including those identified in this document. Android strongly encourages both manual and automated source code review.

  1. Android Lint should be run on all application code using the Android SDK. Issues that are identified should be corrected.
  2. Native code should be analyzed using an automated tool that can detect memory management issues such as buffer overflows and off-by-one errors.

Automated testing

Automated testing can detect a broad range of security issues, including many of those identified in this document.

  1. CTS is regularly updated with security tests; the most recent version of CTS must be run to verify compatibility.
  2. CTS should be run regularly throughout the development process to detect problems early and reduce time to correction. Android uses CTS as part of continuous integration with our automated build process, which builds multiple times per day.
  3. OEMs should automate security testing of any interfaces including testing with malformed inputs (fuzz testing).
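The fuzz-testing recommendation above can be sketched as follows. This is a minimal illustration, not a production fuzzer: `parse_header` is a hypothetical input handler standing in for a daemon's real parser, and a real harness would additionally track coverage and persist crashing inputs.

```python
import random

def parse_header(data: bytes) -> dict:
    # Hypothetical packet parser standing in for a daemon's input handler.
    # Malformed input must produce a clean error, never an uncontrolled crash.
    if len(data) < 4:
        raise ValueError("short packet")
    length = int.from_bytes(data[:2], "big")
    if length > len(data) - 4:
        raise ValueError("length field exceeds packet size")
    return {"length": length, "type": data[2], "flags": data[3]}

def fuzz(parser, iterations=1000, seed=0):
    """Feed random malformed inputs to the parser; anything other than a
    clean ValueError rejection is a robustness bug worth filing."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(0, 32)))
        try:
            parser(blob)
        except ValueError:
            pass                          # rejected cleanly: expected
        except Exception as exc:          # unexpected failure mode
            crashes.append((blob, exc))
    return crashes
```

For native code, coverage-guided fuzzers are far more effective than random blobs; the sketch only shows the contract being tested: malformed input should yield a clean error, never a crash.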

Signing system images

The signature of the system image is critical for determining the integrity of the device. Specifically:

  1. Devices must not be signed with a key that is publicly known.
  2. Keys used to sign devices should be managed in a manner consistent with industry standard practices for handling sensitive keys, including a hardware security module (HSM) that provides limited, auditable access.

Signing applications (APKs)

Application signatures play an important role in device security. They are used for permissions checks as well as software updates. When selecting a key to use for signing applications, it is important to consider whether an application will be available only on a single device or common across multiple devices. Consider:

  1. Applications must not be signed with a key that is publicly known.
  2. Keys used to sign applications should be managed in a manner consistent with industry standard practices for handling sensitive keys, including an HSM that provides limited, auditable access.
  3. Applications should not be signed with the platform key.
  4. Applications with the same package name should not be signed with different keys. This often occurs when creating an application for different devices, especially when using the platform key. If the application is device-independent, then use the same key across devices. If the application is device-specific, create unique package names per device and key.

App publishing

Google Play provides OEMs with the ability to update applications without performing a complete system update. This can expedite response to security issues and delivery of new features. This also provides a way to make sure that your application has a unique package name.

  1. Apps should be uploaded to Google Play to allow automated updates without requiring a full OTA. Applications that are uploaded but "unpublished" are not directly downloadable by users, but they still receive updates. Users who have previously installed such an app can reinstall it, including on their other devices.
  2. To avoid potential confusion, apps should be created with a package name clearly associated with your company, such as by using a company trademark.
  3. Apps published by OEMs should be uploaded to the Google Play store in order to avoid package name impersonation by third parties.

    If an OEM installs an app on a phone without publishing it on the Play store, another developer could upload that same app, using the same package name, and change the metadata for the app. When presented to the user, this unrelated metadata could create confusion.

Incident response

External parties must have the ability to contact OEMs about device-specific security issues. We strongly recommend the creation of a publicly accessible email address for managing security incidents.

  1. Create a security@your-company.com or similar address and publicize it.
  2. If you become aware of a security issue affecting Android OS or Android devices from multiple OEMs, you should contact the Android Security Team at security@android.com.

Product implementation

Root processes

Root processes are the most frequent target of privilege escalation attacks, so reducing the number of root processes reduces the risk of privilege escalation. CTS has been expanded with an informational test that lists root processes.

  1. Devices should run the minimum necessary code as root. Where possible, use a regular Android process rather than a root process. For example, the ICS Galaxy Nexus has only six root processes: vold, netd, zygote, tf_daemon, ueventd, and init. Please let the Android team know if capabilities are required that are not accessible without root privileges.
  2. Where possible, root code should be isolated from untrusted data and accessed via IPC. For example, reduce root functionality to a small Service accessible via Binder and expose the Service with a signature permission to an application with low or no privileges to handle network traffic.
  3. Root processes must not listen on a network socket.
  4. Root processes must not provide a general-purpose runtime for applications (e.g., a Java VM).

System apps

In general, apps pre-installed by OEMs should not be running with the shared UID of system. Realistically, however, sometimes this is necessary. If an app is using the shared UID of system or another privileged service (i.e., phone), it should not export any services, broadcast receivers, or content providers that can be accessed by third-party apps installed by users.

  1. Devices should run the minimum necessary code as system. Where possible, use an Android process with its own UID rather than reusing the system UID.
  2. Where possible, system code should be isolated from untrusted data and expose IPC only to other trusted processes.
  3. System processes must not listen on a network socket.

Process isolation

The Android Application Sandbox provides applications with an expectation of isolation from other processes on the system, including root processes and debuggers. Unless debugging is specifically enabled by the application and the user, no application should violate that expectation.

  1. Root processes must not access data within individual application data folders, unless using a documented Android debugging method.
  2. Root processes must not access memory of applications, unless using a documented Android debugging method.
  3. The device must not include any application that accesses data or memory of other applications or processes.

SUID files

New setuid programs should not be accessible by untrusted programs. Setuid programs have frequently been the location of vulnerabilities that can be used to gain root access, and minimizing the availability of the program to untrusted applications is a security best practice.

  1. SUID processes must not provide a shell or backdoor that can be used to circumvent the Android security model.
  2. SUID programs must not be writable by any user.
  3. SUID programs should not be world readable or executable. Create a group, limit access to the SUID binary to members of that group, and place any applications that should be able to execute the SUID program into that group.
  4. SUID programs are a common source of user "rooting" of devices. To reduce this risk, SUID programs should not be executable by the shell user.
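The file-permission rules above can be audited mechanically. The sketch below assumes a Linux-style `stat` interface; `check_suid_hygiene` is an illustrative helper for build-time auditing, not a CTS test.

```python
import os
import stat

def check_suid_hygiene(path):
    """Return a list of violations of the SUID guidance above for one file.
    Group-restricted access (e.g. mode 04750) passes; 04755 does not."""
    st = os.stat(path)
    problems = []
    if not (st.st_mode & stat.S_ISUID):
        return problems                      # not a setuid file
    if st.st_mode & (stat.S_IWGRP | stat.S_IWOTH):
        problems.append("writable by non-owner")
    if st.st_mode & (stat.S_IROTH | stat.S_IXOTH):
        problems.append("world readable or executable")
    return problems
```

A build-time check like this can walk the system image and flag every setuid binary whose mode is broader than owner-plus-dedicated-group.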

The CTS verifier has been expanded with an informational test that lists SUID files. Certain setuid files are not permitted, per CTS tests.

Listening sockets

CTS tests have been expanded to fail when a device is listening on any port, on any interface. In the event of a failure, Google will verify that the following best practices are being used:

  1. There should be no listening ports on the device.
  2. It must be possible to disable listening ports without an OTA, through either a server-side or user-device configuration change.
  3. Root processes must not listen on any port.
  4. Processes owned by the system UID must not listen on any port.
  5. For local IPC using sockets, applications must use a UNIX domain socket with access limited to a group. Create a file descriptor for the IPC and make it readable and writable (rw) for a specific UNIX group. Any client applications must be within that UNIX group.
  6. Some devices with multiple processors (e.g. a radio/modem separate from the application processor) use network sockets to communicate between processors. In those instances, the network socket used for inter-processor communication must use an isolated network interface to prevent access by unauthorized applications on the device. One approach is to use iptables to prevent access by other applications on the device.
  7. Daemons that handle listening ports must be robust against malformed data. Google may conduct fuzz-testing against the port using an unauthorized client and, where possible, an authorized client. Any crashes will be filed as bugs with an appropriate severity.
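Item 5 above, a group-restricted UNIX domain socket, can be sketched as follows. The path and mode 0660 are illustrative; in practice the socket file would be owned by a dedicated group that authorized client apps join.

```python
import os
import socket

def create_ipc_socket(path, mode=0o660):
    """Bind a UNIX domain socket and restrict it to owner + group.
    Clients outside the socket file's group cannot open it (subject
    also to the permissions of the containing directory)."""
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(path)
    os.chmod(path, mode)   # rw for owner and group only; no world access
    server.listen(1)
    return server
```

Unlike a TCP port on localhost, the socket file inherits filesystem access control, so group membership gates which processes can connect at all.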

Logging

Logging of data increases the risk of exposure of that data and reduces system performance. Multiple public security incidents have occurred as the result of logging of sensitive user data by applications installed by default on Android devices.

  1. Applications or system services should not log data provided from third-party applications that might include sensitive information.
  2. Applications must not log any Personally Identifiable Information (PII) as part of normal operation.
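One defensive pattern is scrubbing messages before they reach the system log. The sketch below redacts only e-mail addresses with a simple regex, which is purely illustrative; real PII coverage (phone numbers, location, account identifiers) is much broader, and the safest choice is not logging the data at all.

```python
import re

# Illustrative pattern only; real PII detection must cover far more cases.
_EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact(message):
    """Scrub obvious PII from a message before it is written to the log."""
    return _EMAIL.sub("<redacted-email>", message)
```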

CTS has been expanded with a number of tests that check for the presence of potentially sensitive information in the system logs.

Directories

World-writable directories can introduce security weaknesses. A writable directory may enable an application to rename trusted files, substitute its own files, or conduct symlink-based attacks. By creating a symlink to a file, an attacker may trick a trusted program into performing actions it shouldn't.

World-writable directories can also prevent the uninstall of an application from properly cleaning up all files associated with it. Directories created by the system or root users should not be world writable.
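The symlink risk can be reduced by refusing to follow links when a privileged process opens files under a writable directory. A minimal sketch using O_NOFOLLOW (the helper name is ours):

```python
import os

def open_untrusted_path(path):
    """Open a file while refusing to follow a symlink at the final path
    component, so an attacker-planted link in a writable directory cannot
    redirect a privileged process to a different file."""
    return os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
```

On Linux, opening a symlink this way fails with ELOOP; intermediate directory components are still followed, so trusted code should also avoid writable parent directories.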

CTS tests help enforce this best practice by testing known directories.

Configuration files

Many drivers and services rely on configuration and data files stored in directories like /system/etc and various other directories in /data. If these files are processed by a privileged process and are world writable, then it could be possible for an app to exploit a vulnerability in the process by crafting malicious contents in the world-writable file.

  1. Configuration files used by privileged processes should not be world readable.
  2. Configuration files used by privileged processes must not be world writable.
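Both rules can be checked mechanically. The sketch below flags the "must" condition (world-writable) and the "should" condition (world-readable) separately; `audit_config_file` is an illustrative helper, not part of CTS.

```python
import os
import stat

def audit_config_file(path):
    """Check a config file against the two rules above: world-writable
    is a hard failure; world-readable is flagged as a warning."""
    mode = os.stat(path).st_mode
    return {
        "world_writable": bool(mode & stat.S_IWOTH),   # must be False
        "world_readable": bool(mode & stat.S_IROTH),   # should be False
    }
```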

Native code libraries

Any code used by privileged OEM processes must be in /vendor or /system; these filesystems are mounted read-only on boot. Any libraries used by system or other highly-privileged apps installed on the phone should also be in these filesystems. This can prevent a security vulnerability that could allow an attacker to control the code that a privileged process executes.

  1. All native code used by privileged OEM processes must be in /vendor or /system.

Device drivers

Only trusted code should have direct access to drivers. Where possible, the preferred architecture is to provide a single-purpose daemon that proxies calls to the driver and restrict access to the driver to that daemon.

Driver device nodes should not be world readable or writable. CTS tests help enforce this best practice by checking for known instances of exposed drivers.

ADB

ADB must be disabled by default and must require the user to turn it on before accepting connections.

Unlockable bootloaders

Unlockable Android devices must securely erase all user data prior to being unlocked. The failure to properly delete all data on unlocking may allow a physically proximate attacker to gain unauthorized access to confidential Android user data. We've seen numerous instances where device manufacturers improperly implemented unlocking.

Many Android devices support unlocking. This allows the device owner to modify the system partition and/or install a custom operating system. Common use cases include installing a third-party ROM and doing system-level development on the device.

For example, on Google Nexus devices, an end user can run fastboot oem unlock to start the unlocking process. When an end user runs this command, the following message is displayed:

Unlock bootloader?

If you unlock the bootloader, you will be able to install custom operating system software on this phone.

A custom OS is not subject to the same testing as the original OS, and can cause your phone and installed applications to stop working properly.

To prevent unauthorized access to your personal data, unlocking the bootloader will also delete all personal data from your phone (a "factory data reset").

Press the Volume Up/Down buttons to select Yes or No. Then press the Power button to continue.

Yes: Unlock bootloader (may void warranty)

No: Do not unlock bootloader and restart phone.

A device that is unlocked may subsequently be relocked by issuing the fastboot oem lock command. Locking the bootloader provides the same protection of user data with the new custom OS as was available with the original OEM OS. For example, user data will be wiped if the device is unlocked again in the future.

To prevent the disclosure of user data, a device that supports unlocking needs to implement it properly.

A properly implemented unlocking process will have the following properties:

  1. When the unlocking command is confirmed by the user, the device MUST start an immediate data wipe. The "unlocked" flag MUST NOT be set until after the secure deletion is complete.
  2. If a secure deletion cannot be completed, the device MUST stay in a locked state.
  3. If supported by the underlying block device, ioctl(BLKSECDISCARD) or equivalent SHOULD be used. For eMMC devices, this means using a Secure Erase or Secure Trim command. For eMMC 4.5 and later, this means doing a normal Erase or Trim followed by a Sanitize operation.
  4. If BLKSECDISCARD is NOT supported by the underlying block device, ioctl(BLKDISCARD) MUST be used instead. On eMMC devices, this is a normal Trim operation.
  5. If BLKDISCARD is NOT supported, overwriting the block devices with all zeros is acceptable.
  6. An end user MUST have the option to require that user data be wiped prior to flashing a partition. For example, on Nexus devices, this is done via the fastboot oem lock command.
  7. A device MAY record, via efuses or similar mechanism, whether a device was unlocked and/or relocked.
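The fallback order in items 3-5 can be expressed as a simple strategy chain. The sketch below abstracts each wipe method as a callable that raises NotImplementedError when the block device lacks support; the real implementation would issue the BLKSECDISCARD/BLKDISCARD ioctls (or a zero overwrite) against the userdata block device.

```python
def secure_wipe(strategies):
    """Attempt wipe methods in preference order (e.g. BLKSECDISCARD,
    then BLKDISCARD, then zero-overwrite). Each strategy is a callable
    that raises NotImplementedError if the device does not support it.
    Returns True only when a wipe completed; the "unlocked" flag must
    not be set otherwise."""
    for wipe in strategies:
        try:
            wipe()
            return True        # data destroyed; safe to set "unlocked"
        except NotImplementedError:
            continue           # unsupported; fall back to the next method
    return False               # no method worked: device must stay locked
```

Structuring the wipe this way makes requirement 2 explicit in code: if every strategy fails, the function returns False and the device remains locked.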

These requirements ensure that all data is destroyed upon the completion of an unlock operation. Failure to implement these protections is considered a "moderate" level security vulnerability.