page.title=HAL subsystem
@jd:body


Requests

The app framework issues requests for captured results to the camera subsystem. One request corresponds to one set of results. A request encapsulates all configuration information for capturing and processing those results, including resolution and pixel format; manual sensor, lens, and flash control; 3A operating modes; RAW-to-YUV processing control; and statistics generation. This allows much more control over the results' output and processing. Multiple requests can be in flight at once, submitting requests is non-blocking, and requests are always processed in the order they are received.
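
At the HAL level, a capture request is the camera3_capture_request_t structure from camera3.h. The sketch below builds and submits one such request; the stream, buffer, and settings names are illustrative, error handling is omitted, and camera3.h remains the authoritative definition.

    #include <hardware/camera3.h>

    /* Sketch: build and submit one capture request. The stream, buffer,
     * and settings arguments are assumed to have been set up during
     * stream configuration. */
    static void submit_preview_request(camera3_device_t *dev,
                                       camera3_stream_t *preview_stream,
                                       buffer_handle_t *preview_buffer,
                                       const camera_metadata_t *settings,
                                       uint32_t frame_number)
    {
        camera3_stream_buffer_t output = {
            .stream        = preview_stream,
            .buffer        = preview_buffer,  /* gralloc buffer to fill */
            .status        = CAMERA3_BUFFER_STATUS_OK,
            .acquire_fence = -1,              /* no fence to wait on */
            .release_fence = -1,
        };

        camera3_capture_request_t request = {
            .frame_number       = frame_number, /* monotonically increasing */
            .settings           = settings,     /* capture/processing controls */
            .input_buffer       = NULL,         /* no reprocessing input */
            .num_output_buffers = 1,
            .output_buffers     = &output,
        };

        /* Non-blocking: the call returns once the HAL has accepted the
         * request; results come back later, in submission order. */
        dev->ops->process_capture_request(dev, &request);
    }
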
Figure 1. Camera model

The HAL and camera subsystem

The camera subsystem includes the implementations for components in the camera pipeline, such as the 3A algorithm and processing controls. The camera HAL provides interfaces for you to implement your versions of these components. To maintain cross-platform compatibility between multiple device manufacturers and Image Signal Processor (ISP, or camera sensor) vendors, the camera pipeline model is virtual and does not directly correspond to any real ISP. However, it is similar enough to real processing pipelines that you can map it to your hardware efficiently. In addition, it is abstract enough to allow for multiple different algorithms and orders of operation without compromising quality, efficiency, or cross-device compatibility.
The camera pipeline also supports triggers that the app framework can initiate to turn on features such as auto-focus, and it sends notifications back to the app framework, notifying apps of events such as an auto-focus lock or errors.
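
To make the trigger and notification path concrete, here is a hedged sketch of an auto-focus trigger being set in a request's metadata and the resulting AF state being read from the result metadata. The metadata functions and tags come from system/media/camera (system/camera_metadata.h); the surrounding flow is simplified.

    #include <stdbool.h>
    #include <system/camera_metadata.h>

    /* Sketch: fire an auto-focus trigger in a request's settings. */
    static void set_af_trigger(camera_metadata_t *settings)
    {
        uint8_t trigger = ANDROID_CONTROL_AF_TRIGGER_START;
        camera_metadata_entry_t entry;

        /* Update the tag if present, otherwise add it. */
        if (find_camera_metadata_entry(settings, ANDROID_CONTROL_AF_TRIGGER,
                                       &entry) == 0) {
            update_camera_metadata_entry(settings, entry.index, &trigger, 1,
                                         NULL);
        } else {
            add_camera_metadata_entry(settings, ANDROID_CONTROL_AF_TRIGGER,
                                      &trigger, 1);
        }
    }

    /* Sketch: when a capture result arrives, the AF state tells the
     * app whether focus has locked. */
    static bool af_is_locked(const camera_metadata_t *result)
    {
        camera_metadata_ro_entry_t entry;
        if (find_camera_metadata_ro_entry(result, ANDROID_CONTROL_AF_STATE,
                                          &entry) != 0 || entry.count == 0) {
            return false;
        }
        return entry.data.u8[0] == ANDROID_CONTROL_AF_STATE_FOCUSED_LOCKED;
    }
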
Figure 2. Camera pipeline

Note: Some image processing blocks shown in the diagram above are not well-defined in the initial release.
The camera pipeline makes the following assumptions:

  - RAW Bayer output undergoes no processing inside the ISP.
  - Statistics are generated based off the raw sensor data.
  - The various processing blocks that convert raw sensor data to YUV are in an arbitrary order.
  - While multiple scale and crop units are shown, all scaler units share the output region controls (android.scaler.cropRegion). Each unit may have a different output resolution and pixel format.

Summary of API use
This is a brief summary of the steps for using the Android camera API. See the Startup and expected operation sequence section for a detailed breakdown of these steps, including API calls.

  1. Listen for and enumerate camera devices.
  2. Open device and connect listeners.
  3. Configure outputs for the target use case (such as still capture or recording).
  4. Create request(s) for target use case.
  5. Capture/repeat requests and bursts.
  6. Receive result metadata and image data.
  7. When switching use cases, return to step 3.

HAL operation summary

Figure 3. Camera HAL overview

Startup and expected operation sequence

This section contains a detailed explanation of the steps expected when using the camera API. See platform/hardware/libhardware/include/hardware/camera3.h for definitions of these structures and methods. A condensed code sketch of the full sequence follows the numbered list.

  1. Framework calls camera_module_t->common.open(), which returns a hardware_device_t structure.
  2. Framework inspects the hardware_device_t->version field and instantiates the appropriate handler for that version of the camera hardware device. If the version is CAMERA_DEVICE_API_VERSION_3_0, the device is cast to a camera3_device_t.
  3. Framework calls camera3_device_t->ops->initialize() with the framework callback function pointers. This is called only once, after open() and before any other functions in the ops structure are called.
  4. The framework calls camera3_device_t->ops->configure_streams() with a list of input/output streams to the HAL device.
  5. The framework allocates gralloc buffers and calls camera3_device_t->ops->register_stream_buffers() for at least one of the output streams listed in configure_streams. The same stream is registered only once.
  6. The framework requests default settings for some number of use cases with calls to camera3_device_t->ops->construct_default_request_settings(). This may occur any time after step 3.
  7. The framework constructs and sends the first capture request to the HAL with settings based on one of the sets of default settings, and with at least one output stream that has been registered earlier by the framework. This is sent to the HAL with camera3_device_t->ops->process_capture_request(). The HAL must block the return of this call until it is ready for the next request to be sent.
  8. The framework continues to submit requests, and may call register_stream_buffers() for not-yet-registered streams and construct_default_request_settings() to get default settings buffers for other use cases.
  9. When the capture of a request begins (sensor starts exposing for the capture), the HAL calls camera3_callback_ops_t->notify() with the SHUTTER event, including the frame number and the timestamp for start of exposure. This notify call must be made before the first call to process_capture_result() for that frame number.
  10. After some pipeline delay, the HAL begins to return completed captures to the framework with camera3_callback_ops_t->process_capture_result(). These are returned in the same order as the requests were submitted. Multiple requests can be in flight at once, depending on the pipeline depth of the camera HAL device.
  11. After some time, the framework may stop submitting new requests, wait for the existing captures to complete (all buffers filled, all results returned), and then call configure_streams() again. This resets the camera hardware and pipeline for a new set of input/output streams. Some streams may be reused from the previous configuration; if these streams' buffers had already been registered with the HAL, they will not be registered again. The framework then continues from step 7, if at least one registered output stream remains. (Otherwise, step 5 is required first.)
  12. Alternatively, the framework may call camera3_device_t->common->close() to end the camera session. This may be called at any time when no other calls from the framework are active, although the call may block until all in-flight captures have completed (all results returned, all buffers filled). After the close call returns, no more calls to the camera3_callback_ops_t functions are allowed from the HAL. Once the close() call is underway, the framework may not call any other HAL device functions.
  13. In case of an error or other asynchronous event, the HAL must call camera3_callback_ops_t->notify() with the appropriate error/event message. After returning from a fatal device-wide error notification, the HAL should act as if close() had been called on it. However, the HAL must either cancel or complete all outstanding captures before calling notify(), so that once notify() is called with a fatal error, the framework will not receive further callbacks from the device. Methods besides close() should return -ENODEV or NULL after the notify() method returns from a fatal error message.
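
To make the sequence concrete, the following is a condensed sketch of the framework side of steps 1 through 12. It is illustrative only: error handling, buffer allocation, and threading are omitted, the helper and callback names are ours, and camera3.h remains the authoritative reference for every type and signature used here.

    #include <hardware/camera3.h>
    #include <hardware/camera_common.h>

    /* Illustrative framework callbacks (names are ours, not camera3.h's). */
    static void on_capture_result(const struct camera3_callback_ops *ops,
                                  const camera3_capture_result_t *result)
    {
        /* Step 10: results arrive in the same order requests were sent. */
    }

    static void on_notify(const struct camera3_callback_ops *ops,
                          const camera3_notify_msg_t *msg)
    {
        if (msg->type == CAMERA3_MSG_SHUTTER) {
            /* Step 9: exposure began for msg->message.shutter.frame_number
             * at msg->message.shutter.timestamp. */
        } else if (msg->type == CAMERA3_MSG_ERROR) {
            /* Step 13: handle the error/event message. */
        }
    }

    static camera3_callback_ops_t callbacks = {
        .process_capture_result = on_capture_result,
        .notify                 = on_notify,
    };

    static void run_session(camera_module_t *module, const char *camera_id,
                            camera3_stream_t *preview_stream,
                            camera3_stream_buffer_set_t *preview_buffers)
    {
        /* Steps 1-2: open the device and check its version. */
        hw_device_t *device = NULL;
        module->common.methods->open(&module->common, camera_id, &device);
        if (device->version != CAMERA_DEVICE_API_VERSION_3_0)
            return; /* a real framework picks a handler per version */
        camera3_device_t *dev = (camera3_device_t *)device;

        /* Step 3: one-time initialization with the callback pointers. */
        dev->ops->initialize(dev, &callbacks);

        /* Step 4: configure the input/output streams. */
        camera3_stream_t *streams[] = { preview_stream };
        camera3_stream_configuration_t config = {
            .num_streams = 1,
            .streams     = streams,
        };
        dev->ops->configure_streams(dev, &config);

        /* Step 5: register gralloc buffers for the output stream. */
        dev->ops->register_stream_buffers(dev, preview_buffers);

        /* Step 6: fetch default settings for the target use case. */
        const camera_metadata_t *settings =
            dev->ops->construct_default_request_settings(
                dev, CAMERA3_TEMPLATE_PREVIEW);

        /* Steps 7-8: submit requests; each call may block until the HAL
         * is ready for the next one. */
        camera3_stream_buffer_t output = {
            .stream        = preview_stream,
            .buffer        = preview_buffers->buffers[0],
            .status        = CAMERA3_BUFFER_STATUS_OK,
            .acquire_fence = -1,
            .release_fence = -1,
        };
        camera3_capture_request_t request = {
            .frame_number       = 0,
            .settings           = settings,
            .input_buffer       = NULL,
            .num_output_buffers = 1,
            .output_buffers     = &output,
        };
        dev->ops->process_capture_request(dev, &request);

        /* Step 12: close once all in-flight captures have completed. */
        dev->common.close(&dev->common);
    }
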
Figure 4. Camera operational flow

Operational modes

The camera 3 HAL device can implement one of two possible operational modes: limited and full. Full support is expected from new higher-end devices. Limited mode has hardware requirements roughly in line with those for a camera HAL device v1 implementation, and is expected from older or inexpensive devices. Full is a strict superset of limited, and they share the same essential operational flow, as documented above.

The HAL must indicate its level of support with the android.info.supportedHardwareLevel static metadata entry, with 0 indicating limited mode, and 1 indicating full mode support.
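
A framework-side check of this entry might look like the following sketch, using the metadata helpers from system/media/camera; the flow around the call is simplified and error handling is minimal.

    #include <stdbool.h>
    #include <hardware/camera_common.h>
    #include <system/camera_metadata.h>

    /* Sketch: read android.info.supportedHardwareLevel from a device's
     * static metadata. Returns true for full mode, false for limited
     * mode (or when the entry is missing). */
    static bool device_is_full_mode(camera_module_t *module, int camera_id)
    {
        struct camera_info info;
        if (module->get_camera_info(camera_id, &info) != 0)
            return false;

        camera_metadata_ro_entry_t entry;
        if (find_camera_metadata_ro_entry(info.static_camera_characteristics,
                                          ANDROID_INFO_SUPPORTED_HARDWARE_LEVEL,
                                          &entry) != 0 || entry.count == 0)
            return false;

        /* 0 == ANDROID_INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED,
         * 1 == ANDROID_INFO_SUPPORTED_HARDWARE_LEVEL_FULL */
        return entry.data.u8[0] == ANDROID_INFO_SUPPORTED_HARDWARE_LEVEL_FULL;
    }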

Roughly speaking, limited-mode devices do not allow for application control of capture settings (3A control only), high-rate capture of high-resolution images, raw sensor readout, or support for YUV output streams above maximum recording resolution (JPEG only for large images).
Here are the details of limited-mode behavior:

Interaction between the application capture request, 3A control, and the processing pipeline

Depending on the settings in the 3A control block, the camera pipeline ignores some of the parameters in the application's capture request and instead uses the values provided by the 3A control routines. For example, when auto-exposure is active, the platform 3A algorithm controls the sensor's exposure time, frame duration, and sensitivity, and any app-specified values are ignored. The values the 3A routines choose for the frame must be reported in the output metadata. The following table describes the different modes of the 3A control block and the properties controlled by each mode; a sketch of the override logic appears after the table. See the platform/system/media/camera/docs/docs.html file for definitions of these properties.

Parameter                          | State                 | Properties controlled
-----------------------------------|-----------------------|----------------------
android.control.aeMode             | OFF                   | None
                                   | ON                    | android.sensor.exposureTime, android.sensor.frameDuration, android.sensor.sensitivity, android.lens.aperture (if supported), android.lens.filterDensity (if supported)
                                   | ON_AUTO_FLASH         | Everything in ON, plus android.flash.firingPower, android.flash.firingTime, and android.flash.mode
                                   | ON_ALWAYS_FLASH       | Same as ON_AUTO_FLASH
                                   | ON_AUTO_FLASH_RED_EYE | Same as ON_AUTO_FLASH
android.control.awbMode            | OFF                   | None
                                   | WHITE_BALANCE_*       | android.colorCorrection.transform, plus platform-specific adjustments if android.colorCorrection.mode is FAST or HIGH_QUALITY
android.control.afMode             | OFF                   | None
                                   | FOCUS_MODE_*          | android.lens.focusDistance
android.control.videoStabilization | OFF                   | None
                                   | ON                    | Can adjust android.scaler.cropRegion to implement video stabilization
android.control.mode               | OFF                   | AE, AWB, and AF are disabled
                                   | AUTO                  | Individual AE, AWB, and AF settings are used
                                   | SCENE_MODE_*          | Can override all parameters listed above; individual 3A controls are disabled
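
The sketch below illustrates the override behavior for android.control.aeMode from the HAL side. It is a simplified illustration: run_ae_algorithm and ae_output_t are hypothetical names for the platform's AE routine and its output, while the metadata functions and tags are the real ones from system/camera_metadata.h.

    #include <stdint.h>
    #include <system/camera_metadata.h>

    /* Hypothetical 3A output for one frame. */
    typedef struct {
        int64_t exposure_time_ns;
        int64_t frame_duration_ns;
        int32_t sensitivity_iso;
    } ae_output_t;

    /* Assumed to exist: the platform's AE routine. */
    extern ae_output_t run_ae_algorithm(void);

    /* Sketch: when AE is active, ignore the app-supplied sensor settings,
     * apply the 3A-chosen values, and report them in the result metadata. */
    static void apply_ae(const camera_metadata_t *request,
                         camera_metadata_t *result)
    {
        camera_metadata_ro_entry_t mode;
        if (find_camera_metadata_ro_entry(request, ANDROID_CONTROL_AE_MODE,
                                          &mode) != 0 ||
            mode.count == 0 ||
            mode.data.u8[0] == ANDROID_CONTROL_AE_MODE_OFF) {
            /* AE off: honor the request's android.sensor.* values as-is. */
            return;
        }

        ae_output_t ae = run_ae_algorithm();

        /* The values actually used must appear in the output metadata. */
        add_camera_metadata_entry(result, ANDROID_SENSOR_EXPOSURE_TIME,
                                  &ae.exposure_time_ns, 1);
        add_camera_metadata_entry(result, ANDROID_SENSOR_FRAME_DURATION,
                                  &ae.frame_duration_ns, 1);
        add_camera_metadata_entry(result, ANDROID_SENSOR_SENSITIVITY,
                                  &ae.sensitivity_iso, 1);
    }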

The controls exposed for the 3A algorithm mostly map 1:1 to the old API's parameters (such as exposure compensation, scene mode, or white balance mode).
The controls in the Image Processing block in Figure 2 all operate on a similar principle, and generally each block has three modes:

  - OFF: The processing block is disabled. The demosaic, color correction, and tone curve adjustment blocks cannot be disabled.
  - FAST: The processing block may not reduce the output frame rate compared to OFF mode, but should otherwise produce the best-quality output it can given that restriction.
  - HIGH_QUALITY: The processing block may reduce the output frame rate as needed, and produces the best-quality output it can.
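
As a hedged sketch of what this three-mode pattern means for a single block, consider a noise-reduction stage; the per-block hooks here are hypothetical, while the android.noiseReduction.mode tag and enum values are real.

    #include <system/camera_metadata.h>

    /* Hypothetical per-block hooks; a real HAL wires these to ISP hardware. */
    extern void noise_reduction_bypass(void);
    extern void noise_reduction_run(int allow_frame_rate_drop);

    /* Sketch: the common three-mode pattern, shown for the noise
     * reduction block (android.noiseReduction.mode). */
    static void configure_noise_reduction(const camera_metadata_t *request)
    {
        camera_metadata_ro_entry_t entry;
        if (find_camera_metadata_ro_entry(request,
                                          ANDROID_NOISE_REDUCTION_MODE,
                                          &entry) != 0 || entry.count == 0)
            return;

        switch (entry.data.u8[0]) {
        case ANDROID_NOISE_REDUCTION_MODE_OFF:
            noise_reduction_bypass();   /* block disabled */
            break;
        case ANDROID_NOISE_REDUCTION_MODE_FAST:
            noise_reduction_run(0);     /* must not slow the frame rate */
            break;
        case ANDROID_NOISE_REDUCTION_MODE_HIGH_QUALITY:
            noise_reduction_run(1);     /* may slow the frame rate */
            break;
        }
    }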

The maximum frame rate that can be supported by a camera subsystem is a function of many factors:

  - Requested resolutions of output image streams
  - Availability of binning/skipping modes on the imager
  - The bandwidth of the imager interface
  - The bandwidth of the various ISP processing blocks
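
To see how these factors combine, here is a deliberately simplified sketch (our own toy model, not an interface defined by the HAL): if every stage must touch each output frame, the achievable frame duration is bounded by the slowest stage, so the maximum frame rate is the reciprocal of the largest per-stage duration.

    #include <stdint.h>

    /* Simplified illustration (not a HAL interface): per-frame cost of
     * each pipeline stage for a given stream configuration, in ns. */
    typedef struct {
        int64_t sensor_readout_ns;   /* depends on resolution and binning/skipping */
        int64_t imager_interface_ns; /* link bandwidth vs. bytes per frame */
        int64_t isp_processing_ns;   /* slowest ISP block for this configuration */
    } stage_costs_t;

    static int64_t max_i64(int64_t a, int64_t b) { return a > b ? a : b; }

    /* Max frame rate = 1 / (duration of the slowest stage). */
    static double max_frame_rate_fps(const stage_costs_t *c)
    {
        int64_t frame_duration_ns =
            max_i64(c->sensor_readout_ns,
                    max_i64(c->imager_interface_ns, c->isp_processing_ns));
        return 1e9 / (double)frame_duration_ns;
    }

Under this toy model, a 33 ms sensor readout caps the pipeline at roughly 30 fps no matter how fast the ISP blocks run.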

Since these factors can vary greatly between different ISPs and sensors, the camera HAL interface tries to abstract the bandwidth restrictions into as simple a model as possible. The model presented has the following characteristics: