page.title=Audio Terminology
@jd:body

<!--
    Copyright 2015 The Android Open Source Project

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
-->
<div id="qv-wrapper">
  <div id="qv">
    <h2>In this document</h2>
    <ol id="auto-toc">
    </ol>
  </div>
</div>

<p>
This glossary of audio-related terminology includes widely used generic terms
and Android-specific terms.
</p>

<h2 id="genericTerm">Generic Terms</h2>

<p>
Generic audio-related terms have conventional meanings.
</p>

<h3 id="digitalAudioTerms">Digital Audio</h3>
<p>
Digital audio terms relate to handling sound using audio signals encoded
in digital form. For details, refer to
<a href="http://en.wikipedia.org/wiki/Digital_audio">Digital Audio</a>.
</p>

<dl>

<dt>acoustics</dt>
<dd>
Study of the mechanical properties of sound, such as how the physical
placement of transducers (speakers, microphones, etc.) on a device affects
perceived audio quality.
</dd>

<dt>attenuation</dt>
<dd>
Multiplicative factor less than or equal to 1.0, applied to an audio signal
to decrease the signal level. Compare to <em>gain</em>.
</dd>

<dt>audiophile</dt>
<dd>
Person concerned with a superior music reproduction experience, especially
willing to make substantial tradeoffs (expense, component size, room design,
etc.) for sound quality. For details, refer to
<a href="http://en.wikipedia.org/wiki/Audiophile">audiophile</a>.
</dd>

<dt>bits per sample or bit depth</dt>
<dd>
Number of bits of information per sample.
</dd>

<dt>channel</dt>
<dd>
Single stream of audio information, usually corresponding to one location of
recording or playback.
</dd>

<dt>downmixing</dt>
<dd>
Decrease the number of channels, such as from stereo to mono or from 5.1 to
stereo. Accomplished by dropping channels, mixing channels, or more advanced
signal processing. Simple mixing without attenuation or limiting has the
potential for overflow and clipping. Compare to <em>upmixing</em>.
</dd>

<dt>DSD</dt>
<dd>
Direct Stream Digital. Proprietary audio encoding based on
<a href="http://en.wikipedia.org/wiki/Pulse-density_modulation">pulse-density
modulation</a>. While Pulse Code Modulation (PCM) encodes a waveform as a
sequence of individual audio samples of multiple bits, DSD encodes a waveform as
a sequence of bits at a very high sample rate (without the concept of samples).
Both PCM and DSD represent multiple channels by independent sequences. DSD is
better suited to content distribution than as an internal representation for
processing, as it can be difficult to apply traditional digital signal
processing (DSP) algorithms to DSD. DSD is used in
<a href="http://en.wikipedia.org/wiki/Super_Audio_CD">Super Audio CD (SACD)</a>
and in DSD over PCM (DoP) for USB. For details, refer to
<a href="http://en.wikipedia.org/wiki/Direct_Stream_Digital">Direct Stream
Digital</a>.
</dd>

<dt>duck</dt>
<dd>
Temporarily reduce the volume of a stream when another stream becomes active.
For example, if music is playing when a notification arrives, the music ducks
while the notification plays. Compare to <em>mute</em>.
</dd>

<dt>FIFO</dt>
<dd>
First In, First Out. Hardware module or software data structure that implements
<a href="http://en.wikipedia.org/wiki/FIFO">First In, First Out</a>
queueing of data.
In an audio context, the data stored in the queue are
typically audio frames. A FIFO can be implemented by a
<a href="http://en.wikipedia.org/wiki/Circular_buffer">circular buffer</a>.
</dd>

<dt>frame</dt>
<dd>
Set of samples, one per channel, at a point in time.
</dd>

<dt>frames per buffer</dt>
<dd>
Number of frames handed from one module to the next at one time. The audio HAL
interface uses the concept of frames per buffer.
</dd>

<dt>gain</dt>
<dd>
Multiplicative factor greater than or equal to 1.0, applied to an audio signal
to increase the signal level. Compare to <em>attenuation</em>.
</dd>

<dt>HD audio</dt>
<dd>
High-Definition audio. Synonym for high-resolution audio (but different from
Intel High Definition Audio).
</dd>

<dt>Hz</dt>
<dd>
Units for sample rate or frame rate.
</dd>

<dt>high-resolution audio</dt>
<dd>
Representation with greater bit depth and sample rate than CDs (stereo 16-bit
PCM at 44.1 kHz) and without lossy data compression. Equivalent to HD audio.
For details, refer to
<a href="http://en.wikipedia.org/wiki/High-resolution_audio">high-resolution
audio</a>.
</dd>

<dt>latency</dt>
<dd>
Time delay as a signal passes through a system.
</dd>

<dt>lossless</dt>
<dd>
A <a href="http://en.wikipedia.org/wiki/Lossless_compression">lossless data
compression algorithm</a> preserves bit accuracy across encoding and
decoding; the result of decoding previously encoded data is equivalent
to the original data. Examples of lossless audio content distribution formats
include <a href="http://en.wikipedia.org/wiki/Compact_disc">CDs</a>, PCM within
<a href="http://en.wikipedia.org/wiki/WAV">WAV</a>, and
<a href="http://en.wikipedia.org/wiki/FLAC">FLAC</a>.
The authoring process may reduce the bit depth or sample rate from that of the
<a href="http://en.wikipedia.org/wiki/Audio_mastering">masters</a>; distribution
formats that preserve the resolution and bit accuracy of masters are the subject
of high-resolution audio.
</dd>

<dt>lossy</dt>
<dd>
A <a href="http://en.wikipedia.org/wiki/Lossy_compression">lossy data
compression algorithm</a> attempts to preserve the most important features
of media across encoding and decoding; the result of decoding previously
encoded data is perceptually similar to the original data but not identical.
Examples of lossy audio compression algorithms include MP3 and AAC. As analog
values are from a continuous domain and digital values are discrete, ADC and DAC
are lossy conversions with respect to amplitude. See also <em>transparency</em>.
</dd>

<dt>mono</dt>
<dd>
One channel.
</dd>

<dt>multichannel</dt>
<dd>
See <em>surround sound</em>. In strict terms, <em>stereo</em> is more than one
channel and could be considered multichannel; however, such usage is confusing
and thus avoided.
</dd>

<dt>mute</dt>
<dd>
Temporarily force volume to be zero, independent of the usual volume controls.
</dd>

<dt>overrun</dt>
<dd>
Audible <a href="http://en.wikipedia.org/wiki/Glitch">glitch</a> caused by
failure to accept supplied data in sufficient time. For details, refer to
<a href="http://en.wikipedia.org/wiki/Buffer_underrun">buffer underrun</a>.
Compare to <em>underrun</em>.
</dd>

<dt>panning</dt>
<dd>
Direct a signal to a desired position within a stereo or multichannel field.
</dd>

<dt>PCM</dt>
<dd>
Pulse Code Modulation. Most common low-level encoding of digital audio.
The audio signal is sampled at a regular rate, called the sample rate, then
quantized to discrete values within a particular range depending on the bit
depth. For example, for 16-bit PCM the sample values are integers between
-32768 and +32767.
</dd>

<dt>ramp</dt>
<dd>
Gradually increase or decrease the level of a particular audio parameter, such
as the volume or the strength of an effect. A volume ramp is commonly applied
when pausing and resuming music to avoid a hard audible transition.
</dd>

<dt>sample</dt>
<dd>
Number representing the audio value for a single channel at a point in time.
</dd>

<dt>sample rate or frame rate</dt>
<dd>
Number of frames per second. While <em>frame rate</em> is more accurate,
<em>sample rate</em> is conventionally used to mean frame rate.
</dd>

<dt>sonification</dt>
<dd>
Use of sound to express feedback or information, such as touch sounds and
keyboard sounds.
</dd>

<dt>stereo</dt>
<dd>
Two channels.
</dd>

<dt>stereo widening</dt>
<dd>
Effect applied to a stereo signal to make another stereo signal that sounds
fuller and richer. The effect can also be applied to a mono signal, where it is
a type of upmixing.
</dd>

<dt>surround sound</dt>
<dd>
Techniques for increasing the ability of a listener to perceive sound position
beyond stereo left and right.
</dd>

<dt>transparency</dt>
<dd>
Ideal result of lossy data compression. Lossy data conversion is transparent if
it is perceptually indistinguishable from the original by a human subject. For
details, refer to
<a href="http://en.wikipedia.org/wiki/Transparency_%28data_compression%29">Transparency</a>.
</dd>

<dt>underrun</dt>
<dd>
Audible <a href="http://en.wikipedia.org/wiki/Glitch">glitch</a> caused by
failure to supply needed data in sufficient time.
For details, refer to
<a href="http://en.wikipedia.org/wiki/Buffer_underrun">buffer underrun</a>.
Compare to <em>overrun</em>.
</dd>

<dt>upmixing</dt>
<dd>
Increase the number of channels, such as from mono to stereo or from stereo to
surround sound. Accomplished by duplication, panning, or more advanced signal
processing. Compare to <em>downmixing</em>.
</dd>

<dt>virtualizer</dt>
<dd>
Effect that attempts to spatialize audio channels, such as trying to simulate
more speakers or give the illusion that sound sources have position.
</dd>

<dt>volume</dt>
<dd>
Loudness, the subjective strength of an audio signal.
</dd>

</dl>

<h3 id="interDeviceTerms">Inter-device interconnect</h3>

<p>
Inter-device interconnection technologies connect audio and video components
between devices and are readily visible at the external connectors. The HAL
implementer and end user should be aware of these terms.
</p>

<dl>

<dt>Bluetooth</dt>
<dd>
Short-range wireless technology. For details on the audio-related
<a href="http://en.wikipedia.org/wiki/Bluetooth_profile">Bluetooth profiles</a>
and
<a href="http://en.wikipedia.org/wiki/Bluetooth_protocols">Bluetooth protocols</a>,
refer to
<a href="http://en.wikipedia.org/wiki/Bluetooth_profile#Advanced_Audio_Distribution_Profile_.28A2DP.29">A2DP</a>
for music,
<a href="http://en.wikipedia.org/wiki/Bluetooth_protocols#Synchronous_connection-oriented_.28SCO.29_link">SCO</a>
for telephony, and
<a href="http://en.wikipedia.org/wiki/List_of_Bluetooth_profiles#Audio.2FVideo_Remote_Control_Profile_.28AVRCP.29">Audio/Video Remote Control Profile (AVRCP)</a>.
</dd>

<dt>DisplayPort</dt>
<dd>
Digital display interface by the Video Electronics Standards Association (VESA).
</dd>

<dt>HDMI</dt>
<dd>
High-Definition Multimedia Interface. Interface for transferring audio and
video data.
For mobile devices, a micro-HDMI (type D) or MHL connector is used.
</dd>

<dt>Intel HDA</dt>
<dd>
Intel High Definition Audio (do not confuse with generic <em>high-definition
audio</em> or <em>high-resolution audio</em>). Specification for a front-panel
connector. For details, refer to
<a href="http://en.wikipedia.org/wiki/Intel_High_Definition_Audio">Intel High
Definition Audio</a>.
</dd>

<dt>MHL</dt>
<dd>
Mobile High-Definition Link. Mobile audio/video interface, often over a
micro-USB connector.
</dd>

<dt>phone connector</dt>
<dd>
Mini or sub-mini component that connects a device to wired headphones, a
headset, or a line-level amplifier.
</dd>

<dt>SlimPort</dt>
<dd>
Adapter from micro-USB to HDMI.
</dd>

<dt>S/PDIF</dt>
<dd>
Sony/Philips Digital Interface Format. Interconnect for uncompressed PCM. For
details, refer to <a href="http://en.wikipedia.org/wiki/S/PDIF">S/PDIF</a>.
</dd>

<dt>Thunderbolt</dt>
<dd>
Multimedia interface that competes with USB and HDMI for connecting to high-end
peripherals. For details, refer to
<a href="http://en.wikipedia.org/wiki/Thunderbolt_%28interface%29">Thunderbolt</a>.
</dd>

<dt>USB</dt>
<dd>
Universal Serial Bus. For details, refer to
<a href="http://en.wikipedia.org/wiki/USB">USB</a>.
</dd>

</dl>

<h3 id="intraDeviceTerms">Intra-device interconnect</h3>

<p>
Intra-device interconnection technologies connect internal audio components
within a given device and are not visible without disassembling the device. The
HAL implementer may need to be aware of these, but not the end user.
For details
on intra-device interconnections, refer to the following articles:
</p>
<ul>
<li><a href="http://en.wikipedia.org/wiki/General-purpose_input/output">GPIO</a></li>
<li><a href="http://en.wikipedia.org/wiki/I%C2%B2C">I²C</a>, for control channel</li>
<li><a href="http://en.wikipedia.org/wiki/I%C2%B2S">I²S</a>, for audio data</li>
<li><a href="http://en.wikipedia.org/wiki/McASP">McASP</a></li>
<li><a href="http://en.wikipedia.org/wiki/SLIMbus">SLIMbus</a></li>
<li><a href="http://en.wikipedia.org/wiki/Serial_Peripheral_Interface_Bus">SPI</a></li>
</ul>

<h3 id="signalTerms">Audio Signal Path</h3>

<p>
Audio signal path terms relate to the signal path that audio data follows from
an application to the transducer, or vice versa.
</p>

<dl>

<dt>ADC</dt>
<dd>
Analog-to-digital converter. Module that converts an analog signal (continuous
in time and amplitude) to a digital signal (discrete in time and amplitude).
Conceptually, an ADC consists of a periodic sample-and-hold followed by a
quantizer, although it does not have to be implemented that way. An ADC is
usually preceded by a low-pass filter to remove any high-frequency components
that are not representable using the desired sample rate. For details, refer to
<a href="http://en.wikipedia.org/wiki/Analog-to-digital_converter">Analog-to-digital
converter</a>.
</dd>

<dt>AP</dt>
<dd>
Application processor. Main general-purpose computer on a mobile device.
</dd>

<dt>codec</dt>
<dd>
Coder-decoder. Module that encodes and/or decodes an audio signal from one
representation to another (typically analog to PCM or PCM to analog). In strict
terms, <em>codec</em> is reserved for modules that both encode and decode but
can be used loosely to refer to only one of these. For details, refer to
<a href="http://en.wikipedia.org/wiki/Audio_codec">Audio codec</a>.
</dd>

<dt>DAC</dt>
<dd>
Digital-to-analog converter. Module that converts a digital signal (discrete in
time and amplitude) to an analog signal (continuous in time and amplitude).
Often followed by a low-pass filter to remove high-frequency components
introduced by digital quantization. For details, refer to
<a href="http://en.wikipedia.org/wiki/Digital-to-analog_converter">Digital-to-analog
converter</a>.
</dd>

<dt>DSP</dt>
<dd>
Digital Signal Processor. Optional component typically located after the
application processor (for output) or before the application processor (for
input). Its primary purpose is to off-load the application processor and provide
signal processing features at a lower power cost.
</dd>

<dt>PDM</dt>
<dd>
Pulse-density modulation. Form of modulation used to represent an analog signal
by a digital signal, where the relative density of 1s versus 0s indicates the
signal level. Commonly used by digital-to-analog converters. For details, refer
to <a href="http://en.wikipedia.org/wiki/Pulse-density_modulation">Pulse-density
modulation</a>.
</dd>

<dt>PWM</dt>
<dd>
Pulse-width modulation. Form of modulation used to represent an analog signal by
a digital signal, where the relative width of a digital pulse indicates the
signal level. Commonly used by analog-to-digital converters. For details, refer
to <a href="http://en.wikipedia.org/wiki/Pulse-width_modulation">Pulse-width
modulation</a>.
</dd>

<dt>transducer</dt>
<dd>
Converts variations in physical real-world quantities to electrical signals. In
audio, the physical quantity is sound pressure, and the transducers are the
loudspeaker and microphone. For details, refer to
<a href="http://en.wikipedia.org/wiki/Transducer">Transducer</a>.
</dd>

</dl>

<h3 id="srcTerms">Sample Rate Conversion</h3>
<p>
Sample rate conversion terms relate to the process of converting from one
sample rate to another.
</p>

<dl>

<dt>downsample</dt>
<dd>Resample, where sink sample rate &lt; source sample rate.</dd>

<dt>Nyquist frequency</dt>
<dd>
Maximum frequency component that can be represented by a discretized signal,
equal to 1/2 of the sample rate. For example, the human hearing range extends
to approximately 20 kHz, so a digital audio signal must have a sample rate of
at least 40 kHz to represent that range. In practice, sample rates of 44.1 kHz
and 48 kHz are commonly used, with Nyquist frequencies of 22.05 kHz and 24 kHz,
respectively. For details, refer to
<a href="http://en.wikipedia.org/wiki/Nyquist_frequency">Nyquist frequency</a>
and
<a href="http://en.wikipedia.org/wiki/Hearing_range">Hearing range</a>.
</dd>

<dt>resampler</dt>
<dd>Synonym for sample rate converter.</dd>

<dt>resampling</dt>
<dd>Process of converting sample rate.</dd>

<dt>sample rate converter</dt>
<dd>Module that resamples.</dd>

<dt>sink</dt>
<dd>Output of a resampler.</dd>

<dt>source</dt>
<dd>Input to a resampler.</dd>

<dt>upsample</dt>
<dd>Resample, where sink sample rate &gt; source sample rate.</dd>

</dl>

<h2 id="androidSpecificTerms">Android-Specific Terms</h2>

<p>
Android-specific terms include terms used only in the Android audio framework
and generic terms that have special meaning within Android.
</p>

<dl>

<dt>ALSA</dt>
<dd>
Advanced Linux Sound Architecture. An audio framework for Linux that has also
influenced other systems. For a generic definition, refer to
<a href="http://en.wikipedia.org/wiki/Advanced_Linux_Sound_Architecture">ALSA</a>.
In Android, ALSA refers to the kernel audio framework and drivers and not to the
user-mode API.
See also <em>tinyalsa</em>.
</dd>

<dt>audio device</dt>
<dd>
Audio I/O endpoint backed by a HAL implementation.
</dd>

<dt>AudioEffect</dt>
<dd>
API and implementation framework for output (post-processing) effects and input
(pre-processing) effects. The API is defined at
<a href="http://developer.android.com/reference/android/media/audiofx/AudioEffect.html">android.media.audiofx.AudioEffect</a>.
</dd>

<dt>AudioFlinger</dt>
<dd>
Android sound server implementation. AudioFlinger runs within the mediaserver
process. For a generic definition, refer to
<a href="http://en.wikipedia.org/wiki/Sound_server">Sound server</a>.
</dd>

<dt>audio focus</dt>
<dd>
Set of APIs for managing audio interactions across multiple independent apps.
For details, see
<a href="http://developer.android.com/training/managing-audio/audio-focus.html">Managing Audio Focus</a>
and the focus-related methods and constants of
<a href="http://developer.android.com/reference/android/media/AudioManager.html">android.media.AudioManager</a>.
</dd>

<dt>AudioMixer</dt>
<dd>
Module in AudioFlinger responsible for combining multiple tracks and applying
attenuation (volume) and effects. For a generic definition, refer to
<a href="http://en.wikipedia.org/wiki/Audio_mixing_(recorded_music)">Audio mixing (recorded music)</a>
(which discusses a mixer as a hardware device or software application, rather
than a software module within a system).
</dd>

<dt>audio policy</dt>
<dd>
Service responsible for all actions that require a policy decision to be made
first, such as opening a new I/O stream, re-routing after a change, and stream
volume management.
</dd>

<dt>AudioRecord</dt>
<dd>
Primary low-level client API for receiving data from an audio input device such
as a microphone. The data is usually in PCM format.
The API is defined at
<a href="http://developer.android.com/reference/android/media/AudioRecord.html">android.media.AudioRecord</a>.
</dd>

<dt>AudioResampler</dt>
<dd>
Module in AudioFlinger responsible for
<a href="src.html">sample rate conversion</a>.
</dd>

<dt>audio source</dt>
<dd>
An enumeration of constants that indicates the desired use case for capturing
audio input. For details, see
<a href="http://developer.android.com/reference/android/media/MediaRecorder.AudioSource.html">audio source</a>.
In API level 21 and higher,
<a href="attributes.html">audio attributes</a> are preferred.
</dd>

<dt>AudioTrack</dt>
<dd>
Primary low-level client API for sending data to an audio output device such as
a speaker. The data is usually in PCM format. The API is defined at
<a href="http://developer.android.com/reference/android/media/AudioTrack.html">android.media.AudioTrack</a>.
</dd>

<dt>audio_utils</dt>
<dd>
Audio utility library for features such as PCM format conversion, WAV file I/O,
and
<a href="avoiding_pi.html#nonBlockingAlgorithms">non-blocking FIFO</a>. The
library is largely independent of the Android platform.
</dd>

<dt>client</dt>
<dd>
Usually an application or app client. However, an AudioFlinger client can be a
thread running within the mediaserver system process, such as when playing media
decoded by a MediaPlayer object.
</dd>

<dt>HAL</dt>
<dd>
Hardware Abstraction Layer. HAL is a generic term in Android; in audio, it is a
layer between AudioFlinger and the kernel device driver with a C API (which
replaces the C++ libaudio).
</dd>

<dt>FastCapture</dt>
<dd>
Thread within AudioFlinger that sends audio data to lower-latency fast tracks
and drives the input device when configured for reduced latency.
</dd>

<dt>FastMixer</dt>
<dd>
Thread within AudioFlinger that receives and mixes audio data from lower-latency
fast tracks and drives the primary output device when configured for reduced
latency.
</dd>

<dt>fast track</dt>
<dd>
AudioTrack or AudioRecord client with lower latency but fewer features on some
devices and routes.
</dd>

<dt>MediaPlayer</dt>
<dd>
Higher-level client API than AudioTrack. Plays encoded content or content that
includes multimedia audio and video tracks.
</dd>

<dt>media.log</dt>
<dd>
AudioFlinger debugging feature, available in custom builds only, for logging
audio events to a circular buffer where they can then be retroactively dumped
when needed.
</dd>

<dt>mediaserver</dt>
<dd>
Android system process that contains media-related services, including
AudioFlinger.
</dd>

<dt>NBAIO</dt>
<dd>
Non-blocking audio input/output. Abstraction for AudioFlinger ports. The term
can be misleading, as some implementations of the NBAIO API support blocking.
The key implementations of NBAIO are for different types of pipes.
</dd>

<dt>normal mixer</dt>
<dd>
Thread within AudioFlinger that services most full-featured AudioTrack clients.
Directly drives an output device or feeds its sub-mix into FastMixer via a pipe.
</dd>

<dt>OpenSL ES</dt>
<dd>
Audio API standard by
<a href="http://www.khronos.org/">The Khronos Group</a>. Android versions since
API level 9 support a native audio API that is based on a subset of
<a href="http://www.khronos.org/opensles/">OpenSL ES 1.0.1</a>.
</dd>

<dt>silent mode</dt>
<dd>
User-settable feature to mute the phone ringer and notifications without
affecting media playback (music, videos, games) or alarms.
</dd>

<dt>SoundPool</dt>
<dd>
Higher-level client API than AudioTrack. Plays sampled audio clips.
Useful for
triggering UI feedback, game sounds, etc. The API is defined at
<a href="http://developer.android.com/reference/android/media/SoundPool.html">android.media.SoundPool</a>.
</dd>

<dt>Stagefright</dt>
<dd>
See <a href="{@docRoot}devices/media.html">Media</a>.
</dd>

<dt>StateQueue</dt>
<dd>
Module within AudioFlinger responsible for synchronizing state among threads.
Whereas NBAIO is used to pass data, StateQueue is used to pass control
information.
</dd>

<dt>strategy</dt>
<dd>
Group of stream types with similar behavior. Used by the audio policy service.
</dd>

<dt>stream type</dt>
<dd>
Enumeration that expresses a use case for audio output. The audio policy
implementation uses the stream type, along with other parameters, to determine
volume and routing decisions. For a list of stream types, see
<a href="http://developer.android.com/reference/android/media/AudioManager.html">android.media.AudioManager</a>.
</dd>

<dt>tee sink</dt>
<dd>
See <a href="debugging.html#teeSink">Audio Debugging</a>.
</dd>

<dt>tinyalsa</dt>
<dd>
Small user-mode API above the ALSA kernel interface, with a BSD license.
Recommended for HAL implementations.
</dd>

<dt>ToneGenerator</dt>
<dd>
Higher-level client API than AudioTrack. Plays dual-tone multi-frequency (DTMF)
signals. For details, refer to
<a href="http://en.wikipedia.org/wiki/Dual-tone_multi-frequency_signaling">Dual-tone
multi-frequency signaling</a> and the API definition at
<a href="http://developer.android.com/reference/android/media/ToneGenerator.html">android.media.ToneGenerator</a>.
</dd>

<dt>track</dt>
<dd>
Audio stream. Controlled by the AudioTrack or AudioRecord API.
</dd>

<dt>volume attenuation curve</dt>
<dd>
Device-specific mapping from a generic volume index to a specific attenuation
factor for a given output.
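As an illustrative sketch only (not the Android implementation), one common
shape for such a mapping is linear in decibels; all names and constants below
are hypothetical:

```python
# Hypothetical volume attenuation curve: map a unitless volume index in
# [0, MAX_INDEX] to a multiplicative attenuation factor, assuming a
# linear-in-dB curve. Real curves are device- and stream-specific.
MAX_INDEX = 15    # assumed maximum volume index
DB_RANGE = 60.0   # assumed span of the curve in decibels

def index_to_attenuation(index):
    """Return an attenuation factor in [0.0, 1.0] for a volume index."""
    if index <= 0:
        return 0.0  # treat index 0 as mute
    db = -DB_RANGE * (MAX_INDEX - index) / MAX_INDEX
    return 10.0 ** (db / 20.0)  # convert decibels to a linear factor

print(index_to_attenuation(MAX_INDEX))  # maximum index: no attenuation (1.0)
print(round(index_to_attenuation(7), 4))
```

Each step up the index scale raises the level by a fixed number of decibels,
which roughly matches how loudness is perceived.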
</dd>

<dt>volume index</dt>
<dd>
Unitless integer that expresses the desired relative volume of a stream. The
volume-related APIs of
<a href="http://developer.android.com/reference/android/media/AudioManager.html">android.media.AudioManager</a>
operate on volume indices rather than absolute attenuation factors.
</dd>

</dl>