


The Audio Mixer is a multiplatform audio renderer that lives in its own module. It enables feature parity across all platforms, provides backward compatibility for most legacy audio engine features, and extends UE4 functionality into new domains. This document describes the structure of the Audio Mixer as a whole, and provides a point of reference for deeper discussions.

Background and Motivation

Audio Rendering

Audio rendering is the process by which sound sources are decoded and mixed together, fed to an audio hardware endpoint (called a digital-to-analog converter, or DAC), and ultimately played on one or more speakers.

Audio renderers vary widely in their architecture and feature set, but for games, where interactivity and real-time performance are key, they must support real-time decoding, dynamic consumption and processing of sound parameters, real-time sample-rate conversion, and a wide variety of other audio-rendering features, such as per-source digital signal processing (DSP) effects, spatialization, submixing, and post-mix DSP effects such as reverb.
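At its core, the mixing stage described above is a sum of decoded source buffers into one output buffer, limited to the range the DAC expects. A minimal sketch follows; the function and parameter names are illustrative, not actual engine code:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Mix several decoded mono source buffers into one output buffer by
// summing samples with a per-source gain, then clamping to [-1, 1]
// before handing the result to the audio hardware endpoint.
std::vector<float> MixSources(const std::vector<std::vector<float>>& Sources,
                              const std::vector<float>& Gains,
                              std::size_t NumFrames)
{
    std::vector<float> Mix(NumFrames, 0.0f);
    for (std::size_t s = 0; s < Sources.size(); ++s)
    {
        const float Gain = (s < Gains.size()) ? Gains[s] : 1.0f;
        const std::size_t Count = std::min(NumFrames, Sources[s].size());
        for (std::size_t i = 0; i < Count; ++i)
        {
            Mix[i] += Gain * Sources[s][i];
        }
    }
    for (float& Sample : Mix)
    {
        // Hard limit so summed sources cannot exceed full scale.
        Sample = std::clamp(Sample, -1.0f, 1.0f);
    }
    return Mix;
}
```

A real renderer runs per-source DSP and spatialization before the sum, and post-mix effects such as reverb after it, but the summing-and-limiting core is the same.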
Platform-Level Features: Audio-Rendering APIs

Typically, each hardware platform provides at least one full-featured, high-level, audio-rendering C++ API. These APIs often provide platform-specific codecs, along with platform-specific encoder and decoder APIs. Many platforms provide hardware decoders to improve runtime performance. In addition to codecs, platform audio APIs provide all the other features that an audio engine might need, including volume control, pitch control (real-time sample-rate conversion), spatialization, and DSP processing.

Game engines write additional functionality on top of these platform-level features. For example, features might hook into a scripting engine (such as Blueprint) or game-specific components and systems (such as audio components, ambient Actors, and audio volumes), or do a lot of work to determine which sounds to actually play (such as Sound Concurrency) and with what parameters (such as Sound Classes, Sound Mixes, and Sound Cues).

The Problems with Platform-Specific Audio Rendering APIs

This paradigm works well when there is a small number of target platforms to support, and when there is a long lead time to ramp up new platforms. However, in the case of UE4, where we support a large number of platforms, juggling the differences between each platform-specific API, hunting platform-specific bugs, and striving for platform feature parity can easily dominate audio engine development time. This comes at the cost of writing new runtime features or development tools.
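The pitch control mentioned among the platform-level features above is typically implemented as real-time sample-rate conversion: reading the source faster or slower than real time. A minimal linear-interpolation sketch of the idea, not any platform's actual API:

```cpp
#include <cstddef>
#include <vector>

// Resample a mono buffer by a pitch ratio using linear interpolation.
// A ratio of 2.0 reads the source twice as fast (one octave up, half
// the length); 0.5 reads it half as fast (one octave down).
std::vector<float> ResampleLinear(const std::vector<float>& Input, float PitchRatio)
{
    std::vector<float> Output;
    if (Input.empty() || PitchRatio <= 0.0f)
    {
        return Output;
    }
    for (float Pos = 0.0f; Pos < static_cast<float>(Input.size() - 1); Pos += PitchRatio)
    {
        const std::size_t Index = static_cast<std::size_t>(Pos);
        const float Frac = Pos - static_cast<float>(Index);
        // Blend the two neighboring samples by the fractional read position.
        Output.push_back(Input[Index] * (1.0f - Frac) + Input[Index + 1] * Frac);
    }
    return Output;
}
```

Production resamplers use higher-quality interpolation and filtering to avoid aliasing artifacts, but the variable-rate read loop is the essential mechanism.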

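A common way to keep these platform differences from dominating development time, and the general shape a multiplatform renderer such as the Audio Mixer can take, is to confine each platform-specific API behind a single interface that the rest of the engine targets. The types below are a hypothetical sketch, not actual UE4 classes:

```cpp
#include <cstdint>

// The renderer talks only to this interface; each platform supplies an
// implementation backed by its native audio API (XAudio2, CoreAudio, etc.).
class IPlatformAudioOutput
{
public:
    virtual ~IPlatformAudioOutput() = default;
    virtual bool Open(int32_t SampleRate, int32_t NumChannels) = 0;
    // Called by the renderer with each fully mixed buffer of interleaved samples.
    virtual void SubmitBuffer(const float* Samples, int32_t NumFrames) = 0;
    virtual void Close() = 0;
};

// A trivial null implementation, useful for tests and unsupported platforms.
class NullAudioOutput final : public IPlatformAudioOutput
{
public:
    bool Open(int32_t SampleRate, int32_t NumChannels) override
    {
        bOpen = (SampleRate > 0 && NumChannels > 0);
        return bOpen;
    }
    void SubmitBuffer(const float* /*Samples*/, int32_t NumFrames) override
    {
        if (bOpen) { FramesSubmitted += NumFrames; }
    }
    void Close() override { bOpen = false; }

    int64_t FramesSubmitted = 0;

private:
    bool bOpen = false;
};
```

With this split, platform-specific bugs and feature gaps are isolated to the backends, while decoding, mixing, and DSP code compiles and behaves identically everywhere.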
