🎧 Creating Audio Experiences for Meta Quest 2 and 3: Best Practices and Technical Limitations
- Konga Sounds
- Apr 14
- 4 min read
Updated: Jun 17
Audio plays an integral role in virtual reality: it is not an accessory but a crucial component of immersion. Creating audio experiences for headsets like the Meta Quest 2 and Meta Quest 3 requires a focus on performance, spatialization, and compatibility.
If you are an interactive experience creator using Unity or Unreal Engine, this article provides a technical overview of audio possibilities and limitations in standalone VR.
🧠 Audio Processing Capability on Quest 2 and 3
Both headsets rely on mobile chipsets (Snapdragon XR2 in Quest 2 and XR2 Gen 2 in Quest 3). While these chipsets are powerful, they do not offer the same processing capabilities as a PC or console.
This limitation affects:
- Maximum number of simultaneous 3D audio sources
- Use of effects like reverb, delay, and occlusion
- Complexity of spatialization algorithms
On average:
- It is safe to work with 20 to 30 simultaneous 3D sources.
- Beyond 50 sources, there is a risk of performance drops if optimization is not applied.
🎮 The Challenge of Audio in Standalone VR
Standalone headsets like the Meta Quest 2 and 3 have limited CPU, GPU, and RAM resources. This means the audio system needs to be optimized without compromising immersion.
🎧 Creating Spatial Audio: Unreal vs Unity
✅ Unreal Engine
Unreal's audio system is modular and powerful. It has native support for:
- 3D and binaural audio.
- Spatialization plugins (like Steam Audio or the Oculus Spatializer Plugin).
- Sound Classes, Sound Concurrency, and Sound Mixes.
✅ Unity
Unity offers 3D audio support via Audio Source and Audio Listener. However, it typically requires additional plugins for advanced spatialization. Common plugins include Resonance Audio and the Oculus Spatializer Plugin (recommended by Meta for new applications, replacing the discontinued Oculus Audio SDK).
🔊 Audio Channels: Technical Limitations
Although the Quest 2 and 3 support spatial audio, critical limits exist.
🎚️ Unreal Engine – Channel Management
| Parameter | Details |
|----------------------------|--------------------------------------------------------|
| Max Channels | Default value: 32 simultaneous voices (can be adjusted) |
| Spatially Aware Voices | Recommended: up to 10 simultaneous to maintain performance |
| Total Simultaneous Voices | Safe to keep between 20 and 24 voices maximum |
| Where to Configure | Project Settings → Audio → Max Channels |
🔧 Tip: Avoid using spatialization for all sounds. Prioritize sounds that are close to the player, interactive, or narrative.
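The prioritization in this tip can be sketched in plain C++ (an illustrative sketch, not engine code; the `Voice` struct, budget size, and priority rule are assumptions): interactive and narrative sounds claim the spatialization budget first, then the closest remaining sources.

```cpp
#include <algorithm>
#include <vector>

struct Voice {
    float distance;   // meters from the listener
    bool interactive; // player-triggered or narrative sounds get priority
    bool spatialize;  // decided by the budget pass below
};

// Give the spatialization budget to interactive sounds first, then to the
// nearest remaining voices; everything else stays non-spatialized (cheap).
void AssignSpatializationBudget(std::vector<Voice>& voices, size_t budget) {
    std::sort(voices.begin(), voices.end(), [](const Voice& a, const Voice& b) {
        if (a.interactive != b.interactive) return a.interactive; // interactive first
        return a.distance < b.distance;                           // then nearest
    });
    for (size_t i = 0; i < voices.size(); ++i)
        voices[i].spatialize = (i < budget);
}
```

In a real project this pass would run when sounds start (or periodically), feeding each voice's flag into the engine's per-sound spatialization setting.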
🗂️ Organization and Optimization in Unreal
🎛️ Sound Class
You can create categories like:
- SFX
- Ambience
- Dialogue
- UI
This organization facilitates global volume adjustments and dynamic mixing.
🚦 Sound Concurrency
Sound Concurrency prevents multiple sounds of the same type from stacking up. For example, use it to limit the number of simultaneous footsteps or gunfire sounds.
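The behavior is easy to picture with a minimal C++ sketch in the spirit of Unreal's Sound Concurrency (the class, the per-name grouping, and the "steal oldest" rule here are illustrative assumptions, not the engine's implementation):

```cpp
#include <deque>
#include <map>
#include <string>

// Caps concurrent instances per sound name; when the cap is hit, the oldest
// instance is "stolen" (stopped) so the newest request can play.
class ConcurrencyLimiter {
public:
    explicit ConcurrencyLimiter(size_t maxConcurrent) : maxConcurrent_(maxConcurrent) {}

    // Returns the id of a voice to stop (-1 if none) before playing newId.
    int Play(const std::string& sound, int newId) {
        auto& active = active_[sound];
        int stolen = -1;
        if (active.size() >= maxConcurrent_) {
            stolen = active.front();  // steal the oldest instance
            active.pop_front();
        }
        active.push_back(newId);
        return stolen;
    }

    size_t ActiveCount(const std::string& sound) const {
        auto it = active_.find(sound);
        return it == active_.end() ? 0 : it->second.size();
    }

private:
    size_t maxConcurrent_;
    std::map<std::string, std::deque<int>> active_;
};
```

Unreal's Sound Concurrency assets offer several resolution rules beyond "stop oldest" (stop farthest, stop quietest, prevent new), but the cap-and-steal idea is the same.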
📀 Streaming vs Memory
Long sounds (like ambient music or loops) should be streamed to save memory. In contrast, short sounds (effects or quick actions) should be preloaded to reduce latency.
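As a sketch, the decision can be reduced to a small heuristic (the 10-second threshold is an illustrative assumption, not a Quest or engine constant):

```cpp
enum class LoadMode { Preload, Stream };

// Long beds (music, ambiences) and loops stream to save memory;
// short one-shots preload so they fire with minimal latency.
LoadMode ChooseLoadMode(float durationSeconds, bool isLooping) {
    if (isLooping || durationSeconds > 10.0f)
        return LoadMode::Stream;
    return LoadMode::Preload;
}
```

In Unreal this maps to the streaming/loading settings on the sound asset; in Unity, to the AudioClip's Load Type (Streaming vs. Decompress On Load / Compressed In Memory).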
🧠 Spatialization and Binaural in VR
To create more realistic audio in VR:
- Use plugins like Steam Audio (available for both Unreal and Unity) or the Oculus Spatializer Plugin.
- Prioritize head-locked audio (UI, inner voice) over world-spatialized audio (3D objects).
Head-Related Transfer Functions (HRTF) are crucial for accurate spatial immersion. They simulate how sound interacts with our head and ears.
🔇 Avoid distant 3D sounds playing simultaneously, as they can strain the CPU and add little value to the experience.
🚀 Final Tips for High-Quality Audio on Meta Quest
- Test constantly on the actual hardware (Quest 2 or 3).
- Limit simultaneous voices; in VR, less is often more.
- Use Meta's profiling tools (such as the OVR Metrics Tool) or Unreal's audio stat console commands to monitor audio load.
- Adapt the mix dynamically to the scene: in intense moments, reduce background sounds to highlight the important elements.
- Use Head-Related Transfer Functions (HRTF) whenever possible for maximum immersion.
📌 Conclusion
Working with audio in VR requires balancing quality and performance. The Meta Quest 2 and 3 are capable platforms, but it is essential to follow best practices and stay aware of their hardware limitations. With thoughtful sound planning, you can create memorable, immersive sound experiences.
If you are developing a VR experience in Unity or Unreal and need help optimizing audio, please get in touch. We can assist you with pipeline, spatialization, and complete sound design for your project.
🌐 Audio Ambisonics: A Powerful Ally for Ambiance
When creating immersive environments (like forests, cities, or festivals), utilizing audio ambisonics is an excellent practice.
✅ Advantages:
- Performance savings: instead of using multiple 3D audio sources, you use a single ambisonic file.
- Natural spatialization: the sound field stays anchored to the world as the player's head rotates, providing a realistic sense of presence.
- Compatibility: supported on Quest 2 and 3 via the Oculus Spatializer, Resonance Audio, or Steam Audio.
🎧 What is Audio Ambisonics?
- It encodes a full spherical sound scene into a multi-channel recording.
- The most common format is B-format, which uses four channels (W, X, Y, Z) at first order.
- The engine decodes it in real time against the player's HRTF.
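To make the encoding side concrete, here is a sketch of first-order B-format encoding of a mono sample (using the FuMa convention, where W carries a 1/√2 factor; the struct and function names are illustrative). The decoder later rotates this four-channel field against the player's head orientation:

```cpp
#include <cmath>

struct BFormat { float w, x, y, z; };

// Spread one mono sample over the four first-order B-format channels
// according to its direction (azimuth/elevation in radians).
BFormat EncodeFirstOrder(float sample, float azimuthRad, float elevationRad) {
    BFormat out;
    out.w = sample * (1.0f / std::sqrt(2.0f));                       // omnidirectional
    out.x = sample * std::cos(azimuthRad) * std::cos(elevationRad);  // front-back
    out.y = sample * std::sin(azimuthRad) * std::cos(elevationRad);  // left-right
    out.z = sample * std::sin(elevationRad);                         // up-down
    return out;
}
```

Note that modern tools often use the AmbiX convention (ACN channel order, SN3D normalization) instead of FuMa; the idea is identical but the channel ordering and scaling differ, so always check which convention your plugin expects.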
📁 How to Use in Unreal:
- Import the audio with the Ambisonics setting enabled on the sound asset.
- Let the active spatialization plugin (e.g. Oculus Spatializer or Steam Audio) handle the binaural decode.
- Use it as background ambience, avoiding multiple independent sources.
⚙️ Best Practices:
Ideal for persistent background sounds like:
- Forest ambience
- Distant crowds
- Urban traffic
Not recommended for interactive or dynamic sounds, like footsteps or gameplay effects.
🛠️ Recommended Tools
Unreal Engine:
- Audio Mixer
- Sound Concurrency
- Meta XR Plugin
- Meta XR Audio SDK (successor to the discontinued Oculus Audio SDK)
Unity:
- Oculus Spatializer
- Resonance Audio
- Steam Audio
FMOD (for more advanced projects):
- Integration with Unreal and Unity
- Precise control over parameters and mixes
- Low overhead if configured well
💡 Final Tips
- Prefer mono sources for spatialization; stereo files can produce inconsistent positioning.
- Avoid real-time reverb from multiple sources; use pre-rendered early reflections or ambisonic audio instead.
- Test on the actual headset: what sounds good in the editor may differ on the Quest.
- Keep occlusion simple: skip heavy physics calculations and use basic trigger volumes with lowpass filters.
- Use audio as a gameplay guide: in VR, sound is perception; it can inform, scare, alert, and evoke more than visuals.
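The "simple occlusion" tip above can be sketched as a single yes/no check driving a lowpass cutoff and a volume dip, instead of any physics-based sound propagation (the cutoff and gain values here are illustrative assumptions):

```cpp
struct OcclusionSettings {
    float cutoffHz; // lowpass filter cutoff applied to the sound
    float volume;   // linear gain multiplier
};

// One line-of-sight test (e.g. a single raycast from listener to source)
// decides between an "open" and a "muffled" preset. Cheap and convincing.
OcclusionSettings ApplyOcclusion(bool lineOfSightBlocked) {
    if (lineOfSightBlocked)
        return {1500.0f, 0.6f};  // muffled: lowpass plus a gentle volume dip
    return {20000.0f, 1.0f};     // open path: effectively no filtering
}
```

Interpolating between the two presets over 100-200 ms when the occlusion state changes avoids audible clicks as the player or source moves behind geometry.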