Hangzhou Dianzi University

Neural Audio Codec with Sparse Quantization

Xiangbo Wang, Wenbin Jiang, Jin Wang, Yubo You, Sheng Fang, Fei Wen
Affiliation: IIPL, Hangzhou Dianzi University
Conference: Accepted by ICASSP 2026
Recent neural audio compression models often rely on residual vector quantization for high-fidelity coding, but using a fixed number of per-frame codebooks is suboptimal for the wide variability of audio content—especially for signals that are either very simple or highly complex. To address this limitation, we propose SwitchCodec, a neural audio codec based on Residual Experts Vector Quantization (REVQ). REVQ combines a shared quantizer with dynamically routed expert quantizers that are activated according to the input audio, decoupling bitrate from codebook capacity and improving compression efficiency. This design ensures full training and utilization of each quantizer. In addition, a variable-bitrate mechanism adjusts the number of active expert quantizers at inference, enabling multi-bitrate operation without retraining. Experiments demonstrate that SwitchCodec surpasses existing baselines on both objective metrics and subjective listening tests.
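The REVQ idea described above, a shared quantizer whose residual is then quantized by a small set of input-dependent expert quantizers, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the linear router, codebook sizes, and top-k expert selection are all assumptions introduced for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def vq(codebook, x):
    """Quantize each frame in x to its nearest codebook entry."""
    d = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(1)
    return codebook[idx], idx

# Hypothetical sizes for illustration only.
dim, n_codes, n_experts, top_k = 8, 16, 4, 2
shared = rng.normal(size=(n_codes, dim))            # shared quantizer codebook
experts = rng.normal(size=(n_experts, n_codes, dim))  # expert codebooks
router_w = rng.normal(size=(dim, n_experts))        # hypothetical linear router

def revq_encode(x, top_k=top_k):
    # Stage 1: the shared quantizer captures the coarse content.
    q, _ = vq(shared, x)
    residual = x - q
    # Router scores experts from the (mean-pooled) input; top-k are activated,
    # so bitrate is decoupled from total codebook capacity.
    scores = x.mean(0) @ router_w
    active = np.argsort(scores)[-top_k:]
    recon = q.copy()
    for e in active:
        # Each active expert quantizes what the previous stages left over.
        eq, _ = vq(experts[e], residual)
        recon += eq
        residual = residual - eq
    return recon, active

x = rng.normal(size=(10, dim))  # 10 frames of encoder latents
recon, active = revq_encode(x)
print(recon.shape, len(active))
```

Lowering `top_k` at inference reduces the number of active experts, which is one way to realize the variable-bitrate mechanism the abstract describes: fewer residual stages means a lower bitrate, with no retraining.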
Highlights: 2.87 PESQ @ 2.6 kbps · 4.04 ViSQOL · REVQ core architecture
Audio Samples
[Audio demo grid: Ground Truth / Reference, SwitchCodec (Proposed), DAC (Baseline), and Encodec (Baseline), each with five samples (ID-01 through ID-05).]