GAN Vocoder:
Multi-Resolution Discriminator Is All You Need

Authors: Jaeseong You, Dalhyun Kim, Gyuhyeon Nam, Geumbyeol Hwang, Gyeongsu Chae
arXiv: 2103.05236

Abstract

Several of the latest GAN-based vocoders show remarkable achievements, outperforming autoregressive and flow-based competitors in both qualitative and quantitative measures while synthesizing orders of magnitude faster. In this work, we hypothesize that the common factor underlying their success is the multi-resolution discriminating framework, not the minute details in architecture, loss function, or training strategy. We experimentally test the hypothesis by evaluating six different generators paired with one shared multi-resolution discriminating framework. For all evaluative measures with respect to text-to-speech syntheses and for all perceptual metrics, their performances are not distinguishable from one another, which supports our hypothesis.
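The paper's central claim concerns the shared multi-resolution discriminating framework rather than any particular generator. As a rough illustration only (this is not the authors' implementation), the sketch below shows one common form of multi-resolution discrimination used in GAN vocoders: separate sub-discriminators score magnitude spectrograms computed with different STFT resolutions, and their outputs are combined in the adversarial loss. All class names, layer sizes, and the specific STFT resolutions here are assumptions made for the example; please refer to the paper for the exact framework.

```python
# Hypothetical sketch of a multi-resolution discriminator (not the authors' code).
# Each sub-discriminator scores a magnitude spectrogram at one STFT resolution.
import torch
import torch.nn as nn


class SpectrogramDiscriminator(nn.Module):
    """One sub-discriminator operating on a single STFT resolution."""

    def __init__(self, n_fft, hop_length, win_length):
        super().__init__()
        self.n_fft = n_fft
        self.hop_length = hop_length
        self.win_length = win_length
        # Small 2-D conv stack producing per-patch real/fake scores.
        self.layers = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(32, 32, kernel_size=3, stride=(2, 1), padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(32, 32, kernel_size=3, stride=(2, 2), padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, wav):
        # wav: (batch, samples) -> magnitude spectrogram (batch, 1, freq, frames)
        spec = torch.stft(
            wav,
            n_fft=self.n_fft,
            hop_length=self.hop_length,
            win_length=self.win_length,
            window=torch.hann_window(self.win_length, device=wav.device),
            return_complex=True,
        ).abs()
        return self.layers(spec.unsqueeze(1))


class MultiResolutionDiscriminator(nn.Module):
    """Runs one sub-discriminator per STFT resolution (resolutions are assumed)."""

    def __init__(self, resolutions=((1024, 256, 1024), (2048, 512, 2048), (512, 128, 512))):
        super().__init__()
        self.discriminators = nn.ModuleList(
            SpectrogramDiscriminator(*r) for r in resolutions
        )

    def forward(self, wav):
        # Returns one score map per resolution; adversarial losses are summed over them.
        return [d(wav) for d in self.discriminators]


if __name__ == "__main__":
    disc = MultiResolutionDiscriminator()
    fake = torch.randn(2, 16000)  # two 1-second clips at 16 kHz
    print([s.shape for s in disc(fake)])
```

In a full training setup, the adversarial terms from all resolutions would typically be summed and combined with a reconstruction loss, with the same discriminator framework reused across the six generators compared in the paper.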

Audio Samples

Note: Each row corresponds to a different utterance. Please refer to the paper for experimental details.

Ground truth mel spectrogram reconstruction

[Audio sample table — columns: Ground Truth, Ours, HiFi-GAN, MelGAN, Parallel WaveGAN, Universal MelGAN, VocGAN]

Text-to-speech syntheses

[Audio sample table — columns: Ground Truth, Ours, HiFi-GAN, MelGAN, Parallel WaveGAN, Universal MelGAN, VocGAN]

Citation

@misc{you2021gan,
      title={GAN Vocoder: Multi-Resolution Discriminator Is All You Need},
      author={Jaeseong You and Dalhyun Kim and Gyuhyeon Nam and Geumbyeol Hwang and Gyeongsu Chae},
      year={2021},
      eprint={2103.05236},
      archivePrefix={arXiv},
      primaryClass={cs.SD}
}