Most current zero-shot voice conversion methods rely on externally supervised components for training, particularly speaker encoders. To eliminate this dependency, this paper introduces GenVC, a novel framework that disentangles speaker identity and linguistic content from speech signals in a self-supervised manner. GenVC leverages speech tokenizers and an autoregressive, Transformer-based language model as its backbone for speech generation. This design supports large-scale training while enhancing both source speaker privacy protection and target speaker cloning fidelity. Experimental results demonstrate that GenVC achieves notably higher speaker similarity, with naturalness on par with leading zero-shot approaches. Moreover, its autoregressive formulation allows flexibility in temporal alignment, which limits the preservation of source prosody and speaker-specific traits and makes GenVC highly effective for voice anonymization.
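To make the autoregressive formulation concrete, here is a minimal, purely illustrative sketch of token-based voice conversion with a Transformer language model. The layout (target-speaker prompt tokens, then source content tokens, then generated acoustic tokens), the model sizes, and all names are assumptions for illustration, not the paper's actual architecture; positional encodings and real speech tokenizers are omitted for brevity.

```python
import torch
import torch.nn as nn

class ToyVCLM(nn.Module):
    """Toy causal Transformer LM over discrete speech tokens (hypothetical sketch,
    not GenVC's actual design). It consumes a concatenated sequence of
    speaker-prompt tokens and content tokens, and predicts acoustic tokens."""

    def __init__(self, vocab=256, dim=64, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)  # positional encoding omitted for brevity
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, layers)
        self.head = nn.Linear(dim, vocab)

    def forward(self, tokens):
        x = self.embed(tokens)
        T = tokens.size(1)
        # Causal mask so each position attends only to earlier tokens.
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        return self.head(self.backbone(x, mask=mask))

@torch.no_grad()
def convert(model, prompt_toks, content_toks, n_acoustic=5):
    """Greedy autoregressive decoding of acoustic tokens, conditioned on a
    target-speaker prompt and the source utterance's content tokens."""
    seq = torch.cat([prompt_toks, content_toks], dim=1)
    out = []
    for _ in range(n_acoustic):
        logits = model(seq)[:, -1]                 # next-token distribution
        nxt = logits.argmax(-1, keepdim=True)      # greedy choice
        out.append(nxt)
        seq = torch.cat([seq, nxt], dim=1)         # feed prediction back in
    return torch.cat(out, dim=1)

model = ToyVCLM().eval()
prompt = torch.randint(0, 256, (1, 4))    # stand-in for target-speaker prompt tokens
content = torch.randint(0, 256, (1, 6))   # stand-in for source content tokens
acoustic = convert(model, prompt, content)
print(acoustic.shape)  # torch.Size([1, 5])
```

Because decoding is autoregressive rather than frame-aligned to the source, the generated token sequence need not mirror the source's timing, which is the flexibility in temporal alignment the abstract refers to.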
Audio samples: Source Utterance | Target Prompt | Leading VC Approaches | GenVC
@inproceedings{cai2025genvcselfsupervisedzeroshotvoice,
  title={GenVC: Self-Supervised Zero-Shot Voice Conversion},
  author={Zexin Cai and Henry Li Xinyuan and Ashi Garg and Leibny Paola García-Perera and Kevin Duh and Sanjeev Khudanpur and Matthew Wiesner and Nicholas Andrews},
  booktitle={IEEE Workshop on Automatic Speech Recognition and Understanding},
  year={2025},
}