publications

Publications by category in reverse chronological order. Generated by jekyll-scholar.

2025

  1. Preprint
    TaDiCodec: Text-aware Diffusion Speech Tokenizer for Speech Language Modeling
    Yuancheng Wang, Dekun Chen, Xueyao Zhang, Junan Zhang, Jiaqi Li, and Zhizheng Wu
    2025
    TL;DR: We propose a text-aware speech tokenizer with a single codebook and a frame rate of 6.25 Hz for speech language modeling.
  2. ACL 2025
    Advancing Zero-shot Text-to-Speech Intelligibility across Diverse Domains via Preference Alignment
    Xueyao Zhang*, Yuancheng Wang*, Chaoren Wang, Ziniu Li, Zhuo Chen, and Zhizheng Wu
    2025
    TL;DR: We propose the INTP dataset and extend preference alignment to enhance the intelligibility and overall quality of TTS systems in challenging scenarios.
  3. Preprint
    Metis: A Foundation Speech Generation Model with Masked Generative Pre-training
    Yuancheng Wang, Jiachen Zheng, Junan Zhang, Xueyao Zhang, Huan Liao, and Zhizheng Wu
    2025
    TL;DR: We propose a foundation speech generation model with masked generative pre-training.
  4. ICLR 2025
    MaskGCT: Zero-Shot Text-to-Speech with Masked Generative Codec Transformer
    Yuancheng Wang, Haoyue Zhan, Liwei Liu, Ruihong Zeng, Haotian Guo, Jiachen Zheng, Qiang Zhang, Xueyao Zhang, Shunsi Zhang, and Zhizheng Wu
    2025
    TL;DR: A fully non-autoregressive large-scale zero-shot TTS model that eliminates the need for phone-level duration prediction.

2024

  1. ICML 2024 Oral
    NaturalSpeech 3: Zero-Shot Speech Synthesis with Factorized Codec and Diffusion Models
    Zeqian Ju*, Yuancheng Wang*, Kai Shen*, Xu Tan*, Detai Xin, Dongchao Yang, Yanqing Liu, Yichong Leng, Kaitao Song, Siliang Tang, Zhizheng Wu, Tao Qin, Xiang-Yang Li, Wei Ye, Shikun Zhang, Jiang Bian, Lei He, Jinyu Li, and Sheng Zhao
    2024
    TL;DR: A large-scale zero-shot TTS model that achieves quality on par with human recordings.
  2. NeurIPS 2024
    SD-Eval: A Benchmark Dataset for Spoken Dialogue Understanding Beyond Words
    Junyi Ao*, Yuancheng Wang*, Xiaohai Tian, Dekun Chen, Jun Zhang, Lu Lu, Yuxuan Wang, Haizhou Li, and Zhizheng Wu
    2024
    TL;DR: We propose a benchmark dataset to evaluate spoken dialogue understanding and generation.
  3. IEEE SLT 2024
    Amphion: an Open-Source Audio, Music, and Speech Generation Toolkit
    Xueyao Zhang*, Liumeng Xue*, Yicheng Gu*, Yuancheng Wang*, Jiaqi Li, Haorui He, Chaoren Wang, Songting Liu, Xi Chen, Junan Zhang, Tze Ying Tang, Lexiao Zou, Mingxuan Wang, Jun Han, Kai Chen, Haizhou Li, and Zhizheng Wu
    2024
    TL;DR: We develop a unified toolkit for audio, music, and speech generation.
  4. IEEE SLT 2024
    Emilia: An Extensive, Multilingual, and Diverse Speech Dataset for Large-Scale Speech Generation
    Haorui He, Zengqiang Shang, Chaoren Wang, Xuyuan Li, Yicheng Gu, Hua Hua, Liwei Liu, Chen Yang, Jiaqi Li, Peiyang Shi, Yuancheng Wang, Kai Chen, Pengyuan Zhang, and Zhizheng Wu
    2024
    TL;DR: We collect a 100,000-hour in-the-wild speech dataset for speech generation.
  5. Preprint
    FoleyCrafter: Bring Silent Videos to Life with Lifelike and Synchronized Sounds
    Yiming Zhang, Yicheng Gu, Yanhong Zeng, Zhening Xing, Yuancheng Wang, Zhizheng Wu, and Kai Chen
    2024
  6. Preprint
    Debatts: Zero-Shot Debating Text-to-Speech Synthesis
    Yiqiao Huang, Yuancheng Wang, Jiaqi Li, Haotian Guo, Haorui He, Shunsi Zhang, and Zhizheng Wu
    2024
  7. Preprint
    RALL-E: Robust Audio Language Modeling with Chain-of-Thought Prompting for Text-to-Speech Synthesis
    Detai Xin, Xu Tan, Kai Shen, Zeqian Ju, Dongchao Yang, Yuancheng Wang, Shinnosuke Takamichi, Hiroshi Saruwatari, Shujie Liu, Jinyu Li, and others
    2024
  8. Preprint
    Noro: A Noise-Robust One-shot Voice Conversion System with Hidden Speaker Representation Capabilities
    Haorui He, Yuchen Song, Yuancheng Wang, Haoyang Li, Xueyao Zhang, Li Wang, Gongping Huang, Eng Siong Chng, and Zhizheng Wu
    2024

2023

  1. NeurIPS 2023
    AUDIT: Audio Editing by following Instructions with Latent Diffusion Models
    Yuancheng Wang, Zeqian Ju, Xu Tan, Lei He, Zhizheng Wu, and Jiang Bian
    2023
    TL;DR: The first audio editing model that can follow natural language instructions.