https://arxiv.org/abs/2502.06490
Recent Advances in Discrete Speech Tokens: A Review
Yiwei Guo, Zhihan Li, Hankun Wang, Bohan Li, Chongtian Shao, Hanglei Zhang, Chenpeng Du, Xie Chen, Shujie Liu, Kai Yu
The rapid advancement of speech generation technologies in the era of large language models (LLMs) has established discrete speech tokens as a foundational paradigm for speech representation. These tokens, characterized by their discrete, compact, and concise nature, are not only advantageous for efficient transmission and storage, but also inherently compatible with the language modeling framework, enabling seamless integration of speech into text-dominated LLM architectures. Current research categorizes discrete speech tokens into two principal classes: acoustic tokens and semantic tokens, each of which has evolved into a rich research domain characterized by unique design philosophies and methodological approaches. This survey systematically synthesizes the existing taxonomy and recent innovations in discrete speech tokenization, conducts a critical examination of the strengths and limitations of each paradigm, and presents systematic experimental comparisons across token types. Furthermore, we identify persistent challenges in the field and propose potential research directions, aiming to offer actionable insights to inspire future advancements in the development and application of discrete speech tokens.
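As a rough illustration of the "semantic token" branch of this taxonomy (my own minimal sketch, not code from the survey; the SSL model, layer, and codebook size are assumptions), frame-level self-supervised features are typically quantized with k-means, and each frame is then represented by its nearest centroid id. Acoustic tokens, in contrast, usually come from neural codecs trained to reconstruct the waveform with residual vector quantization.

```python
# Minimal sketch: k-means quantization of SSL speech features into discrete tokens.
# The feature extractor (e.g. a HuBERT-like model) is assumed and faked with random data here.
import numpy as np
from sklearn.cluster import KMeans

def train_tokenizer(features, n_tokens=500):
    """features: (N, D) frame-level embeddings pooled from a training corpus."""
    return KMeans(n_clusters=n_tokens, n_init=10, random_state=0).fit(features)

def tokenize(km, utterance_features):
    """Map each frame embedding to its nearest centroid id -> discrete token sequence."""
    return km.predict(utterance_features)

corpus = np.random.randn(10_000, 768).astype(np.float32)   # stand-in for real SSL features
km = train_tokenizer(corpus, n_tokens=100)
print(tokenize(km, np.random.randn(50, 768).astype(np.float32))[:10])
```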
A somewhat interesting in-depth piece on optimizing Whisper. Another bit of whisperology
https://github.com/efeslab/LiteASR
Multimodal LLM Phi-4 from Microsoft, good benchmarks on speech
https://huggingface.co/microsoft/Phi-4-multimodal-instruct
We all know that distillation is more efficient than training from scratch, so the paper is not very insightful, but it is interesting to see where it all goes.
https://pages.cs.huji.ac.il/adiyoss-lab/slamming/
https://arxiv.org/abs/2502.15814
Slamming: Training a Speech Language Model on One GPU in a Day
Gallil Maimon, Avishai Elmakies, Yossi Adi
We introduce Slam, a recipe for training high-quality Speech Language Models (SLMs) on a single academic GPU in 24 hours. We do so through empirical analysis of model initialisation and architecture, synthetic training data, preference optimisation with synthetic data and tweaking all other components.
...
https://huggingface.co/datasets/KBLab/rixvox-v2
23k hours of Swedish speech. These guys also release Whisper fine-tunes
https://huggingface.co/KBLab
https://github.com/JusperLee/TIGER
Demos are pretty nice (video part)
https://cslikai.cn/TIGER/
https://arxiv.org/abs/2502.05232
Aligner-Encoders: Self-Attention Transformers Can Be Self-Transducers
Adam Stooke, Rohit Prabhavalkar, Khe Chai Sim, Pedro Moreno Mengibar
Modern systems for automatic speech recognition, including the RNN-Transducer and Attention-based Encoder-Decoder (AED), are designed so that the encoder is not required to alter the time-position of information from the audio sequence into the embedding; alignment to the final text output is processed during decoding. We discover that the transformer-based encoder adopted in recent years is actually capable of performing the alignment internally during the forward pass, prior to decoding. This new phenomenon enables a simpler and more efficient model, the "Aligner-Encoder". To train it, we discard the dynamic programming of RNN-T in favor of the frame-wise cross-entropy loss of AED, while the decoder employs the lighter text-only recurrence of RNN-T without learned cross-attention -- it simply scans embedding frames in order from the beginning, producing one token each until predicting the end-of-message. We conduct experiments demonstrating performance remarkably close to the state of the art, including a special inference configuration enabling long-form recognition. In a representative comparison, we measure the total inference time for our model to be 2x faster than RNN-T and 16x faster than AED. Lastly, we find that the audio-text alignment is clearly visible in the self-attention weights of a certain layer, which could be said to perform "self-transduction".
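A rough sketch of the decoding idea as I read the abstract (shapes and module choices are my assumptions, not the authors' code): since the encoder is assumed to have already aligned audio frames with the output text, the decoder just scans the frames in order with a light text-only recurrence, emitting one token per frame until it predicts end-of-message.

```python
import torch
import torch.nn as nn

class AlignerDecoder(nn.Module):
    """Toy decoder: text-only recurrence plus a joiner over the current encoder frame."""
    def __init__(self, d_model=256, vocab_size=1000, eos_id=0):
        super().__init__()
        self.eos_id = eos_id
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pred_net = nn.LSTMCell(d_model, d_model)    # recurrence over previously emitted tokens only
        self.joiner = nn.Linear(2 * d_model, vocab_size) # combines frame t with the text state

    @torch.no_grad()
    def decode(self, frames):                            # frames: (T, d_model), already "aligned" by the encoder
        h = torch.zeros(1, self.pred_net.hidden_size)
        c = torch.zeros_like(h)
        prev = torch.tensor([self.eos_id])               # reuse EOS as a start token in this toy
        tokens = []
        for t in range(frames.size(0)):                  # scan frames in order, one output token per frame
            logits = self.joiner(torch.cat([frames[t:t+1], h], dim=-1))
            prev = logits.argmax(dim=-1)
            if prev.item() == self.eos_id:               # stop at end-of-message
                break
            tokens.append(prev.item())
            h, c = self.pred_net(self.embed(prev), (h, c))  # update the text state with the emitted token
        return tokens

dec = AlignerDecoder()
print(dec.decode(torch.randn(20, 256)))                  # random stand-in for encoder output
```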
Great speeds
https://arxiv.org/abs/2406.08835
EffectiveASR: A Single-Step Non-Autoregressive Mandarin Speech Recognition Architecture with High Accuracy and Inference Speed
Ziyang Zhuang, Chenfeng Miao, Kun Zou, Ming Fang, Tao Wei, Zijian Li, Ning Cheng, Wei Hu, Shaojun Wang, Jing Xiao
Non-autoregressive (NAR) automatic speech recognition (ASR) models predict tokens independently and simultaneously, bringing high inference speed. However, there is still a gap in the accuracy of the NAR models compared to the autoregressive (AR) models. In this paper, we propose a single-step NAR ASR architecture with high accuracy and inference speed, called EffectiveASR. It uses an Index Mapping Vector (IMV) based alignment generator to generate alignments during training, and an alignment predictor to learn the alignments for inference. It can be trained end-to-end (E2E) with cross-entropy loss combined with alignment loss. The proposed EffectiveASR achieves competitive results on the AISHELL-1 and AISHELL-2 Mandarin benchmarks compared to the leading models. Specifically, it achieves character error rates (CER) of 4.26%/4.62% on the AISHELL-1 dev/test dataset, which outperforms the AR Conformer with about 30x inference speedup.
https://github.com/FireRedTeam/FireRedASR
FireRedASR is a family of large-scale automatic speech recognition (ASR) models supporting Mandarin, Chinese dialects and English, while also offering singing lyrics recognition capability, achieving a new state-of-the-art on public Mandarin ASR benchmarks.
FireRedASR is designed to meet diverse requirements in superior performance and optimal efficiency across various applications. It comprises two variants:
FireRedASR-LLM: Designed to achieve state-of-the-art (SOTA) performance and to enable seamless end-to-end speech interaction. It adopts an Encoder-Adapter-LLM framework leveraging large language model (LLM) capabilities.
FireRedASR-AED: Designed to balance high performance and computational efficiency and to serve as an effective speech representation module in LLM-based speech models. It utilizes an Attention-based Encoder-Decoder (AED) architecture.
https://arxiv.org/pdf/2501.14350
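For reference, a minimal sketch of the generic Encoder-Adapter-LLM pattern that FireRedASR-LLM builds on (dimensions, stacking rate, and layer choices here are assumptions, not FireRed's actual adapter): the adapter downsamples the speech encoder output and projects it into the LLM embedding space so it can be prepended to the text prompt embeddings.

```python
import torch
import torch.nn as nn

class SpeechAdapter(nn.Module):
    def __init__(self, enc_dim=1024, llm_dim=4096, stack=4):
        super().__init__()
        self.stack = stack                            # frame stacking = 4x temporal downsampling (assumed rate)
        self.proj = nn.Sequential(
            nn.Linear(enc_dim * stack, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, enc_out):                       # enc_out: (B, T, enc_dim) from the speech encoder
        B, T, D = enc_out.shape
        T = T - T % self.stack                        # drop tail frames that don't fill a stack
        x = enc_out[:, :T].reshape(B, T // self.stack, D * self.stack)
        return self.proj(x)                           # (B, T/stack, llm_dim): prepend to text embeddings

adapter = SpeechAdapter()
print(adapter(torch.randn(2, 98, 1024)).shape)        # torch.Size([2, 24, 4096])
```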
Exactly 20 years ago we started our first project in speech, a voice for Festival TTS. Many things have happened since then, but it was a great story. Looking forward to the next 20 years now.
https://www.linux.org.ru/news/linux-general/775065?cid=776417
We tried discrete loss for duration from StyleTTS2 in MatchaTTS, it is really good
https://alphacephei.com/nsh/2025/01/12/discrete-units.html
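In case "discrete loss for duration" is unclear, here is a small sketch of the general idea (an assumed setup, not the MatchaTTS or StyleTTS2 code): the duration predictor outputs logits over integer frame counts and is trained with cross-entropy instead of regressing log-durations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscreteDurationPredictor(nn.Module):
    def __init__(self, d_model=192, max_dur=50):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, max_dur + 1),          # one logit per frame count 0..max_dur
        )

    def forward(self, phoneme_hidden):                # (B, L, d_model) phoneme encodings
        return self.net(phoneme_hidden)               # (B, L, max_dur + 1) logits

pred = DiscreteDurationPredictor()
h = torch.randn(2, 30, 192)                           # toy phoneme encodings
target = torch.randint(0, 51, (2, 30))                # ground-truth durations in frames
loss = F.cross_entropy(pred(h).transpose(1, 2), target)   # classification instead of MSE on log-durations
print(loss)
```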
The dataset comprises a 5000-hour speech corpus in Akan, Ewe, Dagbani, Daagare, and Ikposo. Each language includes 1000 hours of audio from indigenous speakers of the language and 100 hours of transcribed speech.
https://github.com/HCI-LAB-UGSPEECHDATA/speech_data_ghana_ug
Diarization-conditioned Whisper (DiCoW) for target-speaker ASR
https://github.com/BUTSpeechFIT/DiCoW
A paper from respected people. By the way, testing on books (LibriSpeech and MLS) with Llama is usually a bad idea: Llama has already seen all the books many times.
https://arxiv.org/abs/2412.16464
Transducer-Llama: Integrating LLMs into Streamable Transducer-based Speech Recognition
Keqi Deng, Jinxi Guo, Yingyi Ma, Niko Moritz, Philip C. Woodland, Ozlem Kalinli, Mike Seltzer
While large language models (LLMs) have been applied to automatic speech recognition (ASR), the task of making the model streamable remains a challenge. This paper proposes a novel model architecture, Transducer-Llama, that integrates LLMs into a Factorized Transducer (FT) model, naturally enabling streaming capabilities. Furthermore, given that the large vocabulary of LLMs can cause data sparsity issue and increased training costs for spoken language systems, this paper introduces an efficient vocabulary adaptation technique to align LLMs with speech system vocabularies. The results show that directly optimizing the FT model with a strong pre-trained LLM-based predictor using the RNN-T loss yields some but limited improvements over a smaller pre-trained LM predictor. Therefore, this paper proposes a weak-to-strong LM swap strategy, using a weak LM predictor during RNN-T loss training and then replacing it with a strong LLM. After LM replacement, the minimum word error rate (MWER) loss is employed to finetune the integration of the LLM predictor with the Transducer-Llama model. Experiments on the LibriSpeech and large-scale multi-lingual LibriSpeech corpora show that the proposed streaming Transducer-Llama approach gave a 17% relative WER reduction (WERR) over a strong FT baseline and a 32% WERR over an RNN-T baseline.
Speech talks from MILA
https://poonehmousavi.github.io/rg
https://www.youtube.com/@CONVAI_RG
A recent one is "Discrete Audio Tokens for Multimodal LLMs" by Mirco Ravanelli
https://www.youtube.com/watch?v=2-Dqzg3fuVE
Upcoming ones are also interesting
Some real benchmarks for speech LLMs. ASR + text LLM still wins
https://github.com/MatthewCYM/VoiceBench
No paper yet, but the samples sound nice
https://sparkaudio.github.io/spark-tts/
Spark-TTS is a novel system built upon the authors' BiCodec, a single-stream speech codec that strategically decomposes speech into two complementary token types: low-bitrate semantic tokens for linguistic content and fixed-length global tokens for speaker-specific attributes. This disentangled representation, combined with the Qwen2.5 LLM and a chain-of-thought (CoT) generation approach, enables both coarse-grained attribute control (e.g., gender, pitch level) and fine-grained parameter adjustment (e.g., precise pitch values, speaking rate).
Introducing Emilia-Large: 200K+ Hours of Open-Source Speech Data!
We’re excited to release Emilia-Large, the largest TTS pretraining dataset: 200K+ hours of multilingual speech data, fully open-source. It is ready to use for #TTS and #SpeechLM.
https://x.com/realamphion/status/1894719602816393295
KAD: No More FAD! An Effective and Efficient Evaluation Metric for Audio Generation
https://arxiv.org/abs/2502.15602
Although being widely adopted for evaluating generated audio signals, the Fréchet Audio Distance (FAD) suffers from significant limitations, including reliance on Gaussian assumptions, sensitivity to sample size, and high computational complexity. As an alternative, we introduce the Kernel Audio Distance (KAD), a novel, distribution-free, unbiased, and computationally efficient metric based on Maximum Mean Discrepancy (MMD). Through analysis and empirical validation, we demonstrate KAD's advantages: (1) faster convergence with smaller sample sizes, enabling reliable evaluation with limited data; (2) lower computational cost, with scalable GPU acceleration; and (3) stronger alignment with human perceptual judgments. By leveraging advanced embeddings and characteristic kernels, KAD captures nuanced differences between real and generated audio. Open-sourced in the kadtk toolkit, KAD provides an efficient, reliable, and perceptually aligned benchmark for evaluating generative audio models.
https://github.com/YoonjinXD/kadtk
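The core of KAD is the unbiased MMD estimator over audio embeddings; kadtk is the reference implementation, but the statistic itself fits in a few lines. This sketch assumes an RBF kernel and pre-extracted embeddings; the toolkit makes its own kernel and embedding choices.

```python
import numpy as np

def rbf_kernel(x, y, sigma):
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2_unbiased(x, y, sigma=1.0):
    """Unbiased MMD^2 between embedding sets x: (n, d) and y: (m, d)."""
    kxx, kyy, kxy = rbf_kernel(x, x, sigma), rbf_kernel(y, y, sigma), rbf_kernel(x, y, sigma)
    n, m = len(x), len(y)
    term_x = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))   # drop diagonal terms for unbiasedness
    term_y = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    return term_x + term_y - 2 * kxy.mean()

real = np.random.randn(256, 128)        # stand-in for embeddings of real audio
fake = np.random.randn(256, 128) + 0.5  # stand-in for embeddings of generated audio
print(mmd2_unbiased(real, fake))
```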
This thing was recently released; somehow missed it before.
Sortformer diarizer: an open-source, end-to-end neural model for speaker diarization.
- Integration with ASR and LLM Models: Sortformer is designed to be integrated with ASR or LLM models as a Transformer Encoder. It can be used to inject token-level speaker ID info into the encoder parts of ASR models and LLMs.
- Train/Fine-tune via Token-level Labels: Sortformer resolves the permutation problem using arrival-time sort-loss-based training, enabling speaker IDs for words to be trained via token-level labels. No more timestamp-based training for speaker diarization!
https://arxiv.org/abs/2409.06656
https://huggingface.co/nvidia/diar_sortformer_4spk-v1
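A rough sketch of the arrival-time sort-loss idea (my reading of the paper, not NVIDIA's implementation): reference speaker-activity tracks are reordered by each speaker's first active frame, which fixes the output permutation, so plain BCE can replace permutation-invariant training.

```python
import torch
import torch.nn.functional as F

def sort_by_arrival(ref):                             # ref: (T, S) binary speaker activities
    first = (ref.cumsum(dim=0) == 0).sum(dim=0)       # index of each speaker's first active frame (T if silent)
    return ref[:, torch.argsort(first)]

def sort_loss(pred, ref):                             # pred: (T, S) sigmoid outputs in arrival order
    return F.binary_cross_entropy(pred, sort_by_arrival(ref).float())

ref = torch.zeros(100, 4)
ref[40:80, 0] = 1                                     # this speaker arrives later...
ref[5:50, 1] = 1                                      # ...than this one, so their columns get swapped
pred = torch.rand(100, 4)
print(sort_loss(pred, ref))
```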
https://huggingface.co/spaces/Speech-Arena-2025/Speech-DF-Arena
1M hours, 48 TB
https://mlcommons.org/2025/01/new-unsupervised-peoples-speech/
Once again (third time) https://github.com/KdaiP/StableTTS is really good.
It is all about conditioning. Many words in the paper, but this picture is the main one.
Guided sampling helps to reduce artifacts and improve clarity, but it also significantly reduces expressiveness. However, one can see that simply reducing the temperature has a similar effect with less compute.
https://alphacephei.com/nsh/2025/01/17/guidance.html
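For context, here are the two knobs compared in the post in their simplest form (illustrative only; conventions for the guidance weight differ between papers, and the actual experiments use a flow-matching model rather than this toy sampler). Guidance needs two forward passes (conditional and unconditional) per step, which is where the extra compute comes from; temperature scaling is a single pass.

```python
import numpy as np

def guided(cond, uncond, w):
    # classifier-free guidance combination; w = 1 is the plain conditional output,
    # w > 1 pushes away from the unconditional prediction (conventions vary by paper)
    return uncond + w * (cond - uncond)

def sample_with_temperature(logits, temperature, rng=np.random.default_rng(0)):
    # lower temperature sharpens the distribution before sampling
    z = logits / temperature
    p = np.exp(z - z.max())
    p /= p.sum()
    return rng.choice(len(logits), p=p)

cond, uncond = np.random.randn(80), np.random.randn(80)   # toy per-bin predictions
print(guided(cond, uncond, 2.0)[:4])
print(sample_with_temperature(np.random.randn(50), temperature=0.7))
```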
Here are the histograms illustrating the VITS duration issues: the model's predictions are simply orthogonal to the real durations.
Maybe the notes are somewhat scattered, but I'd rather not use ChatGPT to fix them. Please check our recent experiments; I'd be happy to hear your comments.
https://alphacephei.com/nsh/2025/01/03/matcha-tts-notes.html
Big ASR from the Wenet team
TouchASP: Elastic Automatic Speech Perception that Everyone Can Touch
Xingchen Song, Chengdong Liang, Binbin Zhang, Pengshen Zhang, ZiYu Wang, Youcheng Ma, Menglong Xu, Lin Wang, Di Wu, Fuping Pan, Dinghao Zhou, Zhendong Peng
Large Automatic Speech Recognition (ASR) models demand a vast number of parameters, copious amounts of data, and significant computational resources during the training process. However, such models can merely be deployed on high-compute cloud platforms and are only capable of performing speech recognition tasks. This leads to high costs and restricted capabilities. In this report, we initially propose the elastic mixture of the expert (eMoE) model. This model can be trained just once and then be elastically scaled in accordance with deployment requirements. Secondly, we devise an unsupervised data creation and validation procedure and gather millions of hours of audio data from diverse domains for training. Using these two techniques, our system achieves elastic deployment capabilities while reducing the Character Error Rate (CER) on the SpeechIO testsets from 4.98% to 2.45%. Thirdly, our model is not only competent in Mandarin speech recognition but also proficient in multilingual, multi-dialect, emotion, gender, and sound event perception. We refer to this as Automatic Speech Perception (ASP), and the perception results are presented in the experimental section.
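The abstract does not spell out the eMoE layer, so this is only a guess at what "train once, elastically scale" could look like (purely an assumption, not the TouchASP code): an MoE feed-forward block whose expert pool can be truncated at deployment time, trading capacity for footprint.

```python
import torch
import torch.nn as nn

class ElasticMoE(nn.Module):
    def __init__(self, d_model=256, d_ff=1024, n_experts=8):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x, active_experts=None):        # x: (B, T, d_model)
        k = active_experts or len(self.experts)       # deployment-time choice of how many experts to keep
        weights = self.router(x)[..., :k].softmax(-1) # soft routing over the retained experts only
        outs = torch.stack([self.experts[i](x) for i in range(k)], dim=-1)  # (B, T, d_model, k)
        return (outs * weights.unsqueeze(-2)).sum(-1)

moe = ElasticMoE()
x = torch.randn(2, 10, 256)
print(moe(x).shape, moe(x, active_experts=2).shape)   # full vs. shrunk deployment
```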
https://huggingface.co/blog/big-bench-audio-release