https://twitter.com/RoshanSSharma2/status/1678523240472358912
Interested in Spoken Language? Our new paper "SLUE Phase-2: A Benchmark Suite of Diverse Spoken Language Understanding Tasks" at #ACL2023 introduces open-source data, tools, and benchmarks for 4 SLU tasks.
https://lnkd.in/ePiUjTiU
Presentation: 11AM on July 11
See you there!
https://arxiv.org/abs/2212.10525
SLUE Phase-2: A Benchmark Suite of Diverse Spoken Language Understanding Tasks
Suwon Shon, Siddhant Arora, Chyi-Jiunn Lin, Ankita Pasad, Felix Wu, Roshan Sharma, Wei-Lun Wu, Hung-Yi Lee, Karen Livescu, Shinji Watanabe
Spoken language understanding (SLU) tasks have been studied for many decades in the speech research community, but have not received as much attention as lower-level tasks like speech and speaker recognition. In particular, there are not nearly as many SLU task benchmarks, and many of the existing ones use data that is not freely available to all researchers. Recent work has begun to introduce such benchmark datasets for several tasks. In this work, we introduce several new annotated SLU benchmark tasks based on freely available speech data, which complement existing benchmarks and address gaps in the SLU evaluation landscape. We contribute four tasks: question answering and summarization involve inference over longer speech sequences; named entity localization addresses the speech-specific task of locating the targeted content in the signal; dialog act classification identifies the function of a given speech utterance. We follow the blueprint of the Spoken Language Understanding Evaluation (SLUE) benchmark suite. In order to facilitate the development of SLU models that leverage the success of pre-trained speech representations, we will be publishing for each task (i) annotations for a relatively small fine-tuning set, (ii) annotated development and test sets, and (iii) baseline models for easy reproducibility and comparisons. In this work, we present the details of data collection and annotation and the performance of the baseline models. We also perform sensitivity analysis of pipeline models' performance (speech recognizer + text model) to the speech recognition accuracy, using more than 20 state-of-the-art speech recognition models.
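The sensitivity analysis at the end is simple to picture: for every ASR front-end, pair its WER with the downstream metric of the same text model run on its transcripts, then correlate across systems. A minimal sketch of that bookkeeping, assuming jiwer/scipy as tools; all numbers below are illustrative placeholders, not SLUE results:

# Sketch of a pipeline (ASR + text model) sensitivity analysis: collect one
# (WER, downstream-metric) pair per ASR front-end and correlate them across systems.
# The tools (jiwer, scipy) and all numbers below are illustrative, not from the paper.
import jiwer
from scipy.stats import spearmanr

# Step 1: WER of one ASR front-end against the reference transcripts.
references = ["play the next song", "what is the weather in boston"]
hypotheses = ["play the next song", "what is the weather in austin"]
print("WER of this front-end:", jiwer.wer(references, hypotheses))

# Step 2: one (WER, SLU metric) pair per front-end, gathered as above; made-up values.
wers = [0.12, 0.18, 0.25, 0.31]
slu_scores = [61.0, 58.2, 53.4, 49.5]
rho, _ = spearmanr(wers, slu_scores)
print(f"Spearman correlation between WER and the SLU metric: {rho:.2f}")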
Cambridge team is always doing nice research
https://arxiv.org/abs/2307.03088
Label-Synchronous Neural Transducer for End-to-End ASR
Keqi Deng, Philip C. Woodland
Neural transducers provide a natural approach to streaming ASR. However, they augment output sequences with blank tokens which leads to challenges for domain adaptation using text data. This paper proposes a label-synchronous neural transducer (LS-Transducer), which extracts a label-level encoder representation before combining it with the prediction network output. Hence blank tokens are no longer needed and the prediction network can be easily adapted using text data. An Auto-regressive Integrate-and-Fire (AIF) mechanism is proposed to generate the label-level encoder representation while retaining the streaming property. In addition, a streaming joint decoding method is designed to improve ASR accuracy. Experiments show that compared to standard neural transducers, the proposed LS-Transducer gave a 10% relative WER reduction (WERR) for intra-domain Librispeech-100h data, as well as 17% and 19% relative WERRs on cross-domain TED-LIUM 2 and AESRC2020 data with an adapted prediction network.
The approach is reasonable at least
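For intuition on the integrate-and-fire idea behind AIF: per-frame weights are accumulated left to right and a label-level vector is emitted each time the running total crosses a threshold, which preserves the streaming property. Below is a CIF-style simplification of that step, not the paper's exact AIF; the weights here are random stand-ins for the model's predictions.

# CIF-style integrate-and-fire, a simplified stand-in for the AIF mechanism in the
# LS-Transducer paper: per-frame weights alpha are accumulated and a label-level vector
# is emitted ("fired") each time the accumulated weight crosses the threshold.
import torch

def integrate_and_fire(encoder_out, alpha, threshold=1.0):
    """encoder_out: (T, D) frame-level states; alpha: (T,) non-negative weights."""
    fired = []
    acc_w = 0.0
    acc_v = torch.zeros(encoder_out.size(1))
    for h_t, a_t in zip(encoder_out, alpha):
        a_t = float(a_t)
        if acc_w + a_t < threshold:        # keep integrating this label
            acc_w += a_t
            acc_v = acc_v + a_t * h_t
        else:                              # fire: split the frame's weight at the boundary
            used = threshold - acc_w
            fired.append(acc_v + used * h_t)
            acc_w = a_t - used             # leftover weight starts the next label
            acc_v = acc_w * h_t
    return torch.stack(fired) if fired else torch.zeros(0, encoder_out.size(1))

enc = torch.randn(50, 256)                 # 50 frames of 256-dim encoder output
alpha = torch.rand(50) * 0.3               # in practice predicted by the model
labels = integrate_and_fire(enc, alpha)
print(labels.shape)                        # roughly (sum(alpha), 256) label-level vectors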
https://github.com/akashmjn/tinydiarize
From
https://arxiv.org/abs/2306.17103
LyricWhiz: Robust Multilingual Zero-shot Lyrics Transcription by Whispering to ChatGPT
Le Zhuo, Ruibin Yuan, Jiahao Pan, Yinghao Ma, Yizhi LI, Ge Zhang, Si Liu, Roger Dannenberg, Jie Fu, Chenghua Lin, Emmanouil Benetos, Wenhu Chen, Wei Xue, Yike Guo
We introduce LyricWhiz, a robust, multilingual, and zero-shot automatic lyrics transcription method achieving state-of-the-art performance on various lyrics transcription datasets, even in challenging genres such as rock and metal. Our novel, training-free approach utilizes Whisper, a weakly supervised robust speech recognition model, and GPT-4, today's most performant chat-based large language model. In the proposed method, Whisper functions as the "ear" by transcribing the audio, while GPT-4 serves as the "brain," acting as an annotator with a strong performance for contextualized output selection and correction. Our experiments show that LyricWhiz significantly reduces Word Error Rate compared to existing methods in English and can effectively transcribe lyrics across multiple languages. Furthermore, we use LyricWhiz to create the first publicly available, large-scale, multilingual lyrics transcription dataset with a CC-BY-NC-SA copyright license, based on MTG-Jamendo, and offer a human-annotated subset for noise level estimation and evaluation. We anticipate that our proposed method and dataset will advance the development of multilingual lyrics transcription, a challenging and emerging task.
Prompt to combine ASR results with GPT-4
Task: As a GPT-4 based lyrics transcription post-processor, your task is to analyze multiple ASR model-generated versions of a song’s lyrics and determine the most accurate version closest to the true lyrics. Also filter out invalid lyrics when all predictions are nonsense.
Input: The input is in JSON format:
{"prediction_1": "line1;line2;...", ...}
Output: Your output must be strictly in readable JSON format without any extra text:
{
  "reasons": "reason1;reason2;...",
  "closest_prediction": <key_of_prediction>,
  "output": "line1;line2..."
}
Requirements: For the "reasons" field, you have to provide a reason for the choice of the "closest_prediction" field. For the "closest_prediction" field, choose the prediction key that is closest to the true lyrics. Only when all predictions greatly differ from each other or are completely nonsense or meaningless, which means that none of the predictions is valid, fill in "None" in this field. For the "output" field, you need to output the final lyrics of closest_prediction. If the "closest_prediction" field is "None", you should also output "None" in this field. The language of the input lyrics is English.
In the Interspeech 2023 program, Daniel Povey has a Johns Hopkins University affiliation (again)
https://interspeech2023.org/wp-content/uploads/2023/06/INTERSPEECH_2023_Booklet_v1.pdf
IWSLT 2023 program is available
https://iwslt.org/2023/program
https://arxiv.org/abs/2306.13114
https://github.com/aixplain/NoRefER
A Reference-less Quality Metric for Automatic Speech Recognition via Contrastive-Learning of a Multi-Language Model with Self-Supervision
Kamer Ali Yuksel, Thiago Ferreira, Ahmet Gunduz, Mohamed Al-Badrashiny, Golara Javadi
The common standard for quality evaluation of automatic speech recognition (ASR) systems is reference-based metrics such as the Word Error Rate (WER), computed using manual ground-truth transcriptions that are time-consuming and expensive to obtain. This work proposes a multi-language referenceless quality metric, which allows comparing the performance of different ASR models on a speech dataset without ground-truth transcriptions. To estimate the quality of ASR hypotheses, a pre-trained language model (LM) is fine-tuned with contrastive learning in a self-supervised manner. In experiments conducted on several unseen test datasets consisting of outputs from top commercial ASR engines in various languages, the proposed referenceless metric obtains a much higher correlation with WER scores and their ranks than the perplexity metric from the state-of-the-art multilingual LM in all experiments, and also reduces WER by more than 7% when used for ensembling hypotheses. The fine-tuned model and experiments are made available for reproducibility at https://github.com/aixplain/NoRefER
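The training signal can be pictured as a pairwise ranking objective: given two hypotheses for the same audio where one is assumed to be better (e.g. from a stronger model), fine-tune the LM-based scorer to rank it higher. A minimal sketch, not the released NoRefER code; the base model (xlm-roberta-base), pooling, and pairing scheme are my assumptions.

# Rough sketch of a referenceless quality scorer trained with a pairwise margin ranking
# loss: the encoder + linear head should score the assumed-better hypothesis above the
# worse one. Base model, pooling, and pairing scheme are assumptions, not NoRefER's setup.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

name = "xlm-roberta-base"                      # any multilingual encoder works for the sketch
tok = AutoTokenizer.from_pretrained(name)
encoder = AutoModel.from_pretrained(name)
scorer = nn.Linear(encoder.config.hidden_size, 1)   # maps pooled embedding to a quality score

def score(texts):
    batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
    hidden = encoder(**batch).last_hidden_state[:, 0]   # first token as pooled representation
    return scorer(hidden).squeeze(-1)

better = ["the cat sat on the mat"]            # hypothesis assumed to be higher quality
worse = ["the cat sad on that mad"]
target = torch.ones(len(better))               # "first argument should rank higher"
loss = nn.MarginRankingLoss(margin=0.1)(score(better), score(worse), target)
loss.backward()                                # a real loop would step an optimizer here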
https://twitter.com/forthshinji/status/1672082306239176706
demo: https://aria-k-alethia.github.io/2023laughter-demo/
corpus: https://sites.google.com/site/shinnosuketakamichi/research-topics/laughter_corpus
source: https://github.com/Aria-K-Alethia/laughter-synthesis/
A device from Meta/Facebook for tracking human activity
https://ariatutorial2023.github.io/
https://arxiv.org/abs/2306.07691
https://styletts2.github.io/
StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models
Yinghao Aaron Li, Cong Han, Vinay S. Raghavan, Gavin Mischler, Nima Mesgarani
In this paper, we present StyleTTS 2, a text-to-speech (TTS) model that leverages style diffusion and adversarial training with large speech language models (SLMs) to achieve human-level TTS synthesis. StyleTTS 2 differs from its predecessor by modeling styles as a latent random variable through diffusion models to generate the most suitable style for the text without requiring reference speech, achieving efficient latent diffusion while benefiting from the diverse speech synthesis offered by diffusion models. Furthermore, we employ large pre-trained SLMs, such as WavLM, as discriminators with our novel differentiable duration modeling for end-to-end training, resulting in improved speech naturalness. StyleTTS 2 surpasses human recordings on the single-speaker LJSpeech dataset and matches it on the multispeaker VCTK dataset as judged by native English speakers. Moreover, when trained on the LibriTTS dataset, our model outperforms previous publicly available models for zero-shot speaker adaptation. This work achieves the first human-level TTS on both single and multispeaker datasets, showcasing the potential of style diffusion and adversarial training with large SLMs. The audio demos and source code are available at this https URL.
Speech-to-Text Adapter and Speech-to-Entity Retriever Augmented LLMs for Speech Understanding
paper page: https://huggingface.co/papers/2306.07944
Large Language Models (LLMs) have been applied in the speech domain, often incurring a performance drop due to misalignment between speech and language representations. To bridge this gap, we propose a joint speech and language model (SLM) using a Speech2Text adapter, which maps speech into the text token embedding space without loss of speech information. Additionally, using CTC-based blank-filtering, we can reduce the speech sequence length to that of text. On the speech MultiWoz dataset (DSTC11 challenge), SLM largely improves the dialog state tracking (DST) performance (24.7% to 28.4% accuracy). Further, to address errors on rare entities, we augment SLM with a Speech2Entity retriever, which uses speech to retrieve relevant entities and then adds them to the original SLM input as a prefix. With this retrieval-augmented SLM (ReSLM), the DST performance jumps to 34.6% accuracy. Moreover, augmenting the ASR task with the dialog understanding task improves the ASR performance from 9.4% to 8.5% WER.
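The blank-filtering step is easy to sketch: take the frame-level CTC argmax, drop frames predicted as blank (and typically repeated frames), and keep only the surviving encoder states, so the speech sequence shrinks toward text length. A minimal sketch with toy tensors; shapes, the blank id, and the random inputs are assumptions for illustration.

# Sketch of CTC-based blank filtering: keep only encoder frames whose CTC argmax is a
# non-blank, non-repeated token, shrinking the speech sequence toward text length.
import torch

def ctc_blank_filter(encoder_out, ctc_logits, blank_id=0):
    """encoder_out: (T, D), ctc_logits: (T, V) -> (T', D) with T' << T."""
    pred = ctc_logits.argmax(dim=-1)               # (T,) frame-level token ids
    keep = pred != blank_id                        # drop blank frames
    repeat = torch.zeros_like(keep)
    repeat[1:] = pred[1:] == pred[:-1]             # drop frames repeating the previous label
    return encoder_out[keep & ~repeat]

enc = torch.randn(200, 512)                        # 200 encoder frames
logits = torch.randn(200, 1000)                    # CTC logits over a 1000-token vocabulary
filtered = ctc_blank_filter(enc, logits)
print(enc.shape, "->", filtered.shape)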
Reinforcement learning in speech from Google
Edit Distance based RL for RNNT decoding
https://arxiv.org/abs/2306.01789
Dongseong Hwang, Changwan Ryu, Khe Chai Sim
RNN-T is currently considered the industry standard in ASR due to its exceptional WERs in various benchmark tests and its ability to support seamless streaming and longform transcription. However, its biggest drawback lies in the significant discrepancy between its training and inference objectives. During training, RNN-T maximizes all alignment probabilities by teacher forcing, while during inference, it uses beam search which may not necessarily find the maximum probable alignment. Additionally, RNN-T's inability to experience mistakes during teacher forcing training makes it more problematic when a mistake occurs in inference. To address this issue, this paper proposes a Reinforcement Learning method that minimizes the gap between training and inference time. Our Edit Distance based RL (EDRL) approach computes rewards based on the edit distance, and trains the network at every action level. The proposed approach yielded SoTA WERs on LibriSpeech for the 600M Conformer RNN-T model.
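The reward idea is easy to picture: score each emitted token by how much closer it brings the hypothesis to the reference in edit distance, so correct emissions earn +1 and errors earn 0 or less, at the level of individual actions. A toy sketch, not the paper's exact reward definition:

# Toy per-action reward in the spirit of EDRL: each emitted token is rewarded by how
# much it reduces the edit distance between the growing hypothesis and the reference.
# This is an illustrative simplification, not the paper's exact formulation.

def edit_distance(a, b):
    # Standard Levenshtein distance between two token sequences.
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def per_action_rewards(hypothesis, reference):
    rewards = []
    for t in range(1, len(hypothesis) + 1):
        d_prev = edit_distance(hypothesis[:t - 1], reference)
        d_curr = edit_distance(hypothesis[:t], reference)
        rewards.append(float(d_prev - d_curr))     # +1 if the token moved closer, <= 0 otherwise
    return rewards

ref = "the cat sat on the mat".split()
hyp = "the cat sad on mat".split()
print(list(zip(hyp, per_action_rewards(hyp, ref))))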
Spoken dataset of books read in French, initially collected from audiocite.net by the GETALP team for the LeBenchmark project.
http://openslr.org/139/
Audiocite.net is a corpus of read French speech downloaded in November 2021 from the Audiocite.net website.
With a total duration of 6682 hours of audio recordings, this corpus is the result of the voluntary work of 130 speakers. The metadata is divided into 4 .json files (all (100%), train (80%), dev (10%), and test (10%)) to be used in NLP models.
The corpus and its metadata were uploaded with a script that distributes the information in a .csv file. The audio and metadata files are intended for use with pre-trained speech models.
Text-to-speech synthesis from dark data with evaluation-in-the-loop data selection
Kentaro Seki, Shinnosuke Takamichi, Takaaki Saeki, Hiroshi Saruwatari
This paper proposes a method for selecting training data for text-to-speech (TTS) synthesis from dark data. TTS models are typically trained on high-quality speech corpora that cost much time and money for data collection, which makes it very challenging to increase speaker variation. In contrast, there is a large amount of data whose availability is unknown (a.k.a, "dark data"), such as YouTube videos. To utilize data other than TTS corpora, previous studies have selected speech data from the corpora on the basis of acoustic quality. However, considering that TTS models robust to data noise have been proposed, we should select data on the basis of its importance as training data to the given TTS model, not the quality of speech itself. Our method with a loop of training and evaluation selects training data on the basis of the automatically predicted quality of synthetic speech of a given TTS model. Results of evaluations using YouTube data reveal that our method outperforms the conventional acoustic-quality-based method.
https://arxiv.org/abs/2210.14850
https://github.com/Takaaki-Saeki/zm-text-tts
https://arxiv.org/abs/2301.12596
While neural text-to-speech (TTS) has achieved human-like natural synthetic speech, multilingual TTS systems are limited to resource-rich languages due to the need for paired text and studio-quality audio data. This paper proposes a method for zero-shot multilingual TTS using text-only data for the target language. The use of text-only data allows the development of TTS systems for low-resource languages for which only textual resources are available, making TTS accessible to thousands of languages. Inspired by the strong cross-lingual transferability of multilingual language models, our framework first performs masked language model pretraining with multilingual text-only data. Then we train this model with paired data in a supervised manner, while freezing a language-aware embedding layer. This allows inference even for languages not included in the paired data but present in the text-only data. Evaluation results demonstrate highly intelligible zero-shot TTS with a character error rate of less than 12% for an unseen language. All experiments were conducted using public datasets and the implementation will be made available for reproducibility.
Besides diarization with tinydiarize, Whisper can also do audio tagging well
https://arxiv.org/abs/2307.03183
Whisper-AT: Noise-Robust Automatic Speech Recognizers are Also Strong General Audio Event Taggers
Yuan Gong, Sameer Khurana, Leonid Karlinsky, James Glass
In this paper, we focus on Whisper, a recent automatic speech recognition model trained with a massive 680k hour labeled speech corpus recorded in diverse conditions. We first show an interesting finding that while Whisper is very robust against real-world background sounds (e.g., music), its audio representation is actually not noise-invariant, but is instead highly correlated to non-speech sounds, indicating that Whisper recognizes speech conditioned on the noise type. With this finding, we build a unified audio tagging and speech recognition model Whisper-AT by freezing the backbone of Whisper, and training a lightweight audio tagging model on top of it. With <1% extra computational cost, Whisper-AT can recognize audio events, in addition to spoken text, in a single forward pass.
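The recipe in the last sentence is simple to sketch: freeze the speech encoder, pool its frame-level representations, and train only a small classification head for audio tags. Below is a generic sketch of that pattern with a stand-in encoder, not the released Whisper-AT code; dimensions and pooling are assumptions.

# Sketch of the "frozen ASR backbone + lightweight audio tagger" recipe: the speech
# encoder is frozen and only a small head is trained for multi-label audio tagging.
# The encoder here is a generic stand-in, not Whisper's actual implementation.
import torch
from torch import nn

class AudioTagger(nn.Module):
    def __init__(self, encoder, feat_dim=512, num_tags=527):   # 527 = AudioSet classes
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False                # freeze the ASR backbone
        self.head = nn.Sequential(                 # lightweight trainable tagger
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, num_tags)
        )

    def forward(self, audio_features):
        with torch.no_grad():
            frames = self.encoder(audio_features)  # (B, T, feat_dim) frame representations
        pooled = frames.mean(dim=1)                # simple temporal mean pooling
        return self.head(pooled)                   # (B, num_tags) tag logits

# Toy stand-in encoder; in practice this would produce the Whisper encoder's hidden states.
encoder = nn.Sequential(nn.Linear(80, 512), nn.ReLU())
model = AudioTagger(encoder)
logits = model(torch.randn(2, 300, 80))            # batch of 2, 300 frames of 80-dim fbanks
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros(2, 527))
print(logits.shape, loss.item())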
Are you skilled at generating synthesized or converted speech samples? Are you concerned about the potential implications of deepfake speech? Are you interested in contributing to advancing technology for detecting such 'fake' speech using machine learning?
If yes, you are warmly invited to contribute to the fifth edition of the ASVspoof (Automatic Speaker Verification and Spoofing Countermeasures) challenge! ASVspoof is centered around the challenge of designing spoofing-robust automatic speaker verification solutions and application-agnostic speech deepfake detectors.
You may join us either as a data provider (phase 1) or as a challenge participant (phase 2). We are now inviting expressions of interest from potential data contributors.
For further details, please refer to the ASVspoof 5 Evaluation Plan which can be downloaded from our website at: https://www.asvspoof.org/
Kind regards,
On behalf of the ASVspoof 5 organising committee
organisers@lists.asvspoof.org
July 1, 2023 - Phase 1 registration opens
July 1, 2023 - training and development data available
July 1, 2023 - TTS/VC adaptation and input data available
July 1, 2023 - surrogate ASV/CM available
July 15, 2023 - Phase 1 CodaLab platform opens
July 15 to September 15, 2023 - submit TTS/VC spoofed data
Another similar one, this time with LLaMA
https://arxiv.org/abs/2306.16007
Prompting Large Language Models for Zero-Shot Domain Adaptation in Speech Recognition
Yuang Li, Yu Wu, Jinyu Li, Shujie Liu
The integration of Language Models (LMs) has proven to be an effective way to address domain shifts in speech recognition. However, these approaches usually require a significant amount of target domain text data for the training of LMs. Different from these methods, in this work, with only a domain-specific text prompt, we propose two zero-shot ASR domain adaptation methods using LLaMA, a 7-billion-parameter large language model (LLM). LLM is used in two ways: 1) second-pass rescoring: reranking N-best hypotheses of a given ASR system with LLaMA; 2) deep LLM-fusion: incorporating LLM into the decoder of an encoder-decoder based ASR system. Experiments show that, with only one domain prompt, both methods can effectively reduce word error rates (WER) on out-of-domain TedLium-2 and SPGISpeech datasets. Especially, the deep LLM-fusion has the advantage of better recall of entity and out-of-vocabulary words.
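The second-pass rescoring variant is straightforward to picture: prepend the domain prompt to each N-best hypothesis, score it with the LLM, and interpolate that score with the first-pass ASR score. A rough sketch with HuggingFace Transformers; the model (gpt2 as a small stand-in for LLaMA), prompt, hypotheses, scores, and interpolation weight are placeholders.

# Sketch of zero-shot domain-adapted N-best rescoring: score each hypothesis with a
# causal LLM conditioned on a domain-specific text prompt and interpolate with the
# first-pass ASR score. Model, prompt, hypotheses, and weights are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"                      # small stand-in; the paper uses LLaMA-7B
tok = AutoTokenizer.from_pretrained(model_name)
lm = AutoModelForCausalLM.from_pretrained(model_name).eval()

domain_prompt = "The following is a transcript of a technology conference talk: "

def llm_logprob(text):
    ids = tok(domain_prompt + text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm(ids, labels=ids)        # mean cross-entropy over shifted tokens
    return -out.loss.item() * (ids.size(1) - 1)   # approx. total log-probability (prompt included for brevity)

nbest = [("i pad sales grew last quarter", -12.3),      # (hypothesis, first-pass ASR log-score)
         ("ipad sales grew last quarter", -12.9)]
lam = 0.3                                 # interpolation weight, tuned on dev data
best = max(nbest, key=lambda h: (1 - lam) * h[1] + lam * llm_logprob(h[0]))
print("rescored 1-best:", best[0])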
A useful effort to collect interspeech paper repos by https://github.com/DmitryRyumin
Please star/share and help fill in the remaining parts; it is a huge effort
https://github.com/DmitryRyumin/INTERSPEECH-2023-Papers
one could probably automate it
UnitSpeech: Speaker-adaptive Speech Synthesis with Untranscribed Data (INTERSPEECH 2023)
https://github.com/gmltmd789/UnitSpeech
Demo
https://unitspeech.github.io/
Another semi-supervised approach, this time from Amazon, with better ensembling than ROVER
https://arxiv.org/abs/2306.12012
Learning When to Trust Which Teacher for Weakly Supervised ASR
Aakriti Agrawal, Milind Rao, Anit Kumar Sahu, Gopinath Chennupati, Andreas Stolcke
Automatic speech recognition (ASR) training can utilize multiple experts as teacher models, each trained on a specific domain or accent. Teacher models may be opaque in nature since their architecture may not be known or their training cadence may differ from that of the student ASR model. Still, the student models are updated incrementally using the pseudo-labels generated independently by the expert teachers. In this paper, we exploit supervision from multiple domain experts in training student ASR models. This training strategy is especially useful in scenarios where few or no human transcriptions are available. To that end, we propose a Smart-Weighter mechanism that selects an appropriate expert based on the input audio, and then trains the student model in an unsupervised setting. We show the efficacy of our approach using the LibriSpeech and LibriLight benchmarks and find an improvement of 4 to 25% over baselines that uniformly weight all the experts, use a single expert model, or combine experts using ROVER.
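A toy picture of the idea: a small gating network looks at utterance-level features and outputs a trust weight per teacher, and the student then trains on the pseudo-label of the most trusted teacher (or a weighted combination). A minimal sketch, with the feature choice, dimensions, and hard-selection rule as my assumptions rather than the paper's Smart-Weighter.

# Toy sketch of learned teacher weighting for weakly supervised ASR: a small gating
# network maps utterance-level features to a distribution over teacher models, and the
# student trains on the pseudo-label of the highest-weighted teacher.
import torch
from torch import nn

num_teachers, feat_dim = 3, 256

gate = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                     nn.Linear(64, num_teachers))

def pick_teacher(utterance_feats, teacher_pseudolabels):
    """utterance_feats: (feat_dim,) pooled audio embedding; teacher_pseudolabels: list of str."""
    weights = torch.softmax(gate(utterance_feats), dim=-1)   # trust per teacher
    best = int(weights.argmax())
    return teacher_pseudolabels[best], weights

feats = torch.randn(feat_dim)                                # stand-in audio embedding
pseudolabels = ["turn off the lights", "turn of the light", "turn off the lights please"]
label, w = pick_teacher(feats, pseudolabels)
print(label, w.tolist())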
https://twitter.com/unilightwf/status/1673522880053940224
GPT-4 is an ensemble
https://twitter.com/soumithchintala/status/1671267150101721090
we shall see LLaMA ensembles soon
Voicebox: Text-Guided Multilingual Universal Speech Generation at Scale
https://ai.facebook.com/blog/voicebox-generative-ai-model-speech/
https://github.com/gweltou/vosk-br
A nice Breton model implemented for Vosk. A very valuable contribution! Please don't hesitate to add a star to that project!
Nice paper on Whisper adaptation to word lists
Code: https://github.com/BriansIDP/WhisperBiasing
https://arxiv.org/abs/2306.01942
Can Contextual Biasing Remain Effective with Whisper and GPT-2?
Guangzhi Sun, Xianrui Zheng, Chao Zhang, Philip C. Woodland
End-to-end automatic speech recognition (ASR) and large language models, such as Whisper and GPT-2, have recently been scaled to use vast amounts of training data. Despite the large amount of training data, infrequent content words that occur in a particular task may still exhibit poor ASR performance, with contextual biasing a possible remedy. This paper investigates the effectiveness of neural contextual biasing for Whisper combined with GPT-2. Specifically, this paper proposes integrating an adapted tree-constrained pointer generator (TCPGen) component for Whisper and a dedicated training scheme to dynamically adjust the final output without modifying any Whisper model parameters. Experiments across three datasets show a considerable reduction in errors on biasing words with a biasing list of 1000 words. Contextual biasing was more effective when applied to domain-specific data and can boost the performance of Whisper and GPT-2 without losing their generality.
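TCPGen itself is more involved, but the biasing-list side is essentially a prefix tree over the biasing words: at each decoding step only tokens that extend a valid path through the tree receive extra probability mass. A minimal trie sketch (character-level for readability; the real component works on subword token ids and is wired into the decoder):

# Minimal prefix tree (trie) over a biasing word list, the data structure behind
# tree-constrained pointer generators: only symbols that extend a valid path through
# the trie are candidates for extra biasing probability at each step.

def build_trie(words):
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node["<end>"] = True              # marks a complete biasing word
    return root

def valid_next_symbols(trie, prefix):
    node = trie
    for ch in prefix:
        if ch not in node:
            return set()                  # prefix fell off the tree: no biasing
        node = node[ch]
    return {k for k in node if k != "<end>"}

biasing_list = ["whisper", "wavlm", "librispeech"]
trie = build_trie(biasing_list)
print(valid_next_symbols(trie, "w"))       # {'h', 'a'}
print(valid_next_symbols(trie, "libri"))   # {'s'}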
ICASSP begins today
https://twitter.com/ieeeICASSP/status/1665942845147029506
New studio-quality & large-scale speech dataset🎙️
LibriTTS-R is a sound-quality-improved version of LibriTTS.
Dataset is freely available: http://openslr.org/141/
Speech samples and TTS outputs in our demo page: https://google.github.io/df-conformer/librittsr/index.html
Paper: https://arxiv.org/abs/2305.18802
New paper from @GoogleResearch & @GoogleDeepMind
Translatotron 3: Unsupervised Speech-to-Speech Translation
Paper: https://arxiv.org/abs/2305.17547
Audio Samples: https://google-research.github.io/lingvo-lab/translatotron3
VioLA: Unified Codec Language Models for Speech Recognition, Synthesis, and Translation
Is one decoder-only generative model all you need for speech recognition, synthesis, and translation?
https://arxiv.org/abs/2305.16107