mishig HF Staff committed on
Commit 78bc34a · verified · 1 Parent(s): f972c4a

Add 1 file

Files changed (1): 2505/2505.14470.md (+198 −0)

Title: PAST: Phonetic-Acoustic Speech Tokenizer

URL Source: https://arxiv.org/html/2505.14470

Markdown Content:

Nadav Har-Tuv, Or Tal, Yossi Adi
The School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel [nadav.har-tuv1@mail.huji.ac.il](mailto:nadav.har-tuv1@mail.huji.ac.il)

###### Abstract

We present PAST, a novel end-to-end framework that jointly models phonetic information alongside signal reconstruction, eliminating the need for external pretrained models. Unlike previous approaches that rely on pretrained self-supervised models, PAST employs supervised phonetic data, directly integrating domain knowledge into the tokenization process via auxiliary tasks. Additionally, we introduce a streamable, causal variant of PAST, enabling real-time speech applications. Results demonstrate that PAST surpasses the evaluated baseline tokenizers across common evaluation metrics, including phonetic representation and speech reconstruction. Notably, PAST also achieves superior performance when serving as a speech representation for speech language models, further highlighting its effectiveness as a foundation for spoken language generation. To foster further research, we release the full implementation. Code, model checkpoints, and samples are available at [pages.cs.huji.ac.il/adiyoss-lab/PAST](https://pages.cs.huji.ac.il/adiyoss-lab/PAST).

###### keywords:

Speech Tokenization, Phonetic and Acoustic Tokens, Speech Language Models

1 Introduction
--------------

Speech and audio language models have recently attracted significant attention in the research community, showcasing remarkable performance across a range of tasks [[1](https://arxiv.org/html/2505.14470v2#bib.bib1), [2](https://arxiv.org/html/2505.14470v2#bib.bib2), [3](https://arxiv.org/html/2505.14470v2#bib.bib3), [4](https://arxiv.org/html/2505.14470v2#bib.bib4), [5](https://arxiv.org/html/2505.14470v2#bib.bib5), [6](https://arxiv.org/html/2505.14470v2#bib.bib6)]. These models usually operate over acoustic tokens or phonetic speech tokens (also known as semantic tokens). Acoustic tokenizers, such as EnCodec [[7](https://arxiv.org/html/2505.14470v2#bib.bib7)] and SoundStream [[8](https://arxiv.org/html/2505.14470v2#bib.bib8)], are designed for high-fidelity waveform reconstruction but are less suitable for language modeling without external text supervision. In contrast, phonetic tokenizers, such as those derived from quantizing wav2vec 2.0 [[9](https://arxiv.org/html/2505.14470v2#bib.bib9)] and HuBERT [[10](https://arxiv.org/html/2505.14470v2#bib.bib10)] latent representations, primarily capture linguistic information [[11](https://arxiv.org/html/2505.14470v2#bib.bib11)], which makes them more suitable for sequential modeling. However, they require an additional vocoder module [[12](https://arxiv.org/html/2505.14470v2#bib.bib12)] for speech synthesis, which increases complexity and can often degrade reconstruction quality.

Hybrid tokenizers, such as [[13](https://arxiv.org/html/2505.14470v2#bib.bib13), [14](https://arxiv.org/html/2505.14470v2#bib.bib14), [15](https://arxiv.org/html/2505.14470v2#bib.bib15)], integrate phonetic and acoustic information into a unified representation. This is achieved by extracting and distilling phonetic features from pretrained self-supervised (SSL) models, e.g., WavLM [[16](https://arxiv.org/html/2505.14470v2#bib.bib16)]. Self-supervised representations exhibit correlation with phonetic content, e.g., as measured by phoneme mutual information [[10](https://arxiv.org/html/2505.14470v2#bib.bib10), [11](https://arxiv.org/html/2505.14470v2#bib.bib11)], but they are not explicitly optimized to capture it. Despite showing promise, such hybrid approaches depend on pretrained SSL models and are therefore potentially limited in their ability to fully capture the phonetic richness of the input. Explicit phonetic supervision could not only improve the tokenizer's ability to capture phonetic content, but also remove the dependence on pretrained SSL models and lower the computational cost.

In this work, we introduce the Phonetic-Acoustic Speech Tokenizer (PAST), a novel end-to-end tokenizer that jointly captures both phonetic and acoustic representations without requiring external pretrained models or vocoders (see Figure [1](https://arxiv.org/html/2505.14470v2#S1.F1 "Figure 1 ‣ 1 Introduction ‣ PAST: Phonetic-Acoustic Speech Tokenizer")). In addition to the standard model, we present a variant of PAST designed for streaming applications, which operates causally by relying only on past information. PAST achieves this by leveraging supervised auxiliary tasks, such as phoneme classification and automatic speech recognition, to incorporate phonetic information directly into the quantization process. This approach allows PAST to outperform existing methods in phonetic and acoustic benchmarks while maintaining a simpler pipeline.

Our contributions are as follows:

1. We propose a novel approach for jointly learning phonetic and acoustic representations using supervised data, eliminating the need for pretrained models or external vocoders.
2. We achieve superior performance compared to hybrid tokenizers across both phonetic and acoustic benchmarks, demonstrating the effectiveness of our approach.
3. We introduce a streaming-compatible variant of PAST that operates causally, relying only on previous context, making it suitable for real-time speech applications.
4. We open-source our implementation, including training and inference pipelines, in addition to model checkpoints.

![Image 1: Refer to caption](https://arxiv.org/html/2505.14470v2/x1.png)

Figure 1: Schematic of the PAST pipeline. The auxiliary heads use the output of the first vector quantization module as input.

2 Related Work
--------------

Approaches to tokenization can be roughly categorized into phonetic tokenizers, acoustic tokenizers, and hybrid tokenizers.

Acoustic tokenizers aim to compress speech into discrete representations optimized for high-fidelity reconstruction. EnCodec [[7](https://arxiv.org/html/2505.14470v2#bib.bib7)] and SoundStream [[8](https://arxiv.org/html/2505.14470v2#bib.bib8)] set the foundation for streamable high-fidelity audio compression by introducing Residual Vector Quantization (RVQ). Subsequent works focused on enhancing fidelity through (i) improved quantization [[17](https://arxiv.org/html/2505.14470v2#bib.bib17), [18](https://arxiv.org/html/2505.14470v2#bib.bib18)], (ii) improved latent representation structure [[19](https://arxiv.org/html/2505.14470v2#bib.bib19)], (iii) increased training stability [[20](https://arxiv.org/html/2505.14470v2#bib.bib20)], or (iv) spectral representations [[21](https://arxiv.org/html/2505.14470v2#bib.bib21), [22](https://arxiv.org/html/2505.14470v2#bib.bib22)].

Phonetic tokenizers, on the other hand, quantize latent representations obtained from speech encoder models such as HuBERT [[10](https://arxiv.org/html/2505.14470v2#bib.bib10)], wav2vec 2.0 [[9](https://arxiv.org/html/2505.14470v2#bib.bib9)], or the Whisper encoder [[23](https://arxiv.org/html/2505.14470v2#bib.bib23), [24](https://arxiv.org/html/2505.14470v2#bib.bib24)]. These encoders are trained to maximize mutual information across latent sequences of vectors, without explicit reconstruction constraints. Such models usually operate over continuous latent spaces and hence require a discretization method, such as k-means or learned clustering [[25](https://arxiv.org/html/2505.14470v2#bib.bib25), [26](https://arxiv.org/html/2505.14470v2#bib.bib26)], to turn continuous embeddings into discrete tokens, enabling downstream modeling. Many studies have demonstrated that these tokens effectively capture phonetic information [[1](https://arxiv.org/html/2505.14470v2#bib.bib1), [4](https://arxiv.org/html/2505.14470v2#bib.bib4), [2](https://arxiv.org/html/2505.14470v2#bib.bib2), [6](https://arxiv.org/html/2505.14470v2#bib.bib6)]. They exhibit strong correlations with phonemes [[11](https://arxiv.org/html/2505.14470v2#bib.bib11)], and their representations have been successfully used to improve performance in speech applications [[27](https://arxiv.org/html/2505.14470v2#bib.bib27)] and to train Speech Language Models (SLMs), with a vocoder component employed to synthesize speech from generated tokens.

Hybrid tokenizers aim to merge phonetic and acoustic tokens. The common approach nowadays is to leverage a pretrained phonetic teacher model as an auxiliary objective for distillation guidance. These approaches aim to jointly capture phonetic information while maintaining signal reconstruction capabilities. SpeechTokenizer [[13](https://arxiv.org/html/2505.14470v2#bib.bib13)] applies phonetic distillation to the first RVQ codebook, treating its indices as phonetic tokens while later codebooks encode acoustic details. This hierarchical approach enhances phonetic representation while maintaining signal quality. X-Codec [[14](https://arxiv.org/html/2505.14470v2#bib.bib14)] utilizes a pretrained SSL model during both training and inference: it concatenates a low-rank projection of the SSL latent representation to the acoustic encoder output, in addition to applying an auxiliary distillation objective over the post-quantization latent. Mimi, the tokenizer used by Moshi [[15](https://arxiv.org/html/2505.14470v2#bib.bib15)], is designed to condense phonetic information into a single quantized stream, which is then combined with the encoded acoustic quantized latent representation.

Despite their strong performance, approaches relying on pseudo-labels from SSL models have several limitations. First, by relying on pseudo-supervision instead of supervised data, these approaches fail to fully utilize domain knowledge and may not align with explicit phonetic structures. Second, these models require vast amounts of unlabeled speech data, making their training computationally expensive and less feasible for low-resource languages. Finally, the inherent distillation process may encode cross-vector properties that are not directly relevant to phonetic representation, leading to redundancy and inefficiencies in the learned token space.

3 Method
--------

### 3.1 Problem Setup

Our model is composed of three main components: Encoder, Quantizer, and Decoder. Given a waveform signal $\bm{x}\in\mathbb{R}^{f_s\cdot t}$ of duration $t$ [sec], sampled at $f_s$ [Hz], the encoder transforms $\bm{x}$ into a dense latent representation $\bm{z}\in\mathbb{R}^{D\times T}$. Here, $T=f_r\cdot t$ denotes the temporal resolution of the latent space, determined by the frame rate $f_r$, and $D$ represents the latent dimension. The Quantizer module then processes $\bm{z}$, producing a quantized latent representation $\hat{\bm{z}}\in\mathbb{R}^{D\times T}$. Finally, the Decoder reconstructs the original signal, yielding $\hat{\bm{x}}\in\mathbb{R}^{f_s\cdot t}$. To encourage the capture of phonetic content in the encoded latent representation, we use paired phoneme and character-level transcription supervision with a set of auxiliary losses, as described in subsection [3.3](https://arxiv.org/html/2505.14470v2#S3.SS3 "3.3 Auxiliary Heads ‣ 3 Method ‣ PAST: Phonetic-Acoustic Speech Tokenizer"). Our goal is then to (i) minimize the reconstruction error between $\bm{x}$ and $\hat{\bm{x}}$, and (ii) ensure that the encoded latent representation $\bm{z}$ captures meaningful phonetic information.

### 3.2 Model Architecture

Our model is built on top of EnCodec [[7](https://arxiv.org/html/2505.14470v2#bib.bib7)], with the addition of a transformer encoder prior to the quantization module, as depicted in Figure [1](https://arxiv.org/html/2505.14470v2#S1.F1 "Figure 1 ‣ 1 Introduction ‣ PAST: Phonetic-Acoustic Speech Tokenizer"). The Encoder block comprises a convolutional encoder module, identical to EnCodec's encoder, followed by a transformer encoder module. Experiments with and without the transformer encoder module are presented in Table [4](https://arxiv.org/html/2505.14470v2#S5.T4 "Table 4 ‣ 5.3 Component Analysis ‣ 5 Results ‣ PAST: Phonetic-Acoustic Speech Tokenizer") and discussed in Section [5.3](https://arxiv.org/html/2505.14470v2#S5.SS3 "5.3 Component Analysis ‣ 5 Results ‣ PAST: Phonetic-Acoustic Speech Tokenizer").

To enhance training stability, during training the input to the quantization module is chosen from one of three modes of operation: (i) the output of the transformer block, with probability $p_{\text{trns.-only}}$; (ii) the output of the encoder (skip connection), with probability $p_{\text{skip-only}}$; or (iii) the average of (i) and (ii). During inference, the model uses the averaged representation only. For the quantization module we employ Residual Vector Quantization. The RVQ component contains $N_q$ sequential vector quantization (VQ) layers, which iteratively quantize $\bm{z}$ along with its residuals. Specifically, given $\bm{z}\in\mathbb{R}^{D\times T}$, for every $\tau\in[T]$ the first VQ module replaces $\bm{z}_\tau$ with the closest entry in a learned embedding table, yielding $\hat{\bm{z}}_1$. The process is then repeated in the next VQ layers over the residual, i.e., for $i\in\{2,\dots,N_q\}$: $\text{VQ}_i\big(\bm{z}-\sum_{j\in[i-1]}\hat{\bm{z}}_j\big)=\hat{\bm{z}}_i$. The RVQ module outputs $N_q$ quantized streams, which can be represented either as the quantized vectors $\hat{\bm{z}}_i$ or as their corresponding indices $\bm{q}_i\in\mathbb{N}^T$ in the embedding table of each $\text{VQ}_i$. The Decoder component mirrors the convolutional encoder module, replacing the strided convolutions with transposed convolution layers. The input to the decoder is $\hat{\bm{z}}=\sum_{i\in[N_q]}\hat{\bm{z}}_i$.

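As a concrete illustration, the residual quantization step above can be sketched in a few lines of numpy. This is not the PAST implementation: the codebooks here are random stand-ins (each given a zero entry so that an extra stage can never increase the residual error), whereas real codecs learn them with EMA updates and straight-through gradients.

```python
import numpy as np

def rvq_quantize(z, codebooks):
    """Residual Vector Quantization: stage i quantizes the residual left by
    stages 1..i-1 against its own codebook (nearest neighbor per time step)."""
    residual = z.copy()
    z_hat = np.zeros_like(z)
    indices = []
    for C in codebooks:
        dists = ((residual.T[:, None, :] - C[None, :, :]) ** 2).sum(-1)  # (T, K)
        q = dists.argmin(axis=1)                                         # (T,)
        z_q = C[q].T                                                     # (D, T)
        indices.append(q)
        z_hat += z_q
        residual -= z_q  # the next stage only sees what is still unexplained
    return np.stack(indices), z_hat  # (Nq, T) token ids, (D, T) sum of stages

rng = np.random.default_rng(0)
D, T, Nq, K = 8, 16, 4, 32
z = rng.normal(size=(D, T))
codebooks = [rng.normal(size=(K, D)) for _ in range(Nq)]
for C in codebooks:
    C[0] = 0.0  # a zero entry guarantees an extra stage never hurts
q, z_hat = rvq_quantize(z, codebooks)
err_1 = np.linalg.norm(z - codebooks[0][q[0]].T)  # first codebook only
err_all = np.linalg.norm(z - z_hat)               # all residual stages
```

Decoding mirrors this: look up each $\bm{q}_i$ in its table and sum, which is exactly the decoder input $\hat{\bm{z}}=\sum_i\hat{\bm{z}}_i$.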
Streamable configuration. We introduce a streamable variant of our model with three key modifications: (i) causal convolutions utilizing left-only padding, following [[7](https://arxiv.org/html/2505.14470v2#bib.bib7)]; (ii) a unidirectional LSTM; and (iii) causal attention. This setup requires a 20 ms look-ahead window over the audio signal.

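The left-only padding idea can be illustrated with a toy 1-D convolution (a sketch, not the actual EnCodec/PAST layer): the output at step t is a function of inputs up to t only, which is what makes the encoder usable in a streaming setting.

```python
import numpy as np

def causal_conv1d(x, kernel):
    """1-D convolution with left-only (causal) padding: y[t] depends on
    x[t-k+1..t] only, never on future samples."""
    k = len(kernel)
    padded = np.concatenate([np.zeros(k - 1), x])  # pad the past with zeros
    return np.array([padded[t:t + k] @ kernel[::-1] for t in range(len(x))])

x = np.array([1.0, 2.0, 3.0, 4.0])
kernel = np.array([0.5, 0.25, 0.25])  # kernel[0] weights the current sample
y = causal_conv1d(x, kernel)
```

Changing a future sample leaves all earlier outputs untouched, unlike a same-padded convolution whose receptive field is centered.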
### 3.3 Auxiliary Heads

To encourage the encoding of phonetic information, we incorporate auxiliary heads and training objectives that operate over the first quantized output stream $\hat{\bm{z}}_1$. This approach aims to replace the distillation of pseudo-phonetic tokens with direct supervision from target character transcriptions and phonemes.

CTC character match. The CTC auxiliary head takes $\hat{\bm{z}}_1\in\mathbb{R}^{D\times T}$ as input and outputs a distribution over the set of all characters $M$ at each entry, $\bm{y}\in\mathbb{R}^{|M|\times T}$. The module is constructed from a linear projection from $D$ to a hidden dimension $h$, followed by a single-layer BiLSTM and a linear projection from $h$ to $|M|$. A connectionist temporal classification (CTC) [[28](https://arxiv.org/html/2505.14470v2#bib.bib28)] loss, denoted $\mathcal{L}_{\text{ctc}}=\text{CTC}(\bm{y}\mid\text{chars})$, is then applied to align the predicted sequence with the transcription target.

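For intuition about what CTC optimizes: it scores a transcription by summing, via dynamic programming, the probabilities of all frame-level alignments that collapse to it, where "collapse" means merging consecutive repeats and then removing blanks. A minimal collapse function (illustrative only, not part of the loss computation):

```python
def ctc_collapse(frame_labels, blank="-"):
    """Map one frame-level alignment to its transcript: merge consecutive
    repeats, then drop blanks (a blank keeps genuine doubles like 'll' apart)."""
    out, prev = [], None
    for c in frame_labels:
        if c != prev and c != blank:
            out.append(c)
        prev = c
    return "".join(out)

# two different alignments that collapse to the same transcript
a1 = ctc_collapse("hh-e-ll-l-oo")
a2 = ctc_collapse("hhe-ll-l-o")
```

Because many alignments map to one transcript, the head never needs frame-exact character labels, only the character sequence.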
Phoneme classification. The second auxiliary head is a simple linear projection that takes $\hat{\bm{z}}_1$ as input and outputs a distribution over the set of all phonemes $P$ at each entry, $\hat{\bm{p}}\in\mathbb{R}^{|P|\times T}$. This head is trained with a cross-entropy objective, denoted $\mathcal{L}_{\text{phn}}=\text{CE}(\hat{\bm{p}},\bm{p})$.

### 3.4 Training Objective

We use the reconstruction training objective as defined in the EnCodec [[7](https://arxiv.org/html/2505.14470v2#bib.bib7)] paper, with the addition of two auxiliary terms. EnCodec contains multiple training objectives, combined in a weighted fashion to optimize signal reconstruction. As we do not change the recommended weighting of these objectives, we refer to the overall EnCodec objective as $\mathcal{L}_{\text{EnCodec}}$ and refrain from restating all of its components here. For further details, please refer to [[7](https://arxiv.org/html/2505.14470v2#bib.bib7)]. The overall training objective minimizes the following:

$$\mathcal{L}=\lambda_{\text{ctc}}\,\mathcal{L}_{\text{ctc}}+\lambda_{\text{phn}}\,\mathcal{L}_{\text{phn}}+\mathcal{L}_{\text{EnCodec}}\qquad(1)$$

where $\lambda_{\text{ctc}}$ and $\lambda_{\text{phn}}$ control the weights of the CTC loss $\mathcal{L}_{\text{ctc}}$ and the phoneme loss $\mathcal{L}_{\text{phn}}$, respectively.

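Numerically, Eq. (1) is a plain weighted sum. A minimal sketch, plugging in the λ values reported later in Section 4.2 (the individual loss values below are made up for illustration):

```python
def total_loss(l_ctc, l_phn, l_encodec, lam_ctc=12.0, lam_phn=5.0):
    """Eq. (1): weighted auxiliary terms added to the unchanged EnCodec
    objective. Default weights follow Section 4.2."""
    return lam_ctc * l_ctc + lam_phn * l_phn + l_encodec

loss = total_loss(l_ctc=0.5, l_phn=0.2, l_encodec=3.0)  # 6.0 + 1.0 + 3.0
```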
Table 1: Comparison based on phonetic information.

4 Experimental Setup
--------------------

### 4.1 Data

We use all training subsets of LibriSpeech [[29](https://arxiv.org/html/2505.14470v2#bib.bib29)] and TIMIT [[30](https://arxiv.org/html/2505.14470v2#bib.bib30)] for our training set, yielding a total of 965 hours of raw audio. To improve training efficiency and avoid redundant padding, we sample 3-second audio segments from each data sample. This segmentation is done purely for efficiency and is not an inherent part of our approach. To achieve this, we obtain character-level alignments for the paired text transcriptions using a pretrained Wav2Vec2 [[9](https://arxiv.org/html/2505.14470v2#bib.bib9)] model. For each sampled audio segment, Wav2Vec2 outputs a character distribution per temporal entry, from which we select the most probable alignment based on log-likelihood. For phoneme classification, we utilize the phonetic transcriptions provided by the TIMIT dataset. We sample instances in a 9:1 ratio (LS:T), resulting in 10% of batch samples having phoneme supervision.

### 4.2 Model Configuration

The encoder and decoder architectures follow the configuration from EnCodec [[7](https://arxiv.org/html/2505.14470v2#bib.bib7)], with temporal downscaling ratios $[8,5,4,2]$, resulting in a frame rate of 50 Hz. The latent dimensionality $D$ is set to 128. The transformer module includes 8 layers with a hidden size of 768, 16 attention heads, and a feed-forward size of 2048. During training, the sampling probabilities for the transformer skip connection are $p_{\text{trns.-only}}=0.3$ and $p_{\text{skip-only}}=0.1$. The transformer processes up to 150 frames (3 seconds); longer sequences are divided into chunks with a 1-second overlap, averaging the results over the overlapping regions. The auxiliary loss weights are $\lambda_{\text{ctc}}=12$ and $\lambda_{\text{phn}}=5$, while the reconstruction loss weights follow the configuration defined in [[7](https://arxiv.org/html/2505.14470v2#bib.bib7)]. The RVQ contains 8 codebooks, each with 1024 entries. The hidden dimension of the CTC auxiliary head is $h=512$. The model has 185 M parameters, with 68 M allocated to the transformer and 4 M to the auxiliary heads.

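As a quick sanity check on the frame rate: the downscaling ratios multiply to a total stride of 320 samples per latent frame, which yields 50 Hz under a 16 kHz sample rate (the sample rate is an assumption here, standard for LibriSpeech; the paper states the ratios and the resulting rate):

```python
import math

ratios = [8, 5, 4, 2]           # strides of the convolutional stages
stride = math.prod(ratios)       # audio samples per latent frame
fs = 16_000                      # assumed sample rate (standard for LibriSpeech)
frame_rate = fs / stride         # latent frames per second
tokens_per_second = frame_rate * 8  # 8 RVQ codebooks -> 8 parallel token streams
```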
### 4.3 Training Configuration

Training is performed on two NVIDIA A100 GPUs with a batch size of 80 for a total of 400,000 steps, using the ADAM optimizer with betas $[0.5, 0.9]$ and no weight decay. The learning rate is managed by a cosine decay scheduler, starting at $3\cdot 10^{-4}$ and decreasing gradually to zero, with a warm-up phase of 4,000 steps.

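The schedule can be sketched as follows, as a generic cosine-with-warm-up curve matching the stated hyperparameters (the exact implementation in the training code may differ in details):

```python
import math

def cosine_lr(step, total_steps=400_000, warmup=4_000, peak=3e-4):
    """Linear warm-up to `peak`, then cosine decay down to zero."""
    if step < warmup:
        return peak * step / warmup
    progress = (step - warmup) / (total_steps - warmup)  # 0 -> 1 after warm-up
    return peak * 0.5 * (1.0 + math.cos(math.pi * progress))
```

The rate peaks at step 4,000, passes half its peak midway through the decay, and reaches zero at the final step.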
### 4.4 Evaluation Metrics

#### 4.4.1 Signal Reconstruction Quality Metrics

The following metrics are computed on the LibriSpeech test-clean set.

Virtual Speech Quality Objective Listener (ViSQOL) [[31](https://arxiv.org/html/2505.14470v2#bib.bib31)] evaluates reconstruction quality by comparing the spectral and temporal features of the reconstructed signal to the source signal. It produces an approximation of the Mean Opinion Score (MOS).

Scale-Invariant Signal-to-Noise Ratio (SISNR) measures the similarity between the original and reconstructed signals by quantifying the ratio of target signal energy to residual noise.

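A from-scratch SISNR sketch: signals are zero-meaned and the estimate is projected onto the reference, so a pure gain change scores near-perfectly (exact conventions vary slightly between toolkits):

```python
import numpy as np

def si_snr(ref, est, eps=1e-8):
    """Scale-invariant SNR in dB: project the estimate onto the reference to
    factor out gain, then compare target energy to residual-noise energy."""
    ref = ref - ref.mean()
    est = est - est.mean()
    s_target = (est @ ref) / (ref @ ref + eps) * ref  # optimal-scale projection
    e_noise = est - s_target
    return 10 * np.log10((s_target @ s_target + eps) / (e_noise @ e_noise + eps))

rng = np.random.default_rng(0)
x = rng.normal(size=16_000)
clean = si_snr(x, 2.0 * x)                            # pure gain: near-perfect
noisy = si_snr(x, x + 0.1 * rng.normal(size=16_000))  # ~20 dB of additive noise
```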
PESQ (Perceptual Evaluation of Speech Quality) [[32](https://arxiv.org/html/2505.14470v2#bib.bib32)] assesses the perceptual degradation of reconstructed signals, following the ITU-T P.862.2 wideband recommendation.

#### 4.4.2 Phonetic Information Evaluation

Phone-Normalized Mutual Information (PNMI) [[10](https://arxiv.org/html/2505.14470v2#bib.bib10)] quantifies the percentage of uncertainty about a given phone label $Y$ eliminated after observing a token $X$. It measures the mutual information $I(X;Y)$, normalized by the entropy of $Y$. Higher values correspond to better phonetic encoding. In our evaluation, PNMI is computed on the tokenizer's first RVQ codebook token $\bm{q}_1$ using the TIMIT test set.

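PNMI can be computed directly from token/phone co-occurrence counts. A minimal numpy sketch (frame-aligned labels are assumed, glossing over the alignment details of the actual evaluation):

```python
import numpy as np

def pnmi(tokens, phones, n_tokens, n_phones):
    """Phone-normalized mutual information I(X;Y)/H(Y) from co-occurrence
    counts of token X and phone label Y."""
    joint = np.zeros((n_tokens, n_phones))
    for x, y in zip(tokens, phones):
        joint[x, y] += 1
    p = joint / joint.sum()                 # joint distribution p(x, y)
    px = p.sum(axis=1, keepdims=True)       # marginal p(x)
    py = p.sum(axis=0, keepdims=True)       # marginal p(y)
    nz = p > 0
    mi = (p[nz] * np.log(p[nz] / (px @ py)[nz])).sum()
    h_y = -(py[py > 0] * np.log(py[py > 0])).sum()
    return mi / h_y

# toy check: tokens that copy the phone labels give PNMI = 1,
# a constant token stream carries no phone information (PNMI = 0)
phones = np.array([0, 0, 1, 1, 2, 2, 3, 3])
perfect = pnmi(phones, phones, 4, 4)
constant = pnmi(np.zeros(8, dtype=int), phones, 4, 4)
```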
ABX metric [[33](https://arxiv.org/html/2505.14470v2#bib.bib33)] measures the model's ability to discriminate phonetic contrasts by comparing triplets of sounds $A$, $B$, and $X$. Lower ABX error rates reflect better phonetic preservation. We compute ABX on the reconstructed representation after the RVQ, evaluating both within-speaker and across-speaker contexts. This evaluates how well the tokenizer retains phonetic distinctions through the quantization process.

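The core ABX decision can be sketched as follows, in a simplified Euclidean version with made-up 2-D embeddings (the standard evaluation uses DTW-aligned distances over representation sequences):

```python
import numpy as np

def abx_error(A, B, X):
    """One ABX trial per row: A and X share a phone category, B does not.
    An error is counted when X sits closer to B than to A (Euclidean)."""
    d_ax = np.linalg.norm(A - X, axis=1)
    d_bx = np.linalg.norm(B - X, axis=1)
    return float((d_bx <= d_ax).mean())

rng = np.random.default_rng(0)
# hypothetical embeddings: category centroids far apart, small within-class noise
cat1, cat2 = np.array([5.0, 0.0]), np.array([0.0, 5.0])
A = cat1 + 0.1 * rng.normal(size=(200, 2))
B = cat2 + 0.1 * rng.normal(size=(200, 2))
X = cat1 + 0.1 * rng.normal(size=(200, 2))
err = abx_error(A, B, X)  # near zero when categories are well separated
```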
Word Error Rate (WER) measures the accuracy of generated transcriptions with respect to a reference transcription, where lower values indicate better performance. WER is computed using the DASB benchmark [[34](https://arxiv.org/html/2505.14470v2#bib.bib34)] on the discrete tokens from all codebooks $\bm{q}$. Training and validation were performed on the LibriSpeech train-clean-100 and dev-clean subsets, while testing was conducted on the test-clean and test-other subsets.

Table 2: Signal reconstruction evaluation.

#### 4.4.3 Speech Language Modeling Evaluation

To compare the observed methods, we train an identical backbone language model (LM) for each of the observed tokenizers. We leverage the base architecture of the AudioGen model [[35](https://arxiv.org/html/2505.14470v2#bib.bib35)] using the delay pattern suggested in MusicGen [[36](https://arxiv.org/html/2505.14470v2#bib.bib36)]. We use a 300 M parameter model configuration and train on 10-second audio segments for 300 k update steps with a batch size of 256, using all train subsets of LibriSpeech without any additional conditioning, text, or prompts. Equipped with this LM, we measure the SWUGGY metric [[37](https://arxiv.org/html/2505.14470v2#bib.bib37)]. This metric evaluates the model's ability to assign higher likelihood to valid words over pseudo-words, testing both Inter (within-vocabulary) and OOV (out-of-vocabulary) categories. The probabilities are derived solely from the distribution of the first codebook, as it encapsulates the majority of the phonetic information in the output.

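The delay pattern referenced above offsets codebook k by k frames, so at each step the model predicts stream k conditioned on the earlier streams' past tokens. A small sketch (the pad value and function name are illustrative, not the AudioCraft interface):

```python
import numpy as np

def apply_delay_pattern(codes, pad=-1):
    """Shift codebook k right by k steps (MusicGen-style delay pattern), so a
    single autoregressive step never predicts all streams of the same frame."""
    n_q, T = codes.shape
    out = np.full((n_q, T + n_q - 1), pad)  # padded to hold the longest shift
    for k in range(n_q):
        out[k, k:k + T] = codes[k]
    return out

codes = np.arange(12).reshape(4, 3)  # 4 codebooks, 3 frames of dummy tokens
delayed = apply_delay_pattern(codes)
```

Inverting the pattern (shifting each row back left) recovers the original frame-aligned streams for decoding.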
5 Results
---------

### 5.1 Baseline Comparison

We compare PAST with two baseline hybrid models, SpeechTokenizer and X-Codec, on both reconstruction and phonetic information metrics. We discarded Mimi [[15](https://arxiv.org/html/2505.14470v2#bib.bib15)] from this comparison, as our reproduced results using the published model were far from on par with the observed baselines. We also compare PAST with two topline models: k-means over HuBERT for phonetic representation, and EnCodec for acoustic reconstruction, the latter serving as an acoustic topline since its objective is purely signal fidelity without phonetic supervision. First, we evaluate the observed methods over a set of phonetic metrics. The results depicted in Table [1](https://arxiv.org/html/2505.14470v2#S3.T1 "Table 1 ‣ 3.4 Training Objective ‣ 3 Method ‣ PAST: Phonetic-Acoustic Speech Tokenizer") suggest that PAST outperforms the observed baselines across all observed metrics, even surpassing the topline on several key aspects. These results demonstrate that direct supervision effectively captures phonetic information, eliminating the need for distillation from pretrained SSL models.

Next, we evaluate PAST's signal generation capabilities. Table [2](https://arxiv.org/html/2505.14470v2#S4.T2 "Table 2 ‣ 4.4.2 Phonetic Information Evaluation ‣ 4.4 Evaluation Metrics ‣ 4 Experimental Setup ‣ PAST: Phonetic-Acoustic Speech Tokenizer") presents the results of the reconstruction quality evaluation. PAST significantly surpasses the observed baselines, further showcasing its ability to balance phonetic richness with acoustic quality. We speculate that X-Codec's notably low SISNR is due to the absence of a point-wise metric loss objective over the reconstructed waveform, leading to misalignment in signal reconstruction. Moreover, Tables [1](https://arxiv.org/html/2505.14470v2#S3.T1 "Table 1 ‣ 3.4 Training Objective ‣ 3 Method ‣ PAST: Phonetic-Acoustic Speech Tokenizer") and [2](https://arxiv.org/html/2505.14470v2#S4.T2 "Table 2 ‣ 4.4.2 Phonetic Information Evaluation ‣ 4.4 Evaluation Metrics ‣ 4 Experimental Setup ‣ PAST: Phonetic-Acoustic Speech Tokenizer") highlight PAST's superiority while utilizing a causal modeling configuration, which allows for streaming capabilities.

Table 3: Evaluation of SLM performance across tokenizers.

### 5.2 Speech Language Modeling

The following experiment evaluates the observed methods as speech tokenizers: we train an identical LM, as specified in subsection [4.4.3](https://arxiv.org/html/2505.14470v2#S4.SS4.SSS3 "4.4.3 Speech Language Modeling Evaluation ‣ 4.4 Evaluation Metrics ‣ 4 Experimental Setup ‣ PAST: Phonetic-Acoustic Speech Tokenizer"), using each method to tokenize the audio signals into discrete token sequences. The results summarized in Table [3](https://arxiv.org/html/2505.14470v2#S5.T3 "Table 3 ‣ 5.1 Baseline Comparison ‣ 5 Results ‣ PAST: Phonetic-Acoustic Speech Tokenizer") show that PAST notably surpasses all other observed models, further emphasizing the advantages of our data-driven approach.

142
### 5.3 Component Analysis

Table [4](https://arxiv.org/html/2505.14470v2#S5.T4 "Table 4 ‣ 5.3 Component Analysis ‣ 5 Results ‣ PAST: Phonetic-Acoustic Speech Tokenizer") highlights the impact of the different components that compose PAST and reveals two main trends. First, including the suggested auxiliary heads and their corresponding training objectives is crucial for encapsulating phonetic information in the learned latent space, with CTC being the more important of the two, as reflected in the ABX scores. Since the CTC objective marginalizes over all possible alignments between the latent representation and the character transcription, we hypothesize that it encourages the latent space to encode phonetic information in a structured manner across time steps. Second, using both auxiliary objectives without the transformer module already makes a significant difference across all metrics; including the transformer further improves sequence modeling and signal reconstruction, as reflected in the ABX and SISNR trends.

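To make the CTC intuition concrete, the following textbook sketch of the CTC forward (alpha) recursion, after Graves et al. [28], computes the exact negative log-likelihood by summing over all monotonic alignments of latent frames to a transcription. Names are ours; PAST's actual auxiliary head is trained with this loss via standard library implementations:

```python
import numpy as np

def ctc_neg_log_likelihood(log_probs, targets, blank=0):
    """Exact CTC negative log-likelihood via the forward recursion.

    log_probs : (T, V) array of per-frame log-probabilities.
    targets   : non-empty list of label ids, blanks excluded.
    """
    # Interleave blanks around every label: [b, l1, b, l2, b, ...].
    ext = [blank]
    for lab in targets:
        ext.extend([lab, blank])
    T, S = log_probs.shape[0], len(ext)
    alpha = np.full((T, S), -np.inf)
    alpha[0, 0] = log_probs[0, ext[0]]
    alpha[0, 1] = log_probs[0, ext[1]]
    for t in range(1, T):
        for s in range(S):
            cands = [alpha[t - 1, s]]            # stay in the same state
            if s > 0:
                cands.append(alpha[t - 1, s - 1])  # advance one state
            # Skip a blank only between two distinct labels.
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                cands.append(alpha[t - 1, s - 2])
            alpha[t, s] = np.logaddexp.reduce(cands) + log_probs[t, ext[s]]
    # Valid final states: trailing blank or last label.
    return -np.logaddexp(alpha[-1, -1], alpha[-1, -2])
```

Because the sum runs over every alignment, the gradient spreads supervision across all frames rather than pinning each label to a single time step.
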
In practice, naively cascading the transformer module after the encoder leads to vanishing gradients and divergence during training. Table [5](https://arxiv.org/html/2505.14470v2#S5.T5 "Table 5 ‣ 5.3 Component Analysis ‣ 5 Results ‣ PAST: Phonetic-Acoustic Speech Tokenizer") highlights the importance of the suggested skip-connection dropout during training. By design, we use small auxiliary heads with limited sequence-modeling capacity, to ensure that these modeling capabilities are captured at the latent-representation level; our motivation for adding the transformer component is to increase the model's sequence-modeling capacity. Without dropout on the skip connection, however, the model effectively bypasses the transformer encoder, performing similarly to the configuration in which the transformer is omitted entirely, as shown in Table [4](https://arxiv.org/html/2505.14470v2#S5.T4 "Table 4 ‣ 5.3 Component Analysis ‣ 5 Results ‣ PAST: Phonetic-Acoustic Speech Tokenizer"). Imposing the dropout constraint prevents this bypass and yields a notable empirical improvement.

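A minimal sketch of the skip-connection dropout described above (illustrative only; the function names and the residual-sum formulation are our assumptions, not the released implementation). During training, the encoder's direct path around the transformer is stochastically zeroed, so gradients must flow through the transformer instead of bypassing it:

```python
import numpy as np

rng = np.random.default_rng(0)

def transformer_with_skip_dropout(encoder_out, transformer,
                                  p_drop=0.5, training=True):
    """Residual connection around a transformer block, where the skip
    path is dropped with probability `p_drop` during training so the
    output must rely on the transformer path. At inference the skip is
    always kept."""
    out = transformer(encoder_out)
    if training and rng.random() < p_drop:
        return out                    # skip path dropped this step
    return encoder_out + out          # standard residual connection
```

Setting `p_drop=0` recovers a plain residual connection, which, per the ablation, lets the model ignore the transformer entirely.
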
Table 4: Ablation study over model components.

Table 5: Impact of transformer skip-connection dropout.

6 Discussion
------------

We introduce PAST, a novel unified phonetic-acoustic speech tokenizer that integrates supervised phonetic information into the tokenization process while maintaining high-fidelity reconstruction. Unlike existing approaches that rely on SSL models and external vocoders, PAST directly incorporates phonetic supervision, producing a representation that is both semantically meaningful and acoustically precise. Our results demonstrate that PAST surpasses state-of-the-art tokenizers in phonetic representation, speech reconstruction, and speech language modeling. Notably, its supervised design eliminates the need for pretrained SSL models, and the proposed streamable variant extends PAST's applicability to real-time speech applications. Although PAST offers several benefits, its reliance on labeled phonetic data limits its scalability to multilingual settings; future work will focus on adapting PAST to such settings.

Acknowledgments. This research work was supported by ISF grant 2049/22.

References
----------

* [1] R. Algayres _et al._, "Generative spoken language model based on continuous word-sized audio tokens," in _Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics, 2023, pp. 3008–3028.
* [2] M.-J. Hwang _et al._, "Textless acoustic model with self-supervised distillation for noise-robust expressive speech-to-speech translation," _Findings of the Association for Computational Linguistics: ACL 2024_, 2024.
* [3] Z. Borsos _et al._, "Audiolm: a language modeling approach to audio generation," _IEEE/ACM Transactions on Audio, Speech, and Language Processing_, vol. 31, pp. 2523–2533, 2023.
* [4] E. Kharitonov _et al._, "Text-free prosody-aware generative spoken language modeling," _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pp. 8666–8681, 2022.
* [5] K. Lakhotia _et al._, "On generative spoken language modeling from raw audio," _Transactions of the Association for Computational Linguistics_, vol. 9, pp. 1336–1354, 2021.
* [6] M. Hassid _et al._, "Textually pretrained speech language models," _Advances in Neural Information Processing Systems_, vol. 36, 2024.
* [7] A. Défossez, J. Copet, G. Synnaeve, and Y. Adi, "High fidelity neural audio compression," _arXiv preprint: 2210.13438_, 2022.
* [8] N. Zeghidour, A. Luebs, A. Omran, J. Skoglund, and M. Tagliasacchi, "Soundstream: An end-to-end neural audio codec," _IEEE/ACM Transactions on Audio, Speech, and Language Processing_, vol. 30, pp. 495–507, 2021.
* [9] A. Baevski, Y. Zhou, A. Mohamed, and M. Auli, "wav2vec 2.0: A framework for self-supervised learning of speech representations," _Advances in Neural Information Processing Systems_, vol. 33, pp. 12449–12460, 2020.
* [10] W.-N. Hsu, B. Bolte, Y.-H. H. Tsai, K. Lakhotia, R. Salakhutdinov, and A. Mohamed, "Hubert: Self-supervised speech representation learning by masked prediction of hidden units," _IEEE/ACM Transactions on Audio, Speech, and Language Processing_, vol. 29, pp. 3451–3460, 2021.
* [11] A. Sicherman and Y. Adi, "Analysing discrete self supervised speech representation for spoken language modeling," in _ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 2023, pp. 1–5.
* [12] J. Kong, J. Kim, and J. Bae, "Hifi-gan: Generative adversarial networks for efficient and high fidelity speech synthesis," _Advances in Neural Information Processing Systems_, vol. 33, pp. 17022–17033, 2020.
* [13] X. Zhang, D. Zhang, S. Li, Y. Zhou, and X. Qiu, "Speechtokenizer: Unified speech tokenizer for speech language models," in _The Twelfth International Conference on Learning Representations_, 2024.
* [14] Z. Ye _et al._, "Codec does matter: Exploring the semantic shortcoming of codec for audio language model," _arXiv preprint: 2408.17175_, 2024.
* [15] A. Défossez, L. Mazaré, M. Orsini, A. Royer, P. Pérez, H. Jégou, E. Grave, and N. Zeghidour, "Moshi: a speech-text foundation model for real-time dialogue," _Technical report, Kyutai_, 2024.
* [16] S. Chen _et al._, "Wavlm: Large-scale self-supervised pre-training for full stack speech processing," _IEEE Journal of Selected Topics in Signal Processing_, vol. 16, no. 6, pp. 1505–1518, Oct. 2022.
* [17] D. Yang, S. Liu, R. Huang, J. Tian, C. Weng, and Y. Zou, "Hifi-codec: Group-residual vector quantization for high fidelity audio codec," _arXiv preprint: 2305.02765_, 2023.
* [18] A. Défossez, J. Copet, G. Synnaeve, and Y. Adi, "High fidelity neural audio compression," _Transactions on Machine Learning Research_, 2023.
* [19] J. Chen _et al._, "Pyramidcodec: Hierarchical codec for long-form music generation in audio domain," in _Findings of the Association for Computational Linguistics: EMNLP 2024_, 2024, pp. 4253–4263.
* [20] Y.-C. Wu _et al._, "Audiodec: An open-source streaming high-fidelity neural audio codec," in _ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 2023, pp. 1–5.
* [21] Z. Du, S. Zhang, K. Hu, and S. Zheng, "Funcodec: A fundamental, reproducible and integrable open-source toolkit for neural speech codec," in _ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 2024, pp. 591–595.
* [22] S. Ji _et al._, "Wavtokenizer: an efficient acoustic discrete codec tokenizer for audio language modeling," 2024.
* [23] A. Zeng, Z. Du, M. Liu, L. Zhang, S. Jiang, Y. Dong, and J. Tang, "Scaling speech-text pre-training with synthetic interleaved data," _arXiv preprint: 2411.17607_, 2024.
* [24] A. Radford, J. W. Kim, T. Xu, G. Brockman, C. McLeavey, and I. Sutskever, "Robust speech recognition via large-scale weak supervision," in _International Conference on Machine Learning_. PMLR, 2023, pp. 28492–28518.
* [25] A. Turetzky and Y. Adi, "Last: Language model aware speech tokenization," _arXiv preprint: 2409.03701_, 2024.
* [26] S. Messica and Y. Adi, "Nast: Noise aware speech tokenization for speech language models," _arXiv preprint: 2406.11037_, 2024.
* [27] O. Tal, M. Mandel, F. Kreuk, and Y. Adi, "A systematic comparison of phonetic aware techniques for speech enhancement," in _Interspeech 2022_, 2022, pp. 1193–1197.
* [28] A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber, "Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks," in _Proceedings of the 23rd International Conference on Machine Learning_, 2006, pp. 369–376.
* [29] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, "Librispeech: An ASR corpus based on public domain audio books," in _2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_, 2015, pp. 5206–5210.
* [30] J. Garofolo, L. Lamel, W. Fisher, J. Fiscus, D. Pallett, N. Dahlgren, and V. Zue, "Timit acoustic-phonetic continuous speech corpus," _Linguistic Data Consortium_, Nov. 1992.
* [31] M. Chinen, F. S. Lim, J. Skoglund, N. Gureev, F. O'Gorman, and A. Hines, "Visqol v3: An open source production ready objective speech and audio metric," in _2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX)_. IEEE, 2020, pp. 1–6.
* [32] A. Rix, J. Beerends, M. Hollier, and A. Hekstra, "Perceptual evaluation of speech quality (pesq) - a new method for speech quality assessment of telephone networks and codecs," in _2001 IEEE International Conference on Acoustics, Speech, and Signal Processing_, vol. 2, 2001, pp. 749–752.
* [33] T. Schatz _et al._, "Evaluating speech features with the minimal-pair abx task: analysis of the classical mfc/plp pipeline," in _Interspeech 2013_, 2013, pp. 1781–1785.
* [34] P. Mousavi, L. D. Libera, J. Duret, A. Ploujnikov, C. Subakan, and M. Ravanelli, "Dasb - discrete audio and speech benchmark," _arXiv preprint: 2406.14294_, 2024.
* [35] F. Kreuk, G. Synnaeve, A. Polyak, U. Singer, A. Défossez, J. Copet, D. Parikh, Y. Taigman, and Y. Adi, "Audiogen: Textually guided audio generation," _arXiv preprint: 2209.15352_, 2023.
* [36] J. Copet, F. Kreuk, I. Gat, T. Remez, D. Kant, G. Synnaeve, Y. Adi, and A. Défossez, "Simple and controllable music generation," _Advances in Neural Information Processing Systems_, vol. 36, 2024.
* [37] T. A. Nguyen _et al._, "The zero resource speech benchmark 2021: Metrics and baselines for unsupervised spoken language modeling," _Self-Supervised Learning for Speech and Audio Processing Workshop @ NeurIPS_, 2020.