Update README.md
README.md
CHANGED
---
library_name: transformers
tags:
- audio
- speech
- waveform
license: mit
datasets:
- agkphysics/AudioSet
metrics:
- accuracy
pipeline_tag: feature-extraction
---

# Model Card for WavJEPA-Nat

WavJEPA-Nat is a waveform-based version of the Joint-Embedding Predictive Architecture (JEPA). WavJEPA-Nat leverages high-level semantic representation learning to tackle the shortcomings of representation learning at the speech unit or token level. We show that this approach substantially outperforms state-of-the-art time-domain audio foundation models across a wide variety of downstream benchmark tasks, while requiring considerably fewer computational resources. Additionally, WavJEPA-Nat overcomes the performance drop that time-domain models typically exhibit in noisy and reverberant real-world acoustic environments.

## Model Details

The WavJEPA-Nat framework comprises a waveform encoder, a context encoder, a target encoder, and a predictor. WavJEPA-Nat's objective is to predict the latent representations of multiple target blocks from a single context block extracted from the same sound wave. As the waveform encoder, we use the feature encoder of Wav2Vec 2.0, which is composed of stacked temporal convolution layers (Baevski et al., 2020). As in the original I-JEPA architecture (Assran et al., 2023), a Vision Transformer (ViT) (Dosovitskiy et al., 2021) is used for the target encoder, context encoder, and predictor.

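For intuition, the snippet below is a minimal, schematic sketch of this prediction objective. The modules, dimensions, block selection, and loss are illustrative assumptions, not the released WavJEPA-Nat training code; in particular, the stand-in encoders replace the Wav2Vec 2.0 feature encoder and ViT blocks, and the real predictor is conditioned on the target-block positions.

~~~python
import copy
import torch
import torch.nn as nn

dim = 768

def encoder(num_layers: int) -> nn.Module:
    # Stand-in Transformer encoder (the actual model uses ViT-style blocks).
    layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=num_layers)

waveform_encoder = nn.Conv1d(1, dim, kernel_size=400, stride=160)  # ~25 ms frames, 10 ms hop at 16 kHz
context_encoder, predictor = encoder(2), encoder(1)
target_encoder = copy.deepcopy(context_encoder)                    # updated by EMA, not by gradients

wave = torch.randn(4, 1, 32000)                                    # batch of 2 s clips at 16 kHz
frames = waveform_encoder(wave).transpose(1, 2)                    # (batch, time, dim)

half = frames.shape[1] // 2
ctx, tgt = frames[:, :half], frames[:, half : 2 * half]            # toy context / target blocks

with torch.no_grad():
    targets = target_encoder(tgt)                                  # latent targets (no gradient)
pred = predictor(context_encoder(ctx))                             # predict target latents from the context
loss = nn.functional.mse_loss(pred, targets)

# Exponential moving average update of the target encoder with momentum tau.
tau = 0.999
with torch.no_grad():
    for p_t, p_c in zip(target_encoder.parameters(), context_encoder.parameters()):
        p_t.mul_(tau).add_((1 - tau) * p_c)
~~~
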
### Model Description

WavJEPA-Nat builds on WavJEPA, the first framework applying semantic learning to general-purpose audio representations in the time domain, which surpasses state-of-the-art time-domain approaches on the HEAR benchmark suite (Turian et al., 2022) while requiring only a fraction of the computational resources. WavJEPA-Nat is a multi-channel extension of the WavJEPA framework trained on simulated real-world sound scenes, addressing the performance drop that time-domain models exhibit in real-world acoustic environments. Evaluation on Nat-HEAR (Yuksel et al., 2025), a naturalistic version of the HEAR benchmark suite, demonstrates that WavJEPA-Nat exceeds the robustness of other time-domain foundation models to noise and reverberation.

- **Developed by:** Goksenin Yuksel, goksenin.yuksel@ru.nl
- **Model type:** Transformer-based audio foundation model operating on raw waveforms
- **Language(s) (NLP):** WavJEPA and WavJEPA-Nat are not language-specific; the training audio is predominantly English.
- **License:** MIT

### Model Sources

- **Repository:** https://github.com/labhamlet/wavjepa
- **Paper:** https://arxiv.org/abs/2509.23238

## Uses

WavJEPA-Nat can be used as a powerful feature extractor for downstream tasks such as environmental sound classification, speech recognition, and speaker counting, including under adverse acoustic conditions. Training a linear head on top of the extracted features yields a fine-tuned audio scene analysis model (see the linear-probe sketch after the quick-start example below).
## How to Get Started with the Model
~~~python
import torch
from transformers import AutoModel, AutoFeatureExtractor

model = AutoModel.from_pretrained("labhamlet/wavjepa-nat-base", trust_remote_code=True)
extractor = AutoFeatureExtractor.from_pretrained("labhamlet/wavjepa-nat-base", trust_remote_code=True)

# Dummy batch: 1 clip, 2 channels (binaural), 10 s at 16 kHz = 160,000 samples.
audio = torch.zeros([1, 2, 160000])
extracted = extractor(audio, return_tensors="pt")
audio_feature = extracted["input_values"]

# Frame-level representations extracted from the raw waveform.
print(model(audio_feature).shape)
~~~
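
Building on the quick-start snippet, the sketch below shows one way to train a linear head on top of frozen WavJEPA-Nat features, as mentioned in the Uses section. It reuses `model` and `extractor` from above and assumes, as the quick-start output suggests, that the model returns frame-level features with hidden size 768; the mean pooling, 50-class label set, and optimizer settings are illustrative assumptions.

~~~python
import torch
import torch.nn as nn

# Hypothetical linear probe on top of frozen WavJEPA-Nat features.
feature_dim, num_classes = 768, 50
probe = nn.Linear(feature_dim, num_classes)
optimizer = torch.optim.AdamW(probe.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def probe_step(waveforms: torch.Tensor, labels: torch.Tensor) -> float:
    features = extractor(waveforms, return_tensors="pt")["input_values"]
    with torch.no_grad():                 # keep the WavJEPA-Nat backbone frozen
        frames = model(features)          # assumed shape: (batch, time, feature_dim)
    clip_embedding = frames.mean(dim=1)   # mean-pool frames into one clip embedding
    loss = criterion(probe(clip_embedding), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call with dummy data (1 clip, 2 channels, 10 s at 16 kHz):
loss = probe_step(torch.zeros([1, 2, 160000]), torch.tensor([0]))
~~~
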
## Training Details
### Training Data

We train WavJEPA-Nat on the unbalanced training set of AudioSet, which consists of 1.74 million 10-second sound clips scraped from YouTube (Gemmeke et al., 2017), together with 70,000 simulated naturalistic scenes (corresponding to 70 Matterport3D houses). These naturalistic scenes were used to generate naturalistic versions of every clip in the unbalanced AudioSet training set. Specifically, during training each AudioSet clip was randomly paired with a background noise clip from the WHAMR! noise database (Maciejewski et al., 2020). WHAMR! noise clips longer than 10 s were trimmed to 10 s, and a linear 200 ms fade-in/fade-out was applied to every noise clip before the sound scene was mixed. To create a naturalistic sound scene, the AudioSet clip was then convolved with the room impulse response RIR(s, r, θ) of the simulated scene.

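The scene-generation recipe above can be sketched roughly as follows. This is a simplified, single-channel illustration; the actual pipeline uses simulated multi-channel RIRs from the Matterport3D scenes, and all signals below are placeholders.

~~~python
import torch
import torch.nn.functional as F

sr = 16_000
clip = torch.randn(10 * sr)               # placeholder AudioSet clip (10 s)
noise = torch.randn(12 * sr)[: 10 * sr]   # placeholder WHAMR! noise, trimmed to 10 s
rir = torch.randn(4_000)                  # placeholder room impulse response RIR(s, r, θ)

# 200 ms linear fade-in / fade-out on the noise clip before mixing.
fade = int(0.2 * sr)
ramp = torch.linspace(0.0, 1.0, fade)
noise[:fade] *= ramp
noise[-fade:] *= ramp.flip(0)

# Convolve the clip with the RIR (kernel flipped so conv1d performs a true convolution),
# then add the background noise to obtain the naturalistic sound scene.
reverberant = F.conv1d(clip.view(1, 1, -1), rir.flip(0).view(1, 1, -1), padding=rir.numel() - 1)
scene = reverberant.view(-1)[: clip.numel()] + noise
~~~
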
### Training Procedure

Each sound clip was resampled to 16 kHz and mean-centered to enforce equal loudness across sound clips. We then randomly sampled 8 sections of 2 s from each sound clip, effectively increasing the batch size by a factor of 8 in a computationally efficient manner. Finally, each instance is instance-normalized (Ulyanov et al., 2017). The waveform encoder converts each 2 s instance into an embedding w ∈ ℝ^(200×768), effectively resampling the audio to 100 Hz with a stride of 10 ms and a receptive field of 12.5 ms.

We sampled starting indices for the context block with p = 0.065 and for the target blocks with p = 0.025. We set M = 10 for both the context and target blocks. To update the target encoder parameters, we linearly increased the EMA momentum τ from τ_0 = 0.999 to τ_e = 0.99999 over the first 100,000 steps, after which τ was kept constant. We used K = 8 for the top-K averaging. We trained WavJEPA-Nat for 375,000 steps using a batch size of 16 on two NVIDIA H100 94 GB GPUs; with the in-batch sampling factor of 8, the effective batch size is 256. We use the AdamW optimizer (Loshchilov & Hutter, 2019) with a weight decay coefficient λ_w = 0.04. The learning rate schedule follows a cosine decay with a linear warm-up over 100,000 steps, reaching a peak learning rate of 2 × 10⁻⁴ before decaying to zero.
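
For concreteness, a small sketch of the two schedules described above, reconstructed from the numbers in this section (not taken from the training code):

~~~python
import math

total_steps, warmup_steps = 375_000, 100_000
peak_lr, tau_0, tau_e = 2e-4, 0.999, 0.99999

def learning_rate(step: int) -> float:
    # Linear warm-up to the peak learning rate, then cosine decay to zero.
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

def ema_momentum(step: int) -> float:
    # Linear increase of tau over the first 100,000 steps, then constant.
    return tau_0 + (tau_e - tau_0) * min(step, warmup_steps) / warmup_steps
~~~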

#### Preprocessing

RMS normalization was applied to the audio clips to bring them to the same loudness level, after which instance normalization was applied.
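
A minimal sketch of this preprocessing; the target RMS value is an illustrative assumption.

~~~python
import torch

def preprocess(wave: torch.Tensor, target_rms: float = 0.05, eps: float = 1e-8) -> torch.Tensor:
    # RMS-normalize to a common loudness level, then instance-normalize
    # (zero mean, unit variance per clip).
    wave = wave * target_rms / (wave.pow(2).mean().sqrt() + eps)
    return (wave - wave.mean()) / (wave.std() + eps)
~~~
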
#### Training Hyperparameters

- **Training regime:** WavJEPA-Nat was trained with mixed precision, torch.compile, and FlashAttention.

## Evaluation

We evaluate WavJEPA-Nat and other state-of-the-art models on the HEAR task suite, which presents a wide range of tasks for evaluating the downstream performance of audio representation models (Turian et al., 2022).

### Testing Data, Factors & Metrics
#### Testing Data

**HEAR**: The aim of the HEAR benchmark is to develop a general-purpose audio representation that provides a strong basis for learning in a wide variety of tasks and scenarios. HEAR evaluates audio representations using a benchmark suite spanning a variety of domains, including speech, environmental sound, and music. HEAR was launched as a NeurIPS 2021 shared challenge; it remains an open question whether a single general-purpose audio representation can perform as holistically as the human ear.

### Results

**HEAR**

| Model | Size | DCASE | FSD50K | LC | ESC-50 | CD | VL | SC-5 | NS | BO | Mri-S | Mri-T | s(m) |
|-------|------|-------|--------|----|--------|----|----|------|----|----|-------|-------|------|
| **Baseline** | | | | | | | | | | | | | |
| HEAR-Naive | N/A | 7.6 | 12.5 | 40.3 ± 1.2 | 27.4 ± 3.3 | 36.7 ± 2.5 | 16.0 ± 3.4 | 13.3 | 89.2 | 97.1 ± 3.2 | 94.2 ± 1.1 | 93.7 ± 0.3 | 0.0 |
| **Speech pre-training** | | | | | | | | | | | | | |
| Wav2Vec2.0 | B | 23.5 | 29.4 | 69.9 ± 2.1 | 46.4 ± 1.8 | 57.3 ± 1.1 | 34.9 ± 2.4 | 85.3 | 17.4 | 81.4 ± 4.8 | 90.7 ± 0.8 | 77.0 ± 0.9 | 30.9 |
| HuBERT | B | 78.0 | 32.8 | 63.3 ± 1.2 | 58.6 ± 2.8 | 71.2 ± 1.2 | 65.2 ± 2.9 | 94.0 | 19.8 | 93.2 ± 5.9 | 94.6 ± 0.4 | 85.0 ± 2.5 | 47.3 |
| WavLM | B | 27.0 | 25.7 | 61.3 ± 2.3 | 49.5 ± 3.8 | 64.3 ± 1.3 | 60.1 ± 3.2 | 93.6 | 16.0 | 84.3 ± 6.3 | 88.8 ± 1.0 | 76.8 ± 0.5 | 35.1 |
| Data2Vec | B | 46.5 | 15.2 | 47.9 ± 1.2 | 28.0 ± 2.8 | 55.7 ± 1.0 | 44.9 ± 3.1 | 88.5 | 14.0 | 78.4 ± 4.1 | 85.1 ± 0.7 | 70.5 ± 3.3 | 23.6 |
| Wav2Vec2.0 | L | 66.0 | 34.8 | 64.6 ± 1.9 | 59.8 ± 1.5 | 65.7 ± 0.8 | 53.3 ± 6.3 | 75.8 | 40.6 | 93.6 ± 2.6 | 94.8 ± 0.5 | 82.4 ± 3.0 | 42.5 |
| HuBERT | L | 34.8 | 31.4 | 63.8 ± 1.3 | 60.4 ± 3.0 | 71.0 ± 1.2 | 69.0 ± 2.8 | 84.8 | 20.4 | 93.6 ± 3.0 | 95.3 ± 0.8 | 82.5 ± 2.0 | 44.3 |
| WavLM | L | 77.4 | 40.1 | 69.4 ± 2.1 | 66.6 ± 2.5 | 76.3 ± 2.2 | 79.2 ± 3.9 | 93.8 | 18.2 | 93.6 ± 5.4 | 95.8 ± 0.8 | 90.1 ± 1.0 | 58.1 |
| Data2Vec | L | 40.8 | 18.7 | 50.9 ± 1.7 | 34.4 ± 2.5 | 62.8 ± 1.6 | 60.0 ± 4.9 | 86.1 | 14.4 | 80.1 ± 8.5 | 84.7 ± 2.6 | 65.6 ± 3.1 | 29.0 |
| **AudioSet pre-training** | | | | | | | | | | | | | |
| Wav2Vec2.0 | B | 52.0 | 34.7 | 60.4 ± 1.7 | 58.9 ± 1.9 | 56.3 ± 1.3 | 27.9 ± 4.6 | 72.1 | 42.0 | 86.0 ± 9.6 | 92.9 ± 1.4 | 77.3 ± 0.5 | 31.9 |
| HuBERT | B | 86.2 | 41.1 | 63.5 ± 3.4 | 69.1 ± 1.6 | 69.5 ± 1.2 | 53.3 ± 3.1 | 83.5 | 38.8 | 91.5 ± 8.8 | 95.6 ± 0.5 | 90.4 ± 0.8 | 51.1 |
| Wav2Vec2.0 | L | 82.6 | 47.8 | 73.6 ± 1.2 | 72.6 ± 2.1 | 68.2 ± 1.7 | 42.2 ± 6.0 | 83.9 | 30.8 | 91.5 ± 5.0 | 96.5 ± 0.3 | 88.7 ± 2.5 | 55.9 |
| HuBERT | L | 86.2 | 45.4 | 75.2 ± 1.4 | 66.3 ± 4.6 | 70.1 ± 0.8 | 39.6 ± 3.6 | 85.7 | 38.6 | 91.6 ± 9.6 | 97.3 ± 0.5 | 89.6 ± 2.3 | 57.7 |
| **WavJEPA-Nat** | B | 91.6 | 48.7 | 72.4 ± 1.8 | 80.2 ± 1.7 | 65.9 ± 0.7 | 39.7 ± 2.4 | 87.4 | 33.4 | 96.2 ± 5.3 | 97.4 ± 0.5 | 90.4 ± 0.8 | 60.0 |
#### Summary

We presented WavJEPA-Nat, a state-of-the-art audio foundation model that leverages self-supervised semantic learning to obtain robust general-purpose audio representations from raw waveforms. WavJEPA-Nat's results highlight the superior performance of semantic audio representation learning compared with representation learning at the speech unit or token level, as is common in existing time-domain speech representation learning approaches.
## Model Card Contact

Goksenin Yuksel; goksenin.yuksel@ru.nl