---

license: mit
pipeline_tag: audio-classification
---

# Model Card for SW2V (60k)

*Reconstruct! Don't Encode: Self-Supervised Representation Reconstruction Loss for High-Intelligibility and Low-Latency Streaming Neural Audio Codec*

SW2V is a purely Transformer-decoder-based speech representation model. It is trained via distillation of [W2V-Bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0).

- **GitHub Repository:** [https://github.com/jhcodec843/jhcodec](https://github.com/jhcodec843/jhcodec)
- **Demo:** [https://jhcodec843.github.io/jhcodec/](https://jhcodec843.github.io/jhcodec/)
- **License:** MIT
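
Because SW2V is decoder-only, each frame attends only to past frames, which is what makes streaming, low-latency inference possible. The causal masking this relies on can be sketched as follows (an illustrative single-head example in numpy, not the actual SW2V implementation):

```python
import numpy as np

def causal_attention(q, k, v):
    """Single-head scaled dot-product attention with a causal mask:
    frame t may only attend to frames <= t, enabling streaming use."""
    T, d = q.shape
    scores = q @ k.T / np.sqrt(d)                      # (T, T) attention logits
    future = np.triu(np.ones((T, T), dtype=bool), k=1) # positions after t
    scores[future] = -np.inf                           # block attention to the future
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))  # 4 frames, 8-dim toy features
out = causal_attention(x, x, x)
# The first output frame depends only on the first input frame:
assert np.allclose(out[0], x[0])
```

In a real streaming codec the same masking lets the model emit a representation for frame *t* without waiting for future audio.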

## Model Details

### Model Description

This checkpoint corresponds to the paper's SW2V model (60k).
Flash-Attention is required to reach the reported performance.
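
The distillation training mentioned above can be illustrated with a generic feature-matching objective: the student is trained so its frame-level features reproduce those of the frozen teacher (W2V-Bert-2.0). This is only a sketch of the general idea; the paper's exact loss and which teacher layers are matched may differ:

```python
import numpy as np

def feature_distillation_loss(student_feats, teacher_feats):
    """Mean-squared error between student features and frozen
    teacher features for the same utterance (generic sketch)."""
    return float(np.mean((student_feats - teacher_feats) ** 2))

# Toy shapes: batch of 2 utterances, 50 frames, 1024-dim features.
rng = np.random.default_rng(0)
teacher = rng.standard_normal((2, 50, 1024))              # frozen teacher outputs
student = teacher + 0.1 * rng.standard_normal((2, 50, 1024))  # near-converged student

loss = feature_distillation_loss(student, teacher)
# loss is small here because the toy student is close to the teacher
```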

## Uses

JHCodec can be used for research and practical applications that require lossy audio compression. It is particularly well-suited for streaming speech, compressing large audio datasets, and serving as a neural front-end for speech recognition or synthesis pipelines.

### Intended Use

- Real-time low-latency audio codecs for speech-to-speech models
- Research into neural codecs and generative modeling
- Preprocessing for downstream speech and audio ML models

### Out-of-Scope Use

- Any malicious, deceptive, or privacy-violating applications

## How to Get Started with JHCodec

For programmatic usage, please refer to the [GitHub repository](https://github.com/jhcodec843/jhcodec) for installation, API documentation, and practical examples.

## Training Details

Please refer to the GitHub repository README.

## Authors

Anonymous, submitted to Interspeech 2026