---
license: apache-2.0
tags:
- audio
- speech
- language-model
- auristream
- discrete-diffusion
library_name: transformers
---

# AuriStreamParallel100M_Group4_BigAudioDataset_500k

**AuriStream Parallel** is a discrete diffusion speech language model developed by **Greta Tuckute** and **Klemen Kotar**.

## Model Details

| Parameter | Value |
|-----------|-------|
| Parameters | ~0.12B |
| Layers | 12 |
| Hidden Size | 768 |
| Attention Heads | 12 |
| Vocab Size | 8193 |
| Group Size | 4 |
| Mask Schedule | `linear_text_prime` |
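
These values should match the configuration that ships with the checkpoint. A minimal sketch of verifying them, assuming the remote config class prints its fields like a standard Hugging Face config (the actual field names are defined by the remote code):

```python
from transformers import AutoConfig

# Load the custom config that ships with the checkpoint; trust_remote_code
# is needed because the config class is defined in the model repo.
config = AutoConfig.from_pretrained(
    "TuKoResearch/AuriStreamParallel100M_Group4_BigAudioDataset_500k",
    trust_remote_code=True,
)

# Field names are set by the remote code, so inspect the printed config
# rather than assuming standard attribute names.
print(config)
```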

## Architecture

- Bidirectional transformer attention
- Grouped token latent projection
- Parallel token heads for group-wise prediction (sketched below)
- Partial masking diffusion objective
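
As a rough illustration of the grouped, parallel prediction idea, here is a minimal hypothetical sketch of a group-size-4 prediction head. All module names and shapes below are assumptions for illustration only; the real implementation lives in [TuKoResearch/AuriStreamParallel-base](https://huggingface.co/TuKoResearch/AuriStreamParallel-base).

```python
import torch
import torch.nn as nn

HIDDEN, GROUP, VOCAB = 768, 4, 8193  # values from the table above

class GroupedParallelHead(nn.Module):
    """Hypothetical head: one transformer state -> GROUP token logits."""
    def __init__(self):
        super().__init__()
        # Grouped token latent projection: one latent per token in the group.
        self.to_latents = nn.Linear(HIDDEN, GROUP * HIDDEN)
        # Parallel token heads: an independent classifier per group position.
        self.heads = nn.ModuleList(nn.Linear(HIDDEN, VOCAB) for _ in range(GROUP))

    def forward(self, h):  # h: (batch, n_groups, HIDDEN)
        b, n, _ = h.shape
        latents = self.to_latents(h).view(b, n, GROUP, HIDDEN)
        # Predict all GROUP tokens of each group in one parallel pass.
        logits = torch.stack(
            [head(latents[:, :, i]) for i, head in enumerate(self.heads)], dim=2
        )
        return logits  # (batch, n_groups, GROUP, VOCAB)

h = torch.randn(1, 10, HIDDEN)
print(GroupedParallelHead()(h).shape)  # torch.Size([1, 10, 4, 8193])
```

Each position thus emits logits for all four tokens of its group at once, which is the general idea that makes group-wise parallel prediction possible under a partial masking diffusion objective.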

## Usage

```python
from transformers import AutoModel

# trust_remote_code is required: the architecture is defined by custom
# code shared from TuKoResearch/AuriStreamParallel-base.
model = AutoModel.from_pretrained(
    "TuKoResearch/AuriStreamParallel100M_Group4_BigAudioDataset_500k",
    trust_remote_code=True,
)
```
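
The forward signature and output structure come from the remote code rather than a standard `transformers` class. Continuing from the snippet above, a hedged sketch of a forward pass, assuming the model accepts a `(batch, seq_len)` tensor of cochlear token IDs (verify against the base model code before relying on this):

```python
import torch

# Hypothetical input: a short sequence of cochlear token IDs in [0, 8192].
tokens = torch.randint(0, 8193, (1, 256))

model.eval()
with torch.no_grad():
    # The exact call signature and returned object are defined by the
    # remote code; inspect model.forward for the real interface.
    outputs = model(tokens)
```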

## Base Model Code

This checkpoint uses shared model code from [TuKoResearch/AuriStreamParallel-base](https://huggingface.co/TuKoResearch/AuriStreamParallel-base).

## Tokenizer

This model operates on cochlear tokens, e.g. those produced by [WavCochCausalV8192](https://huggingface.co/TuKoResearch/WavCochCausalV8192).
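
A sketch of loading that tokenizer, assuming it is likewise distributed with custom code; its actual waveform-to-token interface is documented in its own model card:

```python
from transformers import AutoModel

# The cochlear tokenizer also ships custom code, hence trust_remote_code.
# How it maps waveforms to discrete token IDs is defined in that repo;
# this sketch only shows loading it.
cochlear_tokenizer = AutoModel.from_pretrained(
    "TuKoResearch/WavCochCausalV8192",
    trust_remote_code=True,
)
```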