Add task categories and benchmarking usage

#3 · opened by nielsr (HF Staff)
Files changed (1): README.md (+41 −5)
````diff
--- a/README.md
+++ b/README.md
@@ -1,5 +1,4 @@
 ---
-license: cc-by-4.0
 language:
 - de
 - en
@@ -13,14 +12,21 @@ language:
 - ru
 - sq
 - sv
+license: cc-by-4.0
+size_categories:
+- 1K<n<10K
+task_categories:
+- automatic-speech-recognition
+- text-to-speech
+- translation
+- summarization
+- audio-to-audio
+- question-answering
 tags:
 - speech prompts
 - text prompts
 - instruction following
 - benchmark
-size_categories:
-- 1K<n<10K
-
 dataset_info:
   features:
   - name: text_prompt
@@ -51,6 +57,7 @@ configs:
   - split: test
     path: data/test-*
 ---
+
 <p align="center">
   <img src="https://github.com/MaikeZuefle/DOWIS/blob/64f807de73bfe1e5ad6e2d07a62c642afb076ad7/dowis_logo.png?raw=true" alt="DOWIS" width="400"/>
 </p>
@@ -75,7 +82,7 @@ The dataset contains **1,320 rows**, with up to 4 audio recordings per row (2 fe
 
 Details can be found in the corresponding paper on [arXiv](https://arxiv.org/abs/2603.09881).
 
-Code for benchmarking Speech LLMs with different task benchmarks coupled with DOWIS can be found on [GitHub](https://github.com/MaikeZuefle/DOWIS/tree/main).
+Code for benchmarking Speech LLMs with different task benchmarks coupled with DOWIS can be found on [GitHub](https://github.com/MaikeZuefle/DOWIS).
 
 ---
 
@@ -107,6 +114,35 @@ Code for benchmarking Speech LLMs with different task benchmarks coupled with DO
 
 ---
 
+## Benchmarking Usage
+
+The following instructions for inference and evaluation are provided in the official GitHub repository.
+
+### Inference
+Run `main.py` with the following arguments to start inference:
+
+```bash
+python main.py \
+    --lang de \
+    --model phi_multimodal \
+    --task ASR \
+    --out_folder outputs
+```
+
+### Evaluation
+The evaluation script `eval_outputs.py` computes metrics on the generated predictions.
+
+```bash
+python eval_outputs.py \
+    --lang de \
+    --model phi_multimodal \
+    --task ASR \
+    --predictions_folder outputs \
+    --out_folder evaluation_results
+```
+
+---
+
 ## Dataset Fields
 
 | Field | Type | Description |
````
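The two commands added in this PR (`main.py` and `eval_outputs.py`) share the same core flags. As a rough illustration only (a hypothetical sketch, not the actual code from the DOWIS repository), the shared interface could be parsed like this:

```python
import argparse


def build_parser():
    # Hypothetical sketch mirroring the flags shown in the PR's usage
    # examples; NOT the real parser from the DOWIS repository.
    p = argparse.ArgumentParser(description="DOWIS benchmarking (sketch)")
    p.add_argument("--lang", required=True, help="language code, e.g. 'de'")
    p.add_argument("--model", required=True, help="model name, e.g. 'phi_multimodal'")
    p.add_argument("--task", required=True, help="task name, e.g. 'ASR'")
    p.add_argument("--out_folder", default="outputs", help="output directory")
    return p


if __name__ == "__main__":
    args = build_parser().parse_args(
        ["--lang", "de", "--model", "phi_multimodal", "--task", "ASR"]
    )
    print(args.lang, args.model, args.task, args.out_folder)
```

The evaluation script would extend this with a `--predictions_folder` flag pointing at the inference output, as the second code block in the diff shows.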