mandipgoswami committed
Commit 079f066 · verified · 1 Parent(s): f6ac8a8

Update README.md

Files changed (1): README.md (+35 −35)

README.md CHANGED
---
license: cc-by-nc-4.0
task_categories:
- audio-to-audio
- audio-classification
tags:
- rir
- acoustics
- benchmark
- evaluation
---

# RIRMega-Eval

Official benchmark views and evaluation-harness artifacts built from:
- [`mandipgoswami/rirmega`](https://huggingface.co/datasets/mandipgoswami/rirmega) (arXiv: [2510.18917](https://arxiv.org/abs/2510.18917))
- [`mandipgoswami/rir-mega-speech`](https://huggingface.co/datasets/mandipgoswami/rir-mega-speech) (arXiv: [2601.19949](https://arxiv.org/abs/2601.19949))

This HF repo stores:
- metadata parquet
- fixed splits
- checksums + manifest
- (optional) a small core audio subset
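Since the repo ships checksums and a manifest, downloads can be verified locally. A minimal sketch, assuming the manifest is a JSON map of relative file paths to SHA-256 hex digests — the actual manifest filename and format in this repo may differ:

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large audio files fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_manifest(manifest_path: Path) -> list[str]:
    """Return the relative paths whose on-disk hash mismatches the manifest.

    Assumes (hypothetically) a JSON object of {relative_path: sha256_hex};
    adjust the parsing to whatever format the repo's manifest actually uses.
    """
    manifest = json.loads(manifest_path.read_text())
    root = manifest_path.parent
    return [
        rel for rel, digest in manifest.items()
        if sha256_of(root / rel) != digest
    ]
```

An empty return value means every listed file matched its recorded digest.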

## Tasks
- `v1_param_estimation`: RIR -> RT60, EDT, DRR, C50, C80, Ts
- `v1_auralization_consistency`: (dry + RIR) -> convolved comparisons
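The official evaluator defines the reference metrics, but the parameter-estimation task can be illustrated with one common RT60 estimate: Schroeder backward integration of the RIR's energy, a line fit on a portion of the decay curve, and extrapolation to -60 dB. A sketch — the fit bounds and sampling rate are illustrative, not the benchmark's definitions:

```python
import numpy as np


def rt60_from_rir(rir: np.ndarray, fs: int,
                  db_lo: float = -25.0, db_hi: float = -5.0) -> float:
    """Estimate RT60 (seconds) via Schroeder backward integration.

    Fits a line to the energy-decay curve between db_hi and db_lo
    (a T20-style fit) and extrapolates the slope to -60 dB.
    """
    energy = rir.astype(float) ** 2
    edc = np.cumsum(energy[::-1])[::-1]            # Schroeder integral
    edc_db = 10.0 * np.log10(edc / edc[0] + 1e-12)  # decay curve in dB
    idx = np.where((edc_db <= db_hi) & (edc_db >= db_lo))[0]
    t = idx / fs
    slope, _ = np.polyfit(t, edc_db[idx], 1)        # dB per second
    return -60.0 / slope
```

On a synthetic RIR with a known exponential decay, the estimate lands close to the designed RT60; real RIRs with noise floors need truncation handling (e.g. Lundeby's method) that this sketch omits.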

## How to evaluate
Use the official evaluator:
- Code: https://github.com/mandip42/rirmega-eval
- CLI: `python scripts/evaluate.py ...`
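For the auralization-consistency task, the core operation is convolving a dry signal with an RIR; how the results are scored is defined by the official evaluator, so the comparison metric below is only a placeholder illustration:

```python
import numpy as np


def auralize(dry: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """Convolve a dry signal with an RIR to produce the reverberant signal.

    np.convolve is fine for short examples; FFT-based convolution
    (e.g. scipy.signal.fftconvolve) is faster for long audio.
    """
    wet = np.convolve(dry, rir)
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet  # peak-normalize to avoid clipping


def log_spectral_distance(a: np.ndarray, b: np.ndarray,
                          eps: float = 1e-10) -> float:
    """RMS difference of log-magnitude spectra — an illustrative metric only.

    The official harness defines the actual comparison used for
    v1_auralization_consistency.
    """
    n = max(len(a), len(b))
    sa = np.log(np.abs(np.fft.rfft(a, n)) + eps)
    sb = np.log(np.abs(np.fft.rfft(b, n)) + eps)
    return float(np.sqrt(np.mean((sa - sb) ** 2)))
```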

## Citation
See `CITATION.cff` in the code repo. A Zenodo DOI will be added once minted.