Place the WAV and JSON files in `dev_data`.
To distinguish the recognition performance of each part, prefix the file names of part1 with `fold-d-`, the file names of part2 with `fold-a-`, `fold-b-`, or `fold-c-`, and the file names of part3 with `fold-e-`.
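As a minimal sketch of the renaming step, assuming each part's files sit in their own subdirectory of `dev_data` (the layout and file names here are illustrative, not from the baseline repo):

```shell
# Hypothetical layout: dev_data/part2/ holds the part2 WAV/JSON files
# (created here only so the example is self-contained).
mkdir -p dev_data/part2
touch dev_data/part2/clip01.wav dev_data/part2/clip01.json

# Prefix every part2 file with fold-a-; repeat analogously with
# fold-b-/fold-c-, and use fold-d- for part1 and fold-e- for part3.
for f in dev_data/part2/*; do
  mv "$f" "dev_data/part2/fold-a-$(basename "$f")"
done
```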
Download the pre-trained Sentence-BERT model and tokenizer from Hugging Face, and place the downloaded model and tokenizer inside `../../qwen2_audio_baseline/Bert_pretrain`.
- Example commands

```
git clone https://huggingface.co/PeacefulData/2025_DCASE_AudioQA_Baselines
cd 2025_DCASE_AudioQA_Baselines
mkdir Bert_pretrain
cd Bert_pretrain
git clone https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2
```
Get the API key for qwen2-audio-instruct from the following URL:
https://help.aliyun.com/zh/model-studio/get-api-key?spm=a2c4g.11186623.0.0.4eee4823sRBTDW&accounttraceid=3dc788b6beba4fd8a4415f2eff7b4ed5prhq
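Once you have a key, a common convention for DashScope/Model Studio clients is to pass it via an environment variable. The variable name below follows that convention but is an assumption here; check which variable the baseline scripts actually read:

```shell
# DASHSCOPE_API_KEY is the usual DashScope environment variable
# (assumed); replace the placeholder with your real key.
export DASHSCOPE_API_KEY="sk-your-key-here"
```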