lysanderism committed on
Commit 4dc30a1 · verified · 1 Parent(s): 38d9e4e

Update README.md

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -48,11 +48,11 @@ You need to use the following dependencies:
  3. Download [Fine-tuned BEATs_iter3+ (AS2M) (cpt2)](https://valle.blob.core.windows.net/share/BEATs/BEATs_iter3_plus_AS2M_finetuned_on_AS2M_cpt2.pt?sv=2020-08-04&st=2023-03-01T07%3A51%3A05Z&se=2033-03-02T07%3A51%3A00Z&sr=c&sp=rl&sig=QJXmSJG9DbMKf48UDIU1MfzIro8HQOf3sqlNXiflY1I%3D) to `beats_path`.
  4. Download [vicuna 7B v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5/tree/main) to `vicuna_path`.
  5. Download [timeaudio](https://huggingface.co/lysanderism/TimeAudio/tree/main) to `ckpt_path`.
- 6. Running with `python3 cli_inference.py --ckpt_path xxx --whisper_path xxx --beats_path xxx --vicuna_path xxx` to start cli inference. Please make sure your GPU has more than 40G of memory. If your GPU does not have enough memory (e.g. only 24G), you can quantize the model using the `--low_resource` parameter to reduce the memory usage.
+ 6. Run `python3 cli_inference.py --cfg-path "configs/infer_config.yaml"` to start CLI inference. Please make sure your GPU has more than 40 GB of memory. If your GPU does not have enough memory (e.g. only 24 GB), you can quantize the model with the `--low_resource` parameter to reduce memory usage.
 
- ## Launch a Demo
+ ## Batch inference
 
- Same as **How to inference in CLI: 1-5**.
+ Follow steps 1-5 of **How to inference in CLI**, then run `torchrun --nproc_per_node=8 inference.py --cfg-path "configs/infer_config.yaml"`.
 
 
  ## Citation
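The single-GPU step above can be sketched as a small shell helper. Only the `cli_inference.py` invocation and the `--low_resource` flag come from the README; the `GPU_MEM_GB` variable and the 40 GB threshold check are illustrative assumptions you would adapt to your own hardware.

```shell
# Placeholder: set this to your GPU's memory in GB (e.g. from nvidia-smi).
GPU_MEM_GB=24

# Base CLI inference command from the README.
CMD="python3 cli_inference.py --cfg-path configs/infer_config.yaml"

# Assumption: below the ~40 GB the README asks for, enable quantization
# via the --low_resource flag so the model fits in memory.
if [ "$GPU_MEM_GB" -lt 40 ]; then
  CMD="$CMD --low_resource"
fi

echo "$CMD"
```

For multi-GPU batch inference, the README instead invokes `torchrun --nproc_per_node=8 inference.py --cfg-path "configs/infer_config.yaml"` after the same download steps 1-5.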