Instructions to use risashinoda/BioVITA with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- OpenCLIP
How to use risashinoda/BioVITA with OpenCLIP:
```python
import open_clip

model, preprocess_train, preprocess_val = open_clip.create_model_and_transforms('hf-hub:risashinoda/BioVITA')
tokenizer = open_clip.get_tokenizer('hf-hub:risashinoda/BioVITA')
```
- Notebooks
- Google Colab
- Kaggle
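Once loaded, the model can be used for zero-shot retrieval in the standard OpenCLIP way: encode images and text, L2-normalize the features, and rank by cosine similarity. A minimal sketch of the scoring step, with random tensors standing in for real encoder outputs (the 512-dim embedding size and the batch shapes are illustrative assumptions, not read from the BioVITA checkpoint):

```python
import torch
import torch.nn.functional as F

# Stand-ins for model.encode_image(...) and model.encode_text(...) outputs;
# swap in real encoder calls once the BioVITA checkpoint is loaded.
image_features = torch.randn(4, 512)   # 4 images, 512-dim embeddings (assumed size)
text_features = torch.randn(3, 512)    # 3 candidate text prompts

# L2-normalize so the dot product equals cosine similarity
image_features = F.normalize(image_features, dim=-1)
text_features = F.normalize(text_features, dim=-1)

# Per-image probabilities over the text candidates (100.0 is the usual CLIP logit scale)
logits = 100.0 * image_features @ text_features.T   # shape (4, 3)
probs = logits.softmax(dim=-1)
print(probs.shape)  # torch.Size([4, 3])
```

Each row of `probs` sums to 1, giving a distribution over the candidate prompts for that image.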
Fix GitHub URL and update citation
README.md

````diff
@@ -25,11 +25,11 @@ tags:
 
 ## Usage
 
-With the [BioVITA release code](https://github.com/
+With the [BioVITA release code](https://github.com/dahlian00/BioVITA):
 
 ```bash
 # Extract features (image + text + audio)
-torchrun --nproc_per_node=8 extract_features.py \
+torchrun --nproc_per_node=8 eval/extract_features.py \
     --ids_dir path/to/benchmark/ids \
     --feat_root path/to/output \
     --tag biovita \
@@ -37,7 +37,7 @@ torchrun --nproc_per_node=8 extract_features.py \
     --modalities audio,image,text
 
 # Evaluate on BioVITA benchmark
-python eval_benchmark.py \
+python eval/eval_benchmark.py \
     --base_dir path/to/benchmark \
     --ids_dir path/to/benchmark/ids \
     --feat_root path/to/output \
````