Instructions to use Citaman/VeCLIP with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Citaman/VeCLIP with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("zero-shot-image-classification", model="Citaman/VeCLIP")
pipe(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png",
    candidate_labels=["animals", "humans", "landscape"],
)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

processor = AutoProcessor.from_pretrained("Citaman/VeCLIP")
model = AutoModelForZeroShotImageClassification.from_pretrained("Citaman/VeCLIP")
```

- Notebooks
- Google Colab
- Kaggle
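When loading the processor and model directly (rather than through the pipeline), the zero-shot scores come from a softmax over the scaled image–text similarities, as in CLIP-style models. The sketch below illustrates that scoring step in isolation; the embeddings and the `logit_scale` value are random stand-ins, not actual VeCLIP outputs.

```python
# Sketch of CLIP-style zero-shot scoring: one normalized image embedding,
# one normalized text embedding per candidate label, softmax over the
# scaled cosine similarities. All tensors here are dummy placeholders.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
image_emb = F.normalize(torch.randn(1, 512), dim=-1)  # stand-in for image features
text_embs = F.normalize(torch.randn(3, 512), dim=-1)  # one row per candidate label
logit_scale = 100.0  # assumed temperature, typical for CLIP-family checkpoints

logits = logit_scale * image_emb @ text_embs.T  # shape (1, 3)
probs = logits.softmax(dim=-1)                  # probabilities over the 3 labels
print(probs.shape)
```

The real model wraps this computation: the processor tokenizes the candidate labels and preprocesses the image, and the model returns the image–text logits directly.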
repair dead link
README.md
CHANGED
```diff
@@ -153,7 +153,7 @@ We release the checkpoints for **VeCLIP**, which are trained from scratch on vis
 </table>
 
 
-We further found our VeCap can also be complementary to other well-established filtering methods, e.g., [Data Filtering Network (DFN)](
+We further found our VeCap can also be complementary to other well-established filtering methods, e.g., [Data Filtering Network (DFN)](https://arxiv.org/abs/2309.17425). We also provide those checkpoints (referred to as **VeCap-DFN**) and report their performance below.
 
 
 <table>
 <thead>
```