Instructions for using Yova/SmallCap7M with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Yova/SmallCap7M with Transformers:
```python
# Use a pipeline as a high-level helper
# Warning: Pipeline type "image-to-text" is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("image-to-text", model="Yova/SmallCap7M")

# Load model directly
from transformers import SmallCap

model = SmallCap.from_pretrained("Yova/SmallCap7M", dtype="auto")
```
A usage sketch for this snippet follows the notebook links below.
- Notebooks
- Google Colab
- Kaggle
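As referenced above, here is a minimal usage sketch for the Transformers snippet. It is a sketch under assumptions, not verified behavior: it assumes transformers v4.x (where the image-to-text pipeline still exists) and a placeholder local file image.jpg; the output format shown is the standard one for image-to-text pipelines.

```python
# Minimal sketch: caption an image with the image-to-text pipeline
# (requires transformers v4.x; the pipeline type is removed in v5).
# "image.jpg" is a placeholder; a URL or PIL image also works.
from transformers import pipeline

pipe = pipeline("image-to-text", model="Yova/SmallCap7M")
result = pipe("image.jpg")
print(result[0]["generated_text"])  # the generated caption
```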
Upload 2 files
- .gitattributes +1 -0
- web_data_index.gz +3 -0
- web_data_index_captions.json +3 -0
.gitattributes CHANGED
```diff
@@ -58,3 +58,4 @@ vatex_index filter=lfs diff=lfs merge=lfs -text
 vatex_index_captions.json filter=lfs diff=lfs merge=lfs -text
 vizwiz_index filter=lfs diff=lfs merge=lfs -text
 vizwiz_index_captions.json filter=lfs diff=lfs merge=lfs -text
+web_data_index_captions.json filter=lfs diff=lfs merge=lfs -text
```
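For reference, a commit like "Upload 2 files" can also be made programmatically. Below is a minimal sketch with huggingface_hub; the local file paths and the assumption that Yova/SmallCap7M is a model repo are mine. Large uploads are stored via Git LFS by the Hub, which is consistent with this commit also appending a rule to .gitattributes.

```python
# Sketch: make the "Upload 2 files" commit with huggingface_hub.
# Paths and repo id are assumptions for illustration.
from huggingface_hub import CommitOperationAdd, HfApi

api = HfApi()
operations = [
    CommitOperationAdd(path_in_repo=name, path_or_fileobj=name)
    for name in ["web_data_index.gz", "web_data_index_captions.json"]
]
api.create_commit(
    repo_id="Yova/SmallCap7M",
    operations=operations,
    commit_message="Upload 2 files",
)
```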
web_data_index.gz ADDED
```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2021b49ec6e45af772787825f5cc61d7cb0bfc54eae04364350a7c5ecc2d6300
+size 30063547848
```
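What was committed here is not the ~30 GB file itself but its Git LFS pointer: the spec version, the SHA-256 (oid) of the real content, and its size in bytes. A minimal parsing sketch follows; parse_lfs_pointer is a hypothetical helper, not a library function.

```python
# Sketch: parse a Git LFS pointer file into its three fields.
# parse_lfs_pointer is a hypothetical helper written for illustration.
def parse_lfs_pointer(text: str) -> dict:
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "oid": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:2021b49ec6e45af772787825f5cc61d7cb0bfc54eae04364350a7c5ecc2d6300
size 30063547848"""
info = parse_lfs_pointer(pointer)
print(info["size"] / 1e9)  # ~30.1 GB
```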
web_data_index_captions.json ADDED
```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3f73bc26143943b17aaebe24c3b4a0cd87652717efd3e2aadc335c1cd3fcb92f
+size 873812161
```
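Because the pointer records the content's SHA-256, a downloaded copy can be verified against it. Here is a sketch using huggingface_hub's hf_hub_download, assuming the file is reachable in the Yova/SmallCap7M model repo (pass repo_type="dataset" instead if it actually lives in a dataset repo).

```python
# Sketch: download web_data_index_captions.json and check its SHA-256
# against the oid recorded in the LFS pointer above.
import hashlib
from huggingface_hub import hf_hub_download

EXPECTED = "3f73bc26143943b17aaebe24c3b4a0cd87652717efd3e2aadc335c1cd3fcb92f"

path = hf_hub_download("Yova/SmallCap7M", "web_data_index_captions.json")
digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
        digest.update(chunk)
assert digest.hexdigest() == EXPECTED, "checksum mismatch"
print("OK:", path)
```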