---
library_name: "transformers.js"
---

  https://huggingface.co/openai/clip-vit-base-patch16 with ONNX weights to be compatible with Transformers.js.

## Usage (Transformers.js)

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```
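
The quickest way to use the model is through the high-level `pipeline` API; the examples that follow show the lower-level building blocks. A minimal sketch of the pipeline route, assuming this checkpoint is supported by the `zero-shot-image-classification` task like the other Xenova CLIP checkpoints:

```js
import { pipeline } from '@xenova/transformers';

// Zero-shot image classification pipeline backed by this repo
const classifier = await pipeline('zero-shot-image-classification', 'Xenova/clip-vit-base-patch16');

// Score an image against a set of candidate labels
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg';
const output = await classifier(url, ['football match', 'car', 'dog']);
// e.g. [ { label: 'football match', score: ... }, ... ]
```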

**Example:** Perform zero-shot image classification with `CLIPModel`.

```js
import { AutoTokenizer, AutoProcessor, CLIPModel, RawImage } from '@xenova/transformers';

// Load tokenizer, processor, and model
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/clip-vit-base-patch16');
const processor = await AutoProcessor.from_pretrained('Xenova/clip-vit-base-patch16');
const model = await CLIPModel.from_pretrained('Xenova/clip-vit-base-patch16');

// Run tokenization
const texts = ['a photo of a car', 'a photo of a football match'];
const text_inputs = tokenizer(texts, { padding: true, truncation: true });

// Read image and run processor
const image = await RawImage.read('https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg');
const image_inputs = await processor(image);

// Run model with both text and pixel inputs
const output = await model({ ...text_inputs, ...image_inputs });
// {
//   logits_per_image: Tensor {
//     dims: [ 1, 2 ],
//     data: Float32Array(2) [ 18.579734802246094, 24.31830596923828 ],
//   },
//   logits_per_text: Tensor {
//     dims: [ 2, 1 ],
//     data: Float32Array(2) [ 18.579734802246094, 24.31830596923828 ],
//   },
//   text_embeds: Tensor {
//     dims: [ 2, 512 ],
//     data: Float32Array(1024) [ ... ],
//   },
//   image_embeds: Tensor {
//     dims: [ 1, 512 ],
//     data: Float32Array(512) [ ... ],
//   }
// }
```
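
The `logits_per_image` values are unnormalized similarity scores, so the highest logit marks the best-matching prompt. To turn them into probabilities, you can apply a plain softmax yourself; a minimal sketch continuing from the snippet above:

```js
// Softmax over the image-text logits to get per-label probabilities
const logits = Array.from(output.logits_per_image.data);
const maxLogit = Math.max(...logits);
const exps = logits.map((l) => Math.exp(l - maxLogit));
const total = exps.reduce((a, b) => a + b, 0);
const probs = exps.map((e) => e / total);

console.log(probs.map((p, i) => `${texts[i]}: ${p.toFixed(4)}`));
// e.g. [ 'a photo of a car: 0.0032', 'a photo of a football match: 0.9968' ]
```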

**Example:** Compute text embeddings with `CLIPTextModelWithProjection`.
```js
import { AutoTokenizer, CLIPTextModelWithProjection } from '@xenova/transformers';

// Load tokenizer and text model
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/clip-vit-base-patch16');
const text_model = await CLIPTextModelWithProjection.from_pretrained('Xenova/clip-vit-base-patch16');

// Run tokenization
const texts = ['a photo of a car', 'a photo of a football match'];
const text_inputs = tokenizer(texts, { padding: true, truncation: true });

// Compute embeddings
const { text_embeds } = await text_model(text_inputs);
// Tensor {
//   dims: [ 2, 512 ],
//   type: 'float32',
//   data: Float32Array(1024) [ ... ],
//   size: 1024
// }
```
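
Because `text_embeds` is stored row-major with `dims: [2, 512]`, each 512-element slice of `data` is one prompt's embedding. As an example, you could compare the two prompts with the `cos_sim` helper that Transformers.js exports:

```js
import { cos_sim } from '@xenova/transformers';

// Slice each row of the flat [2, 512] buffer into its own embedding
const [, dim] = text_embeds.dims; // 512
const embedding_a = text_embeds.data.slice(0, dim);
const embedding_b = text_embeds.data.slice(dim, 2 * dim);

console.log(cos_sim(embedding_a, embedding_b));
```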

**Example:** Compute vision embeddings with `CLIPVisionModelWithProjection`.
```js
import { AutoProcessor, CLIPVisionModelWithProjection, RawImage } from '@xenova/transformers';

// Load processor and vision model
const processor = await AutoProcessor.from_pretrained('Xenova/clip-vit-base-patch16');
const vision_model = await CLIPVisionModelWithProjection.from_pretrained('Xenova/clip-vit-base-patch16');

// Read image and run processor
const image = await RawImage.read('https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg');
const image_inputs = await processor(image);

// Compute embeddings
const { image_embeds } = await vision_model(image_inputs);
// Tensor {
//   dims: [ 1, 512 ],
//   type: 'float32',
//   data: Float32Array(512) [ ... ],
//   size: 512
// }
```
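
Together, the two projection models reproduce what `CLIPModel` computes internally: ranking prompts against an image comes down to cosine similarity between the embeddings (CLIP's logits are these similarities times a learned scale). A sketch, assuming `texts`, `text_embeds`, and `image_embeds` from the two examples above are in scope:

```js
import { cos_sim } from '@xenova/transformers';

// Score each text prompt against the single image embedding
const [, dim] = text_embeds.dims; // 512
for (let i = 0; i < texts.length; ++i) {
  const text_embedding = text_embeds.data.slice(i * dim, (i + 1) * dim);
  console.log(texts[i], cos_sim(image_embeds.data, text_embedding));
}
// The 'a photo of a football match' prompt should score highest for this image
```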

---

  Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).