zifei9 committed on
Commit
55973f8
verified
1 Parent(s): ce0c71f

Update README.md

Files changed (1)
  1. README.md +44 -3
README.md CHANGED
@@ -1,3 +1,44 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ ---
+
+ This is a d-Matrix functional reference of the clip-vit-base-patch32 model.
+ The reference provides the following functional *configurations*:
+ Configuration | Explanation
+ :-- | :--
+ **`BASELINE`** | a reference functionally equivalent to the original model
+ **`BASIC`** | all linear algebraic operands quantized to `MXINT8-64`, and all other operations transformed to approximated kernel simulations
+
+
+ ### Usage
+
+ Install d-Matrix [Dmx_Compressor](https://github.com/d-matrix-ai/dmx-compressor) first.
+ ```sh
+ pip install dmx_compressor
+ ```
+
+ The following example loads the model, applies the d-Matrix transformation, and runs inference.
+
+ ```python
+ from PIL import Image
+ import requests
+
+ from transformers import CLIPProcessor, CLIPModel
+ from dmx.compressor.modeling import DmxModel
+
+ model = CLIPModel.from_pretrained("d-matrix/clip-vit-base-patch32")
+ processor = CLIPProcessor.from_pretrained("d-matrix/clip-vit-base-patch32")
+
+ url = "http://images.cocodataset.org/val2017/000000039769.jpg"
+ image = Image.open(requests.get(url, stream=True).raw)
+
+ inputs = processor(
+     text=["a photo of a cat", "a photo of a dog"],
+     images=image,
+     return_tensors="pt",
+     padding=True,
+ )
+
+ model = DmxModel.from_torch(model)
+ outputs = model(**inputs)
+ ```
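
The README example added in this diff stops at `outputs`; for CLIP, the usual next step is to convert the image-to-text logits (`outputs.logits_per_image`) into per-caption probabilities with a softmax. A minimal dependency-free sketch of that step, using hypothetical logit values in place of real model output:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits_per_image row for the two captions
# ("a photo of a cat", "a photo of a dog"); real values come from the model.
logits = [24.5, 19.3]
probs = softmax(logits)
print(probs)  # probabilities summing to 1; higher logit -> higher probability
```

In the actual pipeline one would apply the same operation to `outputs.logits_per_image` (e.g. `outputs.logits_per_image.softmax(dim=1)` in PyTorch).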