---
license: apache-2.0
---
This is a d-Matrix functional reference of the clip-vit-base-patch32 model.
The reference provides the following functional *configurations*:

Configuration | Explanation
:-- | :--
**`BASELINE`** | a reference functionally equivalent to the original model
**`BASIC`** | all linear algebraic operands quantized to `MXINT8-64`, and all other operations transformed to approximated kernel simulations
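For intuition, `MXINT8-64` refers to a block (microscaling) format in which a group of 64 values shares a single power-of-two scale while each value is stored as an 8-bit integer. The sketch below is a rough, simplified illustration of that idea, not the exact MXINT8 encoding defined by the OCP Microscaling spec or used by this model.

```python
import math

def quantize_block_int8_shared_scale(block):
    """Simplified sketch: quantize a block of floats to int8 values
    sharing one power-of-two scale (the core idea behind MXINT8)."""
    amax = max(abs(x) for x in block)
    if amax == 0.0:
        return [0] * len(block), 1.0
    # Choose the smallest power-of-two scale such that the largest
    # magnitude still fits in the int8 range [-128, 127].
    scale = 2.0 ** math.ceil(math.log2(amax / 127.0))
    q = [max(-128, min(127, round(x / scale))) for x in block]
    return q, scale

def dequantize_block(q, scale):
    """Recover approximate floats from the shared-scale int8 block."""
    return [v * scale for v in q]

block = [0.1, -0.5, 0.25, 1.0]
q, scale = quantize_block_int8_shared_scale(block)
recovered = dequantize_block(q, scale)
```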
### Usage
First, install d-Matrix's [Dmx_Compressor](https://github.com/d-matrix-ai/dmx-compressor):
```sh
pip install dmx_compressor
```
The following example loads the model, wraps it as a `DmxModel`, and runs inference:
```python
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel
from dmx.compressor.modeling import DmxModel

# Load the reference model and its processor
model = CLIPModel.from_pretrained("d-matrix/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("d-matrix/clip-vit-base-patch32")

# Prepare an example image and two candidate captions
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],
    images=image,
    return_tensors="pt",
    padding=True,
)

# Wrap the model as a DmxModel and run inference
model = DmxModel.from_torch(model)
outputs = model(**inputs)
```
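The wrapped model returns the standard `transformers` CLIP outputs, where `outputs.logits_per_image` holds one image-text similarity score per caption; a softmax over those scores gives match probabilities. The snippet below illustrates that final step in plain Python on hypothetical logits (the values are made up for illustration, not taken from this model):

```python
import math

def softmax(scores):
    """Convert raw similarity scores to probabilities (stable softmax)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for ["a photo of a cat", "a photo of a dog"],
# as they might appear in outputs.logits_per_image[0]
logits = [24.5, 19.3]
probs = softmax(logits)  # the first caption receives most of the mass
```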