---
license: apache-2.0
base_model: jinaai/jina-embeddings-v2-base-code
tags:
- onnx
- int8
- quantized
- code-embeddings
- sentence-transformers
library_name: onnxruntime
pipeline_tag: feature-extraction
---
# jina-embeddings-v2-base-code (INT8 Quantized)
INT8 dynamically quantized version of [jinaai/jina-embeddings-v2-base-code](https://huggingface.co/jinaai/jina-embeddings-v2-base-code) for efficient CPU inference.
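The exact export settings used for this checkpoint are not documented here; as a rough sketch, dynamic INT8 quantization with ONNX Runtime typically looks like the following (assuming an existing FP32 ONNX export of the base model at a hypothetical path `model.onnx`):
```python
from onnxruntime.quantization import quantize_dynamic, QuantType

# Quantize weights to INT8; activations remain floating point and are
# quantized dynamically at runtime, so no calibration data is needed.
quantize_dynamic(
    model_input="model.onnx",       # hypothetical FP32 export of the base model
    model_output="model_int8.onnx",
    weight_type=QuantType.QInt8,
)
```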
## Model Details
| Property | Value |
|----------|-------|
| Base Model | jinaai/jina-embeddings-v2-base-code |
| Quantization | INT8 (dynamic) |
| Size | 154 MB (vs 612 MB fp32) |
| Dimensions | 768 |
| Max Tokens | 8192 |
| Languages | English + 30 programming languages |
## Usage
```python
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from tokenizers import Tokenizer
import numpy as np

# Download the tokenizer and quantized model from the Hub
tokenizer = Tokenizer.from_file(hf_hub_download("nijaru/jina-code-int8", "tokenizer.json"))
tokenizer.enable_padding(pad_id=0, pad_token="[PAD]")
tokenizer.enable_truncation(max_length=512)  # the base model supports up to 8192 tokens
session = ort.InferenceSession(hf_hub_download("nijaru/jina-code-int8", "model_int8.onnx"))

def embed(texts):
    # Tokenize the batch (padded to the longest sequence, truncated per the settings above)
    encoded = tokenizer.encode_batch(texts)
    input_ids = np.array([e.ids for e in encoded], dtype=np.int64)
    attention_mask = np.array([e.attention_mask for e in encoded], dtype=np.int64)

    # Run the ONNX model; outputs[0] holds per-token embeddings of shape [batch, seq_len, 768]
    outputs = session.run(None, {"input_ids": input_ids, "attention_mask": attention_mask})
    embeddings = outputs[0]

    # Mean pooling over non-padding tokens
    mask = attention_mask[:, :, np.newaxis]
    return (embeddings * mask).sum(axis=1) / mask.sum(axis=1)

embeddings = embed(["def hello(): pass", "authentication flow"])
```
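The pooled embeddings are not normalized; for code search they are usually L2-normalized and compared by cosine similarity. A minimal sketch using the `embed` helper above (the example snippets are illustrative only):
```python
def cosine_sim(a, b):
    # L2-normalize both sets of embeddings, then take the dot product
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

query = embed(["parse a JSON config file"])
candidates = embed([
    "def load_config(path): return json.load(open(path))",
    "def render_template(name, ctx): ...",
])
print(cosine_sim(query, candidates))  # higher score = closer match
```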
## License
Apache-2.0 (same as base model)
## Attribution
Quantized from [jinaai/jina-embeddings-v2-base-code](https://huggingface.co/jinaai/jina-embeddings-v2-base-code) by Jina AI.