---
license: apache-2.0
base_model: jinaai/jina-embeddings-v2-base-code
tags:
- onnx
- int8
- quantized
- code-embeddings
- sentence-transformers
library_name: onnxruntime
pipeline_tag: feature-extraction
---

# jina-embeddings-v2-base-code (INT8 Quantized)

INT8 dynamically quantized version of [jinaai/jina-embeddings-v2-base-code](https://huggingface.co/jinaai/jina-embeddings-v2-base-code) for efficient CPU inference.

## Model Details

| Property | Value |
|----------|-------|
| Base Model | jinaai/jina-embeddings-v2-base-code |
| Quantization | INT8 (dynamic) |
| Size | 154 MB (vs. 612 MB fp32) |
| Dimensions | 768 |
| Max Tokens | 8192 |
| Languages | English + 30 programming languages |

## Usage

The snippet below downloads the tokenizer and quantized model, runs the ONNX session, and mean-pools the token embeddings (weighted by the attention mask) into one vector per input. Note that the example truncates inputs at 512 tokens to keep inference fast; the model itself accepts sequences up to 8192 tokens.

```python
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from tokenizers import Tokenizer

# Load tokenizer and quantized ONNX model
tokenizer = Tokenizer.from_file(hf_hub_download("nijaru/jina-code-int8", "tokenizer.json"))
tokenizer.enable_padding(pad_id=0, pad_token="[PAD]")
tokenizer.enable_truncation(max_length=512)  # raise up to 8192 if needed
session = ort.InferenceSession(hf_hub_download("nijaru/jina-code-int8", "model_int8.onnx"))

def embed(texts):
    encoded = tokenizer.encode_batch(texts)
    input_ids = np.array([e.ids for e in encoded], dtype=np.int64)
    attention_mask = np.array([e.attention_mask for e in encoded], dtype=np.int64)
    outputs = session.run(None, {"input_ids": input_ids, "attention_mask": attention_mask})
    embeddings = outputs[0]  # token-level embeddings, shape (batch, seq_len, 768)
    # Mean pooling over non-padding tokens
    mask = attention_mask[:, :, np.newaxis]
    return (embeddings * mask).sum(axis=1) / mask.sum(axis=1)

embeddings = embed(["def hello(): pass", "authentication flow"])
```

For comparing the resulting vectors, see the similarity example below.

## License

Apache-2.0 (same as base model)

## Attribution

Quantized from [jinaai/jina-embeddings-v2-base-code](https://huggingface.co/jinaai/jina-embeddings-v2-base-code) by Jina AI.
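
## Computing Similarity

Embeddings from this model family are typically compared with cosine similarity. A minimal sketch building on the `embed` helper above; the normalization step is our addition for illustration, not part of the published pipeline:

```python
def cosine_similarity(a, b):
    # Normalize each row to unit length, then take pairwise dot products
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

vecs = cosine_inputs = embed(["def hello(): pass", "print a greeting", "authentication flow"])
# Similarity of the first snippet to the other two, shape (1, 2)
print(cosine_similarity(vecs[:1], vecs[1:]))
```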
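
## Reproducing the Quantization

Dynamic INT8 quantization converts the weights to INT8 offline and computes activation scales at runtime, so no calibration dataset is needed. The exact export script is not published here, but an equivalent model can be produced with ONNX Runtime's `quantize_dynamic`. A minimal sketch, assuming `model_fp32.onnx` is an fp32 ONNX export of the base model (the filename is illustrative):

```python
from onnxruntime.quantization import quantize_dynamic, QuantType

# Quantize weights to signed INT8; activations are quantized dynamically at runtime.
quantize_dynamic(
    model_input="model_fp32.onnx",   # assumed fp32 export of the base model
    model_output="model_int8.onnx",
    weight_type=QuantType.QInt8,
)
```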