---
license: mit
base_model: nomic-ai/CodeRankEmbed
base_model_relation: quantized
tags:
- code
- embeddings
- onnx
- int8
- quantized
language:
- code
pipeline_tag: feature-extraction
---
# CodeRankEmbed – Dynamic INT8 Quantized (ONNX)
A dynamically quantized INT8 version of [nomic-ai/CodeRankEmbed](https://huggingface.co/nomic-ai/CodeRankEmbed), converted to ONNX by [jalipalo](https://huggingface.co/jalipalo/CodeRankEmbed-onnx) and quantized for fast CPU inference.
## What is this?
CodeRankEmbed is a 137M-parameter embedding model trained specifically for code search and retrieval. This repository provides a **dynamic INT8 weight-quantized** version that is significantly smaller and faster with negligible quality loss:
| | FP32 (original) | INT8 (this model) |
|---|---|---|
| **File size** | 522 MB | 132 MB (-75%) |
| **CPU inference** | 1.00× | ~2.09× faster |
| **Min cosine vs FP32** | 1.000 | 0.961 |
| **Calibration data needed** | N/A | None |
Quantization was done with ONNX Runtime's `quantize_dynamic` (weights only, `QInt8`, `per_channel=True`). Weights are stored as INT8 while activations stay in FP32 in the graph and are quantized on the fly at runtime, the approach recommended for transformer/embedding models in the [ONNX Runtime documentation](https://onnxruntime.ai/docs/performance/model-optimizations/quantization.html).
## Usage
### With `@huggingface/transformers` (JavaScript / Node.js)
```js
import { pipeline } from "@huggingface/transformers";
const extractor = await pipeline(
  "feature-extraction",
  "mrsladoje/CodeRankEmbed-onnx-int8",
  { dtype: "q8" } // loads onnx/model_quantized.onnx
);
const output = await extractor("def hello(): return 42", {
  pooling: "mean",
  normalize: true,
});
console.log(output.data); // Float32Array of 768 dimensions
```
### With `optimum` (Python)
```python
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer
model = ORTModelForFeatureExtraction.from_pretrained(
    "mrsladoje/CodeRankEmbed-onnx-int8",
    file_name="onnx/model_quantized.onnx",
)
tokenizer = AutoTokenizer.from_pretrained("mrsladoje/CodeRankEmbed-onnx-int8")
inputs = tokenizer("def hello(): return 42", return_tensors="pt")
outputs = model(**inputs)
embeddings = outputs.last_hidden_state.mean(dim=1)
```
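For retrieval, the pooled embeddings are typically L2-normalized so that cosine similarity reduces to a plain dot product. A minimal NumPy sketch; the random vectors below are stand-ins for real mean-pooled model outputs:

```python
import numpy as np

def l2_normalize(v: np.ndarray) -> np.ndarray:
    """L2-normalize along the last axis."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Stand-ins for two 768-dim mean-pooled embeddings.
rng = np.random.default_rng(0)
query, doc = l2_normalize(rng.standard_normal((2, 768)))

# Dot product of unit vectors = cosine similarity.
similarity = float(query @ doc)
```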
### With `onnxruntime` directly (Python)
```python
import onnxruntime as ort
import numpy as np
from tokenizers import Tokenizer
tokenizer = Tokenizer.from_pretrained("mrsladoje/CodeRankEmbed-onnx-int8")
tokenizer.enable_padding(length=128, pad_id=0)
tokenizer.enable_truncation(max_length=128)
session = ort.InferenceSession("onnx/model_quantized.onnx")
encoded = tokenizer.encode("def hello(): return 42")
input_ids = np.array([encoded.ids], dtype=np.int64)
attention_mask = np.array([encoded.attention_mask], dtype=np.int64)
outputs = session.run(None, {"input_ids": input_ids, "attention_mask": attention_mask})
embedding = outputs[1] # sentence_embedding output, shape (1, 768)
```
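The `sentence_embedding` output at index 1 is specific to this export. If you only have the token-level hidden states (`outputs[0]`), you can mean-pool them yourself, ignoring padding positions; a sketch with a hypothetical `masked_mean_pool` helper, assuming hidden states of shape `(batch, seq_len, hidden_dim)`:

```python
import numpy as np

def masked_mean_pool(hidden: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Average token embeddings over real (non-padding) positions only.

    hidden: (batch, seq_len, hidden_dim); mask: (batch, seq_len) of 0/1.
    """
    m = mask[:, :, None].astype(hidden.dtype)
    return (hidden * m).sum(axis=1) / np.clip(m.sum(axis=1), 1e-9, None)

# Toy check: two real tokens, two padding tokens.
hidden = np.ones((1, 4, 3), dtype=np.float32)
mask = np.array([[1, 1, 0, 0]], dtype=np.int64)
pooled = masked_mean_pool(hidden, mask)  # shape (1, 3)
```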
## Quantization Details
| Parameter | Value |
|---|---|
| Method | `quantize_dynamic` (ONNX Runtime) |
| Weight type | `QInt8` (signed 8-bit integer) |
| Scope | Weights only; activations quantized dynamically at runtime |
| Per-channel | Yes |
| Calibration | None required |
| ORT version | 1.21.x |
**Why dynamic over static?** Static INT8 quantization requires calibration data to pre-compute activation ranges. For transformer embedding models, activation distributions vary widely with input content and sequence length, making static calibration brittle (we validated this: static QDQ produced cosine similarities as low as 0.09–0.26 with MinMax calibration). Dynamic quantization sidesteps this entirely: weights are quantized offline and activations are quantized at runtime, giving robust quality across all inputs.
## Quality Validation
Validated on 10 code snippets across Python, JavaScript, Go, Java, Rust, TypeScript, and SQL:
```
Model            Size      Speedup  Min cosine vs FP32  Quality
FP32 (baseline)  522.3 MB  1.00×    –                   baseline
Dynamic INT8     132.2 MB  2.09×    0.9610              excellent
```
A cosine similarity ≥ 0.96 means the INT8 embeddings point in essentially the same direction as FP32. For retrieval tasks, especially with a reranker in the pipeline, this difference is undetectable in practice.
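The min-cosine metric used above takes only a few lines of NumPy to compute. In this sketch the arrays are stand-ins for the FP32 and INT8 embeddings of the same snippets, with the INT8 side modeled as FP32 plus small quantization noise:

```python
import numpy as np

def min_cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Minimum row-wise cosine similarity between two embedding matrices."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return float((a * b).sum(axis=1).min())

# Stand-ins for embeddings of 10 snippets (768-dim each).
rng = np.random.default_rng(0)
fp32 = rng.standard_normal((10, 768))
int8 = fp32 + 0.01 * rng.standard_normal((10, 768))  # ~1% perturbation
worst = min_cosine(fp32, int8)
```

Reporting the minimum rather than the mean makes the check conservative: a single badly quantized embedding would drag the score down immediately.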
The ~2Γ CPU speedup is real compute acceleration (not just faster file loading), coming from ONNX Runtime's `MatMulIntegerToFloat` fused kernels operating on INT8 weights. VNNI-capable CPUs (Intel 10th gen+, AMD Zen4+) may see even larger gains.
## Attribution
- **Original model:** [nomic-ai/CodeRankEmbed](https://huggingface.co/nomic-ai/CodeRankEmbed) – MIT License
- **ONNX conversion:** [jalipalo/CodeRankEmbed-onnx](https://huggingface.co/jalipalo/CodeRankEmbed-onnx) – MIT License (inherited)
- **INT8 quantization:** this repository – MIT License
All work in this repository respects and complies with the MIT license of the original model.