---
library_name: transformers.js
base_model: LtG/norbert3-base
tags:
- onnx
- transformers.js
- feature-extraction
- sentence-similarity
- norwegian
language:
- no
pipeline_tag: feature-extraction
license: apache-2.0
---
# ONNX version of LtG/norbert3-base
This repository contains **ONNX-converted weights** for the Norwegian language model [LtG/norbert3-base](https://huggingface.co/LtG/norbert3-base).
The conversion enables this state-of-the-art Norwegian model to run directly in browsers or Node.js environments using [Transformers.js](https://huggingface.co/docs/transformers.js).
It includes both:
1. **Quantized (int8):** Faster, smaller (default).
2. **Full Precision (float32):** Higher theoretical accuracy.
## Usage (Node.js/Web)
First, install the library:
```bash
npm install @huggingface/transformers
```
### Option 1: Use Quantized Model (Recommended)
This is the default behavior: the library loads `model_quantized.onnx`, which is roughly 4x smaller and runs faster at inference time.
```javascript
import { pipeline } from '@huggingface/transformers';

// Load the model (the quantized version is selected automatically)
const embedder = await pipeline(
  'feature-extraction',
  'lebchen/norbert3-base-onnx',
  { device: 'auto' }
);

const sentences = [
  "Dette er en setning på norsk.",
  "Norbert er en språkmodell fra UiO."
];

// NorBERT generally benefits from mean pooling for sentence representations
const output = await embedder(sentences, { pooling: 'mean', normalize: true });
console.log(output.tolist());
```
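Since this model is tagged for sentence similarity, the pooled embeddings can be compared directly. Below is a minimal cosine-similarity helper in plain JavaScript (not part of the library); note that with `normalize: true`, the dot product alone already equals the cosine similarity, so the helper mainly matters for unnormalized vectors.

```javascript
// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Example with the pooled output from above:
// const [vecA, vecB] = output.tolist();
// console.log(cosineSimilarity(vecA, vecB));
```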
### Option 2: Use Full Precision Model
To load the full-precision `model.onnx`, explicitly request `fp32` weights via the `dtype` option (in `@huggingface/transformers` v3, `dtype` replaces the older `quantized: false` flag):
```javascript
const embedder = await pipeline(
  'feature-extraction',
  'lebchen/norbert3-base-onnx',
  {
    device: 'auto',
    dtype: 'fp32'
  }
);
```
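Either variant first produces one embedding per token; `pooling: 'mean'` then averages them into a single sentence vector. The pooling step amounts to the following sketch (simplified: the library additionally uses the attention mask so that padding tokens are excluded from the average):

```javascript
// Mean pooling: average per-token embeddings into one sentence vector.
// `tokens` is an array of token embeddings (seqLen x hiddenSize).
function meanPool(tokens) {
  const dim = tokens[0].length;
  const pooled = new Array(dim).fill(0);
  for (const tok of tokens) {
    for (let i = 0; i < dim; i++) pooled[i] += tok[i];
  }
  return pooled.map(v => v / tokens.length);
}
```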
## Credits & Attribution
The original model (NorBERT 3) was developed by the Language Technology Group (LTG) at the University of Oslo.
- **Original repository:** [LtG/norbert3-base](https://huggingface.co/ltg/norbert3-base)
- **Paper/citation:** please refer to the [original model card](https://huggingface.co/ltg/norbert3-base) for the proper citation if you use this model in academic work.
This distribution is converted to ONNX for compatibility with Transformers.js and retains the original Apache 2.0 license.