---
license: gemma
datasets:
- NbAiLab/aurora-sft-2512-filtered
language:
- 'no'
- nb
- nn
base_model:
- google/gemma-3-4b-it
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- conversational
- instruct
- experimental
---

# Borealis 4B Instruct (Preview)

Release: Dec 22nd, 2025.

## Model summary
**NbAiLab/borealis-4b-instruct-preview** is a **4B-parameter** instruction-tuned **preview** model intended for early testing and feedback. It is an **experiment** and should be treated as pre-release quality.

This model is based on [**google/gemma-3-4b-it**](https://huggingface.co/google/gemma-3-4b-it) and has been fine-tuned on text-only instructions.

| Model | Bits | Format |
|---|---:|---|
| [NbAiLab/borealis-4b-instruct-preview](https://huggingface.co/NbAiLab/borealis-4b-instruct-preview) | 16 | Transformers (safetensors, BF16) |
| [NbAiLab/borealis-4b-instruct-preview-gguf](https://huggingface.co/NbAiLab/borealis-4b-instruct-preview-gguf) | 8 | GGUF (`q8_0`) |
| [NbAiLab/borealis-4b-instruct-preview-gguf](https://huggingface.co/NbAiLab/borealis-4b-instruct-preview-gguf) | 16 | GGUF (`f16`) |
| [NbAiLab/borealis-4b-instruct-preview-gguf](https://huggingface.co/NbAiLab/borealis-4b-instruct-preview-gguf) | 16 | GGUF (`bf16`) |
| [NbAiLab/borealis-4b-instruct-preview-mlx](https://huggingface.co/NbAiLab/borealis-4b-instruct-preview-mlx) | 32 | MLX |
| [NbAiLab/borealis-4b-instruct-preview-mlx-8bits](https://huggingface.co/NbAiLab/borealis-4b-instruct-preview-mlx-8bits) | 8 | MLX (quantized) |

## Training data
Supervised fine-tuning (SFT) used the **NbAiLab/aurora-sft-2512** dataset (not yet released).

## ⚠️ Safety / alignment disclaimer (important)
This is a **preview experiment** and **has not been safety-aligned yet**. The model may produce **harmful, biased, or insensitive** outputs (including content that is offensive, unsafe, or inappropriate). Do not use it for safety-critical or high-stakes applications, and add your own safety mitigations if deploying.

## Intended use
- Norwegian-centric assistant-style tasks (e.g., drafting, summarization, Q&A, light reasoning).
- Assessment of Norwegian writing style and quality.
- Early evaluation of behavior, language coverage (Norwegian / Bokmål / Nynorsk), and quality.

## Limitations
- Preview quality; outputs may be unstable, and the model may hallucinate.
- Not aligned for safety; may follow harmful instructions or generate problematic content (see disclaimer above).

## Weights & formats

### Transformers (original)
- **NbAiLab/borealis-4b-instruct-preview** (safetensors).
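
To fetch the safetensors weights locally, one option (assuming the `huggingface_hub` CLI is installed; the target directory name is just an example) is:

```bash
# Download the full repository to a local directory
# (requires: pip install -U "huggingface_hub[cli]")
huggingface-cli download NbAiLab/borealis-4b-instruct-preview \
  --local-dir borealis-4b-instruct-preview
```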

### GGUF quantizations
Available in [**NbAiLab/borealis-4b-instruct-preview-gguf**](https://huggingface.co/NbAiLab/borealis-4b-instruct-preview-gguf):
- `model-q8_0.gguf`
- `model-f16.gguf`
- `model-bf16.gguf`

Use:
```bash
ollama run hf.co/NbAiLab/borealis-4b-instruct-preview-gguf:BF16
```
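
If you prefer running the GGUF files with llama.cpp directly, a minimal sketch (assuming `llama-cli` is built and `model-f16.gguf` has been downloaded from the GGUF repo; the Norwegian prompt is only an illustration):

```bash
# Generate up to 128 tokens from a single prompt with llama.cpp
llama-cli -m model-f16.gguf \
  -p "Skriv et kort dikt om nordlyset." \
  -n 128
```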

### MLX (Apple Silicon)
Available in [**NbAiLab/borealis-4b-instruct-preview-mlx**](https://huggingface.co/NbAiLab/borealis-4b-instruct-preview-mlx) and quantized to [8 bits](https://huggingface.co/NbAiLab/borealis-4b-instruct-preview-mlx-8bits).

Use:
```bash
# Install MLX LM
uv tool install mlx-lm

# Interactive chat REPL
mlx_lm.chat --model "NbAiLab/borealis-4b-instruct-preview-mlx"
```
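
For one-shot generation instead of an interactive session, `mlx_lm.generate` can be used (the Norwegian prompt below is only an illustration):

```bash
# Single-prompt generation with MLX LM
mlx_lm.generate --model "NbAiLab/borealis-4b-instruct-preview-mlx" \
  --prompt "Skriv et kort dikt om nordlyset." \
  --max-tokens 128
```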

## Acknowledgements
Thanks to the **Gemma** team at Google for releasing Gemma 3 and to everyone contributing feedback on this preview.