---
license: gemma
datasets:
- NbAiLab/aurora-sft-2512-filtered
language:
- 'no'
- nb
- nn
base_model: NbAiLab/borealis-4b-instruct-preview
pipeline_tag: text-generation
library_name: mlx
tags:
- conversational
- instruct
- experimental
- mlx
---

# Borealis 4B Instruct MLX (Preview)

Release: Dec 22nd, 2025.

## Model summary

**NbAiLab/borealis-4b-instruct-preview-mlx-8bits** is an MLX 8-bit quantized version of a **4B-parameter** instruction-tuned **preview** model intended for early testing and feedback. It is an **experiment** and should be treated as pre-release quality.

The original model is [NbAiLab/borealis-4b-instruct-preview](https://huggingface.co/NbAiLab/borealis-4b-instruct-preview).

| Model | Bits | Format |
|---|---:|---|
| [NbAiLab/borealis-4b-instruct-preview](https://huggingface.co/NbAiLab/borealis-4b-instruct-preview) | BF16 | Transformers (safetensors) |
| [NbAiLab/borealis-4b-instruct-preview-gguf](https://huggingface.co/NbAiLab/borealis-4b-instruct-preview-gguf) | 8 | GGUF (`q8_0`) |
| [NbAiLab/borealis-4b-instruct-preview-gguf](https://huggingface.co/NbAiLab/borealis-4b-instruct-preview-gguf) | 16 | GGUF (`f16`) |
| [NbAiLab/borealis-4b-instruct-preview-gguf](https://huggingface.co/NbAiLab/borealis-4b-instruct-preview-gguf) | BF16 | GGUF (`bf16`) |
| [NbAiLab/borealis-4b-instruct-preview-mlx](https://huggingface.co/NbAiLab/borealis-4b-instruct-preview-mlx) | 32 | MLX |
| [NbAiLab/borealis-4b-instruct-preview-mlx-8bits](https://huggingface.co/NbAiLab/borealis-4b-instruct-preview-mlx-8bits) | 8 | MLX (quantized) |
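
To fetch a specific variant locally (for example the 8-bit MLX weights), the Hugging Face Hub client can be used. A minimal sketch, assuming `huggingface_hub` is installed:

```python
# Download the 8-bit MLX weights into the local Hugging Face cache.
from huggingface_hub import snapshot_download

local_path = snapshot_download("NbAiLab/borealis-4b-instruct-preview-mlx-8bits")
print(local_path)  # directory containing the quantized weights and tokenizer files
```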

This model, [NbAiLab/borealis-4b-instruct-preview-mlx-8bits](https://huggingface.co/NbAiLab/borealis-4b-instruct-preview-mlx-8bits), was converted to MLX format from [NbAiLab/borealis-4b-instruct-preview](https://huggingface.co/NbAiLab/borealis-4b-instruct-preview) using mlx-lm version **0.29.1**.
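
For reference, a comparable conversion can be reproduced with the `mlx_lm` convert API. The output directory below is an illustrative assumption, not the exact command used for this checkpoint:

```python
# Sketch: 8-bit MLX conversion with mlx-lm.
from mlx_lm import convert

convert(
    hf_path="NbAiLab/borealis-4b-instruct-preview",     # original BF16 weights
    mlx_path="borealis-4b-instruct-preview-mlx-8bits",  # local output directory (assumed)
    quantize=True,                                       # enable quantization
    q_bits=8,                                            # 8-bit weights
)
```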

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the quantized weights and tokenizer from the Hub (or local cache).
model, tokenizer = load("NbAiLab/borealis-4b-instruct-preview-mlx-8bits")

prompt = "hei :)"

# Wrap the prompt in the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
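
For longer answers it can be more convenient to stream tokens as they are produced. A minimal sketch using `stream_generate`, assuming the `GenerationResponse` streaming API of recent mlx-lm releases; the prompt text and `max_tokens` value are arbitrary choices:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("NbAiLab/borealis-4b-instruct-preview-mlx-8bits")

messages = [{"role": "user", "content": "Fortell meg kort om nordlyset."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print each chunk of text as soon as it is generated.
for chunk in stream_generate(model, tokenizer, prompt, max_tokens=512):
    print(chunk.text, end="", flush=True)
print()
```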