How to use with the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

# Note: this is a text-only fine-tune, so use the text-generation pipeline.
pipe = pipeline("text-generation", model="NbAiLab/borealis-1b-instruct-preview")
messages = [
    {"role": "user", "content": "Hva er hovedstaden i Norge?"},
]
pipe(messages)
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("NbAiLab/borealis-1b-instruct-preview")
model = AutoModelForCausalLM.from_pretrained("NbAiLab/borealis-1b-instruct-preview")
messages = [
    {"role": "user", "content": "Hva er hovedstaden i Norge?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens, skipping special tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
Borealis 1B Instruct (Preview)

Release: Jan 31st, 2026.

Model summary

NbAiLab/borealis-1b-instruct-preview is a 1B-parameter instruction-tuned preview model intended for early testing and feedback. It is an experiment and should be treated as pre-release quality.

This model is based on google/gemma-3-1b-it and was fine-tuned on textual instructions only.
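Because the model inherits its chat template from Gemma 3, prompts follow the Gemma turn format. A minimal sketch of that format, assuming the base model's `<start_of_turn>`/`<end_of_turn>` markers are unchanged (in practice, prefer `tokenizer.apply_chat_template`, which applies the template shipped with the checkpoint):

```python
def build_gemma_prompt(messages):
    """Render chat messages into the Gemma 3 turn format.

    Gemma names the assistant role "model" and wraps each turn in
    <start_of_turn>/<end_of_turn>; the trailing open turn cues generation.
    """
    parts = []
    for m in messages:
        role = "model" if m["role"] == "assistant" else m["role"]
        parts.append(f"<start_of_turn>{role}\n{m['content']}<end_of_turn>\n")
    parts.append("<start_of_turn>model\n")  # open the assistant turn
    return "".join(parts)

prompt = build_gemma_prompt([{"role": "user", "content": "Hei!"}])
```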

Training data

Supervised fine-tuning (SFT) uses NbAiLab/aurora-sft-2512 (not released yet).

⚠️ Safety / alignment disclaimer (important)

This is a preview experiment and has not been safety-aligned yet. The model may produce harmful, biased, or insensitive outputs (including content that is offensive, unsafe, or inappropriate). Do not use it for safety-critical or high-stakes applications, and add your own safety mitigations if deploying.

Intended use

  • Norwegian-centric assistant-style tasks (e.g., drafting, summarization, Q&A, light reasoning).
  • Assessment of Norwegian writing style and quality.
  • Early evaluation of behavior, language coverage (Norwegian / Bokmål / Nynorsk), and quality.

Limitations

  • Preview quality; outputs may be unstable and the model may hallucinate.
  • Not aligned for safety; may follow harmful instructions or generate problematic content (see disclaimer above).

Weights & formats

Transformers (original)

  • NbAiLab/borealis-1b-instruct-preview (safetensors).

GGUF quantizations

Available in NbAiLab/borealis-1b-instruct-preview-gguf:

  • model-q8_0.gguf
  • model-f16.gguf
  • model-bf16.gguf
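The suffixes map roughly to bytes per weight (q8_0 stores 8-bit weights plus a per-block scale, roughly 34 bytes per 32 weights; f16 and bf16 use 2 bytes). A back-of-the-envelope file-size estimate, assuming the card's ~1.0B parameter count and ignoring metadata:

```python
params = 1.0e9  # ~1.0B parameters, per the model card
# Approximate bytes per weight for each GGUF variant
bytes_per_weight = {"q8_0": 34 / 32, "f16": 2.0, "bf16": 2.0}
sizes_gb = {fmt: params * bpw / 1e9 for fmt, bpw in bytes_per_weight.items()}
```

So the q8_0 file should come in at roughly half the size of the f16/bf16 files.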

Use:

ollama run hf.co/NbAiLab/borealis-1b-instruct-preview-gguf:BF16

MLX (Apple Silicon)

Available in NbAiLab/borealis-1b-instruct-preview-mlx and quantized to 8 bits.

Use:

# Install MLX LM
uv tool install mlx-lm

# Interactive chat REPL
mlx_lm.chat --model "NbAiLab/borealis-1b-instruct-preview-mlx"

Acknowledgements

Thanks to the Gemma team at Google for releasing Gemma 3 and to everyone contributing feedback on this preview.
