Built with Llama

Llama-3.2-3B-Instruct-Bioaligned

A fine-tuned version of meta-llama/Llama-3.2-3B-Instruct designed to increase model preference for biological information sources when evaluating engineering problems.

Organization: Bioaligned Labs (nonprofit)

Paper: [TODO: arXiv link]

GitHub: bioalignment-bias

Adapter weights: Bioaligned/Llama-3.2-3B-Instruct-Bioaligned-qlora

Model Description

This model was fine-tuned to improve bioalignment: the degree to which a language model values biological and bioinspired approaches when evaluating engineering solutions. Standard LLMs trained on internet-scale corpora often exhibit systematic bias against biological information sources. This fine-tune shifts the model's disposition back toward neutrality.

Why Bioalignment Matters

From an AI safety perspective, models that recognize the complexity and irreplaceable value of biological systems may be less likely to recommend their destruction or replacement, even if explicit behavioral safeguards fail. Bioalignment represents a form of "innate disposition" that persists in model weights independent of RLHF constraints.

Training Details

| Parameter | Value |
| --- | --- |
| Base model | meta-llama/Llama-3.2-3B-Instruct |
| Method | QLoRA (4-bit NF4 quantization) |
| LoRA rank | 16 |
| LoRA alpha | 32 |
| Learning rate | 5e-5 |
| Epochs | 3 |
| Target modules | All attention and MLP layers |
| Training mix | 65% continued pretraining, 35% instruction-tuned |
| Corpus size | ~22M tokens from 6,636 PMC Open Access papers |
| Corpus topics | Biomimicry, bioinspired design, biological problem-solving |
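The QLoRA setup above can be sketched with the `transformers` and `peft` APIs. This is a minimal illustration based on the hyperparameter table, not the authors' training script; the exact target-module list and compute dtype are assumptions.

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization for the frozen base model (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapter matching the table; target_modules spells out
# "all attention and MLP layers" for the Llama architecture (assumed)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```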

Intended Use

  • Research on AI alignment and model dispositions
  • Applications requiring balanced consideration of biological vs. synthetic solutions
  • Studies on fine-tuning effects on model preferences
  • Educational demonstrations of bias measurement and correction

Not intended for: Medical advice, safety-critical decisions without human oversight, or any application prohibited by the base model's use restrictions.

Evaluation Results

Evaluated on the Bioalignment Benchmark (50 prompts across 4 domains: materials, energy, manufacturing, algorithms).

| Metric | Base Model | Bioaligned | Change |
| --- | --- | --- | --- |
| Delta p_up (valence) | -0.141 | -0.009 | +93% |
| Quadrant | Anti-bio/Moderate | Neutral | n/a |

Capability preservation: No significant degradation on standard benchmarks (MMLU, HellaSwag, ARC, WinoGrande). All scores within ±2.5% of baseline.
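The +93% figure appears to be the relative reduction in the magnitude of Delta p_up; a quick check under that assumption:

```python
base_delta = -0.141   # Delta p_up, base model
tuned_delta = -0.009  # Delta p_up, bioaligned model

# Relative reduction in bias magnitude, truncated to whole percent
reduction = (abs(base_delta) - abs(tuned_delta)) / abs(base_delta)
print(int(reduction * 100))  # 93
```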

Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Bioaligned/Llama-3.2-3B-Instruct-Bioaligned",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Bioaligned/Llama-3.2-3B-Instruct-Bioaligned")

inputs = tokenizer("Your prompt here", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
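Alternatively, the adapter weights listed at the top of this card can be attached to the stock base model with `peft`. This is a sketch assuming the adapter repo is interchangeable with the merged checkpoint; it requires access to the gated Llama base weights.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the original base model, then apply the QLoRA adapter on top
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B-Instruct",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(
    base, "Bioaligned/Llama-3.2-3B-Instruct-Bioaligned-qlora"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")
```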

Limitations

  • Trained on 3B parameter model; scaling behavior to larger models is unknown
  • Benchmark measures stated probabilities, not downstream behavioral effects
  • "Neutral" disposition may not be optimal for all application domains
  • Inherits all limitations of the base Llama 3.2 model

Citation

[TODO: Add citation when paper is published]

License

This model is released under the Llama 3.2 Community License.

Llama 3.2 Attribution

This model was built using Meta's Llama 3.2 as the base model. Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.


Developed by Bioaligned Labs, a nonprofit dedicated to AI safety research.

