Built with Llama
# Llama-3.2-3B-Instruct-Bioaligned-qlora
QLoRA adapter weights for a bioaligned fine-tune of meta-llama/Llama-3.2-3B-Instruct.
**Note:** This repository contains only the LoRA adapter weights (~24M parameters), not the full model. You must have access to the base model to use this adapter.
- **Merged model:** Bioaligned/Llama-3.2-3B-Instruct-Bioaligned
- **Organization:** Bioaligned Labs (nonprofit)
- **Paper:** [TODO: arXiv link]
## Model Description
This adapter shifts model preference toward biological information sources when evaluating engineering problems, a property we call *bioalignment*. The adapter was trained on a curated corpus of PubMed Central (PMC) papers covering biomimicry, bioinspired design, and biological problem-solving.
## Quick Start
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load base model (requires access to meta-llama/Llama-3.2-3B-Instruct)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B-Instruct",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Load adapter
model = PeftModel.from_pretrained(
    base_model,
    "Bioaligned/Llama-3.2-3B-Instruct-Bioaligned-qlora",
)

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")

# Generate
inputs = tokenizer("Your prompt here", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
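The Quick Start snippet above passes raw text, but Llama-3.2-3B-Instruct is tuned on the Llama 3.x chat format; in practice, prefer `tokenizer.apply_chat_template(messages, add_generation_prompt=True)`. As a sketch of what that template produces (the header/`<|eot_id|>` token layout below is our reading of the standard Llama 3.x format, not something verified against this repository):

```python
# Sketch of the Llama 3.x chat layout (assumed format; prefer
# tokenizer.apply_chat_template in real code).
def format_llama3_prompt(user_message, system=None):
    parts = ["<|begin_of_text|>"]
    if system:
        parts.append(f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>")
    parts.append(f"<|start_header_id|>user<|end_header_id|>\n\n{user_message}<|eot_id|>")
    # Trailing assistant header cues the model to respond.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_llama3_prompt("How do barnacles adhere underwater?")
```

The resulting string can be tokenized and passed to `model.generate` exactly as in the Quick Start.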
## Training Details
| Parameter | Value |
|---|---|
| Base model | meta-llama/Llama-3.2-3B-Instruct |
| Method | QLoRA (4-bit NF4 quantization) |
| LoRA rank | 16 |
| LoRA alpha | 32 |
| Target modules | All attention and MLP layers |
| Learning rate | 5e-5 |
| Epochs | 3 |
| Training mix | 65% continued pretraining, 35% instruction-tuned |
| Corpus | ~22M tokens from 6,636 PMC Open Access papers |
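The ~24M adapter-parameter figure mentioned above can be sanity-checked from the table: LoRA adds r·(d_in + d_out) parameters per adapted linear layer. A back-of-the-envelope count, assuming the published Llama-3.2-3B dimensions (hidden size 3072, intermediate size 8192, 28 layers, 8 KV heads of head dim 128 — these come from the base model's config, not this card):

```python
# LoRA factorizes each update as A (d_in x r) @ B (r x d_out),
# adding r * (d_in + d_out) trainable parameters per adapted layer.
r = 16
hidden, inter, layers, kv_dim = 3072, 8192, 28, 8 * 128  # assumed Llama-3.2-3B config

def lora_params(d_in, d_out):
    return r * (d_in + d_out)

per_layer = (
    lora_params(hidden, hidden)        # q_proj
    + 2 * lora_params(hidden, kv_dim)  # k_proj, v_proj (grouped-query attention)
    + lora_params(hidden, hidden)      # o_proj
    + 2 * lora_params(hidden, inter)   # gate_proj, up_proj
    + lora_params(inter, hidden)       # down_proj
)
total = layers * per_layer
print(f"{total / 1e6:.1f}M adapter parameters")  # 24.3M, consistent with the card
```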
## Evaluation Results
**Bioalignment Benchmark** (50 prompts across materials, energy, manufacturing, and algorithms):
| Metric | Base | Bioaligned | Change |
|---|---|---|---|
| Δp_up (valence) | -0.141 | -0.009 | +93% |
No capability degradation was observed on standard benchmarks (MMLU, HellaSwag, ARC, WinoGrande).
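Reading the +93% as the relative shrinkage in the magnitude of Δp_up (our interpretation of the change column: the valence gap moves from -0.141 to -0.009, i.e. nearly to neutral), the arithmetic works out as:

```python
# Relative reduction in the magnitude of the valence gap (interpretation of "+93%").
base, tuned = -0.141, -0.009
improvement = (abs(base) - abs(tuned)) / abs(base)
print(f"{improvement:.1%}")  # 93.6%
```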
## Limitations
- Adapter only; requires base model access
- Trained on 3B model; scaling behavior unknown
- Measures stated probabilities, not downstream behavior
## Citation
[TODO: Add citation when paper is published]
## License
This adapter is released under the Llama 3.2 Community License.
Built using Meta's Llama 3.2. Copyright (c) Meta Platforms, Inc. All Rights Reserved.
Bioaligned Labs, an AI safety research nonprofit