# AINative Platform Adapter v1 - Kwanzaa Knowledge
Llama-3.2-1B adapter fine-tuned on Kwanzaa cultural knowledge and historical sources for the AINative platform.
## Model Details
- Base Model: unsloth/Llama-3.2-1B-Instruct (meta-llama/Llama-3.2-1B-Instruct)
- Method: QLoRA (4-bit quantization)
- LoRA Rank: 16
- LoRA Alpha: 32
- Training Framework: Unsloth + HuggingFace Transformers
- Fine-tuned for: Kwanzaa cultural expertise, historical accuracy, citation generation
## Training Details
This adapter was trained to provide accurate, well-cited responses about:
- Kwanzaa principles (Nguzo Saba) and their applications
- Historical context and cultural significance
- Cultural contributions and community practices
- Proper citation of primary and secondary sources
### Dataset Composition
- Citation Examples: Proper source attribution and formatting
- Grounded Answers: Factual responses with evidence
- Cultural Contributions: Historical and contemporary contributions
- Format Compliance: Consistent response formatting
- Refusal Patterns: Appropriate handling of out-of-scope queries
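The categories above can be illustrated with a hypothetical training record in chat format. The field names and the example content here are illustrative only, not the actual dataset schema:

```python
import json

# A hypothetical "grounded answer" record in chat format.
# Field names and content are illustrative, not the real dataset schema.
record = {
    "category": "grounded_answers",
    "messages": [
        {"role": "user", "content": "What does Umoja mean?"},
        {
            "role": "assistant",
            "content": (
                "Umoja (Unity) is the first of the Nguzo Saba, the Seven "
                "Principles of Kwanzaa. "
                '[Source: Karenga, "Kwanzaa: A Celebration of Family, '
                'Community and Culture" (1998), ch. 3]'
            ),
        },
    ],
}

# Serialize as one JSONL line, a common format for fine-tuning data.
line = json.dumps(record)
print(line[:60])
```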
### Training Configuration
- Epochs: 3-4
- Learning Rate: 2e-4
- Batch Size: 2
- Gradient Accumulation: 4-8
- Max Sequence Length: 2048
- Optimizer: AdamW (8-bit)
- Scheduler: Cosine with warmup
- LoRA Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
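As a small arithmetic sketch of what these settings imply (assuming single-device training), the effective batch size per optimizer step is the per-device batch size times the gradient accumulation steps, and LoRA scales adapter updates by alpha / rank:

```python
# Hyperparameters from the table above (accumulation shown as its min/max range).
batch_size = 2
grad_accum_steps = (4, 8)  # gradient accumulation range used across runs
lora_rank, lora_alpha = 16, 32

# Effective batch size per optimizer step = per-device batch * accumulation.
effective_batch = [batch_size * g for g in grad_accum_steps]
print(effective_batch)  # [8, 16]

# LoRA scales adapter weight updates by alpha / rank.
lora_scaling = lora_alpha / lora_rank
print(lora_scaling)  # 2.0
```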
## Usage
### With PEFT

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model
base_model = "unsloth/Llama-3.2-1B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    device_map="auto",
    torch_dtype="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Load the adapter
model = PeftModel.from_pretrained(model, "ainativestudio/ainative-adapter-v1")

# Generate a response
prompt = """What is the principle of Umoja and how is it applied in daily life?
Please provide citations from primary sources."""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.9,
    do_sample=True,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### With Unsloth (for training/inference)

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ainativestudio/ainative-adapter-v1",
    max_seq_length=2048,
    dtype=None,  # auto-detect
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable inference mode

# Use the model (move inputs to the model's device before generating)
inputs = tokenizer(
    "What are the Seven Principles of Kwanzaa?", return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Performance
The adapter is optimized for:
- Accurate cultural knowledge representation
- Proper citation formatting
- Grounded, evidence-based responses
- Appropriate scope handling (refusals for out-of-domain queries)
## Limitations
- Trained specifically on Kwanzaa cultural knowledge
- May not perform well on general-purpose tasks
- Requires base model knowledge for broader reasoning
- Best used with retrieval-augmented generation (RAG) for up-to-date information
## Integration with AINative Platform
This adapter is designed to work with:
- ZeroDB: Vector database for semantic search
- RAG Pipeline: Enhanced with retrieved primary sources
- Agent Swarm: Multi-agent coordination for complex queries
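As a hedged illustration of the RAG pattern described above, retrieved passages might be prepended to the prompt before generation. All names here are hypothetical; this is not the actual ZeroDB or AINative API:

```python
def build_rag_prompt(question: str, passages: list[dict]) -> str:
    """Prepend retrieved source passages to the user question.

    `passages` is a list of {"source": ..., "text": ...} dicts, e.g. as
    returned by a vector search. This helper is illustrative only, not
    part of the AINative platform API.
    """
    context = "\n\n".join(
        f'[Source: {p["source"]}]\n{p["text"]}' for p in passages
    )
    return (
        "Answer using only the sources below and cite them.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

prompt = build_rag_prompt(
    "What is Umoja?",
    [{"source": 'Karenga, "Kwanzaa" (1998), ch. 3',
      "text": "Umoja means unity."}],
)
print(prompt.splitlines()[0])
```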
## Citation Format

Responses include citations in this format:

```
[Source: Author Last Name, "Title" (Year), page/section]
```
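A small helper that renders this format (illustrative only; the author and locator in the example are placeholders, not output from the adapter):

```python
def format_citation(author_last: str, title: str, year: int, locator: str) -> str:
    """Render a citation in the adapter's output style:
    [Source: Author Last Name, "Title" (Year), page/section]
    """
    return f'[Source: {author_last}, "{title}" ({year}), {locator}]'

# Placeholder example values:
print(format_citation("Karenga", "Kwanzaa: A Celebration of Family, Community and Culture", 1998, "ch. 2"))
```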
## License
Apache 2.0
## Citation
If you use this adapter, please cite:
```bibtex
@misc{ainative-kwanzaa-adapter-v1,
  title        = {AINative Platform Adapter v1 - Kwanzaa Knowledge},
  author       = {AINative Studio},
  year         = {2026},
  publisher    = {HuggingFace},
  howpublished = {\url{https://huggingface.co/ainativestudio/ainative-adapter-v1}}
}
```
## Contact
For questions or issues:
- Repository: kwanzaa-project
- Platform: AINative