# Molly LoRA: Llama 3.3 70B Domain Specialist
Molly AI is a self-trained, domain-specialist model created by CoreLabs, an R&D AI Open Source Lab.

This adapter was trained autonomously by the LAB platform, an end-to-end pipeline that curates data, trains models, evaluates them against frontier benchmarks, and deploys specialist AI agents without human intervention.
## Model Details
| Property | Value |
|---|---|
| Base Model | meta-llama/Llama-3.3-70B-Instruct |
| Adapter Type | LoRA (PEFT) |
| LoRA Rank (r) | 32 |
| LoRA Alpha | 64 |
| LoRA Dropout | 0.05 |
| Training Loss | 0.4012 |
| Training Hardware | NVIDIA GB10 (Grace Blackwell, 128 GB unified memory) |
| Training Time | ~13 minutes |
| Max Sequence Length | 2048 |
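The adapter hyperparameters in the table correspond to a PEFT `LoraConfig` along these lines (a sketch: `target_modules` is a common choice for Llama-style models and an assumption here, not something stated on this card):

```python
from peft import LoraConfig

# Values taken from the Model Details table above.
# target_modules is an assumed, typical setting for Llama architectures.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```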
## Training Data
This adapter was trained on a curated subset of the LAB platform's multi-domain dataset:
| Metric | Value |
|---|---|
| Total Platform Records | 765,871 |
| Training Records (this run) | 121 |
| Evaluation Records | 13 |
| Evaluation Prompts | 35 (FULL mode) |
| Evaluation Pass | Yes |
| Unique Training Domains | 64+ |
## Molly AI Performance
Molly ranks #4 in the LAB platform's domain-specific evaluation suite, ahead of Gemini 2.5 Pro:
| Rank | Model | Score |
|---|---|---|
| 1 | Claude 4 Opus | 95.5 |
| 2 | GPT-4o | 91.7 |
| 3 | DeepSeek-V3 | 88.2 |
| 4 | Molly (CoreLabs) | 87.5 |
| 5 | Gemini 2.5 Pro | 85.7 |
## Domain Strengths
| Domain | Score |
|---|---|
| Financial Systems & Economics | 94.1 |
| Smart Contract Engineering | 93.5 |
| Quantitative Finance | 91.7 |
| Security Audit & Risk Analysis | 90.7 |
| Agent Orchestration | 86.7 |
## How to Use
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.3-70B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "BoomJules/molly-lora-llama3.3-70b")
tokenizer = AutoTokenizer.from_pretrained("BoomJules/molly-lora-llama3.3-70b")

# Chat-style generation (illustrative prompt and parameters).
messages = [{"role": "user", "content": "Explain reentrancy risk in smart contracts."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
## Training Pipeline
The LAB platform uses an autonomous training flywheel:
- **Data Curation**: multi-source ingestion with Merkle-verified integrity
- **Multi-Teacher Distillation**: 7 frontier models generate DPO preference pairs
- **LoRA Fine-Tuning**: PEFT training on NVIDIA GB10 hardware
- **Automated Evaluation**: a 35-prompt evaluation suite across 10 dimensions
- **Deployment**: NIM + vLLM production inference with 56 specialist agent roles
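The flywheel above can be sketched as a simple staged loop. This is a pure-Python illustration: the stage names and the two recorded values (121 training records, a passing evaluation) come from this card, while the function bodies are placeholders, not CoreLabs' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class RunState:
    records: int = 0
    passed: bool = False
    log: list = field(default_factory=list)

def curate(state):        # 1. multi-source ingestion (integrity checks elided)
    state.records = 121   # training records for this run, per the card
    state.log.append("curate")

def distill(state):       # 2. teacher models generate DPO preference pairs
    state.log.append("distill")

def finetune(state):      # 3. LoRA/PEFT training step
    state.log.append("finetune")

def evaluate(state):      # 4. 35-prompt evaluation gate
    state.passed = True   # this run passed, per the card
    state.log.append("evaluate")

def deploy(state):        # 5. deploy only if the evaluation gate passed
    if state.passed:
        state.log.append("deploy")

state = RunState()
for stage in (curate, distill, finetune, evaluate, deploy):
    stage(state)
print(state.log)  # ['curate', 'distill', 'finetune', 'evaluate', 'deploy']
```

The point of the gate in `deploy` is that a failed evaluation halts the flywheel before anything reaches production inference.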
## NVIDIA Stack
Built entirely on the NVIDIA AI platform:
- NeMo Framework (training)
- NVIDIA NIM (inference, AI Enterprise)
- NeMo Guardrails (safety)
- NemoClaw / OpenShell (agent sandbox)
- NeMo Agent Toolkit (profiling)
- Isaac Sim (robotics simulation)
## Links
- Platform: iamolly.ai
- Company: CoreLabs Group
## License
Apache 2.0
CoreLabs | R&D AI Open Source Labs | Panama | 2026