Argonne 2.0 is a 4.9 billion parameter, decoder-only transformer language model trained from scratch.
| Component | Specification |
|---|---|
| Parameters | ~4.9B |
| Layers | 24 transformer blocks |
| Hidden Size | 4,080 |
| Attention Heads | 24 query / 8 key-value (GQA) |
| Context Length | 4,096 tokens |
| Vocabulary Size | 151,665 |
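To make the grouped-query attention (GQA) numbers above concrete, the minimal sketch below derives the per-head dimension, the query-to-KV grouping, and the resulting KV-cache footprint at full context directly from the spec table. The bf16 cache dtype (2 bytes per value) is our assumption, not stated in the card.

```python
# Sketch: derive GQA shapes from the spec table above.
# Assumption (not from the model card): KV cache kept in bf16 (2 bytes/value).
hidden_size = 4080
n_q_heads = 24
n_kv_heads = 8
n_layers = 24
context_len = 4096

head_dim = hidden_size // n_q_heads    # 4080 / 24 = 170
group_size = n_q_heads // n_kv_heads   # 3 query heads share each KV head

# Per token: K and V tensors, per layer, per KV head, at bf16 width.
bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * 2
cache_mib = bytes_per_token * context_len / 2**20

print(f"head_dim={head_dim}, group_size={group_size}")
print(f"KV cache at {context_len} tokens: ~{cache_mib:.0f} MiB (bf16)")
# With full multi-head attention (24 KV heads) the cache would be 3x larger.
```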
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the model in bfloat16 and let Accelerate place it across available devices.
model = AutoModelForCausalLM.from_pretrained(
    "PursuitOfDataScience/Argonne-2.0",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "PursuitOfDataScience/Argonne-2.0", trust_remote_code=True
)

# Tokenize a prompt and sample a continuation.
prompt = "The future of AI is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
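For interactive use, `transformers.TextStreamer` prints tokens to stdout as they are generated rather than waiting for the full sequence. A minimal sketch, reusing the `model` and `tokenizer` loaded above; the prompt and sampling settings are illustrative:

```python
from transformers import TextStreamer

# Stream decoded tokens as they arrive; skip echoing the prompt itself.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

inputs = tokenizer("The future of AI is", return_tensors="pt").to(model.device)
model.generate(**inputs, streamer=streamer, max_new_tokens=128, do_sample=True, temperature=0.7)
```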
License: Apache 2.0
To cite this model:

```bibtex
@misc{argonne2,
  author    = {PursuitOfDataScience},
  title     = {Argonne 2.0: A 4.9B Parameter Language Model},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/PursuitOfDataScience/Argonne-2.0}
}
```