darkmem-classifier-v1
Seven-class memory-type classifier for darkmem. Labels: fact, decision, preference, problem, reference, architecture, milestone.
Metrics
Accuracy 0.975 / macro F1 0.975 on the 1,000-row gold evaluation set (gold_v3.jsonl).
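Macro F1 is the unweighted mean of per-class F1, so each of the seven labels counts equally regardless of how often it appears in the gold set. A minimal sketch of the computation (pure Python, toy labels rather than the gold data):

```python
# Per-class F1: harmonic mean of precision and recall for one label.
def f1_per_class(y_true, y_pred, cls):
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Macro F1: unweighted mean of per-class F1, so rare classes
# weigh as much as common ones.
def macro_f1(y_true, y_pred, classes):
    return sum(f1_per_class(y_true, y_pred, c) for c in classes) / len(classes)

# Toy example with two of the seven classes.
truth = ["fact", "fact", "decision", "decision"]
preds = ["fact", "decision", "decision", "decision"]
score = macro_f1(truth, preds, ["fact", "decision"])
```

`sklearn.metrics.f1_score(..., average="macro")` computes the same quantity.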
Base model
answerdotai/ModernBERT-base
Usage
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tok = AutoTokenizer.from_pretrained("darkraise/darkmem-classifier-v1", trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained("darkraise/darkmem-classifier-v1", trust_remote_code=True)
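The sequence-classification head returns raw logits over the seven classes. A minimal post-processing sketch, using illustrative logits in place of a live `model(**tok(text, return_tensors="pt")).logits[0]` forward pass; the authoritative label order comes from `model.config.id2label`, the list below just mirrors the order given in this card:

```python
import math

# The seven labels from this card; the real mapping is model.config.id2label.
LABELS = ["fact", "decision", "preference", "problem",
          "reference", "architecture", "milestone"]

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_label(logits, labels=LABELS):
    # Return the highest-probability label and its probability.
    probs = softmax(logits)
    i = max(range(len(probs)), key=probs.__getitem__)
    return labels[i], probs[i]

# Illustrative logits; index 1 ("decision") dominates here.
example_logits = [0.1, 3.2, -1.0, 0.0, 0.5, -0.3, 0.2]
label, prob = top_label(example_logits)
```

With a real forward pass, the same two helpers apply to `logits.tolist()` from the model output.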
License
Inherits the license of the base model. Fine-tuned weights published under the same terms unless noted otherwise in the repo.
Provenance
Fine-tuned as part of darkmem — a centralized memory
system for AI agents. Training recipe and evaluation scripts are in the
fine-tuning/ subtree of the darkmem repository.
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="darkraise/darkmem-classifier-v1")