---
library_name: transformers
pipeline_tag: text-classification
base_model: answerdotai/ModernBERT-base
language:
- en
tags:
- text-classification
- memory
- darkmem
- modernbert
---
# darkmem-classifier-v1
Seven-class memory-type classifier for darkmem. Labels: fact, decision, preference, problem, reference, architecture, milestone.
## Metrics

Accuracy 0.975 / macro F1 0.975 on a 1,000-row gold evaluation set (`gold_v3.jsonl`).
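Macro F1 averages the per-class F1 scores with equal weight, so each of the seven labels counts the same regardless of how often it appears in the gold set. A minimal pure-Python sketch of the computation (the toy labels and predictions are illustrative, not the actual evaluation data):

```python
def macro_f1(y_true, y_pred, labels):
    """Unweighted mean of per-class F1 scores."""
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Toy example over three of the seven classes (illustrative only)
y_true = ["fact", "fact", "decision", "milestone"]
y_pred = ["fact", "decision", "decision", "milestone"]
print(round(macro_f1(y_true, y_pred, ["fact", "decision", "milestone"]), 3))  # 0.778
```

Because the average is unweighted, a class that appears only a handful of times in the gold set can pull macro F1 down even when overall accuracy stays high.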
## Base model

answerdotai/ModernBERT-base
## Usage

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("darkraise/darkmem-classifier-v1", trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained("darkraise/darkmem-classifier-v1", trust_remote_code=True)
```
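After a forward pass, the predicted class is the argmax over the seven logits, and `model.config.id2label` holds the index-to-label mapping. A minimal sketch of that decode step with illustrative logits (pure Python, no model download; the label order below is an assumption for illustration, check `id2label` for the real mapping):

```python
import math

# Assumed label order for illustration; the model's config.id2label is authoritative.
LABELS = ["fact", "decision", "preference", "problem",
          "reference", "architecture", "milestone"]

def decode(logits):
    """Softmax the raw logits and return (label, probability) for the top class."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    i = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[i], probs[i]

# Illustrative logits, as if taken from model(**inputs).logits for one input
label, p = decode([0.2, 3.1, -0.4, 0.0, 0.5, -1.2, 0.3])
print(label)  # decision
```

With a real input, the same step is `model.config.id2label[model(**tok(text, return_tensors="pt")).logits.argmax(-1).item()]`.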
## License

Inherits the license of the base model (answerdotai/ModernBERT-base). The fine-tuned weights are published under the same terms unless noted otherwise in the repo.
## Provenance

Fine-tuned as part of darkmem, a centralized memory system for AI agents. The training recipe and evaluation scripts live in the `fine-tuning/` subtree of the darkmem repository.