# Math Misunderstanding Classifier (Ettin-Encoder)
This model is fine-tuned to identify student math misconceptions. It was developed for the Eedi - Mining Misconceptions in Mathematics Kaggle competition.
## Model Description

- Developed by: usmanqamr
- Base Model: `jhu-clsp/ettin-encoder-400m` (ModernBERT architecture)
- Number of Classes: 65 misconception labels
- CV Score: 0.9428
## Performance

The model was trained for 3 epochs and achieves a cross-validation Mean Average Precision at 3 (MAP@3) of 0.9428 in detecting common student errors in geometry, algebra, and arithmetic.
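MAP@3 is the competition's evaluation metric: each sample scores 1/rank if the true misconception label appears among the model's top-3 predictions, and 0 otherwise, averaged over all samples. A minimal sketch of that computation (not the competition's exact scoring code):

```python
def map_at_3(predictions, labels):
    """Mean Average Precision at 3.

    predictions: list of ranked label lists (best guess first).
    labels: list of true labels, one per sample.
    Each sample contributes 1/rank if its true label is in the top 3, else 0.
    """
    total = 0.0
    for preds, label in zip(predictions, labels):
        for rank, pred in enumerate(preds[:3], start=1):
            if pred == label:
                total += 1.0 / rank
                break
    return total / len(labels)

# True label ranked 1st (1.0), ranked 2nd (0.5), and absent (0.0):
print(map_at_3([[4, 1, 2], [7, 4, 9], [0, 1, 2]], [4, 4, 5]))  # -> 0.5
```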
## How to Use

You can use this model directly with the Hugging Face `transformers` library:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "usmanqamr/math-misunderstanding-ettin-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Format the input as the question followed by the student's answer
text = "Question: What is 1/2 + 1/3? Student Answer: 2/5"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = torch.argmax(logits, dim=-1)
print(predicted_class)
```
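Since the model is evaluated with MAP@3, you will often want the three most likely misconceptions rather than a single argmax. A hedged sketch, assuming the model's config carries the usual `id2label` mapping from class indices to misconception names:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "usmanqamr/math-misunderstanding-ettin-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "Question: What is 1/2 + 1/3? Student Answer: 2/5"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities and take the three highest-scoring classes
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=3, dim=-1)
for score, idx in zip(top.values[0], top.indices[0]):
    # id2label is assumed to be populated in the fine-tuned config
    print(f"{model.config.id2label[idx.item()]}: {score.item():.3f}")
```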