---
license: apache-2.0
language:
  - en
metrics:
  - name: accuracy
    value: 75.39
base_model:
  - Qwen/Qwen3-Reranker-0.6B
pipeline_tag: text-classification
---

# EduBenchEvaluator

This is a fine-tuned evaluator designed to assess LLM responses on the EduBench benchmark.

## Model Details

- **Model Name:** EduBenchEvaluator
- **Model Type:** Fine-tuned language model (0.6B parameters)
- **Base Model:** Qwen3-Reranker-0.6B

## Training & Methodology

The base model, Qwen3-Reranker-0.6B, was fine-tuned to align with human evaluations on the EduBench dataset.

We framed fine-tuning as a text classification task. The model evaluates a given response by taking a `<question, answer, metric>` triplet as input and, based on this context, is trained to output a discrete evaluation score from 1 to 5.

This evaluator is specifically constructed to measure an LLM's capability across the diverse educational tasks presented in EduBench.
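
As a rough illustration, the sketch below shows how such a triplet evaluator could be queried with 🤗 Transformers. The repo id `DirectionAI-BIT/EduBenchEvaluator`, the serialization of the triplet into one input string, and the five-way classification head are assumptions for illustration; the actual checkpoint may expect a different input format.

```python
# Minimal sketch: scoring a <question, answer, metric> triplet.
# Assumptions (not confirmed by this card): the checkpoint loads with a
# 5-way sequence-classification head, and the triplet is serialized as
# plain text. Adjust to the repo's actual usage instructions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "DirectionAI-BIT/EduBenchEvaluator"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

question = "Explain why the sky is blue to a 10-year-old."
answer = "Sunlight scatters off air molecules; blue light scatters the most."
metric = "Scenario Adaptability"  # an illustrative EduBench-style metric name

# Hypothetical serialization of the triplet into a single input string.
text = f"Question: {question}\nAnswer: {answer}\nMetric: {metric}"
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
score = logits.argmax(dim=-1).item() + 1  # map class index 0-4 to score 1-5
print(f"Predicted score: {score}/5")
```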

## Performance

- **Accuracy:** The model achieves 75.28% accuracy on the test set.
- **Human Alignment:** Beyond standard accuracy, we computed the correlation between the model's predictions and the scores of human annotators, demonstrating that the model closely mirrors human judgment (see the sketch after this list).
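
A hedged sketch of how such an alignment check might be computed, assuming predicted and human scores are available as paired lists. The data and variable names are illustrative, and the card does not specify which correlation coefficient was reported:

```python
# Illustrative alignment check between model predictions and human scores.
# The data here is made up; the card does not state which correlation
# statistic was used (Spearman is a common choice for ordinal 1-5 scores).
from scipy.stats import pearsonr, spearmanr

human_scores = [5, 4, 4, 2, 3, 5, 1, 4]  # hypothetical human ratings (1-5)
model_scores = [5, 4, 3, 2, 3, 4, 1, 4]  # hypothetical model predictions (1-5)

accuracy = sum(h == m for h, m in zip(human_scores, model_scores)) / len(human_scores)
spearman, _ = spearmanr(human_scores, model_scores)
pearson, _ = pearsonr(human_scores, model_scores)

print(f"Exact-match accuracy: {accuracy:.2%}")
print(f"Spearman correlation: {spearman:.3f}")
print(f"Pearson correlation:  {pearson:.3f}")
```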

Note: Further evaluation results and comparisons are reported in our GitHub repository.

## 🫣 Citation

If you find our benchmark, evaluation pipeline, or models useful or interesting, please cite our paper:

```bibtex
@misc{xu2025edubenchcomprehensivebenchmarkingdataset,
      title={EduBench: A Comprehensive Benchmarking Dataset for Evaluating Large Language Models in Diverse Educational Scenarios},
      author={Bin Xu and Yu Bai and Huashan Sun and Yiguan Lin and Siming Liu and Xinyue Liang and Yaolin Li and Yang Gao and Heyan Huang},
      year={2025},
      eprint={2505.16160},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.16160},
}
```