| Model | Human-Likeness | Continuity and Context Understanding | Tone and Clarity | Task Appropriateness | Overall Mean | Tested Conversation Count | Evaluator |
|---|---|---|---|---|---|---|---|
| GPT-4.1 | 4.534 | 4.886 | 4.674 | 4.416 | 4.628 | 500 | Evaluator 1 |
| Gemini-2.5-Flash | 4.096 | 4.798 | 4.296 | 4.154 | 4.336 | 500 | Evaluator 1 |
| Gemma-3-4B | 2.986 | 2.526 | 2.872 | 2.208 | 2.648 | 500 | Evaluator 1 |
| Llama-3.2-Instruct | 4.224 | 4.464 | 4.334 | 3.928 | 4.238 | 500 | Evaluator 1 |
| Phi-4-Mini | 4.12 | 4.472 | 4.228 | 3.73 | 4.138 | 500 | Evaluator 1 |
| Qwen3-4B | 4.166 | 4.416 | 4.24 | 3.722 | 4.136 | 500 | Evaluator 1 |
| SmolLM3-3B | 2.796 | 2.612 | 2.78 | 2.372 | 2.64 | 500 | Evaluator 1 |
| Virtuoso-large | 4.342 | 4.842 | 4.692 | 4.606 | 4.62 | 500 | Evaluator 1 |
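The Overall Mean column appears to be the unweighted average of the four dimension scores (this is an inference from the numbers, not documented methodology). A minimal sketch checking that for the GPT-4.1 row:

```python
# Assumption: Overall Mean = simple average of the four dimension scores.
# Values taken from the GPT-4.1 row of the table above.
scores = {
    "Human-Likeness": 4.534,
    "Continuity and Context Understanding": 4.886,
    "Tone and Clarity": 4.674,
    "Task Appropriateness": 4.416,
}

overall = sum(scores.values()) / len(scores)
print(overall)  # matches the reported 4.628 up to rounding
```

The same check holds for the other rows to within rounding of the published three-decimal values.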