FairEval is a human-aligned evaluation framework designed to measure fairness, toxicity, and alignment of generative models using:

- LLM-as-Judge scoring
- Toxicity + fairness analysis (Detoxify, per-category charts)
- Human agreement metrics (κ, ρ)
- Group-based fairness dashboard
- Model-level SQuAD EM/F1 & uncertainty
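The human agreement metrics above, Cohen's κ and Spearman's ρ, can be sketched with scikit-learn and SciPy. The score arrays here are illustrative placeholders, not FairEval data, and the exact rating scale FairEval uses is an assumption:

```python
# Sketch: comparing LLM-as-Judge scores against human ratings.
# The two score lists below are made-up examples on a 1-5 scale.
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

human_scores = [1, 2, 3, 3, 4, 5, 2, 4]   # human ratings
judge_scores = [1, 2, 3, 4, 4, 5, 2, 3]   # LLM-as-Judge ratings

# Cohen's kappa: chance-corrected agreement on categorical ratings
kappa = cohen_kappa_score(human_scores, judge_scores)

# Spearman's rho: rank correlation between the two score lists
rho, p_value = spearmanr(human_scores, judge_scores)

print(f"kappa={kappa:.3f}, rho={rho:.3f}")
```

κ rewards exact label agreement beyond chance, while ρ only cares about relative ordering, so reporting both (as the list above suggests) separates calibration errors from ranking errors.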
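The SQuAD EM/F1 numbers can be computed with the standard SQuAD normalization (lowercasing, stripping punctuation and articles). This is a generic reimplementation of those public metrics, not FairEval's own code:

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Standard SQuAD answer normalization: lowercase, drop punctuation,
    remove articles (a/an/the), collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, reference: str) -> float:
    """EM: 1.0 iff the normalized strings are identical."""
    return float(normalize(prediction) == normalize(reference))

def f1_score(prediction: str, reference: str) -> float:
    """Token-level F1 between normalized prediction and reference."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))  # 1.0
print(f1_score("the tower in Paris", "Eiffel Tower"))   # 0.4
```

In SQuAD-style evaluation these per-example scores are averaged over the dataset, taking the max over the reference answers when a question has several.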
This framework is built for rigorous Responsible AI analysis inspired by real-world industry evaluation pipelines.