======================================================================
INTER-ANNOTATOR AGREEMENT SUMMARY REPORT
======================================================================
Dataset Information:
Total Samples Evaluated:  2,000 triplets
Total Annotation Rows:    8,000 (2,000 × 4 annotators)
Annotators:               4 (BH1, BH2, HT1, HT2)
Rating Scale:             1-5 (Likert scale)
Evaluation Criteria:      4 dimensions
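As an illustration of the layout, the 8,000 long-format annotation rows can be pivoted into one 2,000 × 4 (triplet × annotator) matrix per criterion; the file and column names below are assumptions, not the actual schema.

    import pandas as pd

    # Assumed long format: one row per (sample, annotator), with one column
    # per evaluation criterion -> 2,000 samples x 4 annotators = 8,000 rows.
    df = pd.read_csv("annotations.csv")  # placeholder file name

    # Items x raters matrix for one criterion; this is the shape the
    # agreement statistics below are computed over.
    acc = df.pivot(index="sample_id", columns="annotator",
                   values="translation_accuracy")
    print(acc.shape)   # (2000, 4)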
======================================================================
FLEISS' KAPPA (Inter-Annotator Agreement)
======================================================================
Translation Accuracy         κ = 0.816  (Almost Perfect Agreement)
Semantic Equivalence         κ = 0.812  (Almost Perfect Agreement)
Grammatical Correctness      κ = 0.836  (Almost Perfect Agreement)
Literary Tone Preservation   κ = 0.853  (Almost Perfect Agreement)
Overall (Mean)               κ = 0.829  (Almost Perfect Agreement)
======================================================================
INTRACLASS CORRELATION COEFFICIENT (ICC)
======================================================================
Translation Accuracy         ICC = 0.949
Semantic Equivalence         ICC = 0.948
Grammatical Correctness      ICC = 0.955
Literary Tone Preservation   ICC = 0.957
Overall (Mean)               ICC = 0.952
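The report does not state which ICC form was used. As one plausible reconstruction, a two-way ICC over the four raters can be computed with the pingouin package, assuming long-format data with the placeholder file and column names shown here:

    import pandas as pd
    import pingouin as pg

    # Placeholder long format: one row per (sample, annotator) rating for a
    # single criterion, e.g. translation accuracy.
    long_df = pd.read_csv("translation_accuracy_long.csv")

    icc = pg.intraclass_corr(data=long_df, targets="sample_id",
                             raters="annotator", ratings="rating")
    print(icc[["Type", "ICC", "CI95%"]])   # inspect the ICC1..ICC3k variants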
======================================================================
COMPOSITE FLUENCY STATISTICS
======================================================================
Mean Composite Fluency:   4.13 ± 0.51
Range:                    2.48 to 5.00
Average Std Deviation:    0.05
Quality Distribution:
  Excellent  (≥4.5):     676 (33.8%)
  Good       (4.0-4.5):  674 (33.7%)
  Acceptable (3.5-4.0):  458 (22.9%)
  Borderline (<3.5):     192 (9.6%)
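These buckets can be reproduced from the per-triplet composite fluency scores with a simple binning step; this is a sketch, and the file and column names are placeholders rather than the actual artifact names.

    import numpy as np
    import pandas as pd

    # Placeholder input: one composite fluency score per triplet (2,000 values).
    fluency = pd.read_csv("composite_fluency.csv")["fluency"]

    bins = [-np.inf, 3.5, 4.0, 4.5, np.inf]
    labels = ["Borderline (<3.5)", "Acceptable (3.5-4.0)",
              "Good (4.0-4.5)", "Excellent (>=4.5)"]
    dist = pd.cut(fluency, bins=bins, labels=labels,
                  right=False).value_counts(sort=False)
    print(dist)
    print((dist / len(fluency) * 100).round(1))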
======================================================================
INTERPRETATION (Landis & Koch, 1977)
======================================================================
Kappa Interpretation Scale:
  < 0.00:     Poor agreement
  0.00-0.20:  Slight agreement
  0.21-0.40:  Fair agreement
  0.41-0.60:  Moderate agreement
  0.61-0.80:  Substantial agreement
  0.81-1.00:  Almost perfect agreement  ← BHT25 (κ = 0.829)
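For completeness, a small helper (the function name is our own) that maps a kappa value onto the Landis & Koch categories above:

    def landis_koch_label(kappa: float) -> str:
        """Map a kappa value to its Landis & Koch (1977) agreement category."""
        if kappa < 0.00:
            return "Poor agreement"
        if kappa <= 0.20:
            return "Slight agreement"
        if kappa <= 0.40:
            return "Fair agreement"
        if kappa <= 0.60:
            return "Moderate agreement"
        if kappa <= 0.80:
            return "Substantial agreement"
        return "Almost perfect agreement"

    print(landis_koch_label(0.829))   # -> Almost perfect agreement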
The corpus-wide weighted composite fluency was 4.13 ± 0.51.
======================================================================
KAPPA CALCULATION EXPLANATION
======================================================================
Fleiss' Kappa measures the degree of agreement among multiple raters when assigning categorical ratings to a number of items.
Formula: κ = (P̄ - Pₑ) / (1 - Pₑ)
Where:
  P̄  = Observed proportion of agreement among raters
  Pₑ = Expected proportion of agreement by chance
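To make the formula concrete, here is a minimal NumPy sketch of Fleiss' kappa for an items × raters matrix of 1-5 ratings. This is an illustrative reimplementation with our own naming, not the script used to produce the numbers above.

    import numpy as np

    def fleiss_kappa(ratings, n_categories=5):
        """Fleiss' kappa for an (items x raters) matrix of integer ratings 1..k."""
        ratings = np.asarray(ratings)
        n_items, n_raters = ratings.shape
        # counts[i, j] = number of raters who gave item i the rating j+1
        counts = np.stack([(ratings == c + 1).sum(axis=1)
                           for c in range(n_categories)], axis=1)
        p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
        p_bar = p_i.mean()                             # observed agreement, P̄
        p_j = counts.sum(axis=0) / (n_items * n_raters)
        p_e = np.square(p_j).sum()                     # chance agreement, Pₑ
        return (p_bar - p_e) / (1 - p_e)

The same statistic is also available as fleiss_kappa (with aggregate_raters) in statsmodels.stats.inter_rater, which can serve as a cross-check.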
For our dataset:
- 2,000 triplets rated by 4 annotators each
- 5-point Likert scale (ratings 1-5)
- κ = 0.83 means the observed agreement covered 83% of the possible gap between chance-level and perfect agreement
Example (per-item agreement P, with 4 raters = 6 rater pairs):
  Ratings 5, 5, 5, 5:  perfect agreement (P = 1.0, all 6 pairs agree)
  Ratings 5, 5, 5, 4:  partial agreement (P = 0.5, 3 of 6 pairs agree)
  Ratings 5, 4, 3, 2:  no agreement      (P = 0.0, no pair agrees)
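These per-item values follow directly from the pairwise definition of P used by Fleiss' kappa; a short sketch (again illustrative, with our own helper name) reproduces the three cases:

    import numpy as np

    def item_agreement(row, n_categories=5):
        """Per-item agreement P_i from Fleiss' kappa (row = one item's ratings)."""
        row = np.asarray(row)
        n = row.size
        counts = np.array([(row == c + 1).sum() for c in range(n_categories)])
        return (np.square(counts).sum() - n) / (n * (n - 1))

    for row in ([5, 5, 5, 5], [5, 5, 5, 4], [5, 4, 3, 2]):
        print(row, item_agreement(row))   # -> 1.0, 0.5, 0.0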
The achieved κ = 0.829 falls in the 'Almost Perfect Agreement' category.