Update README.md
README.md CHANGED
CodeClarity is a research initiative focused on evaluating large language models for multilingual code understanding and documentation.

The project introduces CodeClarity-Bench, a benchmark for evaluating code summarization across multiple programming languages and natural languages.

The framework evaluates both traditional metrics (BLEU, ROUGE-L, METEOR, ChrF++, BERTScore, COMET) and LLM-as-a-judge evaluation methods.

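To make the traditional-metric side concrete, here is a minimal, self-contained sketch of ROUGE-L (one of the metrics listed above), computed as an LCS-based F1 over whitespace tokens. The example summaries are invented for illustration; this is not the project's own implementation, and production use would typically rely on an established metrics library.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence, via classic dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(candidate: str, reference: str) -> float:
    """Token-level ROUGE-L F1 between a candidate summary and a reference."""
    c, r = candidate.split(), reference.split()
    lcs = lcs_length(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

# Hypothetical code-summary pair, purely for illustration.
candidate = "returns the sum of two integers"
reference = "computes the sum of two integers"
print(round(rouge_l(candidate, reference), 3))
```

Here the LCS is "the sum of two integers" (5 of 6 tokens on each side), so precision and recall are both 5/6 and the F1 is about 0.833.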
Resources:
• Dataset: https://huggingface.co/CodeClarity/CodeClarity-Bench
• Code: https://github.com/MadhuNimmo/CodeClarity

The benchmark is introduced in the LREC-COLING 2026 paper:
"CodeClarity: A Framework and Benchmark for Evaluating Multilingual Code Summarization."