CodeClarity is a research initiative focused on evaluating large language models for multilingual code understanding and documentation.

The project introduces CodeClarity-Bench, a benchmark for evaluating code summarization across multiple programming languages and natural languages.

The framework evaluates code summaries with both traditional metrics (BLEU, ROUGE-L, METEOR, ChrF++, BERTScore, COMET) and LLM-as-a-judge methods.
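As an illustration of how one of the traditional metrics works, here is a minimal pure-Python sketch of ROUGE-L, which scores a candidate summary against a reference via their longest common subsequence (LCS) of tokens. This is an illustrative sketch only, not CodeClarity's implementation, and the example strings are hypothetical:

```python
def lcs_length(a, b):
    """Longest common subsequence length via dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(candidate: str, reference: str) -> float:
    """ROUGE-L F1: harmonic mean of LCS-based precision and recall."""
    cand, ref = candidate.split(), reference.split()
    lcs = lcs_length(cand, ref)
    if lcs == 0:
        return 0.0
    precision = lcs / len(cand)
    recall = lcs / len(ref)
    return 2 * precision * recall / (precision + recall)

# Hypothetical model summary vs. gold reference for a code snippet.
score = rouge_l("returns the sum of two integers",
                "computes the sum of two numbers")
```

Here the shared subsequence "the sum of two" (4 tokens out of 6 on each side) yields precision = recall = 4/6, so the F1 score is 2/3. Production evaluations typically also apply tokenization and stemming choices that this sketch omits.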
Resources:

• Dataset: https://huggingface.co/CodeClarity/CodeClarity-Bench
• Code: https://github.com/MadhuNimmo/CodeClarity
The benchmark is introduced in the LREC-COLING 2026 paper "CodeClarity: A Framework and Benchmark for Evaluating Multilingual Code Summarization."