---
language:
- en
task_categories:
- text-generation
tags:
- code-reasoning
- benchmark
- python
- c
- java
- software-engineering
- llm-evaluation
license: unknown
---
# CodeSense: A Real-World Benchmark and Dataset for Code Semantic Reasoning
This repository contains the dataset and resources for **CodeSense**, the first benchmark for evaluating Large Language Models (LLMs) on fine-grained code semantic reasoning tasks in real-world software engineering contexts. The benchmark was presented in the paper *CodeSense: A Real-World Benchmark and Dataset for Code Semantic Reasoning*.

CodeSense aims to bridge the gap between existing synthetic or educational coding problems and the practical demands of software engineering. It draws on real-world Python, C, and Java software projects, collecting execution traces to construct a ground-truth dataset for fine-grained semantic reasoning tasks.
- **Paper:** https://huggingface.co/papers/2506.00750
- **Project page:** https://codesense-bench.github.io/
- **Code repository:** https://github.com/codesense-bench/codesense-codes
## Codebase Overview
The associated code repository (`codesense-bench/codesense-codes`) contains three main components, covering execution tracing, benchmark dataset creation, and LLM evaluation:
### Benchmark Collection
- **Purpose:** Contains scripts to process and clean raw execution traces.
- **Description:** Converts raw traces into task-specific datasets suitable for various code understanding and reasoning benchmarks.
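To illustrate the idea of turning a cleaned trace into a task instance, here is a minimal sketch. The record schema (`trace_to_value_prediction_task`, the `"value_prediction"` task name, and the field names) is hypothetical and only illustrates the general shape of such a conversion, not the exact CodeSense format:

```python
def trace_to_value_prediction_task(func_name, source, step):
    """Turn one recorded trace step into an illustrative
    'predict the variable value' task record.

    NOTE: this schema is a hypothetical sketch, not the
    actual CodeSense dataset format.
    """
    # Pick one (variable, value) pair observed at this step.
    var, value = next(iter(step["locals"].items()))
    return {
        "task": "value_prediction",
        "function": func_name,
        "code": source,
        "question": f"After line {step['line']}, what is the value of `{var}`?",
        "answer": repr(value),
    }

# Example: a trace step saying that after line 2, local `y` was 6.
step = {"line": 2, "locals": {"y": 6}}
source = "def target(x):\n    y = x * 2\n    return y + 1"
task = trace_to_value_prediction_task("target", source, step)
# task["question"] asks for `y`; task["answer"] is "6"
```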
### Tracing Framework
- **Purpose:** Tools for collecting execution traces.
- **Description:** Supports tracing of Python, C, and Java programs to capture their runtime behavior and execution steps.
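For Python, this kind of line-level trace capture can be sketched with the standard-library `sys.settrace` hook. The snippet below is a minimal illustration of the general technique, not the repository's actual tracer:

```python
import sys

def make_tracer(events):
    """Build a trace function that records, for every executed line,
    the enclosing function name, the line number, and a snapshot of
    the local variables at that point."""
    def tracer(frame, event, arg):
        if event == "line":
            events.append(
                (frame.f_code.co_name, frame.f_lineno, dict(frame.f_locals))
            )
        return tracer  # keep tracing inside this frame
    return tracer

def target(x):
    y = x * 2
    return y + 1

events = []
sys.settrace(make_tracer(events))  # install the hook for this thread
target(3)
sys.settrace(None)                 # always uninstall afterwards
# `events` now holds one (name, lineno, locals) snapshot per executed
# line of `target`; the last snapshot includes y == 6.
```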
### LLM Evaluation
- **Purpose:** Scripts for evaluating LLMs on the task-specific datasets.
- **Description:** Runs evaluations, computes metrics, and benchmarks model performance on the curated datasets.
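As a rough sketch of the metric side, a simple exact-match scorer for value-prediction answers might look like the following. This is an illustrative assumption about the kind of metric computed, not the repository's evaluation code:

```python
def exact_match_accuracy(predictions, references):
    """Fraction of model answers that exactly match the ground-truth
    values after trimming surrounding whitespace.

    NOTE: illustrative only; the actual CodeSense evaluation scripts
    may normalize or score answers differently.
    """
    assert len(predictions) == len(references)
    if not references:
        return 0.0
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return hits / len(references)

# Two of three hypothetical answers match their references exactly.
acc = exact_match_accuracy(["6", "14 ", "None"], ["6", "14", "0"])
# acc == 2/3
```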