Update README.md

README.md CHANGED

@@ -57,4 +57,17 @@ configs:
  data_files:
  - split: test
    path: data/test-*
---

# Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions

CanItEdit is a benchmark for evaluating LLMs on instructional code editing: the task of updating a program given a natural language instruction. The benchmark contains 54 hand-crafted Python programs, each with before and after code blocks, two types of natural language instructions (descriptive and lazy), and a hidden test suite.

The dataset's dual natural language instructions test model performance in two scenarios:

1) Descriptive: detailed instructions that replicate situations where a user provides a precise specification or where another model outlines a plan, similar to Reflexion prompting.
2) Lazy: informal instructions that resemble the typical queries users give LLMs for code generation.
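
A minimal sketch of loading the benchmark with the `datasets` library and inspecting one task. The dataset ID and the field names used here (`before`, `instruction_descriptive`, `instruction_lazy`) are assumptions based on the description above, not a confirmed schema; check the dataset viewer for the exact column names:

```python
# Hypothetical example: load the CanItEdit test split and inspect one task.
from datasets import load_dataset

# Dataset ID assumed from this repository's location; adjust if it differs.
dataset = load_dataset("nuprl/CanItEdit", split="test")

example = dataset[0]
print(sorted(example.keys()))              # check the actual column names first
print(example["instruction_descriptive"])  # assumed field: the detailed instruction
print(example["instruction_lazy"])         # assumed field: the informal instruction
print(example["before"])                   # assumed field: the program before editing
```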

For more information and results, see [our paper](https://federico.codes/assets/papers/canitedit.pdf).

## How To Evaluate

All the code for evaluating models on the benchmark can be found in our [GitHub repository](https://github.com/nuprl/CanItEdit).