Update src/tasks_content.py

src/tasks_content.py (+26 -24)
@@ -11,6 +11,30 @@ TASKS_PRETTY = {
 TASKS_PRETTY_REVERSE = {value: key for key, value in TASKS_PRETTY.items()}
 
 TASKS_DESCRIPTIONS = {
+    "library_based_code_generation": "cool description for Library Usage Examples Generation task",
+
+    "ci_builds_repair": "cool description for Bug Localization on Build Logs task",
+
+    "project_code_completion": """# Project-Level Code Completion\n
+
+Our Project-Level Code Completion 🤗 [JetBrains-Research/lca-code-completion](https://huggingface.co/datasets/JetBrains-Research/lca-code-completion) includes four datasets:
+* `small-context`: 144 data points,
+* `medium-context`: 224 data points,
+* `large-context`: 270 data points,
+* `huge-context`: 296 data points.
+
+We use standard Exact Match (EM) metric for one-line code completion.
+We evaluate Exact Match for different line categories:
+* *infile* — functions and classes are from the completion file;
+* *inproject* — functions and files are from the repository snapshot;
+* *committed* — functions and classes are from the files that were added on the completion file commit;
+* *common* — functions and classes with common names, e.g., `main`, `get`, etc.;
+* *non-informative* — short/long lines, import/print lines, or comment lines;
+* *random* — lines that doesn't fit to any of previous categories.
+
+For further details on the dataset and the baselines from 🏟️ Long Code Arena Team, refer to `code_completion` folder in [our baselines repository](https://github.com/JetBrains-Research/lca-baselines) or to our preprint (TODO).
+""",
+
     "commit_message_generation": """# Commit Message Generation\n
 
 Our Commit Message Generation benchmark 🤗 [JetBrains-Research/lca-commit-message-generation](https://huggingface.co/datasets/JetBrains-Research/lca-commit-message-generation) includes 163 manually curated commits from Python projects.
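The `TASKS_PRETTY_REVERSE` context line above uses the standard dict-inversion idiom. A minimal sketch, using a hypothetical two-entry `TASKS_PRETTY` (the real mapping lives earlier in `src/tasks_content.py` and is not shown in this diff):

```python
# Hypothetical stand-in for the real TASKS_PRETTY mapping.
TASKS_PRETTY = {
    "project_code_completion": "Project-Level Code Completion",
    "commit_message_generation": "Commit Message Generation",
}

# Same idiom as in the diff: invert key -> value into value -> key,
# so a pretty display name can be mapped back to its task id.
TASKS_PRETTY_REVERSE = {value: key for key, value in TASKS_PRETTY.items()}

print(TASKS_PRETTY_REVERSE["Commit Message Generation"])  # commit_message_generation
```

Note that this inversion assumes the pretty names are unique; duplicate values would silently collapse into one key.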
@@ -25,6 +49,7 @@ TASKS_DESCRIPTIONS = {
 
 **Note.** The leaderboard is sorted by ROUGE-1 metric by default.
 """,
+
     "bug_localization": """# Bug Localization\n
 
 Our Module-to-Text benchmark 🤗 [JetBrains-Research/lca-bug-localization](https://huggingface.co/datasets/JetBrains-Research/lca-bug-localization) includes 7,479 bug issue descriptions with information about pull request that fix them for Python, Java and Kotlin projects.
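The commit message leaderboard mentioned above is sorted by ROUGE-1, i.e., unigram overlap between the generated and reference messages. A simplified, whitespace-tokenized sketch of ROUGE-1 F1 (the benchmark presumably uses a standard ROUGE implementation with proper tokenization and stemming; this is only illustrative):

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """Simplified ROUGE-1 F1: clipped unigram overlap on whitespace tokens."""
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum((ref & cand).values())  # matches clipped by reference counts
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("fix bug in parser", "fix parser bug"))  # 6/7 ≈ 0.857
```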
@@ -32,6 +57,7 @@ TASKS_DESCRIPTIONS = {
 Moreover, 150 data points from the test split were manually verified and can be used for bug localization approaches evaluation.
 We used information retrieval metrics such as R@k, P@k and F1-score for evaluation, taking k equals to 2.
 """,
+
     "module_summarization": """# Module Summarization\n
 Our Module-to-Text benchmark 🤗 [JetBrains-Research/lca-module-summarization](https://huggingface.co/datasets/JetBrains-Research/lca-module-summarization) includes 216 manually curated text files describing different documentation of opensource permissive Python projects.
 
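The bug localization description above names R@k, P@k, and F1-score with k = 2. These per-report metrics can be sketched as follows over a ranked list of predicted files (an illustrative sketch, not the benchmark's actual evaluation code):

```python
def precision_recall_f1_at_k(ranked_files, gold_files, k=2):
    """P@k, R@k, and their F1 for one bug report.

    ranked_files: model's file predictions, best first.
    gold_files:   set of files actually changed by the fixing pull request.
    """
    top_k = ranked_files[:k]
    hits = sum(1 for f in top_k if f in gold_files)
    precision = hits / k
    recall = hits / len(gold_files) if gold_files else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# One of two gold files appears in the top-2 predictions:
print(precision_recall_f1_at_k(["a.py", "b.py", "c.py"], {"b.py", "d.py"}, k=2))
# (0.5, 0.5, 0.5)
```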
@@ -40,30 +66,6 @@ TASKS_DESCRIPTIONS = {
 
 For further details on the dataset and the baselines from 🏟️ Long Code Arena Team, refer to `module2text` folder in [our baselines repository](https://github.com/JetBrains-Research/lca-baselines).
 """,
-
-    "library_usage": "cool description for Library Usage Examples Generation task",
-
-    "project_code_completion": """# Project-Level Code Completion\n
-
-Our Project-Level Code Completion 🤗 [JetBrains-Research/lca-code-completion](https://huggingface.co/datasets/JetBrains-Research/lca-code-completion) includes four datasets:
-* `small-context`: 144 data points,
-* `medium-context`: 224 data points,
-* `large-context`: 270 data points,
-* `huge-context`: 296 data points.
-
-We use standard Exact Match (EM) metric for one-line code completion.
-We evaluate Exact Match for different line categories:
-* *infile* — functions and classes are from the completion file;
-* *inproject* — functions and files are from the repository snapshot;
-* *committed* — functions and classes are from the files that were added on the completion file commit;
-* *common* — functions and classes with common names, e.g., `main`, `get`, etc.;
-* *non-informative* — short/long lines, import/print lines, or comment lines;
-* *random* — lines that doesn't fit to any of previous categories.
-
-For further details on the dataset and the baselines from 🏟️ Long Code Arena Team, refer to `code_completion` folder in [our baselines repository](https://github.com/JetBrains-Research/lca-baselines) or to our preprint (TODO).
-""",
-
-    "bug_localization_build_logs": "cool description for Bug Localization on Build Logs task",
 }
 
 
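The Project-Level Code Completion entry moved by this diff scores one-line completions with Exact Match. A minimal sketch of EM over a batch (comparing whitespace-trimmed lines is a common convention and an assumption here, not necessarily the benchmark's exact normalization):

```python
def exact_match(predicted_line, ground_truth_line):
    """EM for one completed line: identical after trimming surrounding whitespace.
    The trimming convention is an assumption, not the benchmark's exact rule."""
    return predicted_line.strip() == ground_truth_line.strip()

# EM rate over a toy batch of two completions, one of them correct:
preds = ["return x + y", "return x - y"]
golds = ["return x + y", "return x + y"]
em_rate = sum(exact_match(p, g) for p, g in zip(preds, golds)) / len(golds)
print(em_rate)  # 0.5
```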
The resulting src/tasks_content.py, lines 11-71 (unchanged spans collapsed as [...]):

TASKS_PRETTY_REVERSE = {value: key for key, value in TASKS_PRETTY.items()}

TASKS_DESCRIPTIONS = {
    "library_based_code_generation": "cool description for Library Usage Examples Generation task",

    "ci_builds_repair": "cool description for Bug Localization on Build Logs task",

    "project_code_completion": """# Project-Level Code Completion\n

Our Project-Level Code Completion 🤗 [JetBrains-Research/lca-code-completion](https://huggingface.co/datasets/JetBrains-Research/lca-code-completion) includes four datasets:
* `small-context`: 144 data points,
* `medium-context`: 224 data points,
* `large-context`: 270 data points,
* `huge-context`: 296 data points.

We use standard Exact Match (EM) metric for one-line code completion.
We evaluate Exact Match for different line categories:
* *infile* — functions and classes are from the completion file;
* *inproject* — functions and files are from the repository snapshot;
* *committed* — functions and classes are from the files that were added on the completion file commit;
* *common* — functions and classes with common names, e.g., `main`, `get`, etc.;
* *non-informative* — short/long lines, import/print lines, or comment lines;
* *random* — lines that doesn't fit to any of previous categories.

For further details on the dataset and the baselines from 🏟️ Long Code Arena Team, refer to `code_completion` folder in [our baselines repository](https://github.com/JetBrains-Research/lca-baselines) or to our preprint (TODO).
""",

    "commit_message_generation": """# Commit Message Generation\n

Our Commit Message Generation benchmark 🤗 [JetBrains-Research/lca-commit-message-generation](https://huggingface.co/datasets/JetBrains-Research/lca-commit-message-generation) includes 163 manually curated commits from Python projects.

[...]

**Note.** The leaderboard is sorted by ROUGE-1 metric by default.
""",

    "bug_localization": """# Bug Localization\n

Our Module-to-Text benchmark 🤗 [JetBrains-Research/lca-bug-localization](https://huggingface.co/datasets/JetBrains-Research/lca-bug-localization) includes 7,479 bug issue descriptions with information about pull request that fix them for Python, Java and Kotlin projects.

[...]

Moreover, 150 data points from the test split were manually verified and can be used for bug localization approaches evaluation.
We used information retrieval metrics such as R@k, P@k and F1-score for evaluation, taking k equals to 2.
""",

    "module_summarization": """# Module Summarization\n
Our Module-to-Text benchmark 🤗 [JetBrains-Research/lca-module-summarization](https://huggingface.co/datasets/JetBrains-Research/lca-module-summarization) includes 216 manually curated text files describing different documentation of opensource permissive Python projects.

[...]

For further details on the dataset and the baselines from 🏟️ Long Code Arena Team, refer to `module2text` folder in [our baselines repository](https://github.com/JetBrains-Research/lca-baselines).
""",
}
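The *non-informative* line category in the completion description covers short/long lines, import/print lines, and comment lines. A hypothetical heuristic in that spirit (the length thresholds and prefix list are made up for illustration and are not the benchmark's actual categorizer):

```python
def is_non_informative(line, min_len=5, max_len=120):
    """Heuristic in the spirit of the 'non-informative' category:
    short/long lines, import/print lines, or comment lines.
    Thresholds and prefixes are illustrative assumptions."""
    stripped = line.strip()
    if not (min_len <= len(stripped) <= max_len):
        return True  # too short or too long
    if stripped.startswith(("import ", "from ", "print(", "#")):
        return True  # import/print/comment line
    return False

print(is_non_informative("import os"))            # True  (import line)
print(is_non_informative("result = compute(a)"))  # False
```

Lines passing such a filter would then fall into the content-based categories (*infile*, *inproject*, *committed*, *common*, or *random*).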