mj33 committed · verified
Commit 76ee667 · Parent(s): b7a097c

Update ReadMe

Files changed (1): README.md +25 -27

README.md CHANGED
@@ -14,16 +14,12 @@ size_categories:
 
 SimCoPilot is a benchmark for evaluating LLMs to perform as a "copilot"-style, interactive coding assistant.
 
- This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
-
 ## Dataset Details
 
 ### Dataset Description
 
 SimCoPilot is a benchmark for evaluating LLMs to perform as a "copilot"-style, interactive coding assistant, testing their ability to add and complete code in complex real-world software environments and analyzing how LLMs manage different code dependencies and logic complexities.
 
-
-
 - **Curated by:** Mingchao Jiang
 - **Funded by [optional]:** [More Information Needed]
 - **Shared by [optional]:** [More Information Needed]
@@ -32,7 +28,7 @@ SimCoPilot is a benchmark for evaluating LLMs to perform as a "copilot"-style, i
 
 ### Dataset Sources [optional]
 
- <!-- Provide the basic links for the dataset. -->
 
 - **Repository:** https://github.com/mj33rice/SimCoPilot
 - **Paper [optional]:** [More Information Needed]
@@ -62,25 +58,26 @@ Commercial Purposes, Infer Personal Information
 
 ### Curation Rationale
 
- <!-- Motivation for the creation of this dataset. -->
 
- [More Information Needed]
 
 ### Source Data
 
- <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
 
 #### Data Collection and Processing
 
- <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
 
- [More Information Needed]
 
- #### Who are the source data producers?
 
- <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
 
- [More Information Needed]
 
 ### Annotations [optional]
 
@@ -88,33 +85,34 @@ Commercial Purposes, Infer Personal Information
 
 #### Annotation process
 
- <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
 
- [More Information Needed]
 
- #### Who are the annotators?
 
- <!-- This section describes the people or systems who created the annotations. -->
 
- [More Information Needed]
 
 #### Personal and Sensitive Information
 
- <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
-
- [More Information Needed]
 
 ## Bias, Risks, and Limitations
 
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
 
 ### Recommendations
 
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
 
 ## Citation [optional]
 
 
 ### Dataset Sources [optional]
 
+ The source code and supporting material can be found in the GitHub repository linked below.
 
 - **Repository:** https://github.com/mj33rice/SimCoPilot
 - **Paper [optional]:** [More Information Needed]
 
 ### Curation Rationale
 
+ Currently, the most widely used benchmarks for checking the ability of AI models to perform program synthesis ("AI-for-code") consist of a detailed English description of a concise, self-contained program to synthesize, along with a few test cases to check the correctness of the synthesized code.
+ While such benchmarks are useful, they match one particularly narrow use case, where the goal is to synthesize a relatively short, complete, standalone program.
 
+ We introduce SimCoPilot, a novel benchmark crafted to simulate the ability of an AI such as a large language model (LLM) to perform as a "copilot"-style, interactive coding assistant.
 
 ### Source Data
 
+ Source code.
 
  #### Data Collection and Processing
 
+ Emails were sent to faculty and students in the Rice University Computer Science, Electrical Engineering, and Statistics departments, inviting them to contribute Java and Python code from private repositories for AI-for-code research.
+ Upon receipt, 1,163 code-generation tasks were curated to ensure a diverse and representative sample of real-world code, gathering approximately 11,000 lines of code.
 
+ #### Who are the source data producers?
 
+ The dataset includes Java and Python code contributed primarily by students and faculty in Rice University's Computer Science, Electrical Engineering, and Statistics departments, representing a community of academic programmers and developers.
 
  ### Annotations [optional]
 
 #### Annotation process
 
+ The 1,163 programming tasks were created from eight Java repositories and seven Python repositories, totaling nearly 11,000 lines of code.
 
+ Our team went through this code, generating both infill and completion tasks.
+ To create an infill task, the annotator picks a meaningful starting point for the AI-for-code model to begin writing code (at the beginning of the boolean condition of an if statement, or at the beginning of the body of a for loop, for example)
+ and then marks the rest of that particular code block for deletion, to be re-created by the AI-for-code model.
 
+ In the case of an if condition, the entire boolean predicate would be marked for deletion.
+ In the case of a for-loop body, the entire body would be marked.
 
+ A completion task is created in much the same way, but the code for the remainder of the method or function is marked for deletion.
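
To make the two task types concrete, here is a minimal sketch of how an infill and a completion task could be cut from a snippet of source code. The function names, field names, and example program are hypothetical illustrations, not the dataset's actual schema:

```python
# Hypothetical sketch of infill/completion task construction.
# Names and layout are illustrative; the dataset's actual schema may differ.

SOURCE = """\
def count_positives(nums):
    count = 0
    for n in nums:
        if n > 0:
            count += 1
    return count
"""

def make_infill_task(source: str, start: int, end: int) -> dict:
    """Mark lines [start, end) for deletion; the model must re-create them
    given the surrounding prefix and suffix."""
    lines = source.splitlines(keepends=True)
    return {
        "prefix": "".join(lines[:start]),
        "reference": "".join(lines[start:end]),  # ground-truth deleted block
        "suffix": "".join(lines[end:]),
    }

def make_completion_task(source: str, start: int) -> dict:
    """Delete everything from `start` to the end of the function; the model
    sees only the prefix."""
    lines = source.splitlines(keepends=True)
    return {
        "prefix": "".join(lines[:start]),
        "reference": "".join(lines[start:]),
    }

# Infill: delete the body of the for loop (lines 3-4, zero-indexed).
infill = make_infill_task(SOURCE, 3, 5)
# Completion: delete the remainder of the function after the loop header.
completion = make_completion_task(SOURCE, 3)
```

Reassembling `prefix + reference + suffix` recovers the original snippet, which is how a model's proposed fill can be compared against the ground-truth block.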
 
+ #### Who are the annotators?
+
+ A team of three graduate students from Rice University, each with 5-10 years of programming experience.
 
  #### Personal and Sensitive Information
 
+ N/A
 
  ## Bias, Risks, and Limitations
 
+ Sample Bias: Contributions mainly from students and faculty at a single institution (Rice University) could reflect a biased sample of coding styles, proficiency levels, and problem-solving approaches.
+ Overfitting Risks: Models trained on this dataset might perform well in similar academic or controlled environments but may not generalize well to diverse coding tasks outside these parameters.
 
 ### Recommendations
 
+ Diversifying the Data Sources: Expand the dataset to include code from a broader range of contributors beyond the academic circle of Rice University. This could involve soliciting code from developers in different industries, countries, and cultural backgrounds to enhance the dataset's diversity and representativeness.
+ Cross-Validation with External Datasets: Use external datasets for cross-validation of the AI models trained with this dataset. This helps in assessing a model's performance and generalizability to other coding environments and tasks.
 
 ## Citation [optional]