  example_title: "Example 1"
- text: "def calculate"
  example_title: "Example 2"
---

# Complexity-1B

# Model Details

Complexity-1B is a fine-tuned version of the GPT-NeoX 1.3B model [@gpt-neox] for code completion tasks. It was fine-tuned on a dataset of Python code from open-source projects on GitHub.

# Intended Uses

This model is intended for code completion in Python: given partially written code, it suggests likely continuations.
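
As a sketch of how completion might be invoked (the `transformers` text-generation pipeline and the model identifier `Complexity-1B` are assumptions here, not details confirmed by this card):

```python
def suggest_completion(prompt: str, max_new_tokens: int = 32,
                       model_name: str = "Complexity-1B") -> str:
    """Return a likely continuation of a partially written Python snippet.

    Hypothetical helper: the model identifier and generation settings are
    illustrative assumptions, not values confirmed by this model card.
    """
    # Imported lazily so the helper can be defined without transformers installed.
    from transformers import pipeline

    generator = pipeline("text-generation", model=model_name)
    out = generator(prompt, max_new_tokens=max_new_tokens, do_sample=False)
    # The pipeline returns the prompt plus the generated continuation;
    # strip the prompt so only the suggested completion remains.
    return out[0]["generated_text"][len(prompt):]
```

For example, `suggest_completion("def calculate")` would return tokens continuing the function header, mirroring the widget example in the front matter.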

# Evaluation Data

The model was evaluated on a holdout set from the training data distribution, containing Python code snippets.

# Metrics

The primary evaluation metric was code-completion accuracy on the evaluation set. The model achieves XX% accuracy.
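
The card does not define "accuracy" precisely; one common reading for completion tasks is exact-match accuracy over held-out snippets, which can be sketched as follows (the function and the toy data are illustrative, not the card's actual evaluation code):

```python
def exact_match_accuracy(predictions, references):
    """Fraction of completions that exactly match the held-out reference.

    One plausible reading of "accuracy of code completion"; the card
    does not specify the exact metric definition.
    """
    if len(predictions) != len(references):
        raise ValueError("predictions and references must align")
    # Strip surrounding whitespace so trailing newlines don't count as misses.
    matches = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return matches / len(references)

# Toy illustration with hypothetical model outputs:
preds = ["_sum(a, b):", "_mean(xs):"]
refs  = ["_sum(a, b):", "_total(xs):"]
print(exact_match_accuracy(preds, refs))  # 0.5
```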

# Ethical Considerations

The training data contains code from public GitHub repositories. Care should be taken to avoid completing code in unethical or harmful ways not intended by the original developers.

# Caveats and Recommendations

The model is designed for Python code completion only. Performance on other programming languages is unknown. Users should carefully validate any generated code before executing or deploying it.