Text Generation · Transformers · PyTorch · olmo
upiter committed commit bdf3cd0 · verified · 1 Parent(s): 9f99e50

Update README.md

Files changed (1):
  1. README.md +3 -2
README.md CHANGED

@@ -7,6 +7,7 @@ base_model:
 - upiter/TinyCodeLM-400M
 ---
 
+
 # Model Details
 
 The TinyCodeLM family of tiny language models (LMs) is a collection of fully open-source pretrained and instruction tuned generative code models in 150M and 400M sizes. These models are pretrained on a mixture of open-source web text and Python code. The instruction tuned TinyCodeLM models are optimized for Python code synthesis, and are trained on [synthetic edit sequence data generated with the LintSeq algorithm](https://arxiv.org/abs/2410.02749).
@@ -42,7 +43,7 @@ TinyCodeLM models were pretrained from scratch on a single H100 node (four GPUs)
 | :----------- | -----------------: | -----------------: |
 | HumanEval, pass@1 | 12.8 | 13.4 |
 | HumanEval, pass@10 | 20.6 | 20.9 |
-| MBPP(+), pass@1 | 13.6 | 24.4 |
+| MBPP(+), pass@1 | 13.6 | 19.4 |
 | MBPP(+), pass@10 | 24.4 | 29.9 |
 
 
@@ -60,4 +61,4 @@ TinyCodeLM models were pretrained from scratch on a single H100 node (four GPUs)
 ```
 
 # Safety
-This work explores data-driven mechanisms for improving the quality of language model-generated code. Our synthetic data generation method relies on open-source data and our experiments leverage open-source software and resources. It is important to acknowledge that all language models for code synthesis have the potential to be misused – whether intentionally or unintentionally – for generation of code with vulnerabilities and/or malicious behaviors. Any and all model generated code has thepotential to be harmful and must not be executed without precautions.
+This work explores data-driven mechanisms for improving the quality of language model-generated code. Our synthetic data generation method relies on open-source data and our experiments leverage open-source software and resources. It is important to acknowledge that all language models for code synthesis have the potential to be misused – whether intentionally or unintentionally – for generation of code with vulnerabilities and/or malicious behaviors. Any and all model generated code has the potential to be harmful and must not be executed without precautions.
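
---

For context on the table edited above: HumanEval and MBPP(+) results are reported as pass@1 and pass@10, which are conventionally computed with the unbiased pass@k estimator of Chen et al. (2021). The sketch below is illustrative only; the sample counts are hypothetical and not taken from the TinyCodeLM evaluation.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total generations sampled per problem
    c: number of those generations that pass the tests
    k: budget of samples considered
    Returns the probability that at least one of k samples is correct:
    1 - C(n-c, k) / C(n, k).
    """
    if n - c < k:
        # Fewer incorrect samples than k: some correct sample is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical numbers: 200 samples per problem, 26 correct.
print(round(pass_at_k(200, 26, 1), 3))   # pass@1 reduces to c/n = 0.13
print(round(pass_at_k(200, 26, 10), 3))  # pass@10 is strictly higher
```

pass@1 collapses to the plain success rate c/n, while pass@10 rewards models whose correct completions are spread across problems rather than concentrated on a few.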