Update README.md
README.md CHANGED
@@ -24,7 +24,7 @@ We adopted exactly the same architecture and tokenizer as Llama 2. This means Ti

#### This Model

This is a code LM finetuned (i.e., continually pretrained) from the 500B-token TinyLlama checkpoint on an additional 7B tokens of Python data from starcoderdata.

-While the finetuning data is exclusively Python, the model retains its ability in many other languages, such as C or Java.
+**While the finetuning data is exclusively Python, the model retains its ability in many other languages, such as C or Java.**

The HumanEval accuracy is **14**.
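For context on the HumanEval number: pass@k scores of this kind are commonly computed with the unbiased estimator introduced alongside HumanEval (generate `n` samples per problem, count the `c` that pass the unit tests). A minimal sketch of that estimator; the function name is illustrative, and this assumes the standard formula rather than this repo's exact evaluation script:

```python
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate for one problem.

    n: total samples generated for the problem
    c: samples that passed all unit tests
    k: budget of samples considered
    """
    if n - c < k:
        # Fewer than k failures exist, so any k-subset contains a pass.
        return 1.0
    # 1 - P(all k drawn samples are failures)
    return 1.0 - comb(n - c, k) / comb(n, k)


# With k = 1 this reduces to the plain fraction of passing samples:
# pass_at_k(10, 2, 1) == 0.2
```

A corpus-level score (such as the 14 quoted above, as a percentage) would then be the mean of this estimate over all 164 HumanEval problems.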