Update README.md
changed model card name
README.md (CHANGED)

@@ -10,7 +10,7 @@ tags:
 ---
 ## Model Summary
 
-The language model
+The language model TinyLlama is a Transformer with **1.3 billion** parameters. It was trained using the same data sources as [phi-1](https://huggingface.co/microsoft/phi-1), augmented with a new data source that consists of various NLP synthetic texts. When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-1.5 demonstrates a nearly state-of-the-art performance among models with less than 10 billion parameters.
 
 We **did not** fine-tune Phi-1.5 either for **instruction following or through reinforcement learning from human feedback**. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
 