# model_42_70b

A Llama2-70b model fine-tuned using QLoRA on all the linear layers with ~900 carefully selected conversations from the [Lima](https://arxiv.org/pdf/2305.11206.pdf) dataset.
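This card does not include the training code, but as a rough illustration of what "QLoRA on all the linear layers" means in practice, here is a minimal sketch using `peft` and `bitsandbytes`. The rank, alpha, dropout, and other hyperparameters below are assumptions for illustration, not the settings actually used for this model.

```python
# Illustrative sketch only: hyperparameters and repo ids are assumptions,
# not the configuration used to train model_42_70b.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                 # QLoRA: 4-bit quantized base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)

# "All the linear layers" of a Llama block: attention and MLP projections.
lora_config = LoraConfig(
    r=16,                              # assumed rank
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()     # only the LoRA adapters are trainable
```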
<br>

## Evaluation
Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard):

|**Total Average**|-|**0.6867**||
<br>

## Example Usage
Here is the prompt format
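The full prompt template and generation example are elided in this excerpt. The sketch below shows one way to load the model with Hugging Face `transformers` and generate a reply; the repository id, the prompt wording, and the generation settings are illustrative assumptions, not the card's exact example.

```python
# Sketch only: the repository id, prompt template, and generation settings
# below are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "psmathur/model_42_70b"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # a 70B model needs multiple GPUs or offloading
    device_map="auto",
)

# Assumed instruction-style prompt; the card's actual template may differ.
prompt = "### User:\nExplain QLoRA fine-tuning in two sentences.\n\n### Assistant:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```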
<br>

#### Limitations & Biases:
Despite diligent efforts in refining the pretraining data, there remains a possibility that the model can generate inaccurate, biased, or otherwise objectionable content.
Exercise caution and cross-check information when necessary.
<br>

### Citation: