Commit 113939d
Parent: 787a2e1
Update README.md

README.md CHANGED
@@ -1,6 +1,11 @@
 ---
 library_name: peft
 base_model: yahma/llama-7b-hf
+language:
+- en
+pipeline_tag: text-generation
+tags:
+- text-generation-inference
 ---
 
 # Model Card for Model ID
@@ -37,7 +42,8 @@ base_model: yahma/llama-7b-hf
 
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
 
-###
+###
+Direct Use
 
 <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
 
@@ -217,4 +223,4 @@ The following `bitsandbytes` quantization config was used during training:
 ### Framework versions
 
 
-- PEFT 0.6.2.dev0
+- PEFT 0.6.2.dev0