Github: https://github.com/Neutralzz/BiLLa
The weight of `word embedding` is the sum of the weights of the trained model and the original LLaMA, so that developers with access to the original LLaMA model can convert the model released by this hub into a usable one.

## Usage
First, you can revert the model weights by [this script](https://github.com/Neutralzz/BiLLa/blob/main/embedding_convert.py):
```shell
python3 embedding_convert.py \
```
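The idea behind the conversion can be sketched with a toy example. This is illustrative only, not the code of `embedding_convert.py`; the shapes and values are made up. Because the released embedding is the sum of the trained weight and the original LLaMA weight, subtracting the original LLaMA embedding recovers the usable weight:

```python
import numpy as np

# Toy illustration of the conversion (NOT the actual embedding_convert.py):
# released embedding = trained embedding + original LLaMA embedding,
# so subtracting the original LLaMA embedding recovers the trained weight.
rng = np.random.default_rng(0)
original = rng.normal(size=(8, 4))  # stand-in for the original LLaMA embedding
trained = rng.normal(size=(8, 4))   # stand-in for the embedding after training

released = trained + original       # what this hub distributes
recovered = released - original     # what the conversion script reverts to

assert np.allclose(recovered, trained)
```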
```python
outputs = tokenizer.decode(output_ids, skip_special_tokens=True).strip()
print(outputs)
```
### Input Format

Different from [BiLLa-7B-LLM](https://huggingface.co/Neutralzz/BiLLa-7B-LLM), the model input of `BiLLa-7B-SFT` should be formatted as follows:
```
Human: [Your question]
Assistant: 
```
Note that <b>a space</b> follows the `Assistant:`.
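A small helper makes the required format concrete. This is an illustrative sketch, not code from this repository: the function name is ours, and we assume a single newline separates the two lines, with the required trailing space after `Assistant:`:

```python
def build_prompt(question: str) -> str:
    """Format a question for BiLLa-7B-SFT (illustrative helper).

    Assumes one newline between the two lines; the trailing space
    after "Assistant:" is part of the expected input format.
    """
    return f"Human: {question}\nAssistant: "

prompt = build_prompt("What is machine learning?")
print(repr(prompt))  # repr makes the trailing space visible
```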