Update README.md
README.md (changed)
````diff
@@ -75,8 +75,7 @@ print(output[0]['generated_text'])
 
 ```
 
-### Outputs
-microsoft/Phi-3-medium-128k-instruct
+### Outputs by microsoft/Phi-3-medium-128k-instruct
 ```python
 To train a language model using Lora and SFT (Supervised Fine-tuning), you can follow these steps:
 
@@ -148,10 +147,8 @@ trainer = LoraSFTTrainer(
 trainer.train()
 ```
 
-This code will train the Lora model using the SFT dataset. You can adjust the training arguments and the dataset path according to your needs.
 
-
-REILX/Phi-3-medium-128k-code-instruct
+### Outputs by REILX/Phi-3-medium-128k-code-instruct
 ```python
 import torch
 from transformers import RobertaForCausalLM, RobertaTokenizer
````
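A note on the diffed content: the `LoraSFTTrainer` class in the model-generated snippet is not an actual `transformers` API. The core LoRA idea the output describes — keeping the pretrained weight matrix W frozen and learning a low-rank update B·A, applied as W' = W + (α/r)·B·A — can be sketched in plain Python (function names here are illustrative, not from any library):

```python
# Sketch of the LoRA weight update: W' = W + (alpha / r) * (B @ A).
# Matrices are plain lists of rows; no external dependencies.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_effective_weight(W, A, B, alpha):
    """Return W + (alpha / r) * (B @ A), where r is the LoRA rank.

    A has shape (r, in_features); B has shape (out_features, r).
    """
    r = len(A)
    scale = alpha / r
    BA = matmul(B, A)  # low-rank update, shape (out_features, in_features)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, BA)]

# Frozen 2x2 weight; rank-1 adapters with B initialised to zero,
# so training starts from the unmodified pretrained weights.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.5, 0.5]]    # (r=1, in=2)
B = [[0.0], [0.0]]  # (out=2, r=1)
print(lora_effective_weight(W, A, B, alpha=2.0))  # -> [[1.0, 0.0], [0.0, 1.0]]
```

Because B starts at zero, the effective weight initially equals W; only the small A and B matrices receive gradient updates during SFT, which is what makes LoRA fine-tuning cheap.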