Update README.md
README.md CHANGED

@@ -74,7 +74,7 @@ A Gemma-2b finetuned LoRA trained on science Q&A
 Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
 
 ## How to Get Started with the Model
-
+```
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
 from peft import PeftModel
@@ -122,6 +122,7 @@ with torch.no_grad():
 
 print(f"Inference time: {end-start:.2f} seconds")
 print(response)
+```
 [More Information Needed]
 
 ## Training Details