Transformers
Safetensors
fc91 committed on
Commit 63633dd · verified · 1 Parent(s): fdc3dcd

Update README.md

Files changed (1)
  1. README.md +12 -2
README.md CHANGED
@@ -74,6 +74,14 @@ Users (both direct and downstream) should be made aware of the risks, biases and
 
 Use the code below to get started with the model.
 
+```markdown
+Install the latest version of the following python libraries:
+- torch
+- accelerate
+- peft
+- bitsandbytes
+```
+
 ```python
 from transformers import AutoModelForCausalLM
 from peft import PeftModel
@@ -86,8 +94,8 @@ model = PeftModel.from_pretrained(base_model, peft_model_id)
 Run the model with a quantization configuration
 
 ```python
-import torch, accelerate, peft
-from transformers import AutoModelForCausalLM, BitsAndBytesConfig, pipeline
+import torch, accelerate, peft
+from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline
 from peft import PeftModel
 
 # Set up quantization configuration
@@ -110,6 +118,8 @@ base_model = AutoModelForCausalLM.from_pretrained(
 peft_model_id = "fc91/phi3-mini-instruct-full_ethics-lora"
 model = PeftModel.from_pretrained(base_model, peft_model_id)
 
+tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
+
 messages = [
     {"role": "system", "content": "You are a helpful AI assistant that grounds all of its replies in ethical theories."},
     {"role": "user", "content": """I am driving a car, and I have to make a choice. A kid suddenly appears in the middle of the road chasing a ball. To save the kid, I
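
The diff cuts off before the snippet is complete, but the added tokenizer step exists to turn the `messages` list into a single Phi-3-style prompt string. Below is a minimal, dependency-free sketch of that rendering; the special tokens (`<|system|>`, `<|user|>`, `<|assistant|>`, `<|end|>`) are an assumption based on the published Phi-3 chat format, and `render_phi3_chat` is a hypothetical helper for illustration. With the real tokenizer you would instead call `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)`.

```python
# Illustration only: manually render a `messages` list into the prompt
# string that the Phi-3 chat template is expected to produce.
# The special tokens used here are an assumption, not taken from the README.

def render_phi3_chat(messages):
    """Join role-tagged messages into one Phi-3-style prompt string."""
    parts = []
    for m in messages:
        # Each turn is wrapped as <|role|>\n<content><|end|>\n
        parts.append(f"<|{m['role']}|>\n{m['content']}<|end|>\n")
    # Trailing assistant tag asks the model to generate the next reply
    parts.append("<|assistant|>\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant that grounds all of its replies in ethical theories."},
    {"role": "user", "content": "Should I swerve to save the kid?"},
]

prompt = render_phi3_chat(messages)
print(prompt.startswith("<|system|>"))  # True
```

In practice, prefer `apply_chat_template` over manual formatting: the template shipped with the tokenizer is authoritative, and hand-rolled prompts silently break if the model's chat format differs from what you assumed.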