Transformers · Safetensors

fc91 committed on
Commit fdc3dcd · verified · 1 Parent(s): b602f75

Update README.md

Files changed (1):
  README.md +3 -3
README.md CHANGED
@@ -74,7 +74,7 @@ Users (both direct and downstream) should be made aware of the risks, biases and
 
 Use the code below to get started with the model.
 
-```markdown
+```python
 from transformers import AutoModelForCausalLM
 from peft import PeftModel
 
@@ -85,7 +85,7 @@ model = PeftModel.from_pretrained(base_model, peft_model_id)
 
 Run the model with a quantization configuration
 
-```markdown
+```python
 import torch, accelerate, peft
 from transformers import AutoModelForCausalLM, BitsAndBytesConfig, pipeline
 from peft import PeftModel
@@ -161,7 +161,7 @@ The following subsets of the above dataset were leveraged:
 
 #### Training Hyperparameters
 
-```markdown
+```python
 per_device_train_batch_size=16
 per_device_eval_batch_size=32
 gradient_accumulation_steps=2