Update README.md
#2 by SebastianBodza - opened

README.md CHANGED
@@ -33,7 +33,7 @@ The first trick would be to load the model with the specific argument below to l
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
 tokenizer = AutoTokenizer.from_pretrained("Cedille/de-anna")
-model = AutoModelForCausalLM.from_pretrained("Cedille/de-anna",
+model = AutoModelForCausalLM.from_pretrained("Cedille/de-anna", low_cpu_mem_usage=True)
 ```
 
 We are planning on adding an fp16 branch soon. Combined with the lower memory loading above, loading could be done on 12.1GB of RAM.
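For context, the patched line completes an otherwise truncated `from_pretrained` call. The sketch below shows the fixed snippet together with the fp16 option the README says is planned; `low_cpu_mem_usage` and `torch_dtype` are standard `transformers` arguments, but the `revision="float16"` branch name is a hypothetical placeholder, since that branch does not exist yet.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Cedille/de-anna")

# low_cpu_mem_usage=True skips the usual "initialize random weights, then
# overwrite with the checkpoint" sequence, so roughly one copy of the
# weights is held in RAM during loading instead of two.
model = AutoModelForCausalLM.from_pretrained(
    "Cedille/de-anna",
    low_cpu_mem_usage=True,
    # Once the planned fp16 branch lands, half-precision weights could
    # roughly halve memory again (the branch name below is hypothetical):
    # revision="float16",
    # torch_dtype=torch.float16,
)
```

The point of the flag: by default `from_pretrained` builds the model with freshly initialized weights and then loads the checkpoint state dict on top, which briefly keeps both in memory; `low_cpu_mem_usage=True` fills the weights in place, which is what makes the 12.1GB figure plausible once fp16 weights are available.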