Update README.md
README.md CHANGED
````diff
@@ -27,15 +27,18 @@ If any of these two is not installed, the "eager" implementation will be used. O
 ## Generation
 You can use the classic `generate` API:
 ```python
-from transformers import MambaConfig, MambaForCausalLM, AutoTokenizer
-import torch
+>>> from transformers import MambaConfig, MambaForCausalLM, AutoTokenizer
+>>> import torch
 
-tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf")
-model = MambaForCausalLM.from_pretrained("state-spaces/mamba-130m-hf")
-input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]
+>>> tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf")
+>>> model = MambaForCausalLM.from_pretrained("state-spaces/mamba-130m-hf")
+>>> input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]
 
-out = model.generate(input_ids, max_new_tokens=10)
-print(tokenizer.batch_decode(out))
+>>> out = model.generate(input_ids, max_new_tokens=10)
+>>> print(tokenizer.batch_decode(out))
+Hey how are you doing?
+
+I'm so glad you're here.
 ```
 
 ## PEFT finetuning example
````
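For reference, the updated snippet also runs as a plain script outside a doctest. The following is a minimal sketch, not part of the commit: it drops the `MambaConfig` and `torch` imports, which the snippet never uses, and keeps the checkpoint name, prompt, and `max_new_tokens=10` verbatim from the hunk above.

```python
from transformers import AutoTokenizer, MambaForCausalLM

# Checkpoint and prompt as in the README snippet
tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf")
model = MambaForCausalLM.from_pretrained("state-spaces/mamba-130m-hf")

input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]

# Greedy decoding by default; the token budget matches the doctest
out = model.generate(input_ids, max_new_tokens=10)
print(tokenizer.batch_decode(out))
```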