Update README.md #4
by Kabil007 - opened

README.md CHANGED

@@ -66,4 +66,25 @@ Who are you?<|im_end|>
 
 Monad has no support yet for multi-turn.
 
-A major envisioned use case for Monad is explainability, as the model does provide a unique trade-off between observability and actual reasoning performance.
+A major envisioned use case for Monad is explainability, as the model provides a unique trade-off between observability and actual reasoning performance.
+
+Use with Transformers (Directly)
+
+```python
+from transformers import AutoTokenizer, AutoModelForCausalLM
+
+# Load the tokenizer and model first ("path/to/monad" is a placeholder; use the actual Monad repo id).
+model_id = "path/to/monad"
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+model = AutoModelForCausalLM.from_pretrained(model_id)
+
+prompt = "Hi i am Kabil?\nAssistant:"
+
+input_token = tokenizer(prompt, return_tensors="pt")
+input_token.pop("token_type_ids", None)  # some tokenizers emit token_type_ids, which generate() does not accept
+input_token = input_token.to(model.device)
+
+outputs = model.generate(**input_token, max_new_tokens=40)
+
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+```
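The hunk context line ("Who are you?<|im_end|>") suggests the README documents a ChatML-style prompt format. If the repo ships a chat template, building the prompt with `apply_chat_template` may be more robust than the hand-written "Assistant:" suffix. A minimal sketch, assuming such a template exists; `path/to/monad` is again a placeholder for the actual repo id, and the call is single-turn only since Monad has no multi-turn support yet:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "path/to/monad"  # placeholder: substitute the actual Monad repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Single turn only: Monad has no multi-turn support yet.
messages = [{"role": "user", "content": "Who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant prefix defined by the template
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=40)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```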