Update README.md
README.md (changed)
````diff
@@ -51,16 +51,15 @@ Dataset
 
 The model was trained on a Portuguese conversational dataset, including:
 
-
-
-
-Estruturas de linguagem natural
-Format
-User: Oi!
-Bot: Olá! Como posso te ajudar?
+
+Pure text
+
 Training Notes
+
 Focused on language pattern learning, not reasoning
+
 No instruction tuning (no RLHF, no alignment)
+
 Lightweight training pipeline
 Optimized for small-scale experimentation
 💡 Capabilities
@@ -91,7 +90,7 @@ model_name = "AxionLab-official/MiniBot-0.9M-Base"
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 model = AutoModelForCausalLM.from_pretrained(model_name)
 
-prompt = "
+prompt = "The cat "
 inputs = tokenizer(prompt, return_tensors="pt")
 
 outputs = model.generate(
@@ -101,8 +100,10 @@ outputs = model.generate(
     top_p=0.95,
     do_sample=True
)
-
+
 print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+```
+
 ⚙️ Recommended Generation Settings
 
 For better results:
````