# **Maverick-1-14B Model Card**

*Maverick generated this model card!*
## **Model Overview**
**Maverick-1-14B** is a 14.0-billion-parameter causal language model fine-tuned from Qwen2.5-14B-Instruct. This model is designed to provide highly fluent, contextually aware, and logically sound outputs across a broad range of NLP and reasoning tasks. It balances instruction-following with generative flexibility.

```python
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
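In the usual Transformers workflow, `generated_ids` passed to `batch_decode` has already had the echoed prompt tokens trimmed from the raw `generate` output, so only the completion is decoded. A minimal sketch of that trimming step with toy token ids (no model download needed; `trim_prompts` is a hypothetical helper name, not part of this model's API):

```python
def trim_prompts(input_ids, generated_ids):
    """Drop the echoed prompt tokens from each generated sequence,
    keeping only the newly generated tokens for decoding."""
    return [out[len(inp):] for inp, out in zip(input_ids, generated_ids)]

# Toy ids: a 3-token prompt whose generation echoes the prompt plus 2 new tokens.
prompt_ids = [[101, 7, 9]]
output_ids = [[101, 7, 9, 55, 102]]
print(trim_prompts(prompt_ids, output_ids))  # → [[55, 102]]
```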
### **Maverick Search usage 🔍**
To use this model with Maverick Search, please refer to this [repository](https://github.com/Aayan-Mishra/Maverick-Search).
## **Limitations**
Users should be aware of the following limitations: