- text-generation-inference
---

Blaze.1-27B-Preview is a Gemma 2-based, 27-billion-parameter model. Gemma is a family of lightweight, state-of-the-art open models from Google, built using the same research and technology that powers the Gemini models. These models are text-to-text, decoder-only large language models available in English, with open weights for both pre-trained and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Blaze.1-27B-Preview was fine-tuned on synthetic long-chain-of-thought reasoning datasets derived from models such as DeepSeek, Qwen, and OpenAI’s GPT-4.

# **Quickstart Chat Template**

Below are some code snippets to help you quickly get started with running the model. First, install the Transformers library with:

```sh
pip install -U transformers
```

Then, copy the snippet from the section that is relevant to your use case.

# **Running with the `pipeline` API**

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="prithivMLmods/Blaze.1-27B-Preview",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",  # replace with "mps" to run on a Mac device
)

messages = [
    {"role": "user", "content": "Who are you? Please, answer in pirate-speak."},
]

outputs = pipe(messages, max_new_tokens=256)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
# Ahoy, matey! I be Gemma, a digital scallywag, a language-slingin' parrot of the digital seas. I be here to help ye with yer wordy woes, answer yer questions, and spin ye yarns of the digital world. So, what be yer pleasure, eh? 🦜
```

# **Running the model on a single / multi GPU**

```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Blaze.1-27B-Preview")
model = AutoModelForCausalLM.from_pretrained(
    "prithivMLmods/Blaze.1-27B-Preview",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```

You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template` as follows:

```python
messages = [
    {"role": "user", "content": "Write me a poem about Machine Learning."},
]
input_ids = tokenizer.apply_chat_template(
    messages, return_tensors="pt", return_dict=True, add_generation_prompt=True
).to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
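
Because the model is tuned for long chain-of-thought outputs, it can be convenient to watch tokens arrive as they are generated. The snippet below is an illustrative sketch, not part of the original examples; it reuses `tokenizer`, `model`, and `messages` from above together with Transformers' `TextStreamer`.

```python
# Illustrative streaming sketch: prints tokens to stdout as they are generated,
# reusing `tokenizer`, `model`, and `messages` from the example above.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

input_ids = tokenizer.apply_chat_template(
    messages, return_tensors="pt", return_dict=True, add_generation_prompt=True
).to("cuda")

_ = model.generate(**input_ids, max_new_tokens=512, streamer=streamer)
```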

<a name="precisions"></a>
#### Running the model on a GPU using different precisions

The native weights of this model were exported in `bfloat16` precision.

You can also run the model in `float32` by omitting the dtype, but this yields no precision gain: the `bfloat16` weights are simply upcast to `float32`, roughly doubling memory use. See the example below.

* _Upcasting to `torch.float32`_

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Blaze.1-27B-Preview")
model = AutoModelForCausalLM.from_pretrained(
    "prithivMLmods/Blaze.1-27B-Preview",
    device_map="auto",
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
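
* _Loading in 4-bit with `bitsandbytes` (illustrative)_

For GPUs that cannot hold the full `bfloat16` checkpoint, the weights can also be loaded with 4-bit quantization. This is a minimal sketch rather than an official example; it assumes the `bitsandbytes` package is installed and that a small output-quality loss relative to the native weights is acceptable.

```python
# pip install accelerate bitsandbytes
# Minimal 4-bit loading sketch (assumption: bitsandbytes is installed).
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True)

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Blaze.1-27B-Preview")
model = AutoModelForCausalLM.from_pretrained(
    "prithivMLmods/Blaze.1-27B-Preview",
    device_map="auto",
    quantization_config=quantization_config,
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```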

# **Intended Use**

Blaze.1-27B-Preview is designed for advanced text generation tasks requiring logical reasoning, complex problem-solving, and long-form content generation. Its primary use cases include:

1. **Question Answering**: Generating detailed, accurate answers to a wide range of questions across various domains.
2. **Summarization**: Condensing long texts into concise summaries while preserving key information and context.
3. **Reasoning Tasks**: Performing multi-step reasoning, particularly in mathematical, logical, and conditional scenarios (see the prompt sketch after this list).
4. **Instruction Following**: Responding to user prompts with coherent and relevant outputs, based on fine-tuned instruction-following capabilities.
5. **Conversational AI**: Supporting virtual assistants and chatbots for both casual and professional applications.
6. **Multi-Model Comparison**: Benefiting researchers by providing outputs tuned with diverse datasets derived from DeepSeek, Qwen, and GPT-4, allowing comparative insights across different reasoning paradigms.
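
As an illustration of the reasoning use case, the sketch below reuses the `pipe` object from the Quickstart section to pose a multi-step word problem; the prompt text is only a hypothetical example.

```python
# Illustrative reasoning prompt (hypothetical example), reusing `pipe` from the
# Quickstart section. The model is asked to show its intermediate steps.
messages = [
    {
        "role": "user",
        "content": (
            "A train leaves at 09:00 travelling at 60 km/h; a second train leaves "
            "the same station at 10:00 travelling at 90 km/h on a parallel track. "
            "At what time does the second train catch up? Reason step by step."
        ),
    },
]

outputs = pipe(messages, max_new_tokens=512)
print(outputs[0]["generated_text"][-1]["content"].strip())
```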

# **Limitations**

1. **Reasoning Bias**: Despite its training on synthetic datasets, the model may exhibit biases in reasoning, especially when encountering unfamiliar problem types.
2. **Hallucinations**: Like other large language models, Blaze.1-27B-Preview may generate inaccurate or fabricated information, particularly when dealing with facts or events not covered during training.
3. **Dependency on Prompt Quality**: The quality of the model’s output relies heavily on the clarity and specificity of the input prompt. Poorly framed prompts may lead to irrelevant or incomplete responses.
4. **Long Context Handling**: While it is designed for long-chain reasoning, performance may degrade with excessively long inputs or contexts, resulting in loss of coherence or incomplete reasoning.
5. **Resource Requirements**: Due to its size (27 billion parameters), it requires substantial computational resources for both inference and fine-tuning, limiting its accessibility for users without high-performance hardware (the 4-bit loading sketch above can reduce the memory footprint).
6. **Language Support**: Although it excels in English, its capabilities in other languages may be limited, and unexpected issues may arise when processing multilingual or code-mixed inputs.