Update README.md
Browse files

README.md
CHANGED

@@ -122,131 +122,135 @@ language:
- sw

---

**Removed (previous model card, Nous-V1 8B):**

# Nous-V1 8B

…

- **🧠 Enhanced Contextual Understanding:** Supports a 128k token context window, enabling complex multi-turn conversations and document analysis
- **🌐 Multilingual & Multi-domain:** Trained on a diverse dataset for broad language and domain coverage
- **🤖 Instruction-Following & Adaptability:** Fine-tuned to respond accurately and adaptively across tasks
- **🚀 Optimized Inference:** Suitable for GPU environments such as NVIDIA A100, T4, and P100 for low-latency applications

---

## …

…

---

## …

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "…"  # model id elided in the diff

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# strip the thinking segment (token id 151668 is the `</think>` marker)
try:
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("content:", content)
```

---

## …

```
…
Min-p: 0.0
```

---

## …

**Q:** …
**A:** NVIDIA GPUs with at least 16GB VRAM (e.g., A100, 3090) are optimal for inference and fine-tuning.

**Q:** …
**A:** Nous-V1 8B includes safety mitigations but should be used with human oversight and proper filtering for sensitive content.

---

## …

```bibtex
@misc{…,
  title={…},
  author={…},
  year={2025},
  …
}
```

---
**Added (new model card, Apollo-1-8B):**

# Apollo-1-8B

[Apollo-1-8B](https://huggingface.co/NoemaResearch/Apollo-1-8B) · [Qwen3-8B (base)](https://huggingface.co/Qwen/Qwen3-8B) · [License](LICENSE)

Apollo-1-8B is an **8-billion-parameter instruction-tuned model** developed by **Noema Research**. It is based on [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) and optimized for **advanced reasoning, instruction following, and high-performance deployment**.

This model represents the **large-scale member** of the Apollo series, balancing strong reasoning capabilities with efficiency for multi-domain applications.

---
## Model Overview

* **Base model:** `Qwen3-8B`
* **Architecture:** Decoder-only transformer
* **Parameters:** ~8B
* **Context length:** up to 32k tokens (inherits Qwen3 long-context support; a quick check is sketched after this list)
* **Domain:** General-purpose reasoning, instruction following, and code generation
* **Primary applications:**
  * Advanced conversational AI
  * Multi-step reasoning and problem solving
  * Knowledge assistants and tutoring systems
  * Software development and code generation
* **License:** anvdl-1.0

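As a quick check of the advertised context window, the configured maximum sequence length can be read from the checkpoint config. A minimal sketch, assuming the standard `transformers` config attribute for this architecture:

```python
from transformers import AutoConfig

# Read the maximum sequence length the checkpoint was configured for
config = AutoConfig.from_pretrained("NoemaResearch/Apollo-1-8B", trust_remote_code=True)
print(config.max_position_embeddings)
```
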
---
## Key Features

* **Instruction tuning** for reliable multi-step reasoning and task completion
* **Extended reasoning depth** compared to Apollo-1-4B for complex queries
* **Long-context handling**, inherited from the Qwen3 architecture
* **Multilingual coverage**, supporting diverse languages and domains
* **Balanced resource requirements**, deployable on high-end consumer hardware and cloud GPUs

---
## Usage

The model is available in Hugging Face Transformers format. Example:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "NoemaResearch/Apollo-1-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)

messages = [
    {"role": "system", "content": "You are Apollo, a reasoning assistant."},
    {"role": "user", "content": "Explain the differences between supervised, unsupervised, and reinforcement learning with examples."}
]

# return_dict=True returns a dict of tensors that can be unpacked into generate()
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.6, top_p=0.9)

# decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
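
For interactive use, token streaming can be layered on with the stock `TextStreamer` helper from `transformers`; this sketch reuses `model`, `tokenizer`, and `inputs` from the example above:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated instead of waiting for the full completion
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.6, top_p=0.9, streamer=streamer)
```
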
**Recommended settings:**

* `temperature=0.4–0.8`
* `top_p=0.9–0.95`
* Lower temperatures yield more factual and concise answers (two presets are sketched below)

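To make the trade-off concrete, here are two presets at the ends of the recommended ranges (illustrative values within those ranges, not published defaults), reusing `model` and `inputs` from the usage example:

```python
# Illustrative presets spanning the recommended sampling ranges
factual = dict(do_sample=True, temperature=0.4, top_p=0.9)    # more factual, concise output
creative = dict(do_sample=True, temperature=0.8, top_p=0.95)  # more varied, exploratory output

answer = model.generate(**inputs, max_new_tokens=512, **factual)
brainstorm = model.generate(**inputs, max_new_tokens=512, **creative)
```
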
---
## Evaluation
Apollo-1-8B demonstrates stronger reasoning and instruction-following capabilities relative to Apollo-1-4B, with internal evaluations indicating:

* Higher accuracy on complex multi-step reasoning tasks
* More robust **instruction adherence**
* Reduced **hallucinations** in factual and structured outputs
* High efficiency for large-context tasks

A full benchmark report will be provided in a future update.
For upstream performance details, see the [Qwen3-8B model card](https://huggingface.co/Qwen/Qwen3-8B).

---
## Limitations

* **Reasoning scale:** while improved, Apollo-1-8B cannot match larger models (14B+) on extremely complex or open-ended tasks
* **Knowledge breadth:** some highly specialized or niche knowledge may be limited
* **Hallucinations:** may generate plausible but incorrect information
* **Prompt sensitivity:** outputs remain dependent on careful prompt formulation

---
## Responsible Use

* Do not rely on Apollo-1-8B for critical decisions without human oversight
* Verify outputs before applying them in factual, legal, or safety-critical contexts
* Avoid providing personal or sensitive data in prompts
* The model should not be used to generate unsafe, harmful, or disallowed content

---
## Model Variants

* **Full precision (safetensors)** — research and high-fidelity inference
* **bf16 / fp16** — efficient inference on modern accelerators
* **Quantized versions (int8 / int4)** — deployment in resource-constrained environments (a loading sketch follows below)

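For the int4 path, the main checkpoint can be quantized at load time with `bitsandbytes` through the standard `BitsAndBytesConfig`; the 4-bit settings below are illustrative assumptions, not published configurations:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_id = "NoemaResearch/Apollo-1-8B"

# Illustrative 4-bit setup (requires the bitsandbytes package and a CUDA GPU)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
```
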
---
## Citation

If you use this model, please cite both Apollo-1-8B and the Qwen3 base model:

```bibtex
@misc{noema2025apollo8b,
  title={Apollo-1-8B},
  author={Noema Research},
  year={2025},
  howpublished={\url{https://huggingface.co/NoemaResearch/Apollo-1-8B}}
}
```
---
## Acknowledgements
Apollo-1-8B builds upon the [Qwen3](https://huggingface.co/Qwen) family of models.
We thank the Qwen team for open-sourcing their models and enabling derivative research.
|