---
license: other
license_name: aml
language:
- en
- es
- fr
- de
- it
- pt
- ru
- ar
- hi
- ko
- zh
library_name: transformers
extra_gated_prompt: Company name is optional, please put NA if you would prefer not to share it.
extra_gated_fields:
  Company: text
  I agree to use this model in accordance with the Arcee Model License (AML): checkbox
base_model:
- arcee-ai/AFM-4.5B-Base
---

<div align="center">
  <picture>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/Lj9YVLIKKdImV_jID0A1g.png" width="25%" alt="Arcee AFM 4.5B">
  </picture>
</div>

> These are the weights for the preview model hosted on TogetherAI between June 18th and July 28th. For the final release checkpoint, optimized for retrieval, instruction following, and assistant use cases, please see [AFM-4.5B](https://huggingface.co/arcee-ai/AFM-4.5B).

# AFM-4.5B-Preview

AFM-4.5B is a 4.5-billion-parameter instruction-tuned model developed by Arcee.ai, designed for enterprise-grade performance across diverse deployment environments, from cloud to edge. The base model was trained on 8 trillion tokens: 6.5 trillion tokens of general pretraining data followed by 1.5 trillion tokens of midtraining data with an enhanced focus on mathematical reasoning and code generation. Following pretraining, the model underwent supervised fine-tuning on high-quality instruction datasets, and was further refined through reinforcement learning, both on verifiable rewards and for human preference. We use a modified version of [TorchTitan](https://arxiv.org/abs/2410.06511) for pretraining, [Axolotl](https://axolotl.ai) for supervised fine-tuning, and a modified version of [Verifiers](https://github.com/willccbb/verifiers) for reinforcement learning.

The development of AFM-4.5B prioritized data quality as a fundamental requirement for robust model performance. We collaborated with DatologyAI, a company specializing in large-scale data curation. DatologyAI's curation pipeline integrates a suite of proprietary techniques: model-based quality filtering, embedding-based curation, target distribution matching, source mixing, and synthetic data. Their expertise enabled the creation of a curated dataset tailored to support strong real-world performance.

The model architecture follows a standard decoder-only transformer design (Vaswani et al.), incorporating several modifications for performance and efficiency. Notable features include grouped query attention for improved inference efficiency, and ReLU² activation functions in place of SwiGLU to enable sparsification while maintaining or exceeding benchmark performance.
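
The two architectural choices above can be sketched in a few lines of plain Python. This is an illustrative toy, not Arcee's implementation, and the head counts are made-up examples:

```python
# Toy sketch of ReLU^2 activation and grouped query attention head sharing
# (illustrative only; head counts are invented for the example).

def relu2(x: float) -> float:
    """ReLU^2 activation: square the ReLU output. Negative inputs map to
    exactly zero, which is what enables activation sparsity."""
    return max(x, 0.0) ** 2

def kv_head_for_query_head(q_head: int, n_heads: int, n_kv_heads: int) -> int:
    """Grouped query attention: each consecutive group of
    n_heads // n_kv_heads query heads shares one key/value head,
    shrinking the KV cache relative to full multi-head attention."""
    return q_head // (n_heads // n_kv_heads)

print(relu2(-1.5), relu2(2.0))                              # 0.0 4.0
print([kv_head_for_query_head(h, 8, 2) for h in range(8)])  # [0, 0, 0, 0, 1, 1, 1, 1]
```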

The model available in this repository is the instruct model, following supervised fine-tuning and reinforcement learning.

For more details, see our documentation: https://docs.arcee.ai/arcee-foundation-models/introduction-to-arcee-foundation-models

***

<div align="center">
  <picture>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/sSVjGNHfrJKmQ6w8I18ek.png" style="background-color:ghostwhite;padding:5px;" width="17%" alt="Powered by Datology">
  </picture>
</div>

## Model Details

* **Model Architecture:** ArceeForCausalLM
* **Parameters:** 4.5B
* **Training Tokens:** 8 trillion
* **License:** [Arcee Model License (AML)](https://huggingface.co/arcee-ai/AFM-4.5B#license)
* **Recommended settings:**
  * temperature: 0.5
  * top_k: 50
  * top_p: 0.95
  * repeat_penalty: 1.1
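
The recommended settings above map directly onto `model.generate` keyword arguments; note that what llama.cpp-style tools call `repeat_penalty` is named `repetition_penalty` in the `transformers` API. A small convenience dict (a sketch, not an official config):

```python
# The recommended sampling settings above, collected as generate() kwargs.
# Note: llama.cpp's `repeat_penalty` is called `repetition_penalty` in
# the transformers API.
AFM_SAMPLING = dict(
    do_sample=True,
    temperature=0.5,
    top_k=50,
    top_p=0.95,
    repetition_penalty=1.1,
)

# Usage (assuming `model` and `input_ids` as in the transformers example):
# outputs = model.generate(input_ids, max_new_tokens=256, **AFM_SAMPLING)
```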

***

## Benchmarks

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/BdsWFc4pxiHlK2E0j9AfG.png)

*Qwen3's and SmolLM's reasoning approaches cause their scores to vary widely from suite to suite; the scores above all come from our internal harness with the same hyperparameters, so be sure to also reference their reported scores. SmolLM has just released its own [bench](https://github.com/huggingface/smollm).*

## How to use with `transformers`

You can use the model directly with the `transformers` library.

We recommend a lower temperature, around 0.5, for optimal performance.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "arcee-ai/AFM-4.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

messages = [
    {"role": "user", "content": "Who are you?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.5,
    top_k=50,
    top_p=0.95
)

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

## How to use with `vllm`

Support in `vllm` is pending a PR merge: https://github.com/vllm-project/vllm/pull/21725

## How to use with the Together API

You can access this model directly via the [Together Playground](https://api.together.xyz/playground/arcee-ai/AFM-4.5B).

### Python (Official Together SDK)

```python
from together import Together

client = Together()
response = client.chat.completions.create(
    model="arcee-ai/AFM-4.5B",
    messages=[
        {
            "role": "user",
            "content": "What are some fun things to do in New York?"
        }
    ]
)
print(response.choices[0].message.content)
```

### cURL

```bash
curl -X POST "https://api.together.xyz/v1/chat/completions" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "arcee-ai/AFM-4.5B",
    "messages": [
      {
        "role": "user",
        "content": "What are some fun things to do in New York?"
      }
    ]
  }'
```
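
If you would rather avoid SDK dependencies, the same request can be assembled with Python's standard library. This is a sketch based on the cURL example above; `TOGETHER_API_KEY` is assumed to be set in your environment, and the final line is left commented so nothing is sent until you opt in:

```python
import json
import os
import urllib.request

# Build the same OpenAI-compatible chat completions payload as the cURL example.
payload = {
    "model": "arcee-ai/AFM-4.5B",
    "messages": [
        {"role": "user", "content": "What are some fun things to do in New York?"}
    ],
}

req = urllib.request.Request(
    "https://api.together.xyz/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {os.environ.get('TOGETHER_API_KEY', '')}",
        "Content-Type": "application/json",
    },
)

# resp = urllib.request.urlopen(req)  # uncomment to actually send the request
# print(json.load(resp)["choices"][0]["message"]["content"])
```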

## Quantization support

Support for llama.cpp is available; GGUF-format quants are provided here:

https://huggingface.co/arcee-ai/AFM-4.5B-GGUF

## License

AFM-4.5B is released under the [Arcee Model License](https://huggingface.co/arcee-ai/AFM-4.5B/blob/main/LICENSE). If your company makes less than $1.75 million in annual revenue, you're free to use the model for commercial purposes, as long as you're not providing the weights to a company above that threshold. If your product or application using AFM-4.5B is sold to a larger company, that's fine, as long as they don't receive or run the weights directly.

We want as many developers, researchers, and builders as possible to benefit from AFM-4.5B. At the same time, this license ensures that we can continue to develop and support the model for the community.