Files changed (1)
  1. README.md +121 -1
README.md CHANGED
@@ -15,6 +15,7 @@ tags:
  - 8-bit
  - GGUF
  - text-generation
+ license: apache-2.0
  ---
  # [MaziyarPanahi/Trinity-Mini-GGUF](https://huggingface.co/MaziyarPanahi/Trinity-Mini-GGUF)
  - Model creator: [arcee-ai](https://huggingface.co/arcee-ai)
@@ -42,4 +43,123 @@ Here is an incomplete list of clients and libraries that are known to support GG

  ## Special thanks

- 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
+ 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
+
+
+ Original README:
+
+ ---
+
+ <div align="center">
+ <picture>
+ <img
+ src="https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/i-v1KyAMOW_mgVGeic9WJ.png"
+ alt="Arcee Trinity Mini"
+ style="max-width: 100%; height: auto;"
+ >
+ </picture>
+ </div>
+
+ # Trinity Mini GGUF
+
+ Trinity Mini is an Arcee AI 26B MoE model with 3B active parameters. It is the medium-sized model in our new Trinity family, a series of open-weight models for enterprise and tinkerers alike.
+
+ This model is tuned for reasoning, but in testing it uses a total token count similar to that of competitive instruction-tuned models.
+
+ These are the GGUF files for running the model on llama.cpp-powered platforms.
+
+ ***
+
+ Trinity Mini is trained on 10T tokens gathered and curated through a key partnership with [Datology](https://www.datologyai.com/), building upon the excellent dataset we used for [AFM-4.5B](https://huggingface.co/arcee-ai/AFM-4.5B) with additional math and code.
+
+ Training was performed on a cluster of 512 H200 GPUs powered by [Prime Intellect](https://www.primeintellect.ai/) using HSDP parallelism.
+
+ More details, including key architecture decisions, can be found on our blog [here](https://www.arcee.ai/blog/the-trinity-manifesto).
+
+ Try it out now at [chat.arcee.ai](http://chat.arcee.ai/).
+
+ ***
+
+ ## Model Details
+
+ * **Model Architecture:** AfmoeForCausalLM
+ * **Parameters:** 26B total, 3B active
+ * **Experts:** 128 total, 8 active, 1 shared
+ * **Context length:** 128k
+ * **Training Tokens:** 10T
+ * **License:** [Apache 2.0](https://huggingface.co/arcee-ai/Trinity-Mini#license)
+ * **Recommended settings:**
+   * temperature: 0.15
+   * top_k: 50
+   * top_p: 0.75
+   * min_p: 0.06
+
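The recommended settings above can ride along in an OpenAI-compatible chat request (llama.cpp's server accepts `top_k` and `min_p` as extra body fields; other backends may ignore them). A minimal sketch, with the model name left as a placeholder:

```python
# Recommended sampler settings from the model card, as a reusable dict.
RECOMMENDED_SAMPLERS = {
    "temperature": 0.15,
    "top_k": 50,
    "top_p": 0.75,
    "min_p": 0.06,
}

def chat_body(prompt: str, model: str = "trinity-mini") -> dict:
    """Build a chat-completions request body carrying the recommended samplers."""
    return {
        "model": model,  # placeholder; depends on how your backend names the model
        "messages": [{"role": "user", "content": prompt}],
        **RECOMMENDED_SAMPLERS,
    }
```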
+ ***
+
+ ## Benchmarks
+
+ ![](https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/UMV0OZh_H1JfvgzBTXh6u.png)
+
+ <div align="center">
+ <picture>
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/sSVjGNHfrJKmQ6w8I18ek.png" style="background-color:ghostwhite;padding:5px;" width="17%" alt="Powered by Datology">
+ </picture>
+ </div>
+
+ ### Running our model
+
+ - [llama.cpp](https://huggingface.co/arcee-ai/Trinity-Mini#llamacpp)
+ - [LM Studio](https://huggingface.co/arcee-ai/Trinity-Mini#lm-studio)
+ - [API](https://huggingface.co/arcee-ai/Trinity-Mini#api)
+
+ ## llama.cpp
+
+ Supported in llama.cpp release b7061 and later.
+
+ Download the latest [llama.cpp release](https://github.com/ggml-org/llama.cpp/releases), then start the server:
+
+ ```sh
+ llama-server -hf arcee-ai/Trinity-Mini-GGUF:q4_k_m \
+   --temp 0.15 \
+   --top-k 50 \
+   --top-p 0.75 \
+   --min-p 0.06
+ ```
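Once the server is up, it exposes an OpenAI-compatible chat endpoint (llama-server listens on port 8080 by default). A minimal sketch of calling it from Python, with the actual send left commented out so it can be run without a live server:

```python
import json
import urllib.request

# Build a chat-completions request against a local llama-server
# (assumes the default listen address, http://localhost:8080).
body = json.dumps({
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "temperature": 0.15,
}).encode()

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment once llama-server is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```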
+
+ ## LM Studio
+
+ Supported in the latest LM Studio runtime.
+
+ Update to the latest available version, then verify your runtime:
+
+ 1. Click "Power User" at the bottom left
+ 2. Click the green "Developer" icon at the top left
+ 3. Select "LM Runtimes" at the top
+ 4. Refresh the list of runtimes and verify that the latest is installed
+
+ Then go to Model Search, search for `arcee-ai/Trinity-Mini-GGUF`, download your preferred size, and load it up in the chat.
+
+ ## API
+
+ Trinity Mini is available today on OpenRouter:
+
+ https://openrouter.ai/arcee-ai/trinity-mini
+
+ ```sh
+ curl -X POST "https://openrouter.ai/api/v1/chat/completions" \
+   -H "Authorization: Bearer $OPENROUTER_API_KEY" \
+   -H "Content-Type: application/json" \
+   -d '{
+     "model": "arcee-ai/trinity-mini",
+     "messages": [
+       {
+         "role": "user",
+         "content": "What are some fun things to do in New York?"
+       }
+     ]
+   }'
+ ```
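The response comes back in the standard chat-completions shape. A small sketch of extracting the assistant text (the response body below is made up for illustration):

```python
import json

# A made-up response in the standard chat-completions shape (fields abridged).
sample = json.loads("""
{
  "id": "gen-123",
  "model": "arcee-ai/trinity-mini",
  "choices": [
    {"message": {"role": "assistant", "content": "Visit Central Park."}}
  ]
}
""")

def assistant_text(response: dict) -> str:
    """Pull the assistant reply out of a chat-completions response."""
    return response["choices"][0]["message"]["content"]

print(assistant_text(sample))
```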
+
+ ## License
+
+ Trinity Mini is released under the Apache-2.0 license.