Add MNN Q4 conversion for TokForge mobile inference

Files changed:
- .gitattributes +2 -0
- README.md +98 -0
- config.json +10 -0
- embeddings_bf16.bin +3 -0
- export_args.json +42 -0
- llm.mnn +3 -0
- llm.mnn.weight +3 -0
- llm_config.json +12 -0
- tokenizer.txt +0 -0
.gitattributes
CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+llm.mnn filter=lfs diff=lfs merge=lfs -text
+llm.mnn.weight filter=lfs diff=lfs merge=lfs -text
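These two new rules are what `git lfs track` writes. A minimal sketch of reproducing the change locally, assuming `git-lfs` is installed and initialized in the repo:

```bash
# Register the two MNN artifacts with Git LFS; this appends the
# filter/diff/merge rules shown above to .gitattributes
git lfs track "llm.mnn" "llm.mnn.weight"
git add .gitattributes
```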
README.md
ADDED
@@ -0,0 +1,98 @@
+---
+license: apache-2.0
+language:
+- en
+pipeline_tag: text-generation
+base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
+tags:
+- mnn
+- llama
+- mobile
+- on-device
+- tokforge
+- uncensored
+- abliterated
+---
+
+# DeepSeek-R1-Distill-Llama-8B-MNN
+
+Pre-converted [DeepSeek R1 Distill Llama 8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) in MNN format for on-device inference with [TokForge](https://tokforge.ai).
+
+> **Original model by [DeepSeek](https://huggingface.co/deepseek-ai)**, converted to MNN Q4 for mobile deployment.
+
+## Model Details
+
+| Property | Value |
+|---|---|
+| **Architecture** | Llama 3.1 (full attention, 32 layers, GQA; distilled from DeepSeek-R1) |
+| **Parameters** | 8B (4-bit quantized) |
+| **Format** | MNN (Alibaba Mobile Neural Network) |
+| **Quantization** | W4A16 (4-bit weights, block size 128) |
+| **Vocab** | 128,256 tokens |
+| **Source** | [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) |
+
+## Description
+
+DeepSeek's R1 reasoning capability distilled into a Llama 3.1 8B body, bringing chain-of-thought reasoning to mobile devices. The model shows its thinking step by step, which makes it well suited to math, logic puzzles, coding, and complex analysis. DeepSeek reports the distilled models are competitive with OpenAI's o1-mini on reasoning benchmarks.
+
+## Files
+
+| File | Description |
+|------|-------------|
+| `llm.mnn` | Model computation graph |
+| `llm.mnn.weight` | Quantized weight data (Q4, block=128) |
+| `embeddings_bf16.bin` | Token embeddings, exported separately in bf16 |
+| `llm_config.json` | Model config with Jinja chat template |
+| `tokenizer.txt` | Tokenizer vocabulary |
+| `config.json` | MNN runtime config |
+
+## Usage with TokForge
+
+This model is optimized for **[TokForge](https://tokforge.ai)**, a free Android app for private, on-device LLM inference.
+
+1. Download [TokForge from the Play Store](https://tokforge.ai)
+2. Open the app → Models → Download this model
+3. Start chatting; everything runs 100% locally, no internet required
+
+### Recommended Settings
+
+| Setting | Value |
+|---------|-------|
+| Backend | OpenCL (Qualcomm) / Vulkan (MediaTek) / CPU (fallback) |
+| Precision | Low |
+| Threads | 4 |
+| Thinking | On for this model (R1 distills emit `<think>` reasoning traces) |
+
+## Performance
+
+Actual speed varies by device, thermal state, and generation length. Typical ranges for this model size:
+
+| Device | SoC | Backend | tok/s |
+|---|---|---|---|
+| RedMagic 11 Pro | SM8850 | OpenCL | ~14 |
+| Lenovo TB520FU | SM8650 | OpenCL | ~10 |
+
+## Attribution
+
+This is an MNN conversion of **[DeepSeek R1 Distill Llama 8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B)** by **[DeepSeek](https://huggingface.co/deepseek-ai)**. All credit for the model architecture, training, and fine-tuning goes to the original author(s). This conversion only changes the runtime format for mobile deployment.
+
+## Limitations
+
+- Intended for TokForge / MNN on-device inference on Android
+- This is a runtime bundle, not a standard Transformers training checkpoint
+- Quantization (Q4) may slightly reduce quality compared to the full-precision original
+- Abliterated/uncensored models have had safety filters removed; **use responsibly**
+
+## Community
+
+- **Website:** [tokforge.ai](https://tokforge.ai)
+- **Discord:** [Join our Discord](https://discord.gg/Acv3CBtfVm)
+- **GitHub:** [TokForge on GitHub](https://github.com/darkmaniac7/Elysium)
+
+## Export Details
+
+Converted using MNN's `llmexport` pipeline:
+```bash
+python llmexport.py --path deepseek-ai/DeepSeek-R1-Distill-Llama-8B --export mnn --quant_bit 4 --quant_block 128
+```
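As a sanity check before shipping to a device, the exported bundle can be exercised on a desktop with MNN's LLM demo. This is a sketch only: the `MNN_BUILD_LLM` CMake flag and the `llm_demo` binary name are assumptions based on recent MNN versions, so check the options your checkout actually exposes.

```bash
# Build MNN with its LLM engine enabled (flag name assumed; verify
# against your MNN version's CMake options)
cmake .. -DMNN_BUILD_LLM=true && make -j8

# config.json references llm.mnn / llm.mnn.weight by relative path,
# so point the demo at the downloaded repo directory
./llm_demo /path/to/DeepSeek-R1-Distill-Llama-8B-MNN/config.json
```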
config.json
ADDED
@@ -0,0 +1,10 @@
+{
+    "llm_model": "llm.mnn",
+    "llm_weight": "llm.mnn.weight",
+    "backend_type": "cpu",
+    "thread_num": 4,
+    "precision": "low",
+    "memory": "low",
+    "sampler_type": "penalty",
+    "penalty": 1.1
+}
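This shipped config defaults to the CPU backend, while the README recommends OpenCL on Qualcomm devices. A hypothetical tweak with `jq`; the exact `backend_type` string for OpenCL is an assumption inferred from the README's settings table, so verify it against your MNN build's documentation:

```bash
# Produce a GPU variant of the runtime config without editing by hand
# ("opencl" as the accepted value is an assumption, not confirmed here)
jq '.backend_type = "opencl"' config.json > config.opencl.json
```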
embeddings_bf16.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f8491b384e310f1a5e986ed878f43caae53af8ec494abeb388643f43cf37248b
+size 1050673152
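The pointer size is consistent with the export settings: a separately stored embedding table (`seperate_embed` in export_args.json) of 128,256 vocab entries by 4,096 hidden dims at 2 bytes per bf16 value.

```bash
# vocab_size * hidden_size * sizeof(bf16) should match the LFS pointer
echo $((128256 * 4096 * 2))   # prints 1050673152, matching "size" above
```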
export_args.json
ADDED
@@ -0,0 +1,42 @@
+{
+    "path": "/root/models/hf_convert_queue/DeepSeek-R1-Distill-Llama-8B",
+    "type": null,
+    "tokenizer_path": "/root/models/hf_convert_queue/DeepSeek-R1-Distill-Llama-8B",
+    "eagle_path": null,
+    "lora_path": null,
+    "gptq_path": null,
+    "dst_path": "/root/models/hf_uploads/DeepSeek-R1-Distill-Llama-8B-MNN",
+    "verbose": false,
+    "test": null,
+    "export": "mnn",
+    "onnx_slim": false,
+    "quant_bit": 4,
+    "quant_block": 128,
+    "visual_quant_bit": null,
+    "visual_quant_block": null,
+    "lm_quant_bit": 4,
+    "lm_quant_block": 128,
+    "mnnconvert": "../../../build/MNNConvert",
+    "ppl": false,
+    "awq": false,
+    "hqq": false,
+    "omni": false,
+    "transformer_fuse": false,
+    "group_conv_native": false,
+    "smooth": false,
+    "sym": false,
+    "visual_sym": false,
+    "seperate_embed": true,
+    "lora_split": false,
+    "calib_data": null,
+    "act_bit": 16,
+    "embed_bit": 16,
+    "act_sym": false,
+    "quant_config": null,
+    "generate_for_npu": false,
+    "skip_weight": false,
+    "omni_epochs": 20,
+    "omni_lr": 0.005,
+    "omni_wd": 0.0001,
+    "tie_word_embeddings": false
+}
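The README's one-line export command omits a few settings recorded here (destination path, lm-head quantization, and the separate embedding export). A reconstruction of the fuller invocation from these args, assuming `llmexport.py`'s flags mirror the JSON keys one-to-one; that holds for argparse-style tools but should be verified against your llmexport version:

```bash
python llmexport.py \
  --path /root/models/hf_convert_queue/DeepSeek-R1-Distill-Llama-8B \
  --dst_path /root/models/hf_uploads/DeepSeek-R1-Distill-Llama-8B-MNN \
  --export mnn \
  --quant_bit 4 --quant_block 128 \
  --lm_quant_bit 4 --lm_quant_block 128 \
  --seperate_embed   # flag spelling follows the tool's own JSON key
```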
llm.mnn
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:47003d840b362f0e90e26b5ae1be4c103448b21453a11d3a42de57632b044904
+size 557656
llm.mnn.weight
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1085472c8100f39b439d49ee50b6cccf616d3480c43c42d42d424244ea6d6325
+size 4223505242
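Both model files ship as LFS pointers, so a downloaded copy can be checked against the oids recorded in this commit:

```bash
# Local hashes should match the sha256 oids in the pointer files above
sha256sum llm.mnn llm.mnn.weight
# expected:
# 47003d840b362f0e90e26b5ae1be4c103448b21453a11d3a42de57632b044904  llm.mnn
# 1085472c8100f39b439d49ee50b6cccf616d3480c43c42d42d424244ea6d6325  llm.mnn.weight
```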
llm_config.json
ADDED
@@ -0,0 +1,12 @@
+{
+    "model_type": "llama",
+    "hidden_size": 4096,
+    "attention_mask": "float",
+    "attention_type": "full",
+    "is_mrope": false,
+    "jinja": {
+        "chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% set ns = namespace(is_first=false, is_tool=false, is_output_first=true, system_prompt='') %}{%- for message in messages %}{%- if message['role'] == 'system' %}{% set ns.system_prompt = message['content'] %}{%- endif %}{%- endfor %}{{bos_token}}{{ns.system_prompt}}{%- for message in messages %}{%- if message['role'] == 'user' %}{%- set ns.is_tool = false -%}{{'<|User|>' + message['content']}}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is none %}{%- set ns.is_tool = false -%}{%- for tool in message['tool_calls']%}{%- if not ns.is_first %}{{'<|Assistant|><|tool▁calls▁begin|><|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\\n' + '```json' + '\\n' + tool['function']['arguments'] + '\\n' + '```' + '<|tool▁call▁end|>'}}{%- set ns.is_first = true -%}{%- else %}{{'\\n' + '<|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\\n' + '```json' + '\\n' + tool['function']['arguments'] + '\\n' + '```' + '<|tool▁call▁end|>'}}{{'<|tool▁calls▁end|><|end▁of▁sentence|>'}}{%- endif %}{%- endfor %}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is not none %}{%- if ns.is_tool %}{{'<|tool▁outputs▁end|>' + message['content'] + '<|end▁of▁sentence|>'}}{%- set ns.is_tool = false -%}{%- else %}{% set content = message['content'] %}{% if '</think>' in content %}{% set content = content.split('</think>')[-1] %}{% endif %}{{'<|Assistant|>' + content + '<|end▁of▁sentence|>'}}{%- endif %}{%- endif %}{%- if message['role'] == 'tool' %}{%- set ns.is_tool = true -%}{%- if ns.is_output_first %}{{'<|tool▁outputs▁begin|><|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- set ns.is_output_first = false %}{%- else %}{{'\\n<|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- endif %}{%- endif %}{%- endfor -%}{% if ns.is_tool %}{{'<|tool▁outputs▁end|>'}}{% endif %}{% if add_generation_prompt and not ns.is_tool %}{{'<|Assistant|><think>\\n'}}{% endif %}",
+        "bos": "<|begin▁of▁sentence|>",
+        "eos": "<|end▁of▁sentence|>"
+    }
+}
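Worked through by hand from the template above: a single-turn conversation with a system message and `add_generation_prompt` enabled renders to the sequence below (the template appends a newline after `<think>`). The trailing `<|Assistant|><think>` is what prompts the model to open its reasoning trace.

```
<|begin▁of▁sentence|>You are a helpful assistant.<|User|>What is 7 * 8?<|Assistant|><think>
```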
tokenizer.txt
ADDED
The diff for this file is too large to render; see the raw file.