---
tags:
- fp8
- quantized
- mistral
- roleplay
- creative-writing
- reasoning
base_model: TheDrummer/Behemoth-R1-123B-v2
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
---

# Behemoth-R1-123B-v2 FP8 Dynamic

FP8 Dynamic quantization of [TheDrummer/Behemoth-R1-123B-v2](https://huggingface.co/TheDrummer/Behemoth-R1-123B-v2), produced with [llmcompressor](https://github.com/vllm-project/llm-compressor).

## Model Details

- **Base Model**: TheDrummer/Behemoth-R1-123B-v2 (a Mistral Large 2411 finetune)
- **Quantization**: FP8 Dynamic (W8A8) via llmcompressor
- **Scheme**: FP8_DYNAMIC, with `lm_head` excluded
- **Size**: ~123 GB (vs. ~246 GB in FP16)
- **Format**: SafeTensors with compressed-tensors metadata

## Usage with vLLM

```bash
python3 -m vllm.entrypoints.openai.api_server \
    --model Irvollo/Behemoth-R1-123B-v2-FP8-Dynamic \
    --quantization compressed-tensors \
    --dtype bfloat16 \
    --max-model-len 32768 \
    --gpu-memory-utilization 0.95 \
    --enable-prefix-caching \
    --trust-remote-code
```
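
Once the server is up, any OpenAI-compatible client can talk to it. Below is a minimal sketch using only the Python standard library; the host, port, prompt, and sampling values are illustrative assumptions, not values from this card:

```python
import json
import urllib.request

# Illustrative request to the OpenAI-compatible endpoint started above.
# Host/port and sampling settings are assumptions; adjust to your deployment.
payload = {
    "model": "Irvollo/Behemoth-R1-123B-v2-FP8-Dynamic",
    "messages": [
        {"role": "user", "content": "Summarize FP8 quantization in one sentence."}
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# Uncomment to send the request against a running server:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```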

## Reasoning / Thinking

The model supports native reasoning when the assistant turn is prefilled with a `<think>` tag:

```json
{
  "messages": [
    {"role": "user", "content": "Your question"},
    {"role": "assistant", "content": "<think>\n"}
  ],
  "continue_final_message": true,
  "add_generation_prompt": false
}
```
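
In a script, the same prefill can be built programmatically and the reasoning trace separated from the final answer at the closing tag. A hedged sketch (the completion string is a made-up example, and splitting on `</think>` assumes the model emits exactly one closing tag):

```python
def build_reasoning_request(question: str) -> dict:
    """Chat payload that prefills `<think>` so generation continues the trace."""
    return {
        "messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": "<think>\n"},
        ],
        "continue_final_message": True,
        "add_generation_prompt": False,
    }

def split_reasoning(completion: str) -> tuple[str, str]:
    """Split generated text into (reasoning, answer) at the closing tag."""
    reasoning, _, answer = completion.partition("</think>")
    return reasoning.strip(), answer.strip()

# Example with a fabricated completion string:
reasoning, answer = split_reasoning("step 1... step 2...</think>\nFinal answer.")
```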

## Hardware Requirements

- **Single GPU**: H200 NVL (141 GB), tight with ~18 GB of KV cache
- **Recommended**: 2x A100 80 GB or 2x H100 80 GB for comfortable KV-cache headroom
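
The KV-cache budget above can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes Mistral Large 2411's published shape (88 layers, 8 KV heads, head dim 128), figures taken from the base architecture rather than from this card, and a bf16 cache:

```python
# Hedged KV-cache estimate; architecture numbers assumed from Mistral Large 2411.
layers, kv_heads, head_dim = 88, 8, 128
bytes_per_elem = 2          # bf16 cache (matches --dtype bfloat16)
ctx = 32768                 # matches --max-model-len

# K and V each store layers * kv_heads * head_dim elements per token.
per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
total_gib = ctx * per_token / 2**30
print(f"{per_token} bytes/token, ~{total_gib:.1f} GiB for one full-length sequence")
```

A single 32k-token sequence lands around 11 GiB by this estimate; the ~18 GB figure leaves extra room for batching and prefix-cache reuse on top of that.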

## Quantization Details

- Quantized on 2x NVIDIA B200 (358 GB VRAM)
- Calibration: scales for 616 linear layers computed in under 1 second (FP8 Dynamic needs no calibration dataset)
- Total pipeline: ~11 minutes
- Tool: [llmcompressor](https://github.com/vllm-project/llm-compressor)
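
For reference, an FP8_DYNAMIC pass in llmcompressor is driven by a small recipe. The fragment below is a hedged reconstruction of what such a recipe typically looks like, following llmcompressor's recipe conventions; it is not the exact file used for this model:

```yaml
# Sketch of an llmcompressor recipe for this scheme (reconstructed, not the original).
quant_stage:
  quant_modifiers:
    QuantizationModifier:
      targets: ["Linear"]      # quantize all linear layers...
      scheme: FP8_DYNAMIC      # fp8 weights, dynamic per-token activation scales
      ignore: ["lm_head"]      # ...except the output head, as noted above
```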

## Credits

- Original model by [TheDrummer](https://huggingface.co/TheDrummer)
- FP8 quantization by [Irvollo](https://huggingface.co/Irvollo)