---
tags:
- fp8
- quantized
- mistral
- roleplay
- creative-writing
- reasoning
base_model: TheDrummer/Behemoth-R1-123B-v2
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
---

# Behemoth-R1-123B-v2 FP8 Dynamic

FP8 Dynamic quantization of [TheDrummer/Behemoth-R1-123B-v2](https://huggingface.co/TheDrummer/Behemoth-R1-123B-v2) using llmcompressor.

## Model Details

- **Base Model**: TheDrummer/Behemoth-R1-123B-v2 (Mistral Large 2411 finetune)
- **Quantization**: FP8 Dynamic (W8A8) via llmcompressor
- **Scheme**: FP8_DYNAMIC, lm_head excluded
- **Size**: ~123 GB (vs 246 GB FP16)
- **Format**: SafeTensors with compressed-tensors metadata

## Usage with vLLM

```bash
python3 -m vllm.entrypoints.openai.api_server \
    --model Irvollo/Behemoth-R1-123B-v2-FP8-Dynamic \
    --quantization compressed-tensors \
    --dtype bfloat16 \
    --max-model-len 32768 \
    --gpu-memory-utilization 0.95 \
    --enable-prefix-caching \
    --trust-remote-code
```
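Once the server is up, any OpenAI-compatible client can talk to it. A minimal sketch of building a chat completion request (the host/port and sampling parameters are illustrative assumptions, not part of this release):

```python
def build_chat_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build an OpenAI-compatible chat completion payload for the vLLM server.

    POST this as JSON to http://<host>:8000/v1/chat/completions
    (default vLLM port; adjust to your deployment).
    """
    return {
        "model": "Irvollo/Behemoth-R1-123B-v2-FP8-Dynamic",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.7,  # example value, tune for your use case
    }
```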

## Reasoning / Thinking

Supports native reasoning via `<think>` tag prefill:

```json
{
  "messages": [
    {"role": "user", "content": "Your question"},
    {"role": "assistant", "content": "<think>\n"}
  ],
  "continue_final_message": true,
  "add_generation_prompt": false
}
```
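The same request can be assembled programmatically. A small sketch that builds the prefill payload shown above (`continue_final_message` and `add_generation_prompt` are vLLM chat-API fields; the function name is just illustrative):

```python
def build_reasoning_request(question: str) -> dict:
    """Build a chat payload that prefills the assistant turn with <think>,
    so the model continues its reasoning instead of starting a new turn."""
    return {
        "model": "Irvollo/Behemoth-R1-123B-v2-FP8-Dynamic",
        "messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": "<think>\n"},
        ],
        # vLLM-specific: continue the last message rather than appending
        # a fresh generation prompt after it.
        "continue_final_message": True,
        "add_generation_prompt": False,
    }
```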

## Hardware Requirements

- **Single GPU**: H200 NVL (141 GB) fits, but is tight: only ~18 GB is left for KV cache
- **Recommended**: 2x A100 80 GB or 2x H100 80 GB for comfortable KV-cache headroom
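To see why the single-GPU setup is tight, the per-sequence KV cache can be estimated. A rough sketch, assuming Mistral Large 2411's published configuration (88 layers, 8 KV heads via GQA, head dim 128) and a bf16 KV cache; actual usage depends on batch size and vLLM's block allocation:

```python
def kv_cache_bytes(tokens: int, layers: int = 88, kv_heads: int = 8,
                   head_dim: int = 128, dtype_bytes: int = 2) -> int:
    """Per-sequence KV cache size: 2 tensors (K and V) per layer,
    each of shape [tokens, kv_heads, head_dim] in bf16 (2 bytes)."""
    return 2 * layers * kv_heads * head_dim * dtype_bytes * tokens

# One full 32K-token sequence needs about 11 GiB of KV cache,
# so ~18 GB of headroom fits little more than one max-length request.
print(kv_cache_bytes(32768) / 2**30)  # 11.0
```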

## Quantization Details

- Quantized on 2x NVIDIA B200 (358 GB VRAM)
- Quantization pass: 616 linear layers in under a second (FP8 Dynamic computes static weight scales only; no calibration dataset is needed since activation scales are computed at runtime)
- Total pipeline: ~11 minutes
- Tool: [llmcompressor](https://github.com/vllm-project/llm-compressor)
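The quantization can be reproduced along these lines. A sketch, not the exact script used for this release: import paths vary across llmcompressor versions, and the output directory name is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "TheDrummer/Behemoth-R1-123B-v2"

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# FP8_DYNAMIC: static per-channel FP8 weight scales, dynamic per-token
# activation scales. lm_head is excluded and stays in full precision.
recipe = QuantizationModifier(
    targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"]
)

oneshot(model=model, recipe=recipe,
        output_dir="Behemoth-R1-123B-v2-FP8-Dynamic")
tokenizer.save_pretrained("Behemoth-R1-123B-v2-FP8-Dynamic")
```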

## Credits

- Original model by [TheDrummer](https://huggingface.co/TheDrummer)
- FP8 quantization by [Irvollo](https://huggingface.co/Irvollo)