Update README.md

README.md CHANGED

@@ -139,7 +139,6 @@ pip install torch>=2.1.0 transformers>=4.40.0 accelerate compressed-tensors
 | **Base Model** | [microsoft/NextCoder-32B](https://huggingface.co/microsoft/NextCoder-32B) |
 | **Quantization Method** | FP8 E4M3 weight-only |
 | **Framework** | llm-compressor + compressed_tensors |
-| **Calibration Samples** | 2048 (8x industry standard) |
 | **Storage Size** | ~32GB (sharded safetensors) |
 | **VRAM (vLLM)** | ~32GB |
 | **VRAM (Transformers)** | ~64GB+ (decompressed to BF16) |
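The "FP8 E4M3 weight-only" row refers to an 8-bit float with 1 sign bit, 4 exponent bits, and 3 mantissa bits (the finite-only "FN" variant, as used by `torch.float8_e4m3fn`). As an illustrative sketch of what each stored byte encodes — this decoder is not part of the model card or the llm-compressor API:

```python
def fp8_e4m3_to_float(bits: int) -> float:
    """Decode one FP8 E4M3 (FN variant) byte into a Python float."""
    sign = -1.0 if bits & 0x80 else 1.0
    exp = (bits >> 3) & 0xF   # 4 exponent bits, bias 7
    mant = bits & 0x7         # 3 mantissa bits
    if exp == 0xF and mant == 0x7:
        return float("nan")   # E4M3FN has no infinities; 0x7F/0xFF encode NaN
    if exp == 0:
        return sign * (mant / 8) * 2.0 ** -6        # subnormal
    return sign * (1 + mant / 8) * 2.0 ** (exp - 7)  # normal

print(fp8_e4m3_to_float(0x7E))  # 448.0, the largest representable magnitude
```

The narrow range (max ±448) is why FP8 weight quantization relies on per-tensor or per-channel scales computed during calibration.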
@@ -192,12 +191,6 @@ The 32B model represents the flagship tier:
 - ✅ **Enterprise-grade completions** for mission-critical applications
 - ✅ **Best context understanding** across the model family
 
-## 🔬 Quality Assurance
-
-- **High-quality calibration:** 2048 diverse code samples (8x industry standard of 256)
-- **Validation:** Tested on code generation benchmarks
-- **Format:** Standard compressed_tensors for broad compatibility
-- **Optimization:** Fine-tuned calibration for code-specific patterns
 
 ## 📚 Original Model
 
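The retained table rows list ~32GB for the FP8 checkpoint under vLLM but ~64GB+ once Transformers decompresses the weights to BF16. Those figures follow directly from bytes per weight; a back-of-the-envelope sketch, assuming roughly 32B parameters (an approximation — the exact count is not stated here) and ignoring KV cache, activations, and runtime overhead:

```python
# Rough weight-memory estimate for the FP8 vs. BF16 rows in the table above.
# Assumption: ~32e9 parameters (approximate; not stated in the card).
params = 32e9

fp8_gb = params * 1 / 1e9   # FP8 E4M3: 1 byte per weight (vLLM serves it compressed)
bf16_gb = params * 2 / 1e9  # BF16: 2 bytes per weight (Transformers decompresses)

print(f"FP8: ~{fp8_gb:.0f} GB, BF16: ~{bf16_gb:.0f} GB")  # FP8: ~32 GB, BF16: ~64 GB
```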