brandonbeiler committed on
Commit 4cc004c · verified · 1 parent: ee795a6

Update README.md

Files changed (1): README.md (+3 −1)
README.md CHANGED
@@ -15,6 +15,8 @@ tags:
 pipeline_tag: image-text-to-text
 inference: false
 license: mit
+base_model:
+- Skywork/Skywork-R1V3-38B
 ---
 # 🔥 Skywork-R1V3-38B-FP8-Dynamic: Optimized Vision-Language Model 🔥
 This is a **FP8 dynamic quantized** version of [Skywork/Skywork-R1V3-38B](https://huggingface.co/Skywork/Skywork-R1V3-38B), optimized for high-performance inference with vLLM.
@@ -82,4 +84,4 @@ When loading the raw model via transformers then quantizing and saving, transfor
 config to be missing critical values (like tie_word_embeddings). This was patched in vLLM for InternVL models (https://github.com/vllm-project/vllm/pull/19992) but
 remains for Skywork still, and will hopefully be resolved soon.
 
-*Quantized with ❤️ using LLM Compressor for the open-source community*
+*Quantized with ❤️ using LLM Compressor for the open-source community*
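The change above adds a `base_model` field to the model card's YAML front matter, which lets the Hub link this quantized repo back to Skywork/Skywork-R1V3-38B. A minimal sketch of what the resulting metadata looks like after this commit — the front-matter text is reproduced from the diff, and the hand-rolled parser below is only an illustration (a real tool would use a YAML library):

```python
# Front-matter lines as they appear after this commit (taken from the diff).
front_matter = """\
pipeline_tag: image-text-to-text
inference: false
license: mit
base_model:
- Skywork/Skywork-R1V3-38B
"""

# Minimal illustrative parse: top-level "key: value" pairs plus one-item lists.
meta = {}
current_key = None
for line in front_matter.splitlines():
    if line.startswith("- ") and current_key:
        # List item belonging to the most recent key (e.g. base_model).
        meta.setdefault(current_key, []).append(line[2:].strip())
    else:
        key, _, value = line.partition(":")
        current_key = key.strip()
        if value.strip():
            meta[current_key] = value.strip()

print(meta["base_model"])  # the field added by this commit
```

With `base_model` present, the Hub can render the "quantized from" relationship on the model page instead of treating the repo as standalone.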