shc2012 committed
Commit 7fff424 · verified · 1 Parent(s): 73ea27a

Update README with swllm.cpp usage and social links

Files changed (1): README.md (+37 −14)
README.md CHANGED
@@ -1,25 +1,19 @@
 ---
-quantization: F16
 AIGC:
 ContentProducer: Minimax Agent AI
-ContentPropagator: shenwenAI
+ContentPropagator: Minimax Agent AI
 Label: AIGC
+ProduceID: 8c0a592841c03d01f6355d7d93440e48
+PropagateID: 8c0a592841c03d01f6355d7d93440e48
+ReservedCode1: 304502202d23fc5d081eaf9f244af5a16207f575cdc17daca9db7c4d9818c18ef863d841022100a9d0e75d5e71e892d5196089e9f8febe95380672f45087ecb04ea4af9e2f5a3c
+ReservedCode2: 3044022039292b27362d41669572a7d520642acae41a53bd307198578ff6e5d63c06901902206cbb67618639b1427678250cfbb413317715c5828c43ac8f6fc45163a229ffe9
 ---

 # shenwen-coderV2-F16-GGUF

-<p align="center">
-<img src="https://huggingface.co/front/assets/huggingface_logo.svg" alt="Hugging Face" width="50" height="50">
-</p>
+![Hugging Face](https://huggingface.co/front/assets/huggingface_logo.svg)

-<div align="center">
-
-[![GGUF Model](https://img.shields.io/badge/Model-shenwen--coderV2--GGUF-blue.svg)](https://huggingface.co/shenwenAI/shenwen-coderV2-GGUF)
-[![Quantization](https://img.shields.io/badge/Quantization-F16-blue.svg)]()
-[![Format](https://img.shields.io/badge/Format-GGUF-green.svg)]()
-[![License](https://img.shields.io/badge/License-Apache%202.0-green.svg)]()
-
-</div>
+[![GGUF Model](https://img.shields.io/badge/Model-shenwen--coderV2--GGUF-blue.svg)](https://huggingface.co/shenwenAI/shenwen-coderV2-GGUF)[![Quantization](https://img.shields.io/badge/Quantization-F16-blue.svg)](https://huggingface.co/shenwenAI/shenwen-coderV2-GGUF)[![Format](https://img.shields.io/badge/Format-GGUF-green.svg)](https://huggingface.co/shenwenAI/shenwen-coderV2-GGUF)[![License](https://img.shields.io/badge/License-Apache%202.0-green.svg)](https://huggingface.co/shenwenAI/shenwen-coderV2-GGUF)

 ## Model Overview

@@ -28,7 +22,7 @@ AIGC:
 ## Quantization Details

 | Attribute | Value |
-|-----------|-------|
+| --- | --- |
 | **Format** | GGUF |
 | **Quantization** | F16 (Float16) |
 | **File Size** | ~949MB |
@@ -56,6 +50,25 @@ wget https://huggingface.co/shenwenAI/shenwen-coderV2-GGUF/resolve/main/f16/shen
 ./build/bin/llama-cli -m shenwen-coderV2-F16.gguf -n 512 -p "Write a Python function to calculate factorial:"
 ```

+## Usage with swllm.cpp (Optimized Code Generation)
+
+For optimized code generation, we recommend using our custom **swllm.cpp** tool:
+
+```bash
+# Clone swllm.cpp
+git clone https://github.com/shenwenAI/swllm.cpp
+cd swllm.cpp
+
+# Build
+mkdir build && cd build
+cmake .. && make -j
+
+# Run with this model
+./build/bin/swllm-cli -m shenwen-coderV2-F16.gguf -n 512 -p "Write a Python function to calculate factorial:"
+```
+
+**swllm.cpp** provides optimized code generation capabilities for enhanced performance and quality.
+
 ## Model Source

 This GGUF model is converted from [shenwenAI/shenwen-coderV2-Instruct](https://huggingface.co/shenwenAI/shenwen-coderV2-Instruct), which is based on Qwen2.5-Coder-0.5B-Instruct.
@@ -69,3 +82,13 @@ Apache 2.0 - See [LICENSE](https://huggingface.co/shenwenAI/shenwen-coderV2-Inst
 - [Qwen Team](https://github.com/QwenLM/Qwen) for Qwen2.5-Coder
 - [llama.cpp](https://github.com/ggerganov/llama.cpp) for GGUF format
 - [shenwenAI](https://huggingface.co/shenwenAI) for model training
+
+## Connect With Us
+
+- **GitHub**: [https://github.com/shenwenAI](https://github.com/shenwenAI)
+- **HuggingFace**: [https://huggingface.co/shenwenAI](https://huggingface.co/shenwenAI)
+- **Twitter/X**: [https://x.com/shenwenai](https://x.com/shenwenai)
+
+---
+
+*If this model is helpful, please consider giving us a star on GitHub and following us on social media!*
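
The README's quantization table quotes ~949MB for the F16 file. As a rough sanity check, that figure is consistent with the base model's size: the parameter count below (~0.49B, the approximate size of Qwen2.5-Coder-0.5B-Instruct) is an assumption for illustration, since F16 stores each weight in 2 bytes plus a small amount of GGUF metadata:

```python
# Sanity check on the ~949MB F16 file size quoted in the README.
# Assumption: the base model has roughly 0.49B parameters (the
# approximate parameter count of Qwen2.5-Coder-0.5B-Instruct).

def f16_size_mib(n_params: float, bytes_per_weight: int = 2) -> float:
    """Approximate GGUF file size in MiB: params x 2 bytes for F16."""
    return n_params * bytes_per_weight / 2**20

print(f"~{f16_size_mib(0.49e9):.0f} MiB")  # in the same ballpark as ~949MB
```

The small remainder between this estimate and the actual file is GGUF header and tensor metadata, so the quoted size is plausible for an F16 export of a ~0.5B-parameter model.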