Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

MoTCoder-15B-v1.0 - GGUF
- Model creator: https://huggingface.co/JingyaoLi/
- Original model: https://huggingface.co/JingyaoLi/MoTCoder-15B-v1.0/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [MoTCoder-15B-v1.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/JingyaoLi_-_MoTCoder-15B-v1.0-gguf/blob/main/MoTCoder-15B-v1.0.Q2_K.gguf) | Q2_K | 5.78GB |
| [MoTCoder-15B-v1.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/JingyaoLi_-_MoTCoder-15B-v1.0-gguf/blob/main/MoTCoder-15B-v1.0.Q3_K_S.gguf) | Q3_K_S | 6.5GB |
| [MoTCoder-15B-v1.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/JingyaoLi_-_MoTCoder-15B-v1.0-gguf/blob/main/MoTCoder-15B-v1.0.Q3_K.gguf) | Q3_K | 7.66GB |
| [MoTCoder-15B-v1.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/JingyaoLi_-_MoTCoder-15B-v1.0-gguf/blob/main/MoTCoder-15B-v1.0.Q3_K_M.gguf) | Q3_K_M | 6.85GB |
| [MoTCoder-15B-v1.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/JingyaoLi_-_MoTCoder-15B-v1.0-gguf/blob/main/MoTCoder-15B-v1.0.Q3_K_L.gguf) | Q3_K_L | 4.19GB |
| [MoTCoder-15B-v1.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/JingyaoLi_-_MoTCoder-15B-v1.0-gguf/blob/main/MoTCoder-15B-v1.0.IQ4_XS.gguf) | IQ4_XS | 2.65GB |
| [MoTCoder-15B-v1.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/JingyaoLi_-_MoTCoder-15B-v1.0-gguf/blob/main/MoTCoder-15B-v1.0.Q4_0.gguf) | Q4_0 | 8.37GB |
| [MoTCoder-15B-v1.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/JingyaoLi_-_MoTCoder-15B-v1.0-gguf/blob/main/MoTCoder-15B-v1.0.IQ4_NL.gguf) | IQ4_NL | 8.46GB |
| [MoTCoder-15B-v1.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/JingyaoLi_-_MoTCoder-15B-v1.0-gguf/blob/main/MoTCoder-15B-v1.0.Q4_K_S.gguf) | Q4_K_S | 8.46GB |
| [MoTCoder-15B-v1.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/JingyaoLi_-_MoTCoder-15B-v1.0-gguf/blob/main/MoTCoder-15B-v1.0.Q4_K.gguf) | Q4_K | 9.28GB |
| [MoTCoder-15B-v1.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/JingyaoLi_-_MoTCoder-15B-v1.0-gguf/blob/main/MoTCoder-15B-v1.0.Q4_K_M.gguf) | Q4_K_M | 9.28GB |
| [MoTCoder-15B-v1.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/JingyaoLi_-_MoTCoder-15B-v1.0-gguf/blob/main/MoTCoder-15B-v1.0.Q4_1.gguf) | Q4_1 | 9.26GB |
| [MoTCoder-15B-v1.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/JingyaoLi_-_MoTCoder-15B-v1.0-gguf/blob/main/MoTCoder-15B-v1.0.Q5_0.gguf) | Q5_0 | 10.14GB |
| [MoTCoder-15B-v1.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/JingyaoLi_-_MoTCoder-15B-v1.0-gguf/blob/main/MoTCoder-15B-v1.0.Q5_K_S.gguf) | Q5_K_S | 10.14GB |
| [MoTCoder-15B-v1.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/JingyaoLi_-_MoTCoder-15B-v1.0-gguf/blob/main/MoTCoder-15B-v1.0.Q5_K.gguf) | Q5_K | 10.71GB |
| [MoTCoder-15B-v1.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/JingyaoLi_-_MoTCoder-15B-v1.0-gguf/blob/main/MoTCoder-15B-v1.0.Q5_K_M.gguf) | Q5_K_M | 10.71GB |
| [MoTCoder-15B-v1.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/JingyaoLi_-_MoTCoder-15B-v1.0-gguf/blob/main/MoTCoder-15B-v1.0.Q5_1.gguf) | Q5_1 | 11.02GB |
| [MoTCoder-15B-v1.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/JingyaoLi_-_MoTCoder-15B-v1.0-gguf/blob/main/MoTCoder-15B-v1.0.Q6_K.gguf) | Q6_K | 12.01GB |
| [MoTCoder-15B-v1.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/JingyaoLi_-_MoTCoder-15B-v1.0-gguf/blob/main/MoTCoder-15B-v1.0.Q8_0.gguf) | Q8_0 | 15.5GB |
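
A quick way to try one of these files is to fetch it with `huggingface_hub` and load it with `llama-cpp-python`. This is a minimal sketch, not an official usage guide: it assumes `pip install huggingface_hub llama-cpp-python`, and the prompt is just a placeholder; only the repo and file names come from the table above.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M (~9.28GB) is a common quality/size trade-off; pick a smaller
# quant from the table (e.g. Q2_K) if you are short on RAM.
model_path = hf_hub_download(
    repo_id="RichardErkhov/JingyaoLi_-_MoTCoder-15B-v1.0-gguf",
    filename="MoTCoder-15B-v1.0.Q4_K_M.gguf",
)

# Plain CPU load; pass n_gpu_layers=-1 to offload layers to a GPU build.
llm = Llama(model_path=model_path, n_ctx=4096)

out = llm(
    "Write a Python function that checks whether a string is a palindrome.",
    max_tokens=256,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```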

Original model description:
---
license: bigscience-openrail-m
metrics:
- code_eval
library_name: transformers
tags:
- code
---

<p style="font-size:28px;" align="center">
🏠 MoTCoder
</p>

<p align="center">
• 🤗 <a href="https://huggingface.co/datasets/JingyaoLi/MoTCode-Data" target="_blank">Data </a> • 🤗 <a href="https://huggingface.co/JingyaoLi/MoTCoder-15B-v1.0" target="_blank">Model </a> • 🐱 <a href="https://github.com/dvlab-research/MoTCoder" target="_blank">Code</a> • 📃 <a href="https://arxiv.org/abs/2312.15960" target="_blank">Paper</a> <br>
</p>

Large Language Models (LLMs) have showcased impressive capabilities on straightforward programming tasks, but their performance tends to falter on more challenging programming problems. We observe that conventional models often generate solutions as monolithic code blocks, which restricts their effectiveness on intricate questions. To overcome this limitation, we present Modular-of-Thought Coder (MoTCoder), a framework for MoT instruction tuning designed to promote the decomposition of tasks into logical sub-tasks and sub-modules.
Our investigations reveal that, through the cultivation and utilization of sub-modules, MoTCoder significantly improves both the modularity and correctness of the generated solutions, leading to substantial relative *pass@1* improvements of 12.9% on APPS and 9.43% on CodeContests.
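
To make the contrast concrete, here is a purely illustrative sketch (a toy problem, not actual MoTCoder output) of the modular style that MoT instruction tuning encourages: each sub-task becomes a named sub-module that a main function composes, rather than one monolithic block.

```python
# Illustrative only: the modular solution style MoT tuning targets,
# shown on a toy task ("sum the primes below n").

def is_prime(x: int) -> bool:
    """Sub-module 1: primality test."""
    if x < 2:
        return False
    return all(x % d for d in range(2, int(x ** 0.5) + 1))

def primes_below(n: int) -> list[int]:
    """Sub-module 2: enumerate primes below n."""
    return [x for x in range(2, n) if is_prime(x)]

def solve(n: int) -> int:
    """Main: compose the sub-modules into the final answer."""
    return sum(primes_below(n))

print(solve(10))  # 2 + 3 + 5 + 7 = 17
```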

![MoTCoder Framework](./framework.png)

## Performance

![Performance on APPS](./impression.png)

**Performance on APPS**
| Model | Size | Pass@k | Introductory | Interview | Competition | All |
|------------|-------|-------|--------------|-----------|-------------|-------|
| **CodeT5** | 770M | 1 | 6.60 | 1.03 | 0.30 | 2.00 |
| **GPT-Neo** | 2.7B | 1 | 14.68 | 9.85 | 6.54 | 10.15 |
| | | 5 | 19.89 | 13.19 | 9.90 | 13.87 |
| **GPT-2** | 0.1B | 1 | 5.64 | 6.93 | 4.37 | 6.16 |
| | | 5 | 13.81 | 10.97 | 7.03 | 10.75 |
| | 1.5B | 1 | 7.40 | 9.11 | 5.05 | 7.96 |
| | | 5 | 16.86 | 13.84 | 9.01 | 13.48 |
| **GPT-3** | 175B | 1 | 0.57 | 0.65 | 0.21 | 0.55 |
| **StarCoder** | 15B | 1 | 7.25 | 6.89 | 4.08 | 6.40 |
| **WizardCoder**| 15B | 1 | 26.04 | 4.21 | 0.81 | 7.90 |
| **MoTCoder** | 15B | 1 | **33.80** | **19.70** | **11.09** | **20.80** |
| **text-davinci-002** | - | 1 | - | - | - | 7.48 |
| **code-davinci-002** | - | 1 | 29.30 | 6.40 | 2.50 | 10.20 |
| **GPT3.5** | - | 1 | 48.00 | 19.42 | 5.42 | 22.33 |
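
The pass@k figures in these tables are presumably computed with the standard unbiased estimator from the HumanEval paper (Chen et al., 2021); this card does not state the evaluation code, so treat the sketch below as a reference for the metric, not as the authors' implementation.

```python
# Unbiased pass@k estimator for one problem:
# pass@k = 1 - C(n - c, k) / C(n, k), where n solutions are sampled
# per problem and c of them pass the tests.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k for a single problem."""
    if n - c < k:
        return 1.0  # every size-k sample contains a correct solution
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 20 samples per problem, 3 of them correct
print(pass_at_k(n=20, c=3, k=5))  # ~0.60
```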

**Performance on CodeContests**
| Model | Size | Revisions | Val pass@1 | Val pass@5 | Test pass@1 | Test pass@5 | Average pass@1 | Average pass@5 |
|-------|------|----------|------------|------------|-------------|-------------|----------------|----------------|
| **code-davinci-002** | - | - | - | - | 1.00 | - | 1.00 | - |
| **code-davinci-002 + CodeT** | - | 5 | - | - | 3.20 | - | 3.20 | - |
| **WizardCoder** | 15B | - | 1.11 | 3.18 | 1.98 | 3.27 | 1.55 | 3.23 |
| **WizardCoder + CodeChain** | 15B | 5 | 2.35 | 3.29 | 2.48 | 3.30 | 2.42 | 3.30 |
| **MoTCoder** | 15B | - | **2.39** | **7.69** | **6.18** | **12.73** | **4.29** | **10.21** |
| **GPT3.5** | - | - | 6.81 | 16.23 | 5.82 | 11.16 | 6.32 | 13.70 |