Commit 73ce8df by RichardErkhov (verified) · Parent: 915e49c

uploaded readme

Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Magicoder-S-DS-6.7B - bnb 8bits
- Model creator: https://huggingface.co/ise-uiuc/
- Original model: https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B/
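Since this repository hosts the bitsandbytes 8-bit quantization, the checkpoint can presumably be loaded through `transformers` with `BitsAndBytesConfig`. A minimal sketch, with two loud assumptions: `REPO_ID` is a hypothetical placeholder (substitute this repository's actual ID), and actually loading requires `bitsandbytes` plus a CUDA GPU:

```python
# Sketch: load the 8-bit (bitsandbytes) quantization of Magicoder-S-DS-6.7B.
# REPO_ID is a hypothetical placeholder -- substitute this repository's actual ID.
REPO_ID = "<this-repository-id>"

def load_8bit_model(repo_id: str = REPO_ID):
    """Load the quantized checkpoint; needs transformers, bitsandbytes, and a CUDA GPU."""
    # Imports are deferred so merely defining this helper needs no extra packages.
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    quant_config = BitsAndBytesConfig(load_in_8bit=True)  # 8-bit weights via bitsandbytes
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id,
        quantization_config=quant_config,
        device_map="auto",
    )
    return tokenizer, model
```

Calling `load_8bit_model()` downloads the weights, so run it only in an environment with the required packages and hardware.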
Original model description:
---
license: other
library_name: transformers
datasets:
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
license_name: deepseek
pipeline_tag: text-generation
---
# 🎩 Magicoder: Source Code Is All You Need

> Refer to our GitHub repo [ise-uiuc/magicoder](https://github.com/ise-uiuc/magicoder/) for an up-to-date introduction to the Magicoder family!

* 🎩**Magicoder** is a model family empowered by 🪄**OSS-Instruct**, a novel approach to enlightening LLMs with open-source code snippets for generating *low-bias* and *high-quality* instruction data for code.
* 🪄**OSS-Instruct** mitigates the *inherent bias* of LLM-synthesized instruction data by grounding generation in *a wealth of open-source references*, producing more diverse, realistic, and controllable data.

![Overview of OSS-Instruct](assets/overview.svg)
![Overview of Result](assets/result.png)

## Model Details

### Model Description

* **Developed by:**
[Yuxiang Wei](https://yuxiang.cs.illinois.edu),
[Zhe Wang](https://github.com/zhewang2001),
[Jiawei Liu](https://jiawei-site.github.io),
[Yifeng Ding](https://yifeng-ding.com),
[Lingming Zhang](https://lingming.cs.illinois.edu)
* **License:** [DeepSeek](https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL)
* **Finetuned from model:** [deepseek-coder-6.7b-base](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base)

### Model Sources

* **Repository:** <https://github.com/ise-uiuc/magicoder>
* **Paper:** <https://arxiv.org/abs/2312.02120>
* **Demo (powered by [Gradio](https://www.gradio.app)):**
<https://github.com/ise-uiuc/magicoder/tree/main/demo>

### Training Data

* [Magicoder-OSS-Instruct-75K](https://huggingface.co/datasets/ise-uiuc/Magicoder_oss_instruct_75k): generated through **OSS-Instruct** using `gpt-3.5-turbo-1106` and used to train both the Magicoder and Magicoder-S series.
* [Magicoder-Evol-Instruct-110K](https://huggingface.co/datasets/ise-uiuc/Magicoder_evol_instruct_110k): decontaminated and redistributed from [theblackcat102/evol-codealpaca-v1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1), used to further finetune the Magicoder series and obtain the Magicoder-S models.

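Both training sets can be pulled with the Hugging Face `datasets` library. A minimal sketch using the dataset IDs from the front matter above; the `fetch_training_sets` helper name and the default `train` split are assumptions, not part of the official repo:

```python
# Sketch: fetch the Magicoder training data with the Hugging Face `datasets` library.
# Dataset IDs come from this card's front matter; the "train" split name is an assumption.
TRAINING_SETS = [
    "ise-uiuc/Magicoder-OSS-Instruct-75K",
    "ise-uiuc/Magicoder-Evol-Instruct-110K",
]

def fetch_training_sets(names=TRAINING_SETS):
    """Download each dataset; requires the `datasets` package and network access."""
    from datasets import load_dataset  # deferred so defining the helper needs no extras
    return {name: load_dataset(name, split="train") for name in names}
```

Calling `fetch_training_sets()` downloads roughly 185K instruction pairs in total, so expect a sizeable initial download.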
## Uses

### Direct Use

Magicoders are designed for, and best suited to, **coding tasks**.

### Out-of-Scope Use

Magicoders may not work well on non-coding tasks.

## Bias, Risks, and Limitations

Magicoders may sometimes make errors, produce misleading content, or struggle with tasks unrelated to coding.

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.

## How to Get Started with the Model

Use the code below to get started with the model. Make sure you have installed the [transformers](https://huggingface.co/docs/transformers/index) library.

```python
from transformers import pipeline
import torch

MAGICODER_PROMPT = """You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.

@@ Instruction
{instruction}

@@ Response
"""

instruction = "<Your code instruction here>"  # replace with an actual instruction string

prompt = MAGICODER_PROMPT.format(instruction=instruction)
generator = pipeline(
    model="ise-uiuc/Magicoder-S-DS-6.7B",
    task="text-generation",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
result = generator(prompt, max_length=1024, num_return_sequences=1, temperature=0.0)
print(result[0]["generated_text"])
```
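The `@@ Instruction` / `@@ Response` template above is plain string formatting, so the prompt can be built, and the model's completion split back out, without loading the model. A small self-contained sketch; the `extract_response` helper is illustrative, not part of the official repo:

```python
# Build the Magicoder prompt and pull the model's answer out of the echoed text.
MAGICODER_PROMPT = """You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.

@@ Instruction
{instruction}

@@ Response
"""

def build_prompt(instruction: str) -> str:
    """Fill the template with a user instruction."""
    return MAGICODER_PROMPT.format(instruction=instruction)

def extract_response(generated_text: str) -> str:
    """text-generation pipelines echo the prompt; keep only the part after '@@ Response'."""
    _, _, tail = generated_text.partition("@@ Response")
    return tail.strip()

prompt = build_prompt("Write a function that reverses a string.")
# Simulate a pipeline output, which starts with the prompt itself:
fake_output = prompt + "\ndef reverse(s):\n    return s[::-1]"
print(extract_response(fake_output))  # prints just the code after '@@ Response'
```

Pairing a helper like this with the pipeline call above keeps prompt-formatting details out of the generation code.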

## Technical Details

Refer to our GitHub repo: [ise-uiuc/magicoder](https://github.com/ise-uiuc/magicoder/).

## Citation

```bibtex
@misc{magicoder,
  title={Magicoder: Source Code Is All You Need},
  author={Yuxiang Wei and Zhe Wang and Jiawei Liu and Yifeng Ding and Lingming Zhang},
  year={2023},
  eprint={2312.02120},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

## Acknowledgements

* [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder): Evol-Instruct
* [DeepSeek-Coder](https://github.com/deepseek-ai/DeepSeek-Coder): Base model for Magicoder-DS
* [CodeLlama](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/): Base model for Magicoder-CL
* [StarCoder](https://arxiv.org/abs/2305.06161): Data decontamination

## Important Note

Magicoder models are trained on synthetic data generated by OpenAI models. Please pay attention to OpenAI's [terms of use](https://openai.com/policies/terms-of-use) when using the models and the datasets. Magicoders will not compete with OpenAI's commercial products.