| --- |
| license: apache-2.0 |
| datasets: |
| - Leon-Leee/wizardlm_evol_instruct_v2_196K_backuped |
| - m-a-p/Code-Feedback |
| - openbmb/UltraInteract_sft |
| - ise-uiuc/Magicoder-Evol-Instruct-110K |
| - flytech/python-codes-25k |
| language: |
| - en |
| metrics: |
| - code_eval |
| library_name: transformers |
| tags: |
| - code |
| --- |
| ## AIGCodeGeek-DS-6.7B |
|
|
| ### Introduction |
AIGCodeGeek-DS-6.7B is the first release in our Code-LLM family, with competitive performance on both public and private benchmarks.
|
|
| ### Model Details |
| #### Model Description |
| - Developed by: [Leon Li](https://huggingface.co/Leon-Leee) |
| - License: [DeepSeek](https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL) |
- Fine-tuned from [deepseek-ai/deepseek-coder-6.7b-base](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) with full-parameter tuning
|
|
| ### Training data |
A mixture of samples from high-quality open-source datasets (see *Acknowledgements*) and our private datasets.
We performed contamination detection on the training data, following the practice of Magicoder and BigCode.
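For illustration, the core of such a check can be as simple as rejecting any training sample that shares a long n-gram with a benchmark problem (a minimal sketch; the 10-gram window and function names are our assumptions, not the exact pipeline):

```python
# Sketch of n-gram overlap decontamination; `train_samples` and
# `benchmark_samples` are hypothetical lists of strings.
def ngrams(text: str, n: int = 10):
    tokens = text.split()
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def decontaminate(train_samples, benchmark_samples, n: int = 10):
    # Collect every n-gram that appears in any benchmark problem.
    benchmark_ngrams = set()
    for sample in benchmark_samples:
        benchmark_ngrams |= ngrams(sample, n)
    # Keep only training samples that share no n-gram with the benchmarks.
    return [s for s in train_samples if not (ngrams(s, n) & benchmark_ngrams)]
```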
|
|
| ### Evaluation |
Results to be added.
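In the meantime, pass@k scores can be computed with the `code_eval` metric from the [evaluate](https://huggingface.co/docs/evaluate) library; the toy problem and completion below are illustrative only, not part of our benchmark:

```python
import os
from evaluate import load

# code_eval executes model-generated code, so it requires an explicit opt-in.
os.environ["HF_ALLOW_CODE_EVAL"] = "1"

code_eval = load("code_eval")
test_cases = ["assert add(2, 3) == 5"]               # one unit test per problem
candidates = [["def add(a, b):\n    return a + b"]]  # model completions per problem
pass_at_k, results = code_eval.compute(references=test_cases, predictions=candidates, k=[1])
print(pass_at_k)  # e.g. {'pass@1': 1.0}
```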
|
|
| ### Requirements |
The model should work with the same requirements as DeepSeek-Coder-6.7B, i.e. the following packages:
|
|
```
torch>=2.0
| tokenizers>=0.14.0 |
| transformers>=4.35.0 |
| accelerate |
| sympy>=1.12 |
| pebble |
| timeout-decorator |
| attrdict |
| ``` |
|
|
|
|
| ### QuickStart |
|
|
| ```python |
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("aigcode/AIGCodeGeek-DS-6.7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("aigcode/AIGCodeGeek-DS-6.7B", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
messages = [
    {"role": "user", "content": "Write a quick sort algorithm in Python."}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
# tokenizer.eos_token_id is the id of the <|EOT|> token
# greedy decoding; set do_sample=True to use top_k/top_p sampling instead
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
| ``` |
|
|
### Limitations
To be added.
|
|
|
|
| ### Acknowledgements |
We have gained a great deal of knowledge and resources from the open-source community:
| - [DeepSeekCoder](https://huggingface.co/deepseek-ai): impressive model series and insightful tech reports |
| - [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder): Evol Instruct and public datasets |
- We use a backup copy ([Leon-Leee/wizardlm_evol_instruct_v2_196K_backuped](https://huggingface.co/datasets/Leon-Leee/wizardlm_evol_instruct_v2_196K_backuped)) since the original dataset has been deleted.
- [Magicoder](https://github.com/ise-uiuc/magicoder/): OSS-Instruct, and [Magicoder-Evol-Instruct-110K](https://huggingface.co/datasets/ise-uiuc/Magicoder-Evol-Instruct-110K), derived from [theblackcat102/evol-codealpaca-v1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1)
| - [Eurus](https://github.com/OpenBMB/Eurus): creative datasets for reasoning, [openbmb/UltraInteract_sft](https://huggingface.co/datasets/openbmb/UltraInteract_sft) |
- [OpenCodeInterpreter](https://opencodeinterpreter.github.io/): a well-designed system and the [m-a-p/Code-Feedback](https://huggingface.co/datasets/m-a-p/Code-Feedback) dataset
| - [flytech/python-codes-25k](https://huggingface.co/datasets/flytech/python-codes-25k): diversity |
- [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory): an easy-to-use framework for fine-tuning base models