Text Generation
Transformers
Safetensors
English
qwen2
conversational
text-generation-inference
michaelzhiluo committed · Commit f52f95c · verified · 1 Parent(s): fa35340

Update README.md

Files changed (1)
  1. README.md +11 -2
README.md CHANGED
@@ -23,7 +23,7 @@ pipeline_tag: text-generation
  <a href="https://github.com/agentica-project/rllm" style="margin: 2px;">
  <img alt="Code" src="https://img.shields.io/badge/RLLM-000000?style=for-the-badge&logo=github&logoColor=000&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
- <a href="https://www.google.com" target="_blank" style="margin: 2px;">
+ <a href="https://pretty-radio-b75.notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-mini-Level-1cf81902c14680b3bee5eb349a512a51" target="_blank" style="margin: 2px;">
  <img alt="Blog" src="https://img.shields.io/badge/Notion-%23000000.svg?style=for-the-badge&logo=notion&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://x.com/Agentica_" style="margin: 2px;">
@@ -68,6 +68,7 @@ DeepCoder generalizes better to long contexts than the base distilled model, due
  | --- | --- | --- | --- |
  | **DeepCoder-14B-Preview** | 45.6 | 57.9 | 60.6 |
  | **DeepSeek-R1-Distill-Qwen-14B** | 50.2 | 53.0 | 53.0 |
+
  A more detailed description of the training recipe can be found in our [blog post](https://www.google.com).

  ## Evaluation
@@ -98,6 +99,14 @@ This permissive license ensures that researchers, developers, and enthusiasts wo
  - Our model is trained on top of [`DeepSeek-R1-Distill-Qwen-1.5B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B).
  - Our work is done as part of [Berkeley Sky Computing Lab](https://skycomputing.berkeley.edu/) and [Berkeley AI Research](https://bair.berkeley.edu/).

- ## Citation
+ ## Citation
+
  ```bibtex
+ @misc{deepcoder2025,
+ title={DeepCoder: A Fully Open-Source 14B Coder at O3-mini Level},
+ author={Michael Luo, Sijun Tan, Roy Huang, Xiaoxiang Shi, Rachel Xin, Colin Cai, Ameen Patel, Alpay Ariyak, Qingyang Wu, Ce Zhang, Li Erran Li, Raluca Ada Popa, Ion Stoica, Tianjun Zhang},
+ howpublished={\url{https://pretty-radio-b75.notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-mini-Level-1cf81902c14680b3bee5eb349a512a51}},
+ note={Notion Blog},
+ year={2025}
+ }
  ```
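For context, the card's tags (Transformers, text-generation) mean the checkpoint loads through the standard `transformers` text-generation pipeline. A minimal usage sketch, assuming the repo id `agentica-org/DeepCoder-14B-Preview` (the id does not appear in this diff) and a recent `transformers` release whose pipeline accepts chat-format messages:

```python
# Minimal usage sketch for the model this card describes. Assumptions (not
# stated in the diff): repo id "agentica-org/DeepCoder-14B-Preview" and a
# transformers version whose text-generation pipeline accepts chat messages.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",  # matches the card's pipeline_tag
    model="agentica-org/DeepCoder-14B-Preview",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires accelerate; shards across available GPUs
)

messages = [
    {"role": "user",
     "content": "Write a Python function that returns the nth Fibonacci number."},
]
result = generator(messages, max_new_tokens=1024)

# The pipeline returns the running chat; the last message is the model's reply.
print(result[0]["generated_text"][-1]["content"])
```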
 