Brendan Hogan committed on
Commit 985ade5 · 1 Parent(s): ca05a2e

added details
README.md CHANGED
@@ -1,25 +1,42 @@
1
  ---
2
- title: QqWEN Overview
3
  emoji: 🧠
4
  colorFrom: yellow
5
  colorTo: indigo
6
  sdk: static
7
  pinned: false
8
  license: mit
9
- short_description: Overview of the qqWen project
10
  ---
11
 
12
- # Nerfies
13
 
14
- This is the repository that contains source code for the [Nerfies website](https://nerfies.github.io).
15
 
16
- If you find Nerfies useful for your work please cite:
17
  ```
18
- @article{park2021nerfies
19
- author = {Park, Keunhong and Sinha, Utkarsh and Barron, Jonathan T. and Bouaziz, Sofien and Goldman, Dan B and Seitz, Steven M. and Martin-Brualla, Ricardo},
20
- title = {Nerfies: Deformable Neural Radiance Fields},
21
- journal = {ICCV},
22
- year = {2021},
23
  }
24
  ```
25
 
 
1
  ---
2
+ title: Full-Stack Fine-Tuning for Q
3
  emoji: 🧠
4
  colorFrom: yellow
5
  colorTo: indigo
6
  sdk: static
7
  pinned: false
8
  license: mit
9
+ short_description: Full-Stack Fine-Tuning for the Q Programming Language
10
  ---
11
 
12
+ # Full-Stack Fine-Tuning for the Q Programming Language
13
 
14
+ This is the project page for "Full-Stack Fine-Tuning for the Q Programming Language," a comprehensive approach to adapting large language models to specialized domains.
15
 
16
+ ## Project Overview
17
+
18
+ We present an end-to-end methodology for adapting LLMs to the Q programming language, a specialized array language widely used in quantitative finance. Our approach includes:
19
+
20
+ - **Dataset Construction**: LeetCode-style evaluation benchmark for Q
21
+ - **Domain-Adaptive Pretraining**: Training on curated Q code repositories
22
+ - **Supervised Fine-Tuning**: Multi-task training on Q programming challenges
23
+ - **Reinforcement Learning**: Programmatic reward optimization
24
+
25
+ Our best model achieves 59% pass@1 accuracy, surpassing Claude Opus-4 by 29.5%.
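For reference, pass@1 is the k=1 case of the standard unbiased pass@k estimator; a minimal sketch of the usual formula (the paper's exact evaluation harness is not shown here):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples, drawn without replacement from n generations of which c
    are correct, passes the tests."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)
```

With k=1 this reduces to the fraction of correct generations, so pass@1 of 59% means 59% of single samples pass all tests.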
26
+
27
+ ## Links
28
+
29
+ - **Paper**: [Coming Soon - ArXiv Link]
30
+ - **Code**: [Coming Soon - GitHub Repository]
31
+ - **Models**: [Coming Soon - HuggingFace Collection]
32
+
33
+ If you find this work useful, please cite:
34
  ```
35
+ @article{hogan2025fullstack,
36
+ author = {Hogan, Brendan R. and Brown, Will and Boyarsky, Adel and Schneider, Anderson and Nevmyvaka, Yuriy},
37
+ title = {Full-Stack Fine-Tuning for the Q Programming Language},
38
+ journal = {arXiv preprint},
39
+ year = {2025},
40
  }
41
  ```
42
 
index.html CHANGED
@@ -3,10 +3,10 @@
3
  <head>
4
  <meta charset="utf-8">
5
  <meta name="description"
6
- content="Deformable Neural Radiance Fields creates free-viewpoint portraits (nerfies) from casually captured videos.">
7
- <meta name="keywords" content="Nerfies, D-NeRF, NeRF">
8
  <meta name="viewport" content="width=device-width, initial-scale=1">
9
- <title>Nerfies: Deformable Neural Radiance Fields</title>
10
 
11
  <link href="https://fonts.googleapis.com/css?family=Google+Sans|Noto+Sans|Castoro"
12
  rel="stylesheet">
@@ -33,48 +33,32 @@
33
  <div class="container is-max-desktop">
34
  <div class="columns is-centered">
35
  <div class="column has-text-centered">
36
- <h1 class="title is-1 publication-title">Nerfies: Deformable Neural Radiance Fields</h1>
37
  <div class="is-size-5 publication-authors">
38
  <span class="author-block">
39
- <a href="https://keunhong.com" target="_blank">Keunhong Park</a><sup>1</sup>,</span>
40
  <span class="author-block">
41
- <a href="https://utkarshsinha.com" target="_blank">Utkarsh Sinha</a><sup>2</sup>,</span>
42
  <span class="author-block">
43
- <a href="https://jonbarron.info" target="_blank">Jonathan T. Barron</a><sup>2</sup>,
44
  </span>
45
  <span class="author-block">
46
- <a href="http://sofienbouaziz.com" target="_blank">Sofien Bouaziz</a><sup>2</sup>,
47
  </span>
48
  <span class="author-block">
49
- <a href="https://www.danbgoldman.com" target="_blank">Dan B Goldman</a><sup>2</sup>,
50
- </span>
51
- <span class="author-block">
52
- <a href="https://homes.cs.washington.edu/~seitz/" target="_blank">Steven M. Seitz</a><sup>1,2</sup>,
53
- </span>
54
- <span class="author-block">
55
- <a href="http://www.ricardomartinbrualla.com" target="_blank">Ricardo Martin-Brualla</a><sup>2</sup>
56
  </span>
57
  </div>
58
 
59
  <div class="is-size-5 publication-authors">
60
- <span class="author-block"><sup>1</sup>University of Washington,</span>
61
- <span class="author-block"><sup>2</sup>Google Research</span>
62
  </div>
63
 
64
  <div class="column has-text-centered">
65
  <div class="publication-links">
66
- <!-- PDF Link. -->
67
  <span class="link-block">
68
- <a href="https://arxiv.org/pdf/2011.12948" target="_blank"
69
- class="external-link button is-normal is-rounded is-dark">
70
- <span class="icon">
71
- <i class="fas fa-file-pdf"></i>
72
- </span>
73
- <span>Paper</span>
74
- </a>
75
- </span>
76
- <span class="link-block">
77
- <a href="https://arxiv.org/abs/2011.12948" target="_blank"
78
  class="external-link button is-normal is-rounded is-dark">
79
  <span class="icon">
80
  <i class="ai ai-arxiv"></i>
@@ -82,19 +66,9 @@
82
  <span>arXiv</span>
83
  </a>
84
  </span>
85
- <!-- Video Link. -->
86
- <span class="link-block">
87
- <a href="https://www.youtube.com/watch?v=MrKrnHhk8IA" target="_blank"
88
- class="external-link button is-normal is-rounded is-dark">
89
- <span class="icon">
90
- <i class="fab fa-youtube"></i>
91
- </span>
92
- <span>Video</span>
93
- </a>
94
- </span>
95
  <!-- Code Link. -->
96
  <span class="link-block">
97
- <a href="https://github.com/google/nerfies" target="_blank"
98
  class="external-link button is-normal is-rounded is-dark">
99
  <span class="icon">
100
  <i class="fab fa-github"></i>
@@ -102,17 +76,17 @@
102
  <span>Code</span>
103
  </a>
104
  </span>
105
- <!-- Dataset Link. -->
106
  <span class="link-block">
107
- <a href="https://github.com/google/nerfies/releases/tag/0.1" target="_blank"
108
  class="external-link button is-normal is-rounded is-dark">
109
  <span class="icon">
110
- <i class="far fa-images"></i>
111
  </span>
112
- <span>Data</span>
113
  </a>
 
114
  </div>
115
-
116
  </div>
117
  </div>
118
  </div>
@@ -123,78 +97,14 @@
123
  <section class="hero teaser">
124
  <div class="container is-max-desktop">
125
  <div class="hero-body">
126
- <video id="teaser" autoplay muted loop playsinline height="100%">
127
- <source src="./static/videos/teaser.mp4"
128
- type="video/mp4">
129
- </video>
130
  <h2 class="subtitle has-text-centered">
131
- <span class="dnerf">Nerfies</span> turns selfie videos from your phone into
132
- free-viewpoint
133
- portraits.
134
  </h2>
135
  </div>
136
  </div>
137
  </section>
138
 
139
-
140
- <section class="hero is-light is-small">
141
- <div class="hero-body">
142
- <div class="container">
143
- <div id="results-carousel" class="carousel results-carousel">
144
- <div class="item item-steve">
145
- <video poster="" id="steve" autoplay controls muted loop playsinline height="100%">
146
- <source src="./static/videos/steve.mp4"
147
- type="video/mp4">
148
- </video>
149
- </div>
150
- <div class="item item-chair-tp">
151
- <video poster="" id="chair-tp" autoplay controls muted loop playsinline height="100%">
152
- <source src="./static/videos/chair-tp.mp4"
153
- type="video/mp4">
154
- </video>
155
- </div>
156
- <div class="item item-shiba">
157
- <video poster="" id="shiba" autoplay controls muted loop playsinline height="100%">
158
- <source src="./static/videos/shiba.mp4"
159
- type="video/mp4">
160
- </video>
161
- </div>
162
- <div class="item item-fullbody">
163
- <video poster="" id="fullbody" autoplay controls muted loop playsinline height="100%">
164
- <source src="./static/videos/fullbody.mp4"
165
- type="video/mp4">
166
- </video>
167
- </div>
168
- <div class="item item-blueshirt">
169
- <video poster="" id="blueshirt" autoplay controls muted loop playsinline height="100%">
170
- <source src="./static/videos/blueshirt.mp4"
171
- type="video/mp4">
172
- </video>
173
- </div>
174
- <div class="item item-mask">
175
- <video poster="" id="mask" autoplay controls muted loop playsinline height="100%">
176
- <source src="./static/videos/mask.mp4"
177
- type="video/mp4">
178
- </video>
179
- </div>
180
- <div class="item item-coffee">
181
- <video poster="" id="coffee" autoplay controls muted loop playsinline height="100%">
182
- <source src="./static/videos/coffee.mp4"
183
- type="video/mp4">
184
- </video>
185
- </div>
186
- <div class="item item-toby">
187
- <video poster="" id="toby" autoplay controls muted loop playsinline height="100%">
188
- <source src="./static/videos/toby2.mp4"
189
- type="video/mp4">
190
- </video>
191
- </div>
192
- </div>
193
- </div>
194
- </div>
195
- </section>
196
-
197
-
198
  <section class="section">
199
  <div class="container is-max-desktop">
200
  <!-- Abstract. -->
@@ -203,210 +113,166 @@
203
  <h2 class="title is-3">Abstract</h2>
204
  <div class="content has-text-justified">
205
  <p>
206
- We present the first method capable of photorealistically reconstructing a non-rigidly
207
- deforming scene using photos/videos captured casually from mobile phones.
208
  </p>
209
  <p>
210
- Our approach augments neural radiance fields
211
- (NeRF) by optimizing an
212
- additional continuous volumetric deformation field that warps each observed point into a
213
- canonical 5D NeRF.
214
- We observe that these NeRF-like deformation fields are prone to local minima, and
215
- propose a coarse-to-fine optimization method for coordinate-based models that allows for
216
- more robust optimization.
217
- By adapting principles from geometry processing and physical simulation to NeRF-like
218
- models, we propose an elastic regularization of the deformation field that further
219
- improves robustness.
220
  </p>
221
  <p>
222
- We show that <span class="dnerf">Nerfies</span> can turn casually captured selfie
223
- photos/videos into deformable NeRF
224
- models that allow for photorealistic renderings of the subject from arbitrary
225
- viewpoints, which we dub <i>"nerfies"</i>. We evaluate our method by collecting data
226
- using a
227
- rig with two mobile phones that take time-synchronized photos, yielding train/validation
228
- images of the same pose at different viewpoints. We show that our method faithfully
229
- reconstructs non-rigidly deforming scenes and reproduces unseen views with high
230
- fidelity.
231
  </p>
232
  </div>
233
  </div>
234
  </div>
235
  <!--/ Abstract. -->
236
 
237
- <!-- Paper video. -->
238
  <div class="columns is-centered has-text-centered">
239
  <div class="column is-four-fifths">
240
- <h2 class="title is-3">Video</h2>
241
- <div class="publication-video">
242
- <iframe src="https://www.youtube.com/embed/MrKrnHhk8IA?rel=0&amp;showinfo=0"
243
- frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
 
 
 
244
  </div>
245
  </div>
246
  </div>
247
- <!--/ Paper video. -->
248
  </div>
249
  </section>
250
 
251
-
252
  <section class="section">
253
  <div class="container is-max-desktop">
254
-
255
  <div class="columns is-centered">
256
-
257
- <!-- Visual Effects. -->
258
- <div class="column">
259
- <div class="content">
260
- <h2 class="title is-3">Visual Effects</h2>
 
261
  <p>
262
- Using <i>nerfies</i> you can create fun visual effects. This Dolly zoom effect
263
- would be impossible without nerfies since it would require going through a wall.
264
  </p>
265
- <video id="dollyzoom" autoplay controls muted loop playsinline height="100%">
266
- <source src="./static/videos/dollyzoom-stacked.mp4"
267
- type="video/mp4">
268
- </video>
269
  </div>
270
- </div>
271
- <!--/ Visual Effects. -->
272
-
273
- <!-- Matting. -->
274
- <div class="column">
275
- <h2 class="title is-3">Matting</h2>
276
  <div class="columns is-centered">
277
- <div class="column content">
278
- <p>
279
- As a byproduct of our method, we can also solve the matting problem by ignoring
280
- samples that fall outside of a bounding box during rendering.
281
- </p>
282
- <video id="matting-video" controls playsinline height="100%">
283
- <source src="./static/videos/matting.mp4"
284
- type="video/mp4">
285
- </video>
286
  </div>
287
-
288
  </div>
289
- </div>
290
- </div>
291
- <!--/ Matting. -->
292
-
293
- <!-- Animation. -->
294
- <div class="columns is-centered">
295
- <div class="column is-full-width">
296
- <h2 class="title is-3">Animation</h2>
297
 
298
- <!-- Interpolating. -->
299
- <h3 class="title is-4">Interpolating states</h3>
300
  <div class="content has-text-justified">
301
  <p>
302
- We can also animate the scene by interpolating the deformation latent codes of two input
303
- frames. Use the slider here to linearly interpolate between the left frame and the right
304
- frame.
305
  </p>
306
  </div>
307
- <div class="columns is-vcentered interpolation-panel">
308
- <div class="column is-3 has-text-centered">
309
- <img src="./static/images/interpolate_start.jpg"
310
- class="interpolation-image"
311
- alt="Interpolate start reference image."/>
312
- <p>Start Frame</p>
313
- </div>
314
- <div class="column interpolation-video-column">
315
- <div id="interpolation-image-wrapper">
316
- Loading...
317
- </div>
318
- <input class="slider is-fullwidth is-large is-info"
319
- id="interpolation-slider"
320
- step="1" min="0" max="100" value="0" type="range">
321
- </div>
322
- <div class="column is-3 has-text-centered">
323
- <img src="./static/images/interpolate_end.jpg"
324
- class="interpolation-image"
325
- alt="Interpolation end reference image."/>
326
- <p class="is-bold">End Frame</p>
327
  </div>
328
  </div>
329
- <br/>
330
- <!--/ Interpolating. -->
331
 
332
- <!-- Re-rendering. -->
333
- <h3 class="title is-4">Re-rendering the input video</h3>
334
  <div class="content has-text-justified">
335
  <p>
336
- Using <span class="dnerf">Nerfies</span>, you can re-render a video from a novel
337
- viewpoint such as a stabilized camera by playing back the training deformations.
338
  </p>
339
  </div>
340
- <div class="content has-text-centered">
341
- <video id="replay-video"
342
- controls
343
- muted
344
- preload
345
- playsinline
346
- width="75%">
347
- <source src="./static/videos/replay.mp4"
348
- type="video/mp4">
349
- </video>
350
  </div>
351
- <!--/ Re-rendering. -->
352
-
353
- </div>
354
- </div>
355
- <!--/ Animation. -->
356
-
357
-
358
- <!-- Concurrent Work. -->
359
- <div class="columns is-centered">
360
- <div class="column is-full-width">
361
- <h2 class="title is-3">Related Links</h2>
362
 
 
 
363
  <div class="content has-text-justified">
364
  <p>
365
- There's a lot of excellent work that was introduced around the same time as ours.
366
- </p>
367
- <p>
368
- <a href="https://arxiv.org/abs/2104.09125" target="_blank">Progressive Encoding for Neural Optimization</a> introduces an idea similar to our windowed position encoding for coarse-to-fine optimization.
369
- </p>
370
- <p>
371
- <a href="https://www.albertpumarola.com/research/D-NeRF/index.html" target="_blank">D-NeRF</a> and <a href="https://gvv.mpi-inf.mpg.de/projects/nonrigid_nerf/" target="_blank">NR-NeRF</a>
372
- both use deformation fields to model non-rigid scenes.
373
  </p>
374
  <p>
375
- Some works model videos with a NeRF by directly modulating the density, such as <a href="https://video-nerf.github.io/" target="_blank">Video-NeRF</a>, <a href="https://www.cs.cornell.edu/~zl548/NSFF/" target="_blank">NSFF</a>, and <a href="https://neural-3d-video.github.io/" target="_blank">DyNeRF</a>
376
  </p>
377
  <p>
378
- There are probably many more by the time you are reading this. Check out <a href="https://dellaert.github.io/NeRF/" target="_blank">Frank Dellart's survey on recent NeRF papers</a>, and <a href="https://github.com/yenchenlin/awesome-NeRF" target="_blank">Yen-Chen Lin's curated list of NeRF papers</a>.
379
  </p>
380
  </div>
381
  </div>
382
  </div>
383
- <!--/ Concurrent Work. -->
384
-
385
  </div>
386
  </section>
387
388
 
389
  <section class="section" id="BibTeX">
390
  <div class="container is-max-desktop content">
391
  <h2 class="title">BibTeX</h2>
392
- <pre><code>@article{park2021nerfies,
393
- author = {Park, Keunhong and Sinha, Utkarsh and Barron, Jonathan T. and Bouaziz, Sofien and Goldman, Dan B and Seitz, Steven M. and Martin-Brualla, Ricardo},
394
- title = {Nerfies: Deformable Neural Radiance Fields},
395
- journal = {ICCV},
396
- year = {2021},
 
397
  }</code></pre>
398
  </div>
399
  </section>
400
 
401
-
402
  <footer class="footer">
403
  <div class="container">
404
  <div class="content has-text-centered">
405
- <a class="icon-link" target="_blank"
406
- href="./static/videos/nerfies_paper.pdf">
407
  <i class="fas fa-file-pdf"></i>
408
  </a>
409
- <a class="icon-link" href="https://github.com/keunhong" target="_blank" class="external-link" disabled>
410
  <i class="fab fa-github"></i>
411
  </a>
412
  </div>
@@ -418,13 +284,6 @@
418
  href="http://creativecommons.org/licenses/by-sa/4.0/">Creative
419
  Commons Attribution-ShareAlike 4.0 International License</a>.
420
  </p>
421
- <p>
422
- This means you are free to borrow the <a target="_blank"
423
- href="https://github.com/nerfies/nerfies.github.io">source code</a> of this website,
424
- we just ask that you link back to this page in the footer.
425
- Please remember to remove the analytics code included in the header of the website which
426
- you do not want on your website.
427
- </p>
428
  </div>
429
  </div>
430
  </div>
 
3
  <head>
4
  <meta charset="utf-8">
5
  <meta name="description"
6
+ content="Full-Stack Fine-Tuning for the Q Programming Language: A comprehensive approach to adapting LLMs for specialized domains.">
7
+ <meta name="keywords" content="Q Programming, Language Models, Fine-tuning, Quantitative Finance">
8
  <meta name="viewport" content="width=device-width, initial-scale=1">
9
+ <title>Full-Stack Fine-Tuning for the Q Programming Language</title>
10
 
11
  <link href="https://fonts.googleapis.com/css?family=Google+Sans|Noto+Sans|Castoro"
12
  rel="stylesheet">
 
33
  <div class="container is-max-desktop">
34
  <div class="columns is-centered">
35
  <div class="column has-text-centered">
36
+ <h1 class="title is-1 publication-title">Full-Stack Fine-Tuning for the Q Programming Language</h1>
37
  <div class="is-size-5 publication-authors">
38
  <span class="author-block">
39
+ <a href="https://x.com/brendanh0gan" target="_blank">Brendan R. Hogan</a><sup>1</sup>,</span>
40
  <span class="author-block">
41
+ <a href="https://x.com/willccbb" target="_blank">Will Brown</a><sup>2</sup>,</span>
42
  <span class="author-block">
43
+ <a href="https://x.com/adel_boyarsky" target="_blank">Adel Boyarsky</a><sup>1</sup>,
44
  </span>
45
  <span class="author-block">
46
+ <a href="#" target="_blank">Anderson Schneider</a><sup>1</sup>,
47
  </span>
48
  <span class="author-block">
49
+ <a href="#" target="_blank">Yuriy Nevmyvaka</a><sup>1</sup>
50
  </span>
51
  </div>
52
 
53
  <div class="is-size-5 publication-authors">
54
+ <span class="author-block"><sup>1</sup>Morgan Stanley, New York, NY</span>
55
+ <span class="author-block"><sup>2</sup>Prime Intellect, San Francisco, CA</span>
56
  </div>
57
 
58
  <div class="column has-text-centered">
59
  <div class="publication-links">
 
60
  <span class="link-block">
61
+ <a href="#" target="_blank"
62
  class="external-link button is-normal is-rounded is-dark">
63
  <span class="icon">
64
  <i class="ai ai-arxiv"></i>
 
66
  <span>arXiv</span>
67
  </a>
68
  </span>
69
  <!-- Code Link. -->
70
  <span class="link-block">
71
+ <a href="#" target="_blank"
72
  class="external-link button is-normal is-rounded is-dark">
73
  <span class="icon">
74
  <i class="fab fa-github"></i>
 
76
  <span>Code</span>
77
  </a>
78
  </span>
79
+ <!-- Models Link. -->
80
  <span class="link-block">
81
+ <a href="#" target="_blank"
82
  class="external-link button is-normal is-rounded is-dark">
83
  <span class="icon">
84
+ <i class="fas fa-robot"></i>
85
  </span>
86
+ <span>Models</span>
87
  </a>
88
+ </span>
89
  </div>
 
90
  </div>
91
  </div>
92
  </div>
 
97
  <section class="hero teaser">
98
  <div class="container is-max-desktop">
99
  <div class="hero-body">
100
+ <img src="./static/images/combined_api_qwen_performance.png" alt="Q Model Performance" style="width: 100%; height: auto;">
 
 
 
101
  <h2 class="subtitle has-text-centered">
102
+ Our fully trained models outperform frontier models on Q programming tasks, with our 32B model surpassing Claude Opus-4 by 29.5%.
 
 
103
  </h2>
104
  </div>
105
  </div>
106
  </section>
107
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
108
  <section class="section">
109
  <div class="container is-max-desktop">
110
  <!-- Abstract. -->
 
113
  <h2 class="title is-3">Abstract</h2>
114
  <div class="content has-text-justified">
115
  <p>
116
+ Even though large language models are becoming increasingly capable, it is still unreasonable to expect them to excel at tasks that are under-represented on the Internet. Leveraging LLMs for specialized applications, particularly in niche programming languages and private domains, remains challenging and largely unsolved.
 
117
  </p>
118
  <p>
119
+ In this work, we address this gap by presenting a comprehensive, open-source approach for adapting LLMs to the Q programming language, a popular tool in quantitative finance. We introduce a new LeetCode-style evaluation dataset for Q, benchmark major frontier models on it, and then perform pretraining, supervised fine-tuning, and reinforcement learning to train a suite of models based on the Qwen-2.5 series, spanning five parameter sizes (1.5B, 3B, 7B, 14B, 32B).
120
  </p>
121
  <p>
122
+ Our best model achieves a pass@1 accuracy of 59% on our Q benchmark, surpassing the best-performing frontier model, Claude Opus-4, by 29.5%. Additionally, all our models, even our 1.5B variant, outperform GPT-4.1 on this task. We provide a detailed blueprint for dataset construction, model pretraining, supervised fine-tuning, and reinforcement learning that is broadly applicable to other specialized domains.
123
  </p>
124
  </div>
125
  </div>
126
  </div>
127
  <!--/ Abstract. -->
128
 
129
+ <!-- Important Notice -->
130
  <div class="columns is-centered has-text-centered">
131
  <div class="column is-four-fifths">
132
+ <div class="notification is-warning">
133
+ <h3 class="title is-4">Note for Q Practitioners</h3>
134
+ <div class="content has-text-justified">
135
+ <p>
136
+ Our Q benchmark uses a LeetCode-style format that produces "Pythonic Q" code, which does not reflect typical Q usage in practice (database queries and analytics). While our models show strong performance on algorithmic tasks, practitioners may find that our pretrained models (before SFT/RL) provide better general-purpose Q assistance. This work serves as a blueprint for further adaptation to more representative Q datasets.
137
+ </p>
138
+ </div>
139
  </div>
140
  </div>
141
  </div>
142
+ <!--/ Important Notice -->
143
  </div>
144
  </section>
145
 
 
146
  <section class="section">
147
  <div class="container is-max-desktop">
 
148
  <div class="columns is-centered">
149
+ <div class="column is-full-width">
150
+ <h2 class="title is-3">Training Pipeline & Results</h2>
151
+
152
+ <!-- Dataset Construction -->
153
+ <h3 class="title is-4">Building the Dataset</h3>
154
+ <div class="content has-text-justified">
155
  <p>
156
+ We started by creating a verifiable Q dataset through a model-in-the-loop approach. Using LeetCode problems as a foundation, we iteratively generated Q solutions, verified them programmatically, and used successful examples to train better models. This bootstrapping process involved careful separation of solution and test generation to prevent reward hacking, ultimately resulting in a diverse dataset covering multiple problem types and difficulty levels.
 
157
  </p>
158
  </div>
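The model-in-the-loop bootstrapping described above can be sketched as a verify-and-keep loop (a sketch only: `generate_fn` and `verify_fn` are hypothetical stand-ins for the model sampler and the programmatic Q test harness, not the paper's implementation):

```python
def bootstrap_dataset(problems, generate_fn, verify_fn, rounds=3):
    """Model-in-the-loop bootstrapping sketch: each round, sample a
    candidate Q solution per problem, keep only candidates that pass
    programmatic verification, and let the growing verified set
    condition the next round's generations."""
    verified = []
    for _ in range(rounds):
        for problem in problems:
            candidate = generate_fn(problem, verified)
            if verify_fn(problem, candidate):
                verified.append((problem, candidate))
    return verified
```

Keeping solution generation and test generation separate, as the text notes, prevents a single model from writing tests its own wrong solutions would pass.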
 
 
 
 
 
 
159
  <div class="columns is-centered">
160
+ <div class="column is-8">
161
+ <img src="./static/images/dataset_distribution.png" alt="Dataset Distribution" style="width: 100%; height: auto;">
162
+ <p class="has-text-centered is-size-7"><em>Distribution of problem difficulty levels and categories in our Q-LeetCode dataset, showing comprehensive coverage across algorithmic topics.</em></p>
163
  </div>
 
164
  </div>
165
 
166
+ <!-- Pretraining -->
167
+ <h3 class="title is-4">Domain-Adaptive Pretraining</h3>
168
  <div class="content has-text-justified">
169
  <p>
170
+ We collected and curated Q code from open-source repositories and official documentation, creating a high-quality corpus for domain adaptation. Pretraining on this data gave our models foundational knowledge of Q syntax and idioms, providing a crucial boost that helped break through the bootstrapping plateau and established a strong foundation for subsequent fine-tuning stages.
 
 
171
  </p>
172
  </div>
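Continued pretraining on a raw code corpus typically packs tokenized documents into fixed-length blocks before training; a minimal sketch of that preprocessing step (the block size and separator token here are assumptions, not values from the paper):

```python
def pack_sequences(token_docs, block_size=2048, eos_id=0):
    """Concatenate tokenized documents, separated by an EOS token,
    then chunk the stream into fixed-length training blocks."""
    stream = []
    for doc in token_docs:
        stream.extend(doc)
        stream.append(eos_id)  # mark document boundaries
    return [stream[i:i + block_size]
            for i in range(0, len(stream) - block_size + 1, block_size)]
```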
173
+ <div class="columns is-centered">
174
+ <div class="column is-8">
175
+ <img src="./static/images/pretrain_models_performance.png" alt="Pretraining Performance" style="width: 100%; height: auto;">
176
+ <p class="has-text-centered is-size-7"><em>Performance improvements across all model sizes after domain-adaptive pretraining, with larger models showing greater gains.</em></p>
177
  </div>
178
  </div>
 
 
179
 
180
+ <!-- SFT -->
181
+ <h3 class="title is-4">Supervised Fine-Tuning</h3>
182
  <div class="content has-text-justified">
183
  <p>
184
+ Using our curated LeetCode-Q dataset, we performed supervised fine-tuning on multiple task types: description-to-Q, Python-to-Q, and Q-to-Python translation. This stage directly optimized our models for the specific algorithmic challenges in our benchmark, building on the general Q knowledge from pretraining to achieve substantial performance improvements across all model sizes.
 
185
  </p>
186
  </div>
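The three task formats can be turned into prompt/completion pairs with a small template map (the prompt wording below is illustrative, not the paper's actual templates):

```python
TEMPLATES = {
    "desc_to_q": "Solve the following problem in Q:\n{source}",
    "py_to_q":   "Translate this Python solution into Q:\n{source}",
    "q_to_py":   "Translate this Q solution into Python:\n{source}",
}

def make_sft_example(task: str, source: str, target: str) -> dict:
    """Build one prompt/completion pair for supervised fine-tuning."""
    return {"prompt": TEMPLATES[task].format(source=source),
            "completion": target}
```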
187
+ <div class="columns is-centered">
188
+ <div class="column is-8">
189
+ <img src="./static/images/sft_models_performance.png" alt="SFT Performance" style="width: 100%; height: auto;">
190
+ <p class="has-text-centered is-size-7"><em>Significant performance gains from supervised fine-tuning across all model sizes, demonstrating the value of task-specific training.</em></p>
191
+ </div>
192
  </div>
193
 
194
+ <!-- Reinforcement Learning -->
195
+ <h3 class="title is-4">Reinforcement Learning</h3>
196
  <div class="content has-text-justified">
197
  <p>
198
+ Finally, we applied reinforcement learning using programmatic rewards based on test case correctness. We explored both reasoning and non-reasoning variants, with particularly strong results for our 14B and 32B reasoning models. RL provided the final boost in performance, with reasoning models showing the ability to think through complex problems step-by-step before generating code solutions.
199
  </p>
200
+ </div>
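A programmatic reward of this kind is simply the fraction of unit tests a candidate passes (a sketch; `run_test` is a hypothetical hook into the Q execution harness):

```python
def programmatic_reward(candidate: str, tests, run_test) -> float:
    """Test-based reward for RL: the fraction of test cases the
    generated Q program passes, in [0.0, 1.0]."""
    if not tests:
        return 0.0
    passed = sum(1 for t in tests if run_test(candidate, t))
    return passed / len(tests)
```

Because the reward is computed by executing tests rather than by a learned judge, it cannot be flattered by plausible-looking but incorrect code, though it still requires the solution/test separation noted above.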
201
+ <div class="columns is-centered">
202
+ <div class="column is-8">
203
+ <img src="./static/images/rl_models_performance.png" alt="RL Performance" style="width: 100%; height: auto;">
204
+ <p class="has-text-centered is-size-7"><em>Performance improvements from reinforcement learning across model sizes, with reasoning variants showing particularly strong gains for larger models.</em></p>
205
+ </div>
206
+ </div>
207
+
208
+ <div class="content has-text-justified">
209
  <p>
210
+ Interestingly, during RL training, we observed that our reasoning models learned to generate longer, more thoughtful completions over time. This suggests the models discovered that taking more time to reason through problems led to better solutions, naturally developing a "slow thinking" approach for complex algorithmic challenges.
211
  </p>
212
+ </div>
213
+ <div class="columns is-centered">
214
+ <div class="column is-8">
215
+ <img src="./static/images/mean_completion_tokens.png" alt="Completion Length Evolution" style="width: 100%; height: auto;">
216
+ <p class="has-text-centered is-size-7"><em>Evolution of mean completion length during RL training, showing how the reasoning model learned to generate longer, more thoughtful responses for better results.</em></p>
217
+ </div>
218
+ </div>
219
+
220
+ <!-- Final Results -->
221
+ <h3 class="title is-4">Cumulative Impact</h3>
222
+ <div class="content has-text-justified">
223
  <p>
224
+ Each stage of our pipeline contributed meaningfully to the final performance. The chart below shows how pretraining, supervised fine-tuning, and reinforcement learning each added value, with larger models benefiting more from each adaptation stage.
225
  </p>
226
  </div>
227
+ <div class="columns is-centered">
228
+ <div class="column is-10">
229
+ <img src="./static/images/qwen_stacked_value_add.png" alt="Cumulative Gains" style="width: 100%; height: auto;">
230
+ <p class="has-text-centered is-size-7"><em>Stacked chart showing the cumulative contribution of each training stage across all model sizes.</em></p>
231
+ </div>
232
+ </div>
233
  </div>
234
  </div>
 
 
235
  </div>
236
  </section>
237
 
238
+ <section class="section">
239
+ <div class="container is-max-desktop">
240
+ <div class="columns is-centered">
241
+ <div class="column is-full-width">
242
+ <h2 class="title is-3">Key Contributions</h2>
243
+ <div class="content">
244
+ <ul>
245
+ <li><strong>New Q Benchmark:</strong> First LeetCode-style evaluation dataset for Q programming language with rigorous programmatic verification</li>
246
+ <li><strong>Complete Model Suite:</strong> Five model sizes (1.5B-32B) all outperforming GPT-4.1, with top models exceeding Claude Opus-4</li>
247
+ <li><strong>Practical Blueprint:</strong> End-to-end methodology for LLM domain adaptation with all code, data, and training scripts released</li>
248
+ <li><strong>Methodological Insights:</strong> Lessons on reward hacking, evaluation design, and scaling effects in specialized domains</li>
249
+ </ul>
250
+ </div>
251
+ </div>
252
+ </div>
253
+ </div>
254
+ </section>
255
 
256
  <section class="section" id="BibTeX">
257
  <div class="container is-max-desktop content">
258
  <h2 class="title">BibTeX</h2>
259
+ <pre><code>@article{hogan2025fullstack,
260
+ author = {Hogan, Brendan R. and Brown, Will and Boyarsky, Adel and Schneider, Anderson and Nevmyvaka, Yuriy},
261
+ title = {Full-Stack Fine-Tuning for the Q Programming Language},
262
+ journal = {arXiv preprint},
263
+ year = {2025},
264
+ url = {https://arxiv.org/abs/XXXX.XXXXX}
265
  }</code></pre>
266
  </div>
267
  </section>
268
 
 
269
  <footer class="footer">
270
  <div class="container">
271
  <div class="content has-text-centered">
272
+ <a class="icon-link" target="_blank" href="#">
 
273
  <i class="fas fa-file-pdf"></i>
274
  </a>
275
+ <a class="icon-link external-link" href="#" target="_blank">
276
  <i class="fab fa-github"></i>
277
  </a>
278
  </div>
 
284
  href="http://creativecommons.org/licenses/by-sa/4.0/">Creative
285
  Commons Attribution-ShareAlike 4.0 International License</a>.
286
  </p>
287
  </div>
288
  </div>
289
  </div>
static/images/combined_api_qwen_performance.png ADDED

Git LFS Details

  • SHA256: 3edb55b7d3017214058a123029da11154060c4cdef6a890d156854d1dc46cac4
  • Pointer size: 131 Bytes
  • Size of remote file: 238 kB
static/images/dataset_distribution.png ADDED

Git LFS Details

  • SHA256: 478a07dd4bcd9f341ac65415e45fed7e21cc7b7a98e4ce67fd3c278aead22fcf
  • Pointer size: 131 Bytes
  • Size of remote file: 302 kB
static/images/mean_completion_tokens.png ADDED

Git LFS Details

  • SHA256: e893712d005bfc8dda8c186a2e5cc27818a8ad5a20c86a67a42a27dfcea0d1a2
  • Pointer size: 131 Bytes
  • Size of remote file: 727 kB
static/images/model_perf.png ADDED

Git LFS Details

  • SHA256: 06feee5407daca7e504e6d037e1855a517a8d85f923ccc99db6ef0ecf7caf04b
  • Pointer size: 132 Bytes
  • Size of remote file: 1.04 MB
static/images/post_pretraining.png ADDED

Git LFS Details

  • SHA256: 5cf1cb3878c8dd7e8e899034a497b1fe27dc8fbf1135dcd98fefec98461dc65d
  • Pointer size: 131 Bytes
  • Size of remote file: 610 kB
static/images/pretrain_models_performance.png ADDED

Git LFS Details

  • SHA256: dea6e01e389ed66474b8b687c8f3a1c76ed6ed0941f88b6fb67de20bed60f193
  • Pointer size: 131 Bytes
  • Size of remote file: 291 kB
static/images/qwen_stacked_value_add.png ADDED

Git LFS Details

  • SHA256: ee8107980ab1002691817272e82dc2fb6744e558407c47c30dcf271f585b348c
  • Pointer size: 131 Bytes
  • Size of remote file: 218 kB
static/images/reasoning_train_reward.png ADDED

Git LFS Details

  • SHA256: 412063f7c801c8d52c6d63f6e66b6d7085421c66ea27e0ba88c560b8f4c262f9
  • Pointer size: 131 Bytes
  • Size of remote file: 857 kB
static/images/rl_models_performance.png ADDED

Git LFS Details

  • SHA256: 422874a746e5f729d0ce1cd7517b3b00b91ca81cacc8af340fb3801610620a5b
  • Pointer size: 131 Bytes
  • Size of remote file: 317 kB
static/images/sft_models_performance.png ADDED

Git LFS Details

  • SHA256: a20f0cc979f697135d69243b6c731cca0bb15c48b2b26fee85f6c24198533622
  • Pointer size: 131 Bytes
  • Size of remote file: 325 kB