---
dataset_info:
  features:
    - name: puzzle
      dtype: string
    - name: solution
      dtype: string
  splits:
    - name: train
      num_examples: 1000000
    - name: test
      num_examples: 500
license: mit
task_categories:
  - text-generation
tags:
  - sudoku
  - constraint-satisfaction
  - planning
  - reasoning
  - diffusion-language-model
  - reinforcement-learning
size_categories:
  - 1M<n<10M
---

# 4x4 Sudoku Puzzle Dataset

- **Evaluation metric**: Cell-level accuracy (the fraction of originally empty cells filled correctly)

### Model Input Prompt Example

The following is the prompt format used by d1/d2/SPG for training and evaluation:

```
[System] Please solve the following 4x4 Sudoku puzzle. The puzzle is provided as a 16-character string reading left-to-right, top-to-bottom, where '0' represents empty cells.

Rules:
- Fill empty cells with digits 1-4
- Each row must contain digits 1-4 exactly once
- Each column must contain digits 1-4 exactly once
- Each 2x2 box must contain digits 1-4 exactly once

Important: Your solution must be a COMPLETE 16-character string with only the digits 1-4, representing your final solved grid.
Respond in this exact format:
<reasoning>
Your step-by-step solving process
</reasoning>
<answer>
[16-character solution string with no spaces or separators]
</answer>

[User] Solve the following Sudoku puzzle: 0321003004002100
```

**Expected model output** (example; actual chain-of-thought content varies by model):

```
<reasoning>
The puzzle grid is:
0 3 | 2 1
0 0 | 3 0
----+----
0 4 | 0 0
2 1 | 0 0

Row 1: has 3,2,1 → missing 4 → position 0 = 4
Row 2: has 3 → need 1,2,4
Col 0: has 4,2 → need 1 or 3; Row 2 has 3 → position 4 = 1
Col 1: has 3,4,1 → need 2 → position 5 = 2
Col 3: has 1 → need 2,3,4; Row 2 needs 4 → position 7 = 4
Row 3: has 4 → need 1,2,3
Col 2: has 2,3 → need 1 or 4; Row 3 needs 1,2,3 → position 10 = 1
Col 0: has 4,1,2 → need 3 → position 8 = 3
Col 3: has 1,4 → need 2 or 3; Row 3 has 3 → position 11 = 2
Row 4: has 2,1 → need 3,4
Col 2: has 2,3,1 → need 4 → position 14 = 4
Col 3: has 1,4,2 → need 3 → position 15 = 3
</reasoning>
<answer>
4321123434122143
</answer>
```

## Citation

```bibtex
@article{zhao2025d1,
  title={d1: Scaling Reasoning in Diffusion Large Language Models via Reinforcement Learning},
  author={Zhao, Siyan and Gupta, Devaansh and Zheng, Qinqing and Grover, Aditya},
  journal={arXiv preprint arXiv:2504.12216},
  year={2025}
}

@article{wang2025d2,
  title={d2: Improved Techniques for Training Reasoning Diffusion Language Models},
  author={Wang, Guanghan and Turok, Gilad and Schiff, Yair and Arriola, Marianne and Kuleshov, Volodymyr},
  journal={arXiv preprint arXiv:2509.21474},
  year={2025}
}

@article{spg2025,
  title={SPG: Sandwiched Policy Gradient for Masked Diffusion Language Models},
  author={Facebook Research},
  journal={arXiv preprint arXiv:2510.09541},
  year={2025}
}
```
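## Validation and Scoring Example

The answer format and evaluation metric above can be checked mechanically. Below is a minimal sketch that validates a 16-character solution string against the 4x4 Sudoku constraints and computes cell-level accuracy (the fraction of originally empty cells filled correctly). The helper names `is_valid_solution` and `cell_accuracy` are illustrative, not part of any released tooling.

```python
def is_valid_solution(sol: str) -> bool:
    """Check that a 16-char string is a legal solved 4x4 Sudoku grid."""
    if len(sol) != 16 or any(c not in "1234" for c in sol):
        return False
    grid = [[int(sol[4 * r + c]) for c in range(4)] for r in range(4)]
    full = {1, 2, 3, 4}
    for i in range(4):
        if set(grid[i]) != full:                    # row constraint
            return False
        if {grid[r][i] for r in range(4)} != full:  # column constraint
            return False
    for br in (0, 2):                               # 2x2 box constraint
        for bc in (0, 2):
            box = {grid[br + dr][bc + dc] for dr in range(2) for dc in range(2)}
            if box != full:
                return False
    return True


def cell_accuracy(puzzle: str, prediction: str, solution: str) -> float:
    """Fraction of empty ('0') puzzle cells that the prediction fills correctly."""
    empty = [i for i, c in enumerate(puzzle) if c == "0"]
    if not empty:
        return 1.0
    correct = sum(
        1 for i in empty if i < len(prediction) and prediction[i] == solution[i]
    )
    return correct / len(empty)


puzzle = "0321003004002100"
solution = "4321123434122143"
print(is_valid_solution(solution))                # True
print(cell_accuracy(puzzle, solution, solution))  # 1.0
```

Note that cell-level accuracy only scores the cells that were empty in the puzzle, so a model is not rewarded for echoing the given clues.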