# DataBack: Dataset of CNF Formulas and Backbone Variable Phases

## What is DataBack
`DataBack` is a dataset that consists of SAT CNF formulas, each labeled with the phases of its backbone variables. Within `DataBack`, there are two distinct subsets: the pre-training set, named `DataBack-PT`, and the fine-tuning set, named `DataBack-FT`. The state-of-the-art backbone extractor, `CadiBack`, has been employed to obtain the backbone labels. Due to the increased complexity of the fine-tuning formulas, we have allocated a timeout of 1,000 seconds for the pre-training formulas and 5,000 seconds for the fine-tuning ones.
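For intuition, the backbone of a satisfiable formula is the set of variables that take the same phase in every satisfying assignment. Below is a minimal brute-force sketch in Python, for illustration only (`CadiBack` uses incremental SAT solving, not enumeration); clauses are lists of signed integers in DIMACS style:

```python
from itertools import product

def solutions(clauses, n):
    # enumerate all satisfying assignments of a CNF over variables 1..n
    for bits in product([False, True], repeat=n):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assign[abs(l)] == (l > 0) for l in cl) for cl in clauses):
            yield assign

def backbone(clauses, n):
    # a variable is in the backbone iff it has the same phase in every solution
    sols = list(solutions(clauses, n))
    bb = {}
    for v in range(1, n + 1):
        phases = {s[v] for s in sols}
        if len(phases) == 1:
            bb[v] = phases.pop()
    return bb
```

For example, `backbone([[1], [1, 2]], 2)` yields `{1: True}`: variable 1 is forced to true in every solution, while variable 2 takes both phases.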
The `DataBack` dataset has been employed to both pre-train and fine-tune our `NeuroBack` model, which has demonstrated significant improvements in SAT solving efficiency. For an in-depth exploration of `DataBack`, please refer to [our `NeuroBack` paper](https://arxiv.org/pdf/2110.14053.pdf).

## Authors
Wenxi Wang, Yang Hu, Mohit Tiwari, Sarfraz Khurshid, Ken McMillan, Risto Miikkulainen

## Publication
If you use `DataBack` in your research, please cite [our paper](https://arxiv.org/pdf/2110.14053.pdf):

```bib
@article{wang2023neuroback,
  author  = {Wang, Wenxi and
             Hu, Yang and
             Tiwari, Mohit and
             Khurshid, Sarfraz and
             McMillan, Kenneth L. and
             Miikkulainen, Risto},
  title   = {NeuroBack: Improving CDCL SAT Solving using Graph Neural Networks},
  journal = {arXiv preprint arXiv:2110.14053},
  year    = {2021}
}
```

## Directory Structure
```
|- original                  # Original CNFs and their backbone variable phases
|  |- cnf_pt.tar.gz          # CNFs for model pre-training
|  |- bb_pt.tar.gz           # Backbone phases for pre-training CNFs
|  |- cnf_ft.tar.gz          # CNFs for model fine-tuning
|  |- bb_ft.tar.gz           # Backbone phases for fine-tuning CNFs
|
|- dual                      # Dual CNFs and their backbone variable phases
|  |- dual_cnf_pt.tar.gz     # Dual CNFs for model pre-training
|  |- dual_bb_pt.tar.gz      # Backbone phases for dual pre-training CNFs
|  |- dual_cnf_ft.tar.gz     # Dual CNFs for model fine-tuning
|  |- dual_bb_ft.tar.gz      # Backbone phases for dual fine-tuning CNFs
```
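Each entry above is a gzip-compressed tar archive. A minimal sketch for unpacking one with the Python standard library (the `extract_archive` helper and destination paths are illustrative, not part of the dataset):

```python
import tarfile
from pathlib import Path

def extract_archive(archive, dest):
    # unpack one DataBack archive (e.g. original/cnf_pt.tar.gz) into dest
    Path(dest).mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(dest)
    # return the extracted file names for inspection
    return sorted(p.name for p in Path(dest).rglob("*") if p.is_file())
```

For example, `extract_archive("original/cnf_pt.tar.gz", "cnf_pt")` unpacks the pre-training CNFs into `cnf_pt/`.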

## File Naming Convention
In the original directory, each CNF tar file contains compressed CNF files named: `[cnf_name].cnf.[compression_format]`, where `[compression_format]` could be bz2, lzma, xz, gz, etc. Correspondingly, each backbone tar file comprises compressed backbone files named: `[cnf_name].cnf.backbone.xz`. It is important to note that a compressed CNF file will always share its `[cnf_name]` with its associated compressed backbone file.
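Because `[compression_format]` varies across files, a reader has to dispatch on the suffix. A minimal sketch using only the Python standard library (the `read_compressed` helper is hypothetical, not shipped with the dataset):

```python
import bz2, gzip, lzma

# lzma.open auto-detects both the .xz and legacy .lzma container formats
OPENERS = {".bz2": bz2.open, ".gz": gzip.open, ".xz": lzma.open, ".lzma": lzma.open}

def read_compressed(path):
    # open a compressed CNF or backbone file based on its suffix
    suffix = path[path.rfind("."):]
    opener = OPENERS.get(suffix)
    if opener is None:
        raise ValueError(f"unsupported compression format: {suffix}")
    with opener(path, "rt") as f:
        return f.read()
```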
In the dual directory, the naming convention remains consistent, but with an added `d_` prefix for each compressed CNF or backbone file to indicate it pertains to a dual CNF formula.
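Given these conventions, the backbone file for any compressed CNF can be derived from its name. A sketch (the `backbone_name` helper is illustrative, not part of the dataset):

```python
import re

def backbone_name(cnf_filename):
    # map "[cnf_name].cnf.[compression_format]" to "[cnf_name].cnf.backbone.xz";
    # dual files already carry the "d_" prefix, so no special handling is needed
    m = re.fullmatch(r"(.+\.cnf)\.[A-Za-z0-9]+", cnf_filename)
    if m is None:
        raise ValueError(f"unexpected CNF file name: {cnf_filename}")
    return m.group(1) + ".backbone.xz"
```

For example, `backbone_name("example.cnf.bz2")` returns `"example.cnf.backbone.xz"`.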

## Contact
Wenxi Wang (wenxiw@utexas.edu), Yang Hu (huyang@utexas.edu)