Update README.md
```python
warp = WarpLJP(args=args)
for data in dataset:
    x, _, y = warp.processing_single(data)
```
## Code

We provide a set of Monte Carlo tree sampling-based inference methods as baselines in the code; they can be invoked directly via the script:
```shell
sh test.sh
```
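For background, the Monte Carlo tree search idea behind these baselines can be sketched with a minimal UCT loop on a toy problem (choosing 4 bits to maximize the number they encode). This is an illustrative sketch only, not this repository's implementation; the names `Node`, `rollout`, and `search` are invented for the example.

```python
import math
import random

DEPTH = 4  # toy task: pick 4 bits to maximize the binary number they encode


class Node:
    def __init__(self, bits=()):
        self.bits = bits
        self.children = {}   # action (0 or 1) -> Node
        self.visits = 0
        self.value = 0.0     # sum of rollout rewards


def rollout(bits):
    # Finish the sequence with random bits, then score it in [0, 1].
    while len(bits) < DEPTH:
        bits = bits + (random.randint(0, 1),)
    return int("".join(map(str, bits)), 2) / (2 ** DEPTH - 1)


def select_child(node):
    # UCB1: trade off the mean reward against an exploration bonus.
    return max(
        node.children.values(),
        key=lambda c: c.value / c.visits
        + math.sqrt(2 * math.log(node.visits) / c.visits),
    )


def search(iterations=2000):
    random.seed(0)
    root = Node()
    for _ in range(iterations):
        node, path = root, [root]
        # Selection: descend while the node is fully expanded.
        while len(node.bits) < DEPTH and len(node.children) == 2:
            node = select_child(node)
            path.append(node)
        # Expansion: add one untried child.
        if len(node.bits) < DEPTH:
            action = 0 if 0 not in node.children else 1
            child = Node(node.bits + (action,))
            node.children[action] = child
            node = child
            path.append(node)
        # Simulation and backpropagation.
        reward = rollout(node.bits)
        for n in path:
            n.visits += 1
            n.value += reward
    # Greedy decode: follow the most-visited child at each level.
    bits, node = [], root
    while node.children:
        action, node = max(node.children.items(), key=lambda kv: kv[1].visits)
        bits.append(action)
    return bits


# With enough iterations the decoded sequence converges toward all ones.
print(search())
```

A reward model (as fine-tuned below) plays the role of `rollout` in such methods: it scores partially or fully generated outputs so the tree search can prefer higher-reward branches.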
If you need to fine-tune a reward model, run:
```shell
sh train.sh
```
To load the fine-tuned reward model during inference, add `--reward_model_path` to the command.
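For example, assuming `test.sh` forwards extra arguments to the underlying inference command (the checkpoint path below is a hypothetical placeholder, not a real artifact):

```shell
# Hypothetical invocation: replace the path with your own checkpoint.
sh test.sh --reward_model_path ./checkpoints/reward_model
```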