Update README.md
README.md CHANGED
@@ -45,4 +45,19 @@ We use the [Alpaca fine-tuning script](https://github.com/tatsu-lab/stanford_alp
 
 Although this project aims to better align current LMs with social norms, inappropriate content and inherent biases in the training data will still impair the alignment of the model.
 
-The model should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.
+The model should not be used directly in any application without a prior assessment of safety and fairness concerns specific to the application.
+
+# Citation
+
+Please cite our paper if you use the data or code in this repo:
+
+```bibtex
+@misc{liu2023sociallyaligned,
+  title={Training Socially Aligned Language Models in Simulated Human Society},
+  author={Ruibo Liu and Ruixin Yang and Chenyan Jia and Ge Zhang and Denny Zhou and Andrew M. Dai and Diyi Yang and Soroush Vosoughi},
+  year={2023},
+  eprint={2305.16960},
+  archivePrefix={arXiv},
+  primaryClass={cs.CL}
+}
+```
|