|
|
--- |
|
|
license: cc0-1.0 |
|
|
--- |
|
|
|
|
|
# Dataset Card for Dataset Name |
|
|
|
|
|
## Dataset Description |
|
|
|
|
|
- **Homepage:** N/A |
|
|
- **Repository:** N/A |
|
|
- **Paper:** N/A |
|
|
- **Leaderboard:** N/A |
|
|
- **Point of Contact:** N/A |
|
|
|
|
|
### Dataset Summary |
|
|
|
|
|
Text from the r/sydney subreddit, obtained using ConvoKit.
|
|
|
|
|
### Supported Tasks and Leaderboards |
|
|
|
|
|
N/A |
|
|
|
|
|
### Languages |
|
|
|
|
|
English, typically Australian English. Because the text is taken from Reddit and has not been filtered, it will include swearing, profanity, slang, and possibly offensive material.
|
|
|
|
|
## Dataset Structure |
|
|
|
|
|
Plain text: utterance texts joined with newline characters (see the collection script below).
|
|
|
|
|
### Data Instances |
|
|
|
|
|
N/A |
|
|
|
|
|
### Data Fields |
|
|
|
|
|
N/A |
|
|
|
|
|
### Data Splits |
|
|
|
|
|
N/A. No predefined splits are provided; you need to create splits yourself.
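A minimal sketch of one way to make a deterministic line-level train/validation split (assuming the `input.txt` produced by the collection script; the fraction and seed here are arbitrary choices, not part of the dataset):

```python
import random

def split_lines(lines, val_frac=0.1, seed=0):
    """Shuffle lines deterministically, then split into train/validation."""
    rng = random.Random(seed)
    shuffled = list(lines)
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * val_frac)
    return shuffled[n_val:], shuffled[:n_val]

# Hypothetical usage with the file from the collection script:
# with open("input.txt", encoding="utf-8") as f:
#     train, val = split_lines(f.read().split("\n"))
```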
|
|
|
|
|
## Dataset Creation |
|
|
|
|
|
The data was collected using this script:
|
|
|
|
|
```python
from convokit import Corpus, download  # https://convokit.cornell.edu/documentation/subreddit.html

corpus = Corpus(filename=download("subreddit-sydney"))

# Collect all utterance texts, skipping deleted comments.
textarr = []
for utt in corpus.iter_utterances():
    if utt.text != "[deleted]":
        textarr.append(utt.text)

text = "\n".join(textarr)

with open("input.txt", "w") as text_file:
    text_file.write(text)
```
|
|
|
|
|
### Curation Rationale |
|
|
|
|
|
No specific curation was applied: the dataset is simply the raw text of the r/sydney subreddit as provided by ConvoKit, with deleted comments removed.
|
|
|
|
|
### Source Data |
|
|
|
|
|
Reddit's Sydney subreddit. |
|
|
|
|
|
#### Initial Data Collection and Normalization |
|
|
|
|
|
See the script above: utterances are downloaded via ConvoKit, comments whose text is "[deleted]" are dropped, and the remaining texts are joined with newlines into a single plain-text file. No further normalization is applied.
|
|
|
|
|
#### Who are the source language producers? |
|
|
|
|
|
Users of the r/sydney subreddit.
|
|
|
|
|
### Annotations |
|
|
|
|
|
#### Annotation process |
|
|
|
|
|
N/A |
|
|
|
|
|
#### Who are the annotators? |
|
|
|
|
|
N/A |
|
|
|
|
|
### Personal and Sensitive Information |
|
|
|
|
|
Most likely present. The data is unfiltered Reddit text, so it may contain personal or sensitive information posted by users.
|
|
|
|
|
## Considerations for Using the Data |
|
|
|
|
|
### Social Impact of Dataset |
|
|
|
|
|
The discussion is unfettered. Releasing an LLM trained solely on this data, with no safety precautions, would likely be irresponsible.
|
|
|
|
|
### Discussion of Biases |
|
|
|
|
|
This dataset is raw internet discussion, so it will be full of biases; no filtering or debiasing has been applied.
|
|
|
|
|
### Other Known Limitations |
|
|
|
|
|
None known.
|
|
|
|
|
## Additional Information |
|
|
|
|
|
None |
|
|
|
|
|
### Dataset Curators |
|
|
|
|
|
Not curated |
|
|
|
|
|
### Licensing Information |
|
|
|
|
|
The Python script and this representation of the Reddit data are released into the public domain (CC0 1.0). The original authors and Reddit may retain some rights over the underlying content.
|
|
|
|
|
### Citation Information |
|
|
|
|
|
None |
|
|
|
|
|
### Contributions |
|
|
|
|
|
N/A |
|
|
|