I picked the subreddits that were the most popular. I did not pick any NSFW subreddits, but there is probably some NSFW language in here.

The comment scores were filtered: any comment with a score of 1 or less was excluded, which means at least one person had to upvote a comment for it to be included.
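For intuition, the filter amounts to a simple threshold on the comment score. A minimal sketch, assuming the raw comments sit in a pandas DataFrame with a `score` column (the column name and DataFrame layout are assumptions, not the exact pipeline):

```python
import pandas as pd

# Hypothetical raw comments; only the score threshold matters here
comments = pd.DataFrame({
    "top_comment": ["useful answer", "low-effort reply"],
    "score": [12, 1],
})

# Keep only comments with score > 1, i.e. at least one upvote
# beyond the default self-vote
filtered = comments[comments["score"] > 1]
```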

The subreddits in the dataset are:

* AskReddit
* worldnews
* creepy
* nosleep

## Loading the dataset

Each entry in the dataset includes the following columns:

- **title**: The title of the Reddit post.
- **selftext**: The body text of the Reddit post.
- **top_comment**: The top comment on the Reddit post.
- **subreddit**: The subreddit where the post was made.

### 1. Loading the Entire Dataset

To load the entire dataset, use the following code:

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("cowWhySo/reddit_top_comments")
```
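Each example then behaves like a dict keyed by the columns listed above. A quick access sketch (only the documented columns are assumed, nothing more):

```python
from datasets import load_dataset

dataset = load_dataset("cowWhySo/reddit_top_comments", split="train")

# Each row exposes the documented columns as dict keys
example = dataset[0]
print(example["title"])
print(example["subreddit"])
print(example["top_comment"])
```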

### 2. Loading Specific Splits

To load specific splits of the dataset:

```python
from datasets import load_dataset

# Load the train split
train_dataset = load_dataset("cowWhySo/reddit_top_comments", split="train")

# Load the validation split
validation_dataset = load_dataset("cowWhySo/reddit_top_comments", split="validation")

# Load the test split
test_dataset = load_dataset("cowWhySo/reddit_top_comments", split="test")
```
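To confirm what was loaded, each split reports its size; `num_rows` is a standard `Dataset` attribute (the split name is taken from the example above):

```python
from datasets import load_dataset

train_dataset = load_dataset("cowWhySo/reddit_top_comments", split="train")

# num_rows gives the number of examples in the split
print(train_dataset.num_rows)
```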

### 3. Streaming the Dataset

You can stream the dataset instead of downloading it in full:

```python
from datasets import load_dataset

# Stream the train split
train_streaming = load_dataset("cowWhySo/reddit_top_comments", split="train", streaming=True)

# Iterate through the dataset
for example in train_streaming:
    print(example)
    break  # Just print the first example for demonstration
```
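A streamed split is an `IterableDataset`, so it supports iteration but not random indexing or `len()`. If your `datasets` version provides `take` (recent releases do), sampling the first few examples is more concise:

```python
from datasets import load_dataset

stream = load_dataset("cowWhySo/reddit_top_comments", split="train", streaming=True)

# take(n) yields only the first n examples from the stream
for example in stream.take(3):
    print(example["subreddit"], example["title"])
```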

### 4. Loading a Specific Slice

To load a specific portion of the dataset:

```python
from datasets import load_dataset

# Load the first 10% of the train split
train_slice = load_dataset("cowWhySo/reddit_top_comments", split="train[:10%]")

# Print the first few examples
print(train_slice[:5])
```
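The same split-slicing syntax also accepts absolute row indices, which is handy for a fixed-size sample:

```python
from datasets import load_dataset

# Absolute slicing: just the first 1000 rows of the train split
first_1000 = load_dataset("cowWhySo/reddit_top_comments", split="train[:1000]")
```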

## Code to download subreddits

dl_subbreddits.sh:

```python
combined_df = pd.concat(dfs, ignore_index=True)

# Save the combined dataframe to a Parquet file
combined_df.to_parquet('reddit_top_comments.parquet', index=False)
```
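To sanity-check the export, the Parquet file can be read back with pandas (a minimal check, assuming the file was written to the working directory):

```python
import pandas as pd

# Reload the combined comments and inspect the first rows
df = pd.read_parquet('reddit_top_comments.parquet')
print(df.head())
```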

## Source

https://the-eye.eu/redarcs/ (covering 2005-2022)
|