# Reddit Randomness Dataset

A dataset I created because I was curious about how "random" r/random really is.
This data was collected by sending `GET` requests to `https://www.reddit.com/r/random` for a few hours on September 19th, 2021.
I scraped a bit of metadata about the subreddits as well.
`randomness_12k_clean.csv` reports the random subreddits as they happened and `summary.csv` lists some metadata about each subreddit.

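The scraper itself is not part of this dataset, so the sketch below is only my guess at how each row could have been collected: hit r/random without following the redirect, and read the subreddit name out of the `Location` header. All names (functions, the `User-Agent` string) are illustrative, not the author's.

```python
import re
import urllib.error
import urllib.request


def subreddit_from_location(location: str) -> str:
    """Extract the subreddit name from a redirect Location header,
    e.g. 'https://www.reddit.com/r/nsfwanimegifs/?utm_source=...'."""
    match = re.search(r"/r/([^/?]+)", location)
    if match is None:
        raise ValueError(f"no subreddit in {location!r}")
    return match.group(1)


class NoRedirect(urllib.request.HTTPRedirectHandler):
    # Returning None tells urllib not to follow the redirect,
    # so the 302 surfaces as an HTTPError we can inspect.
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None


def poll_random_once() -> tuple[str, int]:
    """One GET to r/random; returns (subreddit, response_code),
    the two fields recorded in randomness_12k_clean.csv."""
    req = urllib.request.Request(
        "https://www.reddit.com/r/random",
        headers={"User-Agent": "randomness-scraper-sketch"},  # illustrative
    )
    opener = urllib.request.build_opener(NoRedirect())
    try:
        opener.open(req)
    except urllib.error.HTTPError as err:
        if err.code == 302:
            return subreddit_from_location(err.headers["Location"]), err.code
        raise
    raise RuntimeError("expected a 302 redirect from r/random")
```

Not following the redirect is what makes `response_code` come out as `302` on every row: the subreddit is carried in the redirect's `Location` header rather than in a page body.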
# The Data

## `randomness_12k_clean.csv`

This file serves as a record of the 12,055 successful results I got from r/random.
Each row represents one result.

### Fields
* `subreddit`: The name of the subreddit that the scraper received from r/random (`string`)
* `response_code`: HTTP response code the scraper received when it sent a `GET` request to r/random (`int`, always `302`)

## `summary.csv`

As the name suggests, this file summarizes `randomness_12k_clean.csv` into the information that I cared about when I analyzed this data.
Each row represents one of the 3,679 unique subreddits and includes some stats about the subreddit as well as the number of times it appears in the results.

### Fields
* `subreddit`: The name of the subreddit (`string`, unique)
* `subscribers`: How many subscribers the subreddit had (`int`, max of `99_886`)
* `current_users`: How many users accessed the subreddit in the past 15 minutes (`int`, max of `999`)
* `creation_date`: Date that the subreddit was created (`YYYY-MM-DD` or `Error:PrivateSub` or `Error:Banned`)
* `date_accessed`: Date that I collected the values in `subscribers` and `current_users` (`YYYY-MM-DD`)
* `time_accessed_UTC`: Time that I collected the values in `subscribers` and `current_users`, reported in UTC+0 (`HH:MM:SS`)
* `appearances`: How many times the subreddit shows up in `randomness_12k_clean.csv` (`int`, max of `9`)

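The `appearances` column is just a per-subreddit tally of the raw results, so it can be recomputed directly from `randomness_12k_clean.csv`. A minimal sketch with Python's standard library (the sample rows below are made up for illustration; the real file has 12,055 rows):

```python
import csv
import io
from collections import Counter

# Stand-in for open("randomness_12k_clean.csv") -- rows are illustrative.
raw = io.StringIO(
    "subreddit,response_code\n"
    "nsfwanimegifs,302\n"
    "AskHistorians,302\n"
    "nsfwanimegifs,302\n"
)

# One count per unique subreddit, matching summary.csv's `appearances`.
appearances = Counter(row["subreddit"] for row in csv.DictReader(raw))
print(appearances["nsfwanimegifs"])  # 2
```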
# Missing Values and Quirks

In the `summary.csv` file, there are three missing values.
After I collected the number of subscribers and the number of current users, I went back about a week later to collect the creation date of each subreddit.
In that week, three subreddits had been banned or taken private, so I filled in their `creation_date` values with a descriptive string:

* SomethingWasWrong (`Error:PrivateSub`)
* HannahowoOnlyfans (`Error:Banned`)
* JanetGuzman (`Error:Banned`)

I think there are a few NSFW subreddits in the results, even though I only queried r/random and not r/randnsfw.
As a simple example, searching the data for "nsfw" shows that I got the subreddit r/nsfwanimegifs twice.

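That "search for nsfw" check is just a case-insensitive substring filter over the subreddit names (the names here are a made-up sample, not drawn from the data):

```python
# Stand-in for the subreddit column of summary.csv.
names = ["nsfwanimegifs", "AskHistorians", "NSFW_Example"]

# Case-insensitive substring match, so NSFW/nsfw both count.
hits = [name for name in names if "nsfw" in name.lower()]
print(hits)  # ['nsfwanimegifs', 'NSFW_Example']
```

Note this only finds subreddits with "nsfw" in the name; NSFW subreddits without it would need the `over18` flag from Reddit's metadata instead, which this dataset does not include.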
# License

This dataset is made available under the Open Database License: http://opendatacommons.org/licenses/odbl/1.0/. Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/.