---
language:
- en
license: mit
task_categories:
- text-generation
- question-answering
tags:
- reddit
- education
- ipmat
- entrance-exam
- india
size_categories:
- 1K<n<10K
---

# r/IPMATtards Reddit Dataset

## Dataset Description

This dataset contains scraped posts and comments from the **r/IPMATtards** subreddit, a community dedicated to aspirants of the **Integrated Programme in Management Aptitude Test (IPMAT)** in India.

The data is structured into two main components:

1. **Posts**: Top-level submissions, including titles, body text, scores, and metadata.
2. **Comments**: Threaded replies associated with the posts, including nesting depth and parent-child relationships.

## Dataset Structure

The dataset is provided in **Parquet** format, optimized for the Hugging Face Hub.

### 1. Posts (`posts.parquet`)

| Feature | Type | Description |
|:--------|:-----|:------------|
| `id` | string | Unique Reddit ID for the post (e.g., `t3_xyz`) |
| `title` | string | Title of the post |
| `author` | string | Reddit username of the author |
| `body` | string | Main text content of the post |
| `score` | int64 | Net upvotes (karma) at the time of scraping |
| `comments_count` | int64 | Number of comments on the post |
| `created_utc` | string | Creation timestamp (UTC) |
| `url` | string | Direct URL to the Reddit post |
| `crawled_at` | string | Timestamp when the data was scraped |

### 2. Comments (`comments.parquet`)

| Feature | Type | Description |
|:--------|:-----|:------------|
| `id` | string | Unique Reddit ID for the comment (e.g., `t1_abc`) |
| `post_id` | string | ID of the parent post (`t3_...`) |
| `parent_id` | string | ID of the parent comment or post this comment replies to |
| `author` | string | Reddit username of the author |
| `body` | string | Text content of the comment |
| `score` | int64 | Net upvotes |
| `depth` | int64 | Nesting level (0 = top-level comment) |
| `path` | string | Materialized path for reconstructing the comment tree |
| `created_utc` | string | Creation timestamp (UTC) |
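
Because each comment carries a `parent_id` and `depth`, threads can be reconstructed without touching the `path` column. A minimal sketch with toy rows (the ids and bodies below are hypothetical, shaped like `comments.parquet` records):

```python
from collections import defaultdict

# Hypothetical toy rows shaped like comments.parquet records.
comments = [
    {"id": "t1_a", "post_id": "t3_x", "parent_id": "t3_x", "depth": 0, "body": "top-level"},
    {"id": "t1_b", "post_id": "t3_x", "parent_id": "t1_a", "depth": 1, "body": "reply"},
    {"id": "t1_c", "post_id": "t3_x", "parent_id": "t1_b", "depth": 2, "body": "nested reply"},
]

def build_tree(rows):
    """Group comments by parent_id so each thread can be walked top-down."""
    children = defaultdict(list)
    for row in rows:
        children[row["parent_id"]].append(row)
    return children

def walk(children, parent_id, indent=0):
    """Yield (indent, body) pairs in thread order, recursing into replies."""
    for row in children[parent_id]:
        yield indent, row["body"]
        yield from walk(children, row["id"], indent + 1)

children = build_tree(comments)
for indent, body in walk(children, "t3_x"):
    print("  " * indent + body)
```

Passing the post id (`t3_...`) as the starting `parent_id` yields the whole thread in display order.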

## Usage

You can load this dataset directly with the `datasets` library:

```python
from datasets import load_dataset

# Load posts
posts = load_dataset("parquet", data_files="parquet_export/posts.parquet", split="train")

# Load comments
comments = load_dataset("parquet", data_files="parquet_export/comments.parquet", split="train")

print(posts[0])
```
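
The two files can be joined on `post_id` to pair each comment with its post. A pure-Python sketch with hypothetical rows (with real data you would iterate over the loaded splits instead):

```python
# Hypothetical toy rows shaped like the two Parquet files.
posts = [{"id": "t3_x", "title": "Mock test strategy?"}]
comments = [
    {"id": "t1_a", "post_id": "t3_x", "body": "Practice daily."},
    {"id": "t1_b", "post_id": "t3_y", "body": "Orphaned comment."},  # no matching post
]

# Index posts by id, then attach each comment to its post's title.
posts_by_id = {p["id"]: p for p in posts}
joined = [
    {"post_title": posts_by_id[c["post_id"]]["title"], "comment": c["body"]}
    for c in comments
    if c["post_id"] in posts_by_id  # skip comments whose post was not scraped
]
print(joined)
```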

## Collection

This data was collected with a custom Python scraper built on `requests` and `BeautifulSoup`, targeting the old.reddit.com interface for efficiency.
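
The scraper itself is not included in this repository. As an illustration of the parsing step only, the sketch below extracts post titles from an old.reddit.com-style listing using just the standard library (the markup and class names are simplified assumptions; the actual scraper uses `BeautifulSoup`):

```python
from html.parser import HTMLParser

# A tiny stand-in for an old.reddit.com listing page (structure assumed
# for illustration; the real scraper parses live pages with BeautifulSoup).
PAGE = """
<div class="thing" data-fullname="t3_x">
  <a class="title" href="https://old.reddit.com/r/IPMATtards/comments/x/">Mock test strategy?</a>
</div>
"""

class TitleExtractor(HTMLParser):
    """Collect the text of <a class="title"> links, as a post-title pass would."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == "a" and ("class", "title") in attrs:
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.titles.append(data.strip())

parser = TitleExtractor()
parser.feed(PAGE)
print(parser.titles)
```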

## License

This dataset is distributed under the MIT License. Please respect Reddit's API rules and user privacy when using this data.