---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license: mit
multilinguality:
- monolingual
pretty_name: Medium Articles Dataset
size_categories:
- n>1K
source_datasets:
- original
tags:
- medium
- articles
- blog-posts
task_categories:
- text-classification
- text-generation
task_ids:
- topic-classification
- language-modeling
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audioVersionDurationSec
dtype: float64
- name: codeBlock
dtype: string
- name: codeBlockCount
dtype: float64
- name: collectionId
dtype: string
- name: createdDate
dtype: string
- name: createdDatetime
dtype: string
- name: firstPublishedDate
dtype: string
- name: firstPublishedDatetime
dtype: string
- name: imageCount
dtype: float64
- name: isSubscriptionLocked
dtype: bool
- name: language
dtype: string
- name: latestPublishedDate
dtype: string
- name: latestPublishedDatetime
dtype: string
- name: linksCount
dtype: float64
- name: postId
dtype: string
- name: readingTime
dtype: float64
- name: recommends
dtype: float64
- name: responsesCreatedCount
dtype: float64
- name: socialRecommendsCount
dtype: float64
- name: subTitle
dtype: string
- name: tagsCount
dtype: float64
- name: text
dtype: string
- name: title
dtype: string
- name: totalClapCount
dtype: float64
- name: uniqueSlug
dtype: string
- name: updatedDate
dtype: string
- name: updatedDatetime
dtype: string
- name: url
dtype: string
- name: vote
dtype: bool
- name: wordCount
dtype: float64
- name: publicationdescription
dtype: string
- name: publicationdomain
dtype: string
- name: publicationfacebookPageName
dtype: string
- name: publicationfollowerCount
dtype: float64
- name: publicationname
dtype: string
- name: publicationpublicEmail
dtype: string
- name: publicationslug
dtype: string
- name: publicationtags
dtype: string
- name: publicationtwitterUsername
dtype: string
- name: tag_name
dtype: string
- name: slug
dtype: string
- name: name
dtype: string
- name: postCount
dtype: float64
- name: author
dtype: string
- name: bio
dtype: string
- name: userId
dtype: string
- name: userName
dtype: string
- name: usersFollowedByCount
dtype: float64
- name: usersFollowedCount
dtype: float64
- name: scrappedDate
dtype: float64
- name: claps
dtype: string
- name: reading_time
dtype: float64
- name: link
dtype: string
- name: authors
dtype: string
- name: timestamp
dtype: string
- name: tags
dtype: string
splits:
- name: train
num_bytes: 2654611084
num_examples: 444593
download_size: 1482558340
dataset_size: 2654611084
---
# Medium Articles Dataset Generator
This project combines multiple datasets from Kaggle and Hugging Face to create a comprehensive collection of Medium articles. The combined dataset is available on [Hugging Face Hub](https://huggingface.co/datasets/Alaamer/medium-articles-posts-with-content).
## Dataset Description
This dataset combines multiple sources and enforces data quality through normalization and deduplication. A key guarantee is that every entry in the `text` column is unique: no article appears twice in the final dataset.
### Data Sources:
#### Kaggle Sources:
- aiswaryaramachandran/medium-articles-with-content
- hsankesara/medium-articles
- meruvulikith/1300-towards-datascience-medium-articles-dataset
#### Hugging Face Sources:
- fabiochiu/medium-articles
- Falah/medium_articles_posts
## Features
- Combines multiple data sources into a single, unified dataset
- **Ensures uniqueness**: Each article appears only once in the dataset
- **Quality control**:
- Removes duplicate entries based on article text
- Handles missing values
- Normalizes data format
- Saves the final dataset in efficient Parquet format
- Publishes the dataset to Hugging Face Hub
## Requirements
```bash
pip install datasets kagglehub huggingface_hub tqdm
```
## Usage
1. Authenticate with the Hugging Face Hub (e.g. via `huggingface-cli login`)
2. Run the script:
```bash
python combined_medium_ds_generator.py
```
## Data Processing Steps
1. Downloads datasets from Kaggle and Hugging Face
2. Normalizes each dataset by:
- Removing null values
- Eliminating duplicates
- Standardizing column names
3. Combines all datasets into a single DataFrame
4. Saves the result as a Parquet file
5. Uploads the final dataset to Hugging Face Hub
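The normalization and combination steps above can be sketched with pandas. This is a minimal illustration, not the script's actual implementation; the function names and the toy DataFrames are hypothetical:

```python
import pandas as pd

def normalize(df: pd.DataFrame) -> pd.DataFrame:
    """Standardize column names, drop null article bodies, dedupe by text."""
    df = df.rename(columns=str.strip)           # trim stray whitespace in headers
    df = df.dropna(subset=["text"])             # remove rows with missing text
    return df.drop_duplicates(subset=["text"])  # one row per unique article

def combine(frames: list[pd.DataFrame]) -> pd.DataFrame:
    """Concatenate normalized frames and dedupe again across sources."""
    combined = pd.concat([normalize(f) for f in frames], ignore_index=True)
    return combined.drop_duplicates(subset=["text"]).reset_index(drop=True)

if __name__ == "__main__":
    # Toy stand-ins for the downloaded Kaggle / Hugging Face frames
    a = pd.DataFrame({"text": ["hello", "world", None]})
    b = pd.DataFrame({"text": ["world", "again"]})
    result = combine([a, b])
    result.to_parquet("combined.parquet")  # step 4: efficient columnar output
```

Deduplicating twice (per source, then across the combined frame) is what guarantees the uniqueness of the `text` column in the final dataset.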
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Author
- [@Alaamer](https://huggingface.co/Alaamer)
## Acknowledgments
Special thanks to the original dataset creators:
- aiswaryaramachandran
- hsankesara
- meruvulikith
- fabiochiu
- Falah