---
license: other
license_name: bright-data-master-service-agreement
license_link: https://brightdata.com/license
language:
  - en
task_categories:
  - text-classification
  - text-generation
  - text2text-generation
  - summarization
  - question-answering
tags:
  - wikipedia
  - text
  - NLP
  - ML
  - AI
  - Knowledge Extraction
  - Natural Language Processing
  - Information Retrieval
  - LLM
---


# Dataset Card for "BrightData/Wikipedia-Articles"

If you are using this dataset, we would love your feedback: Link to form.

## Dataset Summary

Explore a large collection of Wikipedia articles with the Wikipedia dataset, comprising over 1.23M structured records across 10 data fields, updated and refreshed regularly.

Each entry includes all major data points, including timestamps, URLs, article titles, raw and cataloged text, images, "see also" references, external links, and a structured table of contents.

For a complete list of data points, please refer to the full "Data Dictionary" provided below.

To explore additional free and premium datasets, visit our website brightdata.com.
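
As a quick start, a card like this is typically paired with the Hugging Face `datasets` library. The snippet below is a minimal sketch; it assumes the dataset loads directly from this repository id with a default configuration and a `train` split.

```python
from datasets import load_dataset

# Load the dataset from the Hugging Face Hub.
# Assumption: the repo exposes a default configuration with a "train" split.
ds = load_dataset("BrightData/Wikipedia-Articles", split="train")

# Inspect the available fields and one sample record.
print(ds.column_names)
sample = ds[0]
print(sample["title"], "->", sample["url"])
```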

## Data Dictionary

| Column name | Description | Data type |
| --- | --- | --- |
| timestamp | Date the article snapshot was collected | Date |
| url | URL of the article | Url |
| title | Title of the article | Text |
| table_of_contents | Table of contents of the article | Array |
| raw_text | Raw article text | Text |
| cataloged_text | Cataloged text of the article by titles | Array |
| > title | Title of a cataloged section | Text |
| > sub_title | Subtitle within a cataloged section | Text |
| > text | Text content within a cataloged section | Text |
| > links_in_text | Links within the text content | Array |
| >> link_name | Name or description of the link | Text |
| >> url | URL of the link | Url |
| images | Links to the URLs of images in the article | Array |
| > image_text | Text description under an image | Text |
| > image_url | URL of the image | Url |
| see_also | Other recommended articles | Array |
| > title | Recommended article title | Text |
| > url | URL of the recommended article | Url |
| references | References in the article | Array |
| > reference | Reference in the article | Text |
| >> urls | URLs referenced within the article | Array |
| >>> url_text | Text description of the referenced URL | Text |
| >>> url | URL of the referenced article or source | Url |
| external_links | External links referenced in the article | Array |
| > external_links_name | Name or description of the external link | Text |
| > link | External link URL | Url |
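
To make the nesting above concrete, here is a minimal sketch that walks one record's `cataloged_text` sections and collects the in-text links. Field names follow the data dictionary; `record` is assumed to be a single entry already parsed into a Python dict.

```python
def collect_links(record: dict) -> list[dict]:
    """Gather every in-text link from a record's cataloged sections."""
    links = []
    for section in record.get("cataloged_text") or []:
        for link in section.get("links_in_text") or []:
            links.append({
                "section": section.get("title"),     # title of the cataloged section
                "link_name": link.get("link_name"),  # name or description of the link
                "url": link.get("url"),              # target URL
            })
    return links
```

The same pattern applies to `images`, `see_also`, `references`, and `external_links`, which are all arrays of objects.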

## Dataset Creation

### Data Collection and Processing

The data collection process involved extracting information directly from Wikipedia, ensuring comprehensive coverage of the required attributes. Once collected, the data underwent several stages of processing:

- **Parsing:** Extracted raw data was parsed to convert it into a structured format.
- **Cleaning:** Irrelevant or erroneous entries were removed to enhance data quality (a minimal sketch of this step follows the list).
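
The actual pipeline is internal to Bright Data; the sketch below only illustrates the kind of cleaning step described above, and the choice of mandatory fields is an assumption based on the data dictionary.

```python
REQUIRED_FIELDS = ("url", "title", "raw_text")  # assumed mandatory fields

def clean(records):
    """Yield only records whose mandatory fields are present and non-empty."""
    for record in records:
        if all(record.get(field) for field in REQUIRED_FIELDS):
            yield record
```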

### Validation

To ensure data integrity, a validation process was implemented. Each entry was checked across various attributes, including the following (an illustrative sketch of these checks appears after the list):

- **Uniqueness:** Each record was checked to ensure it was unique, eliminating any duplicates.
- **Completeness:** The dataset was examined to confirm that all necessary fields were populated, with missing data addressed appropriately.
- **Consistency:** Cross-validation checks were conducted to ensure consistency across attributes, including comparison with historical records.
- **Data Types Verification:** All data types were verified to be correctly assigned and consistent with expected formats.
- **Fill Rates and Duplicate Checks:** Comprehensive checks verified fill rates, ensuring no significant gaps in the data, and rigorously screened for duplicates, so that the dataset meets the quality standards necessary for analysis, research, and modeling.
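
The validation code itself is not published; the following is a minimal illustrative sketch of the uniqueness, fill-rate, and type checks listed above, assuming the records have been loaded into a pandas `DataFrame`.

```python
import pandas as pd

def validate(df: pd.DataFrame) -> dict:
    """Run illustrative integrity checks mirroring the list above."""
    return {
        # Uniqueness: each article URL should appear exactly once.
        "duplicate_urls": int(df["url"].duplicated().sum()),
        # Completeness / fill rates: share of non-null values per column.
        "fill_rates": df.notna().mean().round(3).to_dict(),
        # Data types: every URL should be a string starting with "http".
        "malformed_urls": int((~df["url"].fillna("").str.startswith("http")).sum()),
    }
```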

## Example JSON

The record below is illustrative of the shape of a single entry; values are hypothetical and long fields are truncated for brevity.

```json
[
  {
    "timestamp": "2024-05-09",
    "url": "https://en.wikipedia.org/wiki/Soda_Springs,_Idaho",
    "title": "Soda Springs, Idaho",
    "table_of_contents": ["History", "Geography", "References"],
    "raw_text": "Soda Springs is a city in Caribou County, Idaho ...",
    "cataloged_text": [
      {
        "title": "History",
        "sub_title": null,
        "text": "...",
        "links_in_text": [
          { "link_name": "Idaho", "url": "https://en.wikipedia.org/wiki/Idaho" }
        ]
      }
    ],
    "images": [
      { "image_text": "...", "image_url": "https://..." }
    ],
    "see_also": [],
    "references": [],
    "external_links": []
  }
]
```