---
license: apache-2.0
task_categories:
  - text-classification
  - table-question-answering
  - summarization
language:
  - en
tags:
  - news
  - articles
  - year:1987
pretty_name: reuters
size_categories:
  - 10K<n<100K
---

# Reuters News Articles

An open-source dataset designed for information retrieval and natural language processing tasks.

## Abstract

This dataset is a processed version of the Reuters-21578 dataset:

> Reuters-21578 text categorization test collection
> Distribution 1.0 (v 1.2), 26 September 1997
> David D. Lewis, AT&T Labs - Research
> lewis@research.att.com

## Profile

The dataset was processed as part of our work on the reuters-search-engine project; the processing was my primary responsibility.

- **Size:** 18,594 articles.
- **Fields:** 16 fields.
- **Tasks:** information retrieval, natural language processing.

Table 1: Description of each field in the dataset.

- **New** indicates that this feature was generated through data mining and feature engineering.
- **Updated** indicates that this feature already existed but was refined, cleaned, and transformed to improve data quality.
| Column / Property | Description | New | Updated |
|-------------------|-------------|-----|---------|
| `date` | Timestamp indicating when the Reuters article was published. Stored as `datetime64[ns]`. | False | True |
| `topics` | List of topical tags or subject categories assigned to the article (e.g., economic themes, commodities, industries). | False | True |
| `places` | List of geographic place tags related to the article's content, often countries or regions mentioned or relevant to the story. | False | True |
| `people` | List of individuals referenced in the article, usually named stakeholders, analysts, officials, or quoted experts. | False | True |
| `organizations` | List of organizations mentioned in the text, such as agencies, companies, government bodies, or institutions. | False | True |
| `exchanges` | List of financial exchanges or markets referenced in the article (often empty when not applicable). | False | False |
| `title` | The headline of the news article as published by Reuters. | False | True |
| `text` | Full body text of the article, containing the narrative, quotes, and analysis. | False | True |
| `keyword` | A high-level category label (e.g., "business-news") representing the main topic class of the article. | True | False |
| `domain` | A broader thematic domain such as "food-and-drink," grouping articles into content verticals. | True | False |
| `type` | The type or format of the entry (e.g., "news"). Indicates the nature of the document. | True | False |
| `quality` | Represents the structural and grammatical quality of the article's text body, i.e., how well-formed or clean the text in the `text` field is. | True | False |
| `geographic_information` | Structured dictionary containing geolocation metadata extracted for the article (e.g., city, state, country, coordinates, ISO codes). | True | False |
| `identities` | List of identity-related descriptors, typically nationalities, religious groups, or political affiliations referenced in the text. | True | False |
| `title_embedding` | Embedding vector generated from the news title, reflecting the semantic meaning of its core intent. | True | False |
| `text_embedding` | Embedding vector generated from the news body text, reflecting the semantic meaning of its content. | True | False |

## Methodology

This work follows an Extract–Transform–Load (ETL) pipeline to produce a structured and retrieval-ready news dataset. All processing steps were implemented in Python and executed with A100 GPU acceleration.

- **Data Preprocessing**

  The raw dataset was cleaned using Pandas and BeautifulSoup. Non-essential metadata fields were removed, duplicate articles were eliminated based on body text, and records with missing content were discarded. Text fields were normalized by stripping whitespace, capitalizing sentences, and removing HTML artifacts. Publication dates were parsed and standardized to ISO 8601 format.
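The preprocessing step can be sketched as follows. This is a minimal, dependency-light illustration: a regex stands in for BeautifulSoup's HTML stripping, and the toy records and column names are assumptions, not the real schema of the raw files.

```python
import pandas as pd

# Toy records standing in for the raw Reuters articles (illustrative fields).
raw = pd.DataFrame({
    "text": ["  <p>oil prices rose</p> ", "  <p>oil prices rose</p> ", None],
    "date": ["26-FEB-1987 15:01:01.79", "26-FEB-1987 15:01:01.79", "2-MAR-1987 08:00:00.00"],
})

# Drop records with missing content, then deduplicate on the body text.
clean = raw.dropna(subset=["text"]).copy()
clean["text"] = (
    clean["text"]
    .str.replace(r"<[^>]+>", "", regex=True)  # strip HTML artifacts
    .str.strip()                              # normalize whitespace
    .str.capitalize()                         # capitalize sentences
)
clean = clean.drop_duplicates(subset="text")

# Parse publication dates and standardize to ISO 8601.
clean["date"] = pd.to_datetime(clean["date"], format="%d-%b-%Y %H:%M:%S.%f")
clean["date_iso"] = clean["date"].dt.strftime("%Y-%m-%dT%H:%M:%S")

print(clean[["text", "date_iso"]])
```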

- **Semantic Clustering**

  Document-level semantic representations were generated using the Sentence-Transformers library with the thenlper/gte-base model. The resulting embeddings were clustered using K-means from scikit-learn into seven predefined topical groups, each mapped to a high-level news category.
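A sketch of the clustering step, with random vectors standing in for the thenlper/gte-base embeddings (so it runs without downloading the model) and a purely hypothetical cluster-to-category mapping:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for thenlper/gte-base sentence embeddings (768-dimensional);
# random here so the sketch runs offline.
embeddings = rng.normal(size=(100, 768))

# Seven predefined topical groups, as in the pipeline description.
kmeans = KMeans(n_clusters=7, n_init=10, random_state=0)
labels = kmeans.fit_predict(embeddings)

# Hypothetical mapping from cluster id to a high-level news category.
CATEGORIES = ["business-news", "commodities", "markets", "politics",
              "energy", "agriculture", "general"]
keywords = [CATEGORIES[c] for c in labels]
print(keywords[:5])
```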

- **Automated Classification**

  Domain, content type, and quality labels were inferred using NVIDIA NeMo Curator classifiers. Distributed processing was performed via Ray, and data exchange was handled using JSONL readers and writers. To improve memory efficiency, the dataset was split into partitions and processed sequentially.
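The NeMo Curator classifier calls themselves are not reproduced here; the sketch below shows only the surrounding partition-and-JSONL pattern, with a hypothetical `classify` stub standing in for the real domain/type/quality classifiers:

```python
import json
import tempfile
from pathlib import Path

def classify(record):
    # Hypothetical stand-in for the NeMo Curator classifiers.
    record["type"] = "news"
    return record

records = [{"title": f"article {i}", "text": "..."} for i in range(10)]
workdir = Path(tempfile.mkdtemp())

# Split into partitions and process them sequentially for memory efficiency.
PARTITION_SIZE = 4
outputs = []
for p, start in enumerate(range(0, len(records), PARTITION_SIZE)):
    part = workdir / f"part-{p}.jsonl"
    # Write one JSON object per line (JSONL).
    part.write_text("\n".join(json.dumps(r) for r in records[start:start + PARTITION_SIZE]))
    # Read the partition back, classify, and collect the results.
    for line in part.read_text().splitlines():
        outputs.append(classify(json.loads(line)))

print(len(outputs), outputs[0]["type"])
```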

- **Named Entity and Geographic Enrichment**

  Named entities were extracted from article text and datelines using spaCy (en_core_web_lg). Entities were normalized and categorized into organizations, people, places, and identities. Dateline locations were resolved into structured geographic metadata, including latitude and longitude, using Geopy with the Nominatim geocoder.
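The exact label-to-field mapping is not documented above, so the sketch below assumes a straightforward mapping from spaCy's standard entity labels (ORG, PERSON, GPE/LOC, NORP) to the dataset's fields, operating on (text, label) pairs as en_core_web_lg would produce them:

```python
# Assumed mapping from spaCy entity labels to the dataset's fields.
LABEL_TO_FIELD = {
    "ORG": "organizations",
    "PERSON": "people",
    "GPE": "places",
    "LOC": "places",
    "NORP": "identities",  # nationalities, religious and political groups
}

def bucket_entities(entities):
    """Group (text, label) pairs, as produced by en_core_web_lg, into fields."""
    fields = {"organizations": [], "people": [], "places": [], "identities": []}
    for text, label in entities:
        field = LABEL_TO_FIELD.get(label)
        if field and text not in fields[field]:  # normalize by deduplicating
            fields[field].append(text)
    return fields

sample = [("Reuters", "ORG"), ("David Lewis", "PERSON"),
          ("Beverly Hills", "GPE"), ("American", "NORP"), ("Reuters", "ORG")]
print(bucket_entities(sample))
```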

- **Dense Vector Encoding**

  Dense embeddings for article titles and full text were generated using the Sentence-Transformers framework with the google/embeddinggemma-300m model. These embeddings enable semantic similarity search and neural information retrieval.
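Once the embeddings exist, semantic similarity search reduces to cosine similarity over the stored vectors. The sketch below uses random stand-ins for the `text_embedding` vectors (assumed 768-dimensional, matching EmbeddingGemma's output size) so that it runs offline:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-ins for precomputed text_embedding vectors; random so the sketch
# runs without downloading google/embeddinggemma-300m.
doc_embeddings = rng.normal(size=(50, 768))
query = doc_embeddings[7] + 0.01 * rng.normal(size=768)  # near-duplicate of doc 7

def top_k(query, docs, k=3):
    """Rank documents by cosine similarity to the query vector."""
    docs_n = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    query_n = query / np.linalg.norm(query)
    scores = docs_n @ query_n
    return np.argsort(scores)[::-1][:k]

print(top_k(query, doc_embeddings))
```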

- **Postprocessing**

  Classifier outputs were normalized and renamed, missing dates were backfilled from datelines, and intermediate fields were removed. Documents lacking geographic information were filtered out.
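A simplified sketch of the postprocessing filters, with toy records and a deliberately naive dateline backfill (the real pipeline resolved full dates):

```python
records = [
    {"title": "A", "date": None, "dateline": "LONDON, March 2 -",
     "geographic_information": {"city": "London"}},
    {"title": "B", "date": "1987-02-26", "dateline": "",
     "geographic_information": None},
]

def postprocess(record):
    # Backfill a missing date from the dateline (parsing is simplified here).
    if record["date"] is None and record["dateline"]:
        record["date"] = record["dateline"].rstrip(" -")
    return record

# Filter out documents lacking geographic information.
final = [postprocess(r) for r in records if r["geographic_information"]]
print(final)
```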

## Usage

```python
from datasets import load_dataset

reuters = load_dataset("IsmaelMousa/reuters", split="train")

print(reuters["text"][0])
```

Output:

```
Plm cos inc said its plm power co
unit broke off merger discussions with sunlaw energy corp of
beverly hills, calif.
    in january plm power entered into a letter of intent to
negotiate a potential acquisition of sunlaw, subject to
substantial due diligence, the company said.
    but it also said the two companies were not able to agree
on mutually satisfactory final terms and conditions.
```

## License

The rights to the Reuters articles are reserved by their respective authors. This dataset is provided under the Apache 2.0 license for both personal and commercial use, with proper attribution.