
PatClass2011 Dataset

CLEFIP-2011

Dataset Summary

The PatClass2011 dataset is a comprehensive collection of approximately 719,000 patent documents from the CLEF-IP 2011 Test Collection, focusing on patent classification tasks. Each entry encompasses detailed metadata and textual content, including titles, abstracts, descriptions, and claims. The dataset is structured to facilitate research in patent classification, information retrieval, and natural language processing.

Languages

The dataset contains English, French and German text.

Domain

Patents (intellectual property).

Dataset Curators

The dataset was created by Eleni Kamateri and Tasos Mylonidis.

Dataset Structure

The dataset is organized into 28 folders, one per publication year, covering 1978 through 2005. Each yearly subdirectory contains a CSV file named clefip2011_en_classification_<year>_validated.csv, holding all patent documents published in that year. This structure facilitates year-wise analysis, allowing researchers to study trends and patterns in patent classifications over time. Each CSV provides 19 data fields.
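
As a minimal sketch of this layout (assuming the validated-CSV naming also used by the loading script further down), the per-year file paths inside the repository can be enumerated like so:

```python
def yearly_csv_paths(start=1978, end=2005):
    """Return the expected repository path of each yearly CSV file."""
    return [
        f"data/years/{year}/clefip2011_en_classification_{year}_validated.csv"
        for year in range(start, end + 1)
    ]

# 28 paths, one per year from 1978 to 2005
paths = yearly_csv_paths()
```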

Data Fields

The dataset is provided in CSV format and includes the following fields:

  • ucid: Unique identifier for the patent document.
  • patent_number: Patent document number.
  • country: Country code of the patent.
  • kind: Kind code indicating the type of patent document.
  • lang: Language of the patent document.
  • date: Publication date of the patent.
  • application_date: Date when the patent application was filed.
  • date_produced: Date when the data was inserted in the dataset.
  • status: Status of the patent document.
  • main_code: Primary classification code assigned to the patent.
  • further_codes: Additional classification codes.
  • ipcr_codes: International Patent Classification codes.
  • ecla_codes: European Classification codes.
  • title: Title of the patent document.
  • abstract: Abstract summarizing the patent.
  • description: Detailed description of the patent.
  • claims: Claims defining the scope of the patent protection.
  • applicants: Entities or individuals who applied for the patent.
  • inventors: Inventors credited in the patent document.
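
The three date fields are stored as integers in YYYYMMDD form (the loading script below filters on them numerically). For time-based analysis it can be convenient to convert them to proper datetimes; a small sketch, assuming that integer encoding:

```python
import pandas as pd

def to_datetime_cols(df, cols=("date", "application_date", "date_produced")):
    """Convert integer YYYYMMDD columns (e.g. 19850301) to pandas datetimes."""
    out = df.copy()
    for col in cols:
        if col in out.columns:
            # Nullable Int64 keeps missing values intact; coerce turns them into NaT
            out[col] = pd.to_datetime(
                out[col].astype("Int64").astype(str),
                format="%Y%m%d",
                errors="coerce",
            )
    return out
```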

Usage

Loading the Dataset

Sample (March to April 1985)

The following script loads a sample of the dataset containing all patent applications published between March and April 1985, either with all columns or with a selected subset of columns.


from datasets import load_dataset
import pandas as pd
from datetime import datetime
import gc

def load_csvs_from_huggingface(start_date, end_date, columns_to_keep=None):
    """
    Load only the necessary CSV files from a Hugging Face dataset repository.

    :param start_date: str, the start date in 'YYYY-MM-DD' format (inclusive)
    :param end_date: str, the end date in 'YYYY-MM-DD' format (inclusive)
    :param columns_to_keep: list of str, optional. Specific columns to load (e.g., ["date", "lang"]).
    :return: pd.DataFrame, combined data from selected CSVs
    """

    huggingface_dataset_name = "amylonidis/PatClass2011"

    column_types = {
        "ucid": "string[pyarrow]",
        "country": "category",
        "kind": "category",
        "lang": "category",
        "date": "Int32",
        "application_date": "Int32",
        "date_produced": "Int32",
        "status": "category",
        "main_code": "string[pyarrow]",
        "further_codes": "string[pyarrow]",
        "ipcr_codes": "string[pyarrow]",
        "ecla_codes": "string[pyarrow]",
        "title": "string[pyarrow]",
        "abstract": "string[pyarrow]",
        "description": "string[pyarrow]",
        "claims": "string[pyarrow]",
        "applicants": "string[pyarrow]",
        "inventors": "string[pyarrow]",
        "patent_number": "Int64",
    }

    dataset_years = [str(year) for year in range(1978, 2006)]

    start_date_int = int(datetime.strptime(start_date, "%Y-%m-%d").strftime("%Y%m%d"))
    end_date_int = int(datetime.strptime(end_date, "%Y-%m-%d").strftime("%Y%m%d"))

    start_year, end_year = str(start_date_int)[:4], str(end_date_int)[:4]
    given_years = [str(year) for year in range(int(start_year), int(end_year) + 1)]
    matching_years = [f for f in dataset_years if f in given_years]

    if not matching_years:
        raise ValueError(f"No matching CSV files found for {start_date} to {end_date}")

    df_list = []
    
    for year in matching_years:
        filepath = f"data/years/{year}/clefip2011_en_classification_{year}_validated.csv"

        try:
            # 1. Load the dataset (This stays on disk as an Arrow memory-map, NOT in RAM)
            dataset = load_dataset(
                huggingface_dataset_name, 
                data_files=filepath, 
                split="train",
                sep=";",
                on_bad_lines="skip"
            )
            
            # Select Columns Before Chunking
            if columns_to_keep is not None:
                # Safety check: Only select columns that actually exist in this specific file
                valid_columns = [col for col in columns_to_keep if col in dataset.column_names]
                if valid_columns:
                    dataset = dataset.select_columns(valid_columns)

            # 2. Tell HuggingFace to output Pandas dataframes when sliced
            dataset = dataset.with_format("pandas")
            
            # 3. CHUNKING: Process exactly 10,000 rows at a time
            chunk_size = 10000
            for i in range(0, len(dataset), chunk_size):
                # Only these 10,000 rows are loaded into RAM
                df_chunk = dataset[i : i + chunk_size]
                df_chunk.columns = df_chunk.columns.str.strip()
                
                # Filter this specific chunk
                if "date" in df_chunk.columns:
                    temp_dates = pd.to_numeric(df_chunk["date"], errors="coerce")
                    mask = (temp_dates >= start_date_int) & (temp_dates <= end_date_int)
                    df_filtered = df_chunk[mask].copy()
                else:
                    df_filtered = df_chunk.copy()
                
                # Apply types and append
                if not df_filtered.empty:
                    valid_column_types = {col: dtype for col, dtype in column_types.items() if col in df_filtered.columns}
                    df_filtered = df_filtered.astype(valid_column_types)
                    df_list.append(df_filtered)
                
                # Clear chunk from memory immediately
                del df_chunk, df_filtered
                gc.collect()

            # Clear the dataset object from memory before the next year
            del dataset
            gc.collect()

        except Exception as e:
            print(f"Error processing {filepath}: {e}")


    if not df_list:
        return pd.DataFrame()
    
    # Combine chunks
    final_df = pd.concat(df_list, ignore_index=True)
    
    # Destroy the list
    del df_list
    gc.collect()
    
    return final_df

Load All Columns


start_date = "1985-03-01"
end_date = "1985-04-30"

df = load_csvs_from_huggingface(start_date, end_date)

Load Selected Columns


columns_to_keep = [
    "applicants",
    "lang"
]

start_date = "1985-03-01"
end_date = "1985-04-30"

df = load_csvs_from_huggingface(start_date, end_date, columns_to_keep)
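
Once a frame is loaded, the integer YYYYMMDD date column lends itself to simple aggregations. As an illustrative sketch (assuming the "date" column was kept), publications can be counted per month by truncating the integer to YYYYMM:

```python
import pandas as pd

def monthly_counts(df):
    """Count rows per publication month from an integer YYYYMMDD `date` column."""
    months = df["date"] // 100  # 19850301 -> 198503
    return months.value_counts().sort_index()
```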

Google Colab Analytics

You can also use the following Google Colab notebooks to explore the analytics performed on the dataset.

Dataset Creation

Source Data

The PatClass2011 dataset aggregates patent documents from the CLEF-IP 2011 Test Collection via a parsing script. The data includes both metadata and full-text fields, facilitating a wide range of research applications.

Annotations

The dataset does not contain any human-written or computer-generated annotations beyond the metadata already present in the source patent documents.

Licensing Information

This dataset is distributed under the MIT License. Users are free to use, modify, and distribute the dataset, provided that the original authors are credited.

Citation

If you utilize the PatClass2011 dataset in your research or applications, please cite it appropriately.

