|
|
--- |
|
|
license: mit |
|
|
language: |
|
|
- en
|
|
tags: |
|
|
- legal |
|
|
- patents |
|
|
pretty_name: PatClass2011 |
|
|
size_categories: |
|
|
- 100K<n<1M
|
|
--- |
|
|
# PatClass2011 Dataset |
|
|
|
|
|
 |
|
|
|
|
|
## Dataset Summary |
|
|
|
|
|
The **PatClass2011** dataset is a collection of approximately 719,000 patent documents from the CLEF-IP 2011 Test Collection,
focusing on patent classification tasks. Each entry includes detailed metadata and textual content: titles, abstracts, descriptions, and claims.
|
|
The dataset is structured to facilitate research in patent classification, information retrieval, and natural language processing. |
|
|
|
|
|
## Languages |
|
|
|
|
|
The dataset contains English, French and German text. |
|
|
|
|
|
## Domain |
|
|
|
|
|
Patents (intellectual property). |
|
|
|
|
|
## Dataset Curators |
|
|
|
|
|
The dataset was created by Eleni Kamateri and Tasos Mylonidis.
|
|
|
|
|
## Dataset Structure |
|
|
|
|
|
The dataset consists of 28 folders, one per publication year, ranging from 1978 to 2005. Each yearly subdirectory contains a CSV file named
`clefip2011_en_classification_<year>.csv`, holding the data for all patents published that year.
This structure facilitates year-wise analysis, allowing researchers to study trends and patterns in patent classifications over time. Each CSV file contains 19 data fields.
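As a quick sketch, the yearly layout described above can be enumerated programmatically. The `data/years/<year>/` prefix mirrors the loading script later in this card; the exact filename (e.g. a `_validated` suffix, as used in that script) may differ per release.

```python
# Sketch: enumerate the 28 per-year CSV paths described above.
# The "data/years/<year>/" prefix follows the loading script below;
# the exact filename suffix may vary between releases.
years = range(1978, 2006)
paths = [
    f"data/years/{year}/clefip2011_en_classification_{year}.csv"
    for year in years
]
print(len(paths))  # 28 yearly files
```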
|
|
|
|
|
|
|
|
### Data Fields |
|
|
|
|
|
The dataset is provided in CSV format and includes the following fields:
|
|
|
|
|
- `ucid`: Unique identifier for the patent document. |
|
|
- `doc_number`: Patent document number. |
|
|
- `country`: Country code of the patent. |
|
|
- `kind`: Kind code indicating the type of patent document. |
|
|
- `lang`: Language of the patent document. |
|
|
- `date`: Publication date of the patent. |
|
|
- `application_date`: Date when the patent application was filed. |
|
|
- `date_produced`: Date when the data was inserted in the dataset. |
|
|
- `status`: Status of the patent document. |
|
|
- `main_code`: Primary classification code assigned to the patent. |
|
|
- `further_codes`: Additional classification codes. |
|
|
- `ipcr_codes`: International Patent Classification codes. |
|
|
- `ecla_codes`: European Classification codes. |
|
|
- `title`: Title of the patent document. |
|
|
- `abstract`: Abstract summarizing the patent. |
|
|
- `description`: Detailed description of the patent. |
|
|
- `claims`: Claims defining the scope of the patent protection. |
|
|
- `applicants`: Entities or individuals who applied for the patent. |
|
|
- `inventors`: Inventors credited in the patent document. |
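The date fields are stored as integers in `YYYYMMDD` form (as reflected by the `int32` dtypes in the loading script below). A minimal sketch for converting them back to `datetime` objects:

```python
from datetime import datetime

def parse_patent_date(yyyymmdd: int) -> datetime:
    """Convert an integer date field (e.g. 19850301) to a datetime."""
    return datetime.strptime(str(yyyymmdd), "%Y%m%d")

d = parse_patent_date(19850301)
print(d.year, d.month, d.day)  # 1985 3 1
```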
|
|
|
|
|
## Usage |
|
|
|
|
|
## Loading the Dataset |
|
|
|
|
|
### Sample (March to April 1985)
|
|
|
|
|
The following script loads a sample version of the dataset, containing all the patent applications
published from March through April 1985.
|
|
|
|
|
|
|
|
```python |
|
|
|
|
|
from datasets import load_dataset |
|
|
import pandas as pd |
|
|
from datetime import datetime |
|
|
import gc |
|
|
|
|
|
def load_csvs_from_huggingface(start_date, end_date): |
|
|
""" |
|
|
Load only the necessary CSV files from a Hugging Face dataset repository. |
|
|
|
|
|
:param start_date: str, the start date in 'YYYY-MM-DD' format (inclusive) |
|
|
:param end_date: str, the end date in 'YYYY-MM-DD' format (inclusive) |
|
|
|
|
|
:return: pd.DataFrame, combined data from selected CSVs |
|
|
""" |
|
|
|
|
|
huggingface_dataset_name = "amylonidis/PatClass2011" |
|
|
|
|
|
column_types = { |
|
|
"ucid": "string", |
|
|
"country": "category", |
|
|
"doc_number": "int64", |
|
|
"kind": "category", |
|
|
"lang": "category", |
|
|
"date": "int32", |
|
|
"application_date": "int32", |
|
|
"date_produced": "int32", |
|
|
"status": "category", |
|
|
"main_code": "string", |
|
|
"further_codes": "string", |
|
|
"ipcr_codes": "string", |
|
|
"ecla_codes": "string", |
|
|
"title": "string", |
|
|
"abstract": "string", |
|
|
"description": "string", |
|
|
"claims": "string", |
|
|
"applicants": "string", |
|
|
"inventors": "string", |
|
|
} |
|
|
|
|
|
    dataset_years = [str(year) for year in range(1978, 2006)]
|
|
|
|
|
start_date_int = int(datetime.strptime(start_date, "%Y-%m-%d").strftime("%Y%m%d")) |
|
|
end_date_int = int(datetime.strptime(end_date, "%Y-%m-%d").strftime("%Y%m%d")) |
|
|
|
|
|
start_year, end_year = str(start_date_int)[:4], str(end_date_int)[:4] |
|
|
given_years = [str(year) for year in range(int(start_year), int(end_year) + 1)] |
|
|
    matching_years = [year for year in dataset_years if year in given_years]
|
|
|
|
|
if not matching_years: |
|
|
raise ValueError(f"No matching CSV files found for {start_date} to {end_date}") |
|
|
|
|
|
df_list = [] |
|
|
for year in matching_years: |
|
|
filepath = f"data/years/{year}/clefip2011_en_classification_{year}_validated.csv" |
|
|
|
|
|
try: |
|
|
dataset = load_dataset(huggingface_dataset_name, data_files=filepath, split="train") |
|
|
df = dataset.to_pandas().astype(column_types) |
|
|
mask = (df["date"] >= start_date_int) & (df["date"] <= end_date_int) |
|
|
df_filtered = df[mask].copy() |
|
|
|
|
|
|
|
|
if not df_filtered.empty: |
|
|
df_list.append(df_filtered) |
|
|
|
|
|
del df, dataset, df_filtered, mask |
|
|
gc.collect() |
|
|
|
|
|
except Exception as e: |
|
|
print(f"Error processing {filepath}: {e}") |
|
|
|
|
|
return pd.concat(df_list, ignore_index=True) if df_list else pd.DataFrame() |
|
|
|
|
|
|
|
|
``` |
|
|
|
|
|
```python |
|
|
|
|
|
start_date = "1985-03-01" |
|
|
end_date = "1985-04-30" |
|
|
|
|
|
df = load_csvs_from_huggingface(start_date, end_date) |
|
|
|
|
|
|
|
|
``` |
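The date filter inside `load_csvs_from_huggingface` works with plain integer comparisons because dates are encoded as `YYYYMMDD` integers. A self-contained sketch of the same boolean mask, using a hypothetical mini-frame (the `ucid` values and dates are made up for illustration):

```python
import pandas as pd

# Hypothetical rows illustrating the boolean date mask used above;
# ucids and dates are invented for the example.
df = pd.DataFrame({
    "ucid": ["EP-0000001-A1", "EP-0000002-A1", "EP-0000003-A1"],
    "date": [19850215, 19850310, 19850505],
})
mask = (df["date"] >= 19850301) & (df["date"] <= 19850430)
sample = df[mask]
print(sample["ucid"].tolist())  # ['EP-0000002-A1']
```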
|
|
|
|
|
### Full |
|
|
|
|
|
To load the complete dataset using the Hugging Face `datasets` library: |
|
|
|
|
|
```python |
|
|
from datasets import load_dataset |
|
|
|
|
|
dataset = load_dataset("amylonidis/PatClass2011") |
|
|
``` |
|
|
|
|
|
This will load the dataset into a `DatasetDict` object. Make sure you have enough disk space.
|
|
|
|
|
## Google Colab Analytics |
|
|
|
|
|
|
|
|
You can also use the following Google Colab notebooks to explore the analytics performed on the dataset.
|
|
|
|
|
- [Date Analytics](https://colab.research.google.com/drive/1N2w5F1koWmZOyQaf0ZTB3gighPTXtUzD?usp=sharing) |
|
|
- [Applicant - Inventor Name Analytics](https://colab.research.google.com/drive/1y1nEGrl40IjsUrFKgVsmYyuHyve40mfk?usp=sharing) |
|
|
- [Main Codes Analytics](https://colab.research.google.com/drive/1fhAgrSAsO5Q2TzFlFywI6Bl0D2w-cfoL?usp=sharing) |
|
|
- [Section Title Analytics](https://colab.research.google.com/drive/126zqwzdt2nZDF5N7mdl4n8XadCnwav12?usp=sharing) |
|
|
- [Section Abstract Analytics](https://colab.research.google.com/drive/1Z9bviT14EbpBL7gz41fqby0yq8vU5-lE?usp=sharing) |
|
|
- [Section Claims Analytics](https://colab.research.google.com/drive/1526ZNQeEgeteBFlPFa2VJUF3UaPFBGkc?usp=sharing) |
|
|
- [Section Description Analytics](https://colab.research.google.com/drive/10ESV1Z7jyeVnmDgtPdRukpYUQVLgpo-Y?usp=sharing) |
|
|
|
|
|
|
|
|
## Dataset Creation |
|
|
|
|
|
### Source Data |
|
|
|
|
|
The PatClass2011 dataset aggregates patent documents from the CLEF-IP 2011 Test Collection using a parsing script. The data includes both metadata and full-text fields, facilitating a wide range of research applications.
|
|
|
|
|
### Annotations |
|
|
|
|
|
The dataset does not contain any human-written or computer-generated annotations beyond those already present in the patent documents of the source data.
|
|
|
|
|
## Licensing Information |
|
|
|
|
|
This dataset is distributed under the [MIT License](https://opensource.org/licenses/MIT). Users are free to use, modify, and distribute the dataset, provided that the original authors are credited. |
|
|
|
|
|
## Citation |
|
|
|
|
|
If you utilize the PatClass2011 dataset in your research or applications, please cite it appropriately. |
|
|
|
|
|
--- |