---
annotations_creators:
- machine-generated
language:
- en
license: mit
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
tags:
- finance
- credit-card
- transactions
- categorization
- nlp
- machine-learning
- banking
- fintech
- supervised-learning
- csv
task_categories:
- text-classification
pretty_name: Transaction Categorization Dataset
---
# Financial Transaction Categorization Dataset
A large multi-country dataset for financial transaction categorization with more than 4.5 million records across 10 categories, 5 countries, and 5 currencies.
## 📊 Dataset Overview
- **Total Records**: 4,501,043 transactions
- **Categories**: 10 financial categories
- **Countries**: 5 countries (USA, UK, Canada, Australia, India)
- **Currencies**: 5 currencies (USD, GBP, CAD, AUD, INR)
- **File Format**: Parquet (optimized for fast loading and storage)
- **Total Size**: ~71 MB (compressed)
## 📁 Dataset Structure
The dataset is consolidated into a single optimized parquet file:
| File | Records | Size | Description |
|------|---------|------|-------------|
| `default/train/0000.parquet` | 4,501,043 | 71 MB | Complete dataset in parquet format |
| `categories.json` | - | 5.3 KB | Category definitions and keywords |
| `dataset_info.json` | - | 1.0 KB | Dataset metadata and statistics |
## ๐Ÿท๏ธ Categories
The dataset includes 10 comprehensive financial categories:
1. **Food & Dining** - Restaurants, groceries, fast food, coffee shops, food delivery
2. **Transportation** - Gas, rideshare, airlines, public transport, car rental
3. **Shopping & Retail** - Online shopping, electronics, retail, fashion, home & garden
4. **Entertainment & Recreation** - Streaming, gaming, movies, music, sports
5. **Healthcare & Medical** - Medical, pharmacy, dental, vision, fitness
6. **Utilities & Services** - Electricity, water, gas, internet & phone, cable
7. **Financial Services** - Banking, insurance, credit cards, investments, taxes
8. **Income** - Salary, freelance, business, investments, government benefits
9. **Government & Legal** - Taxes, licenses, legal services, government fees
10. **Charity & Donations** - Charitable, religious, community, political donations
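As a toy illustration of the keyword-driven taxonomy above, a minimal matcher might look like the sketch below. The keyword lists here are a made-up subset for demonstration, not the actual contents of `categories.json`:

```python
# Tiny keyword lists per category (illustrative subset, NOT the full taxonomy
# from categories.json)
KEYWORDS = {
    "Food & Dining": ["restaurant", "coffee", "mcdonald"],
    "Transportation": ["uber", "gas", "airline"],
}

def categorize(description: str) -> str:
    """Return the first category whose keyword appears in the description."""
    desc = description.lower()
    for category, words in KEYWORDS.items():
        if any(word in desc for word in words):
            return category
    return "Uncategorized"

print(categorize("McDonald's #1234"))  # Food & Dining
```

A trained classifier will generalize far better than keyword matching, but this shows the shape of the category taxonomy.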
## 🌍 Geographic Coverage
| Country | Currency | Sample Transactions |
|---------|----------|-------------------|
| USA | USD | McDonald's, Uber, Amazon, Netflix |
| UK | GBP | Tesco, Shell, ASDA, BBC iPlayer |
| Canada | CAD | Tim Hortons, Petro-Canada, Loblaws |
| Australia | AUD | Coles, Woolworths, Bunnings, Telstra |
| India | INR | Big Bazaar, Ola, Flipkart, Zomato |
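The table above implies a fixed country-to-currency mapping. A small validation helper, with the mapping transcribed directly from the table, could look like this sketch:

```python
# Country-to-currency mapping transcribed from the coverage table above
COUNTRY_CURRENCY = {
    "USA": "USD",
    "UK": "GBP",
    "Canada": "CAD",
    "Australia": "AUD",
    "India": "INR",
}

def validate_pair(country: str, currency: str) -> bool:
    """Return True if the currency matches the expected one for the country."""
    return COUNTRY_CURRENCY.get(country) == currency

print(validate_pair("USA", "USD"))  # True
```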
## 📋 Dataset Schema
Each record contains the following fields:
```json
{
  "transaction_description": "string",
  "category": "string",
  "country": "string",
  "currency": "string"
}
```
### Example Records
```csv
transaction_description,category,country,currency
McDonald's #1234,Food & Dining,USA,USD
Uber Ride,Transportation,UK,GBP
Amazon Purchase,Shopping & Retail,Canada,CAD
Netflix Subscription,Entertainment & Recreation,Australia,AUD
Pharmacy Purchase,Healthcare & Medical,India,INR
```
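These sample rows can be mirrored in a small in-memory frame to sanity-check the four-field schema (the rows below are illustrative, not drawn from the actual file):

```python
import pandas as pd

# Hypothetical sample rows following the four-field schema above
records = [
    {"transaction_description": "McDonald's #1234", "category": "Food & Dining",
     "country": "USA", "currency": "USD"},
    {"transaction_description": "Uber Ride", "category": "Transportation",
     "country": "UK", "currency": "GBP"},
]
df = pd.DataFrame(records)
# All four fields are plain strings (pandas object dtype)
print(df.dtypes)
```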
## 🚀 Usage
### Loading the Dataset
#### Python (Pandas)
```python
import pandas as pd
# Load the complete dataset
df = pd.read_parquet('default/train/0000.parquet')
print(f"Total records: {len(df):,}")
print(f"Columns: {list(df.columns)}")
```
#### Python (Hugging Face Datasets)
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("mitulshah/transaction-categorization")
# Access the data
train_data = dataset['train']
print(f"Dataset size: {len(train_data):,}")
```
#### Chunked Processing (Memory Efficient)
```python
import pyarrow.parquet as pq

# Stream the parquet file in batches to keep memory usage low
# (pandas.read_parquet has no chunksize argument)
parquet_file = pq.ParquetFile('default/train/0000.parquet')
for batch in parquet_file.iter_batches(batch_size=10_000):
    chunk = batch.to_pandas()
    print(f"Processing {len(chunk)} records...")
    # Your analysis code here
```
### Data Analysis Examples
#### Category Distribution
```python
import pandas as pd
# Load and analyze category distribution
df = pd.read_parquet('default/train/0000.parquet')
category_counts = df['category'].value_counts()
print(category_counts)
```
#### Country Analysis
```python
# Analyze transactions by country
country_analysis = df.groupby(['country', 'category']).size().unstack(fill_value=0)
print(country_analysis)
```
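On a toy frame with the same columns (synthetic rows for illustration), the same groupby/unstack pattern yields a country-by-category count matrix:

```python
import pandas as pd

# Synthetic rows standing in for the real dataset
toy = pd.DataFrame({
    "country": ["USA", "USA", "UK", "India"],
    "category": ["Food & Dining", "Transportation", "Food & Dining", "Food & Dining"],
})
matrix = toy.groupby(["country", "category"]).size().unstack(fill_value=0)
# Rows are countries, columns are categories; missing combinations become 0
print(matrix)
```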
## 🎯 Use Cases
This dataset is perfect for:
- **Machine Learning**: Train classification models for transaction categorization
- **Financial Analysis**: Study spending patterns across different regions
- **NLP Research**: Text classification and merchant name analysis
- **Data Science**: Exploratory data analysis and visualization
- **Business Intelligence**: Market research and consumer behavior analysis
- **Academic Research**: Financial behavior studies and economic research
## 📈 Dataset Statistics
### Record Distribution
- **Total Records**: 4,501,043
- **Unique Descriptions**: ~1.4M unique transaction descriptions
- **Category Balance**: Well-distributed across all 10 categories
- **Geographic Distribution**: Balanced representation across 5 countries
### File Sizes
- **default/train/0000.parquet**: 71 MB (4,501,043 records)
- **Total**: ~71 MB (compressed)
## 🔧 Technical Details
### Data Quality
- ✅ **No Duplicates**: All records are unique
- ✅ **Consistent Schema**: Every record follows the same four-field structure
- ✅ **Valid Categories**: All categories match the defined taxonomy
- ✅ **Country-Currency Pairs**: Validated country-currency combinations
### Performance Optimizations
- **Parquet Format**: Optimized columnar storage for fast loading and analysis
- **Compression**: Built-in compression reduces file size by ~66%
- **Chunked Processing**: Support for memory-efficient processing
- **Fast Queries**: Columnar format enables efficient filtering and aggregation
## 📚 Dataset Creation & Methodology
### Curation Rationale
This dataset was created to address the need for a comprehensive, standardized dataset for financial transaction categorization that:
1. Covers multiple countries and currencies
2. Uses consistent categorization schema
3. Includes high-quality, validated data
4. Is suitable for both research and production use
### Source Data
The dataset combines data from multiple sources:
- **Synthetic Generation**: 4.5M+ records generated using comprehensive merchant templates
- **External Integration**: Real transaction data from external Hugging Face datasets
- **Country-specific Data**: Curated data for USA, UK, Canada, Australia, and India
- **Quality Validation**: Duplicate prevention and data integrity checks
### Data Quality Assurance
- ✅ **No Duplicates**: Hash-based duplicate detection implemented
- ✅ **Schema Consistency**: Every record follows the same four-field structure
- ✅ **Category Validation**: All categories match the defined taxonomy
- ✅ **Country-Currency Pairs**: Validated country-currency combinations
- ✅ **Anonymized Data**: No personally identifiable information
## 🎯 Use Cases & Applications
### Direct Use
This dataset can be used directly for:
- Training transaction classification models
- Building personal finance applications
- Developing banking transaction categorization systems
- Research in financial NLP and text classification
### Downstream Applications
Potential downstream applications include:
- Fraud detection systems
- Expense tracking applications
- Budgeting and financial planning tools
- Business intelligence and analytics
- Academic research in fintech and financial behavior
### Out-of-Scope Use
This dataset should not be used for:
- Identifying specific individuals or accounts
- Training models that could compromise financial privacy
- Any application that requires access to actual financial data
## ⚠️ Bias, Risks, and Limitations
### Known Limitations
1. **Geographic Bias**: The dataset focuses on 5 major countries and may not represent all global financial patterns
2. **Currency Bias**: Only 5 currencies are represented
3. **Category Granularity**: The 10-category schema may be too broad for some specialized applications
4. **Temporal Bias**: Data represents a specific time period and may not reflect current trends
### Recommendations
Users should:
- Validate model performance on their specific data
- Consider fine-tuning for domain-specific applications
- Be aware of potential geographic and cultural biases
- Regularly update models with new data
## 📊 Training & Evaluation
### Recommended Metrics
For model evaluation, consider these metrics:
- **Accuracy**: Overall classification accuracy
- **F1-score**: Macro and weighted F1-scores
- **Precision and Recall**: Per-category performance
- **Confusion Matrix**: Detailed error analysis
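A quick sketch of computing these metrics with scikit-learn, using made-up true/predicted labels in place of real model output:

```python
from sklearn.metrics import classification_report, f1_score

# Toy true/predicted labels standing in for actual model predictions
y_true = ["Food & Dining", "Transportation", "Food & Dining", "Income"]
y_pred = ["Food & Dining", "Transportation", "Income", "Income"]

# Per-category precision/recall/F1 plus macro and weighted averages
print(classification_report(y_true, y_pred, zero_division=0))
print("Macro F1:", f1_score(y_true, y_pred, average="macro", zero_division=0))
```

`sklearn.metrics.confusion_matrix` can be added the same way for detailed error analysis.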
### Model Training Tips
```python
# Example training setup
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier

# Load and prepare data
df = pd.read_parquet('default/train/0000.parquet')
X = df['transaction_description']
y = df['category']

# Split data (stratified so category proportions are preserved)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Vectorize text
vectorizer = TfidfVectorizer(max_features=10000)
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

# Train and evaluate model
model = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
model.fit(X_train_vec, y_train)
print(f"Test accuracy: {model.score(X_test_vec, y_test):.3f}")
```
## 📚 Additional Resources
- **Categories**: See `categories.json` for detailed category definitions and keywords
- **Metadata**: See `dataset_info.json` for complete dataset statistics
## 🤝 Contributing
This dataset is actively maintained. If you find issues or have suggestions:
1. Check the existing issues
2. Create a new issue with detailed description
3. Follow the contribution guidelines
## 📄 Citation
If you use this dataset in your research, please cite it as:
```bibtex
@dataset{financial_transaction_categorization_2025,
  title={Financial Transaction Categorization Dataset},
  author={Mitul Shah},
  year={2025},
  url={https://huggingface.co/datasets/mitulshah/transaction-categorization},
  note={A comprehensive worldwide dataset for financial transaction categorization with 4.5M+ records}
}
```
## 📄 License
This dataset is released under the MIT License. See the license file for details.
## 🙏 Acknowledgments
- **Data Sources**: Synthetic generation + real transaction data from external sources
- **Categories**: Based on comprehensive financial transaction taxonomy
- **Validation**: Duplicate prevention and data quality checks implemented
## 📞 Contact
- **Dataset Maintainer**: Mitul Shah
- **Repository**: [mitulshah/transaction-categorization](https://huggingface.co/datasets/mitulshah/transaction-categorization)
- **Last Updated**: October 14, 2025
---
**⭐ If you find this dataset useful, please consider giving it a star!**