Dataset Preview
user_id (int64)  item_id (int64)  rating (int64)  timestamp (int64)
0                5872             8               1511514181
0                5907             3               1511515107
0                5871             10              1511516999
0                5890             2               1511520912
0                5838             1               1511521274
0                5839             4               1511521608
...

End of preview.

Tomplay - Processed for Classic Recommenders

Dataset Description

This is the processed version of the tomplay dataset, specifically prepared for classic recommendation algorithms like SVD (Singular Value Decomposition) and NMF (Non-negative Matrix Factorization).

Processing Pipeline

The original dataset has been processed with the following steps:

  1. Data Cleaning: Removed invalid entries and outliers
  2. ID Mapping: Created sequential user and item IDs starting from 0
  3. Format Standardization: Converted to the standard (user_id, item_id, rating, timestamp) format
  4. Rating Derivation: Kept raw interaction counts as implicit ratings (values ≥ 1; not rescaled to a 1-5 range)
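The ID-mapping step can be sketched with pandas, assuming the raw log carries arbitrary original identifiers (the column names and values below are hypothetical):

```python
import pandas as pd

# Hypothetical raw interaction log; column names are assumptions.
raw = pd.DataFrame({
    "orig_user": ["u42", "u42", "u7"],
    "orig_item": ["trackA", "trackB", "trackA"],
    "rating": [8, 3, 10],
    "timestamp": [1511514181, 1511515107, 1511516999],
})

# factorize assigns sequential 0-based codes in order of first appearance.
raw["user_id"], user_index = pd.factorize(raw["orig_user"])
raw["item_id"], item_index = pd.factorize(raw["orig_item"])

# The unique-value arrays double as the original->mapped tables
# (the same shape as user_mapping.csv / item_mapping.csv).
user_mapping = pd.DataFrame({
    "original_id": user_index,
    "mapped_id": range(len(user_index)),
})

ratings = raw[["user_id", "item_id", "rating", "timestamp"]]
```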

Dataset Structure

ratings_processed.csv

Main rating data with columns:

  • user_id: Sequential user ID (0-based)
  • item_id: Sequential item ID (0-based)
  • rating: Raw interaction count, used as an implicit rating
  • timestamp: Unix timestamp of interaction

train.csv, val.csv, test.csv

Chronological splits (per user leave-last strategy) with the same columns as above. Validation and test contain only users and items present in train (warm-start guarantee).
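A per-user leave-last split with the warm-start filter can be sketched as follows (a minimal sketch on toy data; the actual split code is not published with this card):

```python
import pandas as pd

# Toy interaction log with two users, three interactions each.
ratings = pd.DataFrame({
    "user_id":   [0, 0, 0, 1, 1, 1],
    "item_id":   [10, 11, 12, 11, 12, 10],
    "rating":    [1, 2, 3, 1, 1, 2],
    "timestamp": [100, 200, 300, 150, 250, 350],
})

# Sort chronologically per user, then hold out each user's last two
# interactions (second-to-last -> val, last -> test); the rest is train.
ratings = ratings.sort_values(["user_id", "timestamp"])
rank_from_end = ratings.groupby("user_id").cumcount(ascending=False)
train = ratings[rank_from_end >= 2]
val = ratings[rank_from_end == 1]
test = ratings[rank_from_end == 0]

# Warm-start guarantee: drop val/test rows whose user or item
# never appears in train.
seen_users = set(train["user_id"])
seen_items = set(train["item_id"])
val = val[val["user_id"].isin(seen_users) & val["item_id"].isin(seen_items)]
test = test[test["user_id"].isin(seen_users) & test["item_id"].isin(seen_items)]
```

Note how the filter actually fires on the toy data: items 12 (user 0's last) and 12 (user 1's val row) are unseen in train, so those rows are dropped.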

user_mapping.csv

Mapping between original and processed user IDs:

  • original_id: Original user identifier
  • mapped_id: Sequential user ID used in processed dataset

item_mapping.csv

Mapping between original and processed item IDs:

  • original_id: Original item identifier
  • mapped_id: Sequential item ID used in processed dataset
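To translate model output back to original identifiers, the mapping file can be used as a lookup table (the original IDs below are hypothetical placeholders):

```python
import pandas as pd

# item_mapping.csv as documented: original_id, mapped_id.
# The original IDs here are made up for illustration.
item_mapping = pd.DataFrame({
    "original_id": ["song_abc", "song_def", "song_ghi"],
    "mapped_id": [0, 1, 2],
})

# Suppose a model recommends these internal (mapped) item IDs.
recommended_ids = [2, 0]

# Index by mapped_id, then look up in recommendation order.
lookup = item_mapping.set_index("mapped_id")["original_id"]
original = lookup.loc[recommended_ids].tolist()
```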

statistics.csv

Dataset statistics and metadata

Additional Files

  • items_metadata.csv: Item metadata with ID mappings
  • interaction_details.csv: Detailed interaction counts for analysis

Statistics

  • Users: 35,028
  • Items: 33,397
  • Ratings: 1,843,194
  • Rating Scale: raw interaction counts (implicit)
  • Sparsity: 0.9984
  • Average Rating: 4.07
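The sparsity figure follows directly from the counts above (one minus the fraction of observed user-item pairs):

```python
# Counts taken from the Statistics section of this card.
n_users = 35_028
n_items = 33_397
n_ratings = 1_843_194

density = n_ratings / (n_users * n_items)
sparsity = 1 - density
print(round(sparsity, 4))  # 0.9984
```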

Algorithm Compatibility

This processed dataset is optimized for:

SVD (Singular Value Decomposition)

  • Sequential integer IDs for efficient matrix operations
  • Standard user-item-rating format
  • Proper handling of missing values (implicit zeros)

NMF (Non-negative Matrix Factorization)

  • Non-negative ratings (all values ≥ 0)
  • Accepts the user-item count matrix in sparse form (the full matrix is 99.84% empty)
  • Suitable for implicit feedback modeling
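These properties map directly onto scikit-learn's NMF; a minimal sketch on a toy count matrix (the real data would be assembled into the same sparse format from ratings_processed.csv):

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.decomposition import NMF

# Toy implicit-feedback matrix: rows = users, cols = items,
# values = interaction counts (all non-negative, as NMF requires).
counts = csr_matrix(np.array([
    [8, 0, 3, 0],
    [0, 2, 0, 1],
    [5, 0, 0, 4],
], dtype=float))

nmf = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
user_factors = nmf.fit_transform(counts)   # shape (3, 2)
item_factors = nmf.components_             # shape (2, 4)

# W @ H approximates the original count matrix with non-negative factors.
scores = user_factors @ item_factors
```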

Usage Example

import pandas as pd
from scipy.sparse import csr_matrix
from sklearn.decomposition import TruncatedSVD

# Load processed data
ratings = pd.read_csv("ratings_processed.csv")

# Build a sparse user-item matrix. A dense pivot of ~35k users x ~33k items
# would need billions of cells, so use scipy's CSR format instead.
user_item_matrix = csr_matrix(
    (ratings["rating"], (ratings["user_id"], ratings["item_id"]))
)

# Apply truncated SVD directly on the sparse matrix
svd = TruncatedSVD(n_components=50, random_state=0)
user_factors = svd.fit_transform(user_item_matrix)

Original Dataset

This processed dataset is derived from the original tomplay dataset. Please refer to the original dataset repository for source information and proper citation requirements.

License

Inherits the license terms from the original dataset. Please ensure compliance with original dataset usage restrictions.

Citation

When using this processed dataset, please cite both this processed version and the original dataset:

@dataset{tomplay_processed_2024,
  title={Tomplay Dataset - Processed for Classic Recommenders},
  author={LLM as Recommender Research Team},
  year={2024},
  note={Processed version for SVD and NMF algorithms}
}