---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- recommendation
- cross-domain
- evaluation
- kitrec
- test-data
size_categories:
- 10K<n<100K
---
# KitREC Test Dataset - Set A
Evaluation test dataset for the KitREC (Knowledge-Instruction Transfer for Recommendation) cross-domain recommendation system.
## Dataset Description
This test dataset is designed for evaluating fine-tuned LLMs on cross-domain recommendation tasks across 10 different user types.
### Dataset Summary
| Attribute | Value |
|-----------|-------|
| **Candidate Set** | Set A (hybrid: hard negatives + random) |
| **Total Samples** | 30,000 |
| **Source Domain** | Books |
| **Target Domains** | Movies & TV, Music |
| **User Types** | 10 (5 per target domain) |
| **Rating Range** | 1.0 - 5.0 |
| **Mean Rating** | 4.259 |
### Set A vs Set B
- **Set A (Hybrid)**: Contains hard negative candidates + random candidates for challenging evaluation
- **Set B (Random)**: Contains only random candidates for fair baseline comparison
Both sets use the same ground truth items but differ in candidate composition.
### User Type Distribution
| User Type | Count | Percentage |
|-----------|-------|------------|
| cold_start_2core_movies | 3,000 | 10.00% |
| cold_start_2core_music | 3,000 | 10.00% |
| cold_start_3core_movies | 3,000 | 10.00% |
| cold_start_3core_music | 3,000 | 10.00% |
| cold_start_4core_movies | 3,000 | 10.00% |
| cold_start_4core_music | 3,000 | 10.00% |
| overlapping_books_movies | 3,000 | 10.00% |
| overlapping_books_music | 3,000 | 10.00% |
| source_only_movies | 3,000 | 10.00% |
| source_only_music | 3,000 | 10.00% |
### User Type Definitions
| User Type | Description |
|-----------|-------------|
| `overlapping_books_movies` | Users with history in both Books and Movies & TV |
| `overlapping_books_music` | Users with history in both Books and Music |
| `cold_start_2core_movies` | Movies cold-start users with 2 target interactions |
| `cold_start_2core_music` | Music cold-start users with 2 target interactions |
| `cold_start_3core_movies` | Movies cold-start users with 3 target interactions |
| `cold_start_3core_music` | Music cold-start users with 3 target interactions |
| `cold_start_4core_movies` | Movies cold-start users with 4 target interactions |
| `cold_start_4core_music` | Music cold-start users with 4 target interactions |
| `source_only_movies` | Users with ONLY Books history (extreme cold-start for Movies) |
| `source_only_music` | Users with ONLY Books history (extreme cold-start for Music) |
## Dataset Structure
### Data Fields
- `instruction` (string): The recommendation prompt including user history
- `input` (string): Candidate items for recommendation (100 items per sample)
- `gt_item_id` (string): Ground truth item ID
- `gt_title` (string): Ground truth item title
- `gt_rating` (float): User's actual rating for the ground truth item (1-5 scale)
- `user_id` (string): Unique user identifier
- `user_type` (string): User category (10 types)
- `candidate_set` (string): A or B
- `source_domain` (string): Books
- `target_domain` (string): Movies & TV or Music
- `candidate_count` (int): Number of candidate items (100)
### Data Split
| Split | Samples | Description |
|-------|---------|-------------|
| test | 30,000 | Evaluation test set |
## Usage
```python
from datasets import load_dataset
# Load test dataset
dataset = load_dataset("Younggooo/kitrec-test-seta")
# Access test data
test_data = dataset["test"]
print(f"Test samples: {len(test_data)}")
# Example: Filter by user type
overlapping_movies = test_data.filter(
    lambda x: x["user_type"] == "overlapping_books_movies"
)
print(f"Overlapping Movies users: {len(overlapping_movies)}")

# Example: Collect ground-truth ratings by user type
from collections import defaultdict

user_type_metrics = defaultdict(list)
for sample in test_data:
    user_type_metrics[sample["user_type"]].append(sample["gt_rating"])
```
## Evaluation Protocol
### Metrics
- **Hit@K** (K=1, 5, 10): Whether the ground-truth item appears in the top-K predictions
- **MRR**: Mean Reciprocal Rank
- **NDCG@10**: Normalized Discounted Cumulative Gain
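With a single ground-truth item per sample, all three metrics reduce to simple functions of the ground-truth item's rank. A minimal sketch in plain Python (the item IDs below are illustrative, not real dataset IDs):

```python
import math

def hit_at_k(ranked_ids, gt_id, k):
    """1 if the ground-truth item appears in the top-k predictions, else 0."""
    return int(gt_id in ranked_ids[:k])

def reciprocal_rank(ranked_ids, gt_id):
    """1/rank of the ground-truth item, or 0 if it is not ranked."""
    for i, item in enumerate(ranked_ids, start=1):
        if item == gt_id:
            return 1.0 / i
    return 0.0

def ndcg_at_10(ranked_ids, gt_id):
    """With one relevant item, NDCG@10 reduces to 1/log2(rank + 1)."""
    for i, item in enumerate(ranked_ids[:10], start=1):
        if item == gt_id:
            return 1.0 / math.log2(i + 1)
    return 0.0

# Example: ground truth ranked 3rd among 100 candidates
ranked = ["B002", "B077", "B013"] + [f"B{n:03d}" for n in range(100, 197)]
print(hit_at_k(ranked, "B013", 1))           # 0
print(hit_at_k(ranked, "B013", 5))           # 1
print(round(reciprocal_rank(ranked, "B013"), 3))  # 0.333
print(round(ndcg_at_10(ranked, "B013"), 3))  # 0.5
```

Per-sample values are averaged over the 30,000 test samples (or per user type, as below) to obtain the reported metrics.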
### Stratified Analysis
Evaluate separately for each of the 10 user types to understand model performance across different scenarios.
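The stratified breakdown can be sketched by bucketing per-sample metric values on the `user_type` field and averaging each bucket (the sample records below are hypothetical):

```python
from collections import defaultdict

def stratified_mean(samples, metric_key):
    """Average a per-sample metric separately for each user_type."""
    buckets = defaultdict(list)
    for s in samples:
        buckets[s["user_type"]].append(s[metric_key])
    return {utype: sum(vals) / len(vals) for utype, vals in buckets.items()}

# Illustrative per-sample results (values are made up for the example)
results = [
    {"user_type": "source_only_music", "hit_at_10": 1},
    {"user_type": "source_only_music", "hit_at_10": 0},
    {"user_type": "cold_start_2core_movies", "hit_at_10": 1},
]
print(stratified_mean(results, "hit_at_10"))
# {'source_only_music': 0.5, 'cold_start_2core_movies': 1.0}
```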
### RQ4: Confidence-Rating Alignment
Use `gt_rating` field to analyze correlation between model's confidence scores and actual user ratings.
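One way to quantify this alignment is a correlation coefficient between the model's confidence for its top prediction and the `gt_rating` value. A self-contained Pearson-correlation sketch (the confidence values are hypothetical; how confidences are extracted depends on the model being evaluated):

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical model confidences paired with gt_rating values
confidences = [0.91, 0.42, 0.77, 0.55, 0.88]
ratings = [5.0, 2.0, 4.0, 3.0, 5.0]
print(round(pearson(confidences, ratings), 3))  # high positive correlation
```

A rank correlation such as Spearman's rho is an alternative if confidence scores are not expected to scale linearly with ratings.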
## Research Questions Addressed
| RQ | Question | Relevant Fields |
|----|----------|-----------------|
| RQ1 | KitREC structure effectiveness | All user types |
| RQ2 | Comparison with baselines | All metrics |
| RQ3 | Cold-start performance | cold_start_* user types |
| RQ4 | Confidence-rating alignment | gt_rating |
## Citation
```bibtex
@misc{kitrec2024,
  title={KitREC: Knowledge-Instruction Transfer for Cross-Domain Recommendation},
  author={KitREC Research Team},
  year={2024},
  note={Test dataset for cross-domain recommendation evaluation}
}
```
## License
This dataset is released under the Apache 2.0 License.