metrics:
- recall
- ndcg
---

# 📖 Model Card: REARM

**"Refining Contrastive Learning and Homography Relations for Multi-Modal Recommendation"**,
*Shouxing Ma, Yawen Zeng, Shiqing Wu, and Guandong Xu*.
Published in *ACM MM*, 2025.

[[Paper Link](https://arxiv.org/abs/2508.13745)] [[Code Repository](https://github.com/MrShouxingMa/REARM)]

---

## ✨ Overview

- We propose REARM, a novel multi-modal contrastive recommendation framework that preserves recommendation-relevant modal-shared information through a meta-network strategy and valuable modal-unique information through an orthogonal constraint strategy.
- We jointly incorporate co-occurrence and similarity graphs of users and items, capturing the underlying structural patterns and semantic (interest) relationships more effectively and thereby enhancing recommendation performance.
- Extensive experiments on three publicly available datasets show that REARM outperforms several state-of-the-art recommendation baselines.

---
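The two regularizers named above, a contrastive objective between modal views and an orthogonality constraint separating modal-shared from modal-unique embeddings, can be illustrated with a minimal NumPy sketch. This shows the general techniques (an InfoNCE-style loss and a squared-cosine orthogonality penalty), not the repository's actual implementation; the function names are ours.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.6):
    # InfoNCE-style contrastive loss: row i of z2 is the positive
    # for row i of z1; all other rows in the batch act as negatives.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))

def orthogonality_penalty(shared, unique):
    # Mean squared cosine similarity between modal-shared and modal-unique
    # embeddings; zero when the two parts are orthogonal row by row.
    shared = shared / np.linalg.norm(shared, axis=1, keepdims=True)
    unique = unique / np.linalg.norm(unique, axis=1, keepdims=True)
    return float(np.mean(np.sum(shared * unique, axis=1) ** 2))
```

In training, losses of this shape are typically added to the recommendation loss with small weights, which is what flags such as `--cl_loss_weight` and `--diff_loss_weight` in the commands below suggest.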

## 🧩 Environment Requirement

The code has been tested running under Python 3.8. The required packages are as follows:

* pytorch == 1.13.0
* numpy == 1.24.4
* scipy == 1.10.1
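Assuming a pip-based setup, the pins above can be installed as follows (note the PyPI package for pytorch is `torch`); this is a sketch, not an official requirements file from the repository:

```shell
# Pinned to the versions listed above; GPU builds of torch may require
# the extra index URL for your CUDA version from pytorch.org.
pip install torch==1.13.0 numpy==1.24.4 scipy==1.10.1
```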

## Dataset

We provide three processed datasets: Baby, Sports, and Clothing.

| Dataset | #Interactions | #Users | #Items | Sparsity |
| ---- | ---- | ---- | ---- | ---- |
| Baby | 160,792 | 19,445 | 7,050 | 99.88% |
| Sports | 296,337 | 35,598 | 18,357 | 99.96% |
| Clothing | 278,677 | 39,387 | 23,033 | 99.97% |
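The sparsity column follows the usual definition, 1 − #Interactions / (#Users × #Items); a quick check in Python (the published figures may round slightly differently):

```python
# Interaction counts, user counts, and item counts from the table above.
datasets = {
    "Baby":     (160_792, 19_445, 7_050),
    "Sports":   (296_337, 35_598, 18_357),
    "Clothing": (278_677, 39_387, 23_033),
}

# Sparsity = fraction of the user-item matrix that carries no interaction.
sparsity = {
    name: 1 - interactions / (users * items)
    for name, (interactions, users, items) in datasets.items()
}

for name, s in sparsity.items():
    print(f"{name}: {s:.2%}")
```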

## 🚀 Example to Run the Codes

The meaning of each command-line argument is documented in the code.

* Baby dataset

```
python main.py --dataset='baby' --num_layer=4 --reg_weight=0.0005 --rank=3 --s_drop=0.4 --m_drop=0.6 --u_mm_image_weight=0.2 --i_mm_image_weight=0 --uu_co_weight=0.4 --ii_co_weight=0.2 --user_knn_k=40 --item_knn_k=10 --n_ii_layers=1 --n_uu_layers=1 --cl_tmp=0.6 --cl_loss_weight=5e-6 --diff_loss_weight=5e-5
```

* Sports dataset

```
python main.py --dataset='sports' --num_layer=5 --reg_weight=0.05 --rank=7 --s_drop=1 --m_drop=0.2 --u_mm_image_weight=0 --i_mm_image_weight=0.2 --uu_co_weight=0.9 --ii_co_weight=0.2 --user_knn_k=25 --item_knn_k=5 --n_ii_layers=2 --n_uu_layers=2 --cl_tmp=1.5 --cl_loss_weight=1e-3 --diff_loss_weight=5e-4
```

* Clothing dataset

```
python main.py --dataset='clothing' --num_layer=4 --reg_weight=0.00001 --rank=3 --s_drop=0.4 --m_drop=0.1 --u_mm_image_weight=0.1 --i_mm_image_weight=0.1 --uu_co_weight=0.7 --ii_co_weight=0.1 --user_knn_k=45 --item_knn_k=10 --n_ii_layers=1 --n_uu_layers=1 --cl_tmp=0.03 --cl_loss_weight=1e-6 --diff_loss_weight=1e-5
```
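The `--user_knn_k` and `--item_knn_k` flags above set the neighborhood size of the user and item similarity graphs mentioned in the overview. A common way to build such a graph from feature embeddings is top-k cosine similarity; the sketch below illustrates that general idea and is not the repository's implementation (`knn_graph` is our name):

```python
import numpy as np

def knn_graph(features, k):
    # Build a top-k cosine-similarity adjacency matrix: each row keeps
    # edges to its k most similar other rows (self-loops excluded).
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)          # never pick yourself
    adj = np.zeros_like(sim)
    top = np.argsort(sim, axis=1)[:, -k:]   # indices of k nearest neighbors
    adj[np.arange(sim.shape[0])[:, None], top] = 1.0
    return adj
```

Graphs of this kind are typically propagated over with a few light graph-convolution layers, which is consistent with the `--n_uu_layers` and `--n_ii_layers` flags.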
## REARM
|
| 79 |
+
The released code consists of the following files.
|
| 80 |
+
```
|
| 81 |
+
--data
|
| 82 |
+
--baby
|
| 83 |
+
--clothing
|
| 84 |
+
--sports
|
| 85 |
+
--utils
|
| 86 |
+
--configurator
|
| 87 |
+
--data_loader
|
| 88 |
+
--evaluator
|
| 89 |
+
--helper
|
| 90 |
+
--logger
|
| 91 |
+
--metrics
|
| 92 |
+
--parser
|
| 93 |
+
--main
|
| 94 |
+
--model
|
| 95 |
+
--trainer
|
| 96 |
+
```
|
| 97 |
+
|
| 98 |
+
## Citation
|
| 99 |
+
If you want to use our codes and datasets in your research, please cite:
|
| 100 |
+
|
| 101 |
+
```
|
| 102 |
+
@inproceedings{REARM,
|
| 103 |
+
title = {Refining Contrastive Learning and Homography Relations for Multi-Modal Recommendation,
|
| 104 |
+
author = {Ma, Shouxing and
|
| 105 |
+
Zeng, Yawen and
|
| 106 |
+
Wu, Shiqing and
|
| 107 |
+
Xu, Guandong},
|
| 108 |
+
booktitle = {Proceedings of the 33th ACM International Conference on Multimedia},
|
| 109 |
+
year = {2025}
|
| 110 |
+
}
|
| 111 |
+
```
|
| 112 |
+
|
| 113 |
+
|
| 114 |
+
|