Update README.md
README.md
CHANGED
@@ -25,6 +25,10 @@ configs:
   data_files: "merged_comments.parquet"
 - config_name: grammy_videos_lyrics
   data_files: "grammy_videos_lyrics.parquet"
+- config_name: item_factors
+  data_files: "item_factors.parquet"
+- config_name: user_factors
+  data_files: "user_factors.parquet"
 ---

 # Preprocessed Data for the ADA Project 2025

@@ -42,3 +46,16 @@ List of Files:
 * *timeseries_grammy.parquet*: This dataset describes the time-series evolution of the channels belonging to Grammy authors. We obtain it by filtering Youniverse's `df_timeseries_en` split to the unique channels found in the `metadata_grammy.parquet` dataset.
 * *merged_comments.parquet*: This dataset is obtained by filtering `youtube_comments.tsv.gz` on the `video_id` column, keeping only the comments whose `video_id` is in the `music_video_ids` set.
 * *grammy_videos_lyrics.parquet*: This dataset is the Grammy dataset expanded to also include the lyrics.
+
+### Collaborative Filtering Data
+
+The following items are obtained by performing matrix factorization on the `merged_comments.parquet` split of the dataset.
+As a first step, a sparse matrix of shape `(users, items)` is created, with each entry being either *0* or *1* (where 1 means that the specific user commented on that specific video).
+Afterwards, we perform matrix factorization using the `implicit` library.
+The resulting files are:
+
+* *item_factors.parquet*: Latent-space representation of every `video_id`.
+* *user_factors.parquet*: Latent-space representation of every `author`.
+* *als_model.pkl*: Weights of the trained ALS model.
+* *user_id_map.pkl*: Mapping from the **original** `author` to the integer ID used for training in the ALS model.
+* *item_id_map.pkl*: Mapping from the **original** `video_id` to the integer ID used for training in the ALS model.
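The filtering step that produces `merged_comments.parquet` could be sketched with pandas as below; the tiny in-memory frame and its rows are illustrative stand-ins for the real `youtube_comments.tsv.gz`, and only the `video_id` membership test is taken from the README.

```python
import pandas as pd

# Tiny stand-in for youtube_comments.tsv.gz (illustrative rows only).
comments = pd.DataFrame({
    "author":   ["a", "b", "c", "d"],
    "video_id": ["v1", "v9", "v2", "v7"],
})

# Assumed contents of the music_video_ids set mentioned in the README.
music_video_ids = {"v1", "v2"}

# Keep only the comments whose video_id is in the music-video set.
merged_comments = comments[comments["video_id"].isin(music_video_ids)]
```

The same `isin` filter applies unchanged when `comments` is the full TSV read with `pd.read_csv(..., sep="\t")`.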
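The two-step recipe above (binary interaction matrix, then factorization) can be sketched as follows. The comment pairs are made up, and the plain-NumPy ALS loop is an illustrative stand-in for `implicit`'s `AlternatingLeastSquares`, shown only to make the shapes of `user_factors` / `item_factors` and the role of the two ID maps concrete.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical comment log: (author, video_id) pairs, as in merged_comments.parquet.
comments = [
    ("alice", "v1"), ("alice", "v2"),
    ("bob",   "v2"), ("bob",   "v3"),
    ("carol", "v1"), ("carol", "v3"),
]

# Integer training IDs (the role played by user_id_map.pkl / item_id_map.pkl).
user_id_map = {a: i for i, a in enumerate(sorted({a for a, _ in comments}))}
item_id_map = {v: i for i, v in enumerate(sorted({v for _, v in comments}))}

rows = [user_id_map[a] for a, _ in comments]
cols = [item_id_map[v] for _, v in comments]
data = np.ones(len(comments))  # 1 = this user commented on this video

# Sparse (users, items) interaction matrix.
R = csr_matrix((data, (rows, cols)), shape=(len(user_id_map), len(item_id_map)))

# Minimal ALS for implicit feedback: alternately solve regularized
# least squares for the user factors U and the item factors V.
k, lam, rng = 2, 0.1, np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(R.shape[0], k))  # user_factors
V = rng.normal(scale=0.1, size=(R.shape[1], k))  # item_factors
Rd = R.toarray()
for _ in range(20):
    U = Rd @ V @ np.linalg.inv(V.T @ V + lam * np.eye(k))
    V = Rd.T @ U @ np.linalg.inv(U.T @ U + lam * np.eye(k))

scores = U @ V.T  # reconstructed preference scores, one per (user, item) pair
```

Here `U` and `V` correspond to what `user_factors.parquet` and `item_factors.parquet` store, one latent vector per `author` and per `video_id` respectively.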
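Once trained, the saved factors plus the two ID maps are everything needed to score videos for an author. A minimal sketch, with made-up factor values in place of the real parquet/pickle contents (the file loading is elided):

```python
import numpy as np

# Hypothetical factors and maps, as would be loaded from user_factors.parquet,
# item_factors.parquet, user_id_map.pkl and item_id_map.pkl.
user_id_map = {"alice": 0, "bob": 1}          # author   -> training ID
item_id_map = {"v1": 0, "v2": 1, "v3": 2}     # video_id -> training ID
user_factors = np.array([[1.0, 0.0], [0.0, 1.0]])
item_factors = np.array([[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]])

def recommend(author, seen, n=1):
    """Score every video for one author and return the top-n unseen video_ids."""
    u = user_factors[user_id_map[author]]
    scores = item_factors @ u                       # one dot product per video
    inv = {i: v for v, i in item_id_map.items()}    # training ID -> video_id
    ranked = [inv[i] for i in np.argsort(-scores) if inv[i] not in seen]
    return ranked[:n]
```

For example, `recommend("alice", seen={"v1"})` ranks the remaining videos by their dot product with alice's latent vector.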