matcav committed
Commit 7aba31f · verified · 1 Parent(s): ff797ce

Update README.md

Files changed (1):
  1. README.md +36 -36

README.md CHANGED
@@ -9,53 +9,53 @@ configs:
     data_files: "channels_clean.parquet"
   - config_name: grammy_raw
     data_files: "grammy_raw.parquet"
-  - config_name: grammy_videos
-    data_files: "grammy_videos.parquet"
-  - config_name: metadata_grammy
-    data_files: "metadata_grammy.parquet"
-  - config_name: metadata_grammy_lyrics
-    data_files: "metadata_grammy_lyrics.parquet"
+  - config_name: grammy_metadata
+    data_files: "grammy_metadata.parquet"
+  - config_name: grammy_channels
+    data_files: "grammy_channels.parquet"
+  - config_name: grammy_metadata_extended
+    data_files: "grammy_metadata_extended.parquet"
+  - config_name: grammy_timeseries
+    data_files: "grammy_timeseries.parquet"
   - config_name: music_metadata
     data_files: "music_metadata.parquet"
   - config_name: music_video_ids
     data_files: "music_video_ids.parquet"
-  - config_name: timeseries_grammy
-    data_files: "timeseries_grammy.parquet"
-  - config_name: merged_comments
-    data_files: "merged_comments.parquet"
-  - config_name: grammy_videos_lyrics
-    data_files: "grammy_videos_lyrics.parquet"
-  - config_name: item_factors
-    data_files: "item_factors.parquet"
-  - config_name: user_factors
-    data_files: "user_factors.parquet"
+  - config_name: music_comments
+    data_files: "music_comments.parquet"
+  - config_name: cf_item_factors
+    data_files: "CF_item_factors.parquet"
+  - config_name: cf_user_factors
+    data_files: "CF_user_factors.parquet"
 ---
 
 # Preprocessed Data for the ADA Project 2025
 
 ## By DataCookers
 
-List of Files:
-* *channels_clean.parquet*: This dataset is obtained by preprocessing the `df_channels_en` split of the Youniverse Dataset. The main problem could be found int the missing values in certain rows relative to the **Category** column, that led to the remaining columns that were shifted one position to the left.
-* *grammy_raw.parquet*: The original dataset that can be found at [This Kaggle link](https://www.kaggle.com/datasets/johnpendenque/grammy-winners-and-nominees-from-1965-to-2024).
-* *grammy_videos.parquet*: Equivalent to the `grammy_raw.parquet` dataset, but expanded to also have the `video_id` column, using a scraping API (**YT-DLP**).
-* *metadata_grammy.parquet*: This dataset is obtained by merging the original Grammy Dataset with the Youniverse's `yt_metadata_en` split. The merging is done through a set of scraping and heuristic functions.
-* *metadata_grammy_lyrics.parquet*: A specialized dataset containing the lyrics for Grammy-nominated or winning songs, enabling text or sentiment analysis.
-* *music_metadata.parquet*: This dataset is obtained by filtering the original `yt_metadata_en` split on the **Music** Category.
-* *music_video_ids.parquet*: This dataset is obtained by filtering the ´yt_metadata_en´ split of the Youniverse dataset to include just those videos that are listed in the **Music** category.
-* *timeseries_grammy.parquet*: This dataset describes the Timeseries evolution of channels belonging to Grammy Authors. We obtain this by filtering the Youniverse's `df_timeseries_en` split with the unique channels that can be found in the `metadata_grammy.parquet` dataset.
-* *merged_comments.parquet*: This dataset is given by filtering the `youtube_comments.tsv.gz` based on the feature `video_id`, that must be in the `music_video_ids` set.
-* *grammy_videos_lyrics.parquet*: This dataset is given by expanding the Grammy Dataset to also include the lyrics.
+### General Datasets
+
+* **channels_clean.parquet**: A cleaned version of the `df_channels_en` split from the Youniverse Dataset. This dataset corrects parsing errors where missing values in the **Category** column caused subsequent columns to shift one position to the left.
+* **grammy_raw.parquet**: The foundational dataset containing Grammy winners and nominees (1965–2024), sourced from [Kaggle](https://www.kaggle.com/datasets/johnpendenque/grammy-winners-and-nominees-from-1965-to-2024).
+* **grammy_metadata.parquet**: A merged dataset combining the *grammy_raw* data with the `yt_metadata_en` split from Youniverse. It is further enriched with web-scraped song lyrics.
+* **grammy_channels.parquet**: A filtered subset of `df_channels_en` containing only those YouTube channels that have published a Grammy-nominated or winning song.
+* **grammy_metadata_extended.parquet**: An expansion of the metadata containing all videos belonging to the channels identified in *grammy_channels* (not just the Grammy-winning videos).
+* **grammy_timeseries.parquet**: Temporal data (time series) specifically associated with the channels found in the *grammy_channels* dataset.
+* **music_metadata.parquet**: A subset of the original `yt_metadata_en` Youniverse split, filtered to include only entries classified under the **Music** Category.
+* **music_video_ids.parquet**: A lightweight dataset containing the specific video IDs filtered from the `yt_metadata_en` split, strictly for videos listed in the **Music** category.
+* **music_comments.parquet**: A collection of user comments obtained by filtering `youtube_comments.tsv.gz`. It retains only comments posted on videos present in the *music_video_ids* dataset.
 
 ### Collaborative Filtering Data
 
-The following items are obtained by performing Matrix Factorization on the `merged_comments.tsv.gz` split of the dataset.
-As a first step the Sparse matrix of shape `(users, items)` is created, with each entry corresponding to either a **0** or a **1** (where 1 means that the specific user commented in that specific video).
-Afterwards, we perform Matrix Factorization using the `implicit` library.
-The following files are the results of that:
+The following files represent the output of Matrix Factorization performed on the `merged_comments.tsv.gz` split.
+
+**Methodology:** 1. We constructed a sparse interaction matrix of shape `(users, items)` with binary entries (**0** or **1**), where a **1** indicates the user commented on a specific video.
+2. We performed Matrix Factorization using the Alternating Least Squares (ALS) algorithm via the `implicit` library.
+
+**Resulting Files:**
 
-* *item_factors.parquet*: Latent Space representation of every `video_id`.
-* *user_factors.parquet*: Latent Space representation of every `author`.
-* *als_model.pkl*: Weights for the trained ALS model.
-* *user_id_map.pkl*: Mapping to go from the **original** `video_id` to the specific ID used for training in the ALS model.
-* *item_id_map.pkl*: Mapping to go from the **original** `author` to the specific ID used for training in the ALS model.
+* **CF_item_factors.parquet**: The latent space vectors representing every `video_id` (Item Factors).
+* **CF_user_factors.parquet**: The latent space vectors representing every `author` (User Factors).
+* **CF_als_model.pkl**: The serialized weights of the trained ALS model.
+* **CF_user_id_map.pkl**: A dictionary mapping the **original** `author` (string) to the integer ID used during ALS model training.
+* **CF_item_id_map.pkl**: A dictionary mapping the **original** `video_id` (string) to the integer ID used during ALS model training.
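Step 1 of the methodology in the updated README (id maps plus a binary sparse interaction matrix) can be sketched as follows. The author/video pairs and all dimensions below are illustrative stand-ins, not rows from the real `music_comments.parquet`:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical (author, video_id) comment pairs standing in for the
# contents of music_comments.parquet.
pairs = [
    ("alice", "v1"), ("alice", "v2"),
    ("bob",   "v2"), ("bob",   "v3"),
    ("carol", "v1"),
]

# Integer id maps — the role played by CF_user_id_map.pkl and
# CF_item_id_map.pkl in the repository.
user_id_map = {u: i for i, u in enumerate(sorted({u for u, _ in pairs}))}
item_id_map = {v: i for i, v in enumerate(sorted({v for _, v in pairs}))}

rows = [user_id_map[u] for u, _ in pairs]
cols = [item_id_map[v] for _, v in pairs]
data = np.ones(len(pairs))  # binary: 1 = this user commented on this video

# Sparse (users, items) interaction matrix in CSR form.
user_items = csr_matrix((data, (rows, cols)),
                        shape=(len(user_id_map), len(item_id_map)))
```

A CSR matrix of exactly this shape is what step 2 consumes: `implicit`'s `AlternatingLeastSquares` model is fitted on it to produce the exported user and item factor tables.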
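A brief sketch of how the exported factor tables are typically consumed: ALS scores a (user, item) pair by the inner product of their latent vectors, so ranking one user's scores over all items yields recommendations. The random matrices below are placeholders for the real `CF_user_factors.parquet` / `CF_item_factors.parquet` contents, and the latent dimension is made up:

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder latent factors; in practice one row per author / per video_id,
# read from the CF_*_factors.parquet files and aligned with the integer IDs
# in the CF_*_id_map.pkl dictionaries.
user_factors = rng.normal(size=(4, 8))   # 4 users,  8 latent dimensions
item_factors = rng.normal(size=(6, 8))   # 6 videos, 8 latent dimensions

# Score every video for every user with a single matrix product.
scores = user_factors @ item_factors.T   # shape: (4, 6)

# Recommendations for user 0: video indices, best-scoring first.
ranking_user0 = np.argsort(-scores[0])
```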