
CineFace

CineFace is a comprehensive ecosystem for facial analysis in entertainment media. It consists of:

  1. The CineFace Dataset: facial detections and embeddings from over 6,000 movies and TV series.
  2. The CineFace Toolkit: a pipeline for large-scale facial detection, encoding, and identification in TV and film.

📊 View Dashboard | 🤗 Hugging Face Dataset

Dataset

The CineFace database contains metadata and facial detections for over 6,000 titles. You can download the components directly from Hugging Face:

  • Film List: film_list.csv — Comprehensive list of all movies and series in the DB.
  • Detections: faces.tar.gz — Bounding boxes and identifications.
  • Encodings: embeddings.tar.gz — Pre-computed face embeddings.
  • Relational DB: CineFaceDW.db — SQLite version of the dataset.
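
Once downloaded, the relational database can be explored with Python's built-in sqlite3 module. The sketch below just lists the tables it finds; it assumes nothing about the CineFaceDW.db schema:

```python
import sqlite3

def list_tables(db_path):
    """Return the names of all tables in a SQLite database."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
        ).fetchall()
    return [name for (name,) in rows]

# After downloading CineFaceDW.db, inspect its schema:
# print(list_tables("CineFaceDW.db"))
```

From there, `pd.read_sql` can pull any table of interest into a DataFrame.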

Using the Encodings

The encodings are saved as .npz files. Because the encoded faces are stored in the same order as the detection rows, you can join them to the detection metadata by loading the corresponding CSV and adding the array as a column:

import numpy as np
import pandas as pd

# Load metadata and embeddings
df = pd.read_csv("movie_12345.csv")
embeddings = np.load("movie_12345.npz")['embeddings']

# Join (sequence based)
df['encoding'] = list(embeddings)
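
With embeddings attached to the detection rows, faces can be compared by vector similarity. A small sketch using cosine similarity (the dataset card does not specify a metric, so this is an illustrative choice):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D embedding vectors."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Compare the first two detected faces:
# sim = cosine_similarity(df['encoding'].iloc[0], df['encoding'].iloc[1])
```

Higher values indicate more similar faces; a threshold for "same person" would need to be tuned for these embeddings.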

Toolkit (Installation and Usage)

Requirements

CineFace relies on Docker and Qdrant. To install Qdrant, just run it with Docker; the image will be downloaded automatically:

docker run -p 6333:6333 qdrant/qdrant

Install

Simply download the source code:

git clone https://github.com/astaileyyoung/CineFace.git
cd CineFace

Then install the required dependencies:

pip install -r requirements.txt

Finally, install CineFace:

pip install -e .

CineFace uses Visage as a backend for accurate, high-performance facial detection and encoding. Visage can also be used independently.

**Be advised that the associated Docker image is quite large (17 GB) because it relies on heavy ML libraries built from source, so it will take a while to download (10-15 minutes).

Usage

Running CineFace is straightforward.

Basic Command

cineface <src> <dst> <imdb_id> [options]
  • <src>: Path to the input video file
  • <dst>: Path to the output file
  • <imdb_id>: IMDb ID (numbers only)

Command-Line Arguments

Argument          Type    Default                   Description
src               str     (required)                Path to input video file or directory.
dst               str     (required)                Path to output directory or results file.
imdb_id           int     (required)                IMDb ID (numbers only).
--faces_dir       str     None                      Directory to save face images to.
--encoding_col    str     'embedding'               Column name for face embeddings.
--image           str     'astaileyyoung/visage'    Container image name (for debugging/development).
--frameskip       int     24                        Number of frames to skip between detections.
--threshold, -t   float   0.5                       Recognition confidence threshold.
--timeout         int     60                        Timeout (in seconds) for matching.
--batch_size      int     256                       Batch size for matching.
--season          int     None                      Season number (required for matching TV shows).
--episode         int     None                      Episode number (required for matching TV shows).
--qdrant_client   str     'localhost'               Qdrant client address (vector DB).
--qdrant_port     int     6333                      Qdrant port.

**Automatic TV/movie identification by filename no longer works due to a change in the IMDb API that broke Cinemagoer search, on which automatic identification depended. If analyzing a movie, you must enter the imdb_id. If analyzing a TV show, you must enter the imdb_id, season, and episode.
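
The recognition threshold can also be applied after the fact when post-filtering results. A hypothetical sketch, assuming the output table has a 'confidence' column (the actual column name in the CineFace output may differ):

```python
import pandas as pd

def filter_by_confidence(df, threshold=0.5, col="confidence"):
    """Keep only detections at or above the recognition threshold.

    The column name 'confidence' is an assumption; adjust it to match
    the actual CineFace output schema.
    """
    return df[df[col] >= threshold].reset_index(drop=True)
```

Raising the threshold trades recall for precision in the identifications, mirroring the --threshold flag above.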

Research and Analysis

Notebooks analyzing the dataset can be found in CineFace/notebooks/research. Feel free to submit a ticket if you encounter bugs or have feature requests for the dashboard.
