Touhou Project Dataset (Images + WD-ConvNeXt Features)
Dataset Description (En)
This dataset contains images of characters from the Touhou Project series, paired with pre-computed feature embeddings and logistic regression classifiers. It is intended for multi-label classification research, linear probing experiments, and character recognition tasks.
Key Features
- Embeddings are extracted from the penultimate layer of `SmilingWolf/wd-convnext-tagger-v3`.
- Includes `.npy` files for the training/validation splits, allowing immediate model training without heavy image preprocessing.
- Includes `joblib` files containing trained scikit-learn One-vs-Rest logistic regression models for the included characters.
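Since the card highlights linear-probing use, here is a minimal, self-contained sketch of a linear probe. The features and labels below are synthetic stand-ins (128-dim for speed; the real embeddings are 1024-dim and would be loaded from the `.npy` files in this dataset):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic stand-in "embeddings": 600 samples, 128-dim, with labels
# generated from a random linear rule (so a linear probe can succeed).
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 128))
w = rng.normal(size=128)
y = (X @ w > 0).astype(int)

# A linear probe is just a single logistic-regression layer on frozen features.
probe = LogisticRegression(max_iter=1000)
probe.fit(X[:400], y[:400])
acc = accuracy_score(y[400:], probe.predict(X[400:]))
print(f"held-out probe accuracy: {acc:.2f}")
```

With the real data, `X` and `y[:, k]` for a single character come from `X_train.npy` and column `k` of `y_train.npy`.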
Statistics
- Total Samples: 10,786
- Split:
- Train: 8,687
- Validation: 2,099
- Classes (Characters): 134 unique tags
Distribution Imbalance
The dataset is imbalanced: popular characters have significantly more samples than niche characters.
- Top 3: Hakurei Reimu (1,022), Kirisame Marisa (833), Cirno (696).
- Bottom 3: Motoori Kosuzu (224), Ruukoto (102), Satsuki Rin (100).
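A common way to cope with this skew when training per-class classifiers is scikit-learn's `class_weight="balanced"`, which upweights the rare positives. A minimal sketch on a synthetic multi-hot label matrix (the class counts here are made up, merely mimicking the dataset's skew):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))

# Synthetic multi-hot labels: one very common, one mid, one rare class.
y = np.zeros((200, 3), dtype=int)
y[:100, 0] = 1  # "popular" class
y[:30, 1] = 1   # mid-frequency class
y[:8, 2] = 1    # rare class

support = y.sum(axis=0)
print("per-class positives:", support)

# One binary classifier per class (One-vs-Rest), balancing rare positives.
classifiers = [
    LogisticRegression(max_iter=1000, class_weight="balanced").fit(X, y[:, k])
    for k in range(y.shape[1])
]
```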
Directory Structure
.
├── images/ # Source images (various formats)
├── embeddings_backbone/ # Pre-computed features
│ ├── X_train.npy # Training features
│ ├── y_train.npy # Training labels (multi-hot encoded)
│ ├── X_val.npy # Validation features
│ ├── y_val.npy # Validation labels
│ ├── multi_label_binarizer.joblib # Sklearn MultiLabelBinarizer object
│ └── touhou_classifier_br_list.joblib # List of trained LogisticRegression models
└── labels.csv # Metadata (id, filename, split, raw_tags)
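The multi-hot `y_*.npy` matrices and `multi_label_binarizer.joblib` fit together as follows; this sketch uses made-up tag strings, and the space delimiter for the `raw_tags` column is an assumption:

```python
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical raw_tags values; the real delimiter in labels.csv may differ.
raw_tags = ["hakurei_reimu", "kirisame_marisa cirno", "cirno"]
tag_lists = [tags.split() for tags in raw_tags]

# Rows are samples; columns follow the sorted class names in mlb.classes_.
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(tag_lists)
print(list(mlb.classes_))
print(Y)
```

Loading the shipped `multi_label_binarizer.joblib` instead of fitting a new one guarantees the column order matches the `y_*.npy` files.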
Notes
- Missing Data: Due to network issues during collection, the character list is not exhaustive. Some Touhou characters are not present in this version.
- The dataset includes 100 samples of Satsuki Rin, representing the smallest class.
🚀 Quick Start: Inference with Pre-computed Embeddings
You can use the pre-computed validation embeddings (X_val.npy) and the trained classifier list (touhou_classifier_br_list.joblib) to evaluate the model without downloading images or loading the heavy backbone model.
Requirements: pip install numpy scikit-learn joblib huggingface_hub
```python
import joblib
import numpy as np
from huggingface_hub import hf_hub_download

# 1. Download necessary artifacts (or use local paths if cloned)
REPO_ID = "Preacher-26/touhou-embeddings-dataset"
SUBFOLDER = "embeddings_backbone"

print("Loading artifacts...")

# Load classifiers (list of 134 LogisticRegression models)
clf_path = hf_hub_download(
    repo_id=REPO_ID, subfolder=SUBFOLDER, filename="touhou_classifier_br_list.joblib"
)
classifiers = joblib.load(clf_path)

# Load label binarizer (maps indices to character names)
mlb_path = hf_hub_download(
    repo_id=REPO_ID, subfolder=SUBFOLDER, filename="multi_label_binarizer.joblib"
)
mlb = joblib.load(mlb_path)

# Load a sample batch of features (validation set)
x_val_path = hf_hub_download(repo_id=REPO_ID, subfolder=SUBFOLDER, filename="X_val.npy")
X_val = np.load(x_val_path)

# 2. Perform inference on a random sample
sample_idx = np.random.randint(0, len(X_val))
sample_embedding = X_val[sample_idx].reshape(1, -1)  # Shape: (1, 1024)

print(f"\nRunning inference on sample index: {sample_idx}")

# The model is a list of One-vs-Rest classifiers; iterate through them.
probs = []
for clf in classifiers:
    # Handle DummyClassifier (used for classes with 0 training samples)
    if hasattr(clf, "predict_proba"):
        prob = clf.predict_proba(sample_embedding)[0, 1]
    else:
        prob = 0.0
    probs.append(prob)
probs = np.array(probs)

# 3. Decode and print results
# Threshold 0.2 (optimized for F1 score as per the report)
THRESHOLD = 0.2
active_indices = np.where(probs >= THRESHOLD)[0]

print(f"--- Predictions (Threshold: {THRESHOLD}) ---")
if len(active_indices) == 0:
    print("No characters detected above threshold.")
else:
    for idx in active_indices:
        tag_name = mlb.classes_[idx]
        confidence = probs[idx]
        print(f"Character: {tag_name:<25} | Confidence: {confidence:.4f}")

# (Optional) Show the top-3 raw probabilities
top3_indices = np.argsort(probs)[-3:][::-1]
print("\n--- Top 3 Raw Probabilities ---")
for idx in top3_indices:
    print(f"{mlb.classes_[idx]:<25}: {probs[idx]:.4f}")
```
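The 0.2 threshold is reportedly tuned for F1. To reproduce such a sweep over the full validation set, stack the per-class scores into an `(n_samples, n_classes)` matrix and compare thresholds by micro-F1. A minimal sketch with synthetic stand-in arrays (with the real data, `probs_matrix` comes from `predict_proba` over all of `X_val`, and `y_true` from `y_val.npy`):

```python
import numpy as np
from sklearn.metrics import f1_score

# Synthetic stand-ins for the (n_samples, n_classes) score/label matrices.
rng = np.random.default_rng(0)
y_true = (rng.random((50, 4)) < 0.3).astype(int)
# Scores correlated with labels: positives land in [0.7, 1.0], negatives in [0, 0.3).
probs_matrix = y_true * 0.7 + rng.random((50, 4)) * 0.3

for thr in (0.1, 0.2, 0.3, 0.5):
    y_pred = (probs_matrix >= thr).astype(int)
    print(f"threshold={thr}: micro-F1={f1_score(y_true, y_pred, average='micro'):.3f}")
```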
📚 Citation
If you use this dataset in your research, please cite it as follows:
```bibtex
@misc{touhou-embeddings-dataset,
  author       = {Preacher-26},
  title        = {touhou-embeddings-dataset},
  year         = {2025},
  howpublished = {Hugging Face Datasets}
}
```
Dataset Description (Zh)
This dataset contains images of characters from the Touhou Project series, together with feature embeddings extracted by the WD-ConvNeXt tagger and pre-trained classifiers. It is primarily intended for multi-label classification research, linear probing experiments, and character recognition tasks.
Key Features
- Embeddings are extracted from the penultimate layer of `SmilingWolf/wd-convnext-tagger-v3`.
- Pre-processed `.npy` train/validation tensors are included, so downstream models can be trained directly without heavy image preprocessing.
- Trained scikit-learn One-vs-Rest logistic regression models are included (`.joblib`).
Statistics
- Total samples: 10,786
- Split:
- Train: 8,687
- Validation: 2,099
- Classes (characters): 134 unique tags
Distribution
The distribution is imbalanced: popular characters have far more samples than long-tail characters.
- Top 3: Hakurei Reimu (1,022), Kirisame Marisa (833), Cirno (696).
- Bottom 3: Motoori Kosuzu (224), Ruukoto (102), Satsuki Rin (100).
Directory Structure
- `images/`: source image files.
- `embeddings_backbone/`:
- `X_train.npy` / `X_val.npy`: extracted feature vectors.
- `y_train.npy` / `y_val.npy`: multi-hot encoded labels.
- `touhou_classifier_br_list.joblib`: a list of 100+ binary logistic regression models.
- `labels.csv`: metadata table with ID, filename, split, and raw tags.
Notes
- Missing data: due to network conditions during collection, this dataset does not cover every Touhou Project character.
- Even the smallest class (e.g. Satsuki Rin) retains about 100 samples, giving it some value for few-shot learning.