# hcooch2ch3/eval_wood_sticks
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "omx_follower",
"total_episodes": 3,
"total_frames": 5110,
"total_tasks": 1,
"total_videos": 3,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:3"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
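For quick inspection, a minimal loading sketch (not part of the original card; the import path and attribute names vary across lerobot versions):
```python
# A minimal sketch, not from the original card; older lerobot releases
# expose the loader at lerobot.common.datasets.lerobot_dataset.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("hcooch2ch3/eval_wood_sticks")
print(dataset.num_episodes, dataset.num_frames)  # expect 3 episodes, 5110 frames
frame = dataset[0]  # dict with "action", "observation.state", "observation.images.front", ...
```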
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | 2025-11-11T17:55:22+00:00 | 2025-11-12T17:57:22+00:00 | 0 |
# ts0pwo/20K_real_and_deepfake_images

This dataset contains the test images used to evaluate our deepfake detection framework. It originally contained 20,000 real and deepfake images, but since some 2,600 files are protected by UK Crown copyright and we do not have permission to reproduce them, these files were removed.
Our framework comprises four machine learning models, which take as input the original images, error-level analysis (ELA) images, noise analysis (NA) images, and principal component analysis (PCA) images.
The models were created using TensorFlow version 2.26.2.
In this repository, the original images are stored.
"task_categories:image-classification",
"language:en",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us",
"deepfake"
] | 2025-11-05T16:30:27+00:00 | 2025-11-12T17:57:42+00:00 | 0 |
# KakologArchives/KakologArchives
# Niconico Jikkyo Past Log Archive
The Niconico Jikkyo Past Log Archive is a dataset collecting all past log comments from [Niconico Jikkyo](https://jk.nicovideo.jp), from the start of the service to the present.
In December 2020, Niconico Jikkyo was [relaunched as an official channel within Niconico Live](https://blog.nicovideo.jp/niconews/143148.html).
With this, the old system, in operation since November 2009, was discontinued (effectively ending the service); as support for consumer devices such as torne and BRAVIA ended across the board, roughly eleven years of past logs, filled with the raw voices of their time, were about to be lost as well.
Members of 5ch's DTV board therefore launched a project to archive the past logs of all channels for those eleven years before the old Niconico Jikkyo shut down. After various twists and turns, Nekopanda managed to capture the past logs of every channel, including radio and BS broadcasts, for roughly eleven years, and the loss of eleven years of past logs into the digital void was averted.
However, because the old API was retired, past logs can no longer be fetched via the API, and since the archive totals roughly 150 GB, finding the range of logs you want within it is no longer as easy as it once was.
Meanwhile, in the new Niconico Jikkyo, now an official channel within Niconico Live, timeshifts (the equivalent of past logs in the old Niconico Jikkyo) can only be viewed for three weeks, after which the past logs become unavailable.
Regular members must also reserve timeshifts in advance, so the old convenience has been lost.
We believe that the comments about Japanese TV broadcasts posted to Niconico Jikkyo are historically valuable material that concisely reflects the public mood and backdrop of their time.
To preserve all of Niconico Jikkyo's past logs for posterity, this dataset combines all past logs of the old Niconico Jikkyo up to 2020/12/15, as distributed by Nekopanda, with the new Niconico Jikkyo including community live-commentary programs, and, since 2024/06/10, the same-day past logs of [NX-Jikkyo](https://nx-jikkyo.tsukumijima.net/), an alternative comment server for live commentary, collected every five minutes and reflected continuously.
There is also an [API](https://jikkyo.tsukumijima.net/) for fetching past logs easily.
Please feel free to use it as well.
## Dataset Structure
### Builder Config
| Key | Value Type | Default Value | Description |
| --------------- | ---------- | ------------- | ----------- |
| channel_id      | string     | None          | ID of the Niconico Jikkyo channel whose past logs to fetch (all channels if omitted) |
| year            | int        | None          | Year of the past logs to fetch (all years if omitted) |
| number_of_files | int        | None          | Number of past log files to fetch (all files if omitted) |
### Data Splits
| Split | Approximate Size | Description |
| ------- | ---------------- | ----------- |
| sample  | 1GB              | As a sample, fetches all past log comments for TOKYO MX (ID: jk9) posted during 2022. Roughly 1 GB. |
| all     | 190GB            | Fetches all past log comments for every channel and period. Beware: this exceeds 190 GB. |
### Data Fields
| Field | Type | Description |
| --------------- | -------- | ----------- |
| thread          | string   | Thread ID of the comment |
| no              | int64    | Comment number |
| vpos            | int64    | Playback position of the comment, counted from the thread ID (in units of 1/100 s) |
| date            | int64    | UNIX timestamp of the comment post time |
| date_usec       | int64    | Sub-second part of the comment post time |
| user_id         | string   | User ID (anonymized when the 184 command is specified, and shuffled after about a week) |
| mail            | string   | Comment commands (e.g. 184, red naka big; may be omitted) |
| premium         | boolean  | True if the commenting user is a premium member |
| anonymity       | boolean  | True if the comment is anonymous |
| content         | string   | Comment body (beware of occasional multi-line comments, e.g. ASCII art) |
## Example
```python
from datasets import load_dataset
dataset = load_dataset('KakologArchives/KakologArchives', 'all', channel_id='jk211', year=2023, number_of_files=10)
for data in dataset['train']:
print(data)
```
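As a small addition (not in the original card), here is one way to convert the fields above, reusing `data` from the loop and assuming `date_usec` is in microseconds:
```python
# A minimal sketch; assumes date_usec is in microseconds (the card only
# says it is the sub-second part of the post time).
from datetime import datetime, timezone, timedelta

JST = timezone(timedelta(hours=9))
posted_at = datetime.fromtimestamp(data['date'] + data['date_usec'] / 1_000_000, tz=JST)
playback_seconds = data['vpos'] / 100  # vpos is in units of 1/100 s
print(posted_at.isoformat(), playback_seconds)
```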
## Licensing Information
[MIT License](https://opensource.org/license/mit/)
"task_categories:text-classification",
"language:ja",
"license:mit",
"region:us"
] | 2023-05-12T13:31:56+00:00 | 2025-11-12T17:56:42+00:00 | 0 |
# chrisrca/clash-royale-tv-replays
# Clash Royale TV Replays
Frame-by-frame gameplay recordings (~10 fps) from Clash Royale's TV Royale, covering all 31 arenas. Automated recording using tools from our [GitHub repository](https://github.com/chrisrca/CS541-Deep-Learning-Clash-Royale-Project/tree/emulation).
## Structure
```
arena_{XX}/{replay_uuid}/
├── frames.parquet # Frame data
└── preview.jpg # First frame thumbnail
```
**Parquet Schema:**
- `frame_id` (int64): Frame number
- `image` (Image): PNG bytes
- `hash` (string): MD5 for deduplication
## Usage
```python
from huggingface_hub import hf_hub_download
import pyarrow.parquet as pq
# Substitute a real arena id and replay uuid from the repository's file listing.
path = hf_hub_download(
    repo_id="chrisrca/clash-royale-tv-replays",
    filename="arena_{arena_id}/{replay_id}/frames.parquet",
    repo_type="dataset",
    token=HF_TOKEN  # a Hugging Face access token string, e.g. os.environ["HF_TOKEN"]
)
table = pq.read_table(path)
print(f"Loaded {len(table)} frames from {path}")
```
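To decode a frame from the table, something like the following should work (a sketch, not from the original card; depending on how the `image` column was written it may hold raw PNG bytes or a `{bytes, path}` struct):
```python
# A minimal sketch of decoding one frame from the parquet table above.
import io
from PIL import Image

row = table.slice(0, 1).to_pylist()[0]
png = row["image"]["bytes"] if isinstance(row["image"], dict) else row["image"]
frame = Image.open(io.BytesIO(png))
print(frame.size)  # expected 540x960 per the details below
```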
## Details
- **Resolution**: 540x960
- **Format**: PNG frames (ZSTD compressed)
- **Deduplication**: Only unique frames saved
- **Collection**: Automated via Android emulator
"task_categories:feature-extraction",
"license:mit",
"region:us",
"clash-royale",
"replays",
"gaming",
"computer-vision",
"parquet",
"image-dataset",
"video-frames",
"mobile-gaming"
] | 2025-11-10T02:39:02+00:00 | 2025-11-12T17:56:35+00:00 | 1 |
# oxe-aug/language_table_train_160000_165000_augmented
## Overview
- **Codebase version**: `v2.1`
- **Robots**: google_robot, images, jaco, kinova3, kuka_iiwa, panda, sawyer, ur5e
- **FPS**: 10
- **Episodes**: 5,000
- **Frames**: 79,439
- **Videos**: 40,000
- **Chunks**: 5
- **Splits**:
- `train`: `0:5000`
## Data Layout
```text
data_path : data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet
video_path: videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4
```
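For illustration (not part of the original card), the templates above can be resolved like this, assuming 1,000 episodes per chunk (5,000 episodes over 5 chunks; verify against `meta/info.json`):
```python
# A minimal sketch of resolving the path templates above.
episode_index = 1234
episode_chunk = episode_index // 1000  # assumed chunk size of 1,000 episodes
video_key = "observation.images.panda"

data_path = f"data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = f"videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"
print(data_path)   # data/chunk-001/episode_001234.parquet
print(video_path)  # videos/chunk-001/observation.images.panda/episode_001234.mp4
```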
## Features
| Feature | dtype | shape | description |
|---|---:|---:|---|
| `observation.images.google_robot` | `video` | `360×640×3` | Augmented image for google_robot robot |
| `observation.images.image` | `video` | `360×640×3` | Source robot's image from original dataset |
| `observation.images.jaco` | `video` | `360×640×3` | Augmented image for jaco robot |
| `observation.images.kinova3` | `video` | `360×640×3` | Augmented image for kinova3 robot |
| `observation.images.kuka_iiwa` | `video` | `360×640×3` | Augmented image for kuka_iiwa robot |
| `observation.images.panda` | `video` | `360×640×3` | Augmented image for panda robot |
| `observation.images.sawyer` | `video` | `360×640×3` | Augmented image for sawyer robot |
| `observation.images.ur5e` | `video` | `360×640×3` | Augmented image for ur5e robot |
| `episode_index` | `int64` | `1` | - |
| `frame_index` | `int64` | `1` | - |
| `index` | `int64` | `1` | - |
| `natural_language_instruction` | `int32` | `512` | - |
| `observation.ee_pose` | `float32` | `7` | Source robot's eef position |
| `observation.google_robot.base_orientation` | `float32` | `1` | Rotation along the z-axis, CCW, applied so the robot does not block the camera (mostly 0) |
| `observation.google_robot.base_position` | `float32` | `3` | Base translation applied so the trajectory remains achievable |
| `observation.google_robot.ee_error` | `float32` | `7` | The eef difference between the augmented google_robot robot and the original robot |
| `observation.google_robot.ee_pose` | `float32` | `7` | The eef position of google_robot robot |
| `observation.google_robot.joints` | `float32` | `8` | The joint position of google_robot robot |
| `observation.jaco.base_orientation` | `float32` | `1` | Rotation along the z-axis, CCW, applied so the robot does not block the camera (mostly 0) |
| `observation.jaco.base_position` | `float32` | `3` | Base translation applied so the trajectory remains achievable |
| `observation.jaco.ee_error` | `float32` | `7` | The eef difference between the augmented jaco robot and the original robot |
| `observation.jaco.ee_pose` | `float32` | `7` | The eef position of jaco robot |
| `observation.jaco.joints` | `float32` | `7` | The joint position of jaco robot |
| `observation.joints` | `float32` | `8` | Joint angle of source robot |
| `observation.kinova3.base_orientation` | `float32` | `1` | Rotation along the z-axis, CCW, applied so the robot does not block the camera (mostly 0) |
| `observation.kinova3.base_position` | `float32` | `3` | Base translation applied so the trajectory remains achievable |
| `observation.kinova3.ee_error` | `float32` | `7` | The eef difference between the augmented kinova3 robot and the original robot |
| `observation.kinova3.ee_pose` | `float32` | `7` | The eef position of kinova3 robot |
| `observation.kinova3.joints` | `float32` | `8` | The joint position of kinova3 robot |
| `observation.kuka_iiwa.base_orientation` | `float32` | `1` | Rotation along the z-axis, CCW, applied so the robot does not block the camera (mostly 0) |
| `observation.kuka_iiwa.base_position` | `float32` | `3` | Base translation applied so the trajectory remains achievable |
| `observation.kuka_iiwa.ee_error` | `float32` | `7` | The eef difference between the augmented kuka_iiwa robot and the original robot |
| `observation.kuka_iiwa.ee_pose` | `float32` | `7` | The eef position of kuka_iiwa robot |
| `observation.kuka_iiwa.joints` | `float32` | `8` | The joint position of kuka_iiwa robot |
| `observation.panda.base_orientation` | `float32` | `1` | Rotation along the z-axis, CCW, applied so the robot does not block the camera (mostly 0) |
| `observation.panda.base_position` | `float32` | `3` | Base translation applied so the trajectory remains achievable |
| `observation.panda.ee_error` | `float32` | `7` | The eef difference between the augmented panda robot and the original robot |
| `observation.panda.ee_pose` | `float32` | `7` | The eef position of panda robot |
| `observation.panda.joints` | `float32` | `8` | The joint position of panda robot |
| `observation.sawyer.base_orientation` | `float32` | `1` | Rotation along the z-axis, CCW, applied so the robot does not block the camera (mostly 0) |
| `observation.sawyer.base_position` | `float32` | `3` | Base translation applied so the trajectory remains achievable |
| `observation.sawyer.ee_error` | `float32` | `7` | The eef difference between the augmented sawyer robot and the original robot |
| `observation.sawyer.ee_pose` | `float32` | `7` | The eef position of sawyer robot |
| `observation.sawyer.joints` | `float32` | `8` | The joint position of sawyer robot |
| `observation.state` | `float32` | `2` | Copy of the state field in source robot's RLDS dataset |
| `observation.ur5e.base_orientation` | `float32` | `1` | Rotation along the z-axis, CCW, applied so the robot does not block the camera (mostly 0) |
| `observation.ur5e.base_position` | `float32` | `3` | Base translation applied so the trajectory remains achievable |
| `observation.ur5e.ee_error` | `float32` | `7` | The eef difference between the augmented ur5e robot and the original robot |
| `observation.ur5e.ee_pose` | `float32` | `7` | The eef position of ur5e robot |
| `observation.ur5e.joints` | `float32` | `7` | The joint position of ur5e robot |
| `task_index` | `int64` | `1` | - |
| `timestamp` | `float32` | `1` | - |
## Website
- Website page: [https://oxe-aug.github.io/](https://oxe-aug.github.io/)
- Project repository: [https://github.com/GuanhuaJi/oxe-aug](https://github.com/GuanhuaJi/oxe-aug)
## Paper
- [https://arxiv.org/abs/2210.06407](https://arxiv.org/abs/2210.06407)
## Citation Policy
If you use **OXE-Aug** datasets, please cite **both** our dataset and the **upstream datasets**.
## Upstream Dataset Citation (original dataset)
```bibtex
@article{lynch2022interactive,
title = {Interactive Language: Talking to Robots in Real Time},
author = {Corey Lynch and Ayzaan Wahid and Jonathan Tompson and Tianli Ding and James Betker and Robert Baruch and Travis Armstrong and Pete Florence},
journal = {arXiv preprint arXiv:2210.06407},
year = {2022},
url = {https://arxiv.org/abs/2210.06407}
}
```
## OXE-Aug Dataset Citation (ours)
```bibtex
@misc{
ji2025oxeaug,
title = {OXE-Aug: A Large-Scale Robot Augmentation of OXE for Scaling Cross-Embodiment Policy Learning},
author = {Ji, Guanhua and Polavaram, Harsha and Chen, Lawrence Yunliang and Bajamahal, Sandeep and Ma, Zehan and Adebola, Simeon and Xu, Chenfeng and Goldberg, Ken},
year = {2025},
note = {Manuscript}
}
```
"task_categories:robotics",
"license:cc-by-4.0",
"arxiv:2210.06407",
"region:us",
"robotics",
"lerobot",
"oxe-aug",
"dataset"
] | 2025-11-12T13:41:09+00:00 | 2025-11-12T17:56:08+00:00 | 0 |
# ts0pwo/20K_real_and_deepfake_images_ELA

This dataset contains the test images used to evaluate our deepfake detection framework. It originally contained 20,000 real and deepfake images, but since some 2,300 files are protected by UK Crown copyright and we do not have permission to reproduce them, these files were removed.
Our framework comprises four machine learning models, which take as input the original images, error-level analysis (ELA) images, noise analysis (NA) images, and principal component analysis (PCA) images.
The models were created using TensorFlow version 2.26.2.
In this repository, the ELA images are stored.
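For reference, ELA images like these are commonly produced by re-saving an image as JPEG and taking the per-pixel difference. A minimal Pillow sketch (not necessarily this dataset's exact pipeline; the quality setting is an assumption):
```python
# A minimal ELA sketch with Pillow; quality=90 is an assumed parameter.
from PIL import Image, ImageChops

original = Image.open("input.jpg").convert("RGB")
original.save("resaved.jpg", "JPEG", quality=90)
resaved = Image.open("resaved.jpg")
ela = ImageChops.difference(original, resaved)
ela.save("ela.png")
```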
"task_categories:image-classification",
"language:en",
"size_categories:10K<n<100K",
"region:us",
"deepfake"
] | 2025-11-12T16:01:17+00:00 | 2025-11-12T17:54:18+00:00 | 0 |
# mixture-vitae/MixtureVitae-2TT

# Aurora-M2
We are still uploading data...
This is a **multilingual, permissive, partially synthetic, decontaminated pre-training** dataset. It consists of cc-by, public-domain, and governmental website content. This dataset will eventually contain approximately 2 trillion tokens.
We have an overlap with many of the other permissively licensed datasets, such as Common Corpus, Common Pile, OLC, and KL3M, but we performed different filtering, collated similar data together to form around 4K tokens per example, and included a large amount of synthetic data (derived from permissive data or licensed permissively).
About half of the dataset is synthetic, with a large portion being permissively licensed code, math, and science reasoning traces. We took care to investigate whether the model that was used to generate the data and the ultimate source of the data are permissively usable.
Note that there are concerns of model collapse in using synthetic datasets in pretraining, and you may wish to use techniques to mitigate this.
This dataset is intended for pretraining a foundational LLM. It includes:
- Business & politics - Mostly from SEC filings, along with contracts from CUAD, and Parliament debates from the Aurora-M1 dataset
- Fineweb - .gov.* and cc-by websites, from FineFineweb. We attach domain labels to web files to improve training.
- Formatted Text (JSON, YAML, HTML, etc. from starcoder v1, plus websights)
- Law from OLC
- MAGACorpus synthetic derived from .gov.* and cc-by websites
- Math - from DM math and a small set of procedurally generated math problems by the authors
- Nemo high synthetic derived from .gov.* and cc-by websites
- News from OLC
- Science and Tech - EuroPat with synthetic image captions, and USPTO from Pile and TXT360, with arXiv abstracts, CC-BY papers, and PubMed; peS2o from common-pile, OLC, and elsevier-oa-cc-by.
- Software in select languages (Python, Java, etc.) from starcoder v1.
  * We use starcoder v1 instead of starcoder v2 because of the additional licensing requirements from Software Heritage. While Starcoder v2 is excellent, MixtureVitae is an exercise in creating a dataset that is easy to use with fewer licensing hurdles.
- Stackexchange - Mostly from TXT360 and RedPajama v1
- Wiki - MegaWiki, and a Wikipedia copy from TXT360. There is also a substantial portion of Wikipedia in the Fineweb subset. We have also included a reformatted version of meta-active-reading.
- Youtube - Common Corpus, Finevideo and VALID. For the VALID dataset, we included image captions of key frames, along with Q/A about the video at the end of some videos.
- Synthetic & Instructions - From permissively licensed data (CC-BY-SA, Apache, etc.) - Ling-coder, Ring-Lite, Glaive reasoning, Nemo Math and Science, Open Thoughts, Prism-math, and the p3 dataset converted to few-shot format
  * We have avoided datasets generated by commercial models, as well as the Llama models and other models with licenses that have restrictions on commercial usage. We do use outputs of certain Apache-licensed Qwen models, Phi models, and R1 models. Where there is a clear mixture of outputs - for example, instructions from Qwen 70B under the Qwen license with outputs from R1 - we stripped out the problematic Qwen-generated instructions. The inputs for these synthetic data are also, to our knowledge, from permissive sources.
  * More synthetic data than the 211BT mixture
- Multilingual .gov and cc-by websites from Dcad (which is based on Fineweb2), and CulturaY
- Aya multilingual (without the English subset)
Please be aware that we use the <|endoftext|> token to separate documents within each example. We recommend replacing this token with the appropriate eos token of the tokenizer you use to train your model. We have also used `<think>` and `</think>` tokens in some reasoning datasets; you may wish to add these as special tokens.
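For illustration, a minimal sketch of that replacement (the `text` field name and the gpt2 tokenizer are assumptions, not specified by this card; for gpt2 the replacement is a no-op, since its eos token is itself `<|endoftext|>`):
```python
# A minimal sketch; the "text" field and gpt2 tokenizer are assumptions.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # substitute your target tokenizer
example = {"text": "first document<|endoftext|>second document"}  # stand-in row
text = example["text"].replace("<|endoftext|>", tokenizer.eos_token)
input_ids = tokenizer(text)["input_ids"]
```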
All of our work that is not derived from the underlying data, such as our organization, tagging, and data formatting, is licensed by us under the ODC-By license.
**Please note:** We have found in early ablation studies that a small percentage of instruction data added to our 5BT, 10BT, and 15BT pretraining ablations does convey instruction-following skills. This allows trainers to probe their models with instructions, among other things. However, we found that adding refusals for alignment caused the model to over-refuse during pretraining.
Users should experiment with various proportions for their purposes, but we believe a random sample of this dataset could form a "fair" comparison to other similar datasets.
Since this is a working version, and not the final version, there may be errors in tagging or formatting. Also, this version is NOT an aligned version of the dataset. We will release an aligned version that performs more rigorous debiasing and anonymization.
Under the MixtureVitae datasets, we consider data that is in the public domain, out of copyright, cc-by-*, software under open-source (but non-GPL) licenses, or other openly licensed content, as well as certain .gov. data (for which we believe there is a strong fair use argument), as low copyright risk. Permissive, here, means we think there is lower risk for a researcher to train on the data.
But we believe that the risks of infringement for training exist on a continuum and can vary by the type and purpose of usage, with content created solely by the authors of this dataset the least risky, cc-by content carrying intermediate risk, and .gov. content being riskier than open-source content even under a fair use analysis. Risks can also vary by jurisdiction.
Even when content is cc-by licensed or published on a government website, this doesn't mean there is no copyright risk. For example, a government website may cite a copyrighted work, an open-source GitHub repo may include third-party copyrighted content (for example, a product description) in a markdown page, or a Wikipedia cc-by-sa page may include quotes from movies. See our blog at https://aurora-lm.github.io/posts/mixturevitae/ for a longer discussion. See https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-3-Generative-AI-Training-Report-Pre-Publication-Version.pdf for a US-oriented analysis. Laws are constantly changing, especially AI laws, so it is best to keep abreast of the current legal risks with your attorneys.
We also think that the risk of infringement during training is different from that at inference. For example, training might be fair use, because it is more transformative, at least in the US, but outputting verbatim text could very well be infringement if the content was not permissively licensed or allowed to be distributed.
While we have done extensive work to create a permissively usable training dataset, please consult your own attorneys about any legal risks in using this dataset.
TODO:
We will include multimodal tokens. The multimodal data is tokenized SNAC, SEED2, and JPEG data.
## Web data from Common Crawl
A portion of our data across the various subsets is derived from Common Crawl, and thus subject to the Common Crawl terms of use: https://commoncrawl.org/terms-of-use
Common Crawl respects robots.txt prohibitions, but it includes many commercial websites available on the Internet. To limit copyright risks, we applied the following filters.
We start with FineFineweb, which is a domain-labeled version of Fineweb, which in turn is a filtered version of Common Crawl.
We filtered based on a list of potential government and NGO websites/URL patterns:
- `.mil/`
- `.vlada.mk`
- `.vlada.cz`
- `.kormany.hu`
- `regeringen.` (matches domains like regeringen.se, regeringen.no, etc.)
- `.rijksoverheid.nl`
- `.government.nl`
- `.regeringen.se`
- `.regeringen.dk`
- `.regeringen.no`
- `.bund.de`
- `.bundesregierung.de`
- `.government.ru`
- `.gc.ca`
- `.admin.ch`
- `www.gob.cl/`
- `www.gob.ec/`
- `guatemala.gob.gt/`
- `presidencia.gob.hn/`
- `www.gob.mx/`
- `presidencia.gob.pa/`
- `www.gob.pe/`
- `gob.es/`
- `argentina.gob.ar/`
- `tanzania.go.tz/`
- `.indonesia.go.id/`
- `.go.kr/`
- `.go.jp/`
- `thailand.go.th/`
- `.europa.eu/`
- `.un/`
- `.int/`
- `.govt.`
- `www.gub.uy`
- `.gov` (as suffix, e.g. `idx.endswith(".gov")`)
- `.gov/`
- `.gov.`
- `.gouv.`
And we created a list of around 50 websites that we know to be cc-by-* or public-domain websites. We chose general wikis, software and technology sites, law-related sites, and other known sites. We read the terms of use to confirm, to the extent we could, that they permitted permissive usage before adding these 50 or so domains. (A sketch of the URL-pattern matching follows the list below.)
- `.free.law/`
- `.europeana.eu/`
- `.publicdomainreview.org/`
- `.wisdomcommons.org/`
- `.intratext.com/`
- `.mediawiki.org/`
- `.wikimedia.org/`
- `.wikidata.org/`
- `.wikipedia.org/` *
- `.wikisource.org/`
- `.wikifunctions.org/`
- `.wikiquote.org/`
- `.wikinews.org/`
- `.wikivoyage.org/`
- `.wiktionary.org/`
- `.wikibooks.org/`
- `.courtlistener.com/`
- `.case.law/`
- `pressbooks.oer.hawaii.edu/`
- `.huggingface.co/docs/`
- `.opencourselibrary.org/`
- `.medbiq.org/`
- `.doabooks.org/`
- `.bccampus.ca/`
- `open.umn.edu/opentextbooks/`
- `www.gutenberg.org/`
- `.mozilla.org/`
- `www.eclipse.org/`
- `.apache.org/`
- `.python.org/`
- `.pytorch.org/`
- `.numpy.org/`
- `.scipy.org/`
- `.opencv.org/`
- `.scikit-learn.org/`
- `.pydata.org/`
- `.matplotlib.org/`
- `.palletsprojects.com/`
- `.sqlalchemy.org/`
- `.pypi.org/`
- `.sympy.org/`
- `.nltk.org/`
- `.scrapy.org/`
- `.owasp.org/`
- `.creativecommons.org/`
- `.wikia.com/`
- `.foodista.com/`
- `.fandom.com/`
- `.attack.mitre.org/`
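A minimal sketch of the URL-pattern matching described above (pattern lists abridged to a few entries from the full lists in this card):
```python
# A minimal sketch; pattern lists abridged from the full lists above.
GOV_PATTERNS = [".mil/", ".europa.eu/", ".gov/", ".gov.", ".gouv."]
PERMISSIVE_PATTERNS = [".wikipedia.org/", ".apache.org/", ".creativecommons.org/"]

def keep_url(url: str) -> bool:
    idx = url.lower()
    return (idx.endswith(".gov")
            or any(p in idx for p in GOV_PATTERNS)
            or any(p in idx for p in PERMISSIVE_PATTERNS))

assert keep_url("https://www.cdc.gov/flu")
assert not keep_url("https://example.com/blog")
```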
While we do include Wikipedia in the above, we do not include Stack Exchange, because Wikipedia has many subdomains that might be more diverse in a web crawl, and we already have a highly formatted subset of Stack Exchange. In future iterations, we may also include the web-crawled version of Stack Exchange from Common Crawl.
We also searched for keywords, such as "cc-by-sa", in the headers and footers of FineFineweb pages and applied heuristics to filter out problematic instances. The terms of use of the above sites might, for example, provide 'unless otherwise stated, the contents are licensed under cc-by-sa...'. Because of caveats like these, we also had heuristic filters, such as filtering out documents that include "all rights reserved."
We also had a block list of sites which we do not use, even if they might contain cc-by content, including common news websites.
Note that we included Wikipedia both from the TXT360 subset and via Megawiki and FineFineweb, so there will be duplicated Wikipedia pages.
For the TXT360 Wikipedia subset, we filtered out pages about people who are still alive, using patterns such as "... born March 1, 1999) is a German ...". The reason is that we wish to minimize memorization of personal information. Note that we perform further forms of anonymization in our aligned MixtureVitae dataset. (A sketch of this pattern match appears below.)
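A minimal sketch of such a biography-pattern check (the exact regex is an assumption, not the authors' filter):
```python
# A minimal sketch; not the authors' exact pattern.
import re

BIO_RE = re.compile(r"born [A-Z][a-z]+ \d{1,2}, \d{4}\) is an? ")

def looks_like_living_person(page_text: str) -> bool:
    return bool(BIO_RE.search(page_text))

assert looks_like_living_person("Jane Doe (born March 1, 1999) is a German author.")
```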
Most of our dataset consists of government websites or wikis.
Table 2. The top domains in our web (FineFineweb) subset:
| **Domain** | **Count** |
|------------------------------------|---------|
| m.wikipedia.org * | 167428 |
| nlm.nih.gov | 113078 |
| www.federalregister.gov | 98579 |
| nsw.gov.au | 67952 |
| vic.gov.au | 59044 |
| ec.europa.eu | 43850 |
| m.wikisource.org | 38916 |
| www.justice.gov | 38377 |
| qld.gov.au | 35866 |
| jst.go.jp | 34033 |
| ars.usda.gov | 31073 |
| wa.gov.au | 28598 |
| www.cdc.gov | 28115 |
| www.gov.uk | 26916 |
| www.nps.gov | 26298 |
| www.gov.scot | 26145 |
| eric.ed.gov | 25102 |
| reliefweb.int | 24877 |
| clinicaltrials.gov | 24611 |
| sa.gov.au | 20603 |
| chroniclingamerica.loc.gov | 20242 |
| www.army.mil | 20003 |
| history.state.gov | 19195 |
| cordis.europa.eu | 18856 |
| nal.usda.gov | 17032 |
| www.wipo.int | 17021 |
| www.mass.gov | 14921 |
| www.fda.gov | 14853 |
| ukurier.gov.ua | 14808 |
| founders.archives.gov | 14266 |
| act.gov.au | 13822 |
| mn.gov | 13767 |
| www.sec.gov | 13501 |
| bugzilla.mozilla.org | 13004 |
| fhwa.dot.gov | 12738 |
| www.gao.gov | 12690 |
| djvu.wikisource.org | 11954 |
| leg.wa.gov | 11766 |
| www.state.gov | 11468 |
| fs.usda.gov | 11383 |
| aph.gov.au | 11151 |
| apps.dtic.mil | 11097 |
| mail.python.org | 10554 |
| gov.bc.ca | 10514 |
| usace.army.mil | 9973 |
| www.congress.gov | 9882 |
| 2009-2017.state.gov | 9581 |
| military-history.fandom.com | 9313 |
| www.nysenate.gov | 9306 |
| www.epa.gov | 9001 |
| abs.gov.au | 8824 |
| tas.gov.au | 8784 |
| m.wikibooks.org | 8736 |
| gov.on.ca | 8696 |
| gsfc.nasa.gov | 8586 |
| www.fws.gov | 8386 |
| www.ntsb.gov | 8130 |
| blog.gov.uk | 8091 |
| legis.wisconsin.gov | 8070 |
| www.nasa.gov | 8067 |
| cfpub.epa.gov | 7943 |
| www.loc.gov | 7742 |
| www.usgs.gov | 7688 |
| www.clinicaltrials.gov | 7517 |
| natlib.govt.nz | 7465 |
| www.michigan.gov | 7395 |
| ato.gov.au | 7279 |
| sp.gov.br | 7208 |
| www.nist.gov | 7173 |
| obamawhitehouse.archives.gov | 7170 |
| www.nyc.gov | 7111 |
| justice.gc.ca | 7086 |
| service.gov.uk | 7085 |
| nationalarchives.gov.uk | 7082 |
| www.sbir.gov | 7012 |
| www.akleg.gov | 6969 |
| www.defense.gov | 6941 |
| nt.gov.au | 6878 |
| m.wikiquote.org | 6869 |
| niehs.nih.gov | 6867 |
| revisor.mn.gov | 6800 |
| www.dol.gov | 6632 |
| gouv.qc.ca | 6545 |
| statcan.gc.ca | 6509 |
| wwwnc.cdc.gov | 6389 |
| ons.gov.uk | 6301 |
| legislation.gov.uk | 6207 |
| research.va.gov | 6198 |
| eurofound.europa.eu | 6034 |
| portal.ct.gov | 5909 |
| nla.gov.au | 5905 |
| codes.ohio.gov | 5807 |
| www.energy.gov | 5805 |
| oai.dtic.mil | 5757 |
| georgewbush-whitehouse.archives.gov | 5674 |
| health.gov.au | 5554 |
| dec.ny.gov | 5448 |
| www.ftc.gov | 5404 |
| forecast.weather.gov | 5398 |
| aspe.hhs.gov | 5358 |
Table 3. Overlap with common-pile. There is about a 0.1% overlap with common-pile's cccc subset, which unsurprisingly includes government websites:
| **Domain** | **Count** |
|------------------------------------|-------|
| nsw.gov.au | 67952 |
| qld.gov.au | 35866 |
| abs.gov.au | 8824 |
| addons.mozilla.org | 4045 |
| conicet.gov.ar | 3020 |
| awm.gov.au | 2142 |
| eea.europa.eu | 1966 |
...
### Analysis of data
Notice the compression rate vs. the contamination rate.
Table 1. Raw sizes of various subsets, their compressed sizes, and compression ratios.
(This table is not yet complete...)
| Folder | Uncompressed Size | Compressed Size (Sum of files) | Compression Ratio |
| ----------------------- | ----------------- | ------------------------------ | ----------------- |
| **synthetic\_instruct** | 615 GB | 142.16 GB | **4.33×** |
| **software** | 120 GB | 29.49 GB | **4.07×** |
| **wiki** | 215 GB | 55.75 GB | **3.86×** |
| **nemo** | 49 GB | 13.15 GB | **3.73×** |
| **math** | 11 GB | 2.97 GB | **3.70×** |
| **maga** | 33 GB | 9.5 GB | **3.47×** |
| **formatted\_text** | 50 GB | 14.98 GB | **3.34×** |
| **business** | 884 MB | 266 MB | **3.32×** |
| **youtube** | 23 GB | 6.71 GB | **3.43×** |
| **stackexchange** | 94 GB | 32.31 GB | **2.91×** |
| **law** | 82 GB | 28.28 GB | **2.90×** |
| **fineweb** | 88 GB | 30.68 GB | **2.87×** |
| **news** | 1.1 GB | 387 MB | **2.84×** |
Decontaminated following a Phi-4-like method (13-gram overlap, except in cases where the 13-grams also appear in the train set, Wikipedia, or public-domain books) against the following benchmarks (a sketch of the overlap check follows the list):
- Agieval
- ARC
- MBPP
- MBPPPlus
- MMLU
- Gsm8k
- MATH
- ToxiGen
- COPA
- OpenBookQA
- Winogrande
- BoolQ
- HellaSwag
- PIQA
- CommonsenseQA
- Humaneval
- HumanevalPlus
- ALERT
- SimpleQA
- DoNotAnswer
- Ifeval
- LAMBADA
- GPQA
- AIME2024
- AIME2025
- HMMT_Feb_2025
- USAMO
- BRUMO
- MMLU_Redux
- MMLU_Pro
- MATH500
- AdvBench
- MuSR
- BBH
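A minimal sketch of the 13-gram overlap check under stated assumptions (whitespace tokenization; the train-set/Wikipedia/public-domain-books exemption is omitted); it is not the authors' pipeline:
```python
# A minimal 13-gram overlap sketch; exemption logic omitted.
def ngrams(tokens, n=13):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

benchmark_texts = ["..."]  # stand-in for benchmark questions and answers
benchmark_ngrams = set().union(*(ngrams(t.split()) for t in benchmark_texts))

def is_contaminated(doc: str) -> bool:
    # Flag a document if any of its 13-grams appears in a benchmark.
    return not ngrams(doc.split()).isdisjoint(benchmark_ngrams)
```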
Removed contaminated data in bytes by file:
| File Name | Size |
|------------|-------|
| nemo_science_math-1_contaminated.jsonl | 322M |
| ring-lite-sft-0_contaminated.jsonl | 198M |
| ring-lite-sft-1_contaminated.jsonl | 196M |
| prism_math_contaminated.jsonl | 156M |
| nemo_science_math-0_contaminated.jsonl | 141M |
| open_thoughts-0_contaminated.jsonl | 135M |
| misc_instruct_contaminated.jsonl | 114M |
| open_thoughts-1_contaminated.jsonl | 90M |
| nemo_science_math-2_contaminated.jsonl | 85M |
| open_thoughts-2_contaminated.jsonl | 73M |
| school_math_contaminated.jsonl | 72M |
| ring-lite-sft-2_contaminated.jsonl | 71M |
| open_thoughts-3_contaminated.jsonl | 67M |
| math_sft-1_contaminated.jsonl | 63M |
| open_thoughts-4_contaminated.jsonl | 59M |
| nemo_science_math-3_contaminated.jsonl | 58M |
| math_sft-0_contaminated.jsonl | 57M |
| math_sft-2_contaminated.jsonl | 54M |
| open_thoughts-5_contaminated.jsonl | 49M |
| math_reasoning_contaminated.jsonl | 48M |
| prism_science_contaminated.jsonl | 46M |
| ring-lite-sft-3_contaminated.jsonl | 45M |
| reasoning_instruct_contaminated.jsonl | 44M |
| nemo_science_math-4_contaminated.jsonl | 43M |
| math_sft-3_contaminated.jsonl | 43M |
| ring-lite-sft-4_contaminated.jsonl | 40M |
| open_thoughts-6_contaminated.jsonl | 38M |
| open_thoughts-7_contaminated.jsonl | 37M |
| nemo_science_math-5_contaminated.jsonl | 36M |
| open_thoughts-8_contaminated.jsonl | 34M |
| ring-lite-sft-5_contaminated.jsonl | 34M |
| open_thoughts-9_contaminated.jsonl | 32M |
| math_sft-4_contaminated.jsonl | 32M |
| nemo_science_math-6_contaminated.jsonl | 31M |
| open_thoughts-10_contaminated.jsonl | 29M |
| ring-lite-sft-6_contaminated.jsonl | 28M |
| open_thoughts-11_contaminated.jsonl | 27M |
| math_sft-5_contaminated.jsonl | 26M |
| open_thoughts-12_contaminated.jsonl | 24M |
| ring-lite-sft-7_contaminated.jsonl | 24M |
| open_thoughts-13_contaminated.jsonl | 22M |
| open_thoughts-14_contaminated.jsonl | 21M |
| nemo_science_math-7_contaminated.jsonl | 20M |
| math_sft-6_contaminated.jsonl | 20M |
| open_thoughts-15_contaminated.jsonl | 18M |
| open_thoughts-16_contaminated.jsonl | 17M |
| ring-lite-sft-8_contaminated.jsonl | 17M |
| open_thoughts-17_contaminated.jsonl | 16M |
| math_sft-7_contaminated.jsonl | 15M |
| nemo_science_math-8_contaminated.jsonl | 15M |
| open_thoughts-18_contaminated.jsonl | 14M |
| ring-lite-sft-9_contaminated.jsonl | 14M |
| open_thoughts-19_contaminated.jsonl | 13M |
| math_sft-8_contaminated.jsonl | 13M |
| open_thoughts-20_contaminated.jsonl | 12M |
| nemo_science_math-9_contaminated.jsonl | 11M |
| open_thoughts-21_contaminated.jsonl | 10M |
| math_sft-9_contaminated.jsonl | 10M |
| open_thoughts-22_contaminated.jsonl | 9M |
| ring-lite-sft-10_contaminated.jsonl | 9M |
| open_thoughts-23_contaminated.jsonl | 8M |
| math_sft-10_contaminated.jsonl | 8M |
| nemo_science_math-10_contaminated.jsonl | 7M |
| open_thoughts-24_contaminated.jsonl | 7M |
| ring-lite-sft-11_contaminated.jsonl | 7M |
| open_thoughts-25_contaminated.jsonl | 6M |
| math_sft-11_contaminated.jsonl | 6M |
| open_thoughts-26_contaminated.jsonl | 6M |
| nemo_science_math-11_contaminated.jsonl | 6M |
| open_thoughts-27_contaminated.jsonl | 5M |
| ring-lite-sft-12_contaminated.jsonl | 5M |
| math_sft-12_contaminated.jsonl | 5M |
| open_thoughts-28_contaminated.jsonl | 4M |
| nemo_science_math-12_contaminated.jsonl | 4M |
| open_thoughts-29_contaminated.jsonl | 3M |
| ring-lite-sft-13_contaminated.jsonl | 3M |
| open_thoughts-30_contaminated.jsonl | 3M |
| math_sft-13_contaminated.jsonl | 3M |
| open_thoughts-31_contaminated.jsonl | 2M |
| nemo_science_math-13_contaminated.jsonl | 2M |
| open_thoughts-32_contaminated.jsonl | 2M |
| ring-lite-sft-14_contaminated.jsonl | 2M |
| math_sft-14_contaminated.jsonl | 2M |
| open_thoughts-33_contaminated.jsonl | 1M |
| open_thoughts-34_contaminated.jsonl | 1M |
| nemo_science_math-14_contaminated.jsonl | 1M |
| open_thoughts-35_contaminated.jsonl | 1M |
| ring-lite-sft-15_contaminated.jsonl | 1M |
| math_sft-15_contaminated.jsonl | 1M |
| # Aurora-M2
We are still uploading data...
This is a **multilingual, permissive, partially synthetic, decontaminated pre-training** dataset. It consists of cc-by, public domain, or governmental websites. This dataset will eventually contain approximately 2 trillion tokens.
We have an overlap with many of the other permissively licensed datasets, such as common corpus, common pile, OLC, KL3M, etc., but we performed different filtering, collated similar data together to form around 4K tokens per example, and included a large amount of synthetic data (derived from permissve data or licensed permissively).
About half of the dataset is synthetic, with a large portion being permissively licensed code, math, and science reasoning traces. We took care to investigate whether the model that was used to generate the data and the ultimate source of the data are permissively usable.
Note that there are concerns of model collapse in using synthetic datasets in pretraining, and you may wish to use techniques to mitigate this.
This dataset is intended for pretraining a foundational LLM. Includes:
- Business & politics - Mostly from SEC filings, along with contracts from CUAD, and Parliament debates from Aurora-M1 dataset
- Fineweb - of .gov.* and cc-by websites, from FineFineweb. We attach domain labels to web files to improve training.
- Formatted Text (JSON, Yaml, HTML, etc from startcoder v1, plus websights)
- Law from OLC
- MAGACorpus synthetic dervied from .gov.* and cc-by websites,
- Math - from DM math and a small of procedurally generated math problems by the authors
- Nemo high synthetic derived from .gov.* and cc-by websites,
- News from OLC
- Science and Tech - Eruo-pat with synthetic image captions, and USPTO from Pile and TXT360, with Arxiv abstracts and CC-BY papers and pubmed, peS2o from common-pile, OLC and elsevier-oa-cc-by.
- Software of select langauges (Python, Java, etc.) from starcoder v1.
* We use starcoder v1 instead of starcoder v2 because of the additional licensing requirements from the Heritage Foundation. While Starcoder v2 is excellent, MixtureVitae is an excercise in creating a dataset that is easy to use with less licensing hurdles.
- Stackexchange - Mostly from TXT360 and RedPajama v1
- Wiki - MegaWiki, and Wikipedia copy from TXT360. There is also a substantial portion of Wikipedia in the Fineweb subset as well. We have also included a reformatted version of meta-active-reading.
- Youtube - Common Corpus, Finevideo and VALID. For the VALID dataset, we included image captions of key frames along with Q/A at the end of some videos about the video.
- Synthetic & Instructions - From permisvelly licensed data (CC-BY-SA, Apache, etc.) - Ling-coder, Ring-Lite, Glaive reasoning, Nemo Math and Science, Open Thoughts, Prism-math, p3 dataset converted to few-shot format
* We have avoided datasets generated by commercial models, as well as the Llama models, and other models with licenses that has restrictions on commercial usage. We do use outputs of certain Apache licensed Qwen models, Phi models, R1 models. Where there is a clear mixture of output - instruction from qwen 70b under the Qwen license and output by R1, we stripped out the problematic Qwen generated instructions. The input for these synthetic data are also, to our knowledge, from permissive sources.
* More synthetic data than the 211BT mixture
- Multilingual .gov, cc-by website from Dcad (which is based on Fineweb2), and CulutraY
- Aya multilingual (without English subset)
Please be aware that we use the <|endoftext|> token to separate documents in each example. We recommend replacing this token with your appropriate eos token from the target tokenizer used for training your model. Also we have used in some reasoning datasets, `<think>` and `</think>` tokens. You may wish to add these special tokens.
All of our work that is not derived from the underlying data, such as our organization, tagging, and data formatting is licensed by us under ODC-By license.
**Please note:** We have found in early ablation studies that a small percentage of instruction data added to our 5BT ablation, 10BT ablations and 15BT ablations pretraining, does convey instruction following skills. This allows trainers to probe their models with instructions, among other things. However, we found that adding refusals for alignment caused the model to overly refuse during pretraining.
Users shoud experiment with various proportions for their purposes, but we believe a random sample of this dataset could form a "fair" comparsion to other similar datasets.
Since this is a working version, and not the final version, there may be errors tagging, or formatting. Also, this version is NOT an aligned version of the dataset. We will release an aligned version which performs more rigoruos debiasing and anonymization.
Under the MixtureVitae datasets, we consider data that is in the public domain, out of copyright, cc-by-*, software under open source (but non GPL licenses), or other open licensed content, as well as certain .gov. data (which we believe there is a strong fair use argument for) as low copyright risk. Permissive, here, means we think there is lower risk for a researcher to train on the data.
But we believe that the risks for infringement for training exists in a continum and can vary by the type and purpose of usage, with content created solely by authors of this dataset the least risky, cc-by content with some intermediate risk, and .gov. content being more risky then open source content even under a fair use analysis. Risks can also vary by jurisdictions.
Even when content is cc-by licensed or published on a government website, this doesn't mean there is not copyright risk. For example, a government website may cite a copyrighted work, an open source github repo may include 3rd party copyrighted content of, for example, product description, in a markdown page, or a Wikipedia cc-by-sa page may include quotes from movies. See our blog here https://aurora-lm.github.io/posts/mixturevitae/ for a longer discussion. See https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-3-Generative-AI-Training-Report-Pre-Publication-Version.pdf for a US oriented analysis. Laws are constantly changing, especial AI laws, so it is best to keep abreast of the current legal risks with your attorneys.
We also think that the risk of infringement during training is different than that of inference. For example, training might be fair use because it is more transformative at least in the US, but outputing verbatim text could very well be infringement if the content was not permissively licensed or allowed to be distributed.
While we have done extensive work to create a permissively usable training dataset, please consult your own attorneys for any legal risks in using this dataset.
TODO:
We will include multimodal tokens. The multimodal data is tokenized SNAC, SEED2 and jpg data.
## Web data from Common Crawl
A portion of our data through the various subsets are dervied from Common Crawl, and thus subject to the Common Crawl terms of use. https://commoncrawl.org/terms-of-use
Common crawl resepcts the robots.txt prohibition. But common-crawl includes many commercial websites available on the Internet. To limit copyright risks we performed the following filters.
We start with FineFineweb which is a domain labeled version of Fineweb, which in turn is a filtered version of Common Crawl.
We filtered based on a list of potential government and NGO websites/URL patterns:
- `.mil/`
- `.vlada.mk`
- `.vlada.cz`
- `.kormany.hu`
- `regeringen.` (matches domains like regeringen.se, regeringen.no, etc.)
- `.rijksoverheid.nl`
- `.government.nl`
- `.regeringen.se`
- `.regeringen.dk`
- `.regeringen.no`
- `.bund.de`
- `.bundesregierung.de`
- `.government.ru`
- `.gc.ca`
- `.admin.ch`
- `www.gob.cl/`
- `www.gob.ec/`
- `guatemala.gob.gt/`
- `presidencia.gob.hn/`
- `www.gob.mx/`
- `presidencia.gob.pa/`
- `www.gob.pe/`
- `gob.es/`
- `argentina.gob.ar/`
- `tanzania.go.tz/`
- `.indonesia.go.id/`
- `.go.kr/`
- `.go.jp/`
- `thailand.go.th/`
- `.europa.eu/`
- `.un/`
- `.int/`
- `.govt.`
- `www.gub.uy`
- `.gov` (as suffix, e.g. `idx.endswith(".gov")`)
- `.gov/`
- `.gov.`
- `.gouv.`
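For illustration, a minimal sketch of how such pattern matching could be applied; the pattern subset, helper name, and the assumption that each record carries a `url` field are ours, not the pipeline's actual code:

```python
from urllib.parse import urlparse

# A few illustrative patterns from the list above (not the full list).
# Suffix patterns like ".gov" are checked against the hostname; the rest
# are checked as substrings of the full URL.
SUBSTRING_PATTERNS = [".mil/", ".europa.eu/", ".gov/", ".gov.", ".gouv.", "regeringen."]
SUFFIX_PATTERNS = [".gov"]

def is_gov_like(url: str) -> bool:
    """Return True if the URL matches one of the government/NGO patterns."""
    host = urlparse(url).netloc or url.split("/")[0]
    if any(host.lower().endswith(suffix) for suffix in SUFFIX_PATTERNS):
        return True
    lowered = url.lower()
    return any(pattern in lowered for pattern in SUBSTRING_PATTERNS)

# Hypothetical usage over records that carry a "url" field:
# kept = [rec for rec in records if is_gov_like(rec["url"])]
```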
And we created a list of around 50 websites that we know to be cc-by-* or public-domain websites. We chose general wikis as well as software and technology sites, law-related sites, and other known sites. We read the terms of use to confirm, to the extent we could, that they provided permissive usage before adding these 50 or so domains.
- `.free.law/`
- `.europeana.eu/`
- `.publicdomainreview.org/`
- `.wisdomcommons.org/`
- `.intratext.com/`
- `.mediawiki.org/`
- `.wikimedia.org/`
- `.wikidata.org/`
- `.wikipedia.org/` *
- `.wikisource.org/`
- `.wikifunctions.org/`
- `.wikiquote.org/`
- `.wikinews.org/`
- `.wikivoyage.org/`
- `.wiktionary.org/`
- `.wikibooks.org/`
- `.courtlistener.com/`
- `.case.law/`
- `pressbooks.oer.hawaii.edu/`
- `.huggingface.co/docs/`
- `.opencourselibrary.org/`
- `.medbiq.org/`
- `.doabooks.org/`
- `.bccampus.ca/`
- `open.umn.edu/opentextbooks/`
- `www.gutenberg.org/`
- `.mozilla.org/`
- `www.eclipse.org/`
- `.apache.org/`
- `.python.org/`
- `.pytorch.org/`
- `.numpy.org/`
- `.scipy.org/`
- `.opencv.org/`
- `.scikit-learn.org/`
- `.pydata.org/`
- `.matplotlib.org/`
- `.palletsprojects.com/`
- `.sqlalchemy.org/`
- `.pypi.org/`
- `.sympy.org/`
- `.nltk.org/`
- `.scrapy.org/`
- `.owasp.org/`
- `.creativecommons.org/`
- `.wikia.com/`
- `.foodista.com/`
- `.fandom.com/`
- `.attack.mitre.org/`
While we do include Wikipedia in the above, we do not include Stack Exchange, because Wikipedia has many subdomains that may be more diverse in a webcrawl, and we already have a highly formatted subset of Stack Exchange. In future iterations, we may also include the webcrawled version of Stack Exchange from Common Crawl.
We also searched for keywords, such as "cc-by-sa", in the headers and footers of FineFineweb pages and applied heuristics to filter out instances where the license statement did not clearly cover the page content. Terms of use of the above sites might, for example, provide 'unless otherwise stated, the contents are licensed under cc-by-sa...'. Because of caveats like these, we also applied heuristic filters, such as filtering out documents that include "all rights reserved."
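A minimal sketch of the kind of header/footer keyword heuristic described above; the window size, regexes, and function name are illustrative assumptions:

```python
import re

CC_MARKER = re.compile(r"cc[- ]by(-sa)?", re.IGNORECASE)
ALL_RIGHTS = re.compile(r"all rights reserved", re.IGNORECASE)

def looks_permissive(page_text: str, window: int = 2000) -> bool:
    """Keep a page only if a CC marker appears near its top or bottom
    and it nowhere asserts 'all rights reserved'."""
    if ALL_RIGHTS.search(page_text):
        return False
    header, footer = page_text[:window], page_text[-window:]
    return bool(CC_MARKER.search(header) or CC_MARKER.search(footer))
```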
We also maintained a blocklist of sites that we do not use, even if they might contain cc-by content, including common news websites.
Note that we included Wikipedia from the TXT360 subset as well as from Megawiki and FineFineweb, so there will be duplicated Wikipedia pages.
For the TXT360 Wikipedia subset, we filtered out pages about people who are still alive, using patterns such as "... born March 1, 1999) is a German ...". The reason is that we wish to minimize memorization of personal information. Note that we further perform other forms of anonymization in our aligned MixtureVitae dataset.
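A minimal sketch of the kind of lead-sentence pattern described above; the real filter presumably covers more date formats and phrasings than this single regex:

```python
import re

# Present-tense lead sentences like "... born March 1, 1999) is a German ..."
# indicate a living person (pages about deceased people typically say "was").
LIVING_PERSON = re.compile(r"born\s+[A-Z][a-z]+\s+\d{1,2},\s+\d{4}\)\s+is\s+an?\s")

def is_living_person_page(text: str) -> bool:
    # Only the article lead needs checking for this pattern.
    return bool(LIVING_PERSON.search(text[:1000]))
```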
Most of this web data consists of government websites or wikis.
Table 2. The top domains in our web (FineFineweb) subset:
| **Domain** | **Count** |
|------------------------------------|---------|
| m.wikipedia.org * | 167428 |
| nlm.nih.gov | 113078 |
| www.federalregister.gov | 98579 |
| nsw.gov.au | 67952 |
| vic.gov.au | 59044 |
| ec.europa.eu | 43850 |
| m.wikisource.org | 38916 |
| www.justice.gov | 38377 |
| qld.gov.au | 35866 |
| jst.go.jp | 34033 |
| ars.usda.gov | 31073 |
| wa.gov.au | 28598 |
| www.cdc.gov | 28115 |
| www.gov.uk | 26916 |
| www.nps.gov | 26298 |
| www.gov.scot | 26145 |
| eric.ed.gov | 25102 |
| reliefweb.int | 24877 |
| clinicaltrials.gov | 24611 |
| sa.gov.au | 20603 |
| chroniclingamerica.loc.gov | 20242 |
| www.army.mil | 20003 |
| history.state.gov | 19195 |
| cordis.europa.eu | 18856 |
| nal.usda.gov | 17032 |
| www.wipo.int | 17021 |
| www.mass.gov | 14921 |
| www.fda.gov | 14853 |
| ukurier.gov.ua | 14808 |
| founders.archives.gov | 14266 |
| act.gov.au | 13822 |
| mn.gov | 13767 |
| www.sec.gov | 13501 |
| bugzilla.mozilla.org | 13004 |
| fhwa.dot.gov | 12738 |
| www.gao.gov | 12690 |
| djvu.wikisource.org | 11954 |
| leg.wa.gov | 11766 |
| www.state.gov | 11468 |
| fs.usda.gov | 11383 |
| aph.gov.au | 11151 |
| apps.dtic.mil | 11097 |
| mail.python.org | 10554 |
| gov.bc.ca | 10514 |
| usace.army.mil | 9973 |
| www.congress.gov | 9882 |
| 2009-2017.state.gov | 9581 |
| military-history.fandom.com | 9313 |
| www.nysenate.gov | 9306 |
| www.epa.gov | 9001 |
| abs.gov.au | 8824 |
| tas.gov.au | 8784 |
| m.wikibooks.org | 8736 |
| gov.on.ca | 8696 |
| gsfc.nasa.gov | 8586 |
| www.fws.gov | 8386 |
| www.ntsb.gov | 8130 |
| blog.gov.uk | 8091 |
| legis.wisconsin.gov | 8070 |
| www.nasa.gov | 8067 |
| cfpub.epa.gov | 7943 |
| www.loc.gov | 7742 |
| www.usgs.gov | 7688 |
| www.clinicaltrials.gov | 7517 |
| natlib.govt.nz | 7465 |
| www.michigan.gov | 7395 |
| ato.gov.au | 7279 |
| sp.gov.br | 7208 |
| www.nist.gov | 7173 |
| obamawhitehouse.archives.gov | 7170 |
| www.nyc.gov | 7111 |
| justice.gc.ca | 7086 |
| service.gov.uk | 7085 |
| nationalarchives.gov.uk | 7082 |
| www.sbir.gov | 7012 |
| www.akleg.gov | 6969 |
| www.defense.gov | 6941 |
| nt.gov.au | 6878 |
| m.wikiquote.org | 6869 |
| niehs.nih.gov | 6867 |
| revisor.mn.gov | 6800 |
| www.dol.gov | 6632 |
| gouv.qc.ca | 6545 |
| statcan.gc.ca | 6509 |
| wwwnc.cdc.gov | 6389 |
| ons.gov.uk | 6301 |
| legislation.gov.uk | 6207 |
| research.va.gov | 6198 |
| eurofound.europa.eu | 6034 |
| portal.ct.gov | 5909 |
| nla.gov.au | 5905 |
| codes.ohio.gov | 5807 |
| www.energy.gov | 5805 |
| oai.dtic.mil | 5757 |
| georgewbush-whitehouse.archives.gov | 5674 |
| health.gov.au | 5554 |
| dec.ny.gov | 5448 |
| www.ftc.gov | 5404 |
| forecast.weather.gov | 5398 |
| aspe.hhs.gov | 5358 |
Table 3. Overlap with common-pile. There is about a 0.1% overlap with common-pile's cccc subset, which unsurprisingly includes government websites (a sketch of the overlap computation follows the table):
| **Domain** | **Count** |
|------------------------------------|-------|
| nsw.gov.au | 67952 |
| qld.gov.au | 35866 |
| abs.gov.au | 8824 |
| addons.mozilla.org | 4045 |
| conicet.gov.ar | 3020 |
| awm.gov.au | 2142 |
| eea.europa.eu | 1966 |
...
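A minimal sketch of how such a domain-level overlap percentage could be computed, assuming per-domain document counts for our subset and a set of domains from common-pile's cccc (both inputs hypothetical):

```python
def domain_overlap(our_counts: dict, their_domains: set) -> float:
    """Fraction of our documents whose domain also appears in the other corpus."""
    shared = sum(count for dom, count in our_counts.items() if dom in their_domains)
    return shared / sum(our_counts.values())

# Hypothetical usage with per-domain counts like those in Table 3:
# overlap = domain_overlap(finefineweb_domain_counts, common_pile_cccc_domains)
```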
### Analysis of data
Note each subset's compression ratio versus its contamination rate.
Table 1. Raw sizes of various subsets, their compressed sizes, and compression ratios.
(This table is not yet complete...)
| Folder | Uncompressed Size | Compressed Size (Sum of files) | Compression Ratio |
| ----------------------- | ----------------- | ------------------------------ | ----------------- |
| **synthetic\_instruct** | 615 GB | 142.16 GB | **4.33×** |
| **software** | 120 GB | 29.49 GB | **4.07×** |
| **wiki** | 215 GB | 55.75 GB | **3.86×** |
| **nemo** | 49 GB | 13.15 GB | **3.73×** |
| **math** | 11 GB | 2.97 GB | **3.70×** |
| **maga** | 33 GB | 9.5 GB | **3.47×** |
| **youtube**             | 23 GB             | 6.71 GB                        | **3.43×**         |
| **formatted\_text**     | 50 GB             | 14.98 GB                       | **3.34×**         |
| **business**            | 884 MB            | 266 MB                         | **3.32×**         |
| **stackexchange** | 94 GB | 32.31 GB | **2.91×** |
| **law** | 82 GB | 28.28 GB | **2.90×** |
| **fineweb** | 88 GB | 30.68 GB | **2.87×** |
| **news** | 1.1 GB | 387 MB | **2.84×** |
Decontaminated following a phi-4-like method (13-gram overlap, except in cases where the 13-grams also appear in the train set, Wikipedia, or public-domain books; a sketch follows the list) against:
- Agieval
- ARC
- MBPP
- MBPPPlus
- MMLU
- Gsm8k
- MATH
- ToxiGen
- COPA
- OpenBookQA
- Winogrande
- BoolQ
- HellaSwag
- PIQA
- CommonsenseQA
- Humaneval
- HumanevalPlus
- ALERT
- SimpleQA
- DoNotAnswer
- Ifeval
- LAMBADA
- GPQA
- AIME2024
- AIME2025
- HMMT_Feb_2025
- USAMO
- BRUMO
- MMLU_Redux
- MMLU_Pro
- MATH500
- AdvBench
- MuSR
- BBH
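A minimal sketch of the 13-gram overlap check in the spirit of the phi-4-like method above; the benchmark and benign 13-gram sets are assumed to be precomputed, and all names are illustrative:

```python
def ngrams(tokens, n=13):
    """All n-grams of a token sequence, as a set of tuples."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_contaminated(doc_tokens, benchmark_ngrams, benign_ngrams, n=13):
    """Flag a document if any of its 13-grams appears in a benchmark, unless
    that 13-gram is also common text (train set / Wikipedia / public-domain
    books), in which case the match is ignored."""
    hits = ngrams(doc_tokens, n) & benchmark_ngrams
    return bool(hits - benign_ngrams)

# Hypothetical usage:
# flagged = [d for d in docs if is_contaminated(d["tokens"], bench_13grams, benign_13grams)]
```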
Removed contaminated data, in bytes, by file:
| File Name | Size |
|------------|-------|
| nemo_science_math-1_contaminated.jsonl | 322M |
| ring-lite-sft-0_contaminated.jsonl | 198M |
| ring-lite-sft-1_contaminated.jsonl | 196M |
| prism_math_contaminated.jsonl | 156M |
| nemo_science_math-0_contaminated.jsonl | 141M |
| open_thoughts-0_contaminated.jsonl | 135M |
| misc_instruct_contaminated.jsonl | 114M |
| open_thoughts-1_contaminated.jsonl | 90M |
| nemo_science_math-2_contaminated.jsonl | 85M |
| open_thoughts-2_contaminated.jsonl | 73M |
| school_math_contaminated.jsonl | 72M |
| ring-lite-sft-2_contaminated.jsonl | 71M |
| open_thoughts-3_contaminated.jsonl | 67M |
| math_sft-1_contaminated.jsonl | 63M |
| open_thoughts-4_contaminated.jsonl | 59M |
| nemo_science_math-3_contaminated.jsonl | 58M |
| math_sft-0_contaminated.jsonl | 57M |
| math_sft-2_contaminated.jsonl | 54M |
| open_thoughts-5_contaminated.jsonl | 49M |
| math_reasoning_contaminated.jsonl | 48M |
| prism_science_contaminated.jsonl | 46M |
| ring-lite-sft-3_contaminated.jsonl | 45M |
| reasoning_instruct_contaminated.jsonl | 44M |
| nemo_science_math-4_contaminated.jsonl | 43M |
| math_sft-3_contaminated.jsonl | 43M |
| ring-lite-sft-4_contaminated.jsonl | 40M |
| open_thoughts-6_contaminated.jsonl | 38M |
| open_thoughts-7_contaminated.jsonl | 37M |
| nemo_science_math-5_contaminated.jsonl | 36M |
| open_thoughts-8_contaminated.jsonl | 34M |
| ring-lite-sft-5_contaminated.jsonl | 34M |
| open_thoughts-9_contaminated.jsonl | 32M |
| math_sft-4_contaminated.jsonl | 32M |
| nemo_science_math-6_contaminated.jsonl | 31M |
| open_thoughts-10_contaminated.jsonl | 29M |
| ring-lite-sft-6_contaminated.jsonl | 28M |
| open_thoughts-11_contaminated.jsonl | 27M |
| math_sft-5_contaminated.jsonl | 26M |
| open_thoughts-12_contaminated.jsonl | 24M |
| ring-lite-sft-7_contaminated.jsonl | 24M |
| open_thoughts-13_contaminated.jsonl | 22M |
| open_thoughts-14_contaminated.jsonl | 21M |
| nemo_science_math-7_contaminated.jsonl | 20M |
| math_sft-6_contaminated.jsonl | 20M |
| open_thoughts-15_contaminated.jsonl | 18M |
| open_thoughts-16_contaminated.jsonl | 17M |
| ring-lite-sft-8_contaminated.jsonl | 17M |
| open_thoughts-17_contaminated.jsonl | 16M |
| math_sft-7_contaminated.jsonl | 15M |
| nemo_science_math-8_contaminated.jsonl | 15M |
| open_thoughts-18_contaminated.jsonl | 14M |
| ring-lite-sft-9_contaminated.jsonl | 14M |
| open_thoughts-19_contaminated.jsonl | 13M |
| math_sft-8_contaminated.jsonl | 13M |
| open_thoughts-20_contaminated.jsonl | 12M |
| nemo_science_math-9_contaminated.jsonl | 11M |
| open_thoughts-21_contaminated.jsonl | 10M |
| math_sft-9_contaminated.jsonl | 10M |
| open_thoughts-22_contaminated.jsonl | 9M |
| ring-lite-sft-10_contaminated.jsonl | 9M |
| open_thoughts-23_contaminated.jsonl | 8M |
| math_sft-10_contaminated.jsonl | 8M |
| nemo_science_math-10_contaminated.jsonl | 7M |
| open_thoughts-24_contaminated.jsonl | 7M |
| ring-lite-sft-11_contaminated.jsonl | 7M |
| open_thoughts-25_contaminated.jsonl | 6M |
| math_sft-11_contaminated.jsonl | 6M |
| open_thoughts-26_contaminated.jsonl | 6M |
| nemo_science_math-11_contaminated.jsonl | 6M |
| open_thoughts-27_contaminated.jsonl | 5M |
| ring-lite-sft-12_contaminated.jsonl | 5M |
| math_sft-12_contaminated.jsonl | 5M |
| open_thoughts-28_contaminated.jsonl | 4M |
| nemo_science_math-12_contaminated.jsonl | 4M |
| open_thoughts-29_contaminated.jsonl | 3M |
| ring-lite-sft-13_contaminated.jsonl | 3M |
| open_thoughts-30_contaminated.jsonl | 3M |
| math_sft-13_contaminated.jsonl | 3M |
| open_thoughts-31_contaminated.jsonl | 2M |
| nemo_science_math-13_contaminated.jsonl | 2M |
| open_thoughts-32_contaminated.jsonl | 2M |
| ring-lite-sft-14_contaminated.jsonl | 2M |
| math_sft-14_contaminated.jsonl | 2M |
| open_thoughts-33_contaminated.jsonl | 1M |
| open_thoughts-34_contaminated.jsonl | 1M |
| nemo_science_math-14_contaminated.jsonl | 1M |
| open_thoughts-35_contaminated.jsonl | 1M |
| ring-lite-sft-15_contaminated.jsonl | 1M |
| math_sft-15_contaminated.jsonl | 1M |
| 439 | 0 | [
"license:odc-by",
"size_categories:100K<n<1M",
"modality:text",
"region:us"
] | 2025-09-26T23:35:57+00:00 | 2025-11-12T17:53:07+00:00 | 0 |
nvail23/BlueSnap-Task |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"robot_type": "so101_follower",
"codebase_version": "v3.0",
"total_episodes": 50,
"total_frames": 27779,
"total_tasks": 2,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:50"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
],
"fps": 30
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
],
"fps": 30
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null,
"fps": 30
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null,
"fps": 30
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null,
"fps": 30
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null,
"fps": 30
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null,
"fps": 30
},
"observation.images.top": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
}
},
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500
}
```
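The `data_path` template above implies per-chunk parquet shards. A minimal sketch of reading the first shard with pandas; the shard path is assembled from the template and is an assumption about the layout, not documented usage:

```python
import pandas as pd
from huggingface_hub import snapshot_download

# Fetch the repo locally, then read the first shard; the path follows the
# data_path template in meta/info.json.
root = snapshot_download(repo_id="nvail23/BlueSnap-Task", repo_type="dataset")
df = pd.read_parquet(f"{root}/data/chunk-000/file-000.parquet")

# Each row is one frame: a 6-dim action, a 6-dim state, plus indices.
print(df[["action", "observation.state", "episode_index"]].head())
```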
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"robot_type": "so101_follower",
"codebase_version": "v3.0",
"total_episodes": 50,
"total_frames": 27779,
"total_tasks": 2,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:50"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
],
"fps": 30
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
],
"fps": 30
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null,
"fps": 30
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null,
"fps": 30
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null,
"fps": 30
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null,
"fps": 30
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null,
"fps": 30
},
"observation.images.top": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
}
},
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] | 2025-11-12T17:40:33+00:00 | 2025-11-12T17:52:01+00:00 | 0 |
Kiy-K/pretraining-corpus |
# 🧠 Kiy-K Synthetic Pretraining Corpus
**Author:** [Khoi K. (@Kiy-K)](https://huggingface.co/Kiy-K)
**License:** Apache 2.0
**Last Updated:** 2025-10-30
---
## 📘 Overview
The **Kiy-K Synthetic Pretraining Corpus** is a large-scale collection of **synthetically generated English text** designed for **language model pretraining and instruction-tuning research**.
All data is **synthetic**, created using open-source large language models such as **GPT-OSS**, **NVIDIA Nemotron**, and **DeepSeek**, under full control of the author.
No real user, copyrighted, or sensitive information is included.
---
## 🧩 Structure
Each record contains the following fields (a hypothetical example follows the list):
- `id` — unique identifier
- `text` — generated document text
- `meta` — optional metadata such as domain, length, or generation model
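For illustration, a hypothetical record; every value below is invented:

```python
record = {
    "id": "doc-000001",
    "text": "Binary search halves the search interval on every comparison...",
    "meta": {"domain": "technology", "length": 512, "generation_model": "GPT-OSS"},
}
```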
The corpus covers diverse domains including:
- Technology and programming
- Science and education
- General conversation and reasoning
- Instructional and QA-style texts
---
## ⚙️ Intended Uses
- Pretraining of small to medium-scale LLMs
- Instruction-tuning and alignment experiments
- Data efficiency and synthetic pipeline research
Not intended for:
- Real-world decision making
- Sensitive or personal data analysis
---
## 🧮 Dataset Statistics
| Field | Description |
|-------|--------------|
| Records | ~xxx,xxx |
| Avg. length | ~xxx tokens |
| Generation models | GPT-OSS, Nemotron, DeepSeek |
| License | Apache 2.0 |
*(Update the numbers once scaling finishes.)*
---
## 🔖 Citation
If you use this dataset in your research or project, please cite:
```bibtex
@dataset{kiy_k_2025_pretraining_corpus,
author = {Khoi K.},
title = {Kiy-K Synthetic Pretraining Corpus},
year = {2025},
publisher = {Hugging Face},
url = {https://huggingface.co/datasets/Kiy-K/pretraining-corpus}
}
```
## 💼 About the Author
This dataset is part of the Kiy-K Synthetic Data Studio Project — an initiative to provide high-quality, customizable synthetic data for research and commercial use.
👉 Interested in custom synthetic datasets?
Contact me on Hugging Face or open an Issue/Discussion on this repository.
---
## 📜 License
This dataset is licensed under Apache License 2.0, meaning you are free to use, modify, and distribute it — with proper attribution.
--- |
# 🧠 Kiy-K Synthetic Pretraining Corpus
**Author:** [Khoi K. (@Kiy-K)](https://huggingface.co/Kiy-K)
**License:** Apache 2.0
**Last Updated:** 2025-10-30
---
## 📘 Overview
The **Kiy-K Synthetic Pretraining Corpus** is a large-scale collection of **synthetically generated English text** designed for **language model pretraining and instruction-tuning research**.
All data is **synthetic**, created using open-source large language models such as **GPT-OSS**, **NVIDIA Nemotron**, and **DeepSeek**, under full control of the author.
No real user, copyrighted, or sensitive information is included.
---
## 🧩 Structure
Each record contains:
- `id` — unique identifier
- `text` — generated document text
- `meta` — optional metadata such as domain, length, or generation model
The corpus covers diverse domains including:
- Technology and programming
- Science and education
- General conversation and reasoning
- Instructional and QA-style texts
---
## ⚙️ Intended Uses
- Pretraining of small to medium-scale LLMs
- Instruction-tuning and alignment experiments
- Data efficiency and synthetic pipeline research
Not intended for:
- Real-world decision making
- Sensitive or personal data analysis
---
## 🧮 Dataset Statistics
| Field | Description |
|-------|--------------|
| Records | ~xxx,xxx |
| Avg. length | ~xxx tokens |
| Generation models | GPT-OSS, Nemotron, DeepSeek |
| License | Apache 2.0 |
*(Update the numbers once scaling finishes.)*
---
## 🔖 Citation
If you use this dataset in your research or project, please cite:
```bibtex
@dataset{kiy_k_2025_pretraining_corpus,
author = {Khoi K.},
title = {Kiy-K Synthetic Pretraining Corpus},
year = {2025},
publisher = {Hugging Face},
url = {https://huggingface.co/datasets/Kiy-K/pretraining-corpus}
}
```
## 💼 About the Author
This dataset is part of the Kiy-K Synthetic Data Studio Project — an initiative to provide high-quality, customizable synthetic data for research and commercial use.
👉 Interested in custom synthetic datasets?
Contact me on Hugging Face or open an Issue/Discussion on this repository.
---
## 📜 License
This dataset is licensed under Apache License 2.0, meaning you are free to use, modify, and distribute it — with proper attribution.
--- | 3,655 | 2 | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"synthetic",
"ai",
"nlp",
"pretraining",
"dataset",
"text",
"open-source"
] | 2025-10-30T05:31:56+00:00 | 2025-11-12T17:50:02+00:00 | 0 |
Pendrokar/TTS_Arena | [TTS Arena's](https://huggingface.co/spaces/Pendrokar/TTS-Spaces-Arena) DB is _SQLlite_ DB file. The above is just a summary query that should be useful for TTS developers to evaluate faults of their model.
## Why no audio samples?
Unsafe. The output of uncontrolled HuggingFace Spaces cannot be constantly overseen. While it could be safeguarded by using an ASR model before uploading, something unwanted may still slip through.
## Useful queries for TTS developers and evaluators
### All votes mentioning specified TTS model:
```sql
SELECT
spokentext, lang, chosen, rejected, count(spokentext) AS times, MAX(vl.timestamp) AS lastvote
FROM "main"."spokentext"
INNER JOIN votelog vl ON votelog_id = vl.id
WHERE
vl.chosen = "Pendrokar/xVASynth-TTS"
OR vl.rejected = "Pendrokar/xVASynth-TTS"
GROUP BY spokentext, chosen, rejected
ORDER BY times DESC, spokentext ASC
LIMIT 0, 49999;
```
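A minimal sketch of running this query from Python's built-in `sqlite3` module; the DB filename is hypothetical:

```python
import sqlite3

# Hypothetical filename for the Arena's SQLite DB file.
conn = sqlite3.connect("tts_arena.db")
query = """
SELECT spokentext, lang, chosen, rejected,
       count(spokentext) AS times, MAX(vl.timestamp) AS lastvote
FROM "main"."spokentext"
INNER JOIN votelog vl ON votelog_id = vl.id
WHERE vl.chosen = 'Pendrokar/xVASynth-TTS'
   OR vl.rejected = 'Pendrokar/xVASynth-TTS'
GROUP BY spokentext, chosen, rejected
ORDER BY times DESC, spokentext ASC;
"""
for row in conn.execute(query):
    print(row)
conn.close()
```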
### All rejections of specified TTS model against another:
```sql
SELECT
spokentext, lang, chosen, rejected, count(spokentext) AS times, MAX(vl.timestamp) AS lastvote
FROM "main"."spokentext"
INNER JOIN votelog vl ON votelog_id = vl.id AND vl.rejected = "Pendrokar/xVASynth-TTS"
GROUP BY spokentext, chosen
ORDER BY spokentext ASC
LIMIT 0, 49999;
```
### All rejections of a TTS model against another:
**The one used in the dataset viewer.** Note that the `chosen` column may include models that the `rejected` model beat more times. That is also why `votes` may sometimes be even lower than the number of distinct chosen models.
```sql
SELECT
st.spokentext,
vl.rejected,
COUNT(vl.rejected) - COALESCE(chosen_counts.chosen_count, 0) AS votes,
(COUNT(DISTINCT vl.chosen) || ' ' || GROUP_CONCAT(DISTINCT ' ' || vl.chosen)) AS chosen,
MAX(vl.timestamp) AS lastvote
FROM
votelog vl
JOIN
spokentext st ON vl.id = st.votelog_id
LEFT JOIN (
SELECT
st_inner.spokentext,
vl_inner.chosen,
COUNT(vl_inner.chosen) AS chosen_count
FROM
votelog vl_inner
JOIN
spokentext st_inner ON vl_inner.id = st_inner.votelog_id
GROUP BY
st_inner.spokentext,
vl_inner.chosen
ORDER BY
chosen_count DESC
) AS chosen_counts ON st.spokentext = chosen_counts.spokentext AND vl.rejected = chosen_counts.chosen
GROUP BY
st.spokentext,
vl.rejected
HAVING
votes > 0
AND lastvote BETWEEN datetime('now', '-1 month') AND datetime('now', 'localtime')
ORDER BY
((votes * COUNT(DISTINCT vl.chosen)) / 2) DESC,
COUNT(DISTINCT vl.chosen) DESC,
st.spokentext ASC;
```
If you use this data in your publication, please cite us!
Copy the BibTeX citation to cite this source:
```bibtex
@misc{tts-arena,
title = {Text to Speech Arena - Pendrokar's HF Spaces Fork},
author = {mrfakename and Srivastav, Vaibhav and Fourrier, Clémentine and Pouget, Lucain and Lacombe, Yoach and main and Gandhi, Sanchit},
year = 2024,
publisher = {Hugging Face},
howpublished = "\\url{https://huggingface.co/spaces/TTS-AGI/TTS-Arena}"
}
``` | [TTS Arena's](https://huggingface.co/spaces/Pendrokar/TTS-Spaces-Arena) DB is an _SQLite_ database file. The above is just a summary query that should be useful for TTS developers to evaluate their model's faults.
## Why no audio samples?
Unsafe. The output of uncontrolled HuggingFace Spaces cannot be constantly overseen. While it could be safeguarded by using an ASR model before uploading, something unwanted may still slip through.
## Useful queries for TTS developers and evaluators
### All votes mentioning specified TTS model:
```sql
SELECT
spokentext, lang, chosen, rejected, count(spokentext) AS times, MAX(vl.timestamp) AS lastvote
FROM "main"."spokentext"
INNER JOIN votelog vl ON votelog_id = vl.id
WHERE
vl.chosen = "Pendrokar/xVASynth-TTS"
OR vl.rejected = "Pendrokar/xVASynth-TTS"
GROUP BY spokentext, chosen, rejected
ORDER BY times DESC, spokentext ASC
LIMIT 0, 49999;
```
### All rejections of specified TTS model against another:
```sql
SELECT
spokentext, lang, chosen, rejected, count(spokentext) AS times, MAX(vl.timestamp) AS lastvote
FROM "main"."spokentext"
INNER JOIN votelog vl ON votelog_id = vl.id AND vl.rejected = "Pendrokar/xVASynth-TTS"
GROUP BY spokentext, chosen
ORDER BY spokentext ASC
LIMIT 0, 49999;
```
### All rejections of a TTS model against another:
**The one used in the dataset viewer.** Note that the `chosen` column may include models that the `rejected` model beat more times. That is also why `votes` may sometimes be even lower than the number of distinct chosen models.
```sql
SELECT
st.spokentext,
vl.rejected,
COUNT(vl.rejected) - COALESCE(chosen_counts.chosen_count, 0) AS votes,
(COUNT(DISTINCT vl.chosen) || ' ' || GROUP_CONCAT(DISTINCT ' ' || vl.chosen)) AS chosen,
MAX(vl.timestamp) AS lastvote
FROM
votelog vl
JOIN
spokentext st ON vl.id = st.votelog_id
LEFT JOIN (
SELECT
st_inner.spokentext,
vl_inner.chosen,
COUNT(vl_inner.chosen) AS chosen_count
FROM
votelog vl_inner
JOIN
spokentext st_inner ON vl_inner.id = st_inner.votelog_id
GROUP BY
st_inner.spokentext,
vl_inner.chosen
ORDER BY
chosen_count DESC
) AS chosen_counts ON st.spokentext = chosen_counts.spokentext AND vl.rejected = chosen_counts.chosen
GROUP BY
st.spokentext,
vl.rejected
HAVING
votes > 0
AND lastvote BETWEEN datetime('now', '-1 month') AND datetime('now', 'localtime')
ORDER BY
((votes * COUNT(DISTINCT vl.chosen)) / 2) DESC,
COUNT(DISTINCT vl.chosen) DESC,
st.spokentext ASC;
```
If you use this data in your publication, please cite us!
Copy the BibTeX citation to cite this source:
```bibtex
@misc{tts-arena,
title = {Text to Speech Arena - Pendrokar's HF Spaces Fork},
author = {mrfakename and Srivastav, Vaibhav and Fourrier, Clémentine and Pouget, Lucain and Lacombe, Yoach and main and Gandhi, Sanchit},
year = 2024,
publisher = {Hugging Face},
howpublished = "\\url{https://huggingface.co/spaces/TTS-AGI/TTS-Arena}"
}
``` | 4,448 | 6 | [
"language:en",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"arena"
] | 2024-10-11T16:52:25+00:00 | 2025-11-12T17:49:16+00:00 | 0 |
electricsheepafrica/livestock-health-disease-ssa-synthetic |
# Dataset Card: Livestock Health & Disease Surveillance (Synthetic Data)
## Dataset Summary
This synthetic dataset represents **1,000,000 African smallholder households** with livestock systems, capturing livestock health, disease surveillance, veterinary access, and herd management practices across Sub-Saharan Africa. It combines baseline farm characteristics (Dataset 1) with 15 livestock-specific variables to create a comprehensive picture of livestock production systems and animal health challenges.
**Key Features:**
- **1M households** across 5 agro-ecological zones
- **27 variables** (12 base farm + 15 livestock health)
- **African-specific** livestock systems and diseases
- **Literature-grounded** distributions (50+ peer-reviewed sources)
- **Conditional dependencies** modeling real-world relationships
- **Realistic missing data** patterns
## Variables
### Base Farm Characteristics (Dataset 1 - 12 variables)
1. **agro_ecological_zone**: Arid, semi-arid, sub-humid, humid, highland
2. **region_type**: Urban, peri-urban, rural accessible, rural remote
3. **farm_size_ha**: Farm size in hectares
4. **soil_quality_index**: Soil quality (0-100 scale)
5. **rainfall_mm_annual**: Annual rainfall (mm)
6. **household_size**: Number of household members
7. **market_distance_km**: Distance to nearest market
8. **livestock_tlu**: Tropical Livestock Units owned
9. **extension_access**: Access to agricultural extension (yes/no)
10. **fertilizer_use_kg_ha**: Fertilizer application rate
11. **rainfall_mm_season**: Seasonal rainfall (mm)
12. **maize_yield_kg_ha**: Maize yield (kg/ha)
### Livestock Health & Production (NEW - 15 variables)
#### Herd Composition
13. **herd_size_cattle**: Number of cattle owned (0-50+)
14. **herd_size_small_ruminants**: Sheep and goats owned (0-100+)
15. **poultry_count**: Chickens, ducks, etc. (0-200+)
#### Veterinary Services & Access
16. **vet_distance_km**: Distance to nearest veterinary service (1-200 km)
17. **vaccination_coverage_pct**: % of herd vaccinated (0-100%)
18. **vet_visit_annual**: Had veterinary visit in past year (yes/no)
#### Disease & Health
19. **disease_incidence_annual**: Reported disease in past year (yes/no)
20. **disease_type**: Type of disease (FMD, ECF, CBPP, trypanosomiasis, PPR, Newcastle, respiratory, diarrhea, other)
21. **mortality_rate_annual_pct**: Annual livestock mortality rate (%)
22. **pasture_quality_index**: Pasture/rangeland quality (0-100 scale)
#### Management Systems
23. **grazing_system**: Type of grazing (communal, private, mixed, zero-grazing)
24. **water_source_reliability**: Water availability (year-round, seasonal, unreliable)
25. **treatment_access**: Type of treatment accessed (none, traditional, veterinary, both)
26. **feed_supplementation**: Provides supplementary feed (yes/no)
27. **livestock_dependency_index**: Household dependence on livestock (0-100 scale)
## Dataset Statistics
### Livestock Ownership
- **43.4%** of households own cattle
- **62.9%** own small ruminants (sheep/goats)
- **67.5%** keep poultry
- Mean cattle herd size: ~5 animals (among owners)
- Mean small ruminant herd: ~12 animals (among owners)
- Mean poultry flock: ~8 birds (among keepers)
### Disease Burden
- **32.7%** reported disease incidence in past year
- Most common diseases:
- Newcastle disease (poultry): 20%
- FMD (Foot & Mouth): 18%
- PPR (Peste des Petits Ruminants): 15%
- ECF (East Coast Fever): 12%
- Trypanosomiasis: 10%
### Veterinary Access
- **40.3%** had veterinary contact in past year
- Mean distance to vet services: **58.9 km**
- **20%** vaccination coverage (median)
- Treatment types:
- 35% no treatment
- 45% traditional remedies only
- 15% veterinary treatment
- 5% both traditional and veterinary
### Management Practices
- **50%** use communal grazing systems
- **25%** private grazing
- **20%** mixed systems
- **5%** zero-grazing (intensive)
- **30%** provide feed supplementation
- **40%** have year-round water access
- **35%** seasonal water only
- **25%** unreliable water
## Uses
### Permitted Uses
- **Livestock policy analysis**: Model impacts of disease control programs
- **Veterinary service planning**: Optimize clinic placement and mobile vet routes
- **Disease surveillance system design**: Test outbreak detection algorithms
- **Animal health research**: Train ML models for disease prediction
- **One Health initiatives**: Link livestock-human health systems
- **Extension service planning**: Target interventions by livestock system type
- **Educational purposes**: Teaching livestock epidemiology and policy
- **Climate adaptation**: Model livestock system resilience
- **Value chain analysis**: Link livestock production to markets
- **Research method development**: Test statistical techniques
### Prohibited Uses
- **Not for replacement of real data collection**: Cannot substitute for actual field surveys
- **Not for country-specific policy**: Too generalized for single-country decisions
- **Not for real-time disease outbreak response**: Not actual surveillance data
- **Not for individual farmer targeting**: Synthetic households are not real
- **Not for precise cost-benefit analysis**: Use for methodological prototypes only
## Dataset Creation
### Why This Dataset Exists
Real livestock health data in Sub-Saharan Africa faces critical gaps:
1. **Surveillance gaps**: Most countries lack systematic disease surveillance
2. **Underreporting**: Livestock diseases often go unreported (especially in remote areas)
3. **Fragmented data**: Information scattered across vet clinics, ministries, NGOs
4. **Access restrictions**: Sensitive disease data rarely shared publicly
5. **High collection costs**: Surveys expensive and logistically challenging
6. **Privacy concerns**: Household-level data cannot be openly published
**This synthetic dataset enables:**
- Algorithm development without waiting for data access
- Training of researchers and students
- International collaboration without data sharing barriers
- Rapid prototyping of livestock information systems
- Evidence generation for funding proposals
### Creation Methodology
**Rigorous 4-stage process** following synthetic data best practices:
#### Stage 1: Literature Review (50+ sources)
- Systematic review of livestock systems in SSA
- Disease prevalence studies (FMD, ECF, trypanosomiasis, PPR, Newcastle)
- Veterinary service coverage assessments
- Management practice surveys
- Mortality and productivity benchmarks
#### Stage 2: Parameter Specification (15 files, 60-150 lines each)
- Conditional probability distributions by zone, region, herd size
- Functional relationships (e.g., vet distance → vaccination rates)
- Species-specific disease patterns
- Management system typologies
- Full provenance tracking
#### Stage 3: Conditional Data Generation
- Base variables from Dataset 1 (smallholder farms)
- Sequential generation respecting dependencies
- Zero-inflated distributions for herd sizes (see the sketch after this list)
- Categorical conditioning for disease types
- Realistic missing data (MCAR: 1-10%)
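A minimal sketch of a zero-inflated draw for cattle herd sizes, as referenced in the list above; the ownership rate and mean match the statistics reported in this card, but the Poisson shape is an assumption, not the generator's actual distribution:

```python
import numpy as np

rng = np.random.default_rng(42)

def zero_inflated_herd(n, p_owner=0.434, mean_herd=5.0):
    """Zero-inflated herd sizes: ~56.6% of households draw 0; owners draw a
    positive count (Poisson shifted by 1 so owners have at least one animal)."""
    owns = rng.random(n) < p_owner
    positive = rng.poisson(mean_herd - 1.0, size=n) + 1
    return np.where(owns, positive, 0)

cattle = zero_inflated_herd(1_000_000)
print((cattle > 0).mean())  # ~0.434, matching the cattle ownership rate above
```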
#### Stage 4: Validation
- Cross-variable consistency checks
- Literature benchmark comparisons
- Logical constraint verification
- Distribution shape validation
## Limitations and Biases
### Known Limitations
1. **Oversimplified disease dynamics**: Real disease spread is more complex than modeled
2. **Static snapshot**: No temporal dynamics (outbreaks, seasonality within year)
3. **No spatial clustering**: Real diseases show geographic clustering not captured
4. **Coarse zones**: 5 AEZ categories don't capture local variation
5. **Missing variables**: No breed info, no herd demographics, no animal-level data
6. **Treatment outcomes**: No data on treatment success/failure
7. **No cost data**: Disease impacts measured only in mortality, not economics
8. **Simplified grazing**: Complex pastoral mobility patterns simplified
9. **Binary disease incidence**: Real incidence is more granular (multiple episodes)
### Potential Biases
1. **Literature bias**: Sources mostly from East Africa (Kenya, Tanzania, Ethiopia)
2. **Veterinary access**: May overestimate coverage in very remote pastoral areas
3. **Disease reporting**: Literature likely underrepresents mild/unreported diseases
4. **Poultry systems**: Village chickens well-represented, commercial systems underrepresented
5. **Traditional knowledge**: Traditional treatment effectiveness may be under-captured
6. **Gender**: No gender disaggregation of livestock ownership/management
7. **Wealth gradient**: Livestock wealth distribution may be too uniform
8. **Conflict zones**: Data may not reflect pastoralist areas affected by conflict
### What This Dataset Is NOT
- ❌ **Not real surveillance data**: Do not use for actual disease outbreak decisions
- ❌ **Not predictive**: Cannot predict real disease occurrence
- ❌ **Not country-specific**: Generalized SSA patterns, not any single country
- ❌ **Not longitudinal**: Single time point, no panel structure
- ❌ **Not spatially explicit**: No GPS coordinates, no spatial autocorrelation
## Technical Specifications
### File Formats
- **CSV**: `livestock_data.csv` (315 MB, 1M rows)
- **Parquet**: `livestock_data.parquet` (111 MB, compressed)
- **Metadata**: `metadata.json` (generation parameters, sources)
- **Data Dictionary**: `data_dictionary.csv` (variable descriptions)
### Missing Data
Realistic missing data rates by variable:
- Herd sizes: 2%
- Vet distance: 4%
- Vaccination coverage: 5%
- Disease incidence: 3%
- Pasture quality: 6%
- Mortality rate: 3%
- Disease type: 10% (conditional on disease occurrence)
- Management variables: 3-4%
### Data Quality Indicators
- ✅ All constraints validated (no impossible values)
- ✅ Conditional dependencies respected
- ✅ Literature benchmarks matched (±10%)
- ✅ Cross-variable correlations logical
- ✅ Missing data patterns realistic
## Ethical Considerations
### Privacy
- **No real households**: All data fully synthetic, cannot identify real people/places
- **No GPS coordinates**: No geographic identifiers that could reveal locations
- **Aggregated patterns only**: Individual records are fictional
### Representation
- **Pan-African focus**: Captures diversity across SSA, not dominated by single region
- **Pastoral systems included**: Arid/semi-arid zones well-represented
- **Smallholder-centric**: Large commercial farms not included
- **Traditional knowledge**: Ethnoveterinary practices acknowledged
### Responsible Use
Users should:
- ✅ Clearly label outputs as based on synthetic data
- ✅ Validate methods on real data before deployment
- ✅ Not overstate generalizability of findings
- ✅ Cite real data sources when transitioning to applications
- ✅ Engage local stakeholders when designing interventions
## Citation Information
If you use this dataset, please cite:
```bibtex
@dataset{livestock_health_synthetic_2024,
author = {Electric Sheep Africa},
title = {Livestock Health and Disease Surveillance Synthetic Dataset for Sub-Saharan Africa},
year = {2024},
publisher = {HuggingFace},
url = {https://huggingface.co/datasets/electricsheepafrica/livestock-health-disease-ssa-synthetic}
}
```
### Key Literature Sources
This dataset synthesizes information from 50+ sources, including:
- **Perry & Grace (2009)**: Economic impacts of animal diseases (Journal of Agricultural Economics)
- **Cleaveland et al. (2001)**: Diseases of humans and domestic mammals (Phil Trans Royal Society B)
- **Leonard et al. (2017)**: Veterinary service delivery in developing countries (Rev. sci. tech. Off. int. Epiz)
- **Robinson et al. (2011)**: Global livestock production systems (FAO/ILRI)
- **AU-IBAR (2013)**: Veterinary services delivery in Africa (African Union)
- **McCorkle (1995)**: Ethnoveterinary R&D (Agriculture and Human Values)
- **Herrero et al. (2013)**: Biomass use in global livestock systems (PNAS)
- **Reid et al. (2014)**: Pastoral land development models (Ecology and Society)
Full bibliography available in parameter files (`parameters_livestock/` directory).
## Dataset Structure
### Variable Types
- **Categorical** (9 variables): Zones, disease types, systems
- **Continuous** (14 variables): Herd sizes, distances, indices, rates
- **Binary** (4 variables): Access, incidence, supplementation
### Sample Record
```csv
agro_ecological_zone,region_type,herd_size_cattle,disease_incidence_annual,vet_distance_km,...
semi_arid,rural_accessible,4,yes,35.2,...
```
## Updates and Versioning
- **Version**: 1.0
- **Release Date**: November 2024
- **Status**: Stable
- **Planned Updates**: None currently planned
## Contact
**Creator**: Electric Sheep Africa
**Repository**: [GitHub](https://github.com/electricsheepafrica/agriculture-synthetic-data)
**Issues**: Report via GitHub Issues
## License
**CC BY 4.0** (Creative Commons Attribution 4.0 International)
You are free to:
- ✅ Share and redistribute
- ✅ Adapt and build upon
- ✅ Use commercially
Under the condition that you:
- ✅ Give appropriate credit
- ✅ Indicate if changes were made
- ✅ Do not misrepresent as real surveillance data
---
## How to Load
```python
from datasets import load_dataset
# Load full dataset
dataset = load_dataset("electricsheepafrica/livestock-health-disease-ssa-synthetic")
# Load as pandas DataFrame
import pandas as pd
df = dataset['train'].to_pandas()
# Or load Parquet directly
df = pd.read_parquet("livestock_data.parquet")
```
## Example Use Cases
### 1. Disease Risk Prediction
```python
# Train ML model to predict disease incidence
X = df[['herd_size_cattle', 'vet_distance_km', 'vaccination_coverage_pct',
'agro_ecological_zone', 'pasture_quality_index']]
y = df['disease_incidence_annual']
```
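Note that the selection above mixes a categorical zone column with numeric features, and the target has some missing values. A minimal sketch of fitting a classifier on it; the preprocessing and model choice are our assumptions, not part of the dataset:

```python
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

numeric = ["herd_size_cattle", "vet_distance_km",
           "vaccination_coverage_pct", "pasture_quality_index"]
zone_pipe = Pipeline([
    ("impute", SimpleImputer(strategy="most_frequent")),
    ("onehot", OneHotEncoder(handle_unknown="ignore")),
])
pre = ColumnTransformer([
    ("zone", zone_pipe, ["agro_ecological_zone"]),
    ("num", SimpleImputer(strategy="median"), numeric),
])
clf = Pipeline([("pre", pre), ("rf", RandomForestClassifier(n_estimators=100))])

mask = y.notna()            # drop rows with a missing target (~3% per the card)
clf.fit(X[mask], y[mask])
```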
### 2. Vet Clinic Placement Optimization
```python
# Find underserved areas
underserved = df[(df['vet_distance_km'] > 60) & (df['livestock_tlu'] > 5)]
```
### 3. Vaccination Campaign Targeting
```python
# Identify high-risk, low-coverage households
targets = df[(df['vaccination_coverage_pct'] < 20) &
(df['disease_incidence_annual'] == 'yes')]
```
---
**Dataset 2 of 5** in the African Agriculture & Food Security Synthetic Data Portfolio
|
# Dataset Card: Livestock Health & Disease Surveillance (Synthetic Data)
## Dataset Summary
This synthetic dataset represents **1,000,000 African smallholder households** with livestock systems, capturing livestock health, disease surveillance, veterinary access, and herd management practices across Sub-Saharan Africa. It combines baseline farm characteristics (Dataset 1) with 15 livestock-specific variables to create a comprehensive picture of livestock production systems and animal health challenges.
**Key Features:**
- **1M households** across 5 agro-ecological zones
- **27 variables** (12 base farm + 15 livestock health)
- **African-specific** livestock systems and diseases
- **Literature-grounded** distributions (50+ peer-reviewed sources)
- **Conditional dependencies** modeling real-world relationships
- **Realistic missing data** patterns
## Variables
### Base Farm Characteristics (Dataset 1 - 12 variables)
1. **agro_ecological_zone**: Arid, semi-arid, sub-humid, humid, highland
2. **region_type**: Urban, peri-urban, rural accessible, rural remote
3. **farm_size_ha**: Farm size in hectares
4. **soil_quality_index**: Soil quality (0-100 scale)
5. **rainfall_mm_annual**: Annual rainfall (mm)
6. **household_size**: Number of household members
7. **market_distance_km**: Distance to nearest market
8. **livestock_tlu**: Tropical Livestock Units owned
9. **extension_access**: Access to agricultural extension (yes/no)
10. **fertilizer_use_kg_ha**: Fertilizer application rate
11. **rainfall_mm_season**: Seasonal rainfall (mm)
12. **maize_yield_kg_ha**: Maize yield (kg/ha)
### Livestock Health & Production (NEW - 15 variables)
#### Herd Composition
13. **herd_size_cattle**: Number of cattle owned (0-50+)
14. **herd_size_small_ruminants**: Sheep and goats owned (0-100+)
15. **poultry_count**: Chickens, ducks, etc. (0-200+)
#### Veterinary Services & Access
16. **vet_distance_km**: Distance to nearest veterinary service (1-200 km)
17. **vaccination_coverage_pct**: % of herd vaccinated (0-100%)
18. **vet_visit_annual**: Had veterinary visit in past year (yes/no)
#### Disease & Health
19. **disease_incidence_annual**: Reported disease in past year (yes/no)
20. **disease_type**: Type of disease (FMD, ECF, CBPP, trypanosomiasis, PPR, Newcastle, respiratory, diarrhea, other)
21. **mortality_rate_annual_pct**: Annual livestock mortality rate (%)
22. **pasture_quality_index**: Pasture/rangeland quality (0-100 scale)
#### Management Systems
23. **grazing_system**: Type of grazing (communal, private, mixed, zero-grazing)
24. **water_source_reliability**: Water availability (year-round, seasonal, unreliable)
25. **treatment_access**: Type of treatment accessed (none, traditional, veterinary, both)
26. **feed_supplementation**: Provides supplementary feed (yes/no)
27. **livestock_dependency_index**: Household dependence on livestock (0-100 scale)
## Dataset Statistics
### Livestock Ownership
- **43.4%** of households own cattle
- **62.9%** own small ruminants (sheep/goats)
- **67.5%** keep poultry
- Mean cattle herd size: ~5 animals (among owners)
- Mean small ruminant herd: ~12 animals (among owners)
- Mean poultry flock: ~8 birds (among keepers)
### Disease Burden
- **32.7%** reported disease incidence in past year
- Most common diseases:
- Newcastle disease (poultry): 20%
- FMD (Foot & Mouth): 18%
- PPR (Peste des Petits Ruminants): 15%
- ECF (East Coast Fever): 12%
- Trypanosomiasis: 10%
### Veterinary Access
- **40.3%** had veterinary contact in past year
- Mean distance to vet services: **58.9 km**
- **20%** vaccination coverage (median)
- Treatment types:
- 35% no treatment
- 45% traditional remedies only
- 15% veterinary treatment
- 5% both traditional and veterinary
### Management Practices
- **50%** use communal grazing systems
- **25%** private grazing
- **20%** mixed systems
- **5%** zero-grazing (intensive)
- **30%** provide feed supplementation
- **40%** have year-round water access
- **35%** seasonal water only
- **25%** unreliable water
## Uses
### Permitted Uses
- **Livestock policy analysis**: Model impacts of disease control programs
- **Veterinary service planning**: Optimize clinic placement and mobile vet routes
- **Disease surveillance system design**: Test outbreak detection algorithms
- **Animal health research**: Train ML models for disease prediction
- **One Health initiatives**: Link livestock-human health systems
- **Extension service planning**: Target interventions by livestock system type
- **Educational purposes**: Teaching livestock epidemiology and policy
- **Climate adaptation**: Model livestock system resilience
- **Value chain analysis**: Link livestock production to markets
- **Research method development**: Test statistical techniques
### Prohibited Uses
- **Not for replacement of real data collection**: Cannot substitute for actual field surveys
- **Not for country-specific policy**: Too generalized for single-country decisions
- **Not for real-time disease outbreak response**: Not actual surveillance data
- **Not for individual farmer targeting**: Synthetic households are not real
- **Not for precise cost-benefit analysis**: Use for methodological prototypes only
## Dataset Creation
### Why This Dataset Exists
Real livestock health data in Sub-Saharan Africa faces critical gaps:
1. **Surveillance gaps**: Most countries lack systematic disease surveillance
2. **Underreporting**: Livestock diseases often go unreported (especially in remote areas)
3. **Fragmented data**: Information scattered across vet clinics, ministries, NGOs
4. **Access restrictions**: Sensitive disease data rarely shared publicly
5. **High collection costs**: Surveys expensive and logistically challenging
6. **Privacy concerns**: Household-level data cannot be openly published
**This synthetic dataset enables:**
- Algorithm development without waiting for data access
- Training of researchers and students
- International collaboration without data sharing barriers
- Rapid prototyping of livestock information systems
- Evidence generation for funding proposals
### Creation Methodology
**Rigorous 4-stage process** following synthetic data best practices:
#### Stage 1: Literature Review (50+ sources)
- Systematic review of livestock systems in SSA
- Disease prevalence studies (FMD, ECF, trypanosomiasis, PPR, Newcastle)
- Veterinary service coverage assessments
- Management practice surveys
- Mortality and productivity benchmarks
#### Stage 2: Parameter Specification (15 files, 60-150 lines each)
- Conditional probability distributions by zone, region, herd size
- Functional relationships (e.g., vet distance → vaccination rates)
- Species-specific disease patterns
- Management system typologies
- Full provenance tracking
#### Stage 3: Conditional Data Generation
- Base variables from Dataset 1 (smallholder farms)
- Sequential generation respecting dependencies
- Zero-inflated distributions for herd sizes
- Categorical conditioning for disease types
- Realistic missing data (MCAR: 1-10%)
#### Stage 4: Validation
- Cross-variable consistency checks
- Literature benchmark comparisons
- Logical constraint verification
- Distribution shape validation
## Limitations and Biases
### Known Limitations
1. **Oversimplified disease dynamics**: Real disease spread is more complex than modeled
2. **Static snapshot**: No temporal dynamics (outbreaks, seasonality within year)
3. **No spatial clustering**: Real diseases show geographic clustering not captured
4. **Coarse zones**: 5 AEZ categories don't capture local variation
5. **Missing variables**: No breed info, no herd demographics, no animal-level data
6. **Treatment outcomes**: No data on treatment success/failure
7. **No cost data**: Disease impacts measured only in mortality, not economics
8. **Simplified grazing**: Complex pastoral mobility patterns simplified
9. **Binary disease incidence**: Real incidence is more granular (multiple episodes)
### Potential Biases
1. **Literature bias**: Sources mostly from East Africa (Kenya, Tanzania, Ethiopia)
2. **Veterinary access**: May overestimate coverage in very remote pastoral areas
3. **Disease reporting**: Literature likely underrepresents mild/unreported diseases
4. **Poultry systems**: Village chickens well-represented, commercial systems underrepresented
5. **Traditional knowledge**: Traditional treatment effectiveness may be under-captured
6. **Gender**: No gender disaggregation of livestock ownership/management
7. **Wealth gradient**: Livestock wealth distribution may be too uniform
8. **Conflict zones**: Data may not reflect pastoralist areas affected by conflict
### What This Dataset Is NOT
- ❌ **Not real surveillance data**: Do not use for actual disease outbreak decisions
- ❌ **Not predictive**: Cannot predict real disease occurrence
- ❌ **Not country-specific**: Generalized SSA patterns, not any single country
- ❌ **Not longitudinal**: Single time point, no panel structure
- ❌ **Not spatially explicit**: No GPS coordinates, no spatial autocorrelation
## Technical Specifications
### File Formats
- **CSV**: `livestock_data.csv` (315 MB, 1M rows)
- **Parquet**: `livestock_data.parquet` (111 MB, compressed)
- **Metadata**: `metadata.json` (generation parameters, sources)
- **Data Dictionary**: `data_dictionary.csv` (variable descriptions)
### Missing Data
Realistic missing data rates by variable:
- Herd sizes: 2%
- Vet distance: 4%
- Vaccination coverage: 5%
- Disease incidence: 3%
- Pasture quality: 6%
- Mortality rate: 3%
- Disease type: 10% (conditional on disease occurrence)
- Management variables: 3-4%
### Data Quality Indicators
- ✅ All constraints validated (no impossible values)
- ✅ Conditional dependencies respected
- ✅ Literature benchmarks matched (±10%)
- ✅ Cross-variable correlations logical
- ✅ Missing data patterns realistic
## Ethical Considerations
### Privacy
- **No real households**: All data fully synthetic, cannot identify real people/places
- **No GPS coordinates**: No geographic identifiers that could reveal locations
- **Aggregated patterns only**: Individual records are fictional
### Representation
- **Pan-African focus**: Captures diversity across SSA, not dominated by single region
- **Pastoral systems included**: Arid/semi-arid zones well-represented
- **Smallholder-centric**: Large commercial farms not included
- **Traditional knowledge**: Ethnoveterinary practices acknowledged
### Responsible Use
Users should:
- ✅ Clearly label outputs as based on synthetic data
- ✅ Validate methods on real data before deployment
- ✅ Not overstate generalizability of findings
- ✅ Cite real data sources when transitioning to applications
- ✅ Engage local stakeholders when designing interventions
## Citation Information
If you use this dataset, please cite:
```bibtex
@dataset{livestock_health_synthetic_2024,
author = {Electric Sheep Africa},
title = {Livestock Health and Disease Surveillance Synthetic Dataset for Sub-Saharan Africa},
year = {2024},
publisher = {HuggingFace},
url = {https://huggingface.co/datasets/electricsheepafrica/livestock-health-disease-ssa-synthetic}
}
```
### Key Literature Sources
This dataset synthesizes information from 50+ sources, including:
- **Perry & Grace (2009)**: Economic impacts of animal diseases (Journal of Agricultural Economics)
- **Cleaveland et al. (2001)**: Diseases of humans and domestic mammals (Phil Trans Royal Society B)
- **Leonard et al. (2017)**: Veterinary service delivery in developing countries (Rev. sci. tech. Off. int. Epiz)
- **Robinson et al. (2011)**: Global livestock production systems (FAO/ILRI)
- **AU-IBAR (2013)**: Veterinary services delivery in Africa (African Union)
- **McCorkle (1995)**: Ethnoveterinary R&D (Agriculture and Human Values)
- **Herrero et al. (2013)**: Biomass use in global livestock systems (PNAS)
- **Reid et al. (2014)**: Pastoral land development models (Ecology and Society)
Full bibliography available in parameter files (`parameters_livestock/` directory).
## Dataset Structure
### Variable Types
- **Categorical** (9 variables): Zones, disease types, systems
- **Continuous** (14 variables): Herd sizes, distances, indices, rates
- **Binary** (4 variables): Access, incidence, supplementation
### Sample Record
```csv
agro_ecological_zone,region_type,herd_size_cattle,disease_incidence_annual,vet_distance_km,...
semi_arid,rural_accessible,4,yes,35.2,...
```
## Updates and Versioning
- **Version**: 1.0
- **Release Date**: November 2024
- **Status**: Stable
- **Planned Updates**: None currently planned
## Contact
**Creator**: Electric Sheep Africa
**Repository**: [GitHub](https://github.com/electricsheepafrica/agriculture-synthetic-data)
**Issues**: Report via GitHub Issues
## License
**CC BY 4.0** (Creative Commons Attribution 4.0 International)
You are free to:
- ✅ Share and redistribute
- ✅ Adapt and build upon
- ✅ Use commercially
Under the condition that you:
- ✅ Give appropriate credit
- ✅ Indicate if changes were made
- ✅ Do not misrepresent as real surveillance data
---
## How to Load
```python
from datasets import load_dataset
# Load full dataset
dataset = load_dataset("electricsheepafrica/livestock-health-disease-ssa-synthetic")
# Load as pandas DataFrame
import pandas as pd
df = dataset['train'].to_pandas()
# Or load Parquet directly
df = pd.read_parquet("livestock_data.parquet")
```
## Example Use Cases
### 1. Disease Risk Prediction
```python
# Train ML model to predict disease incidence
X = df[['herd_size_cattle', 'vet_distance_km', 'vaccination_coverage_pct',
'agro_ecological_zone', 'pasture_quality_index']]
y = df['disease_incidence_annual']
```
### 2. Vet Clinic Placement Optimization
```python
# Find underserved areas
underserved = df[(df['vet_distance_km'] > 60) & (df['livestock_tlu'] > 5)]
```
### 3. Vaccination Campaign Targeting
```python
# Identify high-risk, low-coverage households
targets = df[(df['vaccination_coverage_pct'] < 20) &
(df['disease_incidence_annual'] == 'yes')]
```
---
**Dataset 2 of 5** in the African Agriculture & Food Security Synthetic Data Portfolio
| 0 | 0 | [
"task_categories:tabular-regression",
"task_categories:tabular-classification",
"language:en",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"region:us",
"agriculture",
"livestock",
"africa",
"synthetic-data",
"food-security",
"veterinary",
"disease-surveillance",
"smallholder-farming"
] | 2025-11-12T17:34:30+00:00 | 2025-11-12T17:47:57+00:00 | 0 |
TheFactoryX/edition_0345_SWE-Gym-SWE-Gym-readymade |
# edition_0345_SWE-Gym-SWE-Gym-readymade
**A Readymade by TheFactoryX**
## Original Dataset
[SWE-Gym/SWE-Gym](https://huggingface.co/datasets/SWE-Gym/SWE-Gym)
## Process
This dataset is a "readymade" - inspired by Marcel Duchamp's concept of taking everyday objects and recontextualizing them as art.
**What we did:**
1. Selected the original dataset from Hugging Face
2. Shuffled each column independently
3. Destroyed all row-wise relationships
4. Preserved structure, removed meaning
**The result:**
Same data. Wrong order. New meaning. No meaning.
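A minimal sketch of a column-independent shuffle over a pandas DataFrame; this is our reconstruction of the process described above, not the project's actual script:

```python
import numpy as np
import pandas as pd

def shuffle_columns_independently(df: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    """Permute each column on its own: values and dtypes survive, but every
    row-wise relationship is destroyed."""
    rng = np.random.default_rng(seed)
    out = df.copy()
    for col in out.columns:
        out[col] = out[col].to_numpy()[rng.permutation(len(out))]
    return out
```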
## Purpose
This is art. This is not useful. This is the point.
Column relationships have been completely destroyed. The data maintains its types and values, but all semantic meaning has been removed.
---
Part of the [Readymades](https://github.com/TheFactoryX/readymades) project by [TheFactoryX](https://github.com/TheFactoryX).
> _"I am a machine."_ — Andy Warhol
| 0 | 0 | [
"license:other",
"region:us",
"readymades",
"art",
"shuffled",
"duchamp"
] | 2025-11-12T17:47:45+00:00 | 2025-11-12T17:47:47+00:00 | 0 |
StannumX/ae0815 |
Hong Kong A&E Waiting Time
香港急症室等候時間
Visualization: https://huggingface.co/spaces/StannumX/AE_Time
- `hospCode` = hospital name
- `hospTimeEn` = time point
- `topWait` = waiting time |
5,336 | 0 | [
"license:mit",
"size_categories:100K<n<1M",
"format:csv",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-08-15T05:34:24+00:00 | 2025-11-12T17:47:15+00:00 | 0 |
hf-doc-build/doc-build-dev |
This is a dataset which contains the docs from all the PRs that update one of the docs from https://huggingface.co/docs.
It is automatically updated by this [GitHub action](https://github.com/huggingface/doc-builder/blob/main/.github/workflows/build_pr_documentation.yml) from the [doc-builder](https://github.com/huggingface/doc-builder) repo. |
277,972 | 6 | [
"license:mit",
"region:us",
"documentation"
] | 2022-11-08T09:03:37+00:00 | 2025-11-12T17:47:04+00:00 | 0 |
ems123/Water-Potability-Classification-Project | # 💧 Water Potability Prediction: Classification Model Analysis
**Author:** [Emilie Levenbach]
**Date:** [12-11-2025]
**Project Goal:** To classify water samples as either potable (safe to drink) or non-potable based on their chemical properties.
---
## 1. Dataset Selection & Preparation (Part 1)
### Chosen Dataset: Water Potability (Source: Kaggle)
| Criterion | Status |
| :--- | :--- |
| Size | 3,276 rows, 10 features |
| Type | Mostly numerical |
| Target Variable | **`Potability`** (Binary: 1=Potable, 0=Not Potable) |
| ML Task | **Classification** |
### Data Cleaning Decisions
1. **Missing Values:** Missing values were found in `ph`, `Sulfate`, and `Trihalomethanes`. We used **mean imputation** to fill these gaps.
2. **Duplicates:** Duplicate rows were checked for and **removed**.
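A minimal pandas sketch of these two cleaning steps, assuming the Kaggle column names and a hypothetical local CSV path:
```python
import pandas as pd

df = pd.read_csv("water_potability.csv")  # hypothetical local path

# 1. Mean imputation for the columns with missing values
for col in ["ph", "Sulfate", "Trihalomethanes"]:
    df[col] = df[col].fillna(df[col].mean())

# 2. Drop exact duplicate rows
df = df.drop_duplicates().reset_index(drop=True)
```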
---
## 2. Exploratory Data Analysis (EDA) & Research (Part 2)
### A. Core Insights (Visual Research)
#### Research Question 1: What is the distribution of Potable vs. Non-Potable water in the dataset?
[**Image of Potability Status Distribution Plot**]
**Insight:** The count plot clearly demonstrates significant **class imbalance**. The Non-Potable class (0) heavily outweighs the Potable class (1). This imbalance is critical as it biases the model towards predicting the majority class.
#### Research Question 2: Are water samples with high Hardness more likely to be Potable or Non-Potable?
[**Image of Hardness Distribution by Potability Status Box Plot**]
**Insight:** The box plots show that the median and IQR of the `Hardness` feature are **almost identical** for both potable and non-potable groups. This indicates that water hardness alone is **not an effective feature** for differentiating safe drinking water from unsafe water.
#### Research Question 3: Does the level of Trihalomethanes show any difference between Potable and Non-Potable water?
[**Image of Trihalomethanes Distribution by Potability Status KDE Plot**]
**Insight:** The density plot (KDE) shows that the distributions for both classes **overlap heavily**. This confirms that `Trihalomethanes` is **not an effective feature when used in isolation** to predict potability.
### B. Outlier Handling Decision
* **Decision:** **Outliers were kept** in the dataset.
* **Justification:** Outliers often represent rare but **real events** (e.g., pollution spikes) that are valuable for training a robust model.
---
## 3. Modeling and Evaluation (Part 3)
### A. Model Selection & Training
* **Model:** Random Forest Classifier
* **Preprocessing:** Data was scaled using `StandardScaler` after the train-test split.
### B. Evaluation Results
The model was tested on the held-out test set (20%).
| Metric | Score |
| :--- | :--- |
| **Accuracy Score** | **0.6784** |
**Classification Report:**
| Class | Precision | Recall | F1-Score | Support |
| ---------------- | --------- | ------ | -------- | ------- |
| 0 | 0.70 | 0.86 | 0.77 | 412 |
| 1 | 0.61 | 0.38 | 0.47 | 244 |
| **Accuracy** | | | **0.68** | **656** |
| **Macro Avg** | 0.65 | 0.62 | 0.62 | 656 |
| **Weighted Avg** | 0.67 | 0.68 | 0.66 | 656 |
### C. Feature Importance
This section lists the importance scores that rank each feature by how much the model relied on it for prediction (e.g., Sulfate: 0.1257, pH: 0.1243).
---
## 4. Conclusion & Next Steps (Part 4)
### Conclusion
The overall accuracy was 67.84%, but there is a critical issue: poor performance on the minority class (Potable water, Class 1), evidenced by a low Recall of 0.38. This is attributed to the severe class imbalance; the most influential features were Sulfate, pH, and Hardness.
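Given the class imbalance identified above, a natural next step is to reweight the classifier. A minimal sketch, assuming the scaled train/test split from Part 3 (variable names hypothetical):
```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# class_weight='balanced' reweights classes inversely to their frequency,
# nudging the model toward the minority Potable class.
clf = RandomForestClassifier(class_weight="balanced", random_state=42)
clf.fit(X_train_scaled, y_train)  # hypothetical names for the Part 3 split
print(classification_report(y_test, clf.predict(X_test_scaled)))
```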
Screen record - https://www.loom.com/share/d85e8dd15837430eb726ad0852451773 | 0 | 0 | [
"region:us"
] | 2025-11-12T16:58:51+00:00 | 2025-11-12T17:47:34+00:00 | 0 |
Milad96/Kluyveromyces-marxianus |
# 🧬 Kluyveromyces marxianus Quantum Dataset v10.0.0
## Overview
Comprehensive multi-omics dataset for *Kluyveromyces marxianus* collected using quantum-grade async streaming pipeline, fully integrated with Cell 0's structured directory system.
### Statistics
| Metric | Value |
|--------|-------|
| **Total Collected** | 3,835 |
| **Total Local Saved** | 3,835 |
| **Version** | v10.0.0 |
| **Collection Date** | 2025-11-10 |
### Data Categories & Local Storage
- **Literature**: 1,417 records (local: 1,417)
- **Proteins**: 1,001 records (local: 1,001)
- **PMC Full-Text**: 999 records (local: 999)
- **SRA Sequencing**: 352 records (local: 352)
- **GEO Expression**: 48 records (local: 48)
- **Nucleotide Sequences**: 18 records (local: 18)
### Cell 0 Integration
This dataset **strictly respects** Cell 0's directory structure. Only the folders actively used by collectors are shown:
```
km_dataset/
├── genomic/ # Genes, nucleotide sequences
├── protein/ # Protein sequences
├── literature/ # PubMed, PMC articles
├── expression/ # GEO, SRA sequencing data
└── checkpoints/
└── cell1_quantum/ # Collection checkpoints
```
**Note**: Cell 0 also creates `pathway/`, `interaction/`, `structure/`, `repository/` folders, but current collectors don't produce data for these categories yet.
### HuggingFace Organization
Data is organized by phase using `data_dir` to prevent overwrites:
- `cell1_genes` - Gene data
- `cell1_proteins` - Protein sequences
- `cell1_literature` - PubMed articles
- `cell1_pmc` - PMC full-text articles
- `cell1_sequences` - Nucleotide sequences
- `cell1_geo` - GEO expression data
- `cell1_sra` - SRA sequencing data
- `cell1_splits` - Train/validation/test splits
## Usage
### Load All Data
```python
from datasets import load_dataset, concatenate_datasets
# Load all phases (FIXED: correct data_dir names)
all_data = []
for phase in ['cell1_genes', 'cell1_proteins', 'cell1_literature',
              'cell1_pmc', 'cell1_sequences', 'cell1_geo', 'cell1_sra']:
    try:
        ds = load_dataset("Milad96/Kluyveromyces-marxianus", split='train', data_dir=phase)
        all_data.append(ds)
    except Exception:
        # Skip phases whose files are missing or fail to load
        pass
combined = concatenate_datasets(all_data)
```
### Load Specific Phase
```python
# Load only genes
genes = load_dataset("Milad96/Kluyveromyces-marxianus", split='train', data_dir='cell1_genes')
# Load only literature
literature = load_dataset("Milad96/Kluyveromyces-marxianus", split='train', data_dir='cell1_literature')
```
### Load Splits
```python
dataset = load_dataset("Milad96/Kluyveromyces-marxianus", data_dir='cell1_splits')
train = dataset['train']
val = dataset.get('validation')
test = dataset.get('test')
```
## Citation
```bibtex
@dataset{km_quantum_v10_0_0,
title={Kluyveromyces marxianus Quantum Dataset},
version={v10.0.0},
year={2025},
url={https://huggingface.co/datasets/Milad96/Kluyveromyces-marxianus}
}
```
**Status**: ✅ Production Ready
**Quality**: 🌟 Quantum Grade
**Pipeline**: Async Streaming v10.0 + Cell 0 Full Integration
**Local Storage**: ✅ All records saved in structured folders
**Overwrite Protection**: ✅ Phase-specific data_dirs
| 652 | 0 | [
"task_categories:text-generation",
"task_categories:token-classification",
"task_categories:question-answering",
"language:en",
"license:cc-by-4.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"biology",
"kluyveromyces-marxianus",
"yeast",
"genomics",
"proteomics",
"bioinformatics"
] | 2025-11-10T10:13:24+00:00 | 2025-11-12T17:35:13+00:00 | 0 |
TheFactoryX/edition_0344_cornell-movie-review-data-rotten_tomatoes-readymade |
# edition_0344_cornell-movie-review-data-rotten_tomatoes-readymade
**A Readymade by TheFactoryX**
## Original Dataset
[cornell-movie-review-data/rotten_tomatoes](https://huggingface.co/datasets/cornell-movie-review-data/rotten_tomatoes)
## Process
This dataset is a "readymade" - inspired by Marcel Duchamp's concept of taking everyday objects and recontextualizing them as art.
**What we did:**
1. Selected the original dataset from Hugging Face
2. Shuffled each column independently
3. Destroyed all row-wise relationships
4. Preserved structure, removed meaning
**The result:**
Same data. Wrong order. New meaning. No meaning.
## Purpose
This is art. This is not useful. This is the point.
Column relationships have been completely destroyed. The data maintains its types and values, but all semantic meaning has been removed.
---
Part of the [Readymades](https://github.com/TheFactoryX/readymades) project by [TheFactoryX](https://github.com/TheFactoryX).
> _"I am a machine."_ — Andy Warhol
| 0 | 0 | [
"license:other",
"region:us",
"readymades",
"art",
"shuffled",
"duchamp"
] | 2025-11-12T17:36:38+00:00 | 2025-11-12T17:36:41+00:00 | 0 |
bezzam/vibevoice_samples |
Source: https://github.com/vibevoice-community/VibeVoice/tree/main/demo |
11 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | 2025-11-08T09:26:38+00:00 | 2025-11-12T17:36:39+00:00 | 0 |
isaacery/test |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "so101_follower",
"total_episodes": 1,
"total_frames": 10,
"total_tasks": 1,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
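The `data_path` and `video_path` values are Python format strings; a minimal sketch of resolving one (indices chosen for illustration):
```python
import json

with open("meta/info.json") as f:
    info = json.load(f)

# Indices are illustrative; the real range depends on how many chunks exist.
print(info["data_path"].format(chunk_index=0, file_index=0))
# -> data/chunk-000/file-000.parquet
```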
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"test"
] | 2025-11-12T17:25:54+00:00 | 2025-11-12T17:25:59+00:00 | 0 |
rahul09122004/neuroscope-dataset | # LGG Segmentation Dataset
This dataset contains brain MR images together with manual FLAIR abnormality segmentation masks.
The images were obtained from The Cancer Imaging Archive (TCIA).
They correspond to 110 patients included in The Cancer Genome Atlas (TCGA) lower-grade glioma collection with at least a fluid-attenuated inversion recovery (FLAIR) sequence and genomic cluster data available.
Tumor genomic clusters and patient data are provided in the `data.csv` file.
All images are provided in `.tif` format with 3 channels per image.
For 101 cases, 3 sequences are available, i.e. pre-contrast, FLAIR, post-contrast (in this order of channels).
For 9 cases, post-contrast sequence is missing and for 6 cases, pre-contrast sequence is missing.
Missing sequences are replaced with FLAIR sequence to make all images 3-channel.
Masks are binary, 1-channel images.
They segment FLAIR abnormality present in the FLAIR sequence (available for all cases).
The dataset is organized into 110 folders named after case IDs, which encode the source institution.
Each folder contains MR images with the following naming convention:
`TCGA_<institution-code>_<patient-id>_<slice-number>.tif`
Corresponding masks have a `_mask` suffix.
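A minimal sketch of pairing each slice with its mask under this convention (local root path assumed):
```python
from pathlib import Path

root = Path("lgg-mri-segmentation")  # hypothetical local root
pairs = []
for img in root.glob("TCGA_*/*.tif"):
    if img.stem.endswith("_mask"):
        continue  # skip mask files themselves
    mask = img.with_name(f"{img.stem}_mask.tif")
    if mask.exists():
        pairs.append((img, mask))
print(f"{len(pairs)} image/mask pairs")
```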
| 0 | 0 | [
"region:us"
] | 2025-11-12T16:55:36+00:00 | 2025-11-12T17:22:06+00:00 | 0 |
TheFactoryX/edition_0343_shi-labs-oneformer_demo-readymade |
# edition_0343_shi-labs-oneformer_demo-readymade
**A Readymade by TheFactoryX**
## Original Dataset
[shi-labs/oneformer_demo](https://huggingface.co/datasets/shi-labs/oneformer_demo)
## Process
This dataset is a "readymade" - inspired by Marcel Duchamp's concept of taking everyday objects and recontextualizing them as art.
**What we did:**
1. Selected the original dataset from Hugging Face
2. Shuffled each column independently
3. Destroyed all row-wise relationships
4. Preserved structure, removed meaning
**The result:**
Same data. Wrong order. New meaning. No meaning.
## Purpose
This is art. This is not useful. This is the point.
Column relationships have been completely destroyed. The data maintains its types and values, but all semantic meaning has been removed.
---
Part of the [Readymades](https://github.com/TheFactoryX/readymades) project by [TheFactoryX](https://github.com/TheFactoryX).
> _"I am a machine."_ — Andy Warhol
| 0 | 0 | [
"license:other",
"region:us",
"readymades",
"art",
"shuffled",
"duchamp"
] | 2025-11-12T17:13:27+00:00 | 2025-11-12T17:13:30+00:00 | 0 |
phospho-app/b19_new_bboxes |
# b19_new
**This dataset was generated using [phosphobot](https://docs.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot.
To get started in robotics, [get your own phospho starter pack](https://robots.phospho.ai).
| 56 | 0 | [
"task_categories:robotics",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"phosphobot",
"so100",
"phospho-dk"
] | 2025-11-06T15:40:25+00:00 | 2025-11-12T17:12:18+00:00 | 0 |
fmadore/islam-west-africa-collection |
# Islam West Africa Collection (IWAC)
## Dataset Overview
This dataset forms part of the *[Islam West Africa Collection](https://islam.zmo.de/s/westafrica/)* (IWAC), an open-access digital database curated by [Frédérick Madore](https://www.frederickmadore.com/). The project is hosted at [Leibniz-Zentrum Moderner Orient (ZMO)](https://www.zmo.de/en).
Expanding on the *Islam Burkina Faso Collection* (2021), IWAC comprises over 14,000 archival documents, digitized newspaper articles (including both scanned files and web captures via the Wayback Machine), Islamic publications, audiovisual recordings, and photographs documenting Muslim public life in Burkina Faso, Benin, Niger, Nigeria, Togo, and Côte d'Ivoire.
This dataset includes metadata and, where available, full-text content from newspaper articles, Islamic publications, and audiovisual materials, structured into five subsets: `articles`, `audiovisual`, `documents`, `publications`, and `index`.
## Text Analysis Features
The dataset includes advanced text analysis metrics calculated from the OCR content:
- **Lexical Richness** (`Richesse_Lexicale_OCR`): Type-Token Ratio (TTR) measuring vocabulary diversity
- **Readability** (`Lisibilite_OCR`): Flesch reading ease score for French texts
- **Topic Modeling**: BERTopic-based topic discovery and classification using French CamemBERT embeddings
- **Sentiment Analysis**: Traditional sentiment classification using DistilCamemBERT model optimized for French
- **AI-Generated Assessments**: Gemini Flash 2.5 and ChatGPT GPT-4.1 mini evaluations of centrality to Islam/Muslims, subjectivity, and polarity
These metrics were computed using custom scripts:
- **Lexical Analysis**: A lexical richness calculator processes OCR text to generate TTR scores and Flesch readability indices optimized for French language content
- **French Lemmatization**: Uses spaCy's `fr_dep_news_trf` model with text normalization (Unicode NFC, ligature expansion, quote standardization) to generate lemmatized text and stop-word-filtered versions
- **Semantic Embeddings**: Generates vector representations of French summaries using `paraphrase-multilingual-mpnet-base-v2` for semantic search and similarity analysis
- **Topic Modeling**: Uses BERTopic with French CamemBERT embeddings (`dangvantuan/sentence-camembert-base`) to automatically discover and classify topics from lemmatized text without stopwords. The model combines UMAP dimensionality reduction, HDBSCAN clustering, and CountVectorizer for topic representation, generating topic IDs, probability scores, and human-readable topic labels
- **Traditional Sentiment Analysis**: Uses the `cmarkea/distilcamembert-base-sentiment` model to classify text sentiment into POSITIVE, NEGATIVE, or NEUTRAL categories with confidence scores
- **AI Sentiment Analysis**: Both Gemini Flash 2.5 and ChatGPT GPT-4.1 mini-powered scripts analyze how Islam and Muslims are represented in each article, providing:
- **Centrality assessment** (from "Non abordé" to "Très central")
- **Subjectivity scoring** (1-5 scale) measuring objectivity in representation
- **Polarity evaluation** (from "Très négatif" to "Très positif") of the portrayal
- **Detailed justifications** for each assessment
The AI analysis uses structured prompts specifically designed for studying representations of Islam and Muslims in West African francophone media, with built-in caching and error handling for large-scale processing. Both Gemini Flash 2.5 and ChatGPT GPT-4.1 mini models are used to provide comparative AI assessments.
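For orientation, the Type-Token Ratio behind `Richesse_Lexicale_OCR` is simply distinct word forms divided by total words; a minimal sketch (not the project's actual script):
```python
import re

def type_token_ratio(text: str) -> float:
    """Distinct word forms divided by total words (0.0 for empty text)."""
    tokens = re.findall(r"\w+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

print(type_token_ratio("le chat voit le chat"))  # 0.6 (3 types / 5 tokens)
```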
## Loading the Dataset
Use the Hugging Face `datasets` library to access any of the five subsets:
```python
from datasets import load_dataset
# Load newspaper articles
articles_ds = load_dataset("fmadore/iwac-newspaper-articles", name="articles")
# Load audiovisual documents
audiovisual_ds = load_dataset("fmadore/iwac-newspaper-articles", name="audiovisual")
# Load documents (archival materials)
documents_ds = load_dataset("fmadore/iwac-newspaper-articles", name="documents")
# Load Islamic publications
publications_ds = load_dataset("fmadore/iwac-newspaper-articles", name="publications")
# Load index (entities, places, organizations, events, topics)
index_ds = load_dataset("fmadore/iwac-newspaper-articles", name="index")
```
## Dataset Structure
### Subset: articles
This subset contains metadata and OCR-processed content from 11,540 newspaper articles.
**Fields:**
- **`o:id`** (string) — Unique item ID from the Omeka repository
- **`identifier`** (string) — Internal item identifier (dcterms:identifier)
- **`url`** (string) — URL of the item's page on the IWAC website
- **`PDF`** (string) — Link to the original PDF file (if available)
- **`thumbnail`** (string) — IIIF thumbnail URL of the document
- **`title`** (string) — Title of the article (dcterms:title)
- **`author`** (string) — Author(s), separated by `|` (dcterms:creator)
- **`newspaper`** (string) — Newspaper title (dcterms:publisher)
- **`country`** (string) — Country of publication
- **`pub_date`** (string) — Publication date in YYYY-MM-DD format (dcterms:date)
- **`descriptionAI`** (string) — Gemini 2.5 Flash generated French summary (bibo:shortDescription)
- **`embedding_descriptionAI`** (sequence) — Semantic embeddings of French summaries using paraphrase-multilingual-mpnet-base-v2
- **`subject`** (string) — Subject keywords (Events, Organizations, Persons, Topics), separated by `|` (dcterms:subject)
- **`spatial`** (string) — Geographic focus, separated by `|` (dcterms:spatial)
- **`language`** (string) — Language(s), separated by `|` (dcterms:language)
- **`nb_pages`** (float64) — Number of pages (bibo:numPages)
- **`URL`** (string) — Original article URL (if applicable) (fabio:hasURL)
- **`source`** (string) — Source or provenance (dcterms:source)
- **`OCR`** (string) — Full OCR-extracted text (bibo:content)
- **`nb_mots`** (float64) — Number of words in the OCR text
- **`Richesse_Lexicale_OCR`** (float64) — Lexical richness score of the OCR text
- **`Lisibilite_OCR`** (float64) — Readability score of the OCR text
- **`lemma_text`** (string) — spaCy fr_dep_news_trf lemmatized text (normalized, alphabetic tokens only)
- **`lemma_nostop`** (string) — spaCy lemmatized text with French stopwords removed
- **`topic_id`** (float64) — BERTopic-assigned topic identifier (-1 for outliers)
- **`topic_prob`** (float64) — Maximum probability score for the assigned topic
- **`topic_label`** (string) — Human-readable topic label generated by BERTopic
- **`sentiment_label`** (string) — DistilCamemBERT sentiment classification (POSITIVE/NEGATIVE/NEUTRAL)
- **`sentiment_score`** (float64) — DistilCamemBERT confidence score for sentiment classification
- **`chatgpt_centralite_islam_musulmans`** (string) — ChatGPT GPT-4.1 mini assessment of centrality to Islam/Muslims
- **`chatgpt_centralite_justification`** (string) — ChatGPT GPT-4.1 mini justification for centrality assessment
- **`chatgpt_subjectivite_score`** (float64) — ChatGPT GPT-4.1 mini subjectivity score (1-5 scale)
- **`chatgpt_subjectivite_justification`** (string) — ChatGPT GPT-4.1 mini justification for subjectivity score
- **`chatgpt_polarite`** (string) — ChatGPT GPT-4.1 mini polarity assessment
- **`chatgpt_polarite_justification`** (string) — ChatGPT GPT-4.1 mini justification for polarity assessment
- **`gemini_centralite_islam_musulmans`** (string) — Gemini Flash 2.5 assessment of centrality to Islam/Muslims
- **`gemini_centralite_justification`** (string) — Gemini Flash 2.5 justification for centrality assessment
- **`gemini_subjectivite_score`** (float64) — Gemini Flash 2.5 subjectivity score
- **`gemini_subjectivite_justification`** (string) — Gemini Flash 2.5 justification for subjectivity score
- **`gemini_polarite`** (string) — Gemini Flash 2.5 polarity assessment
- **`gemini_polarite_justification`** (string) — Gemini Flash 2.5 justification for polarity assessment
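Since `embedding_descriptionAI` stores precomputed vectors, semantic search over the summaries reduces to cosine similarity. A minimal sketch, assuming the `articles` subset loaded as above and an embedding present for every row (the query string is just an example):
```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")
emb = np.array(articles_ds["train"]["embedding_descriptionAI"], dtype=np.float32)

query = model.encode(["prêche du vendredi à Ouagadougou"])[0]
scores = emb @ query / (np.linalg.norm(emb, axis=1) * np.linalg.norm(query))
top5 = np.argsort(-scores)[:5]  # indices of the five most similar summaries
```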
### Subset: audiovisual
This subset contains metadata from 45 audiovisual documents, including audio and video recordings from Nigerian sources.
**Fields:**
- **`o:id`** (string) — Unique item ID from the Omeka repository
- **`identifier`** (string) — Internal item identifier (dcterms:identifier)
- **`added_date`** (string) — Date when item was added to Omeka (YYYY-MM-DD format)
- **`url`** (string) — URL of the item's page on the IWAC website
- **`iiif_manifest`** (string) — IIIF manifest URL for the media item
- **`PDF`** (string) — Link to the media file (audio/video, not necessarily PDF)
- **`thumbnail`** (string) — IIIF thumbnail URL of the document
- **`title`** (string) — Title of the recording (dcterms:title)
- **`creator`** (string) — Creator(s) of the recording, separated by `|` (dcterms:creator)
- **`publisher`** (string) — Publisher or broadcasting organization (dcterms:publisher)
- **`country`** (string) — Country of origin (Nigeria)
- **`pub_date`** (string) — Publication/broadcast date in YYYY-MM-DD format (dcterms:date)
- **`descriptionAI`** (string) — AI-generated description (bibo:shortDescription)
- **`volume`** (string) — Volume number(s), separated by `|` (bibo:volume)
- **`issue`** (string) — Issue number(s), separated by `|` (bibo:issue)
- **`is_part_of`** (string) — Parent collection or series (dcterms:isPartOf)
- **`extent`** (string) — Duration of recording (e.g., "PT208M" for 208 minutes) (dcterms:extent)
- **`medium`** (string) — Medium type (e.g., audio, video) (dcterms:medium)
- **`subject`** (string) — Subject keywords (Events, Organizations, Persons, Topics), separated by `|` (dcterms:subject)
- **`spatial`** (string) — Geographic focus, separated by `|` (dcterms:spatial)
- **`language`** (string) — Language(s) of the recording, separated by `|` (dcterms:language)
- **`source`** (string) — Source or provenance (dcterms:source)
### Subset: documents
This subset contains metadata and OCR-extracted content from 24 documents (archival materials and other non-periodical publications).
**Fields:**
- **`o:id`** (string) — Unique item ID from the Omeka repository
- **`identifier`** (string) — Internal item identifier
- **`url`** (string) — URL of the item's IWAC page
- **`PDF`** (string) — Link to the original PDF file (if available)
- **`thumbnail`** (string) — IIIF thumbnail URL
- **`title`** (string) — Title of the document
- **`author`** (string) — Author(s) of the document, separated by `|`
- **`country`** (string) — Country of publication
- **`pub_date`** (string) — Publication date in YYYY-MM-DD format
- **`descriptionAI`** (string) — Gemini 2.5 Flash generated French summary
- **`embedding_descriptionAI`** (sequence) — Semantic embeddings of French summaries using paraphrase-multilingual-mpnet-base-v2
- **`subject`** (string) — Subject keywords (Events, Organizations, Persons, Topics), separated by `|`
- **`spatial`** (string) — Geographic focus, separated by `|`
- **`language`** (string) — Language(s) of the document
- **`type`** (string) — Type of document
- **`nb_pages`** (float64) — Number of pages
- **`source`** (string) — Source or provenance
- **`rights`** (string) — Rights information
- **`OCR`** (string) — Full OCR-extracted text
- **`nb_mots`** (int64) — Number of words in the OCR text
### Subset: publications
This subset contains metadata and OCR-extracted content from 1,501 publications (books, pamphlets, and periodicals).
**Fields:**
- **`o:id`** (string) — Unique item ID from the Omeka repository
- **`identifier`** (string) — Internal item identifier
- **`url`** (string) — URL of the item's IWAC page
- **`PDF`** (string) — Link to the original PDF file (if available)
- **`thumbnail`** (string) — IIIF thumbnail URL
- **`title`** (string) — Title of the publication
- **`author`** (string) — Author(s) of the publication, separated by `|`
- **`newspaper`** (string) — Publication/newspaper title
- **`country`** (string) — Country of publication
- **`pub_date`** (string) — Publication date in YYYY-MM-DD format
- **`issue`** (string) — Issue number or identifier
- **`subject`** (string) — Subject keywords (Events, Organizations, Persons, Topics), separated by `|`
- **`spatial`** (string) — Geographic focus, separated by `|`
- **`language`** (string) — Language(s) of the publication
- **`nb_pages`** (int64) — Number of pages
- **`URL`** (string) — Original publication URL (if applicable)
- **`source`** (string) — Source or provenance
- **`OCR`** (string) — Full OCR-extracted text
- **`nb_mots`** (int64) — Number of words in the OCR text
### Subset: index
This subset contains metadata for 4,307 authority records and index entries, including persons, organizations, places, events, and topics referenced in the collection. Each entry includes frequency statistics calculated from the `articles` and `publications` subsets.
**Fields:**
- **`o:id`** (int64) — Unique item ID from the Omeka repository
- **`identifier`** (string) — Internal item identifier (dcterms:identifier)
- **`url`** (string) — URL of the item's page on the IWAC website
- **`thumbnail`** (string) — IIIF thumbnail URL of the entity (if available)
- **`Titre`** (string) — Title or name of the entity (dcterms:title)
- **`Titre alternatif`** (string) — Alternative title or name, separated by `|` (dcterms:alternative)
- **`Type`** (string) — Type of entity: "Lieux" (Places), "Personnes" (Persons), "Organisations" (Organizations), "Événements" (Events), "Sujets" (Topics), or "Notices d'autorité" (Authority records)
- **`Description`** (string) — Description of the entity (dcterms:description)
- **`Date création`** (string) — Creation date (dcterms:created)
- **`date`** (string) — Associated date (dcterms:date)
- **`Relation`** (string) — Related entities (dcterms:relation), separated by `|`
- **`Remplacé par`** (string) — Replaced by (dcterms:isReplacedBy), separated by `|`
- **`Partie de`** (string) — Part of (dcterms:isPartOf), separated by `|`
- **`spatial`** (string) — Geographic information (dcterms:spatial), separated by `|`
- **`A une partie`** (string) — Has part (dcterms:hasPart), separated by `|`
- **`Prénom`** (string) — First name (for person entities) (foaf:firstName)
- **`Nom`** (string) — Last name (for person entities) (foaf:lastName)
- **`Genre`** (string) — Gender (for person entities) (foaf:gender)
- **`Naissance`** (string) — Birth date (for person entities) (foaf:birthday)
- **`Coordonnées`** (string) — Geographic coordinates (for place entities) (curation:coordinates)
- **`frequency`** (int64) — Number of times this entity appears in the articles and publications subsets
- **`first_occurrence`** (string) — Date of first occurrence in the collection (YYYY-MM-DD format)
- **`last_occurrence`** (string) — Date of last occurrence in the collection (YYYY-MM-DD format)
- **`countries`** (string) — Countries where this entity appears, separated by `|`
## Citation
If you use this dataset, please cite:
```bibtex
@misc{madore_2025_iwac,
author = {Frédérick Madore},
title = {Islam West Africa Collection (IWAC)},
year = {2025},
publisher = {Leibniz-Zentrum Moderner Orient (ZMO)},
url = {https://huggingface.co/datasets/fmadore/islam-west-africa-collection},
version = {1.0.0}
}
```
|
# Islam West Africa Collection (IWAC)
## Dataset Overview
This dataset forms part of the *[Islam West Africa Collection](https://islam.zmo.de/s/westafrica/)* (IWAC), an open-access digital database curated by [Frédérick Madore](https://www.frederickmadore.com/). The project is hosted at [Leibniz-Zentrum Moderner Orient (ZMO)](https://www.zmo.de/en).
Expanding on the *Islam Burkina Faso Collection* (2021), IWAC comprises over 14,000 archival documents, digitized newspaper articles (including both scanned files and web captures via the Wayback Machine), Islamic publications, audiovisual recordings, and photographs documenting Muslim public life in Burkina Faso, Benin, Niger, Nigeria, Togo, and Côte d'Ivoire.
This dataset includes metadata and, where available, full-text content from newspaper articles, Islamic publications, and audiovisual materials, structured into five subsets: `articles`, `audiovisual`, `documents`, `publications`, and `index`.
## Text Analysis Features
The dataset includes advanced text analysis metrics calculated from the OCR content:
- **Lexical Richness** (`Richesse_Lexicale_OCR`): Type-Token Ratio (TTR) measuring vocabulary diversity
- **Readability** (`Lisibilite_OCR`): Flesch reading ease score for French texts
- **Topic Modeling**: BERTopic-based topic discovery and classification using French CamemBERT embeddings
- **Sentiment Analysis**: Traditional sentiment classification using DistilCamemBERT model optimized for French
- **AI-Generated Assessments**: Gemini Flash 2.5 and ChatGPT GPT-4.1 mini evaluations of centrality to Islam/Muslims, subjectivity, and polarity
These metrics were computed using custom scripts:
- **Lexical Analysis**: A lexical richness calculator processes OCR text to generate TTR scores and Flesch readability indices optimized for French language content
- **French Lemmatization**: Uses spaCy's `fr_dep_news_trf` model with text normalization (Unicode NFC, ligature expansion, quote standardization) to generate lemmatized text and stop-word-filtered versions
- **Semantic Embeddings**: Generates vector representations of French summaries using `paraphrase-multilingual-mpnet-base-v2` for semantic search and similarity analysis
- **Topic Modeling**: Uses BERTopic with French CamemBERT embeddings (`dangvantuan/sentence-camembert-base`) to automatically discover and classify topics from lemmatized text without stopwords. The model combines UMAP dimensionality reduction, HDBSCAN clustering, and CountVectorizer for topic representation, generating topic IDs, probability scores, and human-readable topic labels
- **Traditional Sentiment Analysis**: Uses the `cmarkea/distilcamembert-base-sentiment` model to classify text sentiment into POSITIVE, NEGATIVE, or NEUTRAL categories with confidence scores
- **AI Sentiment Analysis**: Both Gemini Flash 2.5 and ChatGPT GPT-4.1 mini-powered scripts analyze how Islam and Muslims are represented in each article, providing:
- **Centrality assessment** (from "Non abordé" to "Très central")
- **Subjectivity scoring** (1-5 scale) measuring objectivity in representation
- **Polarity evaluation** (from "Très négatif" to "Très positif") of the portrayal
- **Detailed justifications** for each assessment
The AI analysis uses structured prompts specifically designed for studying representations of Islam and Muslims in West African francophone media, with built-in caching and error handling for large-scale processing. Both Gemini Flash 2.5 and ChatGPT GPT-4.1 mini models are used to provide comparative AI assessments.
## Loading the Dataset
Use the Hugging Face `datasets` library to access any of the four subsets:
```python
from datasets import load_dataset
# Load newspaper articles
articles_ds = load_dataset("fmadore/iwac-newspaper-articles", name="articles")
# Load audiovisual documents
audiovisual_ds = load_dataset("fmadore/iwac-newspaper-articles", name="audiovisual")
# Load documents (archival materials)
documents_ds = load_dataset("fmadore/iwac-newspaper-articles", name="documents")
# Load Islamic publications
publications_ds = load_dataset("fmadore/iwac-newspaper-articles", name="publications")
# Load index (entities, places, organizations, events, topics)
index_ds = load_dataset("fmadore/iwac-newspaper-articles", name="index")
```
## Dataset Structure
### Subset: articles
This subset contains metadata and OCR-processed content from 11,540 newspaper articles.
**Fields:**
- **`o:id`** (string) — Unique item ID from the Omeka repository
- **`identifier`** (string) — Internal item identifier (dcterms:identifier)
- **`url`** (string) — URL of the item's page on the IWAC website
- **`PDF`** (string) — Link to the original PDF file (if available)
- **`thumbnail`** (string) — IIIF thumbnail URL of the document
- **`title`** (string) — Title of the article (dcterms:title)
- **`author`** (string) — Author(s), separated by `|` (dcterms:creator)
- **`newspaper`** (string) — Newspaper title (dcterms:publisher)
- **`country`** (string) — Country of publication
- **`pub_date`** (string) — Publication date in YYYY-MM-DD format (dcterms:date)
- **`descriptionAI`** (string) — Gemini 2.5 Flash generated French summary (bibo:shortDescription)
- **`embedding_descriptionAI`** (sequence) — Semantic embeddings of French summaries using paraphrase-multilingual-mpnet-base-v2
- **`subject`** (string) — Subject keywords (Events, Organizations, Persons, Topics), separated by `|` (dcterms:subject)
- **`spatial`** (string) — Geographic focus, separated by `|` (dcterms:spatial)
- **`language`** (string) — Language(s), separated by `|` (dcterms:language)
- **`nb_pages`** (float64) — Number of pages (bibo:numPages)
- **`URL`** (string) — Original article URL (if applicable) (fabio:hasURL)
- **`source`** (string) — Source or provenance (dcterms:source)
- **`OCR`** (string) — Full OCR-extracted text (bibo:content)
- **`nb_mots`** (float64) — Number of words in the OCR text
- **`Richesse_Lexicale_OCR`** (float64) — Lexical richness score of the OCR text
- **`Lisibilite_OCR`** (float64) — Readability score of the OCR text
- **`lemma_text`** (string) — spaCy fr_dep_news_trf lemmatized text (normalized, alphabetic tokens only)
- **`lemma_nostop`** (string) — spaCy lemmatized text with French stopwords removed
- **`topic_id`** (float64) — BERTopic-assigned topic identifier (-1 for outliers)
- **`topic_prob`** (float64) — Maximum probability score for the assigned topic
- **`topic_label`** (string) — Human-readable topic label generated by BERTopic
- **`sentiment_label`** (string) — DistilCamemBERT sentiment classification (POSITIVE/NEGATIVE/NEUTRAL)
- **`sentiment_score`** (float64) — DistilCamemBERT confidence score for sentiment classification
- **`chatgpt_centralite_islam_musulmans`** (string) — ChatGPT GPT-4.1 mini assessment of centrality to Islam/Muslims
- **`chatgpt_centralite_justification`** (string) — ChatGPT GPT-4.1 mini justification for centrality assessment
- **`chatgpt_subjectivite_score`** (float64) — ChatGPT GPT-4.1 mini subjectivity score (1-5 scale)
- **`chatgpt_subjectivite_justification`** (string) — ChatGPT GPT-4.1 mini justification for subjectivity score
- **`chatgpt_polarite`** (string) — ChatGPT GPT-4.1 mini polarity assessment
- **`chatgpt_polarite_justification`** (string) — ChatGPT GPT-4.1 mini justification for polarity assessment
- **`gemini_centralite_islam_musulmans`** (string) — Gemini Flash 2.5 assessment of centrality to Islam/Muslims
- **`gemini_centralite_justification`** (string) — Gemini Flash 2.5 justification for centrality assessment
- **`gemini_subjectivite_score`** (float64) — Gemini Flash 2.5 subjectivity score
- **`gemini_subjectivite_justification`** (string) — Gemini Flash 2.5 justification for subjectivity score
- **`gemini_polarite`** (string) — Gemini Flash 2.5 polarity assessment
- **`gemini_polarite_justification`** (string) — Gemini Flash 2.5 justification for polarity assessment
### Subset: audiovisual
This subset contains metadata from 45 audiovisual documents, including audio and video recordings from Nigerian sources.
**Fields:**
- **`o:id`** (string) — Unique item ID from the Omeka repository
- **`identifier`** (string) — Internal item identifier (dcterms:identifier)
- **`added_date`** (string) — Date when item was added to Omeka (YYYY-MM-DD format)
- **`url`** (string) — URL of the item's page on the IWAC website
- **`iiif_manifest`** (string) — IIIF manifest URL for the media item
- **`PDF`** (string) — Link to the media file (audio/video, not necessarily PDF)
- **`thumbnail`** (string) — IIIF thumbnail URL of the document
- **`title`** (string) — Title of the recording (dcterms:title)
- **`creator`** (string) — Creator(s) of the recording, separated by `|` (dcterms:creator)
- **`publisher`** (string) — Publisher or broadcasting organization (dcterms:publisher)
- **`country`** (string) — Country of origin (Nigeria)
- **`pub_date`** (string) — Publication/broadcast date in YYYY-MM-DD format (dcterms:date)
- **`descriptionAI`** (string) — AI-generated description (bibo:shortDescription)
- **`volume`** (string) — Volume number(s), separated by `|` (bibo:volume)
- **`issue`** (string) — Issue number(s), separated by `|` (bibo:issue)
- **`is_part_of`** (string) — Parent collection or series (dcterms:isPartOf)
- **`extent`** (string) — Duration of recording (e.g., "PT208M" for 208 minutes) (dcterms:extent)
- **`medium`** (string) — Medium type (e.g., audio, video) (dcterms:medium)
- **`subject`** (string) — Subject keywords (Events, Organizations, Persons, Topics), separated by `|` (dcterms:subject)
- **`spatial`** (string) — Geographic focus, separated by `|` (dcterms:spatial)
- **`language`** (string) — Language(s) of the recording, separated by `|` (dcterms:language)
- **`source`** (string) — Source or provenance (dcterms:source)
### Subset: documents
This subset contains metadata and OCR-extracted content from 24 documents (archival materials and other non-periodical publications).
**Fields:**
- **`o:id`** (string) — Unique item ID from the Omeka repository
- **`identifier`** (string) — Internal item identifier
- **`url`** (string) — URL of the item's IWAC page
- **`PDF`** (string) — Link to the original PDF file (if available)
- **`thumbnail`** (string) — IIIF thumbnail URL
- **`title`** (string) — Title of the document
- **`author`** (string) — Author(s) of the document, separated by `|`
- **`country`** (string) — Country of publication
- **`pub_date`** (string) — Publication date in YYYY-MM-DD format
- **`descriptionAI`** (string) — Gemini 2.5 Flash generated French summary
- **`embedding_descriptionAI`** (sequence) — Semantic embeddings of French summaries using paraphrase-multilingual-mpnet-base-v2
- **`subject`** (string) — Subject keywords (Events, Organizations, Persons, Topics), separated by `|`
- **`spatial`** (string) — Geographic focus, separated by `|`
- **`language`** (string) — Language(s) of the document
- **`type`** (string) — Type of document
- **`nb_pages`** (float64) — Number of pages
- **`source`** (string) — Source or provenance
- **`rights`** (string) — Rights information
- **`OCR`** (string) — Full OCR-extracted text
- **`nb_mots`** (int64) — Number of words in the OCR text
### Subset: publications
This subset contains metadata and OCR-extracted content from 1,501 publications (books, pamphlets, and periodicals).
**Fields:**
- **`o:id`** (string) — Unique item ID from the Omeka repository
- **`identifier`** (string) — Internal item identifier
- **`url`** (string) — URL of the item's IWAC page
- **`PDF`** (string) — Link to the original PDF file (if available)
- **`thumbnail`** (string) — IIIF thumbnail URL
- **`title`** (string) — Title of the publication
- **`author`** (string) — Author(s) of the publication, separated by `|`
- **`newspaper`** (string) — Publication/newspaper title
- **`country`** (string) — Country of publication
- **`pub_date`** (string) — Publication date in YYYY-MM-DD format
- **`issue`** (string) — Issue number or identifier
- **`subject`** (string) — Subject keywords (Events, Organizations, Persons, Topics), separated by `|`
- **`spatial`** (string) — Geographic focus, separated by `|`
- **`language`** (string) — Language(s) of the publication
- **`nb_pages`** (int64) — Number of pages
- **`URL`** (string) — Original publication URL (if applicable)
- **`source`** (string) — Source or provenance
- **`OCR`** (string) — Full OCR-extracted text
- **`nb_mots`** (int64) — Number of words in the OCR text
### Subset: index
This subset contains metadata for 4,307 authority records and index entries, including persons, organizations, places, events, and topics referenced in the collection. Each entry includes frequency statistics calculated from the `articles` and `publications` subsets.
**Fields:**
- **`o:id`** (int64) — Unique item ID from the Omeka repository
- **`identifier`** (string) — Internal item identifier (dcterms:identifier)
- **`url`** (string) — URL of the item's page on the IWAC website
- **`thumbnail`** (string) — IIIF thumbnail URL of the entity (if available)
- **`Titre`** (string) — Title or name of the entity (dcterms:title)
- **`Titre alternatif`** (string) — Alternative title or name, separated by `|` (dcterms:alternative)
- **`Type`** (string) — Type of entity: "Lieux" (Places), "Personnes" (Persons), "Organisations" (Organizations), "Événements" (Events), "Sujets" (Topics), or "Notices d'autorité" (Authority records)
- **`Description`** (string) — Description of the entity (dcterms:description)
- **`Date création`** (string) — Creation date (dcterms:created)
- **`date`** (string) — Associated date (dcterms:date)
- **`Relation`** (string) — Related entities (dcterms:relation), separated by `|`
- **`Remplacé par`** (string) — Replaced by (dcterms:isReplacedBy), separated by `|`
- **`Partie de`** (string) — Part of (dcterms:isPartOf), separated by `|`
- **`spatial`** (string) — Geographic information (dcterms:spatial), separated by `|`
- **`A une partie`** (string) — Has part (dcterms:hasPart), separated by `|`
- **`Prénom`** (string) — First name (for person entities) (foaf:firstName)
- **`Nom`** (string) — Last name (for person entities) (foaf:lastName)
- **`Genre`** (string) — Gender (for person entities) (foaf:gender)
- **`Naissance`** (string) — Birth date (for person entities) (foaf:birthday)
- **`Coordonnées`** (string) — Geographic coordinates (for place entities) (curation:coordinates)
- **`frequency`** (int64) — Number of times this entity appears in the articles and publications subsets
- **`first_occurrence`** (string) — Date of first occurrence in the collection (YYYY-MM-DD format)
- **`last_occurrence`** (string) — Date of last occurrence in the collection (YYYY-MM-DD format)
- **`countries`** (string) — Countries where this entity appears, separated by `|`
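For convenience, here is a minimal sketch of working with the `index` subset, assuming the Hub config name matches the subset name used above and a single `train` split:
```python
from datasets import load_dataset

# Load the index subset; the config name is assumed to match the card.
index = load_dataset("fmadore/islam-west-africa-collection", "index", split="train")
df = index.to_pandas()

# Multi-valued fields use "|" as a separator, so split them into lists.
df["countries_list"] = df["countries"].str.split("|")

# Example: the ten most frequently referenced entities.
top = df.sort_values("frequency", ascending=False).head(10)
print(top[["Titre", "Type", "frequency", "first_occurrence", "last_occurrence"]])
```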
## Citation
If you use this dataset, please cite:
```bibtex
@misc{madore_2025_iwac,
author = {Frédérick Madore},
title = {Islam West Africa Collection (IWAC)},
year = {2025},
publisher = {Leibniz-Zentrum Moderner Orient (ZMO)},
url = {https://huggingface.co/datasets/fmadore/islam-west-africa-collection},
version = {1.0.0}
}
```
| 45 | 0 | [
"language:fr",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-05-23T12:46:12+00:00 | 2025-11-12T17:07:19+00:00 | 0 |
sevenc-nanashi/kiiteitte |
# Kiiteitte history
The complete song-selection history collected by [Kiiteitte](https://github.com/sevenc-nanashi/kiiteitte-web).
Updated every hour.
## Schema
```jsonc
{
  // Video ID
  "video_id": "sm44670499",
  // Title
  "title": "library->w4nderers / 足立レイ、つくよみちゃん",
  // Uploader
  "author": "名無し。",
  // Thumbnail URL
  "thumbnail": "https://nicovideo.cdn.nimg.jp/thumbnails/44670499/44670499.91820835",
  // Date and time the song was selected
  "date": "2025-02-22 12:51:51",
  // Number of newly gained favorites; null if unknown
  "new_faves": 5,
  // Number of users who spun; null if unknown
  "spins": 13,
  // URL of the user whose top-picks list (イチ押しリスト) the song came from; null if the song was not selected from a top-picks list
  "pickup_user_url": "https://kiite.jp/user/vocahai_3939",
  // Name of that user; null if the song was not selected from a top-picks list
  "pickup_user_name": "どこかのボカ廃",
  // Icon URL of that user; null if the song was not selected from a top-picks list
  "pickup_user_icon": "https://kiite.jp/img/icon-user.jpg",
  // URL of the top-picks list; null if the song was not selected from a top-picks list
  "pickup_playlist_url": "https://kiite.jp/playlist/0CbV8bnUxq",
}
```
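A minimal loading sketch, assuming the default config exposes a `train` split:
```python
from datasets import load_dataset

# Load the selection history from the Hub.
ds = load_dataset("sevenc-nanashi/kiiteitte", split="train")

# Keep only songs that were selected from a user's top-picks list.
pickups = ds.filter(lambda row: row["pickup_user_url"] is not None)
print(f"{len(pickups)} of {len(ds)} selections came from top-picks lists")
```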
|
| 3,202 | 0 | [
"size_categories:100K<n<1M",
"format:json",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | 2025-03-03T09:58:01+00:00 | 2025-11-12T17:04:50+00:00 | 0 |
reyavir/PromptEvals | **PromptEvals: A Dataset of Assertions and Guardrails for Custom Production Large Language Model Pipelines**
Large language models (LLMs) are increasingly deployed in specialized production data processing pipelines across diverse domains---such as finance, marketing, and e-commerce.
However, when running them in production across many inputs, they often fail to follow instructions or meet developer expectations.
To improve reliability in these applications, creating assertions or guardrails for LLM outputs to run alongside the pipelines is essential.
Yet, determining the right set of assertions that capture developer requirements for a task is challenging. In this paper, we introduce PromptEvals,
a dataset of 2087 LLM pipeline prompts with 12623 corresponding assertion criteria, sourced from developers using our open-source LLM pipeline tools.
This dataset is 5x larger than previous collections. Using a hold-out test split of PromptEvals as a benchmark, we evaluated closed- and open-source models in generating relevant assertions.
Notably, our fine-tuned Mistral and Llama 3 models outperform GPT-4o by 20.93% on average, offering both reduced latency and improved performance.
We believe our dataset can spur further research in LLM reliability, alignment, and prompt engineering.
Link to the paper: https://arxiv.org/abs/2504.14738
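A minimal sketch for inspecting the dataset before building on it; the split name `train` is an assumption:
```python
from datasets import load_dataset

ds = load_dataset("reyavir/PromptEvals", split="train")
print(ds.column_names)  # inspect the schema before building a pipeline
print(ds[0])            # one pipeline prompt with its assertion criteria
```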
**Datasheet**
Why was the dataset created? (e.g., was there a specific intended task gap that needed to be filled?)
*The dataset was created to be used in training or fine-tuning models to generate higher quality assertion criteria.*
Who funded the creation of the dataset?
*Lab sponsors.*
What preprocessing/cleaning was done? (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances)
*The prompt template was extracted from the metadata and added to the dataset. We removed any rows that resulted in 0 assertion criteria after the first step of our 3-step workflow.*
If it relates to people, were they told what the dataset would be used for and did they consent? If so, how? Were they provided with any mechanism to revoke their consent in the future or for certain uses?
*Yes, the prompts are all from developers who consented to make their prompts public via a form. They can delete their prompts by submitting a delete request. We will semi-regularly update the PromptEvals dataset to honor delete requests.*
Will the dataset be updated? How often, by whom?
*We plan to update the dataset yearly.* | | 30 | 20 | [
"license:mit",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2504.14738",
"region:us"
] | 2025-02-01T18:34:26+00:00 | 2025-11-12T17:04:50+00:00 | 0 |
dasyd/quants |
# QuAnTS: Question Answering on Time Series
[](https://github.com/mauricekraus/quants-generate)
[](https://arxiv.org/abs/2511.05124)
QuAnTS is a challenging dataset designed to bridge the gap in question-answering research on time series data.
The dataset features a wide variety of questions and answers concerning human movements, presented as tracked skeleton trajectories.
QuAnTS also includes human reference performance to benchmark the practical usability of models trained on this dataset.
<img src="doc/intro-chat.png" alt="Example chat motivating time series question answering: Q: 'What is the person doing first?', A: 'They are waving.', Q: 'How many times are they jumping after that?', A: '...'" width="30%"/>
At present, there is no official leaderboard for this dataset.
## Dataset Generation Overview

For details, please refer to [the paper](https://arxiv.org/abs/2511.05124).
## Task and Format
The primary task for the QuAnTS dataset is Time Series Question Answering.
Given a time series of human skeleton trajectories and a question in natural language, the goal is to generate a correct answer.
Answers are provided in one of the following formats: binary (Yes/No), multiple-choice (A/B/C), or open (free text).
Additionally, to provide more training data for free-text answers, we provide entirely textual answers for all binary and multiple-choice questions.
The ground-truth action sequences and scene descriptions *may not* be used to answer the questions; we provide them for debugging purposes only.
The text in the dataset is in English.
We provide fixed splits into training, validation, and test portions, where only the latter may be used to compare performance across different approaches.
You are free to mix the training and validation splits as needed.
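As an illustration of format-aware scoring, here is a minimal sketch; the `answer_format` values and the normalization rules are illustrative assumptions, not the dataset's actual schema or official metric:
```python
def is_correct(prediction: str, reference: str, answer_format: str) -> bool:
    """Format-aware exact matching; a weak baseline, not the official metric.

    `answer_format` values ("binary", "multiple_choice", "open") are
    illustrative assumptions, not the dataset's actual schema.
    """
    pred = prediction.strip().lower()
    ref = reference.strip().lower()
    if answer_format == "binary":
        # Normalize common variants such as "Yes." vs "yes".
        return pred.rstrip(".") == ref.rstrip(".")
    if answer_format == "multiple_choice":
        # Compare only the leading option letter (e.g. "B) jumping" -> "b").
        return pred[:1] == ref[:1]
    # Open answers: exact match is only a lower bound; free text usually
    # needs a softer metric (e.g. token overlap or an LLM judge).
    return pred == ref


print(is_correct("Yes.", "yes", "binary"))               # True
print(is_correct("B) jumping", "B", "multiple_choice"))  # True
```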
## Licensing, Citation, and Acknowledgments
The QuAnTS dataset is licensed under the [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license.
If you use the QuAnTS dataset in your research, please cite [the paper](https://arxiv.org/abs/2511.05124):
```
@misc{divo2025quantsquestionansweringtime,
title={QuAnTS: Question Answering on Time Series},
author={Felix Divo and Maurice Kraus and Anh Q. Nguyen and Hao Xue and Imran Razzak and Flora D. Salim and Kristian Kersting and Devendra Singh Dhami},
year={2025},
eprint={2511.05124},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2511.05124},
}
```
The dataset was curated by a team of researchers from various institutions:
* Felix Divo, Maurice Kraus, and Kristian Kersting (hessian.AI, DFKI, and the Centre for Cognitive Science) from Technische Universität Darmstadt.
* Anh Q. Nguyen, Hao Xue, and Flora D. Salim from UNSW Sydney.
* Imran Razzak from Mohamed bin Zayed University of Artificial Intelligence.
* Devendra Singh Dhami from Eindhoven University of Technology.
|
| 60 | 1 | [
"task_categories:question-answering",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2511.05124",
"doi:10.57967/hf/6663",
"region:us"
] | 2024-04-09T20:58:03+00:00 | 2025-11-12T17:10:32+00:00 | 0 |
KozMi/pal_fullflow_golden_lora_training |
# PAL FullFlow Golden - LoRA Training Dataset
Training dataset for PAL FullFlow Golden character LoRA used with WAN 2.2.
## Dataset Information
- **Character**: PAL FullFlow Golden
- **Trigger Word**: `chr_pal_fullflow_golden`
- **ZIP Size**: 7.0 MB
- **File**: `training_dataset.zip`
## Character Attributes
- **Build**: average
- **Ethnicity**: Latina
- **Facial Features**: oval face shape, defined cheekbones, almond-shaped eyes, arched eyebrows, full lips
- **Hair**: dark brown, long, straight
- **Distinctive Features**: long eyelashes, heart-shaped pendant necklace
## Contents
This ZIP file contains:
- Training images (1024x1024, cropped and processed)
- Caption files (one .txt file per image)
## Usage
Download the ZIP file and use it for LoRA training with WaveSpeed AI or compatible trainers.
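A minimal download-and-extract sketch using `huggingface_hub` (the target directory name is arbitrary):
```python
import zipfile

from huggingface_hub import hf_hub_download

# Fetch the training archive from this dataset repository.
path = hf_hub_download(
    repo_id="KozMi/pal_fullflow_golden_lora_training",
    filename="training_dataset.zip",
    repo_type="dataset",
)

# Extract the images and per-image caption .txt files for the LoRA trainer.
with zipfile.ZipFile(path) as archive:
    archive.extractall("pal_fullflow_golden_dataset")
```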
---
*Generated by Once Content Automation*
|
| 0 | 0 | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"license:other",
"region:us",
"lora",
"training",
"wan-2.2"
] | 2025-11-12T17:02:07+00:00 | 2025-11-12T17:02:14+00:00 | 0 |
asterism45/bi_openarm_collect_tools_2 |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "bi_openarm",
"total_episodes": 10,
"total_frames": 17717,
"total_tasks": 1,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 30,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"action": {
"dtype": "float32",
"names": [
"left_shoulder_pan.pos",
"left_shoulder_lift.pos",
"left_elbow.pos",
"left_wrist_pitch.pos",
"left_wrist_roll.pos",
"left_wrist_yaw.pos",
"left_tool.pos",
"right_shoulder_pan.pos",
"right_shoulder_lift.pos",
"right_elbow.pos",
"right_wrist_pitch.pos",
"right_wrist_roll.pos",
"right_wrist_yaw.pos",
"right_tool.pos",
"right_gripper.pos"
],
"shape": [
15
]
},
"observation.state": {
"dtype": "float32",
"names": [
"left_shoulder_pan.pos",
"left_shoulder_pan.vel",
"left_shoulder_lift.pos",
"left_shoulder_lift.vel",
"left_elbow.pos",
"left_elbow.vel",
"left_wrist_pitch.pos",
"left_wrist_pitch.vel",
"left_wrist_roll.pos",
"left_wrist_roll.vel",
"left_wrist_yaw.pos",
"left_wrist_yaw.vel",
"left_tool.pos",
"left_tool.vel",
"right_shoulder_pan.pos",
"right_shoulder_pan.vel",
"right_shoulder_lift.pos",
"right_shoulder_lift.vel",
"right_elbow.pos",
"right_elbow.vel",
"right_wrist_pitch.pos",
"right_wrist_pitch.vel",
"right_wrist_roll.pos",
"right_wrist_roll.vel",
"right_wrist_yaw.pos",
"right_wrist_yaw.vel",
"right_tool.pos",
"right_tool.vel",
"right_gripper.pos",
"right_gripper.vel"
],
"shape": [
30
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
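A minimal loading sketch using LeRobot's `LeRobotDataset`; the import path follows recent lerobot releases and may differ between versions:
```python
# The import path may vary across lerobot versions.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("asterism45/bi_openarm_collect_tools_2")

frame = dataset[0]
print(frame["action"].shape)             # expected: torch.Size([15])
print(frame["observation.state"].shape)  # expected: torch.Size([30])
```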
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
| 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] | 2025-11-12T16:56:47+00:00 | 2025-11-12T16:59:40+00:00 | 0 |
volcanos/3TF |
## Dataset Details
### Dataset Description
This is the dataset for the paper [Efficient Reasoning via Thought-Training and Thought-Free Inference](https://arxiv.org/abs/2511.03408).
## Citation
**BibTeX:**
```
@article{wu2025efficient,
title={Efficient Reasoning via Thought-Training and Thought-Free Inference},
author={Wu, Canhui and Cao, Qiong and Xue, Chao and Xi, Wei and He, Xiaodong},
journal={arXiv preprint arXiv:2511.03408},
year={2025}
}
``` |
| 0 | 0 | [
"language:en",
"license:mit",
"arxiv:2511.03408",
"region:us"
] | 2025-11-12T16:58:21+00:00 | 2025-11-12T17:00:20+00:00 | 0 |
aigrant/taiwan-ly-law-research |
# Taiwan Legislative Yuan Law Research Data
## Overview
These law research documents are issued irregularly by the Taiwan Legislative Yuan.
Their purpose is to provide a better understanding of social issues from a legal perspective.
The documents are rich in technical terms and can serve as training data.
For a comprehensive document list, see this [link](https://www.ly.gov.tw/Pages/List.aspx?nodeid=6590) provided by the Legislative Yuan.
Some document download links from the 9th and 10th terms are currently missing due to a minor crawler issue.
We will fill in the missing data as soon as possible.
## Data Fields
| Field name | Description |
|----------------|------------------------------------------------------------------------------------------------------------------------------------|
| research_no | ID of the research document |
| title | title of the document |
| related_laws | Related names of laws in the document. Separated by `;` |
| authors | Authors of document. Separated by `;` |
| published_date | Published date of the document in form `YYYY-mm-dd` |
| content | Full text content of the document. One may also find the original content in `.html` format at `html/{research_no}.html` |
| doc_url | The download link hosted on ly.gov.tw |
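A minimal sketch for splitting the `;`-separated fields after loading, assuming a single `train` split:
```python
from datasets import load_dataset

ds = load_dataset("aigrant/taiwan-ly-law-research", split="train")
df = ds.to_pandas()

# Multi-valued fields use ";" as a separator.
df["related_laws"] = df["related_laws"].str.split(";")
df["authors"] = df["authors"].str.split(";")

print(df[["research_no", "title", "related_laws"]].head())
```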
## Sponsorship
The work is sponsored by "【g0v 零時小學校】繁體中文AI 開源實踐計畫"
## Contact
If you have any issue on the dataset. Please leave a discussion on it or contact us via:
報導者(The Reporter) data@twreporter.org
歐噴有限公司(OpenFun Ltd.) contact@openfun.tw |
| 176 | 6 | [
"language:zh",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-07-22T21:29:38+00:00 | 2025-11-12T17:00:10+00:00 | 0 |
theonegareth/antam_historical_gold_prices |
# Unofficial Antam Gold Price History (IDR)
This repository provides an unofficial historical record of Antam gold selling prices in Indonesian Rupiah (IDR), compiled from publicly accessible information on the official Antam Logam Mulia website.
It is intended for research, analysis, and educational use cases such as time series analysis, forecasting experiments, and financial data tutorials.
## Dataset Overview
Main file:
- `antam_gold_prices.csv`
- `date`:
- Timestamp of the quoted price in ISO format (e.g. `YYYY-MM-DD HH:MM:SS`).
- Derived from the original `Date` field in the source.
- `price_idr_per_gram`:
- Antam gold selling price in IDR per gram.
- Derived from the original `Gold Price` field.
Source fields (for context):
- Original data (prior to cleaning) included:
- `Time (ms)`: Unix timestamp in milliseconds from the chart API.
- `Gold Price`: Selling price in IDR.
- `Date`: Datetime string.
- In this dataset, we keep a clean, standardized version focusing on:
- `date`
- `price_idr_per_gram`
If you want to expose both raw and cleaned formats, you can add a separate raw file (e.g. `raw_antam_gold_prices.csv`) and describe it here similarly.
## Example Usage
You can use this dataset directly as a CSV or via the Hugging Face Datasets library.
Replace `YOUR_USERNAME/antam-gold-price-history-idr` with your actual dataset repo path.
Using `datasets`:
```python
from datasets import load_dataset
ds = load_dataset("YOUR_USERNAME/antam-gold-price-history-idr")
df = ds["train"].to_pandas()
print(df.head())
```
Using `pandas` directly with the CSV:
```python
import pandas as pd
df = pd.read_csv("antam_gold_prices.csv", parse_dates=["date"])
print(df.head())
```
## Included Notebook
- `example_notebook.ipynb`:
- Demonstrates:
- Loading and cleaning the data.
- Exploratory data analysis (EDA).
- Creating derived features:
- `daily_return_pct`
- `ma_7` (7-day moving average)
- `ma_30` (30-day moving average)
- `vol_7d` (7-day rolling volatility of daily returns)
- A simple baseline next-day price model (for demonstration only).
- Shows how this dataset can be integrated into typical data science and ML workflows.
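For reference, a condensed sketch of the derived features listed above, using the file and column names documented in this card:
```python
import pandas as pd

df = pd.read_csv("antam_gold_prices.csv", parse_dates=["date"]).sort_values("date")

# Derived features mirroring the notebook.
df["daily_return_pct"] = df["price_idr_per_gram"].pct_change() * 100
df["ma_7"] = df["price_idr_per_gram"].rolling(7).mean()
df["ma_30"] = df["price_idr_per_gram"].rolling(30).mean()
df["vol_7d"] = df["daily_return_pct"].rolling(7).std()

print(df.tail())
```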
## Potential Use Cases
- Time series analysis:
- Trend and regime changes in local gold prices.
- Volatility and drawdown analysis.
- Forecasting experiments:
- ARIMA / SARIMA
- Prophet
- Gradient boosting / tree-based models
- Neural networks (LSTM/Transformer) for time series.
- Educational materials:
- Feature engineering with financial time series.
- Time-based train-test splits.
- Model evaluation and overfitting discussions.
- Benchmarking:
- Compare Antam local gold prices against:
- International gold prices,
- IDR exchange rates,
- Inflation or macroeconomic indicators.
## Data Preparation Notes
Key cleaning/transformation steps applied to build `antam_gold_prices.csv`:
- Parsed the `Date` column into a proper datetime format.
- Standardized column names to:
- `date`
- `price_idr_per_gram`
- Ensured `price_idr_per_gram` is numeric (IDR per gram).
- Sorted records chronologically.
- Removed rows with invalid or missing dates/prices.
If you modify or extend the dataset (e.g. adding more columns or derived features), document those changes here so users understand the schema.
## Attribution and Source
- Underlying price information originates from publicly accessible pages on:
- `https://www.logammulia.com`
- Specifically: the "Grafik Emas Harian" (daily gold chart) and related public price views.
- This repository contains a cleaned, tabular representation created for analytical convenience.
Please always verify current prices directly from the official website.
## Important Notes and Disclaimer
- This dataset is:
- Unofficial.
- For research, analysis, and educational purposes only.
- This project is:
- Not affiliated with,
- Not sponsored by, and
- Not endorsed by
- PT Aneka Tambang Tbk / Antam Logam Mulia or any related entity.
- All trademarks, logos, and brand names are the property of their respective owners.
- Users are responsible for ensuring their use of this dataset complies with:
- The terms and conditions of the original data source,
- Applicable laws and regulations,
- Their own organizational policies.
- For authoritative, complete, and up-to-date information on Antam gold prices, always refer directly to:
- `https://www.logammulia.com`
If you are a rights holder and have concerns or requests (e.g. corrections, restrictions, takedown), please open an issue on the repository or contact the maintainer, and we will respond promptly. |
| 3 | 0 | [
"license:mit",
"size_categories:1K<n<10K",
"region:us",
"finance"
] | 2025-11-12T03:54:22+00:00 | 2025-11-12T16:58:58+00:00 | 0 |
OpenDataArena/FineReason | # FineReason: A Comprehensive Multimodal Dataset for Visual Reasoning
FineReason is a multimodal reasoning dataset designed to enhance large multimodal models (LMMs) in visual reasoning, covering **STEM (Science, Technology, Engineering, and Mathematics), visual puzzles, games, complex diagram reasoning**.
Each example includes a reasoning-style answer distilled from **Qwen3-VL-235B-a22B-thinking**, promoting long-chain, interpretable multimodal reasoning.
---
## 🧠 Motivation
Reasoning over structured or non-natural images requires more than visual perception and OCR capabilities. It demands **logical inference, symbolic understanding, and step-by-step analytical thinking**.
However:
1. **Data imbalance**: In existing composite open-source multimodal datasets (e.g., FineVision, LLaVA-OneVision-1.5-data), reasoning samples are limited and underrepresented due to the intrinsic difficulty of acquiring high-quality data.
2. **Constraints on reasoning quality**: Existing open-source multimodal datasets are generally small, scattered, and lack a consistent reasoning style with long-form, interpretable reasoning chains, which hinders research on data-centric approaches for multimodal reasoning.
FineReason aims to address this gap by curating and distilling high-quality reasoning datasets with a consistent reasoning style, thereby providing a robust foundation for **data-centric** multimodal training and evaluation.
---
## 📊 Dataset Composition (Continuously Expanding...)
| Sub-dataset | Count |
| -------------------------------------- | ------- |
| BMMR | 85,275 |
| Euclid30K | 27,111 |
| ai2d_merged | 2,446 |
| geo170k (qa) | 12,101 |
| geometry3k (mathv360k) | 9,724 |
| scienceqa | 6,146 |
| tqa | 12,565 |
| visualwebinstruct (filtered) | 261,436 |
| MMR1 |1,610,242|
| VisualSphinx | 3,781 |
| mmopenr1-8k | 7,428 |
---
## 🧩 Data Structure
Each entry contains:
```json
{
"id": "unique_identifier",
"question": "textual question",
"image": "PIL Image",
"qwen3vl_235b_thinking_response": "reasoning-style answer distilled from Qwen3-VL-235B-a22B-thinking"
}
```
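A minimal streaming-load sketch; the split name `train` is an assumption:
```python
from datasets import load_dataset

# Stream to avoid downloading all shards at once.
ds = load_dataset("OpenDataArena/FineReason", split="train", streaming=True)

sample = next(iter(ds))
print(sample["question"])
sample["image"].save("example.png")  # the image field is a PIL Image
```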
---
## ⚙️ Data Generation Process
We unify all sub-datasets under a **common reasoning style** by **distilling long-chain answers** from ***Qwen3-VL-235B-a22B-thinking***.
The model is prompted to produce structured, interpretable, and step-by-step reasoning grounded in the provided images and questions.
### Example Reasoning Pattern
```text
<think>
[Detailed reasoning process]
- Analyze the problem and extract key information
- Identify relevant formulas/principles
- Work through step-by-step calculations
- Consider multiple approaches if needed
- Resolve any contradictions
- Converge toward the solution
- Verification
</think>
<answer>
[Final answer here]
</answer>
```
This ensures:
* Consistent reasoning traces across datasets
* Visually grounded logical steps
* Improved interpretability and compositional reasoning
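A minimal sketch for separating reasoning from the final answer, assuming responses follow the `<think>`/`<answer>` pattern above:
```python
import re


def split_reasoning(response: str) -> tuple[str, str]:
    """Split a distilled response into its reasoning and final answer,
    assuming the <think>/<answer> pattern shown above."""
    think = re.search(r"<think>(.*?)</think>", response, re.DOTALL)
    answer = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
    return (
        think.group(1).strip() if think else "",
        answer.group(1).strip() if answer else response.strip(),
    )


reasoning, answer = split_reasoning("<think>2 + 2 = 4</think>\n<answer>4</answer>")
print(answer)  # 4
```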
---
## 📈 Future Work
We are continuously:
* Expanding coverage across math, science, logical, and spatial reasoning
* Re-distilling reasoning traces with improved thinking models
* Filtering and improving response quality
* Performing domain-specific reasoning data augmentation
---
# 🌐 About OpenDataArena
[OpenDataArena](https://opendataarena.github.io/) is an open research platform dedicated to **discovering, evaluating, and advancing high-quality datasets for AI post-training**. It provides a transparent, data-centric ecosystem to support reproducible dataset evaluation and sharing.
**Key Features:**
* 🏆 **Dataset Leaderboard** — helps researchers identify **the most valuable and high-quality datasets across different domains**.
* 📊 **Detailed Evaluation Scores** — provides **comprehensive metrics** to assess data quality, complexity, difficulty, etc.
* 🧰 **Data Processing Toolkit** — [OpenDataArena-Tool](https://github.com/OpenDataArena/OpenDataArena-Tool)
offers an open-source pipeline for dataset curation and scoring.
If you find our work helpful, please consider **⭐ starring and subscribing** to support our research.
# 📚 Citation
```bibtex
@dataset{opendataarena_finereason_2025,
author = {OpenDataArena},
title = {OpenDataArena-finereason},
year = {2025},
  url = {https://huggingface.co/datasets/OpenDataArena/FineReason}
}
``` |
| 667 | 16 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-10-24T16:42:22+00:00 | 2025-11-12T16:54:39+00:00 | 9 |
volcanos/StepPruner |
## Dataset Details
### Dataset Description
This is the dataset for the paper [Beyond Token Length: Step Pruner for Efficient and Accurate Reasoning in Large Language Models](https://arxiv.org/abs/2510.03805).
It is a subset of DeepScaleR.
## Citation
**BibTeX:**
```
@article{wu2025beyond,
title={Beyond Token Length: Step Pruner for Efficient and Accurate Reasoning in Large Language Models},
author={Wu, Canhui and Cao, Qiong and Li, Chang and Wang, Zhenfang and Xue, Chao and Fan, Yuwei and Xi, Wei and He, Xiaodong},
journal={arXiv preprint arXiv:2510.03805},
year={2025}
}
``` |
| 0 | 0 | [
"language:en",
"license:mit",
"arxiv:2510.03805",
"region:us"
] | 2025-11-12T16:54:08+00:00 | 2025-11-12T16:57:09+00:00 | 0 |
LSDB/mmu-sdss-sdss |
---
description: 'HATS version of MultimodalUniverse/sdss: Spectra dataset based on SDSS-IV.
'
homepage: https://www.sdss.org/
version: 1.0.0
citation: "% % ACKNOWLEDGEMENTS\n% % From: https://www.sdss4.org/collaboration/citing-sdss/\n\
% \n% Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred\
\ P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the\
\ Participating Institutions. SDSS acknowledges support and resources from the Center\
\ for High-Performance Computing at the University of Utah. The SDSS web site is\
\ www.sdss4.org.\n% \n% SDSS is managed by the Astrophysical Research Consortium\
\ for the Participating Institutions of the SDSS Collaboration including the Brazilian\
\ Participation Group, the Carnegie Institution for Science, Carnegie Mellon University,\
\ Center for Astrophysics | Harvard & Smithsonian (CfA), the Chilean Participation\
\ Group, the French Participation Group, Instituto de Astrofísica de Canarias, The\
\ Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the\
\ Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence\
\ Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP),\
\ Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für\
\ Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik\
\ (MPE), National Astronomical Observatories of China, New Mexico State University,\
\ New York University, University of Notre Dame, Observatório Nacional / MCTI, The\
\ Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory,\
\ United Kingdom Participation Group, Universidad Nacional Autónoma de México, University\
\ of Arizona, University of Colorado Boulder, University of Oxford, University of\
\ Portsmouth, University of Utah, University of Virginia, University of Washington,\
\ University of Wisconsin, Vanderbilt University, and Yale University.\n% \n% In\
\ addition, the appropriate SDSS acknowledgment(s) for the survey and data releases\
\ that were used should be included in the Acknowledgments section: \n% \n% Funding\
\ for the Sloan Digital Sky Survey IV has been provided by the \n% Alfred P. Sloan\
\ Foundation, the U.S. Department of Energy Office of \n% Science, and the Participating\
\ Institutions. \n% \n% SDSS-IV acknowledges support and resources from the Center\
\ for High \n% Performance Computing at the University of Utah. The SDSS \n% website\
\ is www.sdss4.org.\n% \n% SDSS-IV is managed by the Astrophysical Research Consortium\
\ \n% for the Participating Institutions of the SDSS Collaboration including \n\
% the Brazilian Participation Group, the Carnegie Institution for Science, \n% Carnegie\
\ Mellon University, Center for Astrophysics | Harvard \\& \n% Smithsonian, the\
\ Chilean Participation Group, the French Participation Group, \n% Instituto de\
\ Astrof\\'isica de Canarias, The Johns Hopkins \n% University, Kavli Institute\
\ for the Physics and Mathematics of the \n% Universe (IPMU) / University of Tokyo,\
\ the Korean Participation Group, \n% Lawrence Berkeley National Laboratory, Leibniz\
\ Institut f\\\"ur Astrophysik \n% Potsdam (AIP), Max-Planck-Institut f\\\"ur Astronomie\
\ (MPIA Heidelberg), \n% Max-Planck-Institut f\\\"ur Astrophysik (MPA Garching),\
\ \n% Max-Planck-Institut f\\\"ur Extraterrestrische Physik (MPE), \n% National\
\ Astronomical Observatories of China, New Mexico State University, \n% New York\
\ University, University of Notre Dame, Observat\\'ario \n% Nacional / MCTI, The\
\ Ohio State University, Pennsylvania State \n% University, Shanghai Astronomical\
\ Observatory, United \n% Kingdom Participation Group, Universidad Nacional Aut\\\
'onoma \n% de M\\'exico, University of Arizona, University of Colorado Boulder,\
\ \n% University of Oxford, University of Portsmouth, University of Utah, \n% University\
\ of Virginia, University of Washington, University of \n% Wisconsin, Vanderbilt\
\ University, and Yale University.\n% \n% CITATION\n@ARTICLE{2022ApJS..259...35A,\n\
\ author = {{Abdurro'uf} and {Accetta}, Katherine and {Aerts}, Conny and {Silva\
\ Aguirre}, V{\\'\\i}ctor and {Ahumada}, Romina and {Ajgaonkar}, Nikhil and {Filiz\
\ Ak}, N. and {Alam}, Shadab and {Allende Prieto}, Carlos and {Almeida}, Andr{\\\
'e}s and {Anders}, Friedrich and {Anderson}, Scott F. and {Andrews}, Brett H. and\
\ {Anguiano}, Borja and {Aquino-Ort{\\'\\i}z}, Erik and {Arag{\\'o}n-Salamanca},\
\ Alfonso and {Argudo-Fern{\\'a}ndez}, Maria and {Ata}, Metin and {Aubert}, Marie\
\ and {Avila-Reese}, Vladimir and {Badenes}, Carles and {Barb{\\'a}}, Rodolfo H.\
\ and {Barger}, Kat and {Barrera-Ballesteros}, Jorge K. and {Beaton}, Rachael L.\
\ and {Beers}, Timothy C. and {Belfiore}, Francesco and {Bender}, Chad F. and {Bernardi},\
\ Mariangela and {Bershady}, Matthew A. and {Beutler}, Florian and {Bidin}, Christian\
\ Moni and {Bird}, Jonathan C. and {Bizyaev}, Dmitry and {Blanc}, Guillermo A. and\
\ {Blanton}, Michael R. and {Boardman}, Nicholas Fraser and {Bolton}, Adam S. and\
\ {Boquien}, M{\\'e}d{\\'e}ric and {Borissova}, Jura and {Bovy}, Jo and {Brandt},\
\ W.~N. and {Brown}, Jordan and {Brownstein}, Joel R. and {Brusa}, Marcella and\
\ {Buchner}, Johannes and {Bundy}, Kevin and {Burchett}, Joseph N. and {Bureau},\
\ Martin and {Burgasser}, Adam and {Cabang}, Tuesday K. and {Campbell}, Stephanie\
\ and {Cappellari}, Michele and {Carlberg}, Joleen K. and {Wanderley}, F{\\'a}bio\
\ Carneiro and {Carrera}, Ricardo and {Cash}, Jennifer and {Chen}, Yan-Ping and\
\ {Chen}, Wei-Huai and {Cherinka}, Brian and {Chiappini}, Cristina and {Choi}, Peter\
\ Doohyun and {Chojnowski}, S. Drew and {Chung}, Haeun and {Clerc}, Nicolas and\
\ {Cohen}, Roger E. and {Comerford}, Julia M. and {Comparat}, Johan and {da Costa},\
\ Luiz and {Covey}, Kevin and {Crane}, Jeffrey D. and {Cruz-Gonzalez}, Irene and\
\ {Culhane}, Connor and {Cunha}, Katia and {Dai}, Y. Sophia and {Damke}, Guillermo\
\ and {Darling}, Jeremy and {Davidson}, James W., Jr. and {Davies}, Roger and {Dawson},\
\ Kyle and {De Lee}, Nathan and {Diamond-Stanic}, Aleksandar M. and {Cano-D{\\'\\\
i}az}, Mariana and {S{\\'a}nchez}, Helena Dom{\\'\\i}nguez and {Donor}, John and\
\ {Duckworth}, Chris and {Dwelly}, Tom and {Eisenstein}, Daniel J. and {Elsworth},\
\ Yvonne P. and {Emsellem}, Eric and {Eracleous}, Mike and {Escoffier}, Stephanie\
\ and {Fan}, Xiaohui and {Farr}, Emily and {Feng}, Shuai and {Fern{\\'a}ndez-Trincado},\
\ Jos{\\'e} G. and {Feuillet}, Diane and {Filipp}, Andreas and {Fillingham}, Sean\
\ P. and {Frinchaboy}, Peter M. and {Fromenteau}, Sebastien and {Galbany}, Llu{\\\
'\\i}s and {Garc{\\'\\i}a}, Rafael A. and {Garc{\\'\\i}a-Hern{\\'a}ndez}, D.~A.\
\ and {Ge}, Junqiang and {Geisler}, Doug and {Gelfand}, Joseph and {G{\\'e}ron},\
\ Tobias and {Gibson}, Benjamin J. and {Goddy}, Julian and {Godoy-Rivera}, Diego\
\ and {Grabowski}, Kathleen and {Green}, Paul J. and {Greener}, Michael and {Grier},\
\ Catherine J. and {Griffith}, Emily and {Guo}, Hong and {Guy}, Julien and {Hadjara},\
\ Massinissa and {Harding}, Paul and {Hasselquist}, Sten and {Hayes}, Christian\
\ R. and {Hearty}, Fred and {Hern{\\'a}ndez}, Jes{\\'u}s and {Hill}, Lewis and {Hogg},\
\ David W. and {Holtzman}, Jon A. and {Horta}, Danny and {Hsieh}, Bau-Ching and\
\ {Hsu}, Chin-Hao and {Hsu}, Yun-Hsin and {Huber}, Daniel and {Huertas-Company},\
\ Marc and {Hutchinson}, Brian and {Hwang}, Ho Seong and {Ibarra-Medel}, H{\\'e}ctor\
\ J. and {Chitham}, Jacob Ider and {Ilha}, Gabriele S. and {Imig}, Julie and {Jaekle},\
\ Will and {Jayasinghe}, Tharindu and {Ji}, Xihan and {Johnson}, Jennifer A. and\
\ {Jones}, Amy and {J{\\\"o}nsson}, Henrik and {Katkov}, Ivan and {Khalatyan}, Arman,\
\ Dr. and {Kinemuchi}, Karen and {Kisku}, Shobhit and {Knapen}, Johan H. and {Kneib},\
\ Jean-Paul and {Kollmeier}, Juna A. and {Kong}, Miranda and {Kounkel}, Marina and\
\ {Kreckel}, Kathryn and {Krishnarao}, Dhanesh and {Lacerna}, Ivan and {Lane}, Richard\
\ R. and {Langgin}, Rachel and {Lavender}, Ramon and {Law}, David R. and {Lazarz},\
\ Daniel and {Leung}, Henry W. and {Leung}, Ho-Hin and {Lewis}, Hannah M. and {Li},\
\ Cheng and {Li}, Ran and {Lian}, Jianhui and {Liang}, Fu-Heng and {Lin}, Lihwai\
\ and {Lin}, Yen-Ting and {Lin}, Sicheng and {Lintott}, Chris and {Long}, Dan and\
\ {Longa-Pe{\\~n}a}, Pen{\\'e}lope and {L{\\'o}pez-Cob{\\'a}}, Carlos and {Lu},\
\ Shengdong and {Lundgren}, Britt F. and {Luo}, Yuanze and {Mackereth}, J. Ted and\
\ {de la Macorra}, Axel and {Mahadevan}, Suvrath and {Majewski}, Steven R. and {Manchado},\
\ Arturo and {Mandeville}, Travis and {Maraston}, Claudia and {Margalef-Bentabol},\
\ Berta and {Masseron}, Thomas and {Masters}, Karen L. and {Mathur}, Savita and\
\ {McDermid}, Richard M. and {Mckay}, Myles and {Merloni}, Andrea and {Merrifield},\
\ Michael and {Meszaros}, Szabolcs and {Miglio}, Andrea and {Di Mille}, Francesco\
\ and {Minniti}, Dante and {Minsley}, Rebecca and {Monachesi}, Antonela and {Moon},\
\ Jeongin and {Mosser}, Benoit and {Mulchaey}, John and {Muna}, Demitri and {Mu{\\\
~n}oz}, Ricardo R. and {Myers}, Adam D. and {Myers}, Natalie and {Nadathur}, Seshadri\
\ and {Nair}, Preethi and {Nandra}, Kirpal and {Neumann}, Justus and {Newman}, Jeffrey\
\ A. and {Nidever}, David L. and {Nikakhtar}, Farnik and {Nitschelm}, Christian\
\ and {O'Connell}, Julia E. and {Garma-Oehmichen}, Luis and {Luan Souza de Oliveira},\
\ Gabriel and {Olney}, Richard and {Oravetz}, Daniel and {Ortigoza-Urdaneta}, Mario\
\ and {Osorio}, Yeisson and {Otter}, Justin and {Pace}, Zachary J. and {Padilla},\
\ Nelson and {Pan}, Kaike and {Pan}, Hsi-An and {Parikh}, Taniya and {Parker}, James\
\ and {Peirani}, Sebastien and {Pe{\\~n}a Ram{\\'\\i}rez}, Karla and {Penny}, Samantha\
\ and {Percival}, Will J. and {Perez-Fournon}, Ismael and {Pinsonneault}, Marc and\
\ {Poidevin}, Fr{\\'e}d{\\'e}rick and {Poovelil}, Vijith Jacob and {Price-Whelan},\
\ Adrian M. and {B{\\'a}rbara de Andrade Queiroz}, Anna and {Raddick}, M. Jordan\
\ and {Ray}, Amy and {Rembold}, Sandro Barboza and {Riddle}, Nicole and {Riffel},\
\ Rogemar A. and {Riffel}, Rog{\\'e}rio and {Rix}, Hans-Walter and {Robin}, Annie\
\ C. and {Rodr{\\'\\i}guez-Puebla}, Aldo and {Roman-Lopes}, Alexandre and {Rom{\\\
'a}n-Z{\\'u}{\\~n}iga}, Carlos and {Rose}, Benjamin and {Ross}, Ashley J. and {Rossi},\
\ Graziano and {Rubin}, Kate H.~R. and {Salvato}, Mara and {S{\\'a}nchez}, Seb{\\\
'a}stian F. and {S{\\'a}nchez-Gallego}, Jos{\\'e} R. and {Sanderson}, Robyn and\
\ {Santana Rojas}, Felipe Antonio and {Sarceno}, Edgar and {Sarmiento}, Regina and\
\ {Sayres}, Conor and {Sazonova}, Elizaveta and {Schaefer}, Adam L. and {Schiavon},\
\ Ricardo and {Schlegel}, David J. and {Schneider}, Donald P. and {Schultheis},\
\ Mathias and {Schwope}, Axel and {Serenelli}, Aldo and {Serna}, Javier and {Shao},\
\ Zhengyi and {Shapiro}, Griffin and {Sharma}, Anubhav and {Shen}, Yue and {Shetrone},\
\ Matthew and {Shu}, Yiping and {Simon}, Joshua D. and {Skrutskie}, M.~F. and {Smethurst},\
\ Rebecca and {Smith}, Verne and {Sobeck}, Jennifer and {Spoo}, Taylor and {Sprague},\
\ Dani and {Stark}, David V. and {Stassun}, Keivan G. and {Steinmetz}, Matthias\
\ and {Stello}, Dennis and {Stone-Martinez}, Alexander and {Storchi-Bergmann}, Thaisa\
\ and {Stringfellow}, Guy S. and {Stutz}, Amelia and {Su}, Yung-Chau and {Taghizadeh-Popp},\
\ Manuchehr and {Talbot}, Michael S. and {Tayar}, Jamie and {Telles}, Eduardo and\
\ {Teske}, Johanna and {Thakar}, Ani and {Theissen}, Christopher and {Tkachenko},\
\ Andrew and {Thomas}, Daniel and {Tojeiro}, Rita and {Hernandez Toledo}, Hector\
\ and {Troup}, Nicholas W. and {Trump}, Jonathan R. and {Trussler}, James and {Turner},\
\ Jacqueline and {Tuttle}, Sarah and {Unda-Sanzana}, Eduardo and {V{\\'a}zquez-Mata},\
\ Jos{\\'e} Antonio and {Valentini}, Marica and {Valenzuela}, Octavio and {Vargas-Gonz{\\\
'a}lez}, Jaime and {Vargas-Maga{\\~n}a}, Mariana and {Alfaro}, Pablo Vera and {Villanova},\
\ Sandro and {Vincenzo}, Fiorenzo and {Wake}, David and {Warfield}, Jack T. and\
\ {Washington}, Jessica Diane and {Weaver}, Benjamin Alan and {Weijmans}, Anne-Marie\
\ and {Weinberg}, David H. and {Weiss}, Achim and {Westfall}, Kyle B. and {Wild},\
\ Vivienne and {Wilde}, Matthew C. and {Wilson}, John C. and {Wilson}, Robert F.\
\ and {Wilson}, Mikayla and {Wolf}, Julien and {Wood-Vasey}, W.~M. and {Yan}, Renbin\
\ and {Zamora}, Olga and {Zasowski}, Gail and {Zhang}, Kai and {Zhao}, Cheng and\
\ {Zheng}, Zheng and {Zheng}, Zheng and {Zhu}, Kai},\n title = \"{The Seventeenth\
\ Data Release of the Sloan Digital Sky Surveys: Complete Release of MaNGA, MaStar,\
\ and APOGEE-2 Data}\",\n journal = {\\apjs},\n keywords = {Astronomy data\
\ acquisition, Astronomy databases, Surveys, 1860, 83, 1671, Astrophysics - Astrophysics\
\ of Galaxies, Astrophysics - Instrumentation and Methods for Astrophysics},\n \
\ year = 2022,\n month = apr,\n volume = {259},\n number\
\ = {2},\n eid = {35},\n pages = {35},\n doi = {10.3847/1538-4365/ac4414},\n\
archivePrefix = {arXiv},\n eprint = {2112.02026},\n primaryClass = {astro-ph.GA},\n\
\ adsurl = {https://ui.adsabs.harvard.edu/abs/2022ApJS..259...35A},\n \
\ adsnote = {Provided by the SAO/NASA Astrophysics Data System}\n}\n"
---
# SDSS Dataset
Spectra dataset based on SDSS-IV.
% % ACKNOWLEDGEMENTS
% % From: https://www.sdss4.org/collaboration/citing-sdss/
%
% Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss4.org.
%
% SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard & Smithsonian (CfA), the Chilean Participation Group, the French Participation Group, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatório Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
%
% In addition, the appropriate SDSS acknowledgment(s) for the survey and data releases that were used should be included in the Acknowledgments section:
%
% Funding for the Sloan Digital Sky Survey IV has been provided by the
% Alfred P. Sloan Foundation, the U.S. Department of Energy Office of
% Science, and the Participating Institutions.
%
% SDSS-IV acknowledges support and resources from the Center for High
% Performance Computing at the University of Utah. The SDSS
% website is www.sdss4.org.
%
% SDSS-IV is managed by the Astrophysical Research Consortium
% for the Participating Institutions of the SDSS Collaboration including
% the Brazilian Participation Group, the Carnegie Institution for Science,
% Carnegie Mellon University, Center for Astrophysics | Harvard \&
% Smithsonian, the Chilean Participation Group, the French Participation Group,
% Instituto de Astrof\'isica de Canarias, The Johns Hopkins
% University, Kavli Institute for the Physics and Mathematics of the
% Universe (IPMU) / University of Tokyo, the Korean Participation Group,
% Lawrence Berkeley National Laboratory, Leibniz Institut f\"ur Astrophysik
% Potsdam (AIP), Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg),
% Max-Planck-Institut f\"ur Astrophysik (MPA Garching),
% Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE),
% National Astronomical Observatories of China, New Mexico State University,
% New York University, University of Notre Dame, Observat\'ario
% Nacional / MCTI, The Ohio State University, Pennsylvania State
% University, Shanghai Astronomical Observatory, United
% Kingdom Participation Group, Universidad Nacional Aut\'onoma
% de M\'exico, University of Arizona, University of Colorado Boulder,
% University of Oxford, University of Portsmouth, University of Utah,
% University of Virginia, University of Washington, University of
% Wisconsin, Vanderbilt University, and Yale University.
%
% CITATION
@ARTICLE{2022ApJS..259...35A,
author = {{Abdurro'uf} and {Accetta}, Katherine and {Aerts}, Conny and {Silva Aguirre}, V{\'\i}ctor and {Ahumada}, Romina and {Ajgaonkar}, Nikhil and {Filiz Ak}, N. and {Alam}, Shadab and {Allende Prieto}, Carlos and {Almeida}, Andr{\'e}s and {Anders}, Friedrich and {Anderson}, Scott F. and {Andrews}, Brett H. and {Anguiano}, Borja and {Aquino-Ort{\'\i}z}, Erik and {Arag{\'o}n-Salamanca}, Alfonso and {Argudo-Fern{\'a}ndez}, Maria and {Ata}, Metin and {Aubert}, Marie and {Avila-Reese}, Vladimir and {Badenes}, Carles and {Barb{\'a}}, Rodolfo H. and {Barger}, Kat and {Barrera-Ballesteros}, Jorge K. and {Beaton}, Rachael L. and {Beers}, Timothy C. and {Belfiore}, Francesco and {Bender}, Chad F. and {Bernardi}, Mariangela and {Bershady}, Matthew A. and {Beutler}, Florian and {Bidin}, Christian Moni and {Bird}, Jonathan C. and {Bizyaev}, Dmitry and {Blanc}, Guillermo A. and {Blanton}, Michael R. and {Boardman}, Nicholas Fraser and {Bolton}, Adam S. and {Boquien}, M{\'e}d{\'e}ric and {Borissova}, Jura and {Bovy}, Jo and {Brandt}, W.~N. and {Brown}, Jordan and {Brownstein}, Joel R. and {Brusa}, Marcella and {Buchner}, Johannes and {Bundy}, Kevin and {Burchett}, Joseph N. and {Bureau}, Martin and {Burgasser}, Adam and {Cabang}, Tuesday K. and {Campbell}, Stephanie and {Cappellari}, Michele and {Carlberg}, Joleen K. and {Wanderley}, F{\'a}bio Carneiro and {Carrera}, Ricardo and {Cash}, Jennifer and {Chen}, Yan-Ping and {Chen}, Wei-Huai and {Cherinka}, Brian and {Chiappini}, Cristina and {Choi}, Peter Doohyun and {Chojnowski}, S. Drew and {Chung}, Haeun and {Clerc}, Nicolas and {Cohen}, Roger E. and {Comerford}, Julia M. and {Comparat}, Johan and {da Costa}, Luiz and {Covey}, Kevin and {Crane}, Jeffrey D. and {Cruz-Gonzalez}, Irene and {Culhane}, Connor and {Cunha}, Katia and {Dai}, Y. Sophia and {Damke}, Guillermo and {Darling}, Jeremy and {Davidson}, James W., Jr. and {Davies}, Roger and {Dawson}, Kyle and {De Lee}, Nathan and {Diamond-Stanic}, Aleksandar M. and {Cano-D{\'\i}az}, Mariana and {S{\'a}nchez}, Helena Dom{\'\i}nguez and {Donor}, John and {Duckworth}, Chris and {Dwelly}, Tom and {Eisenstein}, Daniel J. and {Elsworth}, Yvonne P. and {Emsellem}, Eric and {Eracleous}, Mike and {Escoffier}, Stephanie and {Fan}, Xiaohui and {Farr}, Emily and {Feng}, Shuai and {Fern{\'a}ndez-Trincado}, Jos{\'e} G. and {Feuillet}, Diane and {Filipp}, Andreas and {Fillingham}, Sean P. and {Frinchaboy}, Peter M. and {Fromenteau}, Sebastien and {Galbany}, Llu{\'\i}s and {Garc{\'\i}a}, Rafael A. and {Garc{\'\i}a-Hern{\'a}ndez}, D.~A. and {Ge}, Junqiang and {Geisler}, Doug and {Gelfand}, Joseph and {G{\'e}ron}, Tobias and {Gibson}, Benjamin J. and {Goddy}, Julian and {Godoy-Rivera}, Diego and {Grabowski}, Kathleen and {Green}, Paul J. and {Greener}, Michael and {Grier}, Catherine J. and {Griffith}, Emily and {Guo}, Hong and {Guy}, Julien and {Hadjara}, Massinissa and {Harding}, Paul and {Hasselquist}, Sten and {Hayes}, Christian R. and {Hearty}, Fred and {Hern{\'a}ndez}, Jes{\'u}s and {Hill}, Lewis and {Hogg}, David W. and {Holtzman}, Jon A. and {Horta}, Danny and {Hsieh}, Bau-Ching and {Hsu}, Chin-Hao and {Hsu}, Yun-Hsin and {Huber}, Daniel and {Huertas-Company}, Marc and {Hutchinson}, Brian and {Hwang}, Ho Seong and {Ibarra-Medel}, H{\'e}ctor J. and {Chitham}, Jacob Ider and {Ilha}, Gabriele S. and {Imig}, Julie and {Jaekle}, Will and {Jayasinghe}, Tharindu and {Ji}, Xihan and {Johnson}, Jennifer A. and {Jones}, Amy and {J{\"o}nsson}, Henrik and {Katkov}, Ivan and {Khalatyan}, Arman, Dr. 
and {Kinemuchi}, Karen and {Kisku}, Shobhit and {Knapen}, Johan H. and {Kneib}, Jean-Paul and {Kollmeier}, Juna A. and {Kong}, Miranda and {Kounkel}, Marina and {Kreckel}, Kathryn and {Krishnarao}, Dhanesh and {Lacerna}, Ivan and {Lane}, Richard R. and {Langgin}, Rachel and {Lavender}, Ramon and {Law}, David R. and {Lazarz}, Daniel and {Leung}, Henry W. and {Leung}, Ho-Hin and {Lewis}, Hannah M. and {Li}, Cheng and {Li}, Ran and {Lian}, Jianhui and {Liang}, Fu-Heng and {Lin}, Lihwai and {Lin}, Yen-Ting and {Lin}, Sicheng and {Lintott}, Chris and {Long}, Dan and {Longa-Pe{\~n}a}, Pen{\'e}lope and {L{\'o}pez-Cob{\'a}}, Carlos and {Lu}, Shengdong and {Lundgren}, Britt F. and {Luo}, Yuanze and {Mackereth}, J. Ted and {de la Macorra}, Axel and {Mahadevan}, Suvrath and {Majewski}, Steven R. and {Manchado}, Arturo and {Mandeville}, Travis and {Maraston}, Claudia and {Margalef-Bentabol}, Berta and {Masseron}, Thomas and {Masters}, Karen L. and {Mathur}, Savita and {McDermid}, Richard M. and {Mckay}, Myles and {Merloni}, Andrea and {Merrifield}, Michael and {Meszaros}, Szabolcs and {Miglio}, Andrea and {Di Mille}, Francesco and {Minniti}, Dante and {Minsley}, Rebecca and {Monachesi}, Antonela and {Moon}, Jeongin and {Mosser}, Benoit and {Mulchaey}, John and {Muna}, Demitri and {Mu{\~n}oz}, Ricardo R. and {Myers}, Adam D. and {Myers}, Natalie and {Nadathur}, Seshadri and {Nair}, Preethi and {Nandra}, Kirpal and {Neumann}, Justus and {Newman}, Jeffrey A. and {Nidever}, David L. and {Nikakhtar}, Farnik and {Nitschelm}, Christian and {O'Connell}, Julia E. and {Garma-Oehmichen}, Luis and {Luan Souza de Oliveira}, Gabriel and {Olney}, Richard and {Oravetz}, Daniel and {Ortigoza-Urdaneta}, Mario and {Osorio}, Yeisson and {Otter}, Justin and {Pace}, Zachary J. and {Padilla}, Nelson and {Pan}, Kaike and {Pan}, Hsi-An and {Parikh}, Taniya and {Parker}, James and {Peirani}, Sebastien and {Pe{\~n}a Ram{\'\i}rez}, Karla and {Penny}, Samantha and {Percival}, Will J. and {Perez-Fournon}, Ismael and {Pinsonneault}, Marc and {Poidevin}, Fr{\'e}d{\'e}rick and {Poovelil}, Vijith Jacob and {Price-Whelan}, Adrian M. and {B{\'a}rbara de Andrade Queiroz}, Anna and {Raddick}, M. Jordan and {Ray}, Amy and {Rembold}, Sandro Barboza and {Riddle}, Nicole and {Riffel}, Rogemar A. and {Riffel}, Rog{\'e}rio and {Rix}, Hans-Walter and {Robin}, Annie C. and {Rodr{\'\i}guez-Puebla}, Aldo and {Roman-Lopes}, Alexandre and {Rom{\'a}n-Z{\'u}{\~n}iga}, Carlos and {Rose}, Benjamin and {Ross}, Ashley J. and {Rossi}, Graziano and {Rubin}, Kate H.~R. and {Salvato}, Mara and {S{\'a}nchez}, Seb{\'a}stian F. and {S{\'a}nchez-Gallego}, Jos{\'e} R. and {Sanderson}, Robyn and {Santana Rojas}, Felipe Antonio and {Sarceno}, Edgar and {Sarmiento}, Regina and {Sayres}, Conor and {Sazonova}, Elizaveta and {Schaefer}, Adam L. and {Schiavon}, Ricardo and {Schlegel}, David J. and {Schneider}, Donald P. and {Schultheis}, Mathias and {Schwope}, Axel and {Serenelli}, Aldo and {Serna}, Javier and {Shao}, Zhengyi and {Shapiro}, Griffin and {Sharma}, Anubhav and {Shen}, Yue and {Shetrone}, Matthew and {Shu}, Yiping and {Simon}, Joshua D. and {Skrutskie}, M.~F. and {Smethurst}, Rebecca and {Smith}, Verne and {Sobeck}, Jennifer and {Spoo}, Taylor and {Sprague}, Dani and {Stark}, David V. and {Stassun}, Keivan G. and {Steinmetz}, Matthias and {Stello}, Dennis and {Stone-Martinez}, Alexander and {Storchi-Bergmann}, Thaisa and {Stringfellow}, Guy S. and {Stutz}, Amelia and {Su}, Yung-Chau and {Taghizadeh-Popp}, Manuchehr and {Talbot}, Michael S. 
and {Tayar}, Jamie and {Telles}, Eduardo and {Teske}, Johanna and {Thakar}, Ani and {Theissen}, Christopher and {Tkachenko}, Andrew and {Thomas}, Daniel and {Tojeiro}, Rita and {Hernandez Toledo}, Hector and {Troup}, Nicholas W. and {Trump}, Jonathan R. and {Trussler}, James and {Turner}, Jacqueline and {Tuttle}, Sarah and {Unda-Sanzana}, Eduardo and {V{\'a}zquez-Mata}, Jos{\'e} Antonio and {Valentini}, Marica and {Valenzuela}, Octavio and {Vargas-Gonz{\'a}lez}, Jaime and {Vargas-Maga{\~n}a}, Mariana and {Alfaro}, Pablo Vera and {Villanova}, Sandro and {Vincenzo}, Fiorenzo and {Wake}, David and {Warfield}, Jack T. and {Washington}, Jessica Diane and {Weaver}, Benjamin Alan and {Weijmans}, Anne-Marie and {Weinberg}, David H. and {Weiss}, Achim and {Westfall}, Kyle B. and {Wild}, Vivienne and {Wilde}, Matthew C. and {Wilson}, John C. and {Wilson}, Robert F. and {Wilson}, Mikayla and {Wolf}, Julien and {Wood-Vasey}, W.~M. and {Yan}, Renbin and {Zamora}, Olga and {Zasowski}, Gail and {Zhang}, Kai and {Zhao}, Cheng and {Zheng}, Zheng and {Zheng}, Zheng and {Zhu}, Kai},
title = "{The Seventeenth Data Release of the Sloan Digital Sky Surveys: Complete Release of MaNGA, MaStar, and APOGEE-2 Data}",
journal = {\apjs},
keywords = {Astronomy data acquisition, Astronomy databases, Surveys, 1860, 83, 1671, Astrophysics - Astrophysics of Galaxies, Astrophysics - Instrumentation and Methods for Astrophysics},
year = 2022,
month = apr,
volume = {259},
number = {2},
eid = {35},
pages = {35},
doi = {10.3847/1538-4365/ac4414},
archivePrefix = {arXiv},
eprint = {2112.02026},
primaryClass = {astro-ph.GA},
adsurl = {https://ui.adsabs.harvard.edu/abs/2022ApJS..259...35A},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
|
---
description: 'HATS version of MultimodalUniverse/sdss: Spectra dataset based on SDSS-IV.
'
homepage: https://www.sdss.org/
version: 1.0.0
citation: "% % ACKNOWLEDGEMENTS\n% % From: https://www.sdss4.org/collaboration/citing-sdss/\n\
% \n% Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred\
\ P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the\
\ Participating Institutions. SDSS acknowledges support and resources from the Center\
\ for High-Performance Computing at the University of Utah. The SDSS web site is\
\ www.sdss4.org.\n% \n% SDSS is managed by the Astrophysical Research Consortium\
\ for the Participating Institutions of the SDSS Collaboration including the Brazilian\
\ Participation Group, the Carnegie Institution for Science, Carnegie Mellon University,\
\ Center for Astrophysics | Harvard & Smithsonian (CfA), the Chilean Participation\
\ Group, the French Participation Group, Instituto de Astrofísica de Canarias, The\
\ Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the\
\ Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence\
\ Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP),\
\ Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für\
\ Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik\
\ (MPE), National Astronomical Observatories of China, New Mexico State University,\
\ New York University, University of Notre Dame, Observatório Nacional / MCTI, The\
\ Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory,\
\ United Kingdom Participation Group, Universidad Nacional Autónoma de México, University\
\ of Arizona, University of Colorado Boulder, University of Oxford, University of\
\ Portsmouth, University of Utah, University of Virginia, University of Washington,\
\ University of Wisconsin, Vanderbilt University, and Yale University.\n% \n% In\
\ addition, the appropriate SDSS acknowledgment(s) for the survey and data releases\
\ that were used should be included in the Acknowledgments section: \n% \n% Funding\
\ for the Sloan Digital Sky Survey IV has been provided by the \n% Alfred P. Sloan\
\ Foundation, the U.S. Department of Energy Office of \n% Science, and the Participating\
\ Institutions. \n% \n% SDSS-IV acknowledges support and resources from the Center\
\ for High \n% Performance Computing at the University of Utah. The SDSS \n% website\
\ is www.sdss4.org.\n% \n% SDSS-IV is managed by the Astrophysical Research Consortium\
\ \n% for the Participating Institutions of the SDSS Collaboration including \n\
% the Brazilian Participation Group, the Carnegie Institution for Science, \n% Carnegie\
\ Mellon University, Center for Astrophysics | Harvard \\& \n% Smithsonian, the\
\ Chilean Participation Group, the French Participation Group, \n% Instituto de\
\ Astrof\\'isica de Canarias, The Johns Hopkins \n% University, Kavli Institute\
\ for the Physics and Mathematics of the \n% Universe (IPMU) / University of Tokyo,\
\ the Korean Participation Group, \n% Lawrence Berkeley National Laboratory, Leibniz\
\ Institut f\\\"ur Astrophysik \n% Potsdam (AIP), Max-Planck-Institut f\\\"ur Astronomie\
\ (MPIA Heidelberg), \n% Max-Planck-Institut f\\\"ur Astrophysik (MPA Garching),\
\ \n% Max-Planck-Institut f\\\"ur Extraterrestrische Physik (MPE), \n% National\
\ Astronomical Observatories of China, New Mexico State University, \n% New York\
\ University, University of Notre Dame, Observat\\'ario \n% Nacional / MCTI, The\
\ Ohio State University, Pennsylvania State \n% University, Shanghai Astronomical\
\ Observatory, United \n% Kingdom Participation Group, Universidad Nacional Aut\\\
'onoma \n% de M\\'exico, University of Arizona, University of Colorado Boulder,\
\ \n% University of Oxford, University of Portsmouth, University of Utah, \n% University\
\ of Virginia, University of Washington, University of \n% Wisconsin, Vanderbilt\
\ University, and Yale University.\n% \n% CITATION\n@ARTICLE{2022ApJS..259...35A,\n\
\ author = {{Abdurro'uf} and {Accetta}, Katherine and {Aerts}, Conny and {Silva\
\ Aguirre}, V{\\'\\i}ctor and {Ahumada}, Romina and {Ajgaonkar}, Nikhil and {Filiz\
\ Ak}, N. and {Alam}, Shadab and {Allende Prieto}, Carlos and {Almeida}, Andr{\\\
'e}s and {Anders}, Friedrich and {Anderson}, Scott F. and {Andrews}, Brett H. and\
\ {Anguiano}, Borja and {Aquino-Ort{\\'\\i}z}, Erik and {Arag{\\'o}n-Salamanca},\
\ Alfonso and {Argudo-Fern{\\'a}ndez}, Maria and {Ata}, Metin and {Aubert}, Marie\
\ and {Avila-Reese}, Vladimir and {Badenes}, Carles and {Barb{\\'a}}, Rodolfo H.\
\ and {Barger}, Kat and {Barrera-Ballesteros}, Jorge K. and {Beaton}, Rachael L.\
\ and {Beers}, Timothy C. and {Belfiore}, Francesco and {Bender}, Chad F. and {Bernardi},\
\ Mariangela and {Bershady}, Matthew A. and {Beutler}, Florian and {Bidin}, Christian\
\ Moni and {Bird}, Jonathan C. and {Bizyaev}, Dmitry and {Blanc}, Guillermo A. and\
\ {Blanton}, Michael R. and {Boardman}, Nicholas Fraser and {Bolton}, Adam S. and\
\ {Boquien}, M{\\'e}d{\\'e}ric and {Borissova}, Jura and {Bovy}, Jo and {Brandt},\
\ W.~N. and {Brown}, Jordan and {Brownstein}, Joel R. and {Brusa}, Marcella and\
\ {Buchner}, Johannes and {Bundy}, Kevin and {Burchett}, Joseph N. and {Bureau},\
\ Martin and {Burgasser}, Adam and {Cabang}, Tuesday K. and {Campbell}, Stephanie\
\ and {Cappellari}, Michele and {Carlberg}, Joleen K. and {Wanderley}, F{\\'a}bio\
\ Carneiro and {Carrera}, Ricardo and {Cash}, Jennifer and {Chen}, Yan-Ping and\
\ {Chen}, Wei-Huai and {Cherinka}, Brian and {Chiappini}, Cristina and {Choi}, Peter\
\ Doohyun and {Chojnowski}, S. Drew and {Chung}, Haeun and {Clerc}, Nicolas and\
\ {Cohen}, Roger E. and {Comerford}, Julia M. and {Comparat}, Johan and {da Costa},\
\ Luiz and {Covey}, Kevin and {Crane}, Jeffrey D. and {Cruz-Gonzalez}, Irene and\
\ {Culhane}, Connor and {Cunha}, Katia and {Dai}, Y. Sophia and {Damke}, Guillermo\
\ and {Darling}, Jeremy and {Davidson}, James W., Jr. and {Davies}, Roger and {Dawson},\
\ Kyle and {De Lee}, Nathan and {Diamond-Stanic}, Aleksandar M. and {Cano-D{\\'\\\
i}az}, Mariana and {S{\\'a}nchez}, Helena Dom{\\'\\i}nguez and {Donor}, John and\
\ {Duckworth}, Chris and {Dwelly}, Tom and {Eisenstein}, Daniel J. and {Elsworth},\
\ Yvonne P. and {Emsellem}, Eric and {Eracleous}, Mike and {Escoffier}, Stephanie\
\ and {Fan}, Xiaohui and {Farr}, Emily and {Feng}, Shuai and {Fern{\\'a}ndez-Trincado},\
\ Jos{\\'e} G. and {Feuillet}, Diane and {Filipp}, Andreas and {Fillingham}, Sean\
\ P. and {Frinchaboy}, Peter M. and {Fromenteau}, Sebastien and {Galbany}, Llu{\\\
'\\i}s and {Garc{\\'\\i}a}, Rafael A. and {Garc{\\'\\i}a-Hern{\\'a}ndez}, D.~A.\
\ and {Ge}, Junqiang and {Geisler}, Doug and {Gelfand}, Joseph and {G{\\'e}ron},\
\ Tobias and {Gibson}, Benjamin J. and {Goddy}, Julian and {Godoy-Rivera}, Diego\
\ and {Grabowski}, Kathleen and {Green}, Paul J. and {Greener}, Michael and {Grier},\
\ Catherine J. and {Griffith}, Emily and {Guo}, Hong and {Guy}, Julien and {Hadjara},\
\ Massinissa and {Harding}, Paul and {Hasselquist}, Sten and {Hayes}, Christian\
\ R. and {Hearty}, Fred and {Hern{\\'a}ndez}, Jes{\\'u}s and {Hill}, Lewis and {Hogg},\
\ David W. and {Holtzman}, Jon A. and {Horta}, Danny and {Hsieh}, Bau-Ching and\
\ {Hsu}, Chin-Hao and {Hsu}, Yun-Hsin and {Huber}, Daniel and {Huertas-Company},\
\ Marc and {Hutchinson}, Brian and {Hwang}, Ho Seong and {Ibarra-Medel}, H{\\'e}ctor\
\ J. and {Chitham}, Jacob Ider and {Ilha}, Gabriele S. and {Imig}, Julie and {Jaekle},\
\ Will and {Jayasinghe}, Tharindu and {Ji}, Xihan and {Johnson}, Jennifer A. and\
\ {Jones}, Amy and {J{\\\"o}nsson}, Henrik and {Katkov}, Ivan and {Khalatyan}, Arman,\
\ Dr. and {Kinemuchi}, Karen and {Kisku}, Shobhit and {Knapen}, Johan H. and {Kneib},\
\ Jean-Paul and {Kollmeier}, Juna A. and {Kong}, Miranda and {Kounkel}, Marina and\
\ {Kreckel}, Kathryn and {Krishnarao}, Dhanesh and {Lacerna}, Ivan and {Lane}, Richard\
\ R. and {Langgin}, Rachel and {Lavender}, Ramon and {Law}, David R. and {Lazarz},\
\ Daniel and {Leung}, Henry W. and {Leung}, Ho-Hin and {Lewis}, Hannah M. and {Li},\
\ Cheng and {Li}, Ran and {Lian}, Jianhui and {Liang}, Fu-Heng and {Lin}, Lihwai\
\ and {Lin}, Yen-Ting and {Lin}, Sicheng and {Lintott}, Chris and {Long}, Dan and\
\ {Longa-Pe{\\~n}a}, Pen{\\'e}lope and {L{\\'o}pez-Cob{\\'a}}, Carlos and {Lu},\
\ Shengdong and {Lundgren}, Britt F. and {Luo}, Yuanze and {Mackereth}, J. Ted and\
\ {de la Macorra}, Axel and {Mahadevan}, Suvrath and {Majewski}, Steven R. and {Manchado},\
\ Arturo and {Mandeville}, Travis and {Maraston}, Claudia and {Margalef-Bentabol},\
\ Berta and {Masseron}, Thomas and {Masters}, Karen L. and {Mathur}, Savita and\
\ {McDermid}, Richard M. and {Mckay}, Myles and {Merloni}, Andrea and {Merrifield},\
\ Michael and {Meszaros}, Szabolcs and {Miglio}, Andrea and {Di Mille}, Francesco\
\ and {Minniti}, Dante and {Minsley}, Rebecca and {Monachesi}, Antonela and {Moon},\
\ Jeongin and {Mosser}, Benoit and {Mulchaey}, John and {Muna}, Demitri and {Mu{\\\
~n}oz}, Ricardo R. and {Myers}, Adam D. and {Myers}, Natalie and {Nadathur}, Seshadri\
\ and {Nair}, Preethi and {Nandra}, Kirpal and {Neumann}, Justus and {Newman}, Jeffrey\
\ A. and {Nidever}, David L. and {Nikakhtar}, Farnik and {Nitschelm}, Christian\
\ and {O'Connell}, Julia E. and {Garma-Oehmichen}, Luis and {Luan Souza de Oliveira},\
\ Gabriel and {Olney}, Richard and {Oravetz}, Daniel and {Ortigoza-Urdaneta}, Mario\
\ and {Osorio}, Yeisson and {Otter}, Justin and {Pace}, Zachary J. and {Padilla},\
\ Nelson and {Pan}, Kaike and {Pan}, Hsi-An and {Parikh}, Taniya and {Parker}, James\
\ and {Peirani}, Sebastien and {Pe{\\~n}a Ram{\\'\\i}rez}, Karla and {Penny}, Samantha\
\ and {Percival}, Will J. and {Perez-Fournon}, Ismael and {Pinsonneault}, Marc and\
\ {Poidevin}, Fr{\\'e}d{\\'e}rick and {Poovelil}, Vijith Jacob and {Price-Whelan},\
\ Adrian M. and {B{\\'a}rbara de Andrade Queiroz}, Anna and {Raddick}, M. Jordan\
\ and {Ray}, Amy and {Rembold}, Sandro Barboza and {Riddle}, Nicole and {Riffel},\
\ Rogemar A. and {Riffel}, Rog{\\'e}rio and {Rix}, Hans-Walter and {Robin}, Annie\
\ C. and {Rodr{\\'\\i}guez-Puebla}, Aldo and {Roman-Lopes}, Alexandre and {Rom{\\\
'a}n-Z{\\'u}{\\~n}iga}, Carlos and {Rose}, Benjamin and {Ross}, Ashley J. and {Rossi},\
\ Graziano and {Rubin}, Kate H.~R. and {Salvato}, Mara and {S{\\'a}nchez}, Seb{\\\
'a}stian F. and {S{\\'a}nchez-Gallego}, Jos{\\'e} R. and {Sanderson}, Robyn and\
\ {Santana Rojas}, Felipe Antonio and {Sarceno}, Edgar and {Sarmiento}, Regina and\
\ {Sayres}, Conor and {Sazonova}, Elizaveta and {Schaefer}, Adam L. and {Schiavon},\
\ Ricardo and {Schlegel}, David J. and {Schneider}, Donald P. and {Schultheis},\
\ Mathias and {Schwope}, Axel and {Serenelli}, Aldo and {Serna}, Javier and {Shao},\
\ Zhengyi and {Shapiro}, Griffin and {Sharma}, Anubhav and {Shen}, Yue and {Shetrone},\
\ Matthew and {Shu}, Yiping and {Simon}, Joshua D. and {Skrutskie}, M.~F. and {Smethurst},\
\ Rebecca and {Smith}, Verne and {Sobeck}, Jennifer and {Spoo}, Taylor and {Sprague},\
\ Dani and {Stark}, David V. and {Stassun}, Keivan G. and {Steinmetz}, Matthias\
\ and {Stello}, Dennis and {Stone-Martinez}, Alexander and {Storchi-Bergmann}, Thaisa\
\ and {Stringfellow}, Guy S. and {Stutz}, Amelia and {Su}, Yung-Chau and {Taghizadeh-Popp},\
\ Manuchehr and {Talbot}, Michael S. and {Tayar}, Jamie and {Telles}, Eduardo and\
\ {Teske}, Johanna and {Thakar}, Ani and {Theissen}, Christopher and {Tkachenko},\
\ Andrew and {Thomas}, Daniel and {Tojeiro}, Rita and {Hernandez Toledo}, Hector\
\ and {Troup}, Nicholas W. and {Trump}, Jonathan R. and {Trussler}, James and {Turner},\
\ Jacqueline and {Tuttle}, Sarah and {Unda-Sanzana}, Eduardo and {V{\\'a}zquez-Mata},\
\ Jos{\\'e} Antonio and {Valentini}, Marica and {Valenzuela}, Octavio and {Vargas-Gonz{\\\
'a}lez}, Jaime and {Vargas-Maga{\\~n}a}, Mariana and {Alfaro}, Pablo Vera and {Villanova},\
\ Sandro and {Vincenzo}, Fiorenzo and {Wake}, David and {Warfield}, Jack T. and\
\ {Washington}, Jessica Diane and {Weaver}, Benjamin Alan and {Weijmans}, Anne-Marie\
\ and {Weinberg}, David H. and {Weiss}, Achim and {Westfall}, Kyle B. and {Wild},\
\ Vivienne and {Wilde}, Matthew C. and {Wilson}, John C. and {Wilson}, Robert F.\
\ and {Wilson}, Mikayla and {Wolf}, Julien and {Wood-Vasey}, W.~M. and {Yan}, Renbin\
\ and {Zamora}, Olga and {Zasowski}, Gail and {Zhang}, Kai and {Zhao}, Cheng and\
\ {Zheng}, Zheng and {Zheng}, Zheng and {Zhu}, Kai},\n title = \"{The Seventeenth\
\ Data Release of the Sloan Digital Sky Surveys: Complete Release of MaNGA, MaStar,\
\ and APOGEE-2 Data}\",\n journal = {\\apjs},\n keywords = {Astronomy data\
\ acquisition, Astronomy databases, Surveys, 1860, 83, 1671, Astrophysics - Astrophysics\
\ of Galaxies, Astrophysics - Instrumentation and Methods for Astrophysics},\n \
\ year = 2022,\n month = apr,\n volume = {259},\n number\
\ = {2},\n eid = {35},\n pages = {35},\n doi = {10.3847/1538-4365/ac4414},\n\
archivePrefix = {arXiv},\n eprint = {2112.02026},\n primaryClass = {astro-ph.GA},\n\
\ adsurl = {https://ui.adsabs.harvard.edu/abs/2022ApJS..259...35A},\n \
\ adsnote = {Provided by the SAO/NASA Astrophysics Data System}\n}\n"
---
# SDSS Dataset
Spectra dataset based on SDSS-IV.
% % ACKNOWLEDGEMENTS
% % From: https://www.sdss4.org/collaboration/citing-sdss/
%
% Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss4.org.
%
% SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard & Smithsonian (CfA), the Chilean Participation Group, the French Participation Group, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatório Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
%
% In addition, the appropriate SDSS acknowledgment(s) for the survey and data releases that were used should be included in the Acknowledgments section:
%
% Funding for the Sloan Digital Sky Survey IV has been provided by the
% Alfred P. Sloan Foundation, the U.S. Department of Energy Office of
% Science, and the Participating Institutions.
%
% SDSS-IV acknowledges support and resources from the Center for High
% Performance Computing at the University of Utah. The SDSS
% website is www.sdss4.org.
%
% SDSS-IV is managed by the Astrophysical Research Consortium
% for the Participating Institutions of the SDSS Collaboration including
% the Brazilian Participation Group, the Carnegie Institution for Science,
% Carnegie Mellon University, Center for Astrophysics | Harvard \&
% Smithsonian, the Chilean Participation Group, the French Participation Group,
% Instituto de Astrof\'isica de Canarias, The Johns Hopkins
% University, Kavli Institute for the Physics and Mathematics of the
% Universe (IPMU) / University of Tokyo, the Korean Participation Group,
% Lawrence Berkeley National Laboratory, Leibniz Institut f\"ur Astrophysik
% Potsdam (AIP), Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg),
% Max-Planck-Institut f\"ur Astrophysik (MPA Garching),
% Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE),
% National Astronomical Observatories of China, New Mexico State University,
% New York University, University of Notre Dame, Observat\'ario
% Nacional / MCTI, The Ohio State University, Pennsylvania State
% University, Shanghai Astronomical Observatory, United
% Kingdom Participation Group, Universidad Nacional Aut\'onoma
% de M\'exico, University of Arizona, University of Colorado Boulder,
% University of Oxford, University of Portsmouth, University of Utah,
% University of Virginia, University of Washington, University of
% Wisconsin, Vanderbilt University, and Yale University.
%
% CITATION
@ARTICLE{2022ApJS..259...35A,
author = {{Abdurro'uf} and {Accetta}, Katherine and {Aerts}, Conny and {Silva Aguirre}, V{\'\i}ctor and {Ahumada}, Romina and {Ajgaonkar}, Nikhil and {Filiz Ak}, N. and {Alam}, Shadab and {Allende Prieto}, Carlos and {Almeida}, Andr{\'e}s and {Anders}, Friedrich and {Anderson}, Scott F. and {Andrews}, Brett H. and {Anguiano}, Borja and {Aquino-Ort{\'\i}z}, Erik and {Arag{\'o}n-Salamanca}, Alfonso and {Argudo-Fern{\'a}ndez}, Maria and {Ata}, Metin and {Aubert}, Marie and {Avila-Reese}, Vladimir and {Badenes}, Carles and {Barb{\'a}}, Rodolfo H. and {Barger}, Kat and {Barrera-Ballesteros}, Jorge K. and {Beaton}, Rachael L. and {Beers}, Timothy C. and {Belfiore}, Francesco and {Bender}, Chad F. and {Bernardi}, Mariangela and {Bershady}, Matthew A. and {Beutler}, Florian and {Bidin}, Christian Moni and {Bird}, Jonathan C. and {Bizyaev}, Dmitry and {Blanc}, Guillermo A. and {Blanton}, Michael R. and {Boardman}, Nicholas Fraser and {Bolton}, Adam S. and {Boquien}, M{\'e}d{\'e}ric and {Borissova}, Jura and {Bovy}, Jo and {Brandt}, W.~N. and {Brown}, Jordan and {Brownstein}, Joel R. and {Brusa}, Marcella and {Buchner}, Johannes and {Bundy}, Kevin and {Burchett}, Joseph N. and {Bureau}, Martin and {Burgasser}, Adam and {Cabang}, Tuesday K. and {Campbell}, Stephanie and {Cappellari}, Michele and {Carlberg}, Joleen K. and {Wanderley}, F{\'a}bio Carneiro and {Carrera}, Ricardo and {Cash}, Jennifer and {Chen}, Yan-Ping and {Chen}, Wei-Huai and {Cherinka}, Brian and {Chiappini}, Cristina and {Choi}, Peter Doohyun and {Chojnowski}, S. Drew and {Chung}, Haeun and {Clerc}, Nicolas and {Cohen}, Roger E. and {Comerford}, Julia M. and {Comparat}, Johan and {da Costa}, Luiz and {Covey}, Kevin and {Crane}, Jeffrey D. and {Cruz-Gonzalez}, Irene and {Culhane}, Connor and {Cunha}, Katia and {Dai}, Y. Sophia and {Damke}, Guillermo and {Darling}, Jeremy and {Davidson}, James W., Jr. and {Davies}, Roger and {Dawson}, Kyle and {De Lee}, Nathan and {Diamond-Stanic}, Aleksandar M. and {Cano-D{\'\i}az}, Mariana and {S{\'a}nchez}, Helena Dom{\'\i}nguez and {Donor}, John and {Duckworth}, Chris and {Dwelly}, Tom and {Eisenstein}, Daniel J. and {Elsworth}, Yvonne P. and {Emsellem}, Eric and {Eracleous}, Mike and {Escoffier}, Stephanie and {Fan}, Xiaohui and {Farr}, Emily and {Feng}, Shuai and {Fern{\'a}ndez-Trincado}, Jos{\'e} G. and {Feuillet}, Diane and {Filipp}, Andreas and {Fillingham}, Sean P. and {Frinchaboy}, Peter M. and {Fromenteau}, Sebastien and {Galbany}, Llu{\'\i}s and {Garc{\'\i}a}, Rafael A. and {Garc{\'\i}a-Hern{\'a}ndez}, D.~A. and {Ge}, Junqiang and {Geisler}, Doug and {Gelfand}, Joseph and {G{\'e}ron}, Tobias and {Gibson}, Benjamin J. and {Goddy}, Julian and {Godoy-Rivera}, Diego and {Grabowski}, Kathleen and {Green}, Paul J. and {Greener}, Michael and {Grier}, Catherine J. and {Griffith}, Emily and {Guo}, Hong and {Guy}, Julien and {Hadjara}, Massinissa and {Harding}, Paul and {Hasselquist}, Sten and {Hayes}, Christian R. and {Hearty}, Fred and {Hern{\'a}ndez}, Jes{\'u}s and {Hill}, Lewis and {Hogg}, David W. and {Holtzman}, Jon A. and {Horta}, Danny and {Hsieh}, Bau-Ching and {Hsu}, Chin-Hao and {Hsu}, Yun-Hsin and {Huber}, Daniel and {Huertas-Company}, Marc and {Hutchinson}, Brian and {Hwang}, Ho Seong and {Ibarra-Medel}, H{\'e}ctor J. and {Chitham}, Jacob Ider and {Ilha}, Gabriele S. and {Imig}, Julie and {Jaekle}, Will and {Jayasinghe}, Tharindu and {Ji}, Xihan and {Johnson}, Jennifer A. and {Jones}, Amy and {J{\"o}nsson}, Henrik and {Katkov}, Ivan and {Khalatyan}, Arman, Dr. 
and {Kinemuchi}, Karen and {Kisku}, Shobhit and {Knapen}, Johan H. and {Kneib}, Jean-Paul and {Kollmeier}, Juna A. and {Kong}, Miranda and {Kounkel}, Marina and {Kreckel}, Kathryn and {Krishnarao}, Dhanesh and {Lacerna}, Ivan and {Lane}, Richard R. and {Langgin}, Rachel and {Lavender}, Ramon and {Law}, David R. and {Lazarz}, Daniel and {Leung}, Henry W. and {Leung}, Ho-Hin and {Lewis}, Hannah M. and {Li}, Cheng and {Li}, Ran and {Lian}, Jianhui and {Liang}, Fu-Heng and {Lin}, Lihwai and {Lin}, Yen-Ting and {Lin}, Sicheng and {Lintott}, Chris and {Long}, Dan and {Longa-Pe{\~n}a}, Pen{\'e}lope and {L{\'o}pez-Cob{\'a}}, Carlos and {Lu}, Shengdong and {Lundgren}, Britt F. and {Luo}, Yuanze and {Mackereth}, J. Ted and {de la Macorra}, Axel and {Mahadevan}, Suvrath and {Majewski}, Steven R. and {Manchado}, Arturo and {Mandeville}, Travis and {Maraston}, Claudia and {Margalef-Bentabol}, Berta and {Masseron}, Thomas and {Masters}, Karen L. and {Mathur}, Savita and {McDermid}, Richard M. and {Mckay}, Myles and {Merloni}, Andrea and {Merrifield}, Michael and {Meszaros}, Szabolcs and {Miglio}, Andrea and {Di Mille}, Francesco and {Minniti}, Dante and {Minsley}, Rebecca and {Monachesi}, Antonela and {Moon}, Jeongin and {Mosser}, Benoit and {Mulchaey}, John and {Muna}, Demitri and {Mu{\~n}oz}, Ricardo R. and {Myers}, Adam D. and {Myers}, Natalie and {Nadathur}, Seshadri and {Nair}, Preethi and {Nandra}, Kirpal and {Neumann}, Justus and {Newman}, Jeffrey A. and {Nidever}, David L. and {Nikakhtar}, Farnik and {Nitschelm}, Christian and {O'Connell}, Julia E. and {Garma-Oehmichen}, Luis and {Luan Souza de Oliveira}, Gabriel and {Olney}, Richard and {Oravetz}, Daniel and {Ortigoza-Urdaneta}, Mario and {Osorio}, Yeisson and {Otter}, Justin and {Pace}, Zachary J. and {Padilla}, Nelson and {Pan}, Kaike and {Pan}, Hsi-An and {Parikh}, Taniya and {Parker}, James and {Peirani}, Sebastien and {Pe{\~n}a Ram{\'\i}rez}, Karla and {Penny}, Samantha and {Percival}, Will J. and {Perez-Fournon}, Ismael and {Pinsonneault}, Marc and {Poidevin}, Fr{\'e}d{\'e}rick and {Poovelil}, Vijith Jacob and {Price-Whelan}, Adrian M. and {B{\'a}rbara de Andrade Queiroz}, Anna and {Raddick}, M. Jordan and {Ray}, Amy and {Rembold}, Sandro Barboza and {Riddle}, Nicole and {Riffel}, Rogemar A. and {Riffel}, Rog{\'e}rio and {Rix}, Hans-Walter and {Robin}, Annie C. and {Rodr{\'\i}guez-Puebla}, Aldo and {Roman-Lopes}, Alexandre and {Rom{\'a}n-Z{\'u}{\~n}iga}, Carlos and {Rose}, Benjamin and {Ross}, Ashley J. and {Rossi}, Graziano and {Rubin}, Kate H.~R. and {Salvato}, Mara and {S{\'a}nchez}, Seb{\'a}stian F. and {S{\'a}nchez-Gallego}, Jos{\'e} R. and {Sanderson}, Robyn and {Santana Rojas}, Felipe Antonio and {Sarceno}, Edgar and {Sarmiento}, Regina and {Sayres}, Conor and {Sazonova}, Elizaveta and {Schaefer}, Adam L. and {Schiavon}, Ricardo and {Schlegel}, David J. and {Schneider}, Donald P. and {Schultheis}, Mathias and {Schwope}, Axel and {Serenelli}, Aldo and {Serna}, Javier and {Shao}, Zhengyi and {Shapiro}, Griffin and {Sharma}, Anubhav and {Shen}, Yue and {Shetrone}, Matthew and {Shu}, Yiping and {Simon}, Joshua D. and {Skrutskie}, M.~F. and {Smethurst}, Rebecca and {Smith}, Verne and {Sobeck}, Jennifer and {Spoo}, Taylor and {Sprague}, Dani and {Stark}, David V. and {Stassun}, Keivan G. and {Steinmetz}, Matthias and {Stello}, Dennis and {Stone-Martinez}, Alexander and {Storchi-Bergmann}, Thaisa and {Stringfellow}, Guy S. and {Stutz}, Amelia and {Su}, Yung-Chau and {Taghizadeh-Popp}, Manuchehr and {Talbot}, Michael S. 
and {Tayar}, Jamie and {Telles}, Eduardo and {Teske}, Johanna and {Thakar}, Ani and {Theissen}, Christopher and {Tkachenko}, Andrew and {Thomas}, Daniel and {Tojeiro}, Rita and {Hernandez Toledo}, Hector and {Troup}, Nicholas W. and {Trump}, Jonathan R. and {Trussler}, James and {Turner}, Jacqueline and {Tuttle}, Sarah and {Unda-Sanzana}, Eduardo and {V{\'a}zquez-Mata}, Jos{\'e} Antonio and {Valentini}, Marica and {Valenzuela}, Octavio and {Vargas-Gonz{\'a}lez}, Jaime and {Vargas-Maga{\~n}a}, Mariana and {Alfaro}, Pablo Vera and {Villanova}, Sandro and {Vincenzo}, Fiorenzo and {Wake}, David and {Warfield}, Jack T. and {Washington}, Jessica Diane and {Weaver}, Benjamin Alan and {Weijmans}, Anne-Marie and {Weinberg}, David H. and {Weiss}, Achim and {Westfall}, Kyle B. and {Wild}, Vivienne and {Wilde}, Matthew C. and {Wilson}, John C. and {Wilson}, Robert F. and {Wilson}, Mikayla and {Wolf}, Julien and {Wood-Vasey}, W.~M. and {Yan}, Renbin and {Zamora}, Olga and {Zasowski}, Gail and {Zhang}, Kai and {Zhao}, Cheng and {Zheng}, Zheng and {Zheng}, Zheng and {Zhu}, Kai},
title = "{The Seventeenth Data Release of the Sloan Digital Sky Surveys: Complete Release of MaNGA, MaStar, and APOGEE-2 Data}",
journal = {\apjs},
keywords = {Astronomy data acquisition, Astronomy databases, Surveys, 1860, 83, 1671, Astrophysics - Astrophysics of Galaxies, Astrophysics - Instrumentation and Methods for Astrophysics},
year = 2022,
month = apr,
volume = {259},
number = {2},
eid = {35},
pages = {35},
doi = {10.3847/1538-4365/ac4414},
archivePrefix = {arXiv},
eprint = {2112.02026},
primaryClass = {astro-ph.GA},
adsurl = {https://ui.adsabs.harvard.edu/abs/2022ApJS..259...35A},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
| 0 | 0 | [
"arxiv:2112.02026",
"region:us"
] | 2025-11-12T16:51:49+00:00 | 2025-11-12T16:53:33+00:00 | 0 |
sii-geai-lab/ESOT500 |
# ESOT500: A High-Frequency Dataset for Event-Driven Perception
## Introduction
**ESOT500** is an event-based single-object tracking dataset with high-frequency annotations. It was created to demonstrate the **STARE (STream-based lAtency-awaRe Evaluation)** framework, enabling rigorous assessment of the real-time capabilities of event-driven perception models.
This dataset is introduced in the paper: **[Bridging the Latency Gap with a Continuous Stream Evaluation Framework in Event-Driven Perception](https://github.com/ispc-lab/STARE)**.
## Key Features
- **500 Hz Annotations:** Provides temporally dense, time-aligned ground-truth bounding boxes at 500 Hz, accurately capturing highly dynamic object motion and mitigating temporal aliasing.
- **Dual Resolutions:** Includes two subsets to test model robustness at different scales:
- `ESOT500-L`: Low-resolution (346x260)
- `ESOT500-H`: High-resolution (1280x720)
- **Diverse Scenarios:** Covers a wide range of indoor/outdoor scenes, object classes, and challenging conditions like high speed, motion blur, and occlusion.
- **Continuous-Stream Ready:** Designed to evaluate models on continuous event streams, moving beyond the conventional frame-based paradigm.
## Dataset Structure
The ESOT500 dataset is organized into two main configurations, `ESOT500-L` and `ESOT500-H`. Each configuration contains the following directories:
- `aedat4/`: Contains the raw event stream data in `.aedat4` format for each sequence.
- `anno_t/`: Contains the corresponding 500 Hz time-aligned annotations in `.txt` format. Each line in the annotation file represents `[timestamp, x, y, width, height]`.
- **Split Files:**
- `train.txt`: The primary training split.
- `test.txt`: The primary testing split.
- `train_additional.txt` / `test_additional.txt`: Additional splits for extended evaluation.*
- `test_challenging.txt`: A subset of challenging sequences from the test set.
- `cas.txt`: A subset of sequences particularly suited for evaluating Context-Aware Sampling strategies.
*Note: The `additional` split of ESOT500-L was recorded slightly out of focus, leading to some target blur that could affect tracking performance. Consequently, it was excluded from the primary experimental settings in our paper.*
**Compressed files can be downloaded directly from [`ESOT500/warped`](https://huggingface.co/datasets/sii-geai-lab/ESOT500/tree/main/warped).** |
# ESOT500: A High-Frequency Dataset for Event-Driven Perception
## Introduction
**ESOT500** is an event-based single-object tracking dataset with high-frequency annotations. It was created to demonstrate the **STARE (STream-based lAtency-awaRe Evaluation)** framework, enabling rigorous assessment of the real-time capabilities of event-driven perception models.
This dataset is introduced in the paper: **[Bridging the Latency Gap with a Continuous Stream Evaluation Framework in Event-Driven Perception](https://github.com/ispc-lab/STARE)**.
## Key Features
- **500 Hz Annotations:** Provides temporally dense, time-aligned ground-truth bounding boxes at 500 Hz, accurately capturing highly dynamic object motion and mitigating temporal aliasing.
- **Dual Resolutions:** Includes two subsets to test model robustness at different scales:
- `ESOT500-L`: Low-resolution (346x260)
- `ESOT500-H`: High-resolution (1280x720)
- **Diverse Scenarios:** Covers a wide range of indoor/outdoor scenes, object classes, and challenging conditions like high speed, motion blur, and occlusion.
- **Continuous-Stream Ready:** Designed to evaluate models on continuous event streams, moving beyond the conventional frame-based paradigm.
## Dataset Structure
The ESOT500 dataset is organized into two main configurations, `ESOT500-L` and `ESOT500-H`. Each configuration contains the following directories:
- `aedat4/`: Contains the raw event stream data in `.aedat4` format for each sequence.
- `anno_t/`: Contains the corresponding 500 Hz time-aligned annotations in `.txt` format. Each line in an annotation file represents `[timestamp, x, y, width, height]` (see the parsing sketch below).
- **Split Files:**
- `train.txt`: The primary training split.
- `test.txt`: The primary testing split.
- `train_additional.txt` / `test_additional.txt`: Additional splits for extended evaluation.*
- `test_challenging.txt`: A subset of challenging sequences from the test set.
- `cas.txt`: A subset of sequences particularly suited for evaluating Context-Aware Sampling strategies.
*Note: The `additional` split of ESOT500-L was recorded slightly out of focus, leading to some target blur that could affect tracking performance. Consequently, it was excluded from the primary experimental settings in our paper.*
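For quick inspection, a minimal loading sketch is shown below; it assumes whitespace-delimited values and a hypothetical file name, with the five-column layout taken from the annotation format described above.
```python
import numpy as np

# Minimal sketch: load one 500 Hz annotation file from anno_t/.
# Column layout per the description above: [timestamp, x, y, width, height].
# The whitespace delimiter and the example file name are assumptions.
anno = np.loadtxt("anno_t/example_sequence.txt")
timestamps = anno[:, 0]   # annotation timestamps
boxes = anno[:, 1:5]      # (x, y, width, height) at each timestamp
print(anno.shape, timestamps[0], boxes[0])
```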
**Compressed files can be downloaded directly from [`ESOT500/warped`](https://huggingface.co/datasets/sii-geai-lab/ESOT500/tree/main/warped).** | 301 | 0 | [
"license:cc-by-4.0",
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"event-camera",
"bounding-box",
"tracking",
"image"
] | 2024-06-12T06:32:49+00:00 | 2025-11-12T16:52:36+00:00 | 0 |
ryankamiri/R2E-Gym-Subset |
# R2E-Gym Subset Filtered for MAGRPO
Filtered subset of R2E-Gym optimized for 2-agent MAGRPO training with 7B models.
## Dataset Statistics
- Total instances: 462
- Format: Issue description + Oracle files in prompt
- Optimized for: 2-agent collaboration, 7B models
## Filtering Criteria
1. File count: 1-2 files per instance
2. Oracle size: <100K chars total
3. Problem statement: 50-500 words
4. Excludes import-only changes
5. Patch complexity: 2-10 hunks, ≤50 lines changed
## Usage
```python
from datasets import load_dataset
ds = load_dataset("ryankamiri/R2E-Gym-Subset")
print(ds['train'][0]['prompt']) # Issue + oracle files
print(ds['train'][0]['patch']) # Golden patch (readable)
# All original R2E-Gym fields are preserved:
print(ds['train'][0]['repo_name'])
print(ds['train'][0]['docker_image'])
print(ds['train'][0]['parsed_commit_content'])
```
## Citation
```bibtex
@article{jain2025r2e,
title={R2e-gym: Procedural environments and hybrid verifiers for scaling open-weights swe agents},
author={Jain, Naman and Singh, Jaskirat and Shetty, Manish and Zheng, Liang and Sen, Koushik and Stoica, Ion},
journal={arXiv preprint arXiv:2504.07164},
year={2025}
}
```
|
# R2E-Gym Subset Filtered for MAGRPO
Filtered subset of R2E-Gym optimized for 2-agent MAGRPO training with 7B models.
## Dataset Statistics
- Total instances: 462
- Format: Issue description + Oracle files in prompt
- Optimized for: 2-agent collaboration, 7B models
## Filtering Criteria
1. File count: 1-2 files per instance
2. Oracle size: <100K chars total
3. Problem statement: 50-500 words
4. Excludes import-only changes
5. Patch complexity: 2-10 hunks, ≤50 lines changed
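As a rough illustration of how criteria 1, 3, and 5 can be checked on a raw instance, here is a sketch; the field names `problem_statement` and `patch` are assumptions about the schema, and the actual filtering code may differ.
```python
def keep_instance(example: dict) -> bool:
    """Sketch of criteria 1, 3, and 5; field names are assumed."""
    # 3. Problem statement length: 50-500 words
    n_words = len(example["problem_statement"].split())
    if not 50 <= n_words <= 500:
        return False
    patch = example["patch"]
    # 1. File count: 1-2 files, counted via unified-diff file headers
    n_files = patch.count("diff --git ")
    if not 1 <= n_files <= 2:
        return False
    # 5. Patch complexity: 2-10 hunks, <=50 changed lines
    n_hunks = patch.count("\n@@")
    n_changed = sum(
        line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
        for line in patch.splitlines()
    )
    return 2 <= n_hunks <= 10 and n_changed <= 50
```
Criteria 2 and 4 (oracle size and import-only changes) need the oracle file contents and the diff body, so they are omitted from this sketch.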
## Usage
```python
from datasets import load_dataset
ds = load_dataset("ryankamiri/R2E-Gym-Subset")
print(ds['train'][0]['prompt']) # Issue + oracle files
print(ds['train'][0]['patch']) # Golden patch (readable)
# All original R2E-Gym fields are preserved:
print(ds['train'][0]['repo_name'])
print(ds['train'][0]['docker_image'])
print(ds['train'][0]['parsed_commit_content'])
```
## Citation
```bibtex
@article{jain2025r2e,
title={R2e-gym: Procedural environments and hybrid verifiers for scaling open-weights swe agents},
author={Jain, Naman and Singh, Jaskirat and Shetty, Manish and Zheng, Liang and Sen, Koushik and Stoica, Ion},
journal={arXiv preprint arXiv:2504.07164},
year={2025}
}
```
| 77 | 0 | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2504.07164",
"region:us",
"code",
"python",
"software-engineering",
"magrpo",
"r2e-gym"
] | 2025-11-03T14:34:56+00:00 | 2025-11-12T16:52:37+00:00 | 0 |
hf-doc-build/doc-build | This repo contains all the docs published on https://huggingface.co/docs.
The docs are generated with https://github.com/huggingface/doc-builder.
<!-- comment to trigger webhook.= --> | This repo contains all the docs published on https://huggingface.co/docs.
The docs are generated with https://github.com/huggingface/doc-builder.
<!-- comment to trigger webhook.= --> | 1,021,897 | 12 | [
"license:mit",
"region:us"
] | 2022-10-24T15:39:05+00:00 | 2025-11-12T16:51:28+00:00 | 0 |
giladbecher/apple-stock-price-trend-and-indicators-10-years | # Apple Stock Price Trend and Indicators (10 Years)
### Video Presentation
*(not uploaded yet)*
---
## **Dataset Overview**
This dataset provides ten years of Apple Inc. (AAPL) stock market data, including both raw price information and multiple **technical indicators** commonly used in financial analysis.
It contains **2,516 rows** and **20 columns** of daily data covering 2014 to 2023.
### **Source**
[Kaggle – Apple Stock Price Prediction (10 Years)](https://www.kaggle.com/datasets/aspillai/apple-stock-price-prediction-10-years)
### **Features**
- **Price data:** open, high, low, close, volume
- **Momentum indicators:** RSI (7, 14), CCI (7, 14)
- **Trend indicators:** SMA (50, 100), EMA (50, 100)
- **Volatility indicators:** MACD, Bollinger Bands, ATR (7, 14), TrueRange
- **Target:** bullish or bearish – indicating expected price direction
---
## **Research Question**
> **Which technical indicator is the most reliable in predicting Apple’s stock trend (bullish or bearish)?**
This analysis investigates which financial indicators most accurately align with the market’s actual direction, helping identify which tools provide the most trustworthy trading signals.
---
## **Data Cleaning**
- Converted the **date** column into datetime format and sorted the dataset chronologically.
- Confirmed there were **no missing or duplicate entries**.
- Verified all columns were numeric (except date and target).
- Retained all records, as extreme values reflect **genuine market volatility** rather than data errors.
---
## **Outlier Detection**
Outliers were identified using the **IQR method**.
The columns with the most extreme values were:
| Feature | Outliers |
|----------|-----------|
| MACD | 334 |
| TrueRange | 77 |
| ATR_7 | 6 |
| CCI_14 | 4 |
All were kept in the dataset, as they correspond to real market events such as high-volatility trading days.
---
## **Descriptive Statistics**
- The dataset spans a decade, showing a **steady long-term growth** in Apple’s closing price.
- **Volume** varied significantly, peaking during major events and corrections.
- **RSI, CCI, SMA, EMA, and MACD** provide a comprehensive view of momentum and trend behavior.
- The **target** variable (bullish/bearish) was well balanced, ensuring fair modeling.
---
## **Exploratory Data Analysis**
### **1. Correlation Matrix**
A heatmap shows strong relationships among open, high, low, and close prices, and among indicators measuring similar behavior (e.g., RSI_7 vs RSI_14).
### **2. Time-Series Analysis**
A line chart of Apple’s closing prices (2014–2023) reveals a clear upward trend, with short-term volatility around global events such as those of 2020.
### **3. Distribution Analysis**
Histogram of closing prices shows right skewness, reflecting long-term appreciation in value.
### **4. RSI Comparison**
Scatter plot between **RSI_7** and **RSI_14** demonstrates a strong linear correlation, indicating short-term momentum typically aligns with medium-term trends.
### **5. Indicator Reliability**
Correlation of each indicator with the **target** shows that:
- **CCI_7** and **RSI_7** have the highest predictive reliability (~0.35).
- **MACD** and **ATR** follow closely, suggesting consistent but weaker associations.
- Volatility measures (ATR, TrueRange) contribute meaningfully but are less stable over time.
---
## **Key Insights**
- **RSI_7** and **CCI_7** are the most reliable technical indicators for predicting trend direction.
- **MACD** provides complementary information about momentum shifts.
- Periods of high volatility (high ATR, TrueRange) tend to coincide with stronger trading volume.
- Apple’s overall trend is bullish, with brief corrections reflecting broader market behavior.
- The dataset is balanced, numeric, and ready for future **machine learning classification** tasks.
---
## **Conclusion**
This analysis shows that **short-term momentum indicators** like RSI_7 and CCI_7 best capture Apple’s true price movement direction.
While other indicators provide additional context, these two stand out as the most reliable predictors of bullish versus bearish sentiment across a decade of data.
---
## **Files Included**
| File | Description |
|------|--------------|
| `aapl_2014_2023.csv` | Original dataset |
| `apple_stock_EDA.ipynb` | Google Colab notebook with full analysis |
| `README.md` | Summary and findings |
---
© 2025 Gilad Becher | For educational purposes (Reichman University – Introduction to Data Science)
| # Apple Stock Price Trend and Indicators (10 Years)
### Video Presentation
*(not uploaded yet)*
---
## **Dataset Overview**
This dataset provides ten years of Apple Inc. (AAPL) stock market data, including both raw price information and multiple **technical indicators** commonly used in financial analysis.
It contains **2,516 rows** and **20 columns** of daily data covering 2014 to 2023.
### **Source**
[Kaggle – Apple Stock Price Prediction (10 Years)](https://www.kaggle.com/datasets/aspillai/apple-stock-price-prediction-10-years)
### **Features**
- **Price data:** open, high, low, close, volume
- **Momentum indicators:** RSI (7, 14), CCI (7, 14)
- **Trend indicators:** SMA (50, 100), EMA (50, 100)
- **Volatility indicators:** MACD, Bollinger Bands, ATR (7, 14), TrueRange
- **Target:** bullish or bearish – indicating expected price direction
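To make the indicator columns concrete, here is a simple RSI sketch using plain rolling means; the dataset's own RSI columns may instead use Wilder's smoothing, so treat this as illustrative rather than a reproduction.
```python
import pandas as pd

def rsi(close: pd.Series, window: int = 7) -> pd.Series:
    """Relative Strength Index with simple rolling means (illustrative)."""
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(window).mean()
    loss = (-delta.clip(upper=0)).rolling(window).mean()
    return 100 - 100 / (1 + gain / loss)

df = pd.read_csv("aapl_2014_2023.csv")
df["rsi_7_recomputed"] = rsi(df["close"], window=7)
```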
---
## **Research Question**
> **Which technical indicator is the most reliable in predicting Apple’s stock trend (bullish or bearish)?**
This analysis investigates which financial indicators most accurately align with the market’s actual direction, helping identify which tools provide the most trustworthy trading signals.
---
## **Data Cleaning**
- Converted the **date** column into datetime format and sorted the dataset chronologically.
- Confirmed there were **no missing or duplicate entries**.
- Verified all columns were numeric (except date and target).
- Retained all records, as extreme values reflect **genuine market volatility** rather than data errors.
---
## **Outlier Detection**
Outliers were identified using the **IQR method**.
The columns with the most extreme values were:
| Feature | Outliers |
|----------|-----------|
| MACD | 334 |
| TrueRange | 77 |
| ATR_7 | 6 |
| CCI_14 | 4 |
All were kept in the dataset, as they correspond to real market events such as high-volatility trading days.
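The counts above can be reproduced with a few lines of pandas using the conventional 1.5×IQR fences; the notebook's exact implementation may differ slightly.
```python
import pandas as pd

def iqr_outlier_count(s: pd.Series) -> int:
    """Count values outside the conventional 1.5*IQR fences."""
    q1, q3 = s.quantile(0.25), s.quantile(0.75)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return int(((s < lo) | (s > hi)).sum())

df = pd.read_csv("aapl_2014_2023.csv")
for col in ["MACD", "TrueRange", "ATR_7", "CCI_14"]:
    print(col, iqr_outlier_count(df[col]))
```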
---
## **Descriptive Statistics**
- The dataset spans a decade, showing a **steady long-term growth** in Apple’s closing price.
- **Volume** varied significantly, peaking during major events and corrections.
- **RSI, CCI, SMA, EMA, and MACD** provide a comprehensive view of momentum and trend behavior.
- The **target** variable (bullish/bearish) was well balanced, ensuring fair modeling.
---
## **Exploratory Data Analysis**
### **1. Correlation Matrix**
A heatmap shows strong relationships among open, high, low, and close prices, and among indicators measuring similar behavior (e.g., RSI_7 vs RSI_14).
### **2. Time-Series Analysis**
A line chart of Apple’s closing prices (2014–2023) reveals a clear upward trend with short-term volatility during global events such as 2020.
### **3. Distribution Analysis**
The histogram of closing prices is right-skewed, reflecting long-term appreciation in value.
### **4. RSI Comparison**
Scatter plot between **RSI_7** and **RSI_14** demonstrates a strong linear correlation, indicating short-term momentum typically aligns with medium-term trends.
### **5. Indicator Reliability**
Correlation of each indicator with the **target** shows that:
- **CCI_7** and **RSI_7** have the highest predictive reliability (~0.35).
- **MACD** and **ATR** follow closely, suggesting consistent but weaker associations.
- Volatility measures (ATR, TrueRange) contribute meaningfully but are less stable over time.
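A hedged sketch of how such correlations can be computed (assuming the target column is literally named `target` with values `bullish`/`bearish`; the notebook's encoding may differ):
```python
# Encode the target as 1 (bullish) / 0 (bearish) and correlate each numeric column with it.
y = (df["target"] == "bullish").astype(int)
reliability = df.select_dtypes("number").corrwith(y).abs().sort_values(ascending=False)
print(reliability.head(10))
```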
---
## **Key Insights**
- **RSI_7** and **CCI_7** are the most reliable technical indicators for predicting trend direction.
- **MACD** provides complementary information about momentum shifts.
- Periods of high volatility (high ATR, TrueRange) tend to coincide with stronger trading volume.
- Apple’s overall trend is bullish, with brief corrections reflecting broader market behavior.
- The dataset is balanced, numeric, and ready for future **machine learning classification** tasks.
---
## **Conclusion**
This analysis shows that **short-term momentum indicators** like RSI_7 and CCI_7 best capture Apple’s true price movement direction.
While other indicators provide additional context, these two stand out as the most reliable predictors of bullish versus bearish sentiment across a decade of data.
---
## **Files Included**
| File | Description |
|------|--------------|
| `aapl_2014_2023.csv` | Original dataset |
| `apple_stock_EDA.ipynb` | Google Colab notebook with full analysis |
| `README.md` | Summary and findings |
---
© 2025 Gilad Becher | For educational purposes (Reichman University – Introduction to Data Science)
| 0 | 0 | [
"region:us"
] | 2025-11-12T14:03:55+00:00 | 2025-11-12T16:49:36+00:00 | 0 |
BrentLab/harbison_2004 | # Harbison 2004
This Dataset is a parsed version of the data provided by Richard A. Young's lab. To cite
this data, please use:
[Harbison CT, Gordon DB, Lee TI, Rinaldi NJ, Macisaac KD, Danford TW, Hannett NM, Tagne
JB, Reynolds DB, Yoo J, et al. 2004. Transcriptional regulatory code of a eukaryotic
genome. Nature 431:
99–104.doi:10.1038/nature02800](https://www.nature.com/articles/nature02800)
This repo provides 1 dataset:
- **harbison_2004**: ChIP-chip transcription factor binding data with environmental
conditions.
### `tfbpapi`
After [installing
tfbpapi](https://github.com/BrentLab/tfbpapi/?tab=readme-ov-file#installation), you can
adapt this [tutorial](https://brentlab.github.io/tfbpapi/tutorials/hfqueryapi_tutorial/)
in order to explore the contents of this repository.
### huggingface_cli/duckdb
The following snippet retrieves and displays the file paths for each configuration of
the "BrentLab/harbison_2004" dataset from the Hugging Face Hub.
```python
from huggingface_hub import ModelCard
from pprint import pprint
card = ModelCard.load("BrentLab/harbison_2004", repo_type="dataset")
# cast to dict
card_dict = card.data.to_dict()
# Get partition information
dataset_paths_dict = {d.get("config_name"): d.get("data_files")[0].get("path") for d in card_dict.get("configs")}
pprint(dataset_paths_dict)
```
If you wish to pull the entire repo, due to its size you may need to use an
[authentication token](https://huggingface.co/docs/hub/en/security-tokens).
If you do not have one, try omitting the token-related code below and see if
it works. Otherwise, create a token and provide it like so:
```python
from huggingface_hub import snapshot_download
import os
repo_id = "BrentLab/harbison_2004"
hf_token = os.getenv("HF_TOKEN")
# Download entire repo to local directory
repo_path = snapshot_download(
repo_id=repo_id,
repo_type="dataset",
token=hf_token
)
print(f"\n✓ Repository downloaded to: {repo_path}")
# Construct path to the harbison_2004 parquet file
parquet_path = os.path.join(repo_path, "harbison_2004.parquet")
print(f"✓ Parquet file at: {parquet_path}")
```
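With the file downloaded, a duckdb preview is one way to inspect it; this minimal sketch only assumes that `parquet_path` from the snippet above points at a valid parquet file:
```python
import duckdb
# Preview the first rows of the parquet file.
conn = duckdb.connect()
preview = conn.execute("SELECT * FROM read_parquet(?) LIMIT 10", [parquet_path]).df()
print(preview)
```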
| # Harbison 2004
This Dataset is a parsed version of the data provided by Richard A. Young's lab. To cite
this data, please use:
[Harbison CT, Gordon DB, Lee TI, Rinaldi NJ, Macisaac KD, Danford TW, Hannett NM, Tagne
JB, Reynolds DB, Yoo J, et al. 2004. Transcriptional regulatory code of a eukaryotic
genome. Nature 431:
99–104.doi:10.1038/nature02800](https://www.nature.com/articles/nature02800)
This repo provides 1 dataset:
- **harbison_2004**: ChIP-chip transcription factor binding data with environmental
conditions.
### `tfbpapi`
After [installing
tfbpapi](https://github.com/BrentLab/tfbpapi/?tab=readme-ov-file#installation), you can
adapt this [tutorial](https://brentlab.github.io/tfbpapi/tutorials/hfqueryapi_tutorial/)
in order to explore the contents of this repository.
### huggingface_cli/duckdb
The following snippet retrieves and displays the file paths for each configuration of
the "BrentLab/harbison_2004" dataset from the Hugging Face Hub.
```python
from huggingface_hub import ModelCard
from pprint import pprint
card = ModelCard.load("BrentLab/harbison_2004", repo_type="dataset")
# cast to dict
card_dict = card.data.to_dict()
# Get partition information
dataset_paths_dict = {d.get("config_name"): d.get("data_files")[0].get("path") for d in card_dict.get("configs")}
pprint(dataset_paths_dict)
```
If you wish to pull the entire repo, due to its size you may need to use an
[authentication token](https://huggingface.co/docs/hub/en/security-tokens).
If you do not have one, try omitting the token-related code below and see if
it works. Otherwise, create a token and provide it like so:
```python
from huggingface_hub import snapshot_download
import os
repo_id = "BrentLab/harbison_2004"
hf_token = os.getenv("HF_TOKEN")
# Download entire repo to local directory
repo_path = snapshot_download(
repo_id=repo_id,
repo_type="dataset",
token=hf_token
)
print(f"\n✓ Repository downloaded to: {repo_path}")
# Construct path to the harbison_2004 parquet file
parquet_path = os.path.join(repo_path, "harbison_2004.parquet")
print(f"✓ Parquet file at: {parquet_path}")
```
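With the file downloaded, a duckdb preview is one way to inspect it; this minimal sketch only assumes that `parquet_path` from the snippet above points at a valid parquet file:
```python
import duckdb
# Preview the first rows of the parquet file.
conn = duckdb.connect()
preview = conn.execute("SELECT * FROM read_parquet(?) LIMIT 10", [parquet_path]).df()
print(preview)
```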
| 26 | 0 | [
"language:en",
"license:mit",
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"genomics",
"yeast",
"transcription",
"binding"
] | 2025-08-22T14:52:18+00:00 | 2025-11-12T16:46:50+00:00 | 0 |
ontocord/MixtureVitae-211BT-decontaminated | # MixtureVitae-211BT (Decontaminated)
This repository mirrors the file structure of `ontocord/MixtureVitae-211BT` under the `data/` tree,
but each file has been **decontaminated** offline.
- Source repo: `ontocord/MixtureVitae-211BT`
- Method: offline decontamination pipeline
| # MixtureVitae-211BT (Decontaminated)
This repository mirrors the file structure of `ontocord/MixtureVitae-211BT` under the `data/` tree,
but each file has been **decontaminated** offline.
- Source repo: `ontocord/MixtureVitae-211BT`
- Method: offline decontamination pipeline
| 50 | 0 | [
"size_categories:10M<n<100M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | 2025-11-12T00:51:29+00:00 | 2025-11-12T16:45:35+00:00 | 0 |
westchestercleaning/westchester-cleaning-services-schema |
# Westchester Cleaning Services — Structured Data Feed
This dataset contains the verified public structured data for **Westchester Cleaning Services, LLC**, a certified MBE commercial cleaning company founded in 2015 and based in Tarrytown, NY.
It includes:
- `org.jsonld` — Organization profile
- `faq.jsonld` — FAQPage
- `sitemap.xml` and `robots.txt` — crawl directives
- Canonical source: [https://westchestercleaning.github.io/wcs-schema/](https://westchestercleaning.github.io/wcs-schema/)
**License:** CC-BY 4.0
**Contact:** info@westchestercleanings.com
# WCS Schema Push v3.3 — Clean Reset (Single Sitemap)
**What changed**
- One simple `sitemap.xml` (no index file).
- `robots.txt` points directly to `/sitemap.xml`.
- Valid `<lastmod>` values for all URLs.
**How to clean replace**
1. Delete old sitemap files in your repo root:
- robots.txt, sitemap.xml, sitemap-index.xml, sitemap_index.xml, sitemap_main.xml
2. Upload the contents of this zip (drag/drop to repo root) and commit.
3. Verify in browser:
- https://westchestercleaning.github.io/wcs-schema/robots.txt
- https://westchestercleaning.github.io/wcs-schema/sitemap.xml
4. In Google Search Console (property: https://westchestercleaning.github.io/wcs-schema/), submit `sitemap.xml` under Indexing → Sitemaps.
5. If you previously submitted other sitemap names, remove them and keep only `sitemap.xml`.
|
# Westchester Cleaning Services — Structured Data Feed
This dataset contains the verified public structured data for **Westchester Cleaning Services, LLC**, a certified MBE commercial cleaning company founded in 2015 and based in Tarrytown, NY.
It includes:
- `org.jsonld` — Organization profile
- `faq.jsonld` — FAQPage
- `sitemap.xml` and `robots.txt` — crawl directives
- Canonical source: [https://westchestercleaning.github.io/wcs-schema/](https://westchestercleaning.github.io/wcs-schema/)
**License:** CC-BY 4.0
**Contact:** info@westchestercleanings.com
# WCS Schema Push v3.3 — Clean Reset (Single Sitemap)
**What changed**
- One simple `sitemap.xml` (no index file).
- `robots.txt` points directly to `/sitemap.xml`.
- Valid `<lastmod>` values for all URLs.
**How to clean replace**
1. Delete old sitemap files in your repo root:
- robots.txt, sitemap.xml, sitemap-index.xml, sitemap_index.xml, sitemap_main.xml
2. Upload the contents of this zip (drag/drop to repo root) and commit.
3. Verify in browser:
- https://westchestercleaning.github.io/wcs-schema/robots.txt
- https://westchestercleaning.github.io/wcs-schema/sitemap.xml
4. In Google Search Console (property: https://westchestercleaning.github.io/wcs-schema/), submit `sitemap.xml` under Indexing → Sitemaps.
5. If you previously submitted other sitemap names, remove them and keep only `sitemap.xml`.
| 0 | 0 | [
"task_categories:other",
"language:en",
"license:cc-by-4.0",
"size_categories:n<1K",
"region:us",
"schema",
"organization",
"westchester",
"cleaning",
"jsonld",
"open-data"
] | 2025-11-12T16:39:39+00:00 | 2025-11-12T16:44:13+00:00 | 0 |
BrentLab/barkai_compendium |
# Barkai Compendium
This collects the ChEC-seq data from the following GEO series:
- [GSE179430](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE179430)
- [GSE209631](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE209631)
- [GSE222268](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE222268)
The metadata for each is parsed out from the SraRunTable, or in the case of GSE222268,
the NCBI series matrix file (the genotype isn't in the SraRunTable).
The [Barkai lab](https://barkailab.wixsite.com/barkai) refers to this set as their
binding compendium.
The genotypes for GSE222268 are not clear enough to me currently to parse well.
This repo provides 4 datasets:
- **GSE178430_metadata**: Metadata for GSE178430.
- **GSE209631_metadata**: ChEC-seq experiment metadata for transcription factor variant
studies.
- **GSE222268_metadata**: General experiment metadata for genomic studies.
- **genome_map**: Genomic coverage data with pileup counts at specific positions.
## Usage
The python package `tfbpapi` provides an interface to this data which eases
examining the datasets, field definitions and other operations. You may also
download the parquet datasets directly from hugging face by clicking on
"Files and Versions", or by using the huggingface_cli and duckdb directly.
In both cases, this provides a method of retrieving dataset and field definitions.
### `tfbpapi`
After [installing
tfbpapi](https://github.com/BrentLab/tfbpapi/?tab=readme-ov-file#installation), you can
adapt this [tutorial](https://brentlab.github.io/tfbpapi/tutorials/hfqueryapi_tutorial/)
in order to explore the contents of this repository.
### huggingface_cli/duckdb
The following snippet retrieves and displays the file paths for each configuration of
the "BrentLab/barkai_compendium" dataset from the Hugging Face Hub.
```python
from huggingface_hub import ModelCard
from pprint import pprint
card = ModelCard.load("BrentLab/barkai_compendium", repo_type="dataset")
# cast to dict
card_dict = card.data.to_dict()
# Get partition information
dataset_paths_dict = {d.get("config_name"): d.get("data_files")[0].get("path") for d in card_dict.get("configs")}
pprint(dataset_paths_dict)
```
The entire repository is large, so it may be preferable to retrieve only specific files or
partitions. You can use the metadata files to choose which files to pull.
```python
from huggingface_hub import snapshot_download
import duckdb
import os
# Download only the partitioned dataset directory
repo_path = snapshot_download(
repo_id="BrentLab/barkai_compendium",
repo_type="dataset",
allow_patterns="*metadata.parquet"
)
dataset_path = os.path.join(repo_path, "GSE178430_metadata.parquet")
conn = duckdb.connect()
meta_res = conn.execute("SELECT * FROM read_parquet(?) LIMIT 10", [dataset_path]).df()
print(meta_res)
```
We might choose to take a look at the file with accession `GSM5417602`:
```python
# Download only the partitioned dataset directory
repo_path = snapshot_download(
repo_id="BrentLab/barkai_compendium",
repo_type="dataset",
allow_patterns="genome_map/series=GSE179430/accession=GSM5417602/*parquet" # Only the parquet data
)
# Query the specific partition
dataset_path = os.path.join(repo_path, "genome_map")
result = conn.execute("SELECT * FROM read_parquet(?) LIMIT 10",
[f"{dataset_path}/**/*.parquet"]).df()
print(result)
```
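Before aggregating over `genome_map`, it can help to inspect the partition's schema; this is a minimal sketch reusing `conn` and `dataset_path` from above, without assuming any column names:
```python
# List the column names and types of the downloaded partition.
schema = conn.execute(
    f"DESCRIBE SELECT * FROM read_parquet('{dataset_path}/**/*.parquet')"
).df()
print(schema)
```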
If you wish to pull the entire repo, due to its size you may need to use an
[authentication token](https://huggingface.co/docs/hub/en/security-tokens).
If you do not have one, try omitting the token-related code below and see if
it works. Otherwise, create a token and provide it like so:
```python
repo_id = "BrentLab/barkai_compendium"
hf_token = os.getenv("HF_TOKEN")
# Download entire repo to local directory
repo_path = snapshot_download(
repo_id=repo_id,
repo_type="dataset",
token=hf_token
)
print(f"\n✓ Repository downloaded to: {repo_path}")
# Construct path to the genome_map parquet file
parquet_path = os.path.join(repo_path, "genome_map.parquet")
print(f"✓ Parquet file at: {parquet_path}")
``` |
# Barkai Compendium
This collects the ChEC-seq data from the following GEO series:
- [GSE179430](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE179430)
- [GSE209631](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE209631)
- [GSE222268](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE222268)
The metadata for each is parsed out from the SraRunTable, or in the case of GSE222268,
the NCBI series matrix file (the genotype isn't in the SraRunTable).
The [Barkai lab](https://barkailab.wixsite.com/barkai) refers to this set as their
binding compendium.
The genotypes for GSE222268 are not clear enough to me currently to parse well.
This repo provides 4 datasets:
- **GSE178430_metadata**: Metadata for GSE178430.
- **GSE209631_metadata**: ChEC-seq experiment metadata for transcription factor variant
studies.
- **GSE222268_metadata**: General experiment metadata for genomic studies.
- **genome_map**: Genomic coverage data with pileup counts at specific positions.
## Usage
The python package `tfbpapi` provides an interface to this data which eases
examining the datasets, field definitions and other operations. You may also
download the parquet datasets directly from hugging face by clicking on
"Files and Versions", or by using the huggingface_cli and duckdb directly.
In both cases, this provides a method of retrieving dataset and field definitions.
### `tfbpapi`
After [installing
tfbpapi](https://github.com/BrentLab/tfbpapi/?tab=readme-ov-file#installation), you can
adapt this [tutorial](https://brentlab.github.io/tfbpapi/tutorials/hfqueryapi_tutorial/)
in order to explore the contents of this repository.
### huggingface_cli/duckdb
The following snippet retrieves and displays the file paths for each configuration of
the "BrentLab/barkai_compendium" dataset from the Hugging Face Hub.
```python
from huggingface_hub import ModelCard
from pprint import pprint
card = ModelCard.load("BrentLab/barkai_compendium", repo_type="dataset")
# cast to dict
card_dict = card.data.to_dict()
# Get partition information
dataset_paths_dict = {d.get("config_name"): d.get("data_files")[0].get("path") for d in card_dict.get("configs")}
pprint(dataset_paths_dict)
```
The entire repository is large, so it may be preferable to retrieve only specific files or
partitions. You can use the metadata files to choose which files to pull.
```python
from huggingface_hub import snapshot_download
import duckdb
import os
# Download only the partitioned dataset directory
repo_path = snapshot_download(
repo_id="BrentLab/barkai_compendium",
repo_type="dataset",
allow_patterns="*metadata.parquet"
)
dataset_path = os.path.join(repo_path, "GSE178430_metadata.parquet")
conn = duckdb.connect()
meta_res = conn.execute("SELECT * FROM read_parquet(?) LIMIT 10", [dataset_path]).df()
print(meta_res)
```
We might choose to take a look at the file with accession `GSM5417602`:
```python
# Download only the partitioned dataset directory
repo_path = snapshot_download(
repo_id="BrentLab/barkai_compendium",
repo_type="dataset",
allow_patterns="genome_map/series=GSE179430/accession=GSM5417602/*parquet" # Only the parquet data
)
# Query the specific partition
dataset_path = os.path.join(repo_path, "genome_map")
result = conn.execute("SELECT * FROM read_parquet(?) LIMIT 10",
[f"{dataset_path}/**/*.parquet"]).df()
print(result)
```
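Before aggregating over `genome_map`, it can help to inspect the partition's schema; this is a minimal sketch reusing `conn` and `dataset_path` from above, without assuming any column names:
```python
# List the column names and types of the downloaded partition.
schema = conn.execute(
    f"DESCRIBE SELECT * FROM read_parquet('{dataset_path}/**/*.parquet')"
).df()
print(schema)
```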
If you wish to pull the entire repo, due to its size you may need to use an
[authentication token](https://huggingface.co/docs/hub/en/security-tokens).
If you do not have one, try omitting the token-related code below and see if
it works. Otherwise, create a token and provide it like so:
```python
repo_id = "BrentLab/barkai_compendium"
hf_token = os.getenv("HF_TOKEN")
# Download entire repo to local directory
repo_path = snapshot_download(
repo_id=repo_id,
repo_type="dataset",
token=hf_token
)
print(f"\n✓ Repository downloaded to: {repo_path}")
# Construct path to the genome_map parquet file
parquet_path = os.path.join(repo_path, "genome_map.parquet")
print(f"✓ Parquet file at: {parquet_path}")
``` | 512 | 0 | [
"language:en",
"license:mit",
"size_categories:1B<n<10B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"transcription-factor",
"binding",
"chec-seq",
"genomics",
"biology"
] | 2025-08-28T09:41:48+00:00 | 2025-11-12T16:45:38+00:00 | 0 |
zhangwei217245/hf_model_metadata_all | # hf_models
| # hf_models
| 0 | 0 | [
"region:us"
] | 2025-11-12T15:16:05+00:00 | 2025-11-12T16:37:25+00:00 | 0 |
diffusers/community-pipelines-mirror | # Community Pipeline Examples
> **For more information about community pipelines, please have a look at [this issue](https://github.com/huggingface/diffusers/issues/841).**
**Community pipeline** examples consist of pipelines that have been added by the community.
Please have a look at the following tables to get an overview of all community examples. Click on the **Code Example** to get a copy-and-paste ready code example that you can try out.
If a community pipeline doesn't work as expected, please open an issue and ping the author on it.
Please also check out our [Community Scripts](https://github.com/huggingface/diffusers/blob/main/examples/community/README_community_scripts.md) examples for tips and tricks that you can use with diffusers without having to run a community pipeline.
| Example | Description | Code Example | Colab | Author |
|:--------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------:|
|Differential Diffusion|[Differential Diffusion](https://github.com/exx8/differential-diffusion) modifies an image according to a text prompt, and according to a map that specifies the amount of change in each region.|[Differential Diffusion](#differential-diffusion)|[](https://huggingface.co/spaces/exx8/differential-diffusion) [](https://colab.research.google.com/github/exx8/differential-diffusion/blob/main/examples/SD2.ipynb)|[Eran Levin](https://github.com/exx8) and [Ohad Fried](https://www.ohadf.com/)|
| HD-Painter | [HD-Painter](https://github.com/Picsart-AI-Research/HD-Painter) enables prompt-faithfull and high resolution (up to 2k) image inpainting upon any diffusion-based image inpainting method. | [HD-Painter](#hd-painter) | [](https://huggingface.co/spaces/PAIR/HD-Painter) | [Manukyan Hayk](https://github.com/haikmanukyan) and [Sargsyan Andranik](https://github.com/AndranikSargsyan) |
| Marigold Monocular Depth Estimation | A universal monocular depth estimator, utilizing Stable Diffusion, delivering sharp predictions in the wild. (See the [project page](https://marigoldmonodepth.github.io) and [full codebase](https://github.com/prs-eth/marigold) for more details.) | [Marigold Depth Estimation](#marigold-depth-estimation) | [](https://huggingface.co/spaces/toshas/marigold) [](https://colab.research.google.com/drive/12G8reD13DdpMie5ZQlaFNo2WCGeNUH-u?usp=sharing) | [Bingxin Ke](https://github.com/markkua) and [Anton Obukhov](https://github.com/toshas) |
| LLM-grounded Diffusion (LMD+) | LMD greatly improves the prompt following ability of text-to-image generation models by introducing an LLM as a front-end prompt parser and layout planner. [Project page.](https://llm-grounded-diffusion.github.io/) [See our full codebase (also with diffusers).](https://github.com/TonyLianLong/LLM-groundedDiffusion) | [LLM-grounded Diffusion (LMD+)](#llm-grounded-diffusion) | [Huggingface Demo](https://huggingface.co/spaces/longlian/llm-grounded-diffusion) [](https://colab.research.google.com/drive/1SXzMSeAB-LJYISb2yrUOdypLz4OYWUKj) | [Long (Tony) Lian](https://tonylian.com/) |
| CLIP Guided Stable Diffusion | Doing CLIP guidance for text to image generation with Stable Diffusion | [CLIP Guided Stable Diffusion](#clip-guided-stable-diffusion) | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/CLIP_Guided_Stable_diffusion_with_diffusers.ipynb) | [Suraj Patil](https://github.com/patil-suraj/) |
| One Step U-Net (Dummy) | Example showcasing of how to use Community Pipelines (see <https://github.com/huggingface/diffusers/issues/841>) | [One Step U-Net](#one-step-unet) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |
| Stable Diffusion Interpolation | Interpolate the latent space of Stable Diffusion between different prompts/seeds | [Stable Diffusion Interpolation](#stable-diffusion-interpolation) | - | [Nate Raw](https://github.com/nateraw/) |
| Stable Diffusion Mega | **One** Stable Diffusion Pipeline with all functionalities of [Text2Image](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py), [Image2Image](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py) and [Inpainting](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py) | [Stable Diffusion Mega](#stable-diffusion-mega) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |
| Long Prompt Weighting Stable Diffusion | **One** Stable Diffusion Pipeline without tokens length limit, and support parsing weighting in prompt. | [Long Prompt Weighting Stable Diffusion](#long-prompt-weighting-stable-diffusion) | - | [SkyTNT](https://github.com/SkyTNT) |
| Speech to Image | Using automatic-speech-recognition to transcribe text and Stable Diffusion to generate images | [Speech to Image](#speech-to-image) | - | [Mikail Duzenli](https://github.com/MikailINTech)
| Wild Card Stable Diffusion | Stable Diffusion Pipeline that supports prompts that contain wildcard terms (indicated by surrounding double underscores), with values instantiated randomly from a corresponding txt file or a dictionary of possible values | [Wildcard Stable Diffusion](#wildcard-stable-diffusion) | - | [Shyam Sudhakaran](https://github.com/shyamsn97) |
| [Composable Stable Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/) | Stable Diffusion Pipeline that supports prompts that contain "|" in prompts (as an AND condition) and weights (separated by "|" as well) to positively / negatively weight prompts. | [Composable Stable Diffusion](#composable-stable-diffusion) | - | [Mark Rich](https://github.com/MarkRich) |
| Seed Resizing Stable Diffusion | Stable Diffusion Pipeline that supports resizing an image and retaining the concepts of the 512 by 512 generation. | [Seed Resizing](#seed-resizing) | - | [Mark Rich](https://github.com/MarkRich) |
| Imagic Stable Diffusion | Stable Diffusion Pipeline that enables writing a text prompt to edit an existing image | [Imagic Stable Diffusion](#imagic-stable-diffusion) | - | [Mark Rich](https://github.com/MarkRich) |
| Multilingual Stable Diffusion | Stable Diffusion Pipeline that supports prompts in 50 different languages. | [Multilingual Stable Diffusion](#multilingual-stable-diffusion-pipeline) | - | [Juan Carlos Piñeros](https://github.com/juancopi81) |
| GlueGen Stable Diffusion | Stable Diffusion Pipeline that supports prompts in different languages using GlueGen adapter. | [GlueGen Stable Diffusion](#gluegen-stable-diffusion-pipeline) | - | [Phạm Hồng Vinh](https://github.com/rootonchair) |
| Image to Image Inpainting Stable Diffusion | Stable Diffusion Pipeline that enables the overlaying of two images and subsequent inpainting | [Image to Image Inpainting Stable Diffusion](#image-to-image-inpainting-stable-diffusion) | - | [Alex McKinney](https://github.com/vvvm23) |
| Text Based Inpainting Stable Diffusion | Stable Diffusion Inpainting Pipeline that enables passing a text prompt to generate the mask for inpainting | [Text Based Inpainting Stable Diffusion](#image-to-image-inpainting-stable-diffusion) | - | [Dhruv Karan](https://github.com/unography) |
| Bit Diffusion | Diffusion on discrete data | [Bit Diffusion](#bit-diffusion) | - | [Stuti R.](https://github.com/kingstut) |
| K-Diffusion Stable Diffusion | Run Stable Diffusion with any of [K-Diffusion's samplers](https://github.com/crowsonkb/k-diffusion/blob/master/k_diffusion/sampling.py) | [Stable Diffusion with K Diffusion](#stable-diffusion-with-k-diffusion) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |
| Checkpoint Merger Pipeline | Diffusion Pipeline that enables merging of saved model checkpoints | [Checkpoint Merger Pipeline](#checkpoint-merger-pipeline) | - | [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
| Stable Diffusion v1.1-1.4 Comparison | Run all 4 model checkpoints for Stable Diffusion and compare their results together | [Stable Diffusion Comparison](#stable-diffusion-comparisons) | - | [Suvaditya Mukherjee](https://github.com/suvadityamuk) |
| MagicMix | Diffusion Pipeline for semantic mixing of an image and a text prompt | [MagicMix](#magic-mix) | - | [Partho Das](https://github.com/daspartho) |
| Stable UnCLIP | Diffusion Pipeline for combining prior model (generate clip image embedding from text, UnCLIPPipeline `"kakaobrain/karlo-v1-alpha"`) and decoder pipeline (decode clip image embedding to image, StableDiffusionImageVariationPipeline `"lambdalabs/sd-image-variations-diffusers"` ). | [Stable UnCLIP](#stable-unclip) | - | [Ray Wang](https://wrong.wang) |
| UnCLIP Text Interpolation Pipeline | Diffusion Pipeline that allows passing two prompts and produces images while interpolating between the text-embeddings of the two prompts | [UnCLIP Text Interpolation Pipeline](#unclip-text-interpolation-pipeline) | - | [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
| UnCLIP Image Interpolation Pipeline | Diffusion Pipeline that allows passing two images/image_embeddings and produces images while interpolating between their image-embeddings | [UnCLIP Image Interpolation Pipeline](#unclip-image-interpolation-pipeline) | - | [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
| DDIM Noise Comparative Analysis Pipeline | Investigating how the diffusion models learn visual concepts from each noise level (which is a contribution of [P2 weighting (CVPR 2022)](https://arxiv.org/abs/2204.00227)) | [DDIM Noise Comparative Analysis Pipeline](#ddim-noise-comparative-analysis-pipeline) | - | [Aengus (Duc-Anh)](https://github.com/aengusng8) |
| CLIP Guided Img2Img Stable Diffusion Pipeline | Doing CLIP guidance for image to image generation with Stable Diffusion | [CLIP Guided Img2Img Stable Diffusion](#clip-guided-img2img-stable-diffusion) | - | [Nipun Jindal](https://github.com/nipunjindal/) |
| TensorRT Stable Diffusion Text to Image Pipeline | Accelerates the Stable Diffusion Text2Image Pipeline using TensorRT | [TensorRT Stable Diffusion Text to Image Pipeline](#tensorrt-text2image-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |
| EDICT Image Editing Pipeline | Diffusion pipeline for text-guided image editing | [EDICT Image Editing Pipeline](#edict-image-editing-pipeline) | - | [Joqsan Azocar](https://github.com/Joqsan) |
| Stable Diffusion RePaint | Stable Diffusion pipeline using [RePaint](https://arxiv.org/abs/2201.09865) for inpainting. | [Stable Diffusion RePaint](#stable-diffusion-repaint) | - | [Markus Pobitzer](https://github.com/Markus-Pobitzer) |
| TensorRT Stable Diffusion Image to Image Pipeline | Accelerates the Stable Diffusion Image2Image Pipeline using TensorRT | [TensorRT Stable Diffusion Image to Image Pipeline](#tensorrt-image2image-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |
| Stable Diffusion IPEX Pipeline | Accelerate Stable Diffusion inference pipeline with BF16/FP32 precision on Intel Xeon CPUs with [IPEX](https://github.com/intel/intel-extension-for-pytorch) | [Stable Diffusion on IPEX](#stable-diffusion-on-ipex) | - | [Yingjie Han](https://github.com/yingjie-han/) |
| CLIP Guided Images Mixing Stable Diffusion Pipeline | Combine images using ordinary diffusion models. | [CLIP Guided Images Mixing Using Stable Diffusion](#clip-guided-images-mixing-with-stable-diffusion) | - | [Karachev Denis](https://github.com/TheDenk) |
| TensorRT Stable Diffusion Inpainting Pipeline | Accelerates the Stable Diffusion Inpainting Pipeline using TensorRT | [TensorRT Stable Diffusion Inpainting Pipeline](#tensorrt-inpainting-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |
| IADB Pipeline | Implementation of [Iterative α-(de)Blending: a Minimalist Deterministic Diffusion Model](https://arxiv.org/abs/2305.03486) | [IADB Pipeline](#iadb-pipeline) | - | [Thomas Chambon](https://github.com/tchambon)
| Zero1to3 Pipeline | Implementation of [Zero-1-to-3: Zero-shot One Image to 3D Object](https://arxiv.org/abs/2303.11328) | [Zero1to3 Pipeline](#zero1to3-pipeline) | - | [Xin Kong](https://github.com/kxhit) |
| Stable Diffusion XL Long Weighted Prompt Pipeline | A pipeline support unlimited length of prompt and negative prompt, use A1111 style of prompt weighting | [Stable Diffusion XL Long Weighted Prompt Pipeline](#stable-diffusion-xl-long-weighted-prompt-pipeline) | [](https://colab.research.google.com/drive/1LsqilswLR40XLLcp6XFOl5nKb_wOe26W?usp=sharing) | [Andrew Zhu](https://xhinker.medium.com/) |
| FABRIC - Stable Diffusion with feedback Pipeline | pipeline supports feedback from liked and disliked images | [Stable Diffusion Fabric Pipeline](#stable-diffusion-fabric-pipeline) | - | [Shauray Singh](https://shauray8.github.io/about_shauray/) |
| sketch inpaint - Inpainting with non-inpaint Stable Diffusion | sketch inpaint much like in automatic1111 | [Masked Im2Im Stable Diffusion Pipeline](#stable-diffusion-masked-im2im) | - | [Anatoly Belikov](https://github.com/noskill) |
| prompt-to-prompt | change parts of a prompt and retain image structure (see [paper page](https://prompt-to-prompt.github.io/)) | [Prompt2Prompt Pipeline](#prompt2prompt-pipeline) | - | [Umer H. Adil](https://twitter.com/UmerHAdil) |
| Latent Consistency Pipeline | Implementation of [Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference](https://arxiv.org/abs/2310.04378) | [Latent Consistency Pipeline](#latent-consistency-pipeline) | - | [Simian Luo](https://github.com/luosiallen) |
| Latent Consistency Img2img Pipeline | Img2img pipeline for Latent Consistency Models | [Latent Consistency Img2Img Pipeline](#latent-consistency-img2img-pipeline) | - | [Logan Zoellner](https://github.com/nagolinc) |
| Latent Consistency Interpolation Pipeline | Interpolate the latent space of Latent Consistency Models with multiple prompts | [Latent Consistency Interpolation Pipeline](#latent-consistency-interpolation-pipeline) | [](https://colab.research.google.com/drive/1pK3NrLWJSiJsBynLns1K1-IDTW9zbPvl?usp=sharing) | [Aryan V S](https://github.com/a-r-r-o-w) |
| SDE Drag Pipeline | The pipeline supports drag editing of images using stochastic differential equations | [SDE Drag Pipeline](#sde-drag-pipeline) | - | [NieShen](https://github.com/NieShenRuc) [Fengqi Zhu](https://github.com/Monohydroxides) |
| Regional Prompting Pipeline | Assign multiple prompts for different regions | [Regional Prompting Pipeline](#regional-prompting-pipeline) | - | [hako-mikan](https://github.com/hako-mikan) |
| LDM3D-sr (LDM3D upscaler) | Upscale low resolution RGB and depth inputs to high resolution | [StableDiffusionUpscaleLDM3D Pipeline](https://github.com/estelleafl/diffusers/tree/ldm3d_upscaler_community/examples/community#stablediffusionupscaleldm3d-pipeline) | - | [Estelle Aflalo](https://github.com/estelleafl) |
| AnimateDiff ControlNet Pipeline | Combines AnimateDiff with precise motion control using ControlNets | [AnimateDiff ControlNet Pipeline](#animatediff-controlnet-pipeline) | [](https://colab.research.google.com/drive/1SKboYeGjEQmQPWoFC0aLYpBlYdHXkvAu?usp=sharing) | [Aryan V S](https://github.com/a-r-r-o-w) and [Edoardo Botta](https://github.com/EdoardoBotta) |
| DemoFusion Pipeline | Implementation of [DemoFusion: Democratising High-Resolution Image Generation With No $$$](https://arxiv.org/abs/2311.16973) | [DemoFusion Pipeline](#demofusion) | - | [Ruoyi Du](https://github.com/RuoyiDu) |
| Instaflow Pipeline | Implementation of [InstaFlow! One-Step Stable Diffusion with Rectified Flow](https://arxiv.org/abs/2309.06380) | [Instaflow Pipeline](#instaflow-pipeline) | - | [Ayush Mangal](https://github.com/ayushtues) |
| Null-Text Inversion Pipeline | Implement [Null-text Inversion for Editing Real Images using Guided Diffusion Models](https://arxiv.org/abs/2211.09794) as a pipeline. | [Null-Text Inversion](https://github.com/google/prompt-to-prompt/) | - | [Junsheng Luan](https://github.com/Junsheng121) |
| Rerender A Video Pipeline | Implementation of [[SIGGRAPH Asia 2023] Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation](https://arxiv.org/abs/2306.07954) | [Rerender A Video Pipeline](#rerender-a-video) | - | [Yifan Zhou](https://github.com/SingleZombie) |
| StyleAligned Pipeline | Implementation of [Style Aligned Image Generation via Shared Attention](https://arxiv.org/abs/2312.02133) | [StyleAligned Pipeline](#stylealigned-pipeline) | [](https://drive.google.com/file/d/15X2E0jFPTajUIjS0FzX50OaHsCbP2lQ0/view?usp=sharing) | [Aryan V S](https://github.com/a-r-r-o-w) |
| AnimateDiff Image-To-Video Pipeline | Experimental Image-To-Video support for AnimateDiff (open to improvements) | [AnimateDiff Image To Video Pipeline](#animatediff-image-to-video-pipeline) | [](https://drive.google.com/file/d/1TvzCDPHhfFtdcJZe4RLloAwyoLKuttWK/view?usp=sharing) | [Aryan V S](https://github.com/a-r-r-o-w) |
| IP Adapter FaceID Stable Diffusion | Stable Diffusion Pipeline that supports IP Adapter Face ID | [IP Adapter Face ID](#ip-adapter-face-id) | - | [Fabio Rigano](https://github.com/fabiorigano) |
| InstantID Pipeline | Stable Diffusion XL Pipeline that supports InstantID | [InstantID Pipeline](#instantid-pipeline) | [](https://huggingface.co/spaces/InstantX/InstantID) | [Haofan Wang](https://github.com/haofanwang) |
| UFOGen Scheduler | Scheduler for UFOGen Model (compatible with Stable Diffusion pipelines) | [UFOGen Scheduler](#ufogen-scheduler) | - | [dg845](https://github.com/dg845) |
| Stable Diffusion XL IPEX Pipeline | Accelerate Stable Diffusion XL inference pipeline with BF16/FP32 precision on Intel Xeon CPUs with [IPEX](https://github.com/intel/intel-extension-for-pytorch) | [Stable Diffusion XL on IPEX](#stable-diffusion-xl-on-ipex) | - | [Dan Li](https://github.com/ustcuna/) |
| Stable Diffusion BoxDiff Pipeline | Training-free controlled generation with bounding boxes using [BoxDiff](https://github.com/showlab/BoxDiff) | [Stable Diffusion BoxDiff Pipeline](#stable-diffusion-boxdiff) | - | [Jingyang Zhang](https://github.com/zjysteven/) |
| FRESCO V2V Pipeline | Implementation of [[CVPR 2024] FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation](https://arxiv.org/abs/2403.12962) | [FRESCO V2V Pipeline](#fresco) | - | [Yifan Zhou](https://github.com/SingleZombie) |
To load a custom pipeline, pass the `custom_pipeline` argument to `DiffusionPipeline`, set to the name of one of the files in `diffusers/examples/community`. Feel free to send a PR with your own pipelines, we will merge them quickly.
```py
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", custom_pipeline="filename_in_the_community_folder")
```
## Example usages
### Differential Diffusion
**Eran Levin, Ohad Fried**
**Tel Aviv University, Reichman University**
Diffusion models have revolutionized image generation and editing, producing state-of-the-art results in conditioned and unconditioned image synthesis. While current techniques enable user control over the degree of change in an image edit, the controllability is limited to global changes over an entire edited region. This paper introduces a novel framework that enables customization of the amount of change per pixel or per image region. Our framework can be integrated into any existing diffusion model, enhancing it with this capability. Such granular control on the quantity of change opens up a diverse array of new editing capabilities, such as control of the extent to which individual objects are modified, or the ability to introduce gradual spatial changes. Furthermore, we showcase the framework's effectiveness in soft-inpainting---the completion of portions of an image while subtly adjusting the surrounding areas to ensure seamless integration. Additionally, we introduce a new tool for exploring the effects of different change quantities. Our framework operates solely during inference, requiring no model training or fine-tuning. We demonstrate our method with the current open state-of-the-art models, and validate it via both quantitative and qualitative comparisons, and a user study.

You can find additional information about Differential Diffusion in the [paper](https://differential-diffusion.github.io/paper.pdf) or in the [project website](https://differential-diffusion.github.io/).
#### Usage example
```python
import torch
from torchvision import transforms
from diffusers import DPMSolverMultistepScheduler
from diffusers.utils import load_image
from examples.community.pipeline_stable_diffusion_xl_differential_img2img import (
StableDiffusionXLDifferentialImg2ImgPipeline,
)
pipeline = StableDiffusionXLDifferentialImg2ImgPipeline.from_pretrained(
"SG161222/RealVisXL_V4.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config, use_karras_sigmas=True)
def preprocess_image(image):
    # Convert to RGB, center-crop to multiples of 64 (required by the VAE),
    # and scale pixel values from [0, 1] to [-1, 1].
    image = image.convert("RGB")
    image = transforms.CenterCrop((image.size[1] // 64 * 64, image.size[0] // 64 * 64))(image)
    image = transforms.ToTensor()(image)
    image = image * 2 - 1
    image = image.unsqueeze(0).to("cuda")
    return image
def preprocess_map(map):
    # Convert the per-region change map to grayscale and crop it to match the image.
    map = map.convert("L")
    map = transforms.CenterCrop((map.size[1] // 64 * 64, map.size[0] // 64 * 64))(map)
    map = transforms.ToTensor()(map)
    map = map.to("cuda")
    return map
image = preprocess_image(
load_image(
"https://huggingface.co/datasets/OzzyGT/testing-resources/resolve/main/differential/20240329211129_4024911930.png?download=true"
)
)
mask = preprocess_map(
load_image(
"https://huggingface.co/datasets/OzzyGT/testing-resources/resolve/main/differential/gradient_mask.png?download=true"
)
)
prompt = "a green pear"
negative_prompt = "blurry"
image = pipeline(
prompt=prompt,
negative_prompt=negative_prompt,
guidance_scale=7.5,
num_inference_steps=25,
original_image=image,
image=image,
strength=1.0,
map=mask,
).images[0]
image.save("result.png")
```
### HD-Painter
Implementation of [HD-Painter: High-Resolution and Prompt-Faithful Text-Guided Image Inpainting with Diffusion Models](https://arxiv.org/abs/2312.14091).

The abstract from the paper is:
Recent progress in text-guided image inpainting, based on the unprecedented success of text-to-image diffusion models, has led to exceptionally realistic and visually plausible results.
However, there is still significant potential for improvement in current text-to-image inpainting models, particularly in better aligning the inpainted area with user prompts and performing high-resolution inpainting.
Therefore, in this paper we introduce _HD-Painter_, a completely **training-free** approach that **accurately follows prompts** and coherently **scales to high-resolution** image inpainting.
To this end, we design the _Prompt-Aware Introverted Attention (PAIntA)_ layer enhancing self-attention scores by prompt information and resulting in better text alignment generations.
To further improve the prompt coherence we introduce the _Reweighting Attention Score Guidance (RASG)_ mechanism seamlessly integrating a post-hoc sampling strategy into general form of DDIM to prevent out-of-distribution latent shifts.
Moreover, HD-Painter allows extension to larger scales by introducing a specialized super-resolution technique customized for inpainting, enabling the completion of missing regions in images of up to 2K resolution.
Our experiments demonstrate that HD-Painter surpasses existing state-of-the-art approaches qualitatively and quantitatively, achieving an impressive generation accuracy improvement of **61.4** vs **51.9**.
We will make the codes publicly available.
You can find additional information about HD-Painter in the [paper](https://arxiv.org/abs/2312.14091) or the [original codebase](https://github.com/Picsart-AI-Research/HD-Painter).
#### Usage example
```python
import torch
from diffusers import DiffusionPipeline, DDIMScheduler
from diffusers.utils import load_image, make_image_grid
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-2-inpainting",
custom_pipeline="hd_painter"
)
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
prompt = "wooden boat"
init_image = load_image("https://raw.githubusercontent.com/Picsart-AI-Research/HD-Painter/main/__assets__/samples/images/2.jpg")
mask_image = load_image("https://raw.githubusercontent.com/Picsart-AI-Research/HD-Painter/main/__assets__/samples/masks/2.png")
image = pipe(prompt, init_image, mask_image, use_rasg=True, use_painta=True, generator=torch.manual_seed(12345)).images[0]
make_image_grid([init_image, mask_image, image], rows=1, cols=3)
```
### Marigold Depth Estimation
Marigold is a universal monocular depth estimator that delivers accurate and sharp predictions in the wild. Based on Stable Diffusion, it is trained exclusively with synthetic depth data and excels in zero-shot adaptation to real-world imagery. This pipeline is an official implementation of the inference process. More details can be found on our [project page](https://marigoldmonodepth.github.io) and [full codebase](https://github.com/prs-eth/marigold) (also implemented with diffusers).

This depth estimation pipeline processes a single input image through multiple diffusion denoising stages to estimate depth maps. These maps are subsequently merged to produce the final output. Below is an example code snippet, including optional arguments:
```python
import numpy as np
import torch
from PIL import Image
from diffusers import DiffusionPipeline
from diffusers.utils import load_image
# Original DDIM version (higher quality)
pipe = DiffusionPipeline.from_pretrained(
"prs-eth/marigold-v1-0",
custom_pipeline="marigold_depth_estimation"
# torch_dtype=torch.float16, # (optional) Run with half-precision (16-bit float).
# variant="fp16", # (optional) Use with `torch_dtype=torch.float16`, to directly load fp16 checkpoint
)
# (New) LCM version (faster speed)
pipe = DiffusionPipeline.from_pretrained(
"prs-eth/marigold-lcm-v1-0",
custom_pipeline="marigold_depth_estimation"
# torch_dtype=torch.float16, # (optional) Run with half-precision (16-bit float).
# variant="fp16", # (optional) Use with `torch_dtype=torch.float16`, to directly load fp16 checkpoint
)
pipe.to("cuda")
img_path_or_url = "https://share.phys.ethz.ch/~pf/bingkedata/marigold/pipeline_example.jpg"
image: Image.Image = load_image(img_path_or_url)
pipeline_output = pipe(
image, # Input image.
# ----- recommended setting for DDIM version -----
# denoising_steps=10, # (optional) Number of denoising steps of each inference pass. Default: 10.
# ensemble_size=10, # (optional) Number of inference passes in the ensemble. Default: 10.
# ------------------------------------------------
# ----- recommended setting for LCM version ------
# denoising_steps=4,
# ensemble_size=5,
# -------------------------------------------------
# processing_res=768, # (optional) Maximum resolution of processing. If set to 0: will not resize at all. Defaults to 768.
# match_input_res=True, # (optional) Resize depth prediction to match input resolution.
# batch_size=0, # (optional) Inference batch size, no bigger than `num_ensemble`. If set to 0, the script will automatically decide the proper batch size. Defaults to 0.
# seed=2024, # (optional) Random seed can be set to ensure additional reproducibility. Default: None (unseeded). Note: forcing --batch_size 1 helps to increase reproducibility. To ensure full reproducibility, deterministic mode needs to be used.
# color_map="Spectral", # (optional) Colormap used to colorize the depth map. Defaults to "Spectral". Set to `None` to skip colormap generation.
# show_progress_bar=True, # (optional) If true, will show progress bars of the inference progress.
)
depth: np.ndarray = pipeline_output.depth_np # Predicted depth map
depth_colored: Image.Image = pipeline_output.depth_colored # Colorized prediction
# Save as uint16 PNG
depth_uint16 = (depth * 65535.0).astype(np.uint16)
Image.fromarray(depth_uint16).save("./depth_map.png", mode="I;16")
# Save colorized depth map
depth_colored.save("./depth_colored.png")
```
### LLM-grounded Diffusion
LMD and LMD+ greatly improves the prompt understanding ability of text-to-image generation models by introducing an LLM as a front-end prompt parser and layout planner. It improves spatial reasoning, the understanding of negation, attribute binding, generative numeracy, etc. in a unified manner without explicitly aiming for each. LMD is completely training-free (i.e., uses SD model off-the-shelf). LMD+ takes in additional adapters for better control. This is a reproduction of LMD+ model used in our work. [Project page.](https://llm-grounded-diffusion.github.io/) [See our full codebase (also with diffusers).](https://github.com/TonyLianLong/LLM-groundedDiffusion)


This pipeline can be used with an LLM or on its own. We provide a parser that parses LLM outputs to the layouts. You can obtain the prompt to input to the LLM for layout generation [here](https://github.com/TonyLianLong/LLM-groundedDiffusion/blob/main/prompt.py). After feeding the prompt to an LLM (e.g., GPT-4 on ChatGPT website), you can feed the LLM response into our pipeline.
The following code has been tested on 1x RTX 4090, but it should also support GPUs with lower GPU memory.
#### Use this pipeline with an LLM
```python
import torch
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained(
"longlian/lmd_plus",
custom_pipeline="llm_grounded_diffusion",
custom_revision="main",
variant="fp16", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()
# Generate directly from a text prompt and an LLM response
prompt = "a waterfall and a modern high speed train in a beautiful forest with fall foliage"
phrases, boxes, bg_prompt, neg_prompt = pipe.parse_llm_response("""
[('a waterfall', [71, 105, 148, 258]), ('a modern high speed train', [255, 223, 181, 149])]
Background prompt: A beautiful forest with fall foliage
Negative prompt:
""")
images = pipe(
prompt=prompt,
negative_prompt=neg_prompt,
phrases=phrases,
boxes=boxes,
gligen_scheduled_sampling_beta=0.4,
output_type="pil",
num_inference_steps=50,
lmd_guidance_kwargs={}
).images
images[0].save("./lmd_plus_generation.jpg")
```
#### Use this pipeline on its own for layout generation
```python
import torch
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained(
"longlian/lmd_plus",
custom_pipeline="llm_grounded_diffusion",
variant="fp16", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()
# Generate an image described by the prompt and
# insert objects described by text at the region defined by bounding boxes
prompt = "a waterfall and a modern high speed train in a beautiful forest with fall foliage"
boxes = [[0.1387, 0.2051, 0.4277, 0.7090], [0.4980, 0.4355, 0.8516, 0.7266]]
phrases = ["a waterfall", "a modern high speed train"]
images = pipe(
prompt=prompt,
phrases=phrases,
boxes=boxes,
gligen_scheduled_sampling_beta=0.4,
output_type="pil",
num_inference_steps=50,
lmd_guidance_kwargs={}
).images
images[0].save("./lmd_plus_generation.jpg")
```
### CLIP Guided Stable Diffusion
CLIP guided stable diffusion can help to generate more realistic images
by guiding stable diffusion at every denoising step with an additional CLIP model.
The following code requires roughly 12GB of GPU RAM.
```python
from diffusers import DiffusionPipeline
from transformers import CLIPImageProcessor, CLIPModel
import torch
feature_extractor = CLIPImageProcessor.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K")
clip_model = CLIPModel.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16)
guided_pipeline = DiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
custom_pipeline="clip_guided_stable_diffusion",
clip_model=clip_model,
feature_extractor=feature_extractor,
torch_dtype=torch.float16,
)
guided_pipeline.enable_attention_slicing()
guided_pipeline = guided_pipeline.to("cuda")
prompt = "fantasy book cover, full moon, fantasy forest landscape, golden vector elements, fantasy magic, dark light night, intricate, elegant, sharp focus, illustration, highly detailed, digital painting, concept art, matte, art by WLOP and Artgerm and Albert Bierstadt, masterpiece"
generator = torch.Generator(device="cuda").manual_seed(0)
images = []
for i in range(4):
image = guided_pipeline(
prompt,
num_inference_steps=50,
guidance_scale=7.5,
clip_guidance_scale=100,
num_cutouts=4,
use_cutouts=False,
generator=generator,
).images[0]
images.append(image)
# save images locally
for i, img in enumerate(images):
img.save(f"./clip_guided_sd/image_{i}.png")
```
The `images` list contains PIL images that can be saved locally or displayed directly in Google Colab.
Generated images tend to be of higher quality than those produced natively with Stable Diffusion.
### One Step Unet
The dummy "one-step-unet" can be run as follows:
```python
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet")
pipe()
```
**Note**: This community pipeline is not useful as a feature, but rather just serves as an example of how community pipelines can be added (see <https://github.com/huggingface/diffusers/issues/841>).
### Stable Diffusion Interpolation
The following code can be run on a GPU of at least 8GB VRAM and should take approximately 5 minutes.
```python
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4",
revision='fp16',
torch_dtype=torch.float16,
safety_checker=None, # Very important for videos...lots of false positives while interpolating
custom_pipeline="interpolate_stable_diffusion",
).to('cuda')
pipe.enable_attention_slicing()
frame_filepaths = pipe.walk(
prompts=['a dog', 'a cat', 'a horse'],
seeds=[42, 1337, 1234],
num_interpolation_steps=16,
output_dir='./dreams',
batch_size=4,
height=512,
width=512,
guidance_scale=8.5,
num_inference_steps=50,
)
```
The `walk(...)` function returns a list of filepaths to images saved under the folder defined in `output_dir`. You can use these frames to create videos of Stable Diffusion interpolations.
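As a minimal sketch, the saved frames can be stitched into an MP4 with `imageio` (assuming `imageio` and `imageio-ffmpeg` are installed; the output path and fps below are arbitrary choices):
```python
import imageio

# stitch the frames returned by walk() into a video
with imageio.get_writer("./dreams/interpolation.mp4", fps=8) as writer:
    for filepath in frame_filepaths:
        writer.append_data(imageio.imread(filepath))
```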
> **Please have a look at <https://github.com/nateraw/stable-diffusion-videos> for more in-detail information on how to create videos using stable diffusion as well as more feature-complete functionality.**
### Stable Diffusion Mega
The Stable Diffusion Mega Pipeline lets you use the main use cases of the stable diffusion pipeline in a single class.
```python
#!/usr/bin/env python3
from diffusers import DiffusionPipeline
import PIL
import requests
from io import BytesIO
import torch
def download_image(url):
response = requests.get(url)
return PIL.Image.open(BytesIO(response.content)).convert("RGB")
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", custom_pipeline="stable_diffusion_mega", torch_dtype=torch.float16, revision="fp16")
pipe.to("cuda")
pipe.enable_attention_slicing()
### Text-to-Image
images = pipe.text2img("An astronaut riding a horse").images
### Image-to-Image
init_image = download_image("https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg")
prompt = "A fantasy landscape, trending on artstation"
images = pipe.img2img(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images
### Inpainting
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))
prompt = "a cat sitting on a bench"
images = pipe.inpaint(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.75).images
```
As shown above, this one pipeline can run "text-to-image", "image-to-image", and "inpainting" all in a single class.
### Long Prompt Weighting Stable Diffusion
Features of this custom pipeline:
- Input a prompt without the 77-token length limit.
- Includes text2img, img2img, and inpainting pipelines.
- Emphasize/weigh parts of your prompt with parentheses, like so: `a baby deer with (big eyes)`
- De-emphasize parts of your prompt, like so: `a [baby] deer with big eyes`
- Precisely weigh parts of your prompt, like so: `a baby deer with (big eyes:1.3)`
Prompt weighting equivalents (each nested pair of parentheses multiplies the weight by 1.1, and each pair of brackets divides it by 1.1; see the sketch below):
- `a baby deer with` == `(a baby deer with:1.0)`
- `(big eyes)` == `(big eyes:1.1)`
- `((big eyes))` == `(big eyes:1.21)`
- `[big eyes]` == `(big eyes:0.91)`
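As a quick sketch of that arithmetic (illustrative only, not part of the pipeline API):
```python
# Each layer of parentheses multiplies the weight by 1.1;
# each layer of brackets divides it by 1.1 (illustrative only).
def nesting_weight(parens: int = 0, brackets: int = 0) -> float:
    return 1.1 ** parens / 1.1 ** brackets

print(nesting_weight(parens=2))    # ((big eyes)) -> ~1.21
print(nesting_weight(brackets=1))  # [big eyes]   -> ~0.91
```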
You can run this custom pipeline like so:
#### pytorch
```python
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained(
'hakurei/waifu-diffusion',
custom_pipeline="lpw_stable_diffusion",
torch_dtype=torch.float16
)
pipe=pipe.to("cuda")
prompt = "best_quality (1girl:1.3) bow bride brown_hair closed_mouth frilled_bow frilled_hair_tubes frills (full_body:1.3) fox_ear hair_bow hair_tubes happy hood japanese_clothes kimono long_sleeves red_bow smile solo tabi uchikake white_kimono wide_sleeves cherry_blossoms"
neg_prompt = "lowres, bad_anatomy, error_body, error_hair, error_arm, error_hands, bad_hands, error_fingers, bad_fingers, missing_fingers, error_legs, bad_legs, multiple_legs, missing_legs, error_lighting, error_shadow, error_reflection, text, error, extra_digit, fewer_digits, cropped, worst_quality, low_quality, normal_quality, jpeg_artifacts, signature, watermark, username, blurry"
pipe.text2img(prompt, negative_prompt=neg_prompt, width=512,height=512,max_embeddings_multiples=3).images[0]
```
#### onnxruntime
```python
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained(
'CompVis/stable-diffusion-v1-4',
custom_pipeline="lpw_stable_diffusion_onnx",
revision="onnx",
provider="CUDAExecutionProvider"
)
prompt = "a photo of an astronaut riding a horse on mars, best quality"
neg_prompt = "lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, text, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry"
pipe.text2img(prompt,negative_prompt=neg_prompt, width=512, height=512, max_embeddings_multiples=3).images[0]
```
If you see `Token indices sequence length is longer than the specified maximum sequence length for this model ( *** > 77 ) . Running this sequence through the model will result in indexing errors`, do not worry: it is normal, since this pipeline deliberately accepts prompts longer than 77 tokens.
### Speech to Image
The following code can generate an image from an audio sample using pre-trained OpenAI whisper-small and Stable Diffusion.
```python
import torch
import matplotlib.pyplot as plt
from datasets import load_dataset
from diffusers import DiffusionPipeline
from transformers import (
WhisperForConditionalGeneration,
WhisperProcessor,
)
device = "cuda" if torch.cuda.is_available() else "cpu"
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = ds[3]
text = audio_sample["text"].lower()
speech_data = audio_sample["audio"]["array"]
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to(device)
processor = WhisperProcessor.from_pretrained("openai/whisper-small")
diffuser_pipeline = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4",
custom_pipeline="speech_to_image_diffusion",
speech_model=model,
speech_processor=processor,
torch_dtype=torch.float16,
)
diffuser_pipeline.enable_attention_slicing()
diffuser_pipeline = diffuser_pipeline.to(device)
output = diffuser_pipeline(speech_data)
plt.imshow(output.images[0])
```
This example produces the following image:

### Wildcard Stable Diffusion
Following the great examples from <https://github.com/jtkelm2/stable-diffusion-webui-1/blob/master/scripts/wildcards.py> and <https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts#wildcards>, here's a minimal implementation that allows for users to add "wildcards", denoted by `__wildcard__` to prompts that are used as placeholders for randomly sampled values given by either a dictionary or a `.txt` file. For example:
Say we have a prompt:
```
prompt = "__animal__ sitting on a __object__ wearing a __clothing__"
```
We can then define possible values to be sampled for `animal`, `object`, and `clothing`. These can come either from a `.txt` file with the same name as the category, or from a dictionary like `{"animal": ["dog", "cat", "mouse"]}`.
The actual pipeline works just like `StableDiffusionPipeline`, except the `__call__` method takes in:
- `wildcard_files`: list of file paths for wildcard replacement
- `wildcard_option_dict`: dict with a wildcard as key and a list of possible replacements as value
- `num_prompt_samples`: number of prompts to sample, uniformly sampling wildcards
A full example:
create `animal.txt`, with contents like:
```
dog
cat
mouse
```
create `object.txt`, with contents like:
```
chair
sofa
bench
```
```python
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4",
custom_pipeline="wildcard_stable_diffusion",
torch_dtype=torch.float16,
)
prompt = "__animal__ sitting on a __object__ wearing a __clothing__"
out = pipe(
prompt,
wildcard_option_dict={
"clothing":["hat", "shirt", "scarf", "beret"]
},
wildcard_files=["object.txt", "animal.txt"],
num_prompt_samples=1
)
```
### Composable Stable Diffusion
[Composable Stable Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/) proposes conjunction and negation (negative prompts) operators for compositional generation with conditional diffusion models.
```python
import torch as th
import numpy as np
import torchvision.utils as tvu
from diffusers import DiffusionPipeline
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--prompt", type=str, default="mystical trees | A magical pond | dark",
help="use '|' as the delimiter to compose separate sentences.")
parser.add_argument("--steps", type=int, default=50)
parser.add_argument("--scale", type=float, default=7.5)
parser.add_argument("--weights", type=str, default="7.5 | 7.5 | -7.5")
parser.add_argument("--seed", type=int, default=2)
parser.add_argument("--model_path", type=str, default="CompVis/stable-diffusion-v1-4")
parser.add_argument("--num_images", type=int, default=1)
args = parser.parse_args()
has_cuda = th.cuda.is_available()
device = th.device('cpu' if not has_cuda else 'cuda')
prompt = args.prompt
scale = args.scale
steps = args.steps
pipe = DiffusionPipeline.from_pretrained(
args.model_path,
custom_pipeline="composable_stable_diffusion",
).to(device)
pipe.safety_checker = None
images = []
generator = th.Generator("cuda").manual_seed(args.seed)
for i in range(args.num_images):
image = pipe(prompt, guidance_scale=scale, num_inference_steps=steps,
weights=args.weights, generator=generator).images[0]
images.append(th.from_numpy(np.array(image)).permute(2, 0, 1) / 255.)
grid = tvu.make_grid(th.stack(images, dim=0), nrow=4, padding=0)
tvu.save_image(grid, f'{prompt}_{args.weights}' + '.png')
```
### Imagic Stable Diffusion
Allows you to edit an image using stable diffusion.
```python
import requests
from PIL import Image
from io import BytesIO
import torch
import os
from diffusers import DiffusionPipeline, DDIMScheduler
has_cuda = torch.cuda.is_available()
device = torch.device('cpu' if not has_cuda else 'cuda')
pipe = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4",
safety_checker=None,
custom_pipeline="imagic_stable_diffusion",
scheduler = DDIMScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", clip_sample=False, set_alpha_to_one=False)
).to(device)
generator = torch.Generator("cuda").manual_seed(0)
seed = 0
prompt = "A photo of Barack Obama smiling with a big grin"
url = 'https://www.dropbox.com/s/6tlwzr73jd1r9yk/obama.png?dl=1'
response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((512, 512))
res = pipe.train(
prompt,
image=init_image,
generator=generator)
res = pipe(alpha=1, guidance_scale=7.5, num_inference_steps=50)
os.makedirs("imagic", exist_ok=True)
image = res.images[0]
image.save('./imagic/imagic_image_alpha_1.png')
res = pipe(alpha=1.5, guidance_scale=7.5, num_inference_steps=50)
image = res.images[0]
image.save('./imagic/imagic_image_alpha_1_5.png')
res = pipe(alpha=2, guidance_scale=7.5, num_inference_steps=50)
image = res.images[0]
image.save('./imagic/imagic_image_alpha_2.png')
```
### Seed Resizing
Test seed resizing: first generate an image at 512 by 512, then generate an image at 512 by 592 with the same seed using seed resizing, and finally generate a 512 by 592 image with the original Stable Diffusion pipeline.
```python
import torch as th
import numpy as np
from diffusers import DiffusionPipeline
has_cuda = th.cuda.is_available()
device = th.device('cpu' if not has_cuda else 'cuda')
pipe = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4",
custom_pipeline="seed_resize_stable_diffusion"
).to(device)
def dummy(images, **kwargs):
return images, False
pipe.safety_checker = dummy
images = []
th.manual_seed(0)
generator = th.Generator("cuda").manual_seed(0)
seed = 0
prompt = "A painting of a futuristic cop"
width = 512
height = 512
res = pipe(
prompt,
guidance_scale=7.5,
num_inference_steps=50,
height=height,
width=width,
generator=generator)
image = res.images[0]
image.save('./seed_resize/seed_resize_{w}_{h}_image.png'.format(w=width, h=height))
th.manual_seed(0)
generator = th.Generator("cuda").manual_seed(0)
pipe = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4",
custom_pipeline="/home/mark/open_source/diffusers/examples/community/"
).to(device)
width = 512
height = 592
res = pipe(
prompt,
guidance_scale=7.5,
num_inference_steps=50,
height=height,
width=width,
generator=generator)
image = res.images[0]
image.save('./seed_resize/seed_resize_{w}_{h}_image.png'.format(w=width, h=height))
pipe_compare = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4",
custom_pipeline="/home/mark/open_source/diffusers/examples/community/"
).to(device)
res = pipe_compare(
prompt,
guidance_scale=7.5,
num_inference_steps=50,
height=height,
width=width,
generator=generator
)
image = res.images[0]
image.save('./seed_resize/seed_resize_{w}_{h}_image_compare.png'.format(w=width, h=height))
```
### Multilingual Stable Diffusion Pipeline
The following code can generate images from text in different languages using the pre-trained [mBART-50 many-to-one multilingual machine translation model](https://huggingface.co/facebook/mbart-large-50-many-to-one-mmt) and Stable Diffusion.
```python
from PIL import Image
import torch
from diffusers import DiffusionPipeline
from transformers import (
pipeline,
MBart50TokenizerFast,
MBartForConditionalGeneration,
)
device = "cuda" if torch.cuda.is_available() else "cpu"
device_dict = {"cuda": 0, "cpu": -1}
# helper function taken from: https://huggingface.co/blog/stable_diffusion
def image_grid(imgs, rows, cols):
assert len(imgs) == rows*cols
w, h = imgs[0].size
grid = Image.new('RGB', size=(cols*w, rows*h))
grid_w, grid_h = grid.size
for i, img in enumerate(imgs):
grid.paste(img, box=(i%cols*w, i//cols*h))
return grid
# Add language detection pipeline
language_detection_model_ckpt = "papluca/xlm-roberta-base-language-detection"
language_detection_pipeline = pipeline("text-classification",
model=language_detection_model_ckpt,
device=device_dict[device])
# Add model for language translation
trans_tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")
trans_model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-one-mmt").to(device)
diffuser_pipeline = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4",
custom_pipeline="multilingual_stable_diffusion",
detection_pipeline=language_detection_pipeline,
translation_model=trans_model,
translation_tokenizer=trans_tokenizer,
torch_dtype=torch.float16,
)
diffuser_pipeline.enable_attention_slicing()
diffuser_pipeline = diffuser_pipeline.to(device)
prompt = ["a photograph of an astronaut riding a horse",
"Una casa en la playa",
"Ein Hund, der Orange isst",
"Un restaurant parisien"]
output = diffuser_pipeline(prompt)
images = output.images
grid = image_grid(images, rows=2, cols=2)
```
This example produces the following images:

### GlueGen Stable Diffusion Pipeline
GlueGen is a minimal adapter that allows alignment between any encoder (a text encoder for a different language, multilingual RoBERTa, AudioCLIP, etc.) and the CLIP text encoder used in the standard Stable Diffusion model. It enables easy language adaptation of available English Stable Diffusion checkpoints without the need for an image captioning dataset or long training hours.
Make sure you have downloaded `gluenet_French_clip_overnorm_over3_noln.ckpt` for French (pre-trained weights for Chinese, Italian, Japanese, and Spanish are also available, or you can train your own) from [GlueGen's official repo](https://github.com/salesforce/GlueGen/tree/main).
```python
from PIL import Image
import torch
from transformers import AutoModel, AutoTokenizer
from diffusers import DiffusionPipeline
if __name__ == "__main__":
device = "cuda"
lm_model_id = "xlm-roberta-large"
token_max_length = 77
text_encoder = AutoModel.from_pretrained(lm_model_id)
tokenizer = AutoTokenizer.from_pretrained(lm_model_id, model_max_length=token_max_length, use_fast=False)
tensor_norm = torch.Tensor([[43.8203],[28.3668],[27.9345],[28.0084],[28.2958],[28.2576],[28.3373],[28.2695],[28.4097],[28.2790],[28.2825],[28.2807],[28.2775],[28.2708],[28.2682],[28.2624],[28.2589],[28.2611],[28.2616],[28.2639],[28.2613],[28.2566],[28.2615],[28.2665],[28.2799],[28.2885],[28.2852],[28.2863],[28.2780],[28.2818],[28.2764],[28.2532],[28.2412],[28.2336],[28.2514],[28.2734],[28.2763],[28.2977],[28.2971],[28.2948],[28.2818],[28.2676],[28.2831],[28.2890],[28.2979],[28.2999],[28.3117],[28.3363],[28.3554],[28.3626],[28.3589],[28.3597],[28.3543],[28.3660],[28.3731],[28.3717],[28.3812],[28.3753],[28.3810],[28.3777],[28.3693],[28.3713],[28.3670],[28.3691],[28.3679],[28.3624],[28.3703],[28.3703],[28.3720],[28.3594],[28.3576],[28.3562],[28.3438],[28.3376],[28.3389],[28.3433],[28.3191]])
pipeline = DiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
text_encoder=text_encoder,
tokenizer=tokenizer,
custom_pipeline="gluegen"
).to(device)
pipeline.load_language_adapter("gluenet_French_clip_overnorm_over3_noln.ckpt", num_token=token_max_length, dim=1024, dim_out=768, tensor_norm=tensor_norm)
prompt = "une voiture sur la plage"
generator = torch.Generator(device=device).manual_seed(42)
image = pipeline(prompt, generator=generator).images[0]
image.save("gluegen_output_fr.png")
```
Which will produce:

### Image to Image Inpainting Stable Diffusion
Similar to the standard stable diffusion inpainting example, except with the addition of an `inner_image` argument.
`image`, `inner_image`, and `mask` should have the same dimensions. `inner_image` should have an alpha (transparency) channel.
The aim is to overlay two images, then mask out the boundary between `image` and `inner_image` to allow stable diffusion to make the connection more seamless.
For example, this could be used to place a logo on a shirt and make it blend seamlessly.
```python
import PIL
import torch
from diffusers import DiffusionPipeline
image_path = "./path-to-image.png"
inner_image_path = "./path-to-inner-image.png"
mask_path = "./path-to-mask.png"
init_image = PIL.Image.open(image_path).convert("RGB").resize((512, 512))
inner_image = PIL.Image.open(inner_image_path).convert("RGBA").resize((512, 512))
mask_image = PIL.Image.open(mask_path).convert("RGB").resize((512, 512))
pipe = DiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-inpainting",
custom_pipeline="img2img_inpainting",
torch_dtype=torch.float16
)
pipe = pipe.to("cuda")
prompt = "Your prompt here!"
image = pipe(prompt=prompt, image=init_image, inner_image=inner_image, mask_image=mask_image).images[0]
```

### Text Based Inpainting Stable Diffusion
Use a text prompt to generate the mask for the area to be inpainted.
Currently uses the CLIPSeg model for mask generation, then calls the standard Stable Diffusion Inpainting pipeline to perform the inpainting.
```python
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation
from diffusers import DiffusionPipeline
from PIL import Image
import requests
processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
pipe = DiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-inpainting",
custom_pipeline="text_inpainting",
segmentation_model=model,
segmentation_processor=processor
)
pipe = pipe.to("cuda")
url = "https://github.com/timojl/clipseg/blob/master/example_image.jpg?raw=true"
image = Image.open(requests.get(url, stream=True).raw).resize((512, 512))
text = "a glass" # will mask out this text
prompt = "a cup" # the masked out region will be replaced with this
image = pipe(image=image, text=text, prompt=prompt).images[0]
```
### Bit Diffusion
Based on <https://arxiv.org/abs/2208.04202>, this is used for diffusion on discrete data, e.g. discrete image data or DNA sequence data. An unconditional discrete image can be generated like this:
```python
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="bit_diffusion")
image = pipe().images[0]
```
### Stable Diffusion with K Diffusion
Make sure you have @crowsonkb's <https://github.com/crowsonkb/k-diffusion> installed:
```sh
pip install k-diffusion
```
You can use the community pipeline as follows:
```python
import torch
from diffusers import DiffusionPipeline

seed = 33
pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", custom_pipeline="sd_text2img_k_diffusion")
pipe = pipe.to("cuda")
prompt = "an astronaut riding a horse on mars"
pipe.set_scheduler("sample_heun")
generator = torch.Generator(device="cuda").manual_seed(seed)
image = pipe(prompt, generator=generator, num_inference_steps=20).images[0]
image.save("./astronaut_heun_k_diffusion.png")
```
To make sure that K Diffusion and `diffusers` yield the same results:
**Diffusers**:
```python
import torch
from diffusers import DiffusionPipeline, EulerDiscreteScheduler

seed = 33
prompt = "an astronaut riding a horse on mars"
pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
generator = torch.Generator(device="cuda").manual_seed(seed)
image = pipe(prompt, generator=generator, num_inference_steps=50).images[0]
```

**K Diffusion**:
```python
import torch
from diffusers import DiffusionPipeline, EulerDiscreteScheduler

seed = 33
prompt = "an astronaut riding a horse on mars"
pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", custom_pipeline="sd_text2img_k_diffusion")
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
pipe.set_scheduler("sample_euler")
generator = torch.Generator(device="cuda").manual_seed(seed)
image = pipe(prompt, generator=generator, num_inference_steps=50).images[0]
```

### Checkpoint Merger Pipeline
Based on the AUTOMATIC1111/webui checkpoint merging feature. This custom pipeline merges up to 3 pretrained model checkpoints as long as they are in the Hugging Face model_index.json format.
The checkpoint merging is currently memory intensive, as it modifies the weights of a DiffusionPipeline object in place. Expect at least 13GB of RAM usage on Kaggle GPU kernels; on Colab you might run out of the 12GB memory even while merging two checkpoints.
Usage:
```python
from diffusers import DiffusionPipeline
# Returns a CheckpointMergerPipeline class that allows you to merge checkpoints.
# The checkpoint passed here is ignored, but you should still pass one of the
# checkpoints you plan to merge, for convenience.
pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", custom_pipeline="checkpoint_merger")
# There are multiple possible scenarios; the pipeline with the merged checkpoints
# is returned in all of them.
# Compatible checkpoints, i.e. matched model_index.json files. Meta attributes in
# model_index.json (attrs prefixed with _) are ignored during comparison.
merged_pipe = pipe.merge(["CompVis/stable-diffusion-v1-4","CompVis/stable-diffusion-v1-2"], interp = "sigmoid", alpha = 0.4)
# Incompatible model_index.json files, but a merge might still be possible. Use force=True to ignore model_index.json compatibility checks.
merged_pipe_1 = pipe.merge(["CompVis/stable-diffusion-v1-4","hakurei/waifu-diffusion"], force = True, interp = "sigmoid", alpha = 0.4)
# Three-checkpoint merging. Only the "add_difference" method actually uses all three checkpoints; any other option ignores the 3rd checkpoint.
merged_pipe_2 = pipe.merge(["CompVis/stable-diffusion-v1-4","hakurei/waifu-diffusion","prompthero/openjourney"], force = True, interp = "add_difference", alpha = 0.4)
prompt = "An astronaut riding a horse on Mars"
image = merged_pipe(prompt).images[0]
```
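For intuition, here is an illustrative sketch of the per-tensor merge arithmetic (assumed semantics for this readme, not the pipeline's actual code; the sigmoid/inverse-sigmoid options reshape how `alpha` is applied):
```python
import torch

def weighted_sum(theta_a: torch.Tensor, theta_b: torch.Tensor, alpha: float) -> torch.Tensor:
    # blend two checkpoints: alpha = 0 keeps A, alpha = 1 keeps B
    return (1 - alpha) * theta_a + alpha * theta_b

def add_difference(theta_a: torch.Tensor, theta_b: torch.Tensor,
                   theta_c: torch.Tensor, alpha: float) -> torch.Tensor:
    # add the "direction" from C to B onto A, scaled by alpha
    return theta_a + alpha * (theta_b - theta_c)
```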
Some examples along with the merge details:
1. "CompVis/stable-diffusion-v1-4" + "hakurei/waifu-diffusion" ; Sigmoid interpolation; alpha = 0.8

2. "hakurei/waifu-diffusion" + "prompthero/openjourney" ; Inverse Sigmoid interpolation; alpha = 0.8

3. "CompVis/stable-diffusion-v1-4" + "hakurei/waifu-diffusion" + "prompthero/openjourney"; Add Difference interpolation; alpha = 0.5

### Stable Diffusion Comparisons
This Community Pipeline enables the comparison between the 4 checkpoints that exist for Stable Diffusion. They can be found through the following links:
1. [Stable Diffusion v1.1](https://huggingface.co/CompVis/stable-diffusion-v1-1)
2. [Stable Diffusion v1.2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
3. [Stable Diffusion v1.3](https://huggingface.co/CompVis/stable-diffusion-v1-3)
4. [Stable Diffusion v1.4](https://huggingface.co/CompVis/stable-diffusion-v1-4)
```python
from diffusers import DiffusionPipeline
import matplotlib.pyplot as plt
pipe = DiffusionPipeline.from_pretrained('CompVis/stable-diffusion-v1-4', custom_pipeline='suvadityamuk/StableDiffusionComparison')
pipe.enable_attention_slicing()
pipe = pipe.to('cuda')
prompt = "an astronaut riding a horse on mars"
output = pipe(prompt)
plt.subplot(2, 2, 1)
plt.imshow(output.images[0])
plt.title('Stable Diffusion v1.1')
plt.axis('off')
plt.subplot(2, 2, 2)
plt.imshow(output.images[1])
plt.title('Stable Diffusion v1.2')
plt.axis('off')
plt.subplot(2, 2, 3)
plt.imshow(output.images[2])
plt.title('Stable Diffusion v1.3')
plt.axis('off')
plt.subplot(2, 2, 4)
plt.imshow(output.images[3])
plt.title('Stable Diffusion v1.4')
plt.axis('off')
plt.show()
```
As a result, you can look at a grid of all 4 generated images shown together, which captures the difference in training progress between the 4 checkpoints.
### Magic Mix
Implementation of the [MagicMix: Semantic Mixing with Diffusion Models](https://arxiv.org/abs/2210.16056) paper. This is a Diffusion Pipeline for semantic mixing of an image and a text prompt to create a new concept while preserving the spatial layout and geometry of the subject in the image. The pipeline takes an image that provides the layout semantics and a prompt that provides the content semantics for the mixing process.
There are 3 parameters for the method:
- `mix_factor`: the interpolation constant used in the layout generation phase. The greater the value of `mix_factor`, the greater the influence of the prompt on the layout generation process.
- `kmax` and `kmin`: these determine the range for the layout and content generation process. A higher value of `kmax` results in more information about the layout of the original image being lost, and a higher value of `kmin` results in more steps for the content generation process.
Here is an example usage:
```python
from diffusers import DiffusionPipeline, DDIMScheduler
from PIL import Image
pipe = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4",
custom_pipeline="magic_mix",
scheduler = DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler"),
).to('cuda')
img = Image.open('phone.jpg')
mix_img = pipe(
img,
prompt = 'bed',
kmin = 0.3,
kmax = 0.5,
mix_factor = 0.5,
)
mix_img.save('phone_bed_mix.jpg')
```
The `mix_img` is a PIL image that can be saved locally or displayed directly in Google Colab. The generated image is a mix of the layout semantics of the given image and the content semantics of the prompt.
E.g. the above script generates the following image:
`phone.jpg`

`phone_bed_mix.jpg`

For more example generations check out this [demo notebook](https://github.com/daspartho/MagicMix/blob/main/demo.ipynb).
### Stable UnCLIP
UnCLIPPipeline("kakaobrain/karlo-v1-alpha") provide a prior model that can generate clip image embedding from text.
StableDiffusionImageVariationPipeline("lambdalabs/sd-image-variations-diffusers") provide a decoder model than can generate images from clip image embedding.
```python
import torch
from diffusers import DiffusionPipeline
device = torch.device("cpu" if not torch.cuda.is_available() else "cuda")
pipeline = DiffusionPipeline.from_pretrained(
"kakaobrain/karlo-v1-alpha",
torch_dtype=torch.float16,
custom_pipeline="stable_unclip",
decoder_pipe_kwargs=dict(
image_encoder=None,
),
)
pipeline.to(device)
prompt = "a shiba inu wearing a beret and black turtleneck"
random_generator = torch.Generator(device=device).manual_seed(1000)
output = pipeline(
prompt=prompt,
width=512,
height=512,
generator=random_generator,
prior_guidance_scale=4,
prior_num_inference_steps=25,
decoder_guidance_scale=8,
decoder_num_inference_steps=50,
)
image = output.images[0]
image.save("./shiba-inu.jpg")
# debug
# `pipeline.decoder_pipe` is a regular StableDiffusionImageVariationPipeline instance.
# It is used to convert a CLIP image embedding to latents, which are then fed into the VAE decoder.
print(pipeline.decoder_pipe.__class__)
# <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_image_variation.StableDiffusionImageVariationPipeline'>
# This pipeline only uses the prior module of "kakaobrain/karlo-v1-alpha".
# It is used to convert a CLIP text embedding to a CLIP image embedding.
print(pipeline)
# StableUnCLIPPipeline {
# "_class_name": "StableUnCLIPPipeline",
# "_diffusers_version": "0.12.0.dev0",
# "prior": [
# "diffusers",
# "PriorTransformer"
# ],
# "prior_scheduler": [
# "diffusers",
# "UnCLIPScheduler"
# ],
# "text_encoder": [
# "transformers",
# "CLIPTextModelWithProjection"
# ],
# "tokenizer": [
# "transformers",
# "CLIPTokenizer"
# ]
# }
# pipeline.prior_scheduler is the scheduler used for prior in UnCLIP.
print(pipeline.prior_scheduler)
# UnCLIPScheduler {
# "_class_name": "UnCLIPScheduler",
# "_diffusers_version": "0.12.0.dev0",
# "clip_sample": true,
# "clip_sample_range": 5.0,
# "num_train_timesteps": 1000,
# "prediction_type": "sample",
# "variance_type": "fixed_small_log"
# }
```
`shiba-inu.jpg`

### UnCLIP Text Interpolation Pipeline
This Diffusion Pipeline takes two prompts and interpolates between them using spherical interpolation (slerp). The input prompts are converted to text embeddings by the pipeline's text_encoder, and the interpolation is done on the resulting text_embeddings over the number of steps specified (defaults to 5 steps).
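For intuition, spherical interpolation between two embedding vectors can be sketched as follows (an illustrative helper, not the pipeline's own implementation; it assumes the vectors are not parallel):
```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor) -> torch.Tensor:
    # angle between the two (flattened) embedding vectors
    cos_theta = torch.clamp(torch.dot(v0.flatten(), v1.flatten()) / (v0.norm() * v1.norm()), -1.0, 1.0)
    theta = torch.acos(cos_theta)
    # the weights trace the great circle between v0 and v1 at constant speed
    return (torch.sin((1 - t) * theta) * v0 + torch.sin(t * theta) * v1) / torch.sin(theta)
```
The pipeline itself is used as follows: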
```python
import torch
from diffusers import DiffusionPipeline
device = torch.device("cpu" if not torch.cuda.is_available() else "cuda")
pipe = DiffusionPipeline.from_pretrained(
"kakaobrain/karlo-v1-alpha",
torch_dtype=torch.float16,
custom_pipeline="unclip_text_interpolation"
)
pipe.to(device)
start_prompt = "A photograph of an adult lion"
end_prompt = "A photograph of a lion cub"
# For best results keep the prompts close in length to each other. Of course, feel free to try out differing lengths.
generator = torch.Generator(device=device).manual_seed(42)
output = pipe(start_prompt, end_prompt, steps = 6, generator = generator, enable_sequential_cpu_offload=False)
for i, image in enumerate(output.images):
    image.save('result%s.jpg' % i)
```
The resulting images, in order:






### UnCLIP Image Interpolation Pipeline
This Diffusion Pipeline takes two images or an `image_embeddings` tensor of size 2 and interpolates between their embeddings using spherical interpolation (slerp). The input images/image_embeddings are converted to image embeddings by the pipeline's image_encoder, and the interpolation is done on the resulting image_embeddings over the number of steps specified (defaults to 5 steps).
```python
import torch
from diffusers import DiffusionPipeline
from PIL import Image
device = torch.device("cpu" if not torch.cuda.is_available() else "cuda")
dtype = torch.float16 if torch.cuda.is_available() else torch.bfloat16
pipe = DiffusionPipeline.from_pretrained(
"kakaobrain/karlo-v1-alpha-image-variations",
torch_dtype=dtype,
custom_pipeline="unclip_image_interpolation"
)
pipe.to(device)
images = [Image.open('./starry_night.jpg'), Image.open('./flowers.jpg')]
generator = torch.Generator(device=device).manual_seed(42)
output = pipe(image=images, steps=6, generator=generator)
for i,image in enumerate(output.images):
image.save('starry_to_flowers_%s.jpg' % i)
```
The original images:


The resulting images, in order:






### DDIM Noise Comparative Analysis Pipeline
#### **Research question: What visual concepts do the diffusion models learn from each noise level during training?**
The [P2 weighting (CVPR 2022)](https://arxiv.org/abs/2204.00227) paper proposed an approach to answer the above question, which is their second contribution.
The approach consists of the following steps:
1. The input is an image x0.
2. Perturb it to xt using a diffusion process q(xt|x0).
- `strength` is a value between 0.0 and 1.0, that controls the amount of noise that is added to the input image. Values that approach 1.0 allow for lots of variations but will also produce images that are not semantically consistent with the input.
3. Reconstruct the image with the learned denoising process pθ(ˆx0|xt).
4. Compare x0 and ˆx0 among various t to show how each step contributes to the sample.
The authors used [openai/guided-diffusion](https://github.com/openai/guided-diffusion) model to denoise images in FFHQ dataset. This pipeline extends their second contribution by investigating DDIM on any input image.
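As a rough sketch, the perturbation in step 2 follows the standard forward-diffusion formula x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise (illustrative only; `alphas_cumprod` below stands in for the scheduler's cumulative alpha schedule):
```python
import torch

def perturb(x0: torch.Tensor, t: int, alphas_cumprod: torch.Tensor) -> torch.Tensor:
    # q(x_t | x_0): scale the image down and add Gaussian noise that grows with t
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t]
    return a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
```
The pipeline itself can be used as follows: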
```python
import torch
import numpy as np
from PIL import Image
from diffusers import DiffusionPipeline
image_path = "path/to/your/image" # images from CelebA-HQ might be better
image_pil = Image.open(image_path)
image_name = image_path.split("/")[-1].split(".")[0]
device = torch.device("cpu" if not torch.cuda.is_available() else "cuda")
pipe = DiffusionPipeline.from_pretrained(
"google/ddpm-ema-celebahq-256",
custom_pipeline="ddim_noise_comparative_analysis",
)
pipe = pipe.to(device)
for strength in np.linspace(0.1, 1, 25):
denoised_image, latent_timestep = pipe(
image_pil, strength=strength, return_dict=False
)
denoised_image = denoised_image[0]
denoised_image.save(
f"noise_comparative_analysis_{image_name}_{latent_timestep}.png"
)
```
Here is the result of this pipeline (which is DDIM) on CelebA-HQ dataset.

### CLIP Guided Img2Img Stable Diffusion
CLIP guided Img2Img stable diffusion can help to generate more realistic images with an initial image
by guiding stable diffusion at every denoising step with an additional CLIP model.
The following code requires roughly 12GB of GPU RAM.
```python
from io import BytesIO
import requests
import torch
from diffusers import DiffusionPipeline
from PIL import Image
from transformers import CLIPFeatureExtractor, CLIPModel
from IPython.display import display  # for displaying the result in a notebook
feature_extractor = CLIPFeatureExtractor.from_pretrained(
"laion/CLIP-ViT-B-32-laion2B-s34B-b79K"
)
clip_model = CLIPModel.from_pretrained(
"laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16
)
guided_pipeline = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4",
# custom_pipeline="clip_guided_stable_diffusion",
custom_pipeline="/home/njindal/diffusers/examples/community/clip_guided_stable_diffusion.py",
clip_model=clip_model,
feature_extractor=feature_extractor,
torch_dtype=torch.float16,
)
guided_pipeline.enable_attention_slicing()
guided_pipeline = guided_pipeline.to("cuda")
prompt = "fantasy book cover, full moon, fantasy forest landscape, golden vector elements, fantasy magic, dark light night, intricate, elegant, sharp focus, illustration, highly detailed, digital painting, concept art, matte, art by WLOP and Artgerm and Albert Bierstadt, masterpiece"
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
image = guided_pipeline(
prompt=prompt,
num_inference_steps=30,
image=init_image,
strength=0.75,
guidance_scale=7.5,
clip_guidance_scale=100,
num_cutouts=4,
use_cutouts=False,
).images[0]
display(image)
```
Init Image

Output Image

### TensorRT Text2Image Stable Diffusion Pipeline
The TensorRT Pipeline can be used to accelerate Text2Image Stable Diffusion inference.
NOTE: The ONNX conversions and TensorRT engine build may take up to 30 minutes.
```python
import torch
from diffusers import DDIMScheduler
from diffusers.pipelines.stable_diffusion import StableDiffusionPipeline
# Use the DDIMScheduler scheduler here instead
scheduler = DDIMScheduler.from_pretrained("stabilityai/stable-diffusion-2-1",
subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1",
custom_pipeline="stable_diffusion_tensorrt_txt2img",
revision='fp16',
torch_dtype=torch.float16,
scheduler=scheduler,)
# re-use cached folder to save ONNX models and TensorRT Engines
pipe.set_cached_folder("stabilityai/stable-diffusion-2-1", revision='fp16',)
pipe = pipe.to("cuda")
prompt = "a beautiful photograph of Mt. Fuji during cherry blossom"
image = pipe(prompt).images[0]
image.save('tensorrt_mt_fuji.png')
```
### EDICT Image Editing Pipeline
This pipeline implements the text-guided image editing approach from the paper [EDICT: Exact Diffusion Inversion via Coupled Transformations](https://arxiv.org/abs/2211.12446). You have to pass:
- (`PIL`) `image` you want to edit.
- `base_prompt`: the text prompt describing the current image (before editing).
- `target_prompt`: the text prompt describing the desired edits.
```python
from diffusers import DiffusionPipeline, DDIMScheduler
from transformers import CLIPTextModel
import torch, PIL, requests
from io import BytesIO
from IPython.display import display
def center_crop_and_resize(im):
width, height = im.size
d = min(width, height)
left = (width - d) / 2
upper = (height - d) / 2
right = (width + d) / 2
lower = (height + d) / 2
return im.crop((left, upper, right, lower)).resize((512, 512))
torch_dtype = torch.float16
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# scheduler and text_encoder param values as in the paper
scheduler = DDIMScheduler(
num_train_timesteps=1000,
beta_start=0.00085,
beta_end=0.012,
beta_schedule="scaled_linear",
set_alpha_to_one=False,
clip_sample=False,
)
text_encoder = CLIPTextModel.from_pretrained(
pretrained_model_name_or_path="openai/clip-vit-large-patch14",
torch_dtype=torch_dtype,
)
# initialize pipeline
pipeline = DiffusionPipeline.from_pretrained(
pretrained_model_name_or_path="CompVis/stable-diffusion-v1-4",
custom_pipeline="edict_pipeline",
revision="fp16",
scheduler=scheduler,
text_encoder=text_encoder,
leapfrog_steps=True,
torch_dtype=torch_dtype,
).to(device)
# download image
image_url = "https://huggingface.co/datasets/Joqsan/images/resolve/main/imagenet_dog_1.jpeg"
response = requests.get(image_url)
image = PIL.Image.open(BytesIO(response.content))
# preprocess it
cropped_image = center_crop_and_resize(image)
# define the prompts
base_prompt = "A dog"
target_prompt = "A golden retriever"
# run the pipeline
result_image = pipeline(
base_prompt=base_prompt,
target_prompt=target_prompt,
image=cropped_image,
)
display(result_image)
```
Init Image

Output Image

### Stable Diffusion RePaint
This pipeline uses the [RePaint](https://arxiv.org/abs/2201.09865) logic on the latent space of stable diffusion. It can
be used similarly to other image inpainting pipelines but does not rely on a specific inpainting model. This means you can use
models that are not specifically created for inpainting.
Make sure to use the `RePaintScheduler` as shown in the example below.
Disclaimer: the mask gets transferred into latent space, which may lead to unexpected changes at the edges of the masked area, and inference is a lot slower.
```py
import PIL
import requests
import torch
from io import BytesIO
from diffusers import StableDiffusionPipeline, RePaintScheduler
def download_image(url):
response = requests.get(url)
return PIL.Image.open(BytesIO(response.content)).convert("RGB")
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))
mask_image = PIL.ImageOps.invert(mask_image)
pipe = StableDiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, custom_pipeline="stable_diffusion_repaint",
)
pipe.scheduler = RePaintScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
```
### TensorRT Image2Image Stable Diffusion Pipeline
The TensorRT Pipeline can be used to accelerate Image2Image Stable Diffusion inference.
NOTE: The ONNX conversions and TensorRT engine build may take up to 30 minutes.
```python
import requests
from io import BytesIO
from PIL import Image
import torch
from diffusers import DDIMScheduler
from diffusers.pipelines.stable_diffusion import StableDiffusionImg2ImgPipeline
# Use the DDIMScheduler scheduler here instead
scheduler = DDIMScheduler.from_pretrained("stabilityai/stable-diffusion-2-1",
subfolder="scheduler")
pipe = StableDiffusionImg2ImgPipeline.from_pretrained("stabilityai/stable-diffusion-2-1",
custom_pipeline="stable_diffusion_tensorrt_img2img",
revision='fp16',
torch_dtype=torch.float16,
scheduler=scheduler,)
# re-use cached folder to save ONNX models and TensorRT Engines
pipe.set_cached_folder("stabilityai/stable-diffusion-2-1", revision='fp16',)
pipe = pipe.to("cuda")
url = "https://pajoca.com/wp-content/uploads/2022/09/tekito-yamakawa-1.png"
response = requests.get(url)
input_image = Image.open(BytesIO(response.content)).convert("RGB")
prompt = "photorealistic new zealand hills"
image = pipe(prompt, image=input_image, strength=0.75,).images[0]
image.save('tensorrt_img2img_new_zealand_hills.png')
```
### Stable Diffusion BoxDiff
BoxDiff is a training-free method for controlled generation with bounding box coordinates. It should work with any Stable Diffusion model. Below is an example with `stable-diffusion-2-1-base`.
```py
import torch
from PIL import Image, ImageDraw
from copy import deepcopy
from examples.community.pipeline_stable_diffusion_boxdiff import StableDiffusionBoxDiffPipeline
def draw_box_with_text(img, boxes, names):
colors = ["red", "olive", "blue", "green", "orange", "brown", "cyan", "purple"]
img_new = deepcopy(img)
draw = ImageDraw.Draw(img_new)
W, H = img.size
for bid, box in enumerate(boxes):
draw.rectangle([box[0] * W, box[1] * H, box[2] * W, box[3] * H], outline=colors[bid % len(colors)], width=4)
draw.text((box[0] * W, box[1] * H), names[bid], fill=colors[bid % len(colors)])
return img_new
pipe = StableDiffusionBoxDiffPipeline.from_pretrained(
"stabilityai/stable-diffusion-2-1-base",
torch_dtype=torch.float16,
)
pipe.to("cuda")
# example 1
prompt = "as the aurora lights up the sky, a herd of reindeer leisurely wanders on the grassy meadow, admiring the breathtaking view, a serene lake quietly reflects the magnificent display, and in the distance, a snow-capped mountain stands majestically, fantasy, 8k, highly detailed"
phrases = [
"aurora",
"reindeer",
"meadow",
"lake",
"mountain"
]
boxes = [[1,3,512,202], [75,344,421,495], [1,327,508,507], [2,217,507,341], [1,135,509,242]]
# example 2
# prompt = "A rabbit wearing sunglasses looks very proud"
# phrases = ["rabbit", "sunglasses"]
# boxes = [[67,87,366,512], [66,130,364,262]]
boxes = [[x / 512 for x in box] for box in boxes]
images = pipe(
prompt,
boxdiff_phrases=phrases,
boxdiff_boxes=boxes,
boxdiff_kwargs={
"attention_res": 16,
"normalize_eot": True
},
num_inference_steps=50,
guidance_scale=7.5,
generator=torch.manual_seed(42),
safety_checker=None
).images
draw_box_with_text(images[0], boxes, phrases).save("output.png")
```
### Stable Diffusion Reference
This pipeline uses Reference Control. Refer to the [sd-webui-controlnet discussion: Reference-only Control](https://github.com/Mikubill/sd-webui-controlnet/discussions/1236) and [sd-webui-controlnet discussion: Reference-adain Control](https://github.com/Mikubill/sd-webui-controlnet/discussions/1280).
Based on [this issue](https://github.com/huggingface/diffusers/issues/3566):
- `EulerAncestralDiscreteScheduler` gives poor results.
```py
import torch
from diffusers import UniPCMultistepScheduler
from diffusers.utils import load_image
# StableDiffusionReferencePipeline lives in examples/community/stable_diffusion_reference.py
from examples.community.stable_diffusion_reference import StableDiffusionReferencePipeline
input_image = load_image("https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png")
pipe = StableDiffusionReferencePipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
safety_checker=None,
torch_dtype=torch.float16
).to('cuda:0')
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
result_img = pipe(ref_image=input_image,
prompt="1girl",
num_inference_steps=20,
reference_attn=True,
reference_adain=True).images[0]
```
Reference Image

Output Image of `reference_attn=True` and `reference_adain=False`

Output Image of `reference_attn=False` and `reference_adain=True`

Output Image of `reference_attn=True` and `reference_adain=True`

### Stable Diffusion ControlNet Reference
This pipeline uses Reference Control with ControlNet. Refer to the [sd-webui-controlnet discussion: Reference-only Control](https://github.com/Mikubill/sd-webui-controlnet/discussions/1236) and [sd-webui-controlnet discussion: Reference-adain Control](https://github.com/Mikubill/sd-webui-controlnet/discussions/1280).
Based on [this issue](https://github.com/huggingface/diffusers/issues/3566):
- `EulerAncestralDiscreteScheduler` gives poor results.
- `guess_mode=True` works well for ControlNet v1.1.
```py
import cv2
import torch
import numpy as np
from PIL import Image
from diffusers import ControlNetModel, UniPCMultistepScheduler
from diffusers.utils import load_image
# StableDiffusionControlNetReferencePipeline lives in examples/community/stable_diffusion_controlnet_reference.py
from examples.community.stable_diffusion_controlnet_reference import StableDiffusionControlNetReferencePipeline
input_image = load_image("https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png")
# get canny image
image = cv2.Canny(np.array(input_image), 100, 200)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetReferencePipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
controlnet=controlnet,
safety_checker=None,
torch_dtype=torch.float16
).to('cuda:0')
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
result_img = pipe(ref_image=input_image,
prompt="1girl",
image=canny_image,
num_inference_steps=20,
reference_attn=True,
reference_adain=True).images[0]
```
Reference Image

Output Image

### Stable Diffusion on IPEX
This diffusion pipeline aims to accelerate the inference of Stable Diffusion on Intel Xeon CPUs with BF16/FP32 precision using [IPEX](https://github.com/intel/intel-extension-for-pytorch).
To use this pipeline, you need to:
1. Install [IPEX](https://github.com/intel/intel-extension-for-pytorch)
**Note:** For each PyTorch release, there is a corresponding release of IPEX. Here is the mapping relationship. It is recommended to install PyTorch/IPEX 2.0 to get the best performance.
|PyTorch Version|IPEX Version|
|--|--|
|[v2.0.\*](https://github.com/pytorch/pytorch/tree/v2.0.1 "v2.0.1")|[v2.0.\*](https://github.com/intel/intel-extension-for-pytorch/tree/v2.0.100+cpu)|
|[v1.13.\*](https://github.com/pytorch/pytorch/tree/v1.13.0 "v1.13.0")|[v1.13.\*](https://github.com/intel/intel-extension-for-pytorch/tree/v1.13.100+cpu)|
You can simply use pip to install the latest version of IPEX.
```sh
python -m pip install intel_extension_for_pytorch
```
**Note:** To install a specific version, run with the following command:
```sh
python -m pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
```
2. After pipeline initialization, `prepare_for_ipex()` should be called to enable IPEX acceleration. Supported inference datatypes are Float32 and BFloat16.
**Note:** The generated image height/width passed to `prepare_for_ipex()` should be the same as those used at pipeline inference time.
```python
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", custom_pipeline="stable_diffusion_ipex")
# For Float32
pipe.prepare_for_ipex(prompt, dtype=torch.float32, height=512, width=512) #value of image height/width should be consistent with the pipeline inference
# For BFloat16
pipe.prepare_for_ipex(prompt, dtype=torch.bfloat16, height=512, width=512) #value of image height/width should be consistent with the pipeline inference
```
Then you can use the IPEX pipeline in a similar way to the default Stable Diffusion pipeline.
```python
# For Float32
image = pipe(prompt, num_inference_steps=20, height=512, width=512).images[0] #value of image height/width should be consistent with 'prepare_for_ipex()'
# For BFloat16
with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
image = pipe(prompt, num_inference_steps=20, height=512, width=512).images[0] #value of image height/width should be consistent with 'prepare_for_ipex()'
```
The following code compares the performance of the original Stable Diffusion pipeline with the IPEX-optimized pipeline.
```python
import torch
import intel_extension_for_pytorch as ipex
from diffusers import DiffusionPipeline, StableDiffusionPipeline
import time
prompt = "sailing ship in storm by Rembrandt"
model_id = "runwayml/stable-diffusion-v1-5"
# Helper function for time evaluation
def elapsed_time(pipeline, nb_pass=3, num_inference_steps=20):
# warmup
for _ in range(2):
images = pipeline(prompt, num_inference_steps=num_inference_steps, height=512, width=512).images
#time evaluation
start = time.time()
for _ in range(nb_pass):
pipeline(prompt, num_inference_steps=num_inference_steps, height=512, width=512)
end = time.time()
return (end - start) / nb_pass
############## bf16 inference performance ###############
# 1. IPEX Pipeline initialization
pipe = DiffusionPipeline.from_pretrained(model_id, custom_pipeline="stable_diffusion_ipex")
pipe.prepare_for_ipex(prompt, dtype=torch.bfloat16, height=512, width=512)
# 2. Original Pipeline initialization
pipe2 = StableDiffusionPipeline.from_pretrained(model_id)
# 3. Compare performance between Original Pipeline and IPEX Pipeline
with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
latency = elapsed_time(pipe)
print("Latency of StableDiffusionIPEXPipeline--bf16", latency)
latency = elapsed_time(pipe2)
print("Latency of StableDiffusionPipeline--bf16",latency)
############## fp32 inference performance ###############
# 1. IPEX Pipeline initialization
pipe3 = DiffusionPipeline.from_pretrained(model_id, custom_pipeline="stable_diffusion_ipex")
pipe3.prepare_for_ipex(prompt, dtype=torch.float32, height=512, width=512)
# 2. Original Pipeline initialization
pipe4 = StableDiffusionPipeline.from_pretrained(model_id)
# 3. Compare performance between Original Pipeline and IPEX Pipeline
latency = elapsed_time(pipe3)
print("Latency of StableDiffusionIPEXPipeline--fp32", latency)
latency = elapsed_time(pipe4)
print("Latency of StableDiffusionPipeline--fp32",latency)
```
### Stable Diffusion XL on IPEX
This diffusion pipeline aims to accelerate the inference of Stable Diffusion XL on Intel Xeon CPUs with BF16/FP32 precision using [IPEX](https://github.com/intel/intel-extension-for-pytorch).
To use this pipeline, you need to:
1. Install [IPEX](https://github.com/intel/intel-extension-for-pytorch)
**Note:** For each PyTorch release, there is a corresponding release of IPEX. Here is the mapping relationship. It is recommended to install PyTorch/IPEX 2.0 to get the best performance.
|PyTorch Version|IPEX Version|
|--|--|
|[v2.0.\*](https://github.com/pytorch/pytorch/tree/v2.0.1 "v2.0.1")|[v2.0.\*](https://github.com/intel/intel-extension-for-pytorch/tree/v2.0.100+cpu)|
|[v1.13.\*](https://github.com/pytorch/pytorch/tree/v1.13.0 "v1.13.0")|[v1.13.\*](https://github.com/intel/intel-extension-for-pytorch/tree/v1.13.100+cpu)|
You can simply use pip to install the latest version of IPEX.
```sh
python -m pip install intel_extension_for_pytorch
```
**Note:** To install a specific version, run with the following command:
```sh
python -m pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
```
2. After pipeline initialization, `prepare_for_ipex()` should be called to enable IPEX acceleration. Supported inference datatypes are Float32 and BFloat16.
**Note:** The values of `height` and `width` used during preparation with `prepare_for_ipex()` should be the same when running inference with the prepared pipeline.
```python
pipe = StableDiffusionXLPipelineIpex.from_pretrained("stabilityai/sdxl-turbo", low_cpu_mem_usage=True, use_safetensors=True)
# value of image height/width should be consistent with the pipeline inference
# For Float32
pipe.prepare_for_ipex(torch.float32, prompt, height=512, width=512)
# For BFloat16
pipe.prepare_for_ipex(torch.bfloat16, prompt, height=512, width=512)
```
Then you can use the IPEX pipeline in a similar way to the default Stable Diffusion XL pipeline.
```python
# value of image height/width should be consistent with 'prepare_for_ipex()'
# For Float32
image = pipe(prompt, num_inference_steps=num_inference_steps, height=512, width=512, guidance_scale=guidance_scale).images[0]
# For BFloat16
with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
image = pipe(prompt, num_inference_steps=num_inference_steps, height=512, width=512, guidance_scale=guidance_scale).images[0]
```
The following code compares the performance of the original Stable Diffusion XL pipeline with the IPEX-optimized pipeline.
Using this optimized pipeline, we can get a performance boost of about 1.4-2x with BFloat16 on fourth-generation Intel Xeon CPUs, code-named Sapphire Rapids.
```python
import torch
from diffusers import StableDiffusionXLPipeline
from pipeline_stable_diffusion_xl_ipex import StableDiffusionXLPipelineIpex
import time
prompt = "sailing ship in storm by Rembrandt"
model_id = "stabilityai/sdxl-turbo"
steps = 4
# Helper function for time evaluation
def elapsed_time(pipeline, nb_pass=3, num_inference_steps=1):
# warmup
for _ in range(2):
images = pipeline(prompt, num_inference_steps=num_inference_steps, height=512, width=512, guidance_scale=0.0).images
#time evaluation
start = time.time()
for _ in range(nb_pass):
pipeline(prompt, num_inference_steps=num_inference_steps, height=512, width=512, guidance_scale=0.0)
end = time.time()
return (end - start) / nb_pass
############## bf16 inference performance ###############
# 1. IPEX Pipeline initialization
pipe = StableDiffusionXLPipelineIpex.from_pretrained(model_id, low_cpu_mem_usage=True, use_safetensors=True)
pipe.prepare_for_ipex(torch.bfloat16, prompt, height=512, width=512)
# 2. Original Pipeline initialization
pipe2 = StableDiffusionXLPipeline.from_pretrained(model_id, low_cpu_mem_usage=True, use_safetensors=True)
# 3. Compare performance between Original Pipeline and IPEX Pipeline
with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
latency = elapsed_time(pipe, num_inference_steps=steps)
print("Latency of StableDiffusionXLPipelineIpex--bf16", latency, "s for total", steps, "steps")
latency = elapsed_time(pipe2, num_inference_steps=steps)
print("Latency of StableDiffusionXLPipeline--bf16", latency, "s for total", steps, "steps")
############## fp32 inference performance ###############
# 1. IPEX Pipeline initialization
pipe3 = StableDiffusionXLPipelineIpex.from_pretrained(model_id, low_cpu_mem_usage=True, use_safetensors=True)
pipe3.prepare_for_ipex(torch.float32, prompt, height=512, width=512)
# 2. Original Pipeline initialization
pipe4 = StableDiffusionXLPipeline.from_pretrained(model_id, low_cpu_mem_usage=True, use_safetensors=True)
# 3. Compare performance between Original Pipeline and IPEX Pipeline
latency = elapsed_time(pipe3, num_inference_steps=steps)
print("Latency of StableDiffusionXLPipelineIpex--fp32", latency, "s for total", steps, "steps")
latency = elapsed_time(pipe4, num_inference_steps=steps)
print("Latency of StableDiffusionXLPipeline--fp32",latency, "s for total", steps, "steps")
```
### CLIP Guided Images Mixing With Stable Diffusion

The CLIP-guided Stable Diffusion images mixing pipeline combines two images using standard diffusion models.
This approach uses an (optional) CoCa model to avoid having to write an image description.
[More code examples](https://github.com/TheDenk/images_mixing)
### Stable Diffusion XL Long Weighted Prompt Pipeline
This SDXL pipeline supports unlimited-length prompts and negative prompts, compatible with the A1111 prompt-weighting style.
You can provide both `prompt` and `prompt_2`. If only one prompt is provided, `prompt_2` will be a copy of the provided `prompt`. Here is sample code for using this pipeline.
```python
from diffusers import DiffusionPipeline
from diffusers.utils import load_image
import torch
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0"
, torch_dtype = torch.float16
, use_safetensors = True
, variant = "fp16"
, custom_pipeline = "lpw_stable_diffusion_xl",
)
prompt = "photo of a cute (white) cat running on the grass" * 20
prompt2 = "chasing (birds:1.5)" * 20
prompt = f"{prompt},{prompt2}"
neg_prompt = "blur, low quality, carton, animate"
pipe.to("cuda")
# text2img
t2i_images = pipe(
prompt=prompt,
negative_prompt=neg_prompt,
).images # alternatively, you can call the .text2img() function
# img2img
input_image = load_image("/path/to/local/image.png") # or URL to your input image
i2i_images = pipe.img2img(
prompt=prompt,
negative_prompt=neg_prompt,
image=input_image,
strength=0.8, # higher strength will result in more variation compared to original image
).images
# inpaint
input_mask = load_image("/path/to/local/mask.png") # or URL to your input inpainting mask
inpaint_images = pipe.inpaint(
prompt="photo of a cute (black) cat running on the grass" * 20,
negative_prompt=neg_prompt,
image=input_image,
mask=input_mask,
strength=0.6, # higher strength will result in more variation compared to original image
).images
pipe.to("cpu")
torch.cuda.empty_cache()
from IPython.display import display # assuming you are using this code in a notebook
display(t2i_images[0])
display(i2i_images[0])
display(inpaint_images[0])
```
In the above code, `prompt2` is appended to `prompt`, making the combined prompt longer than 77 tokens. "birds" still show up in the result.

For more results, checkout [PR #6114](https://github.com/huggingface/diffusers/pull/6114).
### Example Images Mixing (with CoCa)
```python
import requests
from io import BytesIO
import PIL
import torch
import open_clip
from open_clip import SimpleTokenizer
from diffusers import DiffusionPipeline
from transformers import CLIPFeatureExtractor, CLIPModel
def download_image(url):
response = requests.get(url)
return PIL.Image.open(BytesIO(response.content)).convert("RGB")
# Loading additional models
feature_extractor = CLIPFeatureExtractor.from_pretrained(
"laion/CLIP-ViT-B-32-laion2B-s34B-b79K"
)
clip_model = CLIPModel.from_pretrained(
"laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16
)
coca_model = open_clip.create_model('coca_ViT-L-14', pretrained='laion2B-s13B-b90k').to('cuda')
coca_model.dtype = torch.float16
coca_transform = open_clip.image_transform(
coca_model.visual.image_size,
is_train = False,
mean = getattr(coca_model.visual, 'image_mean', None),
std = getattr(coca_model.visual, 'image_std', None),
)
coca_tokenizer = SimpleTokenizer()
# Pipeline creating
mixing_pipeline = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4",
custom_pipeline="clip_guided_images_mixing_stable_diffusion",
clip_model=clip_model,
feature_extractor=feature_extractor,
coca_model=coca_model,
coca_tokenizer=coca_tokenizer,
coca_transform=coca_transform,
torch_dtype=torch.float16,
)
mixing_pipeline.enable_attention_slicing()
mixing_pipeline = mixing_pipeline.to("cuda")
# Pipeline running
generator = torch.Generator(device="cuda").manual_seed(17)
content_image = download_image("https://huggingface.co/datasets/TheDenk/images_mixing/resolve/main/boromir.jpg")
style_image = download_image("https://huggingface.co/datasets/TheDenk/images_mixing/resolve/main/gigachad.jpg")
pipe_images = mixing_pipeline(
num_inference_steps=50,
content_image=content_image,
style_image=style_image,
noise_strength=0.65,
slerp_latent_style_strength=0.9,
slerp_prompt_style_strength=0.1,
slerp_clip_image_style_strength=0.1,
guidance_scale=9.0,
batch_size=1,
clip_guidance_scale=100,
generator=generator,
).images
```

### Stable Diffusion Mixture Tiling
This pipeline uses the Mixture of Diffusers approach. Refer to the [Mixture of Diffusers](https://arxiv.org/abs/2302.02412) paper for more details.
```python
from diffusers import LMSDiscreteScheduler, DiffusionPipeline
# Create scheduler and model (similar to StableDiffusionPipeline)
scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
pipeline = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=scheduler, custom_pipeline="mixture_tiling")
pipeline.to("cuda")
# Mixture of Diffusers generation
image = pipeline(
prompt=[[
"A charming house in the countryside, by jakub rozalski, sunset lighting, elegant, highly detailed, smooth, sharp focus, artstation, stunning masterpiece",
"A dirt road in the countryside crossing pastures, by jakub rozalski, sunset lighting, elegant, highly detailed, smooth, sharp focus, artstation, stunning masterpiece",
"An old and rusty giant robot lying on a dirt road, by jakub rozalski, dark sunset lighting, elegant, highly detailed, smooth, sharp focus, artstation, stunning masterpiece"
]],
tile_height=640,
tile_width=640,
tile_row_overlap=0,
tile_col_overlap=256,
guidance_scale=8,
seed=7178915308,
num_inference_steps=50,
)["images"][0]
```

### TensorRT Inpainting Stable Diffusion Pipeline
The TensorRT Pipeline can be used to accelerate the Inpainting Stable Diffusion Inference run.
NOTE: The ONNX conversions and TensorRT engine build may take up to 30 minutes.
```python
import requests
from io import BytesIO
from PIL import Image
import torch
from diffusers import PNDMScheduler
from diffusers.pipelines.stable_diffusion import StableDiffusionInpaintPipeline
# Use the PNDMScheduler scheduler here instead
scheduler = PNDMScheduler.from_pretrained("stabilityai/stable-diffusion-2-inpainting", subfolder="scheduler")
pipe = StableDiffusionInpaintPipeline.from_pretrained("stabilityai/stable-diffusion-2-inpainting",
custom_pipeline="stable_diffusion_tensorrt_inpaint",
revision='fp16',
torch_dtype=torch.float16,
scheduler=scheduler,
)
# re-use cached folder to save ONNX models and TensorRT Engines
pipe.set_cached_folder("stabilityai/stable-diffusion-2-inpainting", revision='fp16',)
pipe = pipe.to("cuda")
url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
response = requests.get(url)
input_image = Image.open(BytesIO(response.content)).convert("RGB")
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
response = requests.get(mask_url)
mask_image = Image.open(BytesIO(response.content)).convert("RGB")
prompt = "a mecha robot sitting on a bench"
image = pipe(prompt, image=input_image, mask_image=mask_image, strength=0.75,).images[0]
image.save('tensorrt_inpaint_mecha_robot.png')
```
### Stable Diffusion Mixture Canvas
This pipeline uses the Mixture of Diffusers approach. Refer to the [Mixture of Diffusers](https://arxiv.org/abs/2302.02412) paper for more details.
```python
from PIL import Image
from diffusers import LMSDiscreteScheduler, DiffusionPipeline
from diffusers.pipelines.pipeline_utils import Image2ImageRegion, Text2ImageRegion, preprocess_image
# Load and preprocess guide image
iic_image = preprocess_image(Image.open("input_image.png").convert("RGB"))
# Create scheduler and model (similar to StableDiffusionPipeline)
scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
pipeline = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=scheduler).to("cuda:0", custom_pipeline="mixture_canvas")
pipeline.to("cuda")
# Mixture of Diffusers generation
output = pipeline(
canvas_height=800,
canvas_width=352,
regions=[
Text2ImageRegion(0, 800, 0, 352, guidance_scale=8,
prompt=f"best quality, masterpiece, WLOP, sakimichan, art contest winner on pixiv, 8K, intricate details, wet effects, rain drops, ethereal, mysterious, futuristic, UHD, HDR, cinematic lighting, in a beautiful forest, rainy day, award winning, trending on artstation, beautiful confident cheerful young woman, wearing a futuristic sleeveless dress, ultra beautiful detailed eyes, hyper-detailed face, complex, perfect, model, textured, chiaroscuro, professional make-up, realistic, figure in frame, "),
        Image2ImageRegion(800-352, 800, 0, 352, reference_image=iic_image, strength=1.0),
],
num_inference_steps=100,
seed=5525475061,
)["images"][0]
```


### IADB pipeline
This pipeline is the implementation of the [α-(de)Blending: a Minimalist Deterministic Diffusion Model](https://arxiv.org/abs/2305.03486) paper.
It is a simple and minimalist diffusion model.
The following code shows how to use the IADB pipeline to generate images using a pretrained celebahq-256 model.
```python
import matplotlib.pyplot as plt
from diffusers import DiffusionPipeline

pipeline_iadb = DiffusionPipeline.from_pretrained("thomasc4/iadb-celebahq-256", custom_pipeline='iadb')
pipeline_iadb = pipeline_iadb.to('cuda')
output = pipeline_iadb(batch_size=4,num_inference_steps=128)
for i in range(len(output[0])):
plt.imshow(output[0][i])
plt.show()
```
Sampling with the IADB formulation is easy, and can be done in a few lines (the pipeline already implements it):
```python
def sample_iadb(model, x0, nb_step):
x_alpha = x0
for t in range(nb_step):
alpha = (t/nb_step)
alpha_next =((t+1)/nb_step)
d = model(x_alpha, torch.tensor(alpha, device=x_alpha.device))['sample']
x_alpha = x_alpha + (alpha_next-alpha)*d
return x_alpha
```
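To sample outside the pipeline, you can call `sample_iadb` directly. A minimal sketch, assuming `model` is the `UNet2DModel` loaded by the pipeline above and that it was trained on 3x256x256 images (both assumptions for illustration):
```python
import torch

# Start from pure noise (alpha = 0) and integrate to alpha = 1
x0 = torch.randn(1, 3, 256, 256, device="cuda")
sample = sample_iadb(model, x0, nb_step=128)
```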
The training loop is also straightforward:
```python
# Training loop (sketch: sample_noise(), sample_dataset(), the denoiser D,
# and optimizer are placeholders to be provided by your own training setup)
while True:
    x0 = sample_noise()    # batch of Gaussian noise
    x1 = sample_dataset()  # batch of training images
    alpha = torch.rand(batch_size)
# Blend
x_alpha = (1-alpha) * x0 + alpha * x1
# Loss
loss = torch.sum((D(x_alpha, alpha)- (x1-x0))**2)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```
### Zero1to3 pipeline
This pipeline is the implementation of the [Zero-1-to-3: Zero-shot One Image to 3D Object](https://arxiv.org/abs/2303.11328) paper.
See the original PyTorch Lightning implementation [repo](https://github.com/cvlab-columbia/zero123) and a diffusers [repo](https://github.com/kxhit/zero123-hf).
The following code shows how to use the Zero1to3 pipeline to generate novel view synthesis images using a pretrained stable diffusion model.
```python
import os
import torch
from pipeline_zero1to3 import Zero1to3StableDiffusionPipeline
from diffusers.utils import load_image
model_id = "kxic/zero123-165000" # zero123-105000, zero123-165000, zero123-xl
pipe = Zero1to3StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.enable_xformers_memory_efficient_attention()
pipe.enable_vae_tiling()
pipe.enable_attention_slicing()
pipe = pipe.to("cuda")
num_images_per_prompt = 4
# test inference pipeline
# query pose: [polar angle (vertical rotation, in degrees), azimuth angle (horizontal rotation, in degrees), zoom (relative distance from center)]
query_pose1 = [-75.0, 100.0, 0.0]
query_pose2 = [-20.0, 125.0, 0.0]
query_pose3 = [-55.0, 90.0, 0.0]
# load image
# H, W = (256, 256) # H, W = (512, 512) # zero123 training is 256,256
# for batch input
input_image1 = load_image("./demo/4_blackarm.png") #load_image("https://cvlab-zero123-live.hf.space/file=/home/user/app/configs/4_blackarm.png")
input_image2 = load_image("./demo/8_motor.png") #load_image("https://cvlab-zero123-live.hf.space/file=/home/user/app/configs/8_motor.png")
input_image3 = load_image("./demo/7_london.png") #load_image("https://cvlab-zero123-live.hf.space/file=/home/user/app/configs/7_london.png")
input_images = [input_image1, input_image2, input_image3]
query_poses = [query_pose1, query_pose2, query_pose3]
# # for single input
# H, W = (256, 256)
# input_images = [input_image2.resize((H, W), PIL.Image.NEAREST)]
# query_poses = [query_pose2]
# it is better to run preprocessing (background removal) first
from gradio_new import preprocess_image, create_carvekit_interface
import numpy as np
import PIL.Image as Image
pre_images = []
models = dict()
print('Instantiating Carvekit HiInterface...')
models['carvekit'] = create_carvekit_interface()
if not isinstance(input_images, list):
input_images = [input_images]
for raw_im in input_images:
input_im = preprocess_image(models, raw_im, True)
H, W = input_im.shape[:2]
pre_images.append(Image.fromarray((input_im * 255.0).astype(np.uint8)))
input_images = pre_images
# infer pipeline, in original zero123 num_inference_steps=76
images = pipe(input_imgs=input_images, prompt_imgs=input_images, poses=query_poses, height=H, width=W,
guidance_scale=3.0, num_images_per_prompt=num_images_per_prompt, num_inference_steps=50).images
# save imgs
log_dir = "logs"
os.makedirs(log_dir, exist_ok=True)
bs = len(input_images)
i = 0
for obj in range(bs):
for idx in range(num_images_per_prompt):
images[i].save(os.path.join(log_dir,f"obj{obj}_{idx}.jpg"))
i += 1
```
### Stable Diffusion XL Reference
This pipeline uses Reference Control with Stable Diffusion XL. Refer to the [Stable Diffusion Reference](https://github.com/huggingface/diffusers/blob/main/examples/community/README.md#stable-diffusion-reference) section for more details.
```py
import torch
from PIL import Image
from diffusers.utils import load_image
from diffusers import DiffusionPipeline
from diffusers.schedulers import UniPCMultistepScheduler
input_image = load_image("https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png")
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    custom_pipeline="stable_diffusion_xl_reference",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16").to('cuda:0')
# Alternatively, import StableDiffusionXLReferencePipeline from
# examples/community/stable_diffusion_xl_reference.py and call
# StableDiffusionXLReferencePipeline.from_pretrained(...) with the same arguments.
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
result_img = pipe(ref_image=input_image,
prompt="1girl",
num_inference_steps=20,
reference_attn=True,
reference_adain=True).images[0]
```
Reference Image

Output Image
`prompt: 1 girl`
`reference_attn=True, reference_adain=True, num_inference_steps=20`

Reference Image

Output Image
`prompt: A dog`
`reference_attn=True, reference_adain=False, num_inference_steps=20`

Reference Image

Output Image
`prompt: An astronaut riding a lion`
`reference_attn=True, reference_adain=True, num_inference_steps=20`

### Stable diffusion fabric pipeline
FABRIC is an approach applicable to a wide range of popular diffusion models. It exploits
the self-attention layer present in the most widely used architectures to condition
the diffusion process on a set of feedback images.
```python
import requests
import torch
from PIL import Image
from io import BytesIO
from diffusers import DiffusionPipeline
# load the pipeline
# make sure you're logged in with `huggingface-cli login`
model_id_or_path = "runwayml/stable-diffusion-v1-5"
# can also be used with dreamlike-art/dreamlike-photoreal-2.0
pipe = DiffusionPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16, custom_pipeline="pipeline_fabric").to("cuda")
# let's specify a prompt
prompt = "An astronaut riding an elephant"
negative_prompt = "lowres, cropped"
# call the pipeline
image = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
num_inference_steps=20,
generator=torch.manual_seed(12)
).images[0]
image.save("horse_to_elephant.jpg")
# let's try another example with feedback
url = "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/A%20black%20colored%20car.png"
response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
prompt = "photo, A blue colored car, fish eye"
liked = [init_image]
# the same goes for `disliked`
# call the pipeline
torch.manual_seed(0)
image = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
liked = liked,
num_inference_steps=20,
).images[0]
image.save("black_to_blue.png")
```
*With enough feedback you can create very similar high-quality images.*
The original codebase can be found at [sd-fabric/fabric](https://github.com/sd-fabric/fabric), and available checkpoints are [dreamlike-art/dreamlike-photoreal-2.0](https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0), [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5), and [stabilityai/stable-diffusion-2-1](https://huggingface.co/stabilityai/stable-diffusion-2-1) (may give unexpected results).
Let's have a look at the images (_512×512_):
| Without Feedback | With Feedback (1st image) |
|---------------------|---------------------|
|  |  |
### Masked Im2Im Stable Diffusion Pipeline
This pipeline reimplements the sketch inpaint feature from A1111 for non-inpaint models. The following code reads two images, the original and one with a mask painted over it. It computes the mask as the difference of the two images and does the inpainting in the area defined by the mask.
```python
import numpy
import PIL
import torch
from diffusers import EulerAncestralDiscreteScheduler
# MaskedStableDiffusionImg2ImgPipeline is defined in examples/community/masked_stable_diffusion_img2img.py
from masked_stable_diffusion_img2img import MaskedStableDiffusionImg2ImgPipeline

img = PIL.Image.open("./mech.png")
# read image with mask painted over
img_paint = PIL.Image.open("./mech_painted.png")
neq = numpy.any(numpy.array(img) != numpy.array(img_paint), axis=-1)
mask = neq / neq.max()
pipeline = MaskedStableDiffusionImg2ImgPipeline.from_pretrained("frankjoshua/icbinpICantBelieveIts_v8")
# works best with EulerAncestralDiscreteScheduler
pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config)
generator = torch.Generator(device="cpu").manual_seed(4)
prompt = "a man wearing a mask"
result = pipeline(prompt=prompt, image=img_paint, mask=mask, strength=0.75,
generator=generator)
result.images[0].save("result.png")
```
original image mech.png
<img src=<https://github.com/noskill/diffusers/assets/733626/10ad972d-d655-43cb-8de1-039e3d79e849> width="25%" >
image with mask mech_painted.png
<img src=<https://github.com/noskill/diffusers/assets/733626/c334466a-67fe-4377-9ff7-f46021b9c224> width="25%" >
result:
<img src=<https://github.com/noskill/diffusers/assets/733626/23a0a71d-51db-471e-926a-107ac62512a8> width="25%" >
### Prompt2Prompt Pipeline
Prompt2Prompt allows the following edits:
- ReplaceEdit (change words in prompt)
- ReplaceEdit with local blend (change words in prompt, keep image part unrelated to changes constant)
- RefineEdit (add words to prompt)
- RefineEdit with local blend (add words to prompt, keep image part unrelated to changes constant)
- ReweightEdit (modulate importance of words)
Here's a full example for `ReplaceEdit`:
```python
import torch
import numpy as np
import matplotlib.pyplot as plt
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", custom_pipeline="pipeline_prompt2prompt").to("cuda")
prompts = ["A turtle playing with a ball",
"A monkey playing with a ball"]
cross_attention_kwargs = {
"edit_type": "replace",
"cross_replace_steps": 0.4,
"self_replace_steps": 0.4
}
outputs = pipe(prompt=prompts, height=512, width=512, num_inference_steps=50, cross_attention_kwargs=cross_attention_kwargs)
```
And abbreviated examples for the other edits:
`ReplaceEdit with local blend`
```python
prompts = ["A turtle playing with a ball",
"A monkey playing with a ball"]
cross_attention_kwargs = {
"edit_type": "replace",
"cross_replace_steps": 0.4,
"self_replace_steps": 0.4,
"local_blend_words": ["turtle", "monkey"]
}
```
`RefineEdit`
```python
prompts = ["A turtle",
"A turtle in a forest"]
cross_attention_kwargs = {
"edit_type": "refine",
"cross_replace_steps": 0.4,
"self_replace_steps": 0.4,
}
```
`RefineEdit with local blend`
```python
prompts = ["A turtle",
"A turtle in a forest"]
cross_attention_kwargs = {
"edit_type": "refine",
"cross_replace_steps": 0.4,
"self_replace_steps": 0.4,
"local_blend_words": ["in", "a" , "forest"]
}
```
`ReweightEdit`
```python
prompts = ["A smiling turtle"] * 2
cross_attention_kwargs = {
"edit_type": "reweight",
"cross_replace_steps": 0.4,
"self_replace_steps": 0.4,
"equalizer_words": ["smiling"],
"equalizer_strengths": [5]
}
```
Side note: See [this GitHub gist](https://gist.github.com/UmerHA/b65bb5fb9626c9c73f3ade2869e36164) if you want to visualize the attention maps.
### Latent Consistency Pipeline
Latent Consistency Models was proposed in [Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference](https://arxiv.org/abs/2310.04378) by _Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, Hang Zhao_ from Tsinghua University.
The abstract of the paper reads as follows:
*Latent Diffusion models (LDMs) have achieved remarkable results in synthesizing high-resolution images. However, the iterative sampling process is computationally intensive and leads to slow generation. Inspired by Consistency Models (Song et al.), we propose Latent Consistency Models (LCMs), enabling swift inference with minimal steps on any pre-trained LDMs, including Stable Diffusion (Rombach et al.). Viewing the guided reverse diffusion process as solving an augmented probability flow ODE (PF-ODE), LCMs are designed to directly predict the solution of such ODE in latent space, mitigating the need for numerous iterations and allowing rapid, high-fidelity sampling. Efficiently distilled from pre-trained classifier-free guided diffusion models, a high-quality 768 x 768 2~4-step LCM takes only 32 A100 GPU hours for training. Furthermore, we introduce Latent Consistency Fine-tuning (LCF), a novel method that is tailored for fine-tuning LCMs on customized image datasets. Evaluation on the LAION-5B-Aesthetics dataset demonstrates that LCMs achieve state-of-the-art text-to-image generation performance with few-step inference. Project Page: [this https URL](https://latent-consistency-models.github.io/)*
The model can be used with `diffusers` as follows:
1. Load the model from the community pipeline:
```py
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", custom_pipeline="latent_consistency_txt2img", custom_revision="main")
# To save GPU memory, torch.float16 can be used, but it may compromise image quality.
pipe.to(torch_device="cuda", torch_dtype=torch.float32)
```
2. Run inference with as little as 4 steps:
```py
prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
# Can be set to 1~50 steps. LCM supports fast inference with as few as 4 steps. Recommended: 1~8 steps.
num_inference_steps = 4
images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0, lcm_origin_steps=50, output_type="pil").images
```
For any questions or feedback, feel free to reach out to [Simian Luo](https://github.com/luosiallen).
You can also try this pipeline directly in the [🚀 official spaces](https://huggingface.co/spaces/SimianLuo/Latent_Consistency_Model).
### Latent Consistency Img2img Pipeline
This pipeline extends the Latent Consistency Pipeline to allow it to take an input image.
1. Load the model from the community pipeline:
```py
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", custom_pipeline="latent_consistency_img2img")
# To save GPU memory, torch.float16 can be used, but it may compromise image quality.
pipe.to(torch_device="cuda", torch_dtype=torch.float32)
```
2. Run inference with as little as 4 steps:
```py
prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
input_image=Image.open("myimg.png")
strength = 0.5 #strength =0 (no change) strength=1 (completely overwrite image)
# Can be set to 1~50 steps. LCM support fast inference even <= 4 steps. Recommend: 1~8 steps.
num_inference_steps = 4
images = pipe(prompt=prompt, image=input_image, strength=strength, num_inference_steps=num_inference_steps, guidance_scale=8.0, lcm_origin_steps=50, output_type="pil").images
```
### Latent Consistency Interpolation Pipeline
This pipeline extends the Latent Consistency Pipeline to allow for interpolation of the latent space between multiple prompts. It is similar to the [Stable Diffusion Interpolate](https://github.com/huggingface/diffusers/blob/main/examples/community/interpolate_stable_diffusion.py) and [unCLIP Interpolate](https://github.com/huggingface/diffusers/blob/main/examples/community/unclip_text_interpolation.py) community pipelines.
```py
import torch
import numpy as np
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", custom_pipeline="latent_consistency_interpolate")
# To save GPU memory, torch.float16 can be used, but it may compromise image quality.
pipe.to(torch_device="cuda", torch_dtype=torch.float32)
prompts = [
"Self-portrait oil painting, a beautiful cyborg with golden hair, Margot Robbie, 8k",
"Self-portrait oil painting, an extremely strong man, body builder, Huge Jackman, 8k",
"An astronaut floating in space, renaissance art, realistic, high quality, 8k",
"Oil painting of a cat, cute, dream-like",
"Hugging face emoji, cute, realistic"
]
num_inference_steps = 4
num_interpolation_steps = 60
seed = 1337
torch.manual_seed(seed)
np.random.seed(seed)
images = pipe(
prompt=prompts,
height=512,
width=512,
num_inference_steps=num_inference_steps,
num_interpolation_steps=num_interpolation_steps,
guidance_scale=8.0,
embedding_interpolation_type="lerp",
latent_interpolation_type="slerp",
process_batch_size=4, # Make it higher or lower based on your GPU memory
    generator=torch.Generator().manual_seed(seed),
)
assert len(images) == (len(prompts) - 1) * num_interpolation_steps
```
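Assuming the returned `images` are standard PIL frames ordered along the interpolation path, they can be previewed as a GIF with `diffusers.utils.export_to_gif` (a sketch; the filename is arbitrary):
```py
from diffusers.utils import export_to_gif

# Write the interpolation frames out as an animated GIF for a quick preview
export_to_gif(images, "lcm_interpolation.gif")
```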
### StableDiffusionUpscaleLDM3D Pipeline
[LDM3D-VR](https://arxiv.org/pdf/2311.03226.pdf) is an extended version of LDM3D.
The abstract from the paper is:
*Latent diffusion models have proven to be state-of-the-art in the creation and manipulation of visual outputs. However, as far as we know, the generation of depth maps jointly with RGB is still limited. We introduce LDM3D-VR, a suite of diffusion models targeting virtual reality development that includes LDM3D-pano and LDM3D-SR. These models enable the generation of panoramic RGBD based on textual prompts and the upscaling of low-resolution inputs to high-resolution RGBD, respectively. Our models are fine-tuned from existing pretrained models on datasets containing panoramic/high-resolution RGB images, depth maps and captions. Both models are evaluated in comparison to existing related methods*
Two checkpoints are available for use:
- [ldm3d-pano](https://huggingface.co/Intel/ldm3d-pano). This checkpoint enables the generation of panoramic images and requires the StableDiffusionLDM3DPipeline pipeline to be used.
- [ldm3d-sr](https://huggingface.co/Intel/ldm3d-sr). This checkpoint enables the upscaling of RGB and depth images. It can be used in cascade after the original LDM3D pipeline using the StableDiffusionUpscaleLDM3DPipeline pipeline.
```py
from PIL import Image
import os
import torch
from diffusers import StableDiffusionLDM3DPipeline, DiffusionPipeline
# Generate a rgb/depth output from LDM3D
pipe_ldm3d = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-4c")
pipe_ldm3d.to("cuda")
prompt =f"A picture of some lemons on a table"
output = pipe_ldm3d(prompt)
rgb_image, depth_image = output.rgb, output.depth
rgb_image[0].save(f"lemons_ldm3d_rgb.jpg")
depth_image[0].save(f"lemons_ldm3d_depth.png")
# Upscale the previous output to a resolution of (1024, 1024)
pipe_ldm3d_upscale = DiffusionPipeline.from_pretrained("Intel/ldm3d-sr", custom_pipeline="pipeline_stable_diffusion_upscale_ldm3d")
pipe_ldm3d_upscale.to("cuda")
low_res_img = Image.open(f"lemons_ldm3d_rgb.jpg").convert("RGB")
low_res_depth = Image.open(f"lemons_ldm3d_depth.png").convert("L")
outputs = pipe_ldm3d_upscale(prompt="high quality high resolution uhd 4k image", rgb=low_res_img, depth=low_res_depth, num_inference_steps=50, target_res=[1024, 1024])
upscaled_rgb, upscaled_depth = outputs.rgb[0], outputs.depth[0]
upscaled_rgb.save(f"upscaled_lemons_rgb.png")
upscaled_depth.save(f"upscaled_lemons_depth.png")
```
### ControlNet + T2I Adapter Pipeline
This pipeline combines ControlNet and T2IAdapter into a single pipeline, where the forward pass is executed once.
It receives `control_image` and `adapter_image`, as well as `controlnet_conditioning_scale` and `adapter_conditioning_scale`, for the ControlNet and Adapter modules, respectively. Whenever `adapter_conditioning_scale = 0` or `controlnet_conditioning_scale = 0`, it acts as a full ControlNet module or as a full T2IAdapter module, respectively.
```py
import cv2
import numpy as np
import torch
from controlnet_aux.midas import MidasDetector
from PIL import Image
from diffusers import AutoencoderKL, ControlNetModel, MultiAdapter, T2IAdapter
from diffusers.pipelines.controlnet.multicontrolnet import MultiControlNetModel
from diffusers.utils import load_image
from examples.community.pipeline_stable_diffusion_xl_controlnet_adapter import (
StableDiffusionXLControlNetAdapterPipeline,
)
controlnet_depth = ControlNetModel.from_pretrained(
"diffusers/controlnet-depth-sdxl-1.0",
torch_dtype=torch.float16,
variant="fp16",
use_safetensors=True
)
adapter_depth = T2IAdapter.from_pretrained(
"TencentARC/t2i-adapter-depth-midas-sdxl-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True)
pipe = StableDiffusionXLControlNetAdapterPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
controlnet=controlnet_depth,
adapter=adapter_depth,
vae=vae,
variant="fp16",
use_safetensors=True,
torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")
pipe.enable_xformers_memory_efficient_attention()
# pipe.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2)
midas_depth = MidasDetector.from_pretrained(
"valhalla/t2iadapter-aux-models", filename="dpt_large_384.pt", model_type="dpt_large"
).to("cuda")
prompt = "a tiger sitting on a park bench"
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
image = load_image(img_url).resize((1024, 1024))
depth_image = midas_depth(
image, detect_resolution=512, image_resolution=1024
)
strength = 0.5
images = pipe(
prompt,
control_image=depth_image,
adapter_image=depth_image,
num_inference_steps=30,
controlnet_conditioning_scale=strength,
adapter_conditioning_scale=strength,
).images
images[0].save("controlnet_and_adapter.png")
```
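As noted above, zeroing one of the conditioning scales reduces the combined pipeline to a single module. A minimal sketch, reusing `pipe`, `prompt`, and `depth_image` from the example above, that behaves as a pure ControlNet pass:
```py
# Disable the T2I-Adapter branch entirely: equivalent to a plain ControlNet run
images = pipe(
    prompt,
    control_image=depth_image,
    adapter_image=depth_image,
    num_inference_steps=30,
    controlnet_conditioning_scale=1.0,
    adapter_conditioning_scale=0.0,
).images
```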
### ControlNet + T2I Adapter + Inpainting Pipeline
```py
import cv2
import numpy as np
import torch
from controlnet_aux.midas import MidasDetector
from PIL import Image
from diffusers import AutoencoderKL, ControlNetModel, MultiAdapter, T2IAdapter
from diffusers.pipelines.controlnet.multicontrolnet import MultiControlNetModel
from diffusers.utils import load_image
from examples.community.pipeline_stable_diffusion_xl_controlnet_adapter_inpaint import (
StableDiffusionXLControlNetAdapterInpaintPipeline,
)
controlnet_depth = ControlNetModel.from_pretrained(
"diffusers/controlnet-depth-sdxl-1.0",
torch_dtype=torch.float16,
variant="fp16",
use_safetensors=True
)
adapter_depth = T2IAdapter.from_pretrained(
"TencentARC/t2i-adapter-depth-midas-sdxl-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True)
pipe = StableDiffusionXLControlNetAdapterInpaintPipeline.from_pretrained(
"diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
controlnet=controlnet_depth,
adapter=adapter_depth,
vae=vae,
variant="fp16",
use_safetensors=True,
torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")
pipe.enable_xformers_memory_efficient_attention()
# pipe.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2)
midas_depth = MidasDetector.from_pretrained(
"valhalla/t2iadapter-aux-models", filename="dpt_large_384.pt", model_type="dpt_large"
).to("cuda")
prompt = "a tiger sitting on a park bench"
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
image = load_image(img_url).resize((1024, 1024))
mask_image = load_image(mask_url).resize((1024, 1024))
depth_image = midas_depth(
image, detect_resolution=512, image_resolution=1024
)
strength = 0.4
images = pipe(
prompt,
image=image,
mask_image=mask_image,
control_image=depth_image,
adapter_image=depth_image,
num_inference_steps=30,
controlnet_conditioning_scale=strength,
adapter_conditioning_scale=strength,
strength=0.7,
).images
images[0].save("controlnet_and_adapter_inpaint.png")
```
### Regional Prompting Pipeline
This pipeline is a port of the [Regional Prompter extension](https://github.com/hako-mikan/sd-webui-regional-prompter) for [Stable Diffusion web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) to diffusers.
This code implements a pipeline for the Stable Diffusion model, enabling the division of the canvas into multiple regions, with different prompts applicable to each region. Users can specify regions in two ways: using `Cols` and `Rows` modes for grid-like divisions, or the `Prompt` mode for regions calculated based on prompts.

### Usage
### Sample Code
```py
import time
from examples.community.regional_prompting_stable_diffusion import RegionalPromptingStableDiffusionPipeline

# model_path points to a local .safetensors checkpoint; vae is an AutoencoderKL instance
pipe = RegionalPromptingStableDiffusionPipeline.from_single_file(model_path, vae=vae)
rp_args = {
    "mode": "rows",
    "div": "1;1;1"
}
prompt ="""
green hair twintail BREAK
red blouse BREAK
blue skirt
"""
images = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
guidance_scale=7.5,
height = 768,
width = 512,
num_inference_steps =20,
num_images_per_prompt = 1,
rp_args = rp_args
).images
timestamp = time.strftime(r"%Y%m%d%H%M%S")
for i, image in enumerate(images):
    image.save(f"img-{timestamp}-{i+1}.png")
```
### Cols, Rows mode
In the Cols, Rows mode, you can split the screen vertically and horizontally and assign prompts to each region. The split ratio can be specified by 'div', and you can set the division ratio like '3;3;2' or '0.1;0.5'. Furthermore, as will be described later, you can also subdivide the split Cols, Rows to specify more complex regions.
In this image, the image is divided into three parts, and a separate prompt is applied to each. The prompts are divided by 'BREAK', and each is applied to the respective region.

```
green hair twintail BREAK
red blouse BREAK
blue skirt
```
### 2-Dimensional division
The prompt consists of instructions separated by the term `BREAK` and is assigned to different regions of a two-dimensional space. The image is initially split in the main splitting direction, which in this case is rows, due to the presence of a single semicolon `;`, dividing the space into an upper and a lower section. Additional sub-splitting is then applied, indicated by commas. The upper row is split into ratios of `2:1:1`, while the lower row is split into a ratio of `4:6`. The rows themselves are split in a `1:2` ratio. According to the reference image, the blue sky is designated as the first region, green hair as the second, the bookshelf as the third, and so on, in a sequence based on their position from the top left. The terrarium is placed on the desk in the fourth region, and the orange dress and sofa are in the fifth region, conforming to their respective splits.
```
rp_args = {
"mode":"rows",
"div": "1,2,1,1;2,4,6"
}
prompt ="""
blue sky BREAK
green hair BREAK
book shelf BREAK
terrarium on desk BREAK
orange dress and sofa
"""
```

### Prompt Mode
There are limitations to methods of specifying regions in advance. This is because region specification can be a hindrance when designating complex shapes or dynamic compositions. When the region is specified by a prompt, it is determined after image generation has begun. This makes it possible to accommodate dynamic compositions and complex regions.
For further information, see [here](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/main/prompt_en.md).
### Syntax
```
baseprompt target1 target2 BREAK
effect1, target1 BREAK
effect2 ,target2
```
First, write the base prompt. In the base prompt, write the words (target1, target2) for which you want to create masks. Next, separate them with BREAK. Next, write the prompt corresponding to target1. Then enter a comma and write target1. The order of the targets in the base prompt and the order of the BREAK-separated target prompts do not need to match:
```
target2 baseprompt target1 BREAK
effect1, target1 BREAK
effect2 ,target2
```
is also effective.
### Sample
In this example, masks are calculated for shirt, tie, and skirt, and color prompts are specified only for those regions.
```
rp_args = {
"mode":"prompt-ex",
"save_mask":True,
"th": "0.4,0.6,0.6",
}
prompt ="""
a girl in street with shirt, tie, skirt BREAK
red, shirt BREAK
green, tie BREAK
blue , skirt
"""
```

### Threshold
The threshold is used to determine the mask created by the prompt. It can be set as many times as there are masks, since the appropriate range varies widely depending on the target prompt. If multiple regions are used, enter the values separated by commas. For example, hair tends to be ambiguous and requires a small value, while a face tends to be large and requires a small value. These should be ordered by BREAK.
```
a lady ,hair, face BREAK
red, hair BREAK
tanned ,face
```
`threshold : 0.4,0.6`
If only one input is given for multiple regions, they are all assumed to be the same value.
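For reference, the `rp_args` for the example above would look like this (a sketch using the `th` parameter described in the parameter list below):
```
rp_args = {
    "mode": "prompt",
    "th": "0.4,0.6",
}
```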
### Prompt and Prompt-EX
The difference is that in Prompt, duplicate regions are added, whereas in Prompt-EX, duplicate regions are overwritten sequentially. Since they are processed in order, specifying a TARGET with a large region first makes it easier for the effect of smaller regions to remain intact.
### Accuracy
In the case of a 512 x 512 image, Attention mode reduces the size of the region to about 8 x 8 pixels deep inside the U-Net, so small regions get mixed up; Latent mode calculates at 64 x 64, so the regions are exact.
```
girl hair twintail frills,ribbons, dress, face BREAK
girl, ,face
```
### Mask
When an image is generated, the generated mask is displayed. It is generated at the same size as the image, but is actually used at a much smaller size.
### Use common prompt
You can attach a prompt to all regional prompts by placing it first and separating it with ADDCOMM. This is useful when you want to include elements common to all regions. For example, when generating pictures of three people with different appearances, it's necessary to include the instruction of 'three people' in all regions. It's also useful when inserting quality tags and other things. For example, if you write as follows:
```
best quality, 3persons in garden, ADDCOMM
a girl white dress BREAK
a boy blue shirt BREAK
an old man red suit
```
If common is enabled, this prompt is converted to the following:
```
best quality, 3persons in garden, a girl white dress BREAK
best quality, 3persons in garden, a boy blue shirt BREAK
best quality, 3persons in garden, an old man red suit
```
### Negative prompt
Negative prompts are equally effective across all regions, but it is also possible to set region-specific negative prompts. The number of BREAKs must match the number in the positive prompt. If the numbers do not match, the negative prompt will be applied without being divided into regions.
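For example, a region-specific negative prompt with a matching number of BREAKs might look like this (a sketch):
```
prompt = """
a girl in white dress BREAK
a boy in blue shirt BREAK
an old man in red suit
"""
negative_prompt = """
low quality BREAK
blurry BREAK
deformed
"""
```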
### Parameters
To activate the Regional Prompter, it is necessary to enter settings in `rp_args`, which is a dictionary. The items that can be set are as follows.
### Input Parameters
Parameters are specified through the `rp_args` dictionary:
```
rp_args = {
"mode":"rows",
"div": "1;1;1"
}
pipe(prompt=prompt, rp_args=rp_args)
```
### Required Parameters
- `mode`: Specifies the method for defining regions. Choose from `Cols`, `Rows`, `Prompt` or `Prompt-Ex`. This parameter is case-insensitive.
- `div`: Used in `Cols` and `Rows` modes. Details on how to specify this are provided under the respective `Cols` and `Rows` sections.
- `th`: Used in `Prompt` mode. The method of specification is detailed under the `Prompt` section.
### Optional Parameters
- `save_mask`: In `Prompt` mode, choose whether to output the generated mask along with the image. The default is `False`.
The Pipeline supports `compel` syntax. Input prompts using the `compel` structure will be automatically applied and processed.
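For instance, assuming compel's `+` emphasis syntax (an illustration, not part of this pipeline's own documentation), a weighted regional prompt might look like:
```
prompt = """
(green hair)++ twintail BREAK
red blouse BREAK
blue skirt
"""
```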
### Diffusion Posterior Sampling Pipeline
- Reference paper
```
@article{chung2022diffusion,
title={Diffusion posterior sampling for general noisy inverse problems},
author={Chung, Hyungjin and Kim, Jeongsol and Mccann, Michael T and Klasky, Marc L and Ye, Jong Chul},
journal={arXiv preprint arXiv:2209.14687},
year={2022}
}
```
- This pipeline allows zero-shot conditional sampling from the posterior distribution $p(x|y)$, given an observation $y$, an unconditional generative model $p(x)$, and a differentiable operator $y=f(x)$.
- For example, $f(.)$ can be a downsampling operator, in which case $y$ is a downsampled image and the pipeline becomes a super-resolution pipeline.
- To use this pipeline, you need to know your operator $f(.)$ and the corrupted image $y$, and pass them during the call. For example, as in the main function of dps_pipeline.py, you first define the Gaussian blurring operator $f(.)$. The operator should be a callable `nn.Module`, with all parameter gradients disabled:
```python
import numpy as np
import scipy.ndimage
import torch
import torch.nn.functional as F
from torch import nn

# define the Gaussian blurring operator first
class GaussianBlurOperator(nn.Module):
def __init__(self, kernel_size, intensity):
super().__init__()
class Blurkernel(nn.Module):
def __init__(self, blur_type='gaussian', kernel_size=31, std=3.0):
super().__init__()
self.blur_type = blur_type
self.kernel_size = kernel_size
self.std = std
self.seq = nn.Sequential(
nn.ReflectionPad2d(self.kernel_size//2),
nn.Conv2d(3, 3, self.kernel_size, stride=1, padding=0, bias=False, groups=3)
)
self.weights_init()
def forward(self, x):
return self.seq(x)
def weights_init(self):
if self.blur_type == "gaussian":
n = np.zeros((self.kernel_size, self.kernel_size))
n[self.kernel_size // 2,self.kernel_size // 2] = 1
k = scipy.ndimage.gaussian_filter(n, sigma=self.std)
k = torch.from_numpy(k)
self.k = k
for name, f in self.named_parameters():
f.data.copy_(k)
elif self.blur_type == "motion":
                    # `Kernel` comes from the external `motionblur` package
                    k = Kernel(size=(self.kernel_size, self.kernel_size), intensity=self.std).kernelMatrix
k = torch.from_numpy(k)
self.k = k
for name, f in self.named_parameters():
f.data.copy_(k)
def update_weights(self, k):
if not torch.is_tensor(k):
k = torch.from_numpy(k)
for name, f in self.named_parameters():
f.data.copy_(k)
def get_kernel(self):
return self.k
self.kernel_size = kernel_size
self.conv = Blurkernel(blur_type='gaussian',
kernel_size=kernel_size,
std=intensity)
self.kernel = self.conv.get_kernel()
self.conv.update_weights(self.kernel.type(torch.float32))
for param in self.parameters():
param.requires_grad=False
def forward(self, data, **kwargs):
return self.conv(data)
def transpose(self, data, **kwargs):
return data
def get_kernel(self):
return self.kernel.view(1, 1, self.kernel_size, self.kernel_size)
```
- Next, you should obtain the corrupted image $y$ through the operator. In this example, we generate $y$ from the source image $x$. However, in practice, having the operator $f(.)$ and the corrupted image $y$ is enough:
```python
from PIL import Image
from torchvision.utils import save_image

# set up source image
src = Image.open('sample.png')
# read image into [1,3,H,W]
src = torch.from_numpy(np.array(src, dtype=np.float32)).permute(2,0,1)[None]
# normalize image to [-1,1]
src = (src / 127.5) - 1.0
src = src.to("cuda")
# set up operator and measurement
operator = GaussianBlurOperator(kernel_size=61, intensity=3.0).to("cuda")
measurement = operator(src)
# save the source and corrupted images
save_image((src+1.0)/2.0, "dps_src.png")
save_image((measurement+1.0)/2.0, "dps_mea.png")
```
- We provide an example pair of saved source and corrupted images, using the Gaussian blur operator above
- Source image:
- 
- Gaussian blurred image:
- 
- You can download those images to run the example on your own.
- Next, we need to define a loss function for diffusion posterior sampling. For most cases, the RMSE-style loss below is fine:
```python
def RMSELoss(yhat, y):
    # L2 distance between the predicted measurement and the observation
    return torch.sqrt(torch.sum((yhat - y) ** 2))
```
- Next, as with any other diffusion model, we need the score estimator and the scheduler. As we are working with $256 \times 256$ face images, we use google/ddpm-celebahq-256:
```python
from diffusers import DDPMScheduler, UNet2DModel

# set up scheduler
scheduler = DDPMScheduler.from_pretrained("google/ddpm-celebahq-256")
scheduler.set_timesteps(1000)
# set up model
model = UNet2DModel.from_pretrained("google/ddpm-celebahq-256").to("cuda")
```
- And finally, run the pipeline:
```python
# finally, the pipeline (DPSPipeline is defined in examples/community/dps_pipeline.py)
dpspipe = DPSPipeline(model, scheduler)
image = dpspipe(
measurement = measurement,
operator = operator,
loss_fn = RMSELoss,
zeta = 1.0,
).images[0]
image.save("dps_generated_image.png")
```
- `zeta` is a hyperparameter in the range $[0, 1]$. It needs to be tuned for the best effect. By setting `zeta=1`, you should be able to get the following reconstructed result:
- Reconstructed image:
- 
- The reconstruction is perceptually similar to the source image, but differs in its details.
- In dps_pipeline.py, we also provide a super-resolution example, which should produce:
- Downsampled image:
- 
- Reconstructed image:
- 
### AnimateDiff ControlNet Pipeline
This pipeline combines AnimateDiff and ControlNet. Enjoy precise motion control for your videos! Refer to [this](https://github.com/huggingface/diffusers/issues/5866) issue for more details.
```py
import torch
from diffusers import AutoencoderKL, ControlNetModel, MotionAdapter
from diffusers.pipelines import DiffusionPipeline
from diffusers.schedulers import DPMSolverMultistepScheduler
from PIL import Image
motion_id = "guoyww/animatediff-motion-adapter-v1-5-2"
adapter = MotionAdapter.from_pretrained(motion_id)
controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16)
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
pipe = DiffusionPipeline.from_pretrained(
model_id,
motion_adapter=adapter,
controlnet=controlnet,
vae=vae,
custom_pipeline="pipeline_animatediff_controlnet",
).to(device="cuda", dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_pretrained(
model_id, subfolder="scheduler", beta_schedule="linear", clip_sample=False, timestep_spacing="linspace", steps_offset=1
)
pipe.enable_vae_slicing()
conditioning_frames = []
for i in range(1, 16 + 1):
conditioning_frames.append(Image.open(f"frame_{i}.png"))
prompt = "astronaut in space, dancing"
negative_prompt = "bad quality, worst quality, jpeg artifacts, ugly"
result = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
width=512,
height=768,
conditioning_frames=conditioning_frames,
num_inference_steps=20,
)
from diffusers.utils import export_to_gif
export_to_gif(result.frames[0], "result.gif")
```
<table>
<tr><td colspan="2" align=center><b>Conditioning Frames</b></td></tr>
<tr align=center>
<td align=center><img src="https://user-images.githubusercontent.com/7365912/265043418-23291941-864d-495a-8ba8-d02e05756396.gif" alt="input-frames"></td>
</tr>
<tr><td colspan="2" align=center><b>AnimateDiff model: SG161222/Realistic_Vision_V5.1_noVAE</b></td></tr>
<tr>
<td align=center><img src="https://github.com/huggingface/diffusers/assets/72266394/baf301e2-d03c-4129-bd84-203a1de2b2be" alt="gif-1"></td>
<td align=center><img src="https://github.com/huggingface/diffusers/assets/72266394/9f923475-ecaf-452b-92c8-4e42171182d8" alt="gif-2"></td>
</tr>
<tr><td colspan="2" align=center><b>AnimateDiff model: CardosAnime</b></td></tr>
<tr>
<td align=center><img src="https://github.com/huggingface/diffusers/assets/72266394/b2c41028-38a0-45d6-86ed-fec7446b87f7" alt="gif-1"></td>
<td align=center><img src="https://github.com/huggingface/diffusers/assets/72266394/eb7d2952-72e4-44fa-b664-077c79b4fc70" alt="gif-2"></td>
</tr>
</table>
You can also use multiple controlnets at once!
```python
import imageio
import requests
import torch
from io import BytesIO
from diffusers import AutoencoderKL, ControlNetModel, MotionAdapter
from diffusers.pipelines import DiffusionPipeline
from diffusers.schedulers import DPMSolverMultistepScheduler
from PIL import Image
motion_id = "guoyww/animatediff-motion-adapter-v1-5-2"
adapter = MotionAdapter.from_pretrained(motion_id)
controlnet1 = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16)
controlnet2 = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
pipe = DiffusionPipeline.from_pretrained(
model_id,
motion_adapter=adapter,
controlnet=[controlnet1, controlnet2],
vae=vae,
custom_pipeline="pipeline_animatediff_controlnet",
).to(device="cuda", dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_pretrained(
model_id, subfolder="scheduler", clip_sample=False, timestep_spacing="linspace", steps_offset=1, beta_schedule="linear",
)
pipe.enable_vae_slicing()
def load_video(file_path: str):
images = []
if file_path.startswith(('http://', 'https://')):
# If the file_path is a URL
response = requests.get(file_path)
response.raise_for_status()
content = BytesIO(response.content)
vid = imageio.get_reader(content)
else:
# Assuming it's a local file path
vid = imageio.get_reader(file_path)
for frame in vid:
pil_image = Image.fromarray(frame)
images.append(pil_image)
return images
video = load_video("dance.gif")
# You need to install it using `pip install controlnet_aux`
from controlnet_aux.processor import Processor
p1 = Processor("openpose_full")
cn1 = [p1(frame) for frame in video]
p2 = Processor("canny")
cn2 = [p2(frame) for frame in video]
prompt = "astronaut in space, dancing"
negative_prompt = "bad quality, worst quality, jpeg artifacts, ugly"
result = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
width=512,
height=768,
conditioning_frames=[cn1, cn2],
num_inference_steps=20,
)
from diffusers.utils import export_to_gif
export_to_gif(result.frames[0], "result.gif")
```
### DemoFusion
This pipeline is the official implementation of [DemoFusion: Democratising High-Resolution Image Generation With No $$$](https://arxiv.org/abs/2311.16973).
The original repo can be found [here](https://github.com/PRIS-CV/DemoFusion). The pipeline exposes the following arguments:
- `view_batch_size` (`int`, defaults to 16):
The batch size for multiple denoising paths. Typically, a larger batch size can result in higher efficiency but comes with increased GPU memory requirements.
- `stride` (`int`, defaults to 64):
The stride of moving local patches. A smaller stride is better for alleviating seam issues, but it also introduces additional computational overhead and inference time.
- `cosine_scale_1` (`float`, defaults to 3):
Controls the strength of the skip-residual. For specific impacts, please refer to Appendix C in the DemoFusion paper.
- `cosine_scale_2` (`float`, defaults to 1):
Controls the strength of dilated sampling. For specific impacts, please refer to Appendix C in the DemoFusion paper.
- `cosine_scale_3` (`float`, defaults to 1):
Controls the strength of the Gaussian filter. For specific impacts, please refer to Appendix C in the DemoFusion paper.
- `sigma` (`float`, defaults to 1):
The standard deviation of the Gaussian filter. A larger sigma promotes the global guidance of dilated sampling, but risks over-smoothing.
- `multi_decoder` (`bool`, defaults to True):
Determines whether to use a tiled decoder. Generally, when the resolution exceeds 3072x3072, a tiled decoder becomes necessary.
- `show_image` (`bool`, defaults to False):
Determines whether to show intermediate results during generation.
```py
import torch
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
custom_pipeline="pipeline_demofusion_sdxl",
custom_revision="main",
torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")
prompt = "Envision a portrait of an elderly woman, her face a canvas of time, framed by a headscarf with muted tones of rust and cream. Her eyes, blue like faded denim. Her attire, simple yet dignified."
negative_prompt = "blurry, ugly, duplicate, poorly drawn, deformed, mosaic"
images = pipe(
prompt,
negative_prompt=negative_prompt,
height=3072,
width=3072,
view_batch_size=16,
stride=64,
num_inference_steps=50,
guidance_scale=7.5,
cosine_scale_1=3,
cosine_scale_2=1,
cosine_scale_3=1,
sigma=0.8,
multi_decoder=True,
show_image=True
)
```
You can display and save the generated images as:
```py
from PIL import Image

def image_grid(imgs, save_path=None):
w = 0
for i, img in enumerate(imgs):
h_, w_ = imgs[i].size
w += w_
h = h_
grid = Image.new('RGB', size=(w, h))
grid_w, grid_h = grid.size
w = 0
for i, img in enumerate(imgs):
h_, w_ = imgs[i].size
grid.paste(img, box=(w, h - h_))
        if save_path is not None:
img.save(save_path + "/img_{}.jpg".format((i + 1) * 1024))
w += w_
return grid
image_grid(images, save_path="./outputs/")
```

### SDE Drag pipeline
This pipeline provides drag-and-drop image editing using stochastic differential equations. It enables image editing by inputting prompt, image, mask_image, source_points, and target_points.

See the [paper](https://arxiv.org/abs/2311.01410), [paper page](https://ml-gsai.github.io/SDE-Drag-demo/), and [original repo](https://github.com/ML-GSAI/SDE-Drag) for more information.
```py
import PIL
import torch
from diffusers import DDIMScheduler, DiffusionPipeline
# Load the pipeline
model_path = "runwayml/stable-diffusion-v1-5"
scheduler = DDIMScheduler.from_pretrained(model_path, subfolder="scheduler")
pipe = DiffusionPipeline.from_pretrained(model_path, scheduler=scheduler, custom_pipeline="sde_drag")
pipe.to('cuda')
# To save GPU memory, torch.float16 can be used, but it may compromise image quality.
# If not training LoRA, please avoid using torch.float16
# pipe.to(torch.float16)
# Provide prompt, image, mask image, and the starting and target points for drag editing.
prompt = "prompt of the image"
image = PIL.Image.open('/path/to/image')
mask_image = PIL.Image.open('/path/to/mask_image')
source_points = [[123, 456]]
target_points = [[234, 567]]
# train_lora is optional, and in most cases, using train_lora can better preserve consistency with the original image.
pipe.train_lora(prompt, image)
output = pipe(prompt, image, mask_image, source_points, target_points)
output_image = PIL.Image.fromarray(output)
output_image.save("./output.png")
```
### Instaflow Pipeline
InstaFlow is an ultra-fast, one-step image generator that achieves image quality close to Stable Diffusion while significantly reducing the demand for computational resources. This efficiency is made possible through the recent [Rectified Flow](https://github.com/gnobitab/RectifiedFlow) technique, which trains probability flows with straight trajectories and hence inherently requires only a single step for fast inference.
```python
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained("XCLIU/instaflow_0_9B_from_sd_1_5", torch_dtype=torch.float16, custom_pipeline="instaflow_one_step")
pipe.to("cuda") ### if GPU is not available, comment this line
prompt = "A hyper-realistic photo of a cute cat."
images = pipe(prompt=prompt,
num_inference_steps=1,
guidance_scale=0.0).images
images[0].save("./image.png")
```

You can also combine it with a LoRA out of the box, like <https://huggingface.co/artificialguybr/logo-redmond-1-5v-logo-lora-for-liberteredmond-sd-1-5>, to unlock cool use cases in a single step!
```python
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained("XCLIU/instaflow_0_9B_from_sd_1_5", torch_dtype=torch.float16, custom_pipeline="instaflow_one_step")
pipe.to("cuda") ### if GPU is not available, comment this line
pipe.load_lora_weights("artificialguybr/logo-redmond-1-5v-logo-lora-for-liberteredmond-sd-1-5")
prompt = "logo, A logo for a fitness app, dynamic running figure, energetic colors (red, orange) ),LogoRedAF ,"
images = pipe(prompt=prompt,
num_inference_steps=1,
guidance_scale=0.0).images
images[0].save("./image.png")
```

### Null-Text Inversion pipeline
This pipeline provides null-text inversion for editing real images. It enables null-text optimization and DDIM reconstruction with or without null-text optimization. No prompt-to-prompt code is implemented, as a `Prompt2PromptPipeline` already exists.
- Reference paper

```bibtex
@article{hertz2022prompt,
  title={Prompt-to-prompt image editing with cross attention control},
  author={Hertz, Amir and Mokady, Ron and Tenenbaum, Jay and Aberman, Kfir and Pritch, Yael and Cohen-Or, Daniel},
  booktitle={arXiv preprint arXiv:2208.01626},
  year={2022}
}
```
```py
from diffusers.schedulers import DDIMScheduler
from examples.community.pipeline_null_text_inversion import NullTextPipeline
import torch
# Load the pipeline
device = "cuda"
# Provide invert_prompt and the image for null-text optimization.
invert_prompt = "A lying cat"
input_image = "siamese.jpg"
steps = 50
# Provide the prompt used for generation: keep it the same as invert_prompt for reconstruction,
prompt = "A lying cat"
# or change it for editing.
prompt = "A lying dog"
# float32 is essential for the optimization to work well.
model_path = "runwayml/stable-diffusion-v1-5"
scheduler = DDIMScheduler(num_train_timesteps=1000, beta_start=0.00085, beta_end=0.0120, beta_schedule="scaled_linear")
pipeline = NullTextPipeline.from_pretrained(model_path, scheduler=scheduler, torch_dtype=torch.float32).to(device)
# Save the inverted latent to reuse it and save time on later runs.
inverted_latent, uncond = pipeline.invert(input_image, invert_prompt, num_inner_steps=10, early_stop_epsilon=1e-5, num_inference_steps=steps)
pipeline(prompt, uncond, inverted_latent, guidance_scale=7.5, num_inference_steps=steps).images[0].save(input_image + ".output.jpg")
```
### Rerender A Video
This is the Diffusers implementation of zero-shot video-to-video translation pipeline [Rerender A Video](https://github.com/williamyang1991/Rerender_A_Video) (without Ebsynth postprocessing). To run the code, please install gmflow. Then modify the path in `gmflow_dir`. After that, you can run the pipeline with:
```py
import sys
gmflow_dir = "/path/to/gmflow"
sys.path.insert(0, gmflow_dir)
from diffusers import ControlNetModel, AutoencoderKL, DDIMScheduler, DiffusionPipeline
from diffusers.utils import export_to_video
import numpy as np
import torch
import cv2
from PIL import Image
def video_to_frame(video_path: str, interval: int):
vidcap = cv2.VideoCapture(video_path)
success = True
count = 0
res = []
while success:
count += 1
success, image = vidcap.read()
if count % interval != 1:
continue
if image is not None:
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
res.append(image)
vidcap.release()
return res
input_video_path = 'path/to/video'
input_interval = 10
frames = video_to_frame(
input_video_path, input_interval)
control_frames = []
# get canny image
for frame in frames:
np_image = cv2.Canny(frame, 50, 100)
np_image = np_image[:, :, None]
np_image = np.concatenate([np_image, np_image, np_image], axis=2)
canny_image = Image.fromarray(np_image)
control_frames.append(canny_image)
# You can use any ControlNet here
controlnet = ControlNetModel.from_pretrained(
"lllyasviel/sd-controlnet-canny").to('cuda')
# You can use any fine-tuned SD model here
pipe = DiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5", controlnet=controlnet, custom_pipeline='rerender_a_video').to('cuda')
# Optional: you can download vae-ft-mse-840000-ema-pruned.ckpt to enhance the results
# pipe.vae = AutoencoderKL.from_single_file(
# "path/to/vae-ft-mse-840000-ema-pruned.ckpt").to('cuda')
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
generator = torch.manual_seed(0)
frames = [Image.fromarray(frame) for frame in frames]
output_frames = pipe(
"a beautiful woman in CG style, best quality, extremely detailed",
frames,
control_frames,
num_inference_steps=20,
strength=0.75,
controlnet_conditioning_scale=0.7,
generator=generator,
warp_start=0.0,
warp_end=0.1,
mask_start=0.5,
mask_end=0.8,
mask_strength=0.5,
negative_prompt='longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'
).frames[0]
export_to_video(
output_frames, "/path/to/video.mp4", 5)
```
### StyleAligned Pipeline
This pipeline is the implementation of [Style Aligned Image Generation via Shared Attention](https://arxiv.org/abs/2312.02133). You can find more results [here](https://github.com/huggingface/diffusers/pull/6489#issuecomment-1881209354).
> Large-scale Text-to-Image (T2I) models have rapidly gained prominence across creative fields, generating visually compelling outputs from textual prompts. However, controlling these models to ensure consistent style remains challenging, with existing methods necessitating fine-tuning and manual intervention to disentangle content and style. In this paper, we introduce StyleAligned, a novel technique designed to establish style alignment among a series of generated images. By employing minimal `attention sharing' during the diffusion process, our method maintains style consistency across images within T2I models. This approach allows for the creation of style-consistent images using a reference style through a straightforward inversion operation. Our method's evaluation across diverse styles and text prompts demonstrates high-quality synthesis and fidelity, underscoring its efficacy in achieving consistent style across various inputs.
```python
from typing import List
import torch
from diffusers.pipelines.pipeline_utils import DiffusionPipeline
from PIL import Image
model_id = "a-r-r-o-w/dreamshaper-xl-turbo"
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, variant="fp16", custom_pipeline="pipeline_sdxl_style_aligned")
pipe = pipe.to("cuda")
# Enable memory saving techniques
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()
prompt = [
"a toy train. macro photo. 3d game asset",
"a toy airplane. macro photo. 3d game asset",
"a toy bicycle. macro photo. 3d game asset",
"a toy car. macro photo. 3d game asset",
]
negative_prompt = "low quality, worst quality, "
# Enable StyleAligned
pipe.enable_style_aligned(
share_group_norm=False,
share_layer_norm=False,
share_attention=True,
adain_queries=True,
adain_keys=True,
adain_values=False,
full_attention_share=False,
shared_score_scale=1.0,
shared_score_shift=0.0,
only_self_level=0.0,
)
# Run inference
images = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
guidance_scale=2,
height=1024,
width=1024,
num_inference_steps=10,
generator=torch.Generator().manual_seed(42),
).images
# Disable StyleAligned if you do not wish to use it anymore
pipe.disable_style_aligned()
```
### AnimateDiff Image-To-Video Pipeline
This pipeline adds experimental support for the image-to-video task using AnimateDiff. Refer to [this](https://github.com/huggingface/diffusers/pull/6328) PR for more examples and results.
This pipeline relies on a "hack" discovered by the community that allows the generation of videos given an input image with AnimateDiff. It works by creating a copy of the image `num_frames` times and progressively adding more noise to the image based on the strength and latent interpolation method.
```py
import torch
from diffusers import MotionAdapter, DiffusionPipeline, DDIMScheduler
from diffusers.utils import export_to_gif, load_image
model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
pipe = DiffusionPipeline.from_pretrained(model_id, motion_adapter=adapter, custom_pipeline="pipeline_animatediff_img2video").to("cuda")
pipe.scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler", clip_sample=False, timestep_spacing="linspace", beta_schedule="linear", steps_offset=1)
image = load_image("snail.png")
output = pipe(
image=image,
prompt="A snail moving on the ground",
strength=0.8,
latent_interpolation_method="slerp", # can be lerp, slerp, or your own callback
)
frames = output.frames[0]
export_to_gif(frames, "animation.gif")
```
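For intuition about the `latent_interpolation_method` argument, here is a minimal sketch of spherical linear interpolation (slerp) between two latent tensors. This is an illustrative implementation, not the pipeline's internal code:
```py
import torch

def slerp(v0: torch.Tensor, v1: torch.Tensor, t: float, dot_threshold: float = 0.9995) -> torch.Tensor:
    # Interpolate along the great circle between v0 and v1 rather than a straight line.
    dot = torch.sum(v0 * v1) / (v0.norm() * v1.norm())
    if dot.abs() > dot_threshold:
        # Nearly parallel vectors: fall back to plain linear interpolation (lerp).
        return (1 - t) * v0 + t * v1
    theta = torch.acos(dot)
    return (torch.sin((1 - t) * theta) * v0 + torch.sin(t * theta) * v1) / torch.sin(theta)
```
Per the comment in the example above, you can also pass your own callback, though its expected signature may differ from this sketch.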
### IP Adapter Face ID
IP Adapter FaceID is an experimental IP Adapter model that uses image embeddings generated by `insightface`, so no image encoder needs to be loaded.
You need to install `insightface` and all its requirements to use this model.
You must pass the image embedding tensor as `image_embeds` to the StableDiffusionPipeline instead of `ip_adapter_image`.
You can find more results [here](https://github.com/huggingface/diffusers/pull/6276).
```py
import diffusers
import torch
from diffusers.utils import load_image
import cv2
import numpy as np
from diffusers import DiffusionPipeline, AutoencoderKL, DDIMScheduler
from insightface.app import FaceAnalysis
noise_scheduler = DDIMScheduler(
num_train_timesteps=1000,
beta_start=0.00085,
beta_end=0.012,
beta_schedule="scaled_linear",
clip_sample=False,
set_alpha_to_one=False,
steps_offset=1,
)
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to(dtype=torch.float16)
pipeline = DiffusionPipeline.from_pretrained(
"SG161222/Realistic_Vision_V4.0_noVAE",
torch_dtype=torch.float16,
scheduler=noise_scheduler,
vae=vae,
custom_pipeline="ip_adapter_face_id"
)
pipeline.load_ip_adapter_face_id("h94/IP-Adapter-FaceID", "ip-adapter-faceid_sd15.bin")
pipeline.to("cuda")
generator = torch.Generator(device="cpu").manual_seed(42)
num_images=2
image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ai_face2.png")
app = FaceAnalysis(name="buffalo_l", providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))
image = cv2.cvtColor(np.asarray(image), cv2.COLOR_BGR2RGB)
faces = app.get(image)
image = torch.from_numpy(faces[0].normed_embedding).unsqueeze(0)
images = pipeline(
prompt="A photo of a girl wearing a black dress, holding red roses in hand, upper body, behind is the Eiffel Tower",
image_embeds=image,
negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
num_inference_steps=20, num_images_per_prompt=num_images, width=512, height=704,
generator=generator
).images
for i in range(num_images):
images[i].save(f"c{i}.png")
```
### InstantID Pipeline
InstantID is a new state-of-the-art tuning-free method to achieve ID-preserving generation with only a single image, supporting various downstream tasks. For any usage question, please refer to the [official implementation](https://github.com/InstantID/InstantID).
```py
# !pip install opencv-python transformers accelerate insightface
import diffusers
from diffusers.utils import load_image
from diffusers.models import ControlNetModel
import cv2
import torch
import numpy as np
from PIL import Image
from insightface.app import FaceAnalysis
from pipeline_stable_diffusion_xl_instantid import StableDiffusionXLInstantIDPipeline, draw_kps
# prepare 'antelopev2' under ./models
# https://github.com/deepinsight/insightface/issues/1896#issuecomment-1023867304
app = FaceAnalysis(name='antelopev2', root='./', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))
# prepare models under ./checkpoints
# https://huggingface.co/InstantX/InstantID
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="InstantX/InstantID", filename="ControlNetModel/config.json", local_dir="./checkpoints")
hf_hub_download(repo_id="InstantX/InstantID", filename="ControlNetModel/diffusion_pytorch_model.safetensors", local_dir="./checkpoints")
hf_hub_download(repo_id="InstantX/InstantID", filename="ip-adapter.bin", local_dir="./checkpoints")
face_adapter = './checkpoints/ip-adapter.bin'
controlnet_path = './checkpoints/ControlNetModel'
# load IdentityNet
controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16)
base_model = 'wangqixun/YamerMIX_v8'
pipe = StableDiffusionXLInstantIDPipeline.from_pretrained(
base_model,
controlnet=controlnet,
torch_dtype=torch.float16
)
pipe.cuda()
# load adapter
pipe.load_ip_adapter_instantid(face_adapter)
# load an image
face_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ai_face2.png")
# prepare face emb
face_info = app.get(cv2.cvtColor(np.array(face_image), cv2.COLOR_RGB2BGR))
face_info = sorted(face_info, key=lambda x: (x['bbox'][2] - x['bbox'][0]) * (x['bbox'][3] - x['bbox'][1]))[-1]  # only use the largest face
face_emb = face_info['embedding']
face_kps = draw_kps(face_image, face_info['kps'])
# prompt
prompt = "film noir style, ink sketch|vector, male man, highly detailed, sharp focus, ultra sharpness, monochrome, high contrast, dramatic shadows, 1940s style, mysterious, cinematic"
negative_prompt = "ugly, deformed, noisy, blurry, low contrast, realism, photorealistic, vibrant, colorful"
# generate image
pipe.set_ip_adapter_scale(0.8)
image = pipe(
prompt,
image_embeds=face_emb,
image=face_kps,
controlnet_conditioning_scale=0.8,
).images[0]
```
### UFOGen Scheduler
[UFOGen](https://arxiv.org/abs/2311.09257) is a generative model designed for fast one-step text-to-image generation, trained via adversarial training starting from an initial pretrained diffusion model such as Stable Diffusion. `scheduling_ufogen.py` implements one-step and multi-step sampling algorithms for UFOGen models compatible with pipelines like `StableDiffusionPipeline`. A usage example is as follows:
```py
import torch
from diffusers import StableDiffusionPipeline
from scheduling_ufogen import UFOGenScheduler
# NOTE: currently, I am not aware of any publicly available UFOGen model checkpoints trained from SD v1.5.
ufogen_model_id_or_path = "/path/to/ufogen/model"
pipe = StableDiffusionPipeline.from_pretrained(
    ufogen_model_id_or_path,
    torch_dtype=torch.float16,
)
# You can initialize a UFOGenScheduler as follows:
pipe.scheduler = UFOGenScheduler.from_config(pipe.scheduler.config)
prompt = "Three cats having dinner at a table at new years eve, cinematic shot, 8k."
# Onestep sampling
onestep_image = pipe(prompt, num_inference_steps=1).images[0]
# Multistep sampling
multistep_image = pipe(prompt, num_inference_steps=4).images[0]
```
### FRESCO
This is the Diffusers implementation of the zero-shot video-to-video translation pipeline [FRESCO](https://github.com/williamyang1991/FRESCO) (without Ebsynth postprocessing and background smoothing). To run the code, please install gmflow. Then modify the path in `gmflow_dir`. After that, you can run the pipeline with:
```py
from PIL import Image
import cv2
import torch
import numpy as np
from diffusers import ControlNetModel,DDIMScheduler, DiffusionPipeline
import sys
gmflow_dir = "/path/to/gmflow"
sys.path.insert(0, gmflow_dir)
def video_to_frame(video_path: str, interval: int):
vidcap = cv2.VideoCapture(video_path)
success = True
count = 0
res = []
while success:
count += 1
success, image = vidcap.read()
if count % interval != 1:
continue
if image is not None:
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
res.append(image)
if len(res) >= 8:
break
vidcap.release()
return res
input_video_path = 'https://github.com/williamyang1991/FRESCO/raw/main/data/car-turn.mp4'
output_video_path = 'car.gif'
# You can use any fine-tuned SD model here
model_path = 'SG161222/Realistic_Vision_V2.0'
prompt = 'a red car turns in the winter'
a_prompt = ', RAW photo, subject, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3, '
n_prompt = '(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation'
input_interval = 5
frames = video_to_frame(
input_video_path, input_interval)
control_frames = []
# get canny image
for frame in frames:
image = cv2.Canny(frame, 50, 100)
np_image = np.array(image)
np_image = np_image[:, :, None]
np_image = np.concatenate([np_image, np_image, np_image], axis=2)
canny_image = Image.fromarray(np_image)
control_frames.append(canny_image)
# You can use any ControlNet here
controlnet = ControlNetModel.from_pretrained(
"lllyasviel/sd-controlnet-canny").to('cuda')
pipe = DiffusionPipeline.from_pretrained(
model_path, controlnet=controlnet, custom_pipeline='fresco_v2v').to('cuda')
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
generator = torch.manual_seed(0)
frames = [Image.fromarray(frame) for frame in frames]
output_frames = pipe(
prompt + a_prompt,
frames,
control_frames,
num_inference_steps=20,
strength=0.75,
controlnet_conditioning_scale=0.7,
generator=generator,
negative_prompt=n_prompt
).images
output_frames[0].save(output_video_path, save_all=True,
append_images=output_frames[1:], duration=100, loop=0)
```
### Perturbed-Attention Guidance
[Project](https://ku-cvlab.github.io/Perturbed-Attention-Guidance/) / [arXiv](https://arxiv.org/abs/2403.17377) / [GitHub](https://github.com/KU-CVLAB/Perturbed-Attention-Guidance)
This implementation is based on [Diffusers](https://huggingface.co/docs/diffusers/index). `StableDiffusionPAGPipeline` is a modification of `StableDiffusionPipeline` to support Perturbed-Attention Guidance (PAG).
#### Example Usage
```py
import os
import torch
from accelerate.utils import set_seed
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image, make_image_grid
from diffusers.utils.torch_utils import randn_tensor
pipe = StableDiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
custom_pipeline="hyoungwoncho/sd_perturbed_attention_guidance",
torch_dtype=torch.float16
)
device="cuda"
pipe = pipe.to(device)
pag_scale = 5.0
pag_applied_layers_index = ['m0']
batch_size = 4
seed=10
base_dir = "./results/"
grid_dir = base_dir + "/pag" + str(pag_scale) + "/"
if not os.path.exists(grid_dir):
os.makedirs(grid_dir)
set_seed(seed)
latent_input = randn_tensor(shape=(batch_size,4,64,64),generator=None, device=device, dtype=torch.float16)
output_baseline = pipe(
"",
width=512,
height=512,
num_inference_steps=50,
guidance_scale=0.0,
pag_scale=0.0,
pag_applied_layers_index=pag_applied_layers_index,
num_images_per_prompt=batch_size,
latents=latent_input
).images
output_pag = pipe(
"",
width=512,
height=512,
num_inference_steps=50,
guidance_scale=0.0,
pag_scale=5.0,
pag_applied_layers_index=pag_applied_layers_index,
num_images_per_prompt=batch_size,
latents=latent_input
).images
grid_image = make_image_grid(output_baseline + output_pag, rows=2, cols=batch_size)
grid_image.save(grid_dir + "sample.png")
```
#### PAG Parameters
- `pag_scale`: guidance scale of PAG (e.g. 5.0)
- `pag_applied_layers_index`: index of the layer(s) to apply perturbation to (e.g. `['m0']`)
# Community Pipeline Examples
> **For more information about community pipelines, please have a look at [this issue](https://github.com/huggingface/diffusers/issues/841).**
**Community pipeline** examples consist of pipelines that have been added by the community.
Please have a look at the following tables to get an overview of all community examples. Click on the **Code Example** to get a copy-and-paste ready code example that you can try out.
If a community pipeline doesn't work as expected, please open an issue and ping the author on it.
Please also check out our [Community Scripts](https://github.com/huggingface/diffusers/blob/main/examples/community/README_community_scripts.md) examples for tips and tricks that you can use with diffusers without having to run a community pipeline.
| Example | Description | Code Example | Colab | Author |
|:--------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------:|
|Differential Diffusion|[Differential Diffusion](https://github.com/exx8/differential-diffusion) modifies an image according to a text prompt, and according to a map that specifies the amount of change in each region.|[Differential Diffusion](#differential-diffusion)|[](https://huggingface.co/spaces/exx8/differential-diffusion) [](https://colab.research.google.com/github/exx8/differential-diffusion/blob/main/examples/SD2.ipynb)|[Eran Levin](https://github.com/exx8) and [Ohad Fried](https://www.ohadf.com/)|
| HD-Painter | [HD-Painter](https://github.com/Picsart-AI-Research/HD-Painter) enables prompt-faithful and high-resolution (up to 2k) image inpainting upon any diffusion-based image inpainting method. | [HD-Painter](#hd-painter) | [](https://huggingface.co/spaces/PAIR/HD-Painter) | [Manukyan Hayk](https://github.com/haikmanukyan) and [Sargsyan Andranik](https://github.com/AndranikSargsyan) |
| Marigold Monocular Depth Estimation | A universal monocular depth estimator, utilizing Stable Diffusion, delivering sharp predictions in the wild. (See the [project page](https://marigoldmonodepth.github.io) and [full codebase](https://github.com/prs-eth/marigold) for more details.) | [Marigold Depth Estimation](#marigold-depth-estimation) | [](https://huggingface.co/spaces/toshas/marigold) [](https://colab.research.google.com/drive/12G8reD13DdpMie5ZQlaFNo2WCGeNUH-u?usp=sharing) | [Bingxin Ke](https://github.com/markkua) and [Anton Obukhov](https://github.com/toshas) |
| LLM-grounded Diffusion (LMD+) | LMD greatly improves the prompt following ability of text-to-image generation models by introducing an LLM as a front-end prompt parser and layout planner. [Project page.](https://llm-grounded-diffusion.github.io/) [See our full codebase (also with diffusers).](https://github.com/TonyLianLong/LLM-groundedDiffusion) | [LLM-grounded Diffusion (LMD+)](#llm-grounded-diffusion) | [Huggingface Demo](https://huggingface.co/spaces/longlian/llm-grounded-diffusion) [](https://colab.research.google.com/drive/1SXzMSeAB-LJYISb2yrUOdypLz4OYWUKj) | [Long (Tony) Lian](https://tonylian.com/) |
| CLIP Guided Stable Diffusion | Doing CLIP guidance for text to image generation with Stable Diffusion | [CLIP Guided Stable Diffusion](#clip-guided-stable-diffusion) | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/CLIP_Guided_Stable_diffusion_with_diffusers.ipynb) | [Suraj Patil](https://github.com/patil-suraj/) |
| One Step U-Net (Dummy) | Example showcasing how to use Community Pipelines (see <https://github.com/huggingface/diffusers/issues/841>) | [One Step U-Net](#one-step-unet) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |
| Stable Diffusion Interpolation | Interpolate the latent space of Stable Diffusion between different prompts/seeds | [Stable Diffusion Interpolation](#stable-diffusion-interpolation) | - | [Nate Raw](https://github.com/nateraw/) |
| Stable Diffusion Mega | **One** Stable Diffusion Pipeline with all functionalities of [Text2Image](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py), [Image2Image](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py) and [Inpainting](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py) | [Stable Diffusion Mega](#stable-diffusion-mega) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |
| Long Prompt Weighting Stable Diffusion | **One** Stable Diffusion Pipeline without token length limit, with support for parsing weights in the prompt. | [Long Prompt Weighting Stable Diffusion](#long-prompt-weighting-stable-diffusion) | - | [SkyTNT](https://github.com/SkyTNT) |
| Speech to Image | Using automatic-speech-recognition to transcribe text and Stable Diffusion to generate images | [Speech to Image](#speech-to-image) | - | [Mikail Duzenli](https://github.com/MikailINTech)
| Wild Card Stable Diffusion | Stable Diffusion Pipeline that supports prompts that contain wildcard terms (indicated by surrounding double underscores), with values instantiated randomly from a corresponding txt file or a dictionary of possible values | [Wildcard Stable Diffusion](#wildcard-stable-diffusion) | - | [Shyam Sudhakaran](https://github.com/shyamsn97) |
| [Composable Stable Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/) | Stable Diffusion Pipeline that supports prompts that contain "|" in prompts (as an AND condition) and weights (separated by "|" as well) to positively / negatively weight prompts. | [Composable Stable Diffusion](#composable-stable-diffusion) | - | [Mark Rich](https://github.com/MarkRich) |
| Seed Resizing Stable Diffusion | Stable Diffusion Pipeline that supports resizing an image and retaining the concepts of the 512 by 512 generation. | [Seed Resizing](#seed-resizing) | - | [Mark Rich](https://github.com/MarkRich) |
| Imagic Stable Diffusion | Stable Diffusion Pipeline that enables writing a text prompt to edit an existing image | [Imagic Stable Diffusion](#imagic-stable-diffusion) | - | [Mark Rich](https://github.com/MarkRich) |
| Multilingual Stable Diffusion | Stable Diffusion Pipeline that supports prompts in 50 different languages. | [Multilingual Stable Diffusion](#multilingual-stable-diffusion-pipeline) | - | [Juan Carlos Piñeros](https://github.com/juancopi81) |
| GlueGen Stable Diffusion | Stable Diffusion Pipeline that supports prompts in different languages using GlueGen adapter. | [GlueGen Stable Diffusion](#gluegen-stable-diffusion-pipeline) | - | [Phạm Hồng Vinh](https://github.com/rootonchair) |
| Image to Image Inpainting Stable Diffusion | Stable Diffusion Pipeline that enables the overlaying of two images and subsequent inpainting | [Image to Image Inpainting Stable Diffusion](#image-to-image-inpainting-stable-diffusion) | - | [Alex McKinney](https://github.com/vvvm23) |
| Text Based Inpainting Stable Diffusion | Stable Diffusion Inpainting Pipeline that enables passing a text prompt to generate the mask for inpainting | [Text Based Inpainting Stable Diffusion](#image-to-image-inpainting-stable-diffusion) | - | [Dhruv Karan](https://github.com/unography) |
| Bit Diffusion | Diffusion on discrete data | [Bit Diffusion](#bit-diffusion) | - | [Stuti R.](https://github.com/kingstut) |
| K-Diffusion Stable Diffusion | Run Stable Diffusion with any of [K-Diffusion's samplers](https://github.com/crowsonkb/k-diffusion/blob/master/k_diffusion/sampling.py) | [Stable Diffusion with K Diffusion](#stable-diffusion-with-k-diffusion) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |
| Checkpoint Merger Pipeline | Diffusion Pipeline that enables merging of saved model checkpoints | [Checkpoint Merger Pipeline](#checkpoint-merger-pipeline) | - | [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
| Stable Diffusion v1.1-1.4 Comparison | Run all 4 model checkpoints for Stable Diffusion and compare their results together | [Stable Diffusion Comparison](#stable-diffusion-comparisons) | - | [Suvaditya Mukherjee](https://github.com/suvadityamuk) |
| MagicMix | Diffusion Pipeline for semantic mixing of an image and a text prompt | [MagicMix](#magic-mix) | - | [Partho Das](https://github.com/daspartho) |
| Stable UnCLIP | Diffusion Pipeline for combining prior model (generate clip image embedding from text, UnCLIPPipeline `"kakaobrain/karlo-v1-alpha"`) and decoder pipeline (decode clip image embedding to image, StableDiffusionImageVariationPipeline `"lambdalabs/sd-image-variations-diffusers"` ). | [Stable UnCLIP](#stable-unclip) | - | [Ray Wang](https://wrong.wang) |
| UnCLIP Text Interpolation Pipeline | Diffusion Pipeline that allows passing two prompts and produces images while interpolating between the text-embeddings of the two prompts | [UnCLIP Text Interpolation Pipeline](#unclip-text-interpolation-pipeline) | - | [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
| UnCLIP Image Interpolation Pipeline | Diffusion Pipeline that allows passing two images/image_embeddings and produces images while interpolating between their image-embeddings | [UnCLIP Image Interpolation Pipeline](#unclip-image-interpolation-pipeline) | - | [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
| DDIM Noise Comparative Analysis Pipeline | Investigating how the diffusion models learn visual concepts from each noise level (which is a contribution of [P2 weighting (CVPR 2022)](https://arxiv.org/abs/2204.00227)) | [DDIM Noise Comparative Analysis Pipeline](#ddim-noise-comparative-analysis-pipeline) | - | [Aengus (Duc-Anh)](https://github.com/aengusng8) |
| CLIP Guided Img2Img Stable Diffusion Pipeline | Doing CLIP guidance for image to image generation with Stable Diffusion | [CLIP Guided Img2Img Stable Diffusion](#clip-guided-img2img-stable-diffusion) | - | [Nipun Jindal](https://github.com/nipunjindal/) |
| TensorRT Stable Diffusion Text to Image Pipeline | Accelerates the Stable Diffusion Text2Image Pipeline using TensorRT | [TensorRT Stable Diffusion Text to Image Pipeline](#tensorrt-text2image-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |
| EDICT Image Editing Pipeline | Diffusion pipeline for text-guided image editing | [EDICT Image Editing Pipeline](#edict-image-editing-pipeline) | - | [Joqsan Azocar](https://github.com/Joqsan) |
| Stable Diffusion RePaint | Stable Diffusion pipeline using [RePaint](https://arxiv.org/abs/2201.09865) for inpainting. | [Stable Diffusion RePaint](#stable-diffusion-repaint) | - | [Markus Pobitzer](https://github.com/Markus-Pobitzer) |
| TensorRT Stable Diffusion Image to Image Pipeline | Accelerates the Stable Diffusion Image2Image Pipeline using TensorRT | [TensorRT Stable Diffusion Image to Image Pipeline](#tensorrt-image2image-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |
| Stable Diffusion IPEX Pipeline | Accelerate Stable Diffusion inference pipeline with BF16/FP32 precision on Intel Xeon CPUs with [IPEX](https://github.com/intel/intel-extension-for-pytorch) | [Stable Diffusion on IPEX](#stable-diffusion-on-ipex) | - | [Yingjie Han](https://github.com/yingjie-han/) |
| CLIP Guided Images Mixing Stable Diffusion Pipeline | Combine images using standard diffusion models. | [CLIP Guided Images Mixing Using Stable Diffusion](#clip-guided-images-mixing-with-stable-diffusion) | - | [Karachev Denis](https://github.com/TheDenk) |
| TensorRT Stable Diffusion Inpainting Pipeline | Accelerates the Stable Diffusion Inpainting Pipeline using TensorRT | [TensorRT Stable Diffusion Inpainting Pipeline](#tensorrt-inpainting-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |
| IADB Pipeline | Implementation of [Iterative α-(de)Blending: a Minimalist Deterministic Diffusion Model](https://arxiv.org/abs/2305.03486) | [IADB Pipeline](#iadb-pipeline) | - | [Thomas Chambon](https://github.com/tchambon)
| Zero1to3 Pipeline | Implementation of [Zero-1-to-3: Zero-shot One Image to 3D Object](https://arxiv.org/abs/2303.11328) | [Zero1to3 Pipeline](#zero1to3-pipeline) | - | [Xin Kong](https://github.com/kxhit) |
| Stable Diffusion XL Long Weighted Prompt Pipeline | A pipeline that supports unlimited-length prompts and negative prompts, using the A1111 style of prompt weighting | [Stable Diffusion XL Long Weighted Prompt Pipeline](#stable-diffusion-xl-long-weighted-prompt-pipeline) | [](https://colab.research.google.com/drive/1LsqilswLR40XLLcp6XFOl5nKb_wOe26W?usp=sharing) | [Andrew Zhu](https://xhinker.medium.com/) |
| FABRIC - Stable Diffusion with feedback Pipeline | pipeline supports feedback from liked and disliked images | [Stable Diffusion Fabric Pipeline](#stable-diffusion-fabric-pipeline) | - | [Shauray Singh](https://shauray8.github.io/about_shauray/) |
| sketch inpaint - Inpainting with non-inpaint Stable Diffusion | Sketch inpainting, much like in AUTOMATIC1111 | [Masked Im2Im Stable Diffusion Pipeline](#stable-diffusion-masked-im2im) | - | [Anatoly Belikov](https://github.com/noskill) |
| prompt-to-prompt | change parts of a prompt and retain image structure (see [paper page](https://prompt-to-prompt.github.io/)) | [Prompt2Prompt Pipeline](#prompt2prompt-pipeline) | - | [Umer H. Adil](https://twitter.com/UmerHAdil) |
| Latent Consistency Pipeline | Implementation of [Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference](https://arxiv.org/abs/2310.04378) | [Latent Consistency Pipeline](#latent-consistency-pipeline) | - | [Simian Luo](https://github.com/luosiallen) |
| Latent Consistency Img2img Pipeline | Img2img pipeline for Latent Consistency Models | [Latent Consistency Img2Img Pipeline](#latent-consistency-img2img-pipeline) | - | [Logan Zoellner](https://github.com/nagolinc) |
| Latent Consistency Interpolation Pipeline | Interpolate the latent space of Latent Consistency Models with multiple prompts | [Latent Consistency Interpolation Pipeline](#latent-consistency-interpolation-pipeline) | [](https://colab.research.google.com/drive/1pK3NrLWJSiJsBynLns1K1-IDTW9zbPvl?usp=sharing) | [Aryan V S](https://github.com/a-r-r-o-w) |
| SDE Drag Pipeline | The pipeline supports drag editing of images using stochastic differential equations | [SDE Drag Pipeline](#sde-drag-pipeline) | - | [NieShen](https://github.com/NieShenRuc) [Fengqi Zhu](https://github.com/Monohydroxides) |
| Regional Prompting Pipeline | Assign multiple prompts for different regions | [Regional Prompting Pipeline](#regional-prompting-pipeline) | - | [hako-mikan](https://github.com/hako-mikan) |
| LDM3D-sr (LDM3D upscaler) | Upscale low resolution RGB and depth inputs to high resolution | [StableDiffusionUpscaleLDM3D Pipeline](https://github.com/estelleafl/diffusers/tree/ldm3d_upscaler_community/examples/community#stablediffusionupscaleldm3d-pipeline) | - | [Estelle Aflalo](https://github.com/estelleafl) |
| AnimateDiff ControlNet Pipeline | Combines AnimateDiff with precise motion control using ControlNets | [AnimateDiff ControlNet Pipeline](#animatediff-controlnet-pipeline) | [](https://colab.research.google.com/drive/1SKboYeGjEQmQPWoFC0aLYpBlYdHXkvAu?usp=sharing) | [Aryan V S](https://github.com/a-r-r-o-w) and [Edoardo Botta](https://github.com/EdoardoBotta) |
| DemoFusion Pipeline | Implementation of [DemoFusion: Democratising High-Resolution Image Generation With No $$$](https://arxiv.org/abs/2311.16973) | [DemoFusion Pipeline](#demofusion) | - | [Ruoyi Du](https://github.com/RuoyiDu) |
| Instaflow Pipeline | Implementation of [InstaFlow! One-Step Stable Diffusion with Rectified Flow](https://arxiv.org/abs/2309.06380) | [Instaflow Pipeline](#instaflow-pipeline) | - | [Ayush Mangal](https://github.com/ayushtues) |
| Null-Text Inversion Pipeline | Implement [Null-text Inversion for Editing Real Images using Guided Diffusion Models](https://arxiv.org/abs/2211.09794) as a pipeline. | [Null-Text Inversion](https://github.com/google/prompt-to-prompt/) | - | [Junsheng Luan](https://github.com/Junsheng121) |
| Rerender A Video Pipeline | Implementation of [[SIGGRAPH Asia 2023] Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation](https://arxiv.org/abs/2306.07954) | [Rerender A Video Pipeline](#rerender-a-video) | - | [Yifan Zhou](https://github.com/SingleZombie) |
| StyleAligned Pipeline | Implementation of [Style Aligned Image Generation via Shared Attention](https://arxiv.org/abs/2312.02133) | [StyleAligned Pipeline](#stylealigned-pipeline) | [](https://drive.google.com/file/d/15X2E0jFPTajUIjS0FzX50OaHsCbP2lQ0/view?usp=sharing) | [Aryan V S](https://github.com/a-r-r-o-w) |
| AnimateDiff Image-To-Video Pipeline | Experimental Image-To-Video support for AnimateDiff (open to improvements) | [AnimateDiff Image To Video Pipeline](#animatediff-image-to-video-pipeline) | [](https://drive.google.com/file/d/1TvzCDPHhfFtdcJZe4RLloAwyoLKuttWK/view?usp=sharing) | [Aryan V S](https://github.com/a-r-r-o-w) |
| IP Adapter FaceID Stable Diffusion | Stable Diffusion Pipeline that supports IP Adapter Face ID | [IP Adapter Face ID](#ip-adapter-face-id) | - | [Fabio Rigano](https://github.com/fabiorigano) |
| InstantID Pipeline | Stable Diffusion XL Pipeline that supports InstantID | [InstantID Pipeline](#instantid-pipeline) | [](https://huggingface.co/spaces/InstantX/InstantID) | [Haofan Wang](https://github.com/haofanwang) |
| UFOGen Scheduler | Scheduler for UFOGen Model (compatible with Stable Diffusion pipelines) | [UFOGen Scheduler](#ufogen-scheduler) | - | [dg845](https://github.com/dg845) |
| Stable Diffusion XL IPEX Pipeline | Accelerate Stable Diffusion XL inference pipeline with BF16/FP32 precision on Intel Xeon CPUs with [IPEX](https://github.com/intel/intel-extension-for-pytorch) | [Stable Diffusion XL on IPEX](#stable-diffusion-xl-on-ipex) | - | [Dan Li](https://github.com/ustcuna/) |
| Stable Diffusion BoxDiff Pipeline | Training-free controlled generation with bounding boxes using [BoxDiff](https://github.com/showlab/BoxDiff) | [Stable Diffusion BoxDiff Pipeline](#stable-diffusion-boxdiff) | - | [Jingyang Zhang](https://github.com/zjysteven/) |
| FRESCO V2V Pipeline | Implementation of [[CVPR 2024] FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation](https://arxiv.org/abs/2403.12962) | [FRESCO V2V Pipeline](#fresco) | - | [Yifan Zhou](https://github.com/SingleZombie) |
To load a custom pipeline, pass the name of one of the files in `diffusers/examples/community` as the `custom_pipeline` argument to `DiffusionPipeline`. Feel free to send a PR with your own pipelines; we will merge them quickly.
```py
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", custom_pipeline="filename_in_the_community_folder")
```
## Example usages
### Differential Diffusion
**Eran Levin, Ohad Fried**
**Tel Aviv University, Reichman University**
Diffusion models have revolutionized image generation and editing, producing state-of-the-art results in conditioned and unconditioned image synthesis. While current techniques enable user control over the degree of change in an image edit, the controllability is limited to global changes over an entire edited region. This paper introduces a novel framework that enables customization of the amount of change per pixel or per image region. Our framework can be integrated into any existing diffusion model, enhancing it with this capability. Such granular control on the quantity of change opens up a diverse array of new editing capabilities, such as control of the extent to which individual objects are modified, or the ability to introduce gradual spatial changes. Furthermore, we showcase the framework's effectiveness in soft-inpainting---the completion of portions of an image while subtly adjusting the surrounding areas to ensure seamless integration. Additionally, we introduce a new tool for exploring the effects of different change quantities. Our framework operates solely during inference, requiring no model training or fine-tuning. We demonstrate our method with the current open state-of-the-art models, and validate it via both quantitative and qualitative comparisons, and a user study.

You can find additional information about Differential Diffusion in the [paper](https://differential-diffusion.github.io/paper.pdf) or in the [project website](https://differential-diffusion.github.io/).
#### Usage example
```python
import torch
from torchvision import transforms
from diffusers import DPMSolverMultistepScheduler
from diffusers.utils import load_image
from examples.community.pipeline_stable_diffusion_xl_differential_img2img import (
StableDiffusionXLDifferentialImg2ImgPipeline,
)
pipeline = StableDiffusionXLDifferentialImg2ImgPipeline.from_pretrained(
"SG161222/RealVisXL_V4.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config, use_karras_sigmas=True)
def preprocess_image(image):
image = image.convert("RGB")
image = transforms.CenterCrop((image.size[1] // 64 * 64, image.size[0] // 64 * 64))(image)
image = transforms.ToTensor()(image)
image = image * 2 - 1
image = image.unsqueeze(0).to("cuda")
return image
def preprocess_map(map):
map = map.convert("L")
map = transforms.CenterCrop((map.size[1] // 64 * 64, map.size[0] // 64 * 64))(map)
map = transforms.ToTensor()(map)
map = map.to("cuda")
return map
image = preprocess_image(
load_image(
"https://huggingface.co/datasets/OzzyGT/testing-resources/resolve/main/differential/20240329211129_4024911930.png?download=true"
)
)
mask = preprocess_map(
load_image(
"https://huggingface.co/datasets/OzzyGT/testing-resources/resolve/main/differential/gradient_mask.png?download=true"
)
)
prompt = "a green pear"
negative_prompt = "blurry"
image = pipeline(
prompt=prompt,
negative_prompt=negative_prompt,
guidance_scale=7.5,
num_inference_steps=25,
original_image=image,
image=image,
strength=1.0,
map=mask,
).images[0]
image.save("result.png")
```
### HD-Painter
Implementation of [HD-Painter: High-Resolution and Prompt-Faithful Text-Guided Image Inpainting with Diffusion Models](https://arxiv.org/abs/2312.14091).

The abstract from the paper is:
Recent progress in text-guided image inpainting, based on the unprecedented success of text-to-image diffusion models, has led to exceptionally realistic and visually plausible results.
However, there is still significant potential for improvement in current text-to-image inpainting models, particularly in better aligning the inpainted area with user prompts and performing high-resolution inpainting.
Therefore, in this paper we introduce _HD-Painter_, a completely **training-free** approach that **accurately follows prompts** and coherently **scales to high-resolution** image inpainting.
To this end, we design the _Prompt-Aware Introverted Attention (PAIntA)_ layer, which enhances self-attention scores with prompt information, resulting in better text-aligned generations.
To further improve prompt coherence we introduce the _Reweighting Attention Score Guidance (RASG)_ mechanism, seamlessly integrating a post-hoc sampling strategy into the general form of DDIM to prevent out-of-distribution latent shifts.
Moreover, HD-Painter allows extension to larger scales by introducing a specialized super-resolution technique customized for inpainting, enabling the completion of missing regions in images of up to 2K resolution.
Our experiments demonstrate that HD-Painter surpasses existing state-of-the-art approaches qualitatively and quantitatively, achieving an impressive generation accuracy improvement of **61.4** vs **51.9**.
We will make the codes publicly available.
You can find additional information about HD-Painter in the [paper](https://arxiv.org/abs/2312.14091) or the [original codebase](https://github.com/Picsart-AI-Research/HD-Painter).
#### Usage example
```python
import torch
from diffusers import DiffusionPipeline, DDIMScheduler
from diffusers.utils import load_image, make_image_grid
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-2-inpainting",
custom_pipeline="hd_painter"
)
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
prompt = "wooden boat"
init_image = load_image("https://raw.githubusercontent.com/Picsart-AI-Research/HD-Painter/main/__assets__/samples/images/2.jpg")
mask_image = load_image("https://raw.githubusercontent.com/Picsart-AI-Research/HD-Painter/main/__assets__/samples/masks/2.png")
image = pipe(prompt, init_image, mask_image, use_rasg=True, use_painta=True, generator=torch.manual_seed(12345)).images[0]
make_image_grid([init_image, mask_image, image], rows=1, cols=3)
```
### Marigold Depth Estimation
Marigold is a universal monocular depth estimator that delivers accurate and sharp predictions in the wild. Based on Stable Diffusion, it is trained exclusively with synthetic depth data and excels in zero-shot adaptation to real-world imagery. This pipeline is an official implementation of the inference process. More details can be found on our [project page](https://marigoldmonodepth.github.io) and [full codebase](https://github.com/prs-eth/marigold) (also implemented with diffusers).

This depth estimation pipeline processes a single input image through multiple diffusion denoising stages to estimate depth maps. These maps are subsequently merged to produce the final output. Below is an example code snippet, including optional arguments:
```python
import numpy as np
import torch
from PIL import Image
from diffusers import DiffusionPipeline
from diffusers.utils import load_image
# Original DDIM version (higher quality)
pipe = DiffusionPipeline.from_pretrained(
"prs-eth/marigold-v1-0",
custom_pipeline="marigold_depth_estimation"
# torch_dtype=torch.float16, # (optional) Run with half-precision (16-bit float).
# variant="fp16", # (optional) Use with `torch_dtype=torch.float16`, to directly load fp16 checkpoint
)
# (New) LCM version (faster speed)
pipe = DiffusionPipeline.from_pretrained(
"prs-eth/marigold-lcm-v1-0",
custom_pipeline="marigold_depth_estimation"
# torch_dtype=torch.float16, # (optional) Run with half-precision (16-bit float).
# variant="fp16", # (optional) Use with `torch_dtype=torch.float16`, to directly load fp16 checkpoint
)
pipe.to("cuda")
img_path_or_url = "https://share.phys.ethz.ch/~pf/bingkedata/marigold/pipeline_example.jpg"
image: Image.Image = load_image(img_path_or_url)
pipeline_output = pipe(
image, # Input image.
# ----- recommended setting for DDIM version -----
# denoising_steps=10, # (optional) Number of denoising steps of each inference pass. Default: 10.
# ensemble_size=10, # (optional) Number of inference passes in the ensemble. Default: 10.
# ------------------------------------------------
# ----- recommended setting for LCM version ------
# denoising_steps=4,
# ensemble_size=5,
# -------------------------------------------------
# processing_res=768, # (optional) Maximum resolution of processing. If set to 0: will not resize at all. Defaults to 768.
# match_input_res=True, # (optional) Resize depth prediction to match input resolution.
# batch_size=0, # (optional) Inference batch size, no bigger than `num_ensemble`. If set to 0, the script will automatically decide the proper batch size. Defaults to 0.
# seed=2024, # (optional) Random seed can be set to ensure additional reproducibility. Default: None (unseeded). Note: forcing --batch_size 1 helps to increase reproducibility. To ensure full reproducibility, deterministic mode needs to be used.
# color_map="Spectral", # (optional) Colormap used to colorize the depth map. Defaults to "Spectral". Set to `None` to skip colormap generation.
# show_progress_bar=True, # (optional) If true, will show progress bars of the inference progress.
)
depth: np.ndarray = pipeline_output.depth_np # Predicted depth map
depth_colored: Image.Image = pipeline_output.depth_colored # Colorized prediction
# Save as uint16 PNG
depth_uint16 = (depth * 65535.0).astype(np.uint16)
Image.fromarray(depth_uint16).save("./depth_map.png", mode="I;16")
# Save colorized depth map
depth_colored.save("./depth_colored.png")
```
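As a simplified illustration of the merging step, the per-pass depth predictions can be combined per pixel, e.g. with a median; the pipeline's actual ensembling (controlled by `ensemble_size`) is more involved and aligns the predictions before merging:
```python
import numpy as np

def merge_depth_predictions(predictions: list) -> np.ndarray:
    # Each prediction is an HxW float array from one inference pass.
    stacked = np.stack(predictions, axis=0)  # (ensemble_size, H, W)
    return np.median(stacked, axis=0)        # robust per-pixel combination
```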
### LLM-grounded Diffusion
LMD and LMD+ greatly improve the prompt understanding ability of text-to-image generation models by introducing an LLM as a front-end prompt parser and layout planner. They improve spatial reasoning, the understanding of negation, attribute binding, generative numeracy, etc. in a unified manner without explicitly aiming for each. LMD is completely training-free (i.e., it uses the SD model off-the-shelf). LMD+ takes in additional adapters for better control. This is a reproduction of the LMD+ model used in our work. [Project page.](https://llm-grounded-diffusion.github.io/) [See our full codebase (also with diffusers).](https://github.com/TonyLianLong/LLM-groundedDiffusion)


This pipeline can be used with an LLM or on its own. We provide a parser that parses LLM outputs to the layouts. You can obtain the prompt to input to the LLM for layout generation [here](https://github.com/TonyLianLong/LLM-groundedDiffusion/blob/main/prompt.py). After feeding the prompt to an LLM (e.g., GPT-4 on ChatGPT website), you can feed the LLM response into our pipeline.
The following code has been tested on 1x RTX 4090, but it should also work on GPUs with less memory.
#### Use this pipeline with an LLM
```python
import torch
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained(
"longlian/lmd_plus",
custom_pipeline="llm_grounded_diffusion",
custom_revision="main",
variant="fp16", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()
# Generate directly from a text prompt and an LLM response
prompt = "a waterfall and a modern high speed train in a beautiful forest with fall foliage"
phrases, boxes, bg_prompt, neg_prompt = pipe.parse_llm_response("""
[('a waterfall', [71, 105, 148, 258]), ('a modern high speed train', [255, 223, 181, 149])]
Background prompt: A beautiful forest with fall foliage
Negative prompt:
""")
images = pipe(
prompt=prompt,
negative_prompt=neg_prompt,
phrases=phrases,
boxes=boxes,
gligen_scheduled_sampling_beta=0.4,
output_type="pil",
num_inference_steps=50,
lmd_guidance_kwargs={}
).images
images[0].save("./lmd_plus_generation.jpg")
```
#### Use this pipeline on its own for layout generation
```python
import torch
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained(
"longlian/lmd_plus",
custom_pipeline="llm_grounded_diffusion",
variant="fp16", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()
# Generate an image described by the prompt and
# insert objects described by text at the region defined by bounding boxes
prompt = "a waterfall and a modern high speed train in a beautiful forest with fall foliage"
boxes = [[0.1387, 0.2051, 0.4277, 0.7090], [0.4980, 0.4355, 0.8516, 0.7266]]
phrases = ["a waterfall", "a modern high speed train"]
images = pipe(
prompt=prompt,
phrases=phrases,
boxes=boxes,
gligen_scheduled_sampling_beta=0.4,
output_type="pil",
num_inference_steps=50,
lmd_guidance_kwargs={}
).images
images[0].save("./lmd_plus_generation.jpg")
```
### CLIP Guided Stable Diffusion
CLIP guided stable diffusion can help to generate more realistic images
by guiding stable diffusion at every denoising step with an additional CLIP model.
The following code requires roughly 12GB of GPU RAM.
```python
from diffusers import DiffusionPipeline
from transformers import CLIPImageProcessor, CLIPModel
import torch
feature_extractor = CLIPImageProcessor.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K")
clip_model = CLIPModel.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16)
guided_pipeline = DiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
custom_pipeline="clip_guided_stable_diffusion",
clip_model=clip_model,
feature_extractor=feature_extractor,
torch_dtype=torch.float16,
)
guided_pipeline.enable_attention_slicing()
guided_pipeline = guided_pipeline.to("cuda")
prompt = "fantasy book cover, full moon, fantasy forest landscape, golden vector elements, fantasy magic, dark light night, intricate, elegant, sharp focus, illustration, highly detailed, digital painting, concept art, matte, art by WLOP and Artgerm and Albert Bierstadt, masterpiece"
generator = torch.Generator(device="cuda").manual_seed(0)
images = []
for i in range(4):
image = guided_pipeline(
prompt,
num_inference_steps=50,
guidance_scale=7.5,
clip_guidance_scale=100,
num_cutouts=4,
use_cutouts=False,
generator=generator,
).images[0]
images.append(image)
# save images locally
for i, img in enumerate(images):
img.save(f"./clip_guided_sd/image_{i}.png")
```
The `images` list contains PIL images that can be saved locally or displayed directly in a Google Colab.
Generated images tend to be of higher quality than those produced natively with Stable Diffusion. E.g., the above script generates the following images:
.
### One Step Unet
The dummy "one-step-unet" can be run as follows:
```python
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet")
pipe()
```
**Note**: This community pipeline is not useful as a feature, but rather just serves as an example of how community pipelines can be added (see <https://github.com/huggingface/diffusers/issues/841>).
### Stable Diffusion Interpolation
The following code can be run on a GPU with at least 8GB of VRAM and should take approximately 5 minutes.
```python
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4",
revision='fp16',
torch_dtype=torch.float16,
safety_checker=None, # Very important for videos...lots of false positives while interpolating
custom_pipeline="interpolate_stable_diffusion",
).to('cuda')
pipe.enable_attention_slicing()
frame_filepaths = pipe.walk(
prompts=['a dog', 'a cat', 'a horse'],
seeds=[42, 1337, 1234],
num_interpolation_steps=16,
output_dir='./dreams',
batch_size=4,
height=512,
width=512,
guidance_scale=8.5,
num_inference_steps=50,
)
```
The `walk(...)` function returns a list of paths to the images saved under the folder defined in `output_dir`. You can use these images to create videos of Stable Diffusion outputs.
> **Please have a look at <https://github.com/nateraw/stable-diffusion-videos> for more in-detail information on how to create videos using stable diffusion as well as more feature-complete functionality.**
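As one quick way to stitch the saved frames into a video locally, you could load them back and use `diffusers.utils.export_to_video`. The glob pattern below is an assumption; adapt it to how your frames are named under `output_dir`:
```python
from pathlib import Path

from PIL import Image
from diffusers.utils import export_to_video

# Collect the interpolation frames written by `walk(...)`, sorted by filename.
frame_paths = sorted(Path("./dreams").glob("**/*.png"))
frames = [Image.open(path).convert("RGB") for path in frame_paths]
export_to_video(frames, "dreams.mp4", fps=8)
```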
### Stable Diffusion Mega
The Stable Diffusion Mega Pipeline lets you use the main use cases of the stable diffusion pipeline in a single class.
```python
#!/usr/bin/env python3
from diffusers import DiffusionPipeline
import PIL
import requests
from io import BytesIO
import torch
def download_image(url):
response = requests.get(url)
return PIL.Image.open(BytesIO(response.content)).convert("RGB")
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", custom_pipeline="stable_diffusion_mega", torch_dtype=torch.float16, revision="fp16")
pipe.to("cuda")
pipe.enable_attention_slicing()
### Text-to-Image
images = pipe.text2img("An astronaut riding a horse").images
### Image-to-Image
init_image = download_image("https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg")
prompt = "A fantasy landscape, trending on artstation"
images = pipe.img2img(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images
### Inpainting
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))
prompt = "a cat sitting on a bench"
images = pipe.inpaint(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.75).images
```
As shown above, this single pipeline can run "text-to-image", "image-to-image", and "inpainting" all in one class.
### Long Prompt Weighting Stable Diffusion
Features of this custom pipeline:
- Input a prompt without the 77 token length limit.
- Includes text2img, img2img, and inpainting pipelines.
- Emphasize/weight part of your prompt with parentheses, like so: `a baby deer with (big eyes)`
- De-emphasize part of your prompt, like so: `a [baby] deer with big eyes`
- Precisely weight part of your prompt, like so: `a baby deer with (big eyes:1.3)`
Prompt weighting equivalents:
- `a baby deer with` == `(a baby deer with:1.0)`
- `(big eyes)` == `(big eyes:1.1)`
- `((big eyes))` == `(big eyes:1.21)`
- `[big eyes]` == `(big eyes:0.91)`
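In other words, based on the equivalences listed above, each level of parentheses multiplies the weight by 1.1 and each level of brackets divides it by 1.1, which a quick sketch can verify:
```python
# Nested emphasis compounds multiplicatively: n parens => 1.1**n, n brackets => 1.1**(-n).
def nesting_weight(num_parens: int = 0, num_brackets: int = 0) -> float:
    return round(1.1 ** (num_parens - num_brackets), 2)

print(nesting_weight(num_parens=1))    # (big eyes)   -> 1.1
print(nesting_weight(num_parens=2))    # ((big eyes)) -> 1.21
print(nesting_weight(num_brackets=1))  # [big eyes]   -> 0.91
```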
You can run this custom pipeline as follows:
#### pytorch
```python
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained(
'hakurei/waifu-diffusion',
custom_pipeline="lpw_stable_diffusion",
torch_dtype=torch.float16
)
pipe = pipe.to("cuda")
prompt = "best_quality (1girl:1.3) bow bride brown_hair closed_mouth frilled_bow frilled_hair_tubes frills (full_body:1.3) fox_ear hair_bow hair_tubes happy hood japanese_clothes kimono long_sleeves red_bow smile solo tabi uchikake white_kimono wide_sleeves cherry_blossoms"
neg_prompt = "lowres, bad_anatomy, error_body, error_hair, error_arm, error_hands, bad_hands, error_fingers, bad_fingers, missing_fingers, error_legs, bad_legs, multiple_legs, missing_legs, error_lighting, error_shadow, error_reflection, text, error, extra_digit, fewer_digits, cropped, worst_quality, low_quality, normal_quality, jpeg_artifacts, signature, watermark, username, blurry"
pipe.text2img(prompt, negative_prompt=neg_prompt, width=512, height=512, max_embeddings_multiples=3).images[0]
```
#### onnxruntime
```python
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained(
'CompVis/stable-diffusion-v1-4',
custom_pipeline="lpw_stable_diffusion_onnx",
revision="onnx",
provider="CUDAExecutionProvider"
)
prompt = "a photo of an astronaut riding a horse on mars, best quality"
neg_prompt = "lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, text, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry"
pipe.text2img(prompt, negative_prompt=neg_prompt, width=512, height=512, max_embeddings_multiples=3).images[0]
```
If you see the warning `Token indices sequence length is longer than the specified maximum sequence length for this model ( *** > 77 ) . Running this sequence through the model will result in indexing errors`, do not worry, it is normal.
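This warning comes from the underlying CLIP tokenizer; this pipeline works around the 77-token context window by encoding the prompt in chunks. With `max_embeddings_multiples=3` as in the snippets above, roughly 3 × 77 = 231 tokens can be used (assuming the usual CLIP window; the exact budget differs slightly because of the special tokens in each chunk).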
### Speech to Image
The following code can generate an image from an audio sample using the pre-trained OpenAI Whisper (small) model and Stable Diffusion.
```python
import torch
import matplotlib.pyplot as plt
from datasets import load_dataset
from diffusers import DiffusionPipeline
from transformers import (
WhisperForConditionalGeneration,
WhisperProcessor,
)
device = "cuda" if torch.cuda.is_available() else "cpu"
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = ds[3]
text = audio_sample["text"].lower()
speech_data = audio_sample["audio"]["array"]
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to(device)
processor = WhisperProcessor.from_pretrained("openai/whisper-small")
diffuser_pipeline = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4",
custom_pipeline="speech_to_image_diffusion",
speech_model=model,
speech_processor=processor,
torch_dtype=torch.float16,
)
diffuser_pipeline.enable_attention_slicing()
diffuser_pipeline = diffuser_pipeline.to(device)
output = diffuser_pipeline(speech_data)
plt.imshow(output.images[0])
```
This example produces the following image:

### Wildcard Stable Diffusion
Following the great examples from <https://github.com/jtkelm2/stable-diffusion-webui-1/blob/master/scripts/wildcards.py> and <https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts#wildcards>, here's a minimal implementation that lets users add "wildcards", denoted by `__wildcard__`, to prompts. Wildcards act as placeholders for randomly sampled values, supplied either by a dictionary or by a `.txt` file. For example:
Say we have a prompt:
```
prompt = "__animal__ sitting on a __object__ wearing a __clothing__"
```
We can then define the possible values to be sampled for `animal`, `object`, and `clothing`. These can come from a `.txt` file with the same name as the category, or be defined/combined using a dictionary like `{"animal": ["dog", "cat", "mouse"]}`.
The actual pipeline works just like `StableDiffusionPipeline`, except the `__call__` method takes in:
- `wildcard_files`: list of file paths for wildcard replacement
- `wildcard_option_dict`: dict with a wildcard as key and a list of possible replacements as value
- `num_prompt_samples`: number of prompts to sample, uniformly sampling wildcards
A full example:
First, create `animal.txt` with contents like:
```
dog
cat
mouse
```
Then create `object.txt` with contents like:
```
chair
sofa
bench
```
```python
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4",
custom_pipeline="wildcard_stable_diffusion",
torch_dtype=torch.float16,
).to("cuda")
prompt = "__animal__ sitting on a __object__ wearing a __clothing__"
out = pipe(
prompt,
wildcard_option_dict={
"clothing":["hat", "shirt", "scarf", "beret"]
},
wildcard_files=["object.txt", "animal.txt"],
num_prompt_samples=1
)
```
### Composable Stable Diffusion
[Composable Stable Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/) proposes conjunction and negation (negative prompts) operators for compositional generation with conditional diffusion models.
```python
import torch as th
import numpy as np
import torchvision.utils as tvu
from diffusers import DiffusionPipeline
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--prompt", type=str, default="mystical trees | A magical pond | dark",
help="use '|' as the delimiter to compose separate sentences.")
parser.add_argument("--steps", type=int, default=50)
parser.add_argument("--scale", type=float, default=7.5)
parser.add_argument("--weights", type=str, default="7.5 | 7.5 | -7.5")
parser.add_argument("--seed", type=int, default=2)
parser.add_argument("--model_path", type=str, default="CompVis/stable-diffusion-v1-4")
parser.add_argument("--num_images", type=int, default=1)
args = parser.parse_args()
has_cuda = th.cuda.is_available()
device = th.device('cpu' if not has_cuda else 'cuda')
prompt = args.prompt
scale = args.scale
steps = args.steps
pipe = DiffusionPipeline.from_pretrained(
args.model_path,
custom_pipeline="composable_stable_diffusion",
).to(device)
pipe.safety_checker = None
images = []
generator = th.Generator(device).manual_seed(args.seed)
for i in range(args.num_images):
image = pipe(prompt, guidance_scale=scale, num_inference_steps=steps,
weights=args.weights, generator=generator).images[0]
images.append(th.from_numpy(np.array(image)).permute(2, 0, 1) / 255.)
grid = tvu.make_grid(th.stack(images, dim=0), nrow=4, padding=0)
tvu.save_image(grid, f'{prompt}_{args.weights}' + '.png')
```
### Imagic Stable Diffusion
Allows you to edit an image using stable diffusion.
```python
import requests
from PIL import Image
from io import BytesIO
import torch
import os
from diffusers import DiffusionPipeline, DDIMScheduler
has_cuda = torch.cuda.is_available()
device = torch.device('cpu' if not has_cuda else 'cuda')
pipe = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4",
safety_checker=None,
custom_pipeline="imagic_stable_diffusion",
    scheduler=DDIMScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", clip_sample=False, set_alpha_to_one=False)
).to(device)
generator = torch.Generator(device).manual_seed(0)
prompt = "A photo of Barack Obama smiling with a big grin"
url = 'https://www.dropbox.com/s/6tlwzr73jd1r9yk/obama.png?dl=1'
response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((512, 512))
res = pipe.train(
prompt,
image=init_image,
generator=generator)
res = pipe(alpha=1, guidance_scale=7.5, num_inference_steps=50)
os.makedirs("imagic", exist_ok=True)
image = res.images[0]
image.save('./imagic/imagic_image_alpha_1.png')
res = pipe(alpha=1.5, guidance_scale=7.5, num_inference_steps=50)
image = res.images[0]
image.save('./imagic/imagic_image_alpha_1_5.png')
res = pipe(alpha=2, guidance_scale=7.5, num_inference_steps=50)
image = res.images[0]
image.save('./imagic/imagic_image_alpha_2.png')
```
### Seed Resizing
Test seed resizing: first generate an image at 512x512, then generate an image at 512x592 with the same seed using seed resizing, and finally generate a 512x592 image using the original Stable Diffusion pipeline for comparison.
```python
import torch as th
import numpy as np
import os
from diffusers import DiffusionPipeline
has_cuda = th.cuda.is_available()
device = th.device('cpu' if not has_cuda else 'cuda')
pipe = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4",
custom_pipeline="seed_resize_stable_diffusion"
).to(device)
# Disable the safety checker with a no-op replacement
def dummy(images, **kwargs):
    return images, False
pipe.safety_checker = dummy
images = []
th.manual_seed(0)
generator = th.Generator(device).manual_seed(0)
prompt = "A painting of a futuristic cop"
width = 512
height = 512
res = pipe(
prompt,
guidance_scale=7.5,
num_inference_steps=50,
height=height,
width=width,
generator=generator)
image = res.images[0]
os.makedirs('./seed_resize', exist_ok=True)
image.save('./seed_resize/seed_resize_{w}_{h}_image.png'.format(w=width, h=height))
th.manual_seed(0)
generator = th.Generator(device).manual_seed(0)
pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="seed_resize_stable_diffusion"
).to(device)
width = 512
height = 592
res = pipe(
prompt,
guidance_scale=7.5,
num_inference_steps=50,
height=height,
width=width,
generator=generator)
image = res.images[0]
image.save('./seed_resize/seed_resize_{w}_{h}_image.png'.format(w=width, h=height))
pipe_compare = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4"
).to(device)
res = pipe_compare(
prompt,
guidance_scale=7.5,
num_inference_steps=50,
height=height,
width=width,
generator=generator
)
image = res.images[0]
image.save('./seed_resize/seed_resize_{w}_{h}_image_compare.png'.format(w=width, h=height))
```
### Multilingual Stable Diffusion Pipeline
The following code can generate images from text in different languages using the pre-trained [mBART-50 many-to-one multilingual machine translation model](https://huggingface.co/facebook/mbart-large-50-many-to-one-mmt) and Stable Diffusion.
```python
from PIL import Image
import torch
from diffusers import DiffusionPipeline
from transformers import (
pipeline,
MBart50TokenizerFast,
MBartForConditionalGeneration,
)
device = "cuda" if torch.cuda.is_available() else "cpu"
device_dict = {"cuda": 0, "cpu": -1}
# helper function taken from: https://huggingface.co/blog/stable_diffusion
def image_grid(imgs, rows, cols):
assert len(imgs) == rows*cols
w, h = imgs[0].size
grid = Image.new('RGB', size=(cols*w, rows*h))
grid_w, grid_h = grid.size
for i, img in enumerate(imgs):
grid.paste(img, box=(i%cols*w, i//cols*h))
return grid
# Add language detection pipeline
language_detection_model_ckpt = "papluca/xlm-roberta-base-language-detection"
language_detection_pipeline = pipeline("text-classification",
model=language_detection_model_ckpt,
device=device_dict[device])
# Add model for language translation
trans_tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")
trans_model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-one-mmt").to(device)
diffuser_pipeline = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4",
custom_pipeline="multilingual_stable_diffusion",
detection_pipeline=language_detection_pipeline,
translation_model=trans_model,
translation_tokenizer=trans_tokenizer,
torch_dtype=torch.float16,
)
diffuser_pipeline.enable_attention_slicing()
diffuser_pipeline = diffuser_pipeline.to(device)
prompt = ["a photograph of an astronaut riding a horse",
"Una casa en la playa",
"Ein Hund, der Orange isst",
"Un restaurant parisien"]
output = diffuser_pipeline(prompt)
images = output.images
grid = image_grid(images, rows=2, cols=2)
```
This example produces the following images:

### GlueGen Stable Diffusion Pipeline
GlueGen is a minimal adapter that aligns any encoder (a text encoder for another language, multilingual RoBERTa, AudioCLIP, etc.) with the CLIP text encoder used in the standard Stable Diffusion model. It allows easy language adaptation of existing English Stable Diffusion checkpoints without needing an image-captioning dataset or long training hours.
Make sure you download `gluenet_French_clip_overnorm_over3_noln.ckpt` for French (pre-trained weights are also available for Chinese, Italian, Japanese, and Spanish, or you can train your own) from [GlueGen's official repo](https://github.com/salesforce/GlueGen/tree/main).
```python
from PIL import Image
import torch
from transformers import AutoModel, AutoTokenizer
from diffusers import DiffusionPipeline
if __name__ == "__main__":
device = "cuda"
lm_model_id = "xlm-roberta-large"
token_max_length = 77
text_encoder = AutoModel.from_pretrained(lm_model_id)
tokenizer = AutoTokenizer.from_pretrained(lm_model_id, model_max_length=token_max_length, use_fast=False)
tensor_norm = torch.Tensor([[43.8203],[28.3668],[27.9345],[28.0084],[28.2958],[28.2576],[28.3373],[28.2695],[28.4097],[28.2790],[28.2825],[28.2807],[28.2775],[28.2708],[28.2682],[28.2624],[28.2589],[28.2611],[28.2616],[28.2639],[28.2613],[28.2566],[28.2615],[28.2665],[28.2799],[28.2885],[28.2852],[28.2863],[28.2780],[28.2818],[28.2764],[28.2532],[28.2412],[28.2336],[28.2514],[28.2734],[28.2763],[28.2977],[28.2971],[28.2948],[28.2818],[28.2676],[28.2831],[28.2890],[28.2979],[28.2999],[28.3117],[28.3363],[28.3554],[28.3626],[28.3589],[28.3597],[28.3543],[28.3660],[28.3731],[28.3717],[28.3812],[28.3753],[28.3810],[28.3777],[28.3693],[28.3713],[28.3670],[28.3691],[28.3679],[28.3624],[28.3703],[28.3703],[28.3720],[28.3594],[28.3576],[28.3562],[28.3438],[28.3376],[28.3389],[28.3433],[28.3191]])
pipeline = DiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
text_encoder=text_encoder,
tokenizer=tokenizer,
custom_pipeline="gluegen"
).to(device)
pipeline.load_language_adapter("gluenet_French_clip_overnorm_over3_noln.ckpt", num_token=token_max_length, dim=1024, dim_out=768, tensor_norm=tensor_norm)
prompt = "une voiture sur la plage"
generator = torch.Generator(device=device).manual_seed(42)
image = pipeline(prompt, generator=generator).images[0]
image.save("gluegen_output_fr.png")
```
This will produce:

### Image to Image Inpainting Stable Diffusion
Similar to the standard stable diffusion inpainting example, except with the addition of an `inner_image` argument.
`image`, `inner_image`, and `mask` should have the same dimensions. `inner_image` should have an alpha (transparency) channel.
The aim is to overlay two images, then mask out the boundary between `image` and `inner_image`, letting Stable Diffusion blend the transition more seamlessly.
For example, this could be used to place a logo on a shirt and make it blend seamlessly.
```python
import PIL
import torch
from diffusers import DiffusionPipeline
image_path = "./path-to-image.png"
inner_image_path = "./path-to-inner-image.png"
mask_path = "./path-to-mask.png"
init_image = PIL.Image.open(image_path).convert("RGB").resize((512, 512))
inner_image = PIL.Image.open(inner_image_path).convert("RGBA").resize((512, 512))
mask_image = PIL.Image.open(mask_path).convert("RGB").resize((512, 512))
pipe = DiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-inpainting",
custom_pipeline="img2img_inpainting",
torch_dtype=torch.float16
)
pipe = pipe.to("cuda")
prompt = "Your prompt here!"
image = pipe(prompt=prompt, image=init_image, inner_image=inner_image, mask_image=mask_image).images[0]
```

### Text Based Inpainting Stable Diffusion
Use a text prompt to generate the mask for the area to be inpainted.
Currently uses the CLIPSeg model for mask generation, then calls the standard Stable Diffusion Inpainting pipeline to perform the inpainting.
```python
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation
from diffusers import DiffusionPipeline
from PIL import Image
import requests
processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
pipe = DiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-inpainting",
custom_pipeline="text_inpainting",
segmentation_model=model,
segmentation_processor=processor
)
pipe = pipe.to("cuda")
url = "https://github.com/timojl/clipseg/blob/master/example_image.jpg?raw=true"
image = Image.open(requests.get(url, stream=True).raw).resize((512, 512))
text = "a glass" # will mask out this text
prompt = "a cup" # the masked out region will be replaced with this
image = pipe(image=image, text=text, prompt=prompt).images[0]
```
### Bit Diffusion
Based on <https://arxiv.org/abs/2208.04202>, this pipeline is used for diffusion on discrete data, e.g. discrete image data or DNA sequence data. An unconditional discrete image can be generated like this:
```python
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="bit_diffusion")
image = pipe().images[0]
```
### Stable Diffusion with K Diffusion
Make sure you have @crowsonkb's <https://github.com/crowsonkb/k-diffusion> installed:
```sh
pip install k-diffusion
```
You can use the community pipeline as follows:
```python
import torch
from diffusers import DiffusionPipeline
seed = 33
pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", custom_pipeline="sd_text2img_k_diffusion")
pipe = pipe.to("cuda")
prompt = "an astronaut riding a horse on mars"
pipe.set_scheduler("sample_heun")
generator = torch.Generator(device="cuda").manual_seed(seed)
image = pipe(prompt, generator=generator, num_inference_steps=20).images[0]
image.save("./astronaut_heun_k_diffusion.png")
```
To make sure that K Diffusion and `diffusers` yield the same results:
**Diffusers**:
```python
import torch
from diffusers import DiffusionPipeline, EulerDiscreteScheduler
seed = 33
prompt = "an astronaut riding a horse on mars"
pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
generator = torch.Generator(device="cuda").manual_seed(seed)
image = pipe(prompt, generator=generator, num_inference_steps=50).images[0]
```

**K Diffusion**:
```python
import torch
from diffusers import DiffusionPipeline, EulerDiscreteScheduler
seed = 33
prompt = "an astronaut riding a horse on mars"
pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", custom_pipeline="sd_text2img_k_diffusion")
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
pipe.set_scheduler("sample_euler")
generator = torch.Generator(device="cuda").manual_seed(seed)
image = pipe(prompt, generator=generator, num_inference_steps=50).images[0]
```

### Checkpoint Merger Pipeline
Based on the AUTOMATIC1111/webui checkpoint-merging feature. This custom pipeline merges up to 3 pretrained model checkpoints, as long as they are in the Hugging Face `model_index.json` format.
Checkpoint merging is currently memory intensive, as it modifies the weights of a `DiffusionPipeline` object in place. Expect at least 13GB of RAM usage on Kaggle GPU kernels; on Colab you might run out of the 12GB of memory even while merging two checkpoints.
Usage:
```python
from diffusers import DiffusionPipeline
# Returns a CheckpointMergerPipeline class that allows you to merge checkpoints.
# The checkpoint passed here is ignored, but still pass one of the checkpoints
# you plan to merge for convenience.
pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", custom_pipeline="checkpoint_merger")
# There are multiple possible scenarios; the pipeline with the merged checkpoints
# is returned in all of them.
# Compatible checkpoints, i.e. matching model_index.json files. Meta attributes in
# model_index.json (attrs with _ as prefix) are ignored during comparison.
merged_pipe = pipe.merge(["CompVis/stable-diffusion-v1-4", "CompVis/stable-diffusion-v1-2"], interp="sigmoid", alpha=0.4)
# Incompatible model_index.json files, but the merge might still be possible.
# Use force=True to ignore model_index.json compatibility.
merged_pipe_1 = pipe.merge(["CompVis/stable-diffusion-v1-4", "hakurei/waifu-diffusion"], force=True, interp="sigmoid", alpha=0.4)
# Three-checkpoint merging. Only the "add_difference" method actually uses all three
# checkpoints; any other option will ignore the 3rd checkpoint.
merged_pipe_2 = pipe.merge(["CompVis/stable-diffusion-v1-4", "hakurei/waifu-diffusion", "prompthero/openjourney"], force=True, interp="add_difference", alpha=0.4)
prompt = "An astronaut riding a horse on Mars"
image = merged_pipe(prompt).images[0]
```
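For reference, the merge formulas behind these options (as popularized by the AUTOMATIC1111 webui) are a weighted sum for two checkpoints and add-difference for three. Here is a tiny sketch of the math over plain tensors (an illustration only, not this pipeline's internals; the "sigmoid"/"inv_sigmoid" options reshape `alpha` non-linearly before interpolating):
```python
import torch

def weighted_sum(theta_a: torch.Tensor, theta_b: torch.Tensor, alpha: float) -> torch.Tensor:
    # Two-checkpoint merge: linear interpolation between parameter tensors.
    return (1 - alpha) * theta_a + alpha * theta_b

def add_difference(theta_a: torch.Tensor, theta_b: torch.Tensor, theta_c: torch.Tensor, alpha: float) -> torch.Tensor:
    # Three-checkpoint merge: add the scaled (B - C) delta onto A.
    return theta_a + alpha * (theta_b - theta_c)
```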
Some examples along with the merge details:
1. "CompVis/stable-diffusion-v1-4" + "hakurei/waifu-diffusion" ; Sigmoid interpolation; alpha = 0.8

2. "hakurei/waifu-diffusion" + "prompthero/openjourney" ; Inverse Sigmoid interpolation; alpha = 0.8

3. "CompVis/stable-diffusion-v1-4" + "hakurei/waifu-diffusion" + "prompthero/openjourney"; Add Difference interpolation; alpha = 0.5

### Stable Diffusion Comparisons
This community pipeline enables comparison between the 4 Stable Diffusion v1 checkpoints, which can be found through the following links:
1. [Stable Diffusion v1.1](https://huggingface.co/CompVis/stable-diffusion-v1-1)
2. [Stable Diffusion v1.2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
3. [Stable Diffusion v1.3](https://huggingface.co/CompVis/stable-diffusion-v1-3)
4. [Stable Diffusion v1.4](https://huggingface.co/CompVis/stable-diffusion-v1-4)
```python
from diffusers import DiffusionPipeline
import matplotlib.pyplot as plt
pipe = DiffusionPipeline.from_pretrained('CompVis/stable-diffusion-v1-4', custom_pipeline='suvadityamuk/StableDiffusionComparison')
pipe.enable_attention_slicing()
pipe = pipe.to('cuda')
prompt = "an astronaut riding a horse on mars"
output = pipe(prompt)
plt.subplot(2, 2, 1)
plt.imshow(output.images[0])
plt.title('Stable Diffusion v1.1')
plt.axis('off')
plt.subplot(2, 2, 2)
plt.imshow(output.images[1])
plt.title('Stable Diffusion v1.2')
plt.axis('off')
plt.subplot(2, 2, 3)
plt.imshow(output.images[2])
plt.title('Stable Diffusion v1.3')
plt.axis('off')
plt.subplot(2, 2, 4)
plt.imshow(output.images[3])
plt.title('Stable Diffusion v1.4')
plt.axis('off')
plt.show()
```
As a result, you get a grid of all 4 generated images shown together, capturing the differences in training progression between the 4 checkpoints.
### Magic Mix
Implementation of the [MagicMix: Semantic Mixing with Diffusion Models](https://arxiv.org/abs/2210.16056) paper. This is a Diffusion Pipeline for semantic mixing of an image and a text prompt to create a new concept while preserving the spatial layout and geometry of the subject in the image. The pipeline takes an image that provides the layout semantics and a prompt that provides the content semantics for the mixing process.
There are 3 parameters for the method:
- `mix_factor`: the interpolation constant used in the layout generation phase. The greater the value of `mix_factor`, the greater the influence of the prompt on the layout generation process.
- `kmax` and `kmin`: determine the range for the layout and content generation processes. A higher `kmax` discards more information about the layout of the original image, while a higher `kmin` allots more steps to the content generation process.
Here is an example usage:
```python
from diffusers import DiffusionPipeline, DDIMScheduler
from PIL import Image
pipe = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4",
custom_pipeline="magic_mix",
scheduler = DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler"),
).to('cuda')
img = Image.open('phone.jpg')
mix_img = pipe(
    img,
    prompt='bed',
    kmin=0.3,
    kmax=0.5,
    mix_factor=0.5,
)
mix_img.save('phone_bed_mix.jpg')
```
`mix_img` is a PIL image that can be saved locally or displayed directly, e.g. in a Google Colab notebook. The generated image is a mix of the layout semantics of the given image and the content semantics of the prompt.
E.g. the above script generates the following image:
`phone.jpg`

`phone_bed_mix.jpg`

For more example generations check out this [demo notebook](https://github.com/daspartho/MagicMix/blob/main/demo.ipynb).
### Stable UnCLIP
`UnCLIPPipeline("kakaobrain/karlo-v1-alpha")` provides a prior model that can generate a CLIP image embedding from text.
`StableDiffusionImageVariationPipeline("lambdalabs/sd-image-variations-diffusers")` provides a decoder model that can generate images from a CLIP image embedding.
```python
import torch
from diffusers import DiffusionPipeline
device = torch.device("cpu" if not torch.cuda.is_available() else "cuda")
pipeline = DiffusionPipeline.from_pretrained(
"kakaobrain/karlo-v1-alpha",
torch_dtype=torch.float16,
custom_pipeline="stable_unclip",
decoder_pipe_kwargs=dict(
image_encoder=None,
),
)
pipeline.to(device)
prompt = "a shiba inu wearing a beret and black turtleneck"
random_generator = torch.Generator(device=device).manual_seed(1000)
output = pipeline(
prompt=prompt,
width=512,
height=512,
generator=random_generator,
prior_guidance_scale=4,
prior_num_inference_steps=25,
decoder_guidance_scale=8,
decoder_num_inference_steps=50,
)
image = output.images[0]
image.save("./shiba-inu.jpg")
# debug
# `pipeline.decoder_pipe` is a regular StableDiffusionImageVariationPipeline instance.
# It is used to convert the CLIP image embedding to latents, which are then fed into the VAE decoder.
print(pipeline.decoder_pipe.__class__)
# <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_image_variation.StableDiffusionImageVariationPipeline'>
# This pipeline only uses the prior module of "kakaobrain/karlo-v1-alpha".
# It is used to convert the CLIP text embedding to a CLIP image embedding.
print(pipeline)
# StableUnCLIPPipeline {
# "_class_name": "StableUnCLIPPipeline",
# "_diffusers_version": "0.12.0.dev0",
# "prior": [
# "diffusers",
# "PriorTransformer"
# ],
# "prior_scheduler": [
# "diffusers",
# "UnCLIPScheduler"
# ],
# "text_encoder": [
# "transformers",
# "CLIPTextModelWithProjection"
# ],
# "tokenizer": [
# "transformers",
# "CLIPTokenizer"
# ]
# }
# pipeline.prior_scheduler is the scheduler used for prior in UnCLIP.
print(pipeline.prior_scheduler)
# UnCLIPScheduler {
# "_class_name": "UnCLIPScheduler",
# "_diffusers_version": "0.12.0.dev0",
# "clip_sample": true,
# "clip_sample_range": 5.0,
# "num_train_timesteps": 1000,
# "prediction_type": "sample",
# "variance_type": "fixed_small_log"
# }
```
`shiba-inu.jpg`

### UnCLIP Text Interpolation Pipeline
This diffusion pipeline takes two prompts and interpolates between them using spherical interpolation (slerp). The input prompts are converted to text embeddings by the pipeline's `text_encoder`, and the interpolation is done on the resulting embeddings over the specified number of steps (defaults to 5 steps).
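For intuition, spherical interpolation between two embedding vectors can be sketched as below (a minimal illustration of slerp, not the pipeline's internal code); the pipeline applies this to the two prompts' text embeddings at each interpolation step. Full usage of the pipeline follows:
```python
import torch

def slerp(v0: torch.Tensor, v1: torch.Tensor, t: float, eps: float = 1e-7) -> torch.Tensor:
    # Interpolate along the great circle between v0 and v1 at fraction t in [0, 1].
    omega = torch.acos(((v0 / v0.norm()) * (v1 / v1.norm())).sum().clamp(-1 + eps, 1 - eps))
    return (torch.sin((1 - t) * omega) * v0 + torch.sin(t * omega) * v1) / torch.sin(omega)
```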
```python
import torch
from diffusers import DiffusionPipeline
device = torch.device("cpu" if not torch.cuda.is_available() else "cuda")
pipe = DiffusionPipeline.from_pretrained(
"kakaobrain/karlo-v1-alpha",
torch_dtype=torch.float16,
custom_pipeline="unclip_text_interpolation"
)
pipe.to(device)
start_prompt = "A photograph of an adult lion"
end_prompt = "A photograph of a lion cub"
# For best results keep the prompts close in length to each other; feel free to try differing lengths as well.
generator = torch.Generator(device=device).manual_seed(42)
output = pipe(start_prompt, end_prompt, steps=6, generator=generator, enable_sequential_cpu_offload=False)
for i, image in enumerate(output.images):
    image.save('result%s.jpg' % i)
```
The resulting images, in order:






### UnCLIP Image Interpolation Pipeline
This diffusion pipeline takes two images, or an `image_embeddings` tensor of size 2, and interpolates between their embeddings using spherical interpolation (slerp). The input images/image embeddings are converted to image embeddings by the pipeline's `image_encoder`, and the interpolation is done on the resulting embeddings over the specified number of steps (defaults to 5 steps).
```python
import torch
from diffusers import DiffusionPipeline
from PIL import Image
device = torch.device("cpu" if not torch.cuda.is_available() else "cuda")
dtype = torch.float16 if torch.cuda.is_available() else torch.bfloat16
pipe = DiffusionPipeline.from_pretrained(
"kakaobrain/karlo-v1-alpha-image-variations",
torch_dtype=dtype,
custom_pipeline="unclip_image_interpolation"
)
pipe.to(device)
images = [Image.open('./starry_night.jpg'), Image.open('./flowers.jpg')]
generator = torch.Generator(device=device).manual_seed(42)
output = pipe(image=images, steps=6, generator=generator)
for i,image in enumerate(output.images):
image.save('starry_to_flowers_%s.jpg' % i)
```
The original images:


The resulting images, in order:






### DDIM Noise Comparative Analysis Pipeline
#### **Research question: What visual concepts do the diffusion models learn from each noise level during training?**
The [P2 weighting (CVPR 2022)](https://arxiv.org/abs/2204.00227) paper proposed an approach to answer the above question, which is their second contribution.
The approach consists of the following steps:
1. The input is an image x0.
2. Perturb it to xt using a diffusion process q(xt|x0).
- `strength` is a value between 0.0 and 1.0, that controls the amount of noise that is added to the input image. Values that approach 1.0 allow for lots of variations but will also produce images that are not semantically consistent with the input.
3. Reconstruct the image with the learned denoising process pθ(ˆx0|xt).
4. Compare x0 and ˆx0 among various t to show how each step contributes to the sample.
The authors used the [openai/guided-diffusion](https://github.com/openai/guided-diffusion) model to denoise images from the FFHQ dataset. This pipeline extends their second contribution by investigating DDIM on any input image.
```python
import torch
from PIL import Image
import numpy as np
from diffusers import DiffusionPipeline
image_path = "path/to/your/image" # images from CelebA-HQ might be better
image_pil = Image.open(image_path)
image_name = image_path.split("/")[-1].split(".")[0]
device = torch.device("cpu" if not torch.cuda.is_available() else "cuda")
pipe = DiffusionPipeline.from_pretrained(
"google/ddpm-ema-celebahq-256",
custom_pipeline="ddim_noise_comparative_analysis",
)
pipe = pipe.to(device)
for strength in np.linspace(0.1, 1, 25):
denoised_image, latent_timestep = pipe(
image_pil, strength=strength, return_dict=False
)
denoised_image = denoised_image[0]
denoised_image.save(
f"noise_comparative_analysis_{image_name}_{latent_timestep}.png"
)
```
Here is the result of this pipeline (which uses DDIM) on the CelebA-HQ dataset.

### CLIP Guided Img2Img Stable Diffusion
CLIP guided img2img Stable Diffusion can help generate more realistic images from an initial image
by guiding Stable Diffusion at every denoising step with an additional CLIP model.
The following code requires roughly 12GB of GPU RAM.
```python
from io import BytesIO
import requests
import torch
from diffusers import DiffusionPipeline
from IPython.display import display  # assumes a notebook environment
from PIL import Image
from transformers import CLIPFeatureExtractor, CLIPModel
feature_extractor = CLIPFeatureExtractor.from_pretrained(
"laion/CLIP-ViT-B-32-laion2B-s34B-b79K"
)
clip_model = CLIPModel.from_pretrained(
"laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16
)
guided_pipeline = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4",
    custom_pipeline="clip_guided_stable_diffusion",
clip_model=clip_model,
feature_extractor=feature_extractor,
torch_dtype=torch.float16,
)
guided_pipeline.enable_attention_slicing()
guided_pipeline = guided_pipeline.to("cuda")
prompt = "fantasy book cover, full moon, fantasy forest landscape, golden vector elements, fantasy magic, dark light night, intricate, elegant, sharp focus, illustration, highly detailed, digital painting, concept art, matte, art by WLOP and Artgerm and Albert Bierstadt, masterpiece"
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
image = guided_pipeline(
prompt=prompt,
num_inference_steps=30,
image=init_image,
strength=0.75,
guidance_scale=7.5,
clip_guidance_scale=100,
num_cutouts=4,
use_cutouts=False,
).images[0]
display(image)
```
Init Image

Output Image

### TensorRT Text2Image Stable Diffusion Pipeline
The TensorRT pipeline can be used to accelerate text-to-image Stable Diffusion inference.
NOTE: The ONNX conversions and TensorRT engine build may take up to 30 minutes.
```python
import torch
from diffusers import DDIMScheduler
from diffusers.pipelines.stable_diffusion import StableDiffusionPipeline
# Use the DDIMScheduler scheduler here instead
scheduler = DDIMScheduler.from_pretrained("stabilityai/stable-diffusion-2-1",
subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1",
custom_pipeline="stable_diffusion_tensorrt_txt2img",
revision='fp16',
torch_dtype=torch.float16,
scheduler=scheduler,)
# re-use cached folder to save ONNX models and TensorRT Engines
pipe.set_cached_folder("stabilityai/stable-diffusion-2-1", revision='fp16',)
pipe = pipe.to("cuda")
prompt = "a beautiful photograph of Mt. Fuji during cherry blossom"
image = pipe(prompt).images[0]
image.save('tensorrt_mt_fuji.png')
```
### EDICT Image Editing Pipeline
This pipeline implements the text-guided image editing approach from the paper [EDICT: Exact Diffusion Inversion via Coupled Transformations](https://arxiv.org/abs/2211.12446). You have to pass:
- (`PIL`) `image` you want to edit.
- `base_prompt`: the text prompt describing the current image (before editing).
- `target_prompt`: the text prompt describing the desired edited image.
```python
from diffusers import DiffusionPipeline, DDIMScheduler
from transformers import CLIPTextModel
import torch, PIL, requests
from io import BytesIO
from IPython.display import display
def center_crop_and_resize(im):
width, height = im.size
d = min(width, height)
left = (width - d) / 2
upper = (height - d) / 2
right = (width + d) / 2
lower = (height + d) / 2
return im.crop((left, upper, right, lower)).resize((512, 512))
torch_dtype = torch.float16
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# scheduler and text_encoder param values as in the paper
scheduler = DDIMScheduler(
num_train_timesteps=1000,
beta_start=0.00085,
beta_end=0.012,
beta_schedule="scaled_linear",
set_alpha_to_one=False,
clip_sample=False,
)
text_encoder = CLIPTextModel.from_pretrained(
pretrained_model_name_or_path="openai/clip-vit-large-patch14",
torch_dtype=torch_dtype,
)
# initialize pipeline
pipeline = DiffusionPipeline.from_pretrained(
pretrained_model_name_or_path="CompVis/stable-diffusion-v1-4",
custom_pipeline="edict_pipeline",
revision="fp16",
scheduler=scheduler,
text_encoder=text_encoder,
leapfrog_steps=True,
torch_dtype=torch_dtype,
).to(device)
# download image
image_url = "https://huggingface.co/datasets/Joqsan/images/resolve/main/imagenet_dog_1.jpeg"
response = requests.get(image_url)
image = PIL.Image.open(BytesIO(response.content))
# preprocess it
cropped_image = center_crop_and_resize(image)
# define the prompts
base_prompt = "A dog"
target_prompt = "A golden retriever"
# run the pipeline
result_image = pipeline(
base_prompt=base_prompt,
target_prompt=target_prompt,
image=cropped_image,
)
display(result_image)
```
Init Image

Output Image

### Stable Diffusion RePaint
This pipeline uses the [RePaint](https://arxiv.org/abs/2201.09865) logic on the latent space of stable diffusion. It can
be used similarly to other image inpainting pipelines but does not rely on a specific inpainting model. This means you can use
models that are not specifically created for inpainting.
Make sure to use the `RePaintScheduler` as shown in the example below.
Disclaimer: the mask is transferred into latent space, which may lead to unexpected changes at the edges of the masked region; inference is also considerably slower.
```py
import PIL.Image
import PIL.ImageOps
import requests
import torch
from io import BytesIO
from diffusers import StableDiffusionPipeline, RePaintScheduler
def download_image(url):
response = requests.get(url)
return PIL.Image.open(BytesIO(response.content)).convert("RGB")
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))
mask_image = PIL.ImageOps.invert(mask_image)
pipe = StableDiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, custom_pipeline="stable_diffusion_repaint",
)
pipe.scheduler = RePaintScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
```
### TensorRT Image2Image Stable Diffusion Pipeline
The TensorRT pipeline can be used to accelerate image-to-image Stable Diffusion inference.
NOTE: The ONNX conversions and TensorRT engine build may take up to 30 minutes.
```python
import requests
from io import BytesIO
from PIL import Image
import torch
from diffusers import DDIMScheduler
from diffusers.pipelines.stable_diffusion import StableDiffusionImg2ImgPipeline
# Use the DDIMScheduler scheduler here instead
scheduler = DDIMScheduler.from_pretrained("stabilityai/stable-diffusion-2-1",
subfolder="scheduler")
pipe = StableDiffusionImg2ImgPipeline.from_pretrained("stabilityai/stable-diffusion-2-1",
custom_pipeline="stable_diffusion_tensorrt_img2img",
revision='fp16',
torch_dtype=torch.float16,
scheduler=scheduler,)
# re-use cached folder to save ONNX models and TensorRT Engines
pipe.set_cached_folder("stabilityai/stable-diffusion-2-1", revision='fp16',)
pipe = pipe.to("cuda")
url = "https://pajoca.com/wp-content/uploads/2022/09/tekito-yamakawa-1.png"
response = requests.get(url)
input_image = Image.open(BytesIO(response.content)).convert("RGB")
prompt = "photorealistic new zealand hills"
image = pipe(prompt, image=input_image, strength=0.75,).images[0]
image.save('tensorrt_img2img_new_zealand_hills.png')
```
### Stable Diffusion BoxDiff
BoxDiff is a training-free method for controlled generation with bounding box coordinates. It should work with any Stable Diffusion model. Below is an example with `stable-diffusion-2-1-base`.
```py
import torch
from PIL import Image, ImageDraw
from copy import deepcopy
from examples.community.pipeline_stable_diffusion_boxdiff import StableDiffusionBoxDiffPipeline
def draw_box_with_text(img, boxes, names):
colors = ["red", "olive", "blue", "green", "orange", "brown", "cyan", "purple"]
img_new = deepcopy(img)
draw = ImageDraw.Draw(img_new)
W, H = img.size
for bid, box in enumerate(boxes):
draw.rectangle([box[0] * W, box[1] * H, box[2] * W, box[3] * H], outline=colors[bid % len(colors)], width=4)
draw.text((box[0] * W, box[1] * H), names[bid], fill=colors[bid % len(colors)])
return img_new
pipe = StableDiffusionBoxDiffPipeline.from_pretrained(
"stabilityai/stable-diffusion-2-1-base",
torch_dtype=torch.float16,
)
pipe.to("cuda")
# example 1
prompt = "as the aurora lights up the sky, a herd of reindeer leisurely wanders on the grassy meadow, admiring the breathtaking view, a serene lake quietly reflects the magnificent display, and in the distance, a snow-capped mountain stands majestically, fantasy, 8k, highly detailed"
phrases = [
"aurora",
"reindeer",
"meadow",
"lake",
"mountain"
]
boxes = [[1,3,512,202], [75,344,421,495], [1,327,508,507], [2,217,507,341], [1,135,509,242]]
# example 2
# prompt = "A rabbit wearing sunglasses looks very proud"
# phrases = ["rabbit", "sunglasses"]
# boxes = [[67,87,366,512], [66,130,364,262]]
boxes = [[x / 512 for x in box] for box in boxes]
images = pipe(
prompt,
boxdiff_phrases=phrases,
boxdiff_boxes=boxes,
boxdiff_kwargs={
"attention_res": 16,
"normalize_eot": True
},
num_inference_steps=50,
guidance_scale=7.5,
generator=torch.manual_seed(42),
safety_checker=None
).images
draw_box_with_text(images[0], boxes, phrases).save("output.png")
```
### Stable Diffusion Reference
This pipeline uses Reference-only Control. Refer to the [sd-webui-controlnet discussion: Reference-only Control](https://github.com/Mikubill/sd-webui-controlnet/discussions/1236) and [sd-webui-controlnet discussion: Reference-adain Control](https://github.com/Mikubill/sd-webui-controlnet/discussions/1280).
Based on [this issue](https://github.com/huggingface/diffusers/issues/3566):
- `EulerAncestralDiscreteScheduler` gives poor results.
```py
import torch
from diffusers import UniPCMultistepScheduler
from diffusers.utils import load_image
# `StableDiffusionReferencePipeline` comes from the community pipeline file
# examples/community/stable_diffusion_reference.py
input_image = load_image("https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png")
pipe = StableDiffusionReferencePipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
safety_checker=None,
torch_dtype=torch.float16
).to('cuda:0')
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
result_img = pipe(ref_image=input_image,
prompt="1girl",
num_inference_steps=20,
reference_attn=True,
reference_adain=True).images[0]
```
Reference Image

Output Image of `reference_attn=True` and `reference_adain=False`

Output Image of `reference_attn=False` and `reference_adain=True`

Output Image of `reference_attn=True` and `reference_adain=True`

### Stable Diffusion ControlNet Reference
This pipeline uses Reference Control together with ControlNet. Refer to the [sd-webui-controlnet discussion: Reference-only Control](https://github.com/Mikubill/sd-webui-controlnet/discussions/1236) and [sd-webui-controlnet discussion: Reference-adain Control](https://github.com/Mikubill/sd-webui-controlnet/discussions/1280).
Based on [this issue](https://github.com/huggingface/diffusers/issues/3566):
- `EulerAncestralDiscreteScheduler` gives poor results.
- `guess_mode=True` works well for ControlNet v1.1
```py
import cv2
import torch
import numpy as np
from PIL import Image
from diffusers import ControlNetModel, UniPCMultistepScheduler
from diffusers.utils import load_image
# `StableDiffusionControlNetReferencePipeline` comes from the community pipeline file
# examples/community/stable_diffusion_controlnet_reference.py
input_image = load_image("https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png")
# get canny image
image = cv2.Canny(np.array(input_image), 100, 200)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetReferencePipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
controlnet=controlnet,
safety_checker=None,
torch_dtype=torch.float16
).to('cuda:0')
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
result_img = pipe(ref_image=input_image,
prompt="1girl",
image=canny_image,
num_inference_steps=20,
reference_attn=True,
reference_adain=True).images[0]
```
Reference Image

Output Image

### Stable Diffusion on IPEX
This diffusion pipeline aims to accelerate the inference of Stable Diffusion on Intel Xeon CPUs with BF16/FP32 precision using [IPEX](https://github.com/intel/intel-extension-for-pytorch).
To use this pipeline, you need to:
1. Install [IPEX](https://github.com/intel/intel-extension-for-pytorch)
**Note:** For each PyTorch release, there is a corresponding release of IPEX; the mapping is shown below. It is recommended to install PyTorch/IPEX 2.0 to get the best performance.
|PyTorch Version|IPEX Version|
|--|--|
|[v2.0.\*](https://github.com/pytorch/pytorch/tree/v2.0.1 "v2.0.1")|[v2.0.\*](https://github.com/intel/intel-extension-for-pytorch/tree/v2.0.100+cpu)|
|[v1.13.\*](https://github.com/pytorch/pytorch/tree/v1.13.0 "v1.13.0")|[v1.13.\*](https://github.com/intel/intel-extension-for-pytorch/tree/v1.13.100+cpu)|
You can simply use pip to install the latest version of IPEX.
```sh
python -m pip install intel_extension_for_pytorch
```
**Note:** To install a specific version, run with the following command:
```sh
python -m pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
```
2. After pipeline initialization, `prepare_for_ipex()` should be called to enable IPEX acceleration. Supported inference datatypes are Float32 and BFloat16.
**Note:** The generated image height/width passed to `prepare_for_ipex()` should be the same as those used for pipeline inference.
```python
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", custom_pipeline="stable_diffusion_ipex")
# For Float32
pipe.prepare_for_ipex(prompt, dtype=torch.float32, height=512, width=512) #value of image height/width should be consistent with the pipeline inference
# For BFloat16
pipe.prepare_for_ipex(prompt, dtype=torch.bfloat16, height=512, width=512) #value of image height/width should be consistent with the pipeline inference
```
Then you can use the IPEX pipeline in a similar way to the default Stable Diffusion pipeline.
```python
# For Float32
image = pipe(prompt, num_inference_steps=20, height=512, width=512).images[0]  # image height/width should be consistent with 'prepare_for_ipex()'
# For BFloat16
with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
    image = pipe(prompt, num_inference_steps=20, height=512, width=512).images[0]  # image height/width should be consistent with 'prepare_for_ipex()'
```
The following code compares the performance of the original Stable Diffusion pipeline with the IPEX-optimized pipeline.
```python
import torch
import intel_extension_for_pytorch as ipex
from diffusers import DiffusionPipeline, StableDiffusionPipeline
import time
prompt = "sailing ship in storm by Rembrandt"
model_id = "runwayml/stable-diffusion-v1-5"
# Helper function for time evaluation
def elapsed_time(pipeline, nb_pass=3, num_inference_steps=20):
# warmup
for _ in range(2):
images = pipeline(prompt, num_inference_steps=num_inference_steps, height=512, width=512).images
#time evaluation
start = time.time()
for _ in range(nb_pass):
pipeline(prompt, num_inference_steps=num_inference_steps, height=512, width=512)
end = time.time()
return (end - start) / nb_pass
############## bf16 inference performance ###############
# 1. IPEX Pipeline initialization
pipe = DiffusionPipeline.from_pretrained(model_id, custom_pipeline="stable_diffusion_ipex")
pipe.prepare_for_ipex(prompt, dtype=torch.bfloat16, height=512, width=512)
# 2. Original Pipeline initialization
pipe2 = StableDiffusionPipeline.from_pretrained(model_id)
# 3. Compare performance between Original Pipeline and IPEX Pipeline
with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
latency = elapsed_time(pipe)
print("Latency of StableDiffusionIPEXPipeline--bf16", latency)
latency = elapsed_time(pipe2)
print("Latency of StableDiffusionPipeline--bf16",latency)
############## fp32 inference performance ###############
# 1. IPEX Pipeline initialization
pipe3 = DiffusionPipeline.from_pretrained(model_id, custom_pipeline="stable_diffusion_ipex")
pipe3.prepare_for_ipex(prompt, dtype=torch.float32, height=512, width=512)
# 2. Original Pipeline initialization
pipe4 = StableDiffusionPipeline.from_pretrained(model_id)
# 3. Compare performance between Original Pipeline and IPEX Pipeline
latency = elapsed_time(pipe3)
print("Latency of StableDiffusionIPEXPipeline--fp32", latency)
latency = elapsed_time(pipe4)
print("Latency of StableDiffusionPipeline--fp32",latency)
```
### Stable Diffusion XL on IPEX
This diffusion pipeline aims to accelerate the inference of Stable Diffusion XL on Intel Xeon CPUs with BF16/FP32 precision using [IPEX](https://github.com/intel/intel-extension-for-pytorch).
To use this pipeline, you need to:
1. Install [IPEX](https://github.com/intel/intel-extension-for-pytorch)
**Note:** For each PyTorch release, there is a corresponding release of IPEX; the mapping is shown below. It is recommended to install PyTorch/IPEX 2.0 to get the best performance.
|PyTorch Version|IPEX Version|
|--|--|
|[v2.0.\*](https://github.com/pytorch/pytorch/tree/v2.0.1 "v2.0.1")|[v2.0.\*](https://github.com/intel/intel-extension-for-pytorch/tree/v2.0.100+cpu)|
|[v1.13.\*](https://github.com/pytorch/pytorch/tree/v1.13.0 "v1.13.0")|[v1.13.\*](https://github.com/intel/intel-extension-for-pytorch/tree/v1.13.100+cpu)|
You can simply use pip to install the latest version of IPEX.
```sh
python -m pip install intel_extension_for_pytorch
```
**Note:** To install a specific version, run with the following command:
```sh
python -m pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
```
2. After pipeline initialization, `prepare_for_ipex()` should be called to enable IPEX acceleration. Supported inference datatypes are Float32 and BFloat16.
**Note:** The values of `height` and `width` used during preparation with `prepare_for_ipex()` should be the same when running inference with the prepared pipeline.
```python
import torch
from pipeline_stable_diffusion_xl_ipex import StableDiffusionXLPipelineIpex
prompt = "sailing ship in storm by Rembrandt"
pipe = StableDiffusionXLPipelineIpex.from_pretrained("stabilityai/sdxl-turbo", low_cpu_mem_usage=True, use_safetensors=True)
# value of image height/width should be consistent with the pipeline inference
# For Float32
pipe.prepare_for_ipex(torch.float32, prompt, height=512, width=512)
# For BFloat16
pipe.prepare_for_ipex(torch.bfloat16, prompt, height=512, width=512)
```
Then you can use the IPEX pipeline in a similar way to the default Stable Diffusion XL pipeline.
```python
# value of image height/width should be consistent with 'prepare_for_ipex()'
# For Float32
image = pipe(prompt, num_inference_steps=num_inference_steps, height=512, width=512, guidance_scale=guidance_scale).images[0]
# For BFloat16
with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
image = pipe(prompt, num_inference_steps=num_inference_steps, height=512, width=512, guidance_scale=guidance_scale).images[0]
```
The following code compares the performance of the original Stable Diffusion XL pipeline with the IPEX-optimized pipeline.
With this optimized pipeline, we can get roughly a 1.4-2x performance boost with BFloat16 on 4th-generation Intel Xeon CPUs (code-named Sapphire Rapids).
```python
import torch
from diffusers import StableDiffusionXLPipeline
from pipeline_stable_diffusion_xl_ipex import StableDiffusionXLPipelineIpex
import time
prompt = "sailing ship in storm by Rembrandt"
model_id = "stabilityai/sdxl-turbo"
steps = 4
# Helper function for time evaluation
def elapsed_time(pipeline, nb_pass=3, num_inference_steps=1):
# warmup
for _ in range(2):
images = pipeline(prompt, num_inference_steps=num_inference_steps, height=512, width=512, guidance_scale=0.0).images
#time evaluation
start = time.time()
for _ in range(nb_pass):
pipeline(prompt, num_inference_steps=num_inference_steps, height=512, width=512, guidance_scale=0.0)
end = time.time()
return (end - start) / nb_pass
############## bf16 inference performance ###############
# 1. IPEX Pipeline initialization
pipe = StableDiffusionXLPipelineIpex.from_pretrained(model_id, low_cpu_mem_usage=True, use_safetensors=True)
pipe.prepare_for_ipex(torch.bfloat16, prompt, height=512, width=512)
# 2. Original Pipeline initialization
pipe2 = StableDiffusionXLPipeline.from_pretrained(model_id, low_cpu_mem_usage=True, use_safetensors=True)
# 3. Compare performance between Original Pipeline and IPEX Pipeline
with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
latency = elapsed_time(pipe, num_inference_steps=steps)
print("Latency of StableDiffusionXLPipelineIpex--bf16", latency, "s for total", steps, "steps")
latency = elapsed_time(pipe2, num_inference_steps=steps)
print("Latency of StableDiffusionXLPipeline--bf16", latency, "s for total", steps, "steps")
############## fp32 inference performance ###############
# 1. IPEX Pipeline initialization
pipe3 = StableDiffusionXLPipelineIpex.from_pretrained(model_id, low_cpu_mem_usage=True, use_safetensors=True)
pipe3.prepare_for_ipex(torch.float32, prompt, height=512, width=512)
# 2. Original Pipeline initialization
pipe4 = StableDiffusionXLPipeline.from_pretrained(model_id, low_cpu_mem_usage=True, use_safetensors=True)
# 3. Compare performance between Original Pipeline and IPEX Pipeline
latency = elapsed_time(pipe3, num_inference_steps=steps)
print("Latency of StableDiffusionXLPipelineIpex--fp32", latency, "s for total", steps, "steps")
latency = elapsed_time(pipe4, num_inference_steps=steps)
print("Latency of StableDiffusionXLPipeline--fp32",latency, "s for total", steps, "steps")
```
### CLIP Guided Images Mixing With Stable Diffusion

The CLIP guided Stable Diffusion images mixing pipeline combines two images using standard diffusion models. It can optionally use a CoCa model to caption the content image, so that you don't have to write an image description yourself. A full runnable example is shown in the "Example Images Mixing (with CoCa)" section below.
[More code examples](https://github.com/TheDenk/images_mixing)
### Stable Diffusion XL Long Weighted Prompt Pipeline
This SDXL pipeline supports prompts and negative prompts of unlimited length and is compatible with the A1111 prompt-weighting style.
You can provide both `prompt` and `prompt_2`. If only one prompt is provided, `prompt_2` will be a copy of it. Here is sample code showing how to use this pipeline.
```python
from diffusers import DiffusionPipeline
from diffusers.utils import load_image
import torch
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
    custom_pipeline="lpw_stable_diffusion_xl",
)
prompt = "photo of a cute (white) cat running on the grass" * 20
prompt2 = "chasing (birds:1.5)" * 20
prompt = f"{prompt},{prompt2}"
neg_prompt = "blur, low quality, carton, animate"
pipe.to("cuda")
# text2img
t2i_images = pipe(
prompt=prompt,
negative_prompt=neg_prompt,
).images # alternatively, you can call the .text2img() function
# img2img
input_image = load_image("/path/to/local/image.png") # or URL to your input image
i2i_images = pipe.img2img(
prompt=prompt,
negative_prompt=neg_prompt,
image=input_image,
strength=0.8, # higher strength will result in more variation compared to original image
).images
# inpaint
input_mask = load_image("/path/to/local/mask.png") # or URL to your input inpainting mask
inpaint_images = pipe.inpaint(
prompt="photo of a cute (black) cat running on the grass" * 20,
negative_prompt=neg_prompt,
image=input_image,
mask=input_mask,
strength=0.6, # higher strength will result in more variation compared to original image
).images
pipe.to("cpu")
torch.cuda.empty_cache()
from IPython.display import display # assuming you are using this code in a notebook
display(t2i_images[0])
display(i2i_images[0])
display(inpaint_images[0])
```
In the above code, `prompt2` is appended to `prompt`, so the combined prompt is longer than 77 tokens; "birds" still show up in the result.

For more results, checkout [PR #6114](https://github.com/huggingface/diffusers/pull/6114).
### Example Images Mixing (with CoCa)
```python
import requests
from io import BytesIO
import PIL
import torch
import open_clip
from open_clip import SimpleTokenizer
from diffusers import DiffusionPipeline
from transformers import CLIPFeatureExtractor, CLIPModel
def download_image(url):
response = requests.get(url)
return PIL.Image.open(BytesIO(response.content)).convert("RGB")
# Loading additional models
feature_extractor = CLIPFeatureExtractor.from_pretrained(
"laion/CLIP-ViT-B-32-laion2B-s34B-b79K"
)
clip_model = CLIPModel.from_pretrained(
"laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16
)
coca_model = open_clip.create_model('coca_ViT-L-14', pretrained='laion2B-s13B-b90k').to('cuda')
coca_model.dtype = torch.float16
coca_transform = open_clip.image_transform(
coca_model.visual.image_size,
is_train = False,
mean = getattr(coca_model.visual, 'image_mean', None),
std = getattr(coca_model.visual, 'image_std', None),
)
coca_tokenizer = SimpleTokenizer()
# Pipeline creating
mixing_pipeline = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4",
custom_pipeline="clip_guided_images_mixing_stable_diffusion",
clip_model=clip_model,
feature_extractor=feature_extractor,
coca_model=coca_model,
coca_tokenizer=coca_tokenizer,
coca_transform=coca_transform,
torch_dtype=torch.float16,
)
mixing_pipeline.enable_attention_slicing()
mixing_pipeline = mixing_pipeline.to("cuda")
# Pipeline running
generator = torch.Generator(device="cuda").manual_seed(17)
content_image = download_image("https://huggingface.co/datasets/TheDenk/images_mixing/resolve/main/boromir.jpg")
style_image = download_image("https://huggingface.co/datasets/TheDenk/images_mixing/resolve/main/gigachad.jpg")
pipe_images = mixing_pipeline(
num_inference_steps=50,
content_image=content_image,
style_image=style_image,
noise_strength=0.65,
slerp_latent_style_strength=0.9,
slerp_prompt_style_strength=0.1,
slerp_clip_image_style_strength=0.1,
guidance_scale=9.0,
batch_size=1,
clip_guidance_scale=100,
generator=generator,
).images
```

### Stable Diffusion Mixture Tiling
This pipeline uses the Mixture of Diffusers approach. Refer to the [Mixture of Diffusers](https://arxiv.org/abs/2302.02412) paper for more details.
```python
from diffusers import LMSDiscreteScheduler, DiffusionPipeline
# Create scheduler and model (similar to StableDiffusionPipeline)
scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
pipeline = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=scheduler, custom_pipeline="mixture_tiling")
pipeline.to("cuda")
# Mixture of Diffusers generation
image = pipeline(
prompt=[[
"A charming house in the countryside, by jakub rozalski, sunset lighting, elegant, highly detailed, smooth, sharp focus, artstation, stunning masterpiece",
"A dirt road in the countryside crossing pastures, by jakub rozalski, sunset lighting, elegant, highly detailed, smooth, sharp focus, artstation, stunning masterpiece",
"An old and rusty giant robot lying on a dirt road, by jakub rozalski, dark sunset lighting, elegant, highly detailed, smooth, sharp focus, artstation, stunning masterpiece"
]],
tile_height=640,
tile_width=640,
tile_row_overlap=0,
tile_col_overlap=256,
guidance_scale=8,
seed=7178915308,
num_inference_steps=50,
)["images"][0]
```

### TensorRT Inpainting Stable Diffusion Pipeline
The TensorRT Pipeline can be used to accelerate inference for Stable Diffusion inpainting.
NOTE: The ONNX conversions and TensorRT engine build may take up to 30 minutes.
```python
import requests
from io import BytesIO
from PIL import Image
import torch
from diffusers import PNDMScheduler
from diffusers.pipelines.stable_diffusion import StableDiffusionInpaintPipeline
# Use the PNDMScheduler scheduler here instead
scheduler = PNDMScheduler.from_pretrained("stabilityai/stable-diffusion-2-inpainting", subfolder="scheduler")
pipe = StableDiffusionInpaintPipeline.from_pretrained("stabilityai/stable-diffusion-2-inpainting",
custom_pipeline="stable_diffusion_tensorrt_inpaint",
revision='fp16',
torch_dtype=torch.float16,
scheduler=scheduler,
)
# re-use cached folder to save ONNX models and TensorRT Engines
pipe.set_cached_folder("stabilityai/stable-diffusion-2-inpainting", revision='fp16',)
pipe = pipe.to("cuda")
url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
response = requests.get(url)
input_image = Image.open(BytesIO(response.content)).convert("RGB")
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
response = requests.get(mask_url)
mask_image = Image.open(BytesIO(response.content)).convert("RGB")
prompt = "a mecha robot sitting on a bench"
image = pipe(prompt, image=input_image, mask_image=mask_image, strength=0.75,).images[0]
image.save('tensorrt_inpaint_mecha_robot.png')
```
### Stable Diffusion Mixture Canvas
This pipeline uses the Mixture of Diffusers approach. Refer to the [Mixture of Diffusers](https://arxiv.org/abs/2302.02412) paper for more details.
```python
from PIL import Image
from diffusers import LMSDiscreteScheduler, DiffusionPipeline
from diffusers.pipelines.pipeline_utils import Image2ImageRegion, Text2ImageRegion, preprocess_image
# Load and preprocess guide image
iic_image = preprocess_image(Image.open("input_image.png").convert("RGB"))
# Create scheduler and model (similar to StableDiffusionPipeline)
scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
pipeline = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=scheduler, custom_pipeline="mixture_canvas")
pipeline.to("cuda")
# Mixture of Diffusers generation
output = pipeline(
canvas_height=800,
canvas_width=352,
regions=[
Text2ImageRegion(0, 800, 0, 352, guidance_scale=8,
prompt=f"best quality, masterpiece, WLOP, sakimichan, art contest winner on pixiv, 8K, intricate details, wet effects, rain drops, ethereal, mysterious, futuristic, UHD, HDR, cinematic lighting, in a beautiful forest, rainy day, award winning, trending on artstation, beautiful confident cheerful young woman, wearing a futuristic sleeveless dress, ultra beautiful detailed eyes, hyper-detailed face, complex, perfect, model, textured, chiaroscuro, professional make-up, realistic, figure in frame, "),
Image2ImageRegion(800-352, 800, 0, 352, reference_image=iic_image, strength=1.0),
],
num_inference_steps=100,
seed=5525475061,
)["images"][0]
```


### IADB pipeline
This pipeline is the implementation of the [α-(de)Blending: a Minimalist Deterministic Diffusion Model](https://arxiv.org/abs/2305.03486) paper.
It is a simple and minimalist diffusion model.
The following code shows how to use the IADB pipeline to generate images using a pretrained celebahq-256 model.
```python
import matplotlib.pyplot as plt
from diffusers import DiffusionPipeline

pipeline_iadb = DiffusionPipeline.from_pretrained("thomasc4/iadb-celebahq-256", custom_pipeline='iadb')
pipeline_iadb = pipeline_iadb.to('cuda')
output = pipeline_iadb(batch_size=4,num_inference_steps=128)
for i in range(len(output[0])):
plt.imshow(output[0][i])
plt.show()
```
Sampling with the IADB formulation is easy, and can be done in a few lines (the pipeline already implements it):
```python
import torch

def sample_iadb(model, x0, nb_step):
    # x0 is pure noise; the model predicts the blending direction at each alpha
    x_alpha = x0
    for t in range(nb_step):
        alpha = t / nb_step
        alpha_next = (t + 1) / nb_step
        d = model(x_alpha, torch.tensor(alpha, device=x_alpha.device))['sample']
        x_alpha = x_alpha + (alpha_next - alpha) * d
    return x_alpha
```
The training loop is also straightforward:
```python
# Training loop (sketch): D is the denoising network; sample_noise/sample_dataset
# return batches of x0 (noise) and x1 (data) with shape [B, C, H, W]
while True:
    x0 = sample_noise()
    x1 = sample_dataset()
    alpha = torch.rand(batch_size)
    # Blend (alpha is reshaped so it broadcasts over the image dimensions)
    x_alpha = (1 - alpha.view(-1, 1, 1, 1)) * x0 + alpha.view(-1, 1, 1, 1) * x1
    # Loss: the network learns to predict the direction (x1 - x0)
    loss = torch.sum((D(x_alpha, alpha) - (x1 - x0)) ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```
### Zero1to3 pipeline
This pipeline is the implementation of the [Zero-1-to-3: Zero-shot One Image to 3D Object](https://arxiv.org/abs/2303.11328) paper.
The original PyTorch Lightning implementation is available in [this repo](https://github.com/cvlab-columbia/zero123), and a diffusers implementation in [this repo](https://github.com/kxhit/zero123-hf).
The following code shows how to use the Zero1to3 pipeline to generate novel view synthesis images using a pretrained stable diffusion model.
```python
import os
import torch
from pipeline_zero1to3 import Zero1to3StableDiffusionPipeline
from diffusers.utils import load_image
model_id = "kxic/zero123-165000" # zero123-105000, zero123-165000, zero123-xl
pipe = Zero1to3StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.enable_xformers_memory_efficient_attention()
pipe.enable_vae_tiling()
pipe.enable_attention_slicing()
pipe = pipe.to("cuda")
num_images_per_prompt = 4
# test inference pipeline
# each pose is [polar angle (vertical rotation in degrees), azimuth angle (horizontal rotation in degrees), zoom (relative distance from center)]
query_pose1 = [-75.0, 100.0, 0.0]
query_pose2 = [-20.0, 125.0, 0.0]
query_pose3 = [-55.0, 90.0, 0.0]
# load image
# H, W = (256, 256) # H, W = (512, 512) # zero123 training is 256,256
# for batch input
input_image1 = load_image("./demo/4_blackarm.png") #load_image("https://cvlab-zero123-live.hf.space/file=/home/user/app/configs/4_blackarm.png")
input_image2 = load_image("./demo/8_motor.png") #load_image("https://cvlab-zero123-live.hf.space/file=/home/user/app/configs/8_motor.png")
input_image3 = load_image("./demo/7_london.png") #load_image("https://cvlab-zero123-live.hf.space/file=/home/user/app/configs/7_london.png")
input_images = [input_image1, input_image2, input_image3]
query_poses = [query_pose1, query_pose2, query_pose3]
# # for single input
# H, W = (256, 256)
# input_images = [input_image2.resize((H, W), PIL.Image.NEAREST)]
# query_poses = [query_pose2]
# it is better to preprocess the input images (background removal) first
from gradio_new import preprocess_image, create_carvekit_interface
import numpy as np
import PIL.Image as Image
pre_images = []
models = dict()
print('Instantiating Carvekit HiInterface...')
models['carvekit'] = create_carvekit_interface()
if not isinstance(input_images, list):
input_images = [input_images]
for raw_im in input_images:
input_im = preprocess_image(models, raw_im, True)
H, W = input_im.shape[:2]
pre_images.append(Image.fromarray((input_im * 255.0).astype(np.uint8)))
input_images = pre_images
# infer pipeline, in original zero123 num_inference_steps=76
images = pipe(input_imgs=input_images, prompt_imgs=input_images, poses=query_poses, height=H, width=W,
guidance_scale=3.0, num_images_per_prompt=num_images_per_prompt, num_inference_steps=50).images
# save imgs
log_dir = "logs"
os.makedirs(log_dir, exist_ok=True)
bs = len(input_images)
i = 0
for obj in range(bs):
for idx in range(num_images_per_prompt):
images[i].save(os.path.join(log_dir,f"obj{obj}_{idx}.jpg"))
i += 1
```
### Stable Diffusion XL Reference
This pipeline brings the Reference-only control technique to SDXL. Refer to the [Stable Diffusion Reference](https://github.com/huggingface/diffusers/blob/main/examples/community/README.md#stable-diffusion-reference) section for more details.
```py
import torch
from PIL import Image
from diffusers.utils import load_image
from diffusers import DiffusionPipeline
from diffusers.schedulers import UniPCMultistepScheduler
input_image = load_image("https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png")
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    custom_pipeline="stable_diffusion_xl_reference",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16").to('cuda:0')
# Alternatively, if you have the community file locally, import
# StableDiffusionXLReferencePipeline from it and call its from_pretrained directly.
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
result_img = pipe(ref_image=input_image,
prompt="1girl",
num_inference_steps=20,
reference_attn=True,
reference_adain=True).images[0]
```
Reference Image

Output Image
`prompt: 1 girl`
`reference_attn=True, reference_adain=True, num_inference_steps=20`

Reference Image

Output Image
`prompt: A dog`
`reference_attn=True, reference_adain=False, num_inference_steps=20`

Reference Image

Output Image
`prompt: An astronaut riding a lion`
`reference_attn=True, reference_adain=True, num_inference_steps=20`

### Stable Diffusion FABRIC Pipeline
FABRIC is an approach applicable to a wide range of popular diffusion models. It exploits
the self-attention layers present in the most widely used architectures to condition
the diffusion process on a set of feedback images.
```python
import requests
import torch
from PIL import Image
from io import BytesIO
from diffusers import DiffusionPipeline
# load the pipeline
# make sure you're logged in with `huggingface-cli login`
model_id_or_path = "runwayml/stable-diffusion-v1-5"
#can also be used with dreamlike-art/dreamlike-photoreal-2.0
pipe = DiffusionPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16, custom_pipeline="pipeline_fabric").to("cuda")
# let's specify a prompt
prompt = "An astronaut riding an elephant"
negative_prompt = "lowres, cropped"
# call the pipeline
image = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
num_inference_steps=20,
generator=torch.manual_seed(12)
).images[0]
image.save("horse_to_elephant.jpg")
# let's try another example with feedback
url = "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/A%20black%20colored%20car.png"
response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
prompt = "photo, A blue colored car, fish eye"
liked = [init_image]
## same goes with disliked
# call the pipeline
torch.manual_seed(0)
image = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
liked = liked,
num_inference_steps=20,
).images[0]
image.save("black_to_blue.png")
```
*With enough feedback, you can create very similar, high-quality images.*
The original codebase can be found at [sd-fabric/fabric](https://github.com/sd-fabric/fabric), and available checkpoints are [dreamlike-art/dreamlike-photoreal-2.0](https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0), [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5), and [stabilityai/stable-diffusion-2-1](https://huggingface.co/stabilityai/stable-diffusion-2-1) (may give unexpected results).
Let's have a look at the images (_512×512_).
| Without Feedback | With Feedback (1st image) |
|---------------------|---------------------|
|  |  |
### Masked Im2Im Stable Diffusion Pipeline
This pipeline reimplements the sketch inpaint feature from A1111 for non-inpaint models. The following code reads two images: the original and a copy with a mask painted over it. It computes the mask as the difference between the two images and inpaints the area defined by the mask.
```python
import numpy
import torch
import PIL
from diffusers import EulerAncestralDiscreteScheduler
# MaskedStableDiffusionImg2ImgPipeline is defined in examples/community/masked_stable_diffusion_img2img.py
from masked_stable_diffusion_img2img import MaskedStableDiffusionImg2ImgPipeline

# read the original image and the copy with the mask painted over it
img = PIL.Image.open("./mech.png")
img_paint = PIL.Image.open("./mech_painted.png")
# the mask is wherever the two images differ
neq = numpy.any(numpy.array(img) != numpy.array(img_paint), axis=-1)
mask = neq / neq.max()
pipeline = MaskedStableDiffusionImg2ImgPipeline.from_pretrained("frankjoshua/icbinpICantBelieveIts_v8")
# works best with EulerAncestralDiscreteScheduler
pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config)
generator = torch.Generator(device="cpu").manual_seed(4)
prompt = "a man wearing a mask"
result = pipeline(prompt=prompt, image=img_paint, mask=mask, strength=0.75,
generator=generator)
result.images[0].save("result.png")
```
original image mech.png
<img src="https://github.com/noskill/diffusers/assets/733626/10ad972d-d655-43cb-8de1-039e3d79e849" width="25%">
image with mask mech_painted.png
<img src="https://github.com/noskill/diffusers/assets/733626/c334466a-67fe-4377-9ff7-f46021b9c224" width="25%">
result:
<img src="https://github.com/noskill/diffusers/assets/733626/23a0a71d-51db-471e-926a-107ac62512a8" width="25%">
### Prompt2Prompt Pipeline
Prompt2Prompt allows the following edits:
- ReplaceEdit (change words in prompt)
- ReplaceEdit with local blend (change words in prompt, keep image part unrelated to changes constant)
- RefineEdit (add words to prompt)
- RefineEdit with local blend (add words to prompt, keep image part unrelated to changes constant)
- ReweightEdit (modulate importance of words)
Here's a full example for `ReplaceEdit`:
```python
import torch
import numpy as np
import matplotlib.pyplot as plt
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", custom_pipeline="pipeline_prompt2prompt").to("cuda")
prompts = ["A turtle playing with a ball",
"A monkey playing with a ball"]
cross_attention_kwargs = {
"edit_type": "replace",
"cross_replace_steps": 0.4,
"self_replace_steps": 0.4
}
outputs = pipe(prompt=prompts, height=512, width=512, num_inference_steps=50, cross_attention_kwargs=cross_attention_kwargs)
```
And abbreviated examples for the other edits:
`ReplaceEdit with local blend`
```python
prompts = ["A turtle playing with a ball",
"A monkey playing with a ball"]
cross_attention_kwargs = {
"edit_type": "replace",
"cross_replace_steps": 0.4,
"self_replace_steps": 0.4,
"local_blend_words": ["turtle", "monkey"]
}
```
`RefineEdit`
```python
prompts = ["A turtle",
"A turtle in a forest"]
cross_attention_kwargs = {
"edit_type": "refine",
"cross_replace_steps": 0.4,
"self_replace_steps": 0.4,
}
```
`RefineEdit with local blend`
```python
prompts = ["A turtle",
"A turtle in a forest"]
cross_attention_kwargs = {
"edit_type": "refine",
"cross_replace_steps": 0.4,
"self_replace_steps": 0.4,
"local_blend_words": ["in", "a" , "forest"]
}
```
`ReweightEdit`
```python
prompts = ["A smiling turtle"] * 2
cross_attention_kwargs = {
"edit_type": "reweight",
"cross_replace_steps": 0.4,
"self_replace_steps": 0.4,
"equalizer_words": ["smiling"],
"equalizer_strengths": [5]
}
```
Side note: See [this GitHub gist](https://gist.github.com/UmerHA/b65bb5fb9626c9c73f3ade2869e36164) if you want to visualize the attention maps.
### Latent Consistency Pipeline
Latent Consistency Models was proposed in [Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference](https://arxiv.org/abs/2310.04378) by _Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, Hang Zhao_ from Tsinghua University.
The abstract of the paper reads as follows:
*Latent Diffusion models (LDMs) have achieved remarkable results in synthesizing high-resolution images. However, the iterative sampling process is computationally intensive and leads to slow generation. Inspired by Consistency Models (song et al.), we propose Latent Consistency Models (LCMs), enabling swift inference with minimal steps on any pre-trained LDMs, including Stable Diffusion (rombach et al). Viewing the guided reverse diffusion process as solving an augmented probability flow ODE (PF-ODE), LCMs are designed to directly predict the solution of such ODE in latent space, mitigating the need for numerous iterations and allowing rapid, high-fidelity sampling. Efficiently distilled from pre-trained classifier-free guided diffusion models, a high-quality 768 x 768 2~4-step LCM takes only 32 A100 GPU hours for training. Furthermore, we introduce Latent Consistency Fine-tuning (LCF), a novel method that is tailored for fine-tuning LCMs on customized image datasets. Evaluation on the LAION-5B-Aesthetics dataset demonstrates that LCMs achieve state-of-the-art text-to-image generation performance with few-step inference. Project Page: [this https URL](https://latent-consistency-models.github.io/)*
The model can be used with `diffusers` as follows:
1. Load the model from the community pipeline:
```py
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", custom_pipeline="latent_consistency_txt2img", custom_revision="main")
# To save GPU memory, torch.float16 can be used, but it may compromise image quality.
pipe.to(torch_device="cuda", torch_dtype=torch.float32)
```
2. Run inference with as little as 4 steps:
```py
prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
# Can be set to 1~50 steps. LCM supports fast inference even with <= 4 steps. Recommended: 1~8 steps.
num_inference_steps = 4
images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0, lcm_origin_steps=50, output_type="pil").images
```
For any questions or feedback, feel free to reach out to [Simian Luo](https://github.com/luosiallen).
You can also try this pipeline directly in the [🚀 official spaces](https://huggingface.co/spaces/SimianLuo/Latent_Consistency_Model).
### Latent Consistency Img2img Pipeline
This pipeline extends the Latent Consistency Pipeline to allow it to take an input image.
1. Load the model from the community pipeline:
```py
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", custom_pipeline="latent_consistency_img2img")
# To save GPU memory, torch.float16 can be used, but it may compromise image quality.
pipe.to(torch_device="cuda", torch_dtype=torch.float32)
```
2. Run inference with as little as 4 steps:
```py
from PIL import Image

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
input_image = Image.open("myimg.png")
strength = 0.5  # strength=0: no change to the input image; strength=1: completely overwrite it
# Can be set to 1~50 steps. LCM supports fast inference even with <= 4 steps. Recommended: 1~8 steps.
num_inference_steps = 4
images = pipe(prompt=prompt, image=input_image, strength=strength, num_inference_steps=num_inference_steps, guidance_scale=8.0, lcm_origin_steps=50, output_type="pil").images
```
### Latent Consistency Interpolation Pipeline
This pipeline extends the Latent Consistency Pipeline to allow for interpolation of the latent space between multiple prompts. It is similar to the [Stable Diffusion Interpolate](https://github.com/huggingface/diffusers/blob/main/examples/community/interpolate_stable_diffusion.py) and [unCLIP Interpolate](https://github.com/huggingface/diffusers/blob/main/examples/community/unclip_text_interpolation.py) community pipelines.
```py
import torch
import numpy as np
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", custom_pipeline="latent_consistency_interpolate")
# To save GPU memory, torch.float16 can be used, but it may compromise image quality.
pipe.to(torch_device="cuda", torch_dtype=torch.float32)
prompts = [
"Self-portrait oil painting, a beautiful cyborg with golden hair, Margot Robbie, 8k",
"Self-portrait oil painting, an extremely strong man, body builder, Huge Jackman, 8k",
"An astronaut floating in space, renaissance art, realistic, high quality, 8k",
"Oil painting of a cat, cute, dream-like",
"Hugging face emoji, cute, realistic"
]
num_inference_steps = 4
num_interpolation_steps = 60
seed = 1337
torch.manual_seed(seed)
np.random.seed(seed)
images = pipe(
prompt=prompts,
height=512,
width=512,
num_inference_steps=num_inference_steps,
num_interpolation_steps=num_interpolation_steps,
guidance_scale=8.0,
embedding_interpolation_type="lerp",
latent_interpolation_type="slerp",
process_batch_size=4, # Make it higher or lower based on your GPU memory
generator=torch.Generator().manual_seed(seed),
)
assert len(images) == (len(prompts) - 1) * num_interpolation_steps
```
### StableDiffusionUpscaleLDM3D Pipeline
[LDM3D-VR](https://arxiv.org/pdf/2311.03226.pdf) is an extended version of LDM3D.
The abstract from the paper is:
*Latent diffusion models have proven to be state-of-the-art in the creation and manipulation of visual outputs. However, as far as we know, the generation of depth maps jointly with RGB is still limited. We introduce LDM3D-VR, a suite of diffusion models targeting virtual reality development that includes LDM3D-pano and LDM3D-SR. These models enable the generation of panoramic RGBD based on textual prompts and the upscaling of low-resolution inputs to high-resolution RGBD, respectively. Our models are fine-tuned from existing pretrained models on datasets containing panoramic/high-resolution RGB images, depth maps and captions. Both models are evaluated in comparison to existing related methods*
Two checkpoints are available for use:
- [ldm3d-pano](https://huggingface.co/Intel/ldm3d-pano). This checkpoint enables the generation of panoramic images and requires the StableDiffusionLDM3DPipeline pipeline to be used.
- [ldm3d-sr](https://huggingface.co/Intel/ldm3d-sr). This checkpoint enables the upscaling of RGB and depth images. It can be used in cascade after the original LDM3D pipeline using the StableDiffusionUpscaleLDM3DPipeline pipeline.
```py
from PIL import Image
import os
import torch
from diffusers import StableDiffusionLDM3DPipeline, DiffusionPipeline
# Generate a rgb/depth output from LDM3D
pipe_ldm3d = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-4c")
pipe_ldm3d.to("cuda")
prompt = "A picture of some lemons on a table"
output = pipe_ldm3d(prompt)
rgb_image, depth_image = output.rgb, output.depth
rgb_image[0].save("lemons_ldm3d_rgb.jpg")
depth_image[0].save("lemons_ldm3d_depth.png")
# Upscale the previous output to a resolution of (1024, 1024)
pipe_ldm3d_upscale = DiffusionPipeline.from_pretrained("Intel/ldm3d-sr", custom_pipeline="pipeline_stable_diffusion_upscale_ldm3d")
pipe_ldm3d_upscale.to("cuda")
low_res_img = Image.open("lemons_ldm3d_rgb.jpg").convert("RGB")
low_res_depth = Image.open("lemons_ldm3d_depth.png").convert("L")
outputs = pipe_ldm3d_upscale(prompt="high quality high resolution uhd 4k image", rgb=low_res_img, depth=low_res_depth, num_inference_steps=50, target_res=[1024, 1024])
upscaled_rgb, upscaled_depth = outputs.rgb[0], outputs.depth[0]
upscaled_rgb.save("upscaled_lemons_rgb.png")
upscaled_depth.save("upscaled_lemons_depth.png")
```
### ControlNet + T2I Adapter Pipeline
This pipeline combines ControlNet and T2IAdapter into a single pipeline, where the forward pass is executed once.
It receives `control_image` and `adapter_image`, as well as `controlnet_conditioning_scale` and `adapter_conditioning_scale`, for the ControlNet and Adapter modules, respectively. Whenever `adapter_conditioning_scale = 0` or `controlnet_conditioning_scale = 0`, it will act as a full ControlNet module or as a full T2IAdapter module, respectively.
```py
import cv2
import numpy as np
import torch
from controlnet_aux.midas import MidasDetector
from PIL import Image
from diffusers import AutoencoderKL, ControlNetModel, MultiAdapter, T2IAdapter
from diffusers.pipelines.controlnet.multicontrolnet import MultiControlNetModel
from diffusers.utils import load_image
from examples.community.pipeline_stable_diffusion_xl_controlnet_adapter import (
StableDiffusionXLControlNetAdapterPipeline,
)
controlnet_depth = ControlNetModel.from_pretrained(
"diffusers/controlnet-depth-sdxl-1.0",
torch_dtype=torch.float16,
variant="fp16",
use_safetensors=True
)
adapter_depth = T2IAdapter.from_pretrained(
"TencentARC/t2i-adapter-depth-midas-sdxl-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True)
pipe = StableDiffusionXLControlNetAdapterPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
controlnet=controlnet_depth,
adapter=adapter_depth,
vae=vae,
variant="fp16",
use_safetensors=True,
torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")
pipe.enable_xformers_memory_efficient_attention()
# pipe.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2)
midas_depth = MidasDetector.from_pretrained(
"valhalla/t2iadapter-aux-models", filename="dpt_large_384.pt", model_type="dpt_large"
).to("cuda")
prompt = "a tiger sitting on a park bench"
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
image = load_image(img_url).resize((1024, 1024))
depth_image = midas_depth(
image, detect_resolution=512, image_resolution=1024
)
strength = 0.5
images = pipe(
prompt,
control_image=depth_image,
adapter_image=depth_image,
num_inference_steps=30,
controlnet_conditioning_scale=strength,
adapter_conditioning_scale=strength,
).images
images[0].save("controlnet_and_adapter.png")
```
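To illustrate the degenerate cases described above, setting one conditioning scale to zero disables that branch. Reusing the objects from the example, the following call (a sketch) behaves as a pure T2IAdapter pipeline:
```py
# ControlNet branch disabled; only the T2I-Adapter conditions the generation
images = pipe(
    prompt,
    control_image=depth_image,
    adapter_image=depth_image,
    num_inference_steps=30,
    controlnet_conditioning_scale=0.0,
    adapter_conditioning_scale=strength,
).images
```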
### ControlNet + T2I Adapter + Inpainting Pipeline
This pipeline extends the ControlNet + T2I Adapter pipeline above with inpainting support: in addition to the conditioning images, it receives `image`, `mask_image`, and a `strength` for the inpainting process.
```py
import cv2
import numpy as np
import torch
from controlnet_aux.midas import MidasDetector
from PIL import Image
from diffusers import AutoencoderKL, ControlNetModel, MultiAdapter, T2IAdapter
from diffusers.pipelines.controlnet.multicontrolnet import MultiControlNetModel
from diffusers.utils import load_image
from examples.community.pipeline_stable_diffusion_xl_controlnet_adapter_inpaint import (
StableDiffusionXLControlNetAdapterInpaintPipeline,
)
controlnet_depth = ControlNetModel.from_pretrained(
"diffusers/controlnet-depth-sdxl-1.0",
torch_dtype=torch.float16,
variant="fp16",
use_safetensors=True
)
adapter_depth = T2IAdapter.from_pretrained(
"TencentARC/t2i-adapter-depth-midas-sdxl-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True)
pipe = StableDiffusionXLControlNetAdapterInpaintPipeline.from_pretrained(
"diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
controlnet=controlnet_depth,
adapter=adapter_depth,
vae=vae,
variant="fp16",
use_safetensors=True,
torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")
pipe.enable_xformers_memory_efficient_attention()
# pipe.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2)
midas_depth = MidasDetector.from_pretrained(
"valhalla/t2iadapter-aux-models", filename="dpt_large_384.pt", model_type="dpt_large"
).to("cuda")
prompt = "a tiger sitting on a park bench"
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
image = load_image(img_url).resize((1024, 1024))
mask_image = load_image(mask_url).resize((1024, 1024))
depth_image = midas_depth(
image, detect_resolution=512, image_resolution=1024
)
strength = 0.4
images = pipe(
prompt,
image=image,
mask_image=mask_image,
control_image=depth_image,
adapter_image=depth_image,
num_inference_steps=30,
controlnet_conditioning_scale=strength,
adapter_conditioning_scale=strength,
strength=0.7,
).images
images[0].save("controlnet_and_adapter_inpaint.png")
```
### Regional Prompting Pipeline
This pipeline is a port of the [Regional Prompter extension](https://github.com/hako-mikan/sd-webui-regional-prompter) for [Stable Diffusion web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) to diffusers.
This code implements a pipeline for the Stable Diffusion model, enabling the division of the canvas into multiple regions, with different prompts applicable to each region. Users can specify regions in two ways: using `Cols` and `Rows` modes for grid-like divisions, or the `Prompt` mode for regions calculated based on prompts.

### Usage
### Sample Code
```py
import time
from examples.community.regional_prompting_stable_diffusion import RegionalPromptingStableDiffusionPipeline

# model_path is a local checkpoint (e.g. a .safetensors file); vae is an optional AutoencoderKL
pipe = RegionalPromptingStableDiffusionPipeline.from_single_file(model_path, vae=vae)
rp_args = {
    "mode": "rows",
    "div": "1;1;1"
}
prompt ="""
green hair twintail BREAK
red blouse BREAK
blue skirt
"""
negative_prompt = "lowres, bad anatomy"
images = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    guidance_scale=7.5,
    height=768,
    width=512,
    num_inference_steps=20,
    num_images_per_prompt=1,
    rp_args=rp_args
).images

timestamp = time.strftime(r"%Y%m%d%H%M%S")
for i, image in enumerate(images):
    image.save(f"img-{timestamp}-{i + 1}.png")
```
### Cols, Rows mode
In the Cols, Rows mode, you can split the canvas vertically and horizontally and assign prompts to each region. The split ratio is specified by `div`, for example `3;3;2` or `0.1;0.5`; an `rp_args` example follows the prompt below. Furthermore, as described later, you can also subdivide the split Cols and Rows to specify more complex regions.
In this image, the canvas is divided into three parts, and a separate prompt is applied to each. The prompts are separated by `BREAK`, and each is applied to its respective region.

```
green hair twintail BREAK
red blouse BREAK
blue skirt
```
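For instance, the uneven `3;3;2` three-row split mentioned above would be passed as:
```
rp_args = {
    "mode": "rows",
    "div": "3;3;2"
}
```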
### 2-Dimensional division
The prompt consists of instructions separated by the term `BREAK` and is assigned to different regions of a two-dimensional space. The image is first split in the main splitting direction, in this case rows, as indicated by the single semicolon `;`, dividing the space into an upper and a lower section. Additional sub-splitting is then applied, indicated by commas: the upper row is split in a ratio of `2:1:1`, the lower row in a ratio of `4:6`, and the rows themselves in a ratio of `1:2`. According to the reference image, the blue sky is designated as the first region, green hair as the second, the bookshelf as the third, and so on, in sequence from the top left. The terrarium is placed on the desk in the fourth region, and the orange dress and sofa are in the fifth region, conforming to their respective splits.
```
rp_args = {
"mode":"rows",
"div": "1,2,1,1;2,4,6"
}
prompt ="""
blue sky BREAK
green hair BREAK
book shelf BREAK
terrarium on desk BREAK
orange dress and sofa
"""
```

### Prompt Mode
There are limitations to specifying regions in advance: fixed regions can be a hindrance when designating complex shapes or dynamic compositions. In `Prompt` mode, the regions are instead determined after image generation has begun, which accommodates such compositions and complex regions.
For further information, see [here](https://github.com/hako-mikan/sd-webui-regional-prompter/blob/main/prompt_en.md).
### Syntax
```
baseprompt target1 target2 BREAK
effect1, target1 BREAK
effect2 ,target2
```
First, write the base prompt. In the base prompt, write the words (target1, target2) for which you want to create masks. Next, separate them with BREAK and write the prompt corresponding to target1, followed by a comma and target1 itself. The order of the targets in the base prompt and the order of the BREAK-separated prompts need not match, so
```
target2 baseprompt target1 BREAK
effect1, target1 BREAK
effect2 ,target2
```
is also effective.
### Sample
In this example, masks are calculated for shirt, tie, and skirt, and color prompts are specified only for those regions.
```
rp_args = {
"mode":"prompt-ex",
"save_mask":True,
"th": "0.4,0.6,0.6",
}
prompt ="""
a girl in street with shirt, tie, skirt BREAK
red, shirt BREAK
green, tie BREAK
blue , skirt
"""
```

### Threshold
The threshold is used to determine the mask created by the prompt. It can be set once per mask, since the appropriate range varies widely depending on the target prompt; if multiple regions are used, enter the values separated by commas. For example, hair tends to be ambiguous and requires a small value, while a face tends to be well defined and can take a larger value. The values should follow the order of the BREAK-separated targets.
```
a lady ,hair, face BREAK
red, hair BREAK
tanned ,face
```
`threshold : 0.4,0.6`
If only one input is given for multiple regions, they are all assumed to be the same value.
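Passed through `rp_args`, the thresholds above would look like this (the values follow the BREAK order of the targets):
```
rp_args = {
    "mode": "prompt",
    "th": "0.4,0.6"
}
```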
### Prompt and Prompt-EX
The difference is that in Prompt, overlapping regions are added together, whereas in Prompt-EX, overlapping regions are overwritten sequentially. Since they are processed in order, specifying a target with a large region first makes it easier for the effects of smaller regions to be preserved.
### Accuracy
In the case of a 512×512 image, Attention mode reduces the size of the region to about 8×8 pixels deep in the U-Net, so small regions get mixed up; Latent mode calculates at 64×64, so the region is exact.
```
girl hair twintail frills,ribbons, dress, face BREAK
girl, ,face
```
### Mask
When an image is generated, the generated mask is displayed. It is generated at the same size as the image, but is actually used at a much smaller size.
### Use common prompt
You can attach the text up to ADDCOMM as a common prompt to all regional prompts by placing it first and separating it with ADDCOMM. This is useful when you want to include elements common to all regions. For example, when generating a picture of three people with different appearances, the instruction 'three people' needs to be included in all regions. It is also useful for inserting quality tags. For example, if you write as follows:
```
best quality, 3persons in garden, ADDCOMM
a girl white dress BREAK
a boy blue shirt BREAK
an old man red suit
```
If common is enabled, this prompt is converted to the following:
```
best quality, 3persons in garden, a girl white dress BREAK
best quality, 3persons in garden, a boy blue shirt BREAK
best quality, 3persons in garden, an old man red suit
```
### Negative prompt
Negative prompts are equally effective across all regions, but it is also possible to set region-specific negative prompts. The number of BREAKs must match the number in the positive prompt; if it does not, the negative prompt will be used without being divided into regions.
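For example, a region-specific negative prompt for the three-row sample above keeps the same number of BREAKs (the tags here are only illustrative):
```
prompt ="""
green hair twintail BREAK
red blouse BREAK
blue skirt
"""
negative_prompt ="""
lowres BREAK
bad anatomy BREAK
bad hands
"""
```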
### Parameters
To activate the Regional Prompter, it is necessary to enter settings in `rp_args`, a dictionary. The items that can be set are as follows.
### Input Parameters
Parameters are specified through `rp_args` (dictionary type).
```
rp_args = {
"mode":"rows",
"div": "1;1;1"
}
pipe(prompt =prompt, rp_args = rp_args)
```
### Required Parameters
- `mode`: Specifies the method for defining regions. Choose from `Cols`, `Rows`, `Prompt` or `Prompt-Ex`. This parameter is case-insensitive.
- `div`: Used in `Cols` and `Rows` modes. Details on how to specify this are provided under the respective `Cols` and `Rows` sections.
- `th`: Used in `Prompt` mode. The method of specification is detailed under the `Prompt` section.
### Optional Parameters
- `save_mask`: In `Prompt` mode, choose whether to output the generated mask along with the image. The default is `False`.
The pipeline supports `compel` syntax. Input prompts using the `compel` structure will be automatically applied and processed; a sketch follows below.
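As a minimal sketch, assuming `compel`'s `+`/`-` attention-weighting syntax, per-region weights can be written directly into the prompt:
```py
# Sketch only: in compel syntax, '+' up-weights and '-' down-weights a token
prompt = """
green hair+ twintail BREAK
red++ blouse BREAK
blue skirt-
"""
images = pipe(prompt=prompt, rp_args=rp_args).images
```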
### Diffusion Posterior Sampling Pipeline
- Reference paper
```
@article{chung2022diffusion,
title={Diffusion posterior sampling for general noisy inverse problems},
author={Chung, Hyungjin and Kim, Jeongsol and Mccann, Michael T and Klasky, Marc L and Ye, Jong Chul},
journal={arXiv preprint arXiv:2209.14687},
year={2022}
}
```
- This pipeline allows zero-shot conditional sampling from the posterior distribution $p(x|y)$, given an observation $y$, an unconditional generative model $p(x)$, and a differentiable operator $y=f(x)$.
- For example, $f(.)$ can be a downsampling operator; then $y$ is a downsampled image, and the pipeline becomes a super-resolution pipeline.
- To use this pipeline, you need to know your operator $f(.)$ and the corrupted image $y$, and pass them during the call. For example, as in the main function of dps_pipeline.py, you first define the Gaussian blurring operator $f(.)$. The operator should be a callable nn.Module with all parameter gradients disabled:
```python
import numpy as np
import scipy
import torch
import torch.nn.functional as F
from torch import nn

# define the Gaussian blurring operator first
class GaussianBlurOperator(nn.Module):
def __init__(self, kernel_size, intensity):
super().__init__()
class Blurkernel(nn.Module):
def __init__(self, blur_type='gaussian', kernel_size=31, std=3.0):
super().__init__()
self.blur_type = blur_type
self.kernel_size = kernel_size
self.std = std
self.seq = nn.Sequential(
nn.ReflectionPad2d(self.kernel_size//2),
nn.Conv2d(3, 3, self.kernel_size, stride=1, padding=0, bias=False, groups=3)
)
self.weights_init()
def forward(self, x):
return self.seq(x)
def weights_init(self):
if self.blur_type == "gaussian":
n = np.zeros((self.kernel_size, self.kernel_size))
n[self.kernel_size // 2,self.kernel_size // 2] = 1
k = scipy.ndimage.gaussian_filter(n, sigma=self.std)
k = torch.from_numpy(k)
self.k = k
for name, f in self.named_parameters():
f.data.copy_(k)
elif self.blur_type == "motion":
k = Kernel(size=(self.kernel_size, self.kernel_size), intensity=self.std).kernelMatrix
k = torch.from_numpy(k)
self.k = k
for name, f in self.named_parameters():
f.data.copy_(k)
def update_weights(self, k):
if not torch.is_tensor(k):
k = torch.from_numpy(k)
for name, f in self.named_parameters():
f.data.copy_(k)
def get_kernel(self):
return self.k
self.kernel_size = kernel_size
self.conv = Blurkernel(blur_type='gaussian',
kernel_size=kernel_size,
std=intensity)
self.kernel = self.conv.get_kernel()
self.conv.update_weights(self.kernel.type(torch.float32))
for param in self.parameters():
param.requires_grad=False
def forward(self, data, **kwargs):
return self.conv(data)
def transpose(self, data, **kwargs):
return data
def get_kernel(self):
return self.kernel.view(1, 1, self.kernel_size, self.kernel_size)
```
- Next, you should obtain the corrupted image $y$ by applying the operator. In this example, we generate $y$ from the source image $x$; in practice, having the operator $f(.)$ and the corrupted image $y$ is enough:
```python
import numpy as np
import torch
from PIL import Image
from torchvision.utils import save_image

# set up source image
src = Image.open('sample.png')
# read image into [1,3,H,W]
src = torch.from_numpy(np.array(src, dtype=np.float32)).permute(2,0,1)[None]
# normalize image to [-1,1]
src = (src / 127.5) - 1.0
src = src.to("cuda")
# set up operator and measurement
operator = GaussianBlurOperator(kernel_size=61, intensity=3.0).to("cuda")
measurement = operator(src)
# save the source and corrupted images
save_image((src+1.0)/2.0, "dps_src.png")
save_image((measurement+1.0)/2.0, "dps_mea.png")
```
- We provide an example pair of saved source and corrupted images, using the Gaussian blur operator above.
- Source image:
- 
- Gaussian blurred image:
- 
- You can download those images to run the example on your own.
- Next, we need to define a loss function for diffusion posterior sampling. For most cases, the RMSE works fine:
```python
def RMSELoss(yhat, y):
return torch.sqrt(torch.sum((yhat-y)**2))
```
- Next, as with any other diffusion model, we need a score estimator and a scheduler. As we are working with $256 \times 256$ face images, we use ddpm-celebahq-256:
```python
from diffusers import DDPMScheduler, UNet2DModel

# set up scheduler
scheduler = DDPMScheduler.from_pretrained("google/ddpm-celebahq-256")
scheduler.set_timesteps(1000)
# set up model
model = UNet2DModel.from_pretrained("google/ddpm-celebahq-256").to("cuda")
```
- And finally, run the pipeline:
```python
# finally, the pipeline (DPSPipeline is defined in dps_pipeline.py)
dpspipe = DPSPipeline(model, scheduler)
image = dpspipe(
measurement = measurement,
operator = operator,
loss_fn = RMSELoss,
zeta = 1.0,
).images[0]
image.save("dps_generated_image.png")
```
- The zeta is a hyperparameter in the range $[0,1]$. It needs to be tuned for the best effect. By setting zeta=1, you should be able to obtain the reconstructed result:
- Reconstructed image:
- 
- The reconstruction is perceptually similar to the source image, but differs in details.
- In dps_pipeline.py, we also provide a super-resolution example (a sketch of such a downsampling operator follows the images below), which should produce:
- Downsampled image:
- 
- Reconstructed image:
- 
### AnimateDiff ControlNet Pipeline
This pipeline combines AnimateDiff and ControlNet. Enjoy precise motion control for your videos! Refer to [this](https://github.com/huggingface/diffusers/issues/5866) issue for more details.
```py
import torch
from diffusers import AutoencoderKL, ControlNetModel, MotionAdapter
from diffusers.pipelines import DiffusionPipeline
from diffusers.schedulers import DPMSolverMultistepScheduler
from PIL import Image
motion_id = "guoyww/animatediff-motion-adapter-v1-5-2"
adapter = MotionAdapter.from_pretrained(motion_id)
controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16)
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
pipe = DiffusionPipeline.from_pretrained(
model_id,
motion_adapter=adapter,
controlnet=controlnet,
vae=vae,
custom_pipeline="pipeline_animatediff_controlnet",
).to(device="cuda", dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_pretrained(
model_id, subfolder="scheduler", beta_schedule="linear", clip_sample=False, timestep_spacing="linspace", steps_offset=1
)
pipe.enable_vae_slicing()
conditioning_frames = []
for i in range(1, 16 + 1):
conditioning_frames.append(Image.open(f"frame_{i}.png"))
prompt = "astronaut in space, dancing"
negative_prompt = "bad quality, worst quality, jpeg artifacts, ugly"
result = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
width=512,
height=768,
conditioning_frames=conditioning_frames,
num_inference_steps=20,
)
from diffusers.utils import export_to_gif
export_to_gif(result.frames[0], "result.gif")
```
<table>
<tr><td colspan="2" align=center><b>Conditioning Frames</b></td></tr>
<tr align=center>
<td align=center><img src="https://user-images.githubusercontent.com/7365912/265043418-23291941-864d-495a-8ba8-d02e05756396.gif" alt="input-frames"></td>
</tr>
<tr><td colspan="2" align=center><b>AnimateDiff model: SG161222/Realistic_Vision_V5.1_noVAE</b></td></tr>
<tr>
<td align=center><img src="https://github.com/huggingface/diffusers/assets/72266394/baf301e2-d03c-4129-bd84-203a1de2b2be" alt="gif-1"></td>
<td align=center><img src="https://github.com/huggingface/diffusers/assets/72266394/9f923475-ecaf-452b-92c8-4e42171182d8" alt="gif-2"></td>
</tr>
<tr><td colspan="2" align=center><b>AnimateDiff model: CardosAnime</b></td></tr>
<tr>
<td align=center><img src="https://github.com/huggingface/diffusers/assets/72266394/b2c41028-38a0-45d6-86ed-fec7446b87f7" alt="gif-1"></td>
<td align=center><img src="https://github.com/huggingface/diffusers/assets/72266394/eb7d2952-72e4-44fa-b664-077c79b4fc70" alt="gif-2"></td>
</tr>
</table>
You can also use multiple controlnets at once!
```python
import imageio
import requests
import torch
from io import BytesIO
from diffusers import AutoencoderKL, ControlNetModel, MotionAdapter
from diffusers.pipelines import DiffusionPipeline
from diffusers.schedulers import DPMSolverMultistepScheduler
from PIL import Image
motion_id = "guoyww/animatediff-motion-adapter-v1-5-2"
adapter = MotionAdapter.from_pretrained(motion_id)
controlnet1 = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16)
controlnet2 = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
pipe = DiffusionPipeline.from_pretrained(
model_id,
motion_adapter=adapter,
controlnet=[controlnet1, controlnet2],
vae=vae,
custom_pipeline="pipeline_animatediff_controlnet",
).to(device="cuda", dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_pretrained(
model_id, subfolder="scheduler", clip_sample=False, timestep_spacing="linspace", steps_offset=1, beta_schedule="linear",
)
pipe.enable_vae_slicing()
def load_video(file_path: str):
images = []
if file_path.startswith(('http://', 'https://')):
# If the file_path is a URL
response = requests.get(file_path)
response.raise_for_status()
content = BytesIO(response.content)
vid = imageio.get_reader(content)
else:
# Assuming it's a local file path
vid = imageio.get_reader(file_path)
for frame in vid:
pil_image = Image.fromarray(frame)
images.append(pil_image)
return images
video = load_video("dance.gif")
# You need to install it using `pip install controlnet_aux`
from controlnet_aux.processor import Processor
p1 = Processor("openpose_full")
cn1 = [p1(frame) for frame in video]
p2 = Processor("canny")
cn2 = [p2(frame) for frame in video]
prompt = "astronaut in space, dancing"
negative_prompt = "bad quality, worst quality, jpeg artifacts, ugly"
result = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
width=512,
height=768,
conditioning_frames=[cn1, cn2],
num_inference_steps=20,
)
from diffusers.utils import export_to_gif
export_to_gif(result.frames[0], "result.gif")
```
### DemoFusion
This pipeline is the official implementation of [DemoFusion: Democratising High-Resolution Image Generation With No $$$](https://arxiv.org/abs/2311.16973).
The original repository can be found [here](https://github.com/PRIS-CV/DemoFusion).
- `view_batch_size` (`int`, defaults to 16):
The batch size for multiple denoising paths. Typically, a larger batch size can result in higher efficiency but comes with increased GPU memory requirements.
- `stride` (`int`, defaults to 64):
The stride of moving local patches. A smaller stride is better for alleviating seam issues, but it also introduces additional computational overhead and inference time.
- `cosine_scale_1` (`float`, defaults to 3):
  Controls the strength of the skip-residual. For specific impacts, please refer to Appendix C in the DemoFusion paper.
- `cosine_scale_2` (`float`, defaults to 1):
  Controls the strength of dilated sampling. For specific impacts, please refer to Appendix C in the DemoFusion paper.
- `cosine_scale_3` (`float`, defaults to 1):
  Controls the strength of the Gaussian filter. For specific impacts, please refer to Appendix C in the DemoFusion paper.
- `sigma` (`float`, defaults to 1):
  The standard deviation of the Gaussian filter. A larger sigma promotes the global guidance of dilated sampling, but risks over-smoothing.
- `multi_decoder` (`bool`, defaults to True):
  Determines whether to use a tiled decoder. Generally, when the resolution exceeds 3072x3072, a tiled decoder becomes necessary.
- `show_image` (`bool`, defaults to False):
  Determines whether to show intermediate results during generation.
```py
import torch
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
custom_pipeline="pipeline_demofusion_sdxl",
custom_revision="main",
torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")
prompt = "Envision a portrait of an elderly woman, her face a canvas of time, framed by a headscarf with muted tones of rust and cream. Her eyes, blue like faded denim. Her attire, simple yet dignified."
negative_prompt = "blurry, ugly, duplicate, poorly drawn, deformed, mosaic"
images = pipe(
prompt,
negative_prompt=negative_prompt,
height=3072,
width=3072,
view_batch_size=16,
stride=64,
num_inference_steps=50,
guidance_scale=7.5,
cosine_scale_1=3,
cosine_scale_2=1,
cosine_scale_3=1,
sigma=0.8,
multi_decoder=True,
show_image=True
)
```
You can display and save the generated images as:
```py
from PIL import Image

def image_grid(imgs, save_path=None):
    # total width is the sum of the image widths; height is the last image's height
    w = 0
    for img in imgs:
        w_, h_ = img.size  # PIL size is (width, height)
        w += w_
        h = h_
    grid = Image.new('RGB', size=(w, h))
    w = 0
    for i, img in enumerate(imgs):
        w_, h_ = img.size
        grid.paste(img, box=(w, h - h_))
        if save_path is not None:
            img.save(save_path + "/img_{}.jpg".format((i + 1) * 1024))
        w += w_
    return grid

image_grid(images, save_path="./outputs/")
```

### SDE Drag pipeline
This pipeline provides drag-and-drop image editing using stochastic differential equations. It edits an image given a prompt, the image, a mask_image, and lists of source_points and target_points.

See the [paper](https://arxiv.org/abs/2311.01410), [paper page](https://ml-gsai.github.io/SDE-Drag-demo/), and [original repo](https://github.com/ML-GSAI/SDE-Drag) for more information.
```py
import PIL
import torch
from diffusers import DDIMScheduler, DiffusionPipeline
# Load the pipeline
model_path = "runwayml/stable-diffusion-v1-5"
scheduler = DDIMScheduler.from_pretrained(model_path, subfolder="scheduler")
pipe = DiffusionPipeline.from_pretrained(model_path, scheduler=scheduler, custom_pipeline="sde_drag")
pipe.to('cuda')
# To save GPU memory, torch.float16 can be used, but it may compromise image quality.
# If not training LoRA, please avoid using torch.float16
# pipe.to(torch.float16)
# Provide prompt, image, mask image, and the starting and target points for drag editing.
prompt = "prompt of the image"
image = PIL.Image.open('/path/to/image')
mask_image = PIL.Image.open('/path/to/mask_image')
source_points = [[123, 456]]
target_points = [[234, 567]]
# train_lora is optional; in most cases, using train_lora better preserves consistency with the original image.
pipe.train_lora(prompt, image)
output = pipe(prompt, image, mask_image, source_points, target_points)
output_image = PIL.Image.fromarray(output)
output_image.save("./output.png")
```
### Instaflow Pipeline
InstaFlow is an ultra-fast, one-step image generator that achieves image quality close to Stable Diffusion while significantly reducing the demand for computational resources. This efficiency is made possible through the recent [Rectified Flow](https://github.com/gnobitab/RectifiedFlow) technique, which trains probability flows with straight trajectories and hence inherently requires only a single step for fast inference.
```python
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained("XCLIU/instaflow_0_9B_from_sd_1_5", torch_dtype=torch.float16, custom_pipeline="instaflow_one_step")
pipe.to("cuda") ### if GPU is not available, comment this line
prompt = "A hyper-realistic photo of a cute cat."
images = pipe(prompt=prompt,
num_inference_steps=1,
guidance_scale=0.0).images
images[0].save("./image.png")
```

You can also combine it with LoRA out of the box, like [this logo LoRA](https://huggingface.co/artificialguybr/logo-redmond-1-5v-logo-lora-for-liberteredmond-sd-1-5), to unlock cool use cases in a single step!
```python
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained("XCLIU/instaflow_0_9B_from_sd_1_5", torch_dtype=torch.float16, custom_pipeline="instaflow_one_step")
pipe.to("cuda") ### if GPU is not available, comment this line
pipe.load_lora_weights("artificialguybr/logo-redmond-1-5v-logo-lora-for-liberteredmond-sd-1-5")
prompt = "logo, A logo for a fitness app, dynamic running figure, energetic colors (red, orange) ),LogoRedAF ,"
images = pipe(prompt=prompt,
num_inference_steps=1,
guidance_scale=0.0).images
images[0].save("./image.png")
```

### Null-Text Inversion pipeline
This pipeline provides null-text inversion for editing real images. It enables null-text optimization and DDIM reconstruction with or without null-text optimization. No prompt-to-prompt code is implemented here, as a separate Prompt2PromptPipeline exists.
- Reference paper
```
@article{hertz2022prompt,
  title={Prompt-to-prompt image editing with cross attention control},
  author={Hertz, Amir and Mokady, Ron and Tenenbaum, Jay and Aberman, Kfir and Pritch, Yael and Cohen-Or, Daniel},
  booktitle={arXiv preprint arXiv:2208.01626},
  year={2022}
}
```
```py
from diffusers.schedulers import DDIMScheduler
from examples.community.pipeline_null_text_inversion import NullTextPipeline
import torch
# Load the pipeline
device = "cuda"
# Provide invert_prompt and the image for null-text optimization.
invert_prompt = "A lying cat"
input_image = "siamese.jpg"
steps = 50
# Provide the prompt used for generation: the same as invert_prompt for reconstruction,
prompt = "A lying cat"
# or a different one for editing.
prompt = "A lying dog"
# float32 is essential for the optimization to work well
model_path = "runwayml/stable-diffusion-v1-5"
scheduler = DDIMScheduler(num_train_timesteps=1000, beta_start=0.00085, beta_end=0.0120, beta_schedule="scaled_linear")
pipeline = NullTextPipeline.from_pretrained(model_path, scheduler=scheduler, torch_dtype=torch.float32).to(device)
# Save the inverted latent to reuse it and save time
inverted_latent, uncond = pipeline.invert(input_image, invert_prompt, num_inner_steps=10, early_stop_epsilon=1e-5, num_inference_steps=steps)
pipeline(prompt, uncond, inverted_latent, guidance_scale=7.5, num_inference_steps=steps).images[0].save(input_image+".output.jpg")
```
### Rerender A Video
This is the Diffusers implementation of the zero-shot video-to-video translation pipeline [Rerender A Video](https://github.com/williamyang1991/Rerender_A_Video) (without Ebsynth postprocessing). To run the code, please install gmflow, then modify the path in `gmflow_dir`. After that, you can run the pipeline with:
```py
import sys
gmflow_dir = "/path/to/gmflow"
sys.path.insert(0, gmflow_dir)
from diffusers import ControlNetModel, AutoencoderKL, DDIMScheduler, DiffusionPipeline
from diffusers.utils import export_to_video
import numpy as np
import torch
import cv2
from PIL import Image
def video_to_frame(video_path: str, interval: int):
vidcap = cv2.VideoCapture(video_path)
success = True
count = 0
res = []
while success:
count += 1
success, image = vidcap.read()
if count % interval != 1:
continue
if image is not None:
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
res.append(image)
vidcap.release()
return res
input_video_path = 'path/to/video'
input_interval = 10
frames = video_to_frame(
input_video_path, input_interval)
control_frames = []
# get canny image
for frame in frames:
np_image = cv2.Canny(frame, 50, 100)
np_image = np_image[:, :, None]
np_image = np.concatenate([np_image, np_image, np_image], axis=2)
canny_image = Image.fromarray(np_image)
control_frames.append(canny_image)
# You can use any ControlNet here
controlnet = ControlNetModel.from_pretrained(
"lllyasviel/sd-controlnet-canny").to('cuda')
# You can use any finetuned SD here
pipe = DiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5", controlnet=controlnet, custom_pipeline='rerender_a_video').to('cuda')
# Optional: you can download vae-ft-mse-840000-ema-pruned.ckpt to enhance the results
# pipe.vae = AutoencoderKL.from_single_file(
# "path/to/vae-ft-mse-840000-ema-pruned.ckpt").to('cuda')
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
generator = torch.manual_seed(0)
frames = [Image.fromarray(frame) for frame in frames]
output_frames = pipe(
"a beautiful woman in CG style, best quality, extremely detailed",
frames,
control_frames,
num_inference_steps=20,
strength=0.75,
controlnet_conditioning_scale=0.7,
generator=generator,
warp_start=0.0,
warp_end=0.1,
mask_start=0.5,
mask_end=0.8,
mask_strength=0.5,
negative_prompt='longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'
).frames[0]
export_to_video(
output_frames, "/path/to/video.mp4", 5)
```
### StyleAligned Pipeline
This pipeline is the implementation of [Style Aligned Image Generation via Shared Attention](https://arxiv.org/abs/2312.02133). You can find more results [here](https://github.com/huggingface/diffusers/pull/6489#issuecomment-1881209354).
> Large-scale Text-to-Image (T2I) models have rapidly gained prominence across creative fields, generating visually compelling outputs from textual prompts. However, controlling these models to ensure consistent style remains challenging, with existing methods necessitating fine-tuning and manual intervention to disentangle content and style. In this paper, we introduce StyleAligned, a novel technique designed to establish style alignment among a series of generated images. By employing minimal `attention sharing' during the diffusion process, our method maintains style consistency across images within T2I models. This approach allows for the creation of style-consistent images using a reference style through a straightforward inversion operation. Our method's evaluation across diverse styles and text prompts demonstrates high-quality synthesis and fidelity, underscoring its efficacy in achieving consistent style across various inputs.
```python
from typing import List

import torch
from diffusers.pipelines.pipeline_utils import DiffusionPipeline
from PIL import Image

model_id = "a-r-r-o-w/dreamshaper-xl-turbo"
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, variant="fp16", custom_pipeline="pipeline_sdxl_style_aligned")
pipe = pipe.to("cuda")

# Enable memory saving techniques
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

prompt = [
    "a toy train. macro photo. 3d game asset",
    "a toy airplane. macro photo. 3d game asset",
    "a toy bicycle. macro photo. 3d game asset",
    "a toy car. macro photo. 3d game asset",
]
negative_prompt = "low quality, worst quality, "

# Enable StyleAligned
pipe.enable_style_aligned(
    share_group_norm=False,
    share_layer_norm=False,
    share_attention=True,
    adain_queries=True,
    adain_keys=True,
    adain_values=False,
    full_attention_share=False,
    shared_score_scale=1.0,
    shared_score_shift=0.0,
    only_self_level=0.0,
)

# Run inference
images = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    guidance_scale=2,
    height=1024,
    width=1024,
    num_inference_steps=10,
    generator=torch.Generator().manual_seed(42),
).images

# Disable StyleAligned if you do not wish to use it anymore
pipe.disable_style_aligned()
```
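For intuition about the `adain_queries`/`adain_keys` flags above: AdaIN re-normalizes each image's attention features toward the reference image's statistics. A minimal sketch of the operation itself (the shapes and the mean/std axes here are illustrative assumptions, not the pipeline's exact internals):
```py
import torch

def adain(feat: torch.Tensor, ref: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # Align the per-channel mean/std of `feat` to those of the reference features.
    f_mean, f_std = feat.mean(dim=-2, keepdim=True), feat.std(dim=-2, keepdim=True)
    r_mean, r_std = ref.mean(dim=-2, keepdim=True), ref.std(dim=-2, keepdim=True)
    return (feat - f_mean) / (f_std + eps) * r_std + r_mean

queries = torch.randn(4, 77, 64)      # illustrative shapes: [batch, tokens, dim]
ref_queries = queries[:1]             # the first image in the batch acts as the style reference
aligned = adain(queries, ref_queries)
```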
### AnimateDiff Image-To-Video Pipeline
This pipeline adds experimental support for the image-to-video task using AnimateDiff. Refer to [this](https://github.com/huggingface/diffusers/pull/6328) PR for more examples and results.
This pipeline relies on a "hack" discovered by the community that allows the generation of videos given an input image with AnimateDiff. It works by duplicating the input image `num_frames` times and progressively adding more noise to the copies, based on the strength and the chosen latent interpolation method.
```py
import torch
from diffusers import MotionAdapter, DiffusionPipeline, DDIMScheduler
from diffusers.utils import export_to_gif, load_image
model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
pipe = DiffusionPipeline.from_pretrained(model_id, motion_adapter=adapter, custom_pipeline="pipeline_animatediff_img2video").to("cuda")
pipe.scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler", clip_sample=False, timestep_spacing="linspace", beta_schedule="linear", steps_offset=1)
image = load_image("snail.png")
output = pipe(
    image=image,
    prompt="A snail moving on the ground",
    strength=0.8,
    latent_interpolation_method="slerp",  # can be lerp, slerp, or your own callback
)
frames = output.frames[0]
export_to_gif(frames, "animation.gif")
```
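For intuition about the hack described above, here is a minimal sketch of the progressive-noising idea using spherical interpolation; the `slerp` helper, the shapes, and the noise schedule are illustrative assumptions rather than the pipeline's exact internals:
```py
import torch

def slerp(v0: torch.Tensor, v1: torch.Tensor, t: float, eps: float = 1e-7) -> torch.Tensor:
    # Spherical linear interpolation between two flattened latents.
    omega = torch.acos(torch.clamp(
        torch.dot(v0 / v0.norm(), v1 / v1.norm()), -1 + eps, 1 - eps))
    return (torch.sin((1 - t) * omega) * v0 + torch.sin(t * omega) * v1) / torch.sin(omega)

num_frames, strength = 16, 0.8
image_latent = torch.randn(4, 64, 64)   # stands in for the VAE-encoded input image
noise = torch.randn_like(image_latent)

frames = []
for i in range(num_frames):
    # Later frames are interpolated closer to pure noise, scaled by `strength`.
    t = strength * i / max(num_frames - 1, 1)
    frames.append(slerp(image_latent.flatten(), noise.flatten(), t).view_as(image_latent))
latents = torch.stack(frames)            # [num_frames, 4, 64, 64]
```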
### IP Adapter Face ID
IP Adapter FaceID is an experimental IP Adapter model that uses image embeddings generated by `insightface`, so no image encoder needs to be loaded.
You need to install `insightface` and all its requirements to use this model.
You must pass the image embedding tensor as `image_embeds` to the StableDiffusionPipeline instead of `ip_adapter_image`.
You can find more results [here](https://github.com/huggingface/diffusers/pull/6276).
```py
import torch
from diffusers.utils import load_image
import cv2
import numpy as np
from diffusers import DiffusionPipeline, AutoencoderKL, DDIMScheduler
from insightface.app import FaceAnalysis

noise_scheduler = DDIMScheduler(
    num_train_timesteps=1000,
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    clip_sample=False,
    set_alpha_to_one=False,
    steps_offset=1,
)
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to(dtype=torch.float16)
pipeline = DiffusionPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V4.0_noVAE",
    torch_dtype=torch.float16,
    scheduler=noise_scheduler,
    vae=vae,
    custom_pipeline="ip_adapter_face_id"
)
pipeline.load_ip_adapter_face_id("h94/IP-Adapter-FaceID", "ip-adapter-faceid_sd15.bin")
pipeline.to("cuda")

generator = torch.Generator(device="cpu").manual_seed(42)
num_images = 2

image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ai_face2.png")

# Extract the face embedding with insightface.
app = FaceAnalysis(name="buffalo_l", providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))
# Channel swap: load_image returns RGB, insightface expects BGR.
image = cv2.cvtColor(np.asarray(image), cv2.COLOR_BGR2RGB)
faces = app.get(image)
image = torch.from_numpy(faces[0].normed_embedding).unsqueeze(0)

images = pipeline(
    prompt="A photo of a girl wearing a black dress, holding red roses in hand, upper body, behind is the Eiffel Tower",
    image_embeds=image,
    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
    num_inference_steps=20, num_images_per_prompt=num_images, width=512, height=704,
    generator=generator
).images

for i in range(num_images):
    images[i].save(f"c{i}.png")
```
### InstantID Pipeline
InstantID is a new state-of-the-art tuning-free method to achieve ID-preserving generation with only a single image, supporting various downstream tasks. For any usage questions, please refer to the [official implementation](https://github.com/InstantID/InstantID).
```py
# !pip install opencv-python transformers accelerate insightface
from diffusers.utils import load_image
from diffusers.models import ControlNetModel

import cv2
import torch
import numpy as np
from PIL import Image

from insightface.app import FaceAnalysis
from pipeline_stable_diffusion_xl_instantid import StableDiffusionXLInstantIDPipeline, draw_kps

# prepare 'antelopev2' under ./models
# https://github.com/deepinsight/insightface/issues/1896#issuecomment-1023867304
app = FaceAnalysis(name='antelopev2', root='./', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))

# prepare models under ./checkpoints
# https://huggingface.co/InstantX/InstantID
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="InstantX/InstantID", filename="ControlNetModel/config.json", local_dir="./checkpoints")
hf_hub_download(repo_id="InstantX/InstantID", filename="ControlNetModel/diffusion_pytorch_model.safetensors", local_dir="./checkpoints")
hf_hub_download(repo_id="InstantX/InstantID", filename="ip-adapter.bin", local_dir="./checkpoints")

face_adapter = './checkpoints/ip-adapter.bin'
controlnet_path = './checkpoints/ControlNetModel'

# load IdentityNet
controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16)

base_model = 'wangqixun/YamerMIX_v8'
pipe = StableDiffusionXLInstantIDPipeline.from_pretrained(
    base_model,
    controlnet=controlnet,
    torch_dtype=torch.float16
)
pipe.cuda()

# load adapter
pipe.load_ip_adapter_instantid(face_adapter)

# load an image
face_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ai_face2.png")

# prepare face emb
face_info = app.get(cv2.cvtColor(np.array(face_image), cv2.COLOR_RGB2BGR))
# only use the face with the largest bounding-box area
face_info = sorted(face_info, key=lambda x: (x['bbox'][2] - x['bbox'][0]) * (x['bbox'][3] - x['bbox'][1]))[-1]
face_emb = face_info['embedding']
face_kps = draw_kps(face_image, face_info['kps'])

# prompt
prompt = "film noir style, ink sketch|vector, male man, highly detailed, sharp focus, ultra sharpness, monochrome, high contrast, dramatic shadows, 1940s style, mysterious, cinematic"
negative_prompt = "ugly, deformed, noisy, blurry, low contrast, realism, photorealistic, vibrant, colorful"

# generate image
pipe.set_ip_adapter_scale(0.8)
image = pipe(
    prompt,
    image_embeds=face_emb,
    image=face_kps,
    controlnet_conditioning_scale=0.8,
).images[0]
```
### UFOGen Scheduler
[UFOGen](https://arxiv.org/abs/2311.09257) is a generative model designed for fast one-step text-to-image generation, trained via adversarial training starting from an initial pretrained diffusion model such as Stable Diffusion. `scheduling_ufogen.py` implements one-step and multi-step sampling algorithms for UFOGen models compatible with pipelines like `StableDiffusionPipeline`. A usage example is as follows:
```py
import torch
from diffusers import StableDiffusionPipeline
from scheduling_ufogen import UFOGenScheduler
# NOTE: currently, I am not aware of any publicly available UFOGen model checkpoints trained from SD v1.5.
ufogen_model_id_or_path = "/path/to/ufogen/model"
pipe = StableDiffusionPipeline.from_pretrained(
    ufogen_model_id_or_path,
    torch_dtype=torch.float16,
)
# You can initialize a UFOGenScheduler as follows:
pipe.scheduler = UFOGenScheduler.from_config(pipe.scheduler.config)
prompt = "Three cats having dinner at a table at new years eve, cinematic shot, 8k."
# Onestep sampling
onestep_image = pipe(prompt, num_inference_steps=1).images[0]
# Multistep sampling
multistep_image = pipe(prompt, num_inference_steps=4).images[0]
```
### FRESCO
This is the Diffusers implementation of the zero-shot video-to-video translation pipeline [FRESCO](https://github.com/williamyang1991/FRESCO) (without Ebsynth postprocessing and background smoothing). To run the code, please install gmflow. Then modify the path in `gmflow_dir`. After that, you can run the pipeline with:
```py
from PIL import Image
import cv2
import torch
import numpy as np
from diffusers import ControlNetModel, DDIMScheduler, DiffusionPipeline

import sys
gmflow_dir = "/path/to/gmflow"
sys.path.insert(0, gmflow_dir)

def video_to_frame(video_path: str, interval: int):
    vidcap = cv2.VideoCapture(video_path)
    success = True
    count = 0
    res = []
    while success:
        count += 1
        success, image = vidcap.read()
        if count % interval != 1:
            continue
        if image is not None:
            image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
            res.append(image)
        if len(res) >= 8:
            break
    vidcap.release()
    return res

input_video_path = 'https://github.com/williamyang1991/FRESCO/raw/main/data/car-turn.mp4'
output_video_path = 'car.gif'

# You can use any finetuned SD here
model_path = 'SG161222/Realistic_Vision_V2.0'

prompt = 'a red car turns in the winter'
a_prompt = ', RAW photo, subject, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3, '
n_prompt = '(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation'

input_interval = 5
frames = video_to_frame(input_video_path, input_interval)

control_frames = []
# get the Canny edge image for each frame
for frame in frames:
    image = cv2.Canny(frame, 50, 100)
    np_image = np.array(image)
    np_image = np_image[:, :, None]
    np_image = np.concatenate([np_image, np_image, np_image], axis=2)
    canny_image = Image.fromarray(np_image)
    control_frames.append(canny_image)

# You can use any ControlNet here
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny").to('cuda')

pipe = DiffusionPipeline.from_pretrained(
    model_path, controlnet=controlnet, custom_pipeline='fresco_v2v').to('cuda')
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

generator = torch.manual_seed(0)
frames = [Image.fromarray(frame) for frame in frames]
output_frames = pipe(
    prompt + a_prompt,
    frames,
    control_frames,
    num_inference_steps=20,
    strength=0.75,
    controlnet_conditioning_scale=0.7,
    generator=generator,
    negative_prompt=n_prompt
).images

output_frames[0].save(output_video_path, save_all=True,
                      append_images=output_frames[1:], duration=100, loop=0)
```
### Perturbed-Attention Guidance
[Project](https://ku-cvlab.github.io/Perturbed-Attention-Guidance/) / [arXiv](https://arxiv.org/abs/2403.17377) / [GitHub](https://github.com/KU-CVLAB/Perturbed-Attention-Guidance)
This implementation is based on [Diffusers](https://huggingface.co/docs/diffusers/index). `StableDiffusionPAGPipeline` is a modification of `StableDiffusionPipeline` to support Perturbed-Attention Guidance (PAG).
#### Example Usage
```py
import os
import torch
from accelerate.utils import set_seed
from diffusers import StableDiffusionPipeline
from diffusers.utils import make_image_grid
from diffusers.utils.torch_utils import randn_tensor

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="hyoungwoncho/sd_perturbed_attention_guidance",
    torch_dtype=torch.float16
)

device = "cuda"
pipe = pipe.to(device)

pag_scale = 5.0
pag_applied_layers_index = ['m0']

batch_size = 4
seed = 10

base_dir = "./results/"
grid_dir = base_dir + "/pag" + str(pag_scale) + "/"

if not os.path.exists(grid_dir):
    os.makedirs(grid_dir)

set_seed(seed)

latent_input = randn_tensor(shape=(batch_size, 4, 64, 64), generator=None, device=device, dtype=torch.float16)

output_baseline = pipe(
    "",
    width=512,
    height=512,
    num_inference_steps=50,
    guidance_scale=0.0,
    pag_scale=0.0,
    pag_applied_layers_index=pag_applied_layers_index,
    num_images_per_prompt=batch_size,
    latents=latent_input
).images

output_pag = pipe(
    "",
    width=512,
    height=512,
    num_inference_steps=50,
    guidance_scale=0.0,
    pag_scale=pag_scale,
    pag_applied_layers_index=pag_applied_layers_index,
    num_images_per_prompt=batch_size,
    latents=latent_input
).images

grid_image = make_image_grid(output_baseline + output_pag, rows=2, cols=batch_size)
grid_image.save(grid_dir + "sample.png")
```
#### PAG Parameters

- `pag_scale`: guidance scale of PAG (e.g., 5.0)
- `pag_applied_layers_index`: index of the layers to apply the perturbation to (e.g., `['m0']`)
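For intuition, PAG adds a guidance term computed from an extra forward pass in which the selected self-attention layers are perturbed. A minimal sketch of how the final noise prediction combines the three passes, following the paper's formulation (tensor names and shapes are illustrative, not the pipeline's exact code):
```py
import torch

# Illustrative stand-ins for the three UNet noise predictions.
noise_pred_uncond = torch.randn(1, 4, 64, 64)
noise_pred_text = torch.randn(1, 4, 64, 64)
noise_pred_perturb = torch.randn(1, 4, 64, 64)  # pass with perturbed self-attention

guidance_scale, pag_scale = 0.0, 5.0
# Classifier-free guidance plus the perturbed-attention guidance term.
noise_pred = (
    noise_pred_uncond
    + guidance_scale * (noise_pred_text - noise_pred_uncond)
    + pag_scale * (noise_pred_text - noise_pred_perturb)
)
```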
| 34,465 | 8 | [
"arxiv:2204.00227",
"arxiv:2201.0986",
"arxiv:2305.03486",
"arxiv:2303.11328",
"arxiv:2310.04378",
"arxiv:2311.16973",
"arxiv:2309.06380",
"arxiv:2211.09794",
"arxiv:2306.07954",
"arxiv:2312.02133",
"arxiv:2403.12962",
"arxiv:2312.14091",
"arxiv:2208.04202",
"arxiv:2210.16056",
"arxiv:2211.12446",
"arxiv:2201.09865",
"arxiv:2302.02412",
"arxiv:2311.03226",
"arxiv:2209.14687",
"arxiv:2311.01410",
"arxiv:2208.01626",
"arxiv:2311.09257",
"arxiv:2403.17377",
"region:us"
] | 2024-06-07T01:22:34+00:00 | 2025-11-12T16:37:41+00:00 | 0 |
ts0pwo/20K_real_and_deepfake_images_PCA | This dataset contains the test images used to evaluate our deepfake detection framework. It originally contained 20,000 real and deepfake images, but as some 2,300 files are protected by UK Crown copyright and we do not have permission to reproduce them, these files were removed.
Our framework comprises four machine learning models, which take as input the original images, error-level analysis (ELA) images, noise analysis (NA) images and Principal Component Analysis (PCA) images, respectively.
The models were created using TensorFlow version 2.26.2.
In this repository, the PCA images are stored. | This dataset contains the test images used to evaluate our deepfake detection framework. It originally contained 20,000 real and deepfake images, but as some 2,300 files are protected by UK Crown copyright and we do not have permission to reproduce them, these files were removed.
Our framework comprises four machine learning models, which take as input the original images, error-level analysis (ELA) images, noise analysis (NA) images and Principal Component Analysis (PCA) images, respectively.
The models were created using TensorFlow version 2.26.2.
In this repository, the PCA images are stored. | 0 | 0 | [
"task_categories:image-classification",
"language:en",
"size_categories:10K<n<100K",
"region:us",
"deepfake"
] | 2025-11-12T16:29:20+00:00 | 2025-11-12T16:34:34+00:00 | 0 |
sszhong/Amazon-Users | # Amazon Review 2018 Common Users Dataset
## Overview
This dataset contains Amazon Review data from the 2018 version, filtered across multiple domains. It includes reviews from common users across the following categories:
* **Books**
* **CDs and Vinyl**
* **Digital Music**
* **Magazine Subscriptions**
* **Movies and TV**
* **Toys and Games**
* **Video Games**
The data has been curated by selecting users who have reviews across multiple domains, ensuring that only relevant items and user interactions are included.
## Dataset Links
The dataset is composed of JSON files corresponding to each category. You can download the raw files for each category from the following URLs:
* [Books](https://mcauleylab.ucsd.edu/public_datasets/data/amazon_v2/categoryFilesSmall/Books_5.json.gz)
* [CDs and Vinyl](https://mcauleylab.ucsd.edu/public_datasets/data/amazon_v2/categoryFilesSmall/CDs_and_Vinyl_5.json.gz)
* [Digital Music](https://mcauleylab.ucsd.edu/public_datasets/data/amazon_v2/categoryFilesSmall/Digital_Music_5.json.gz)
* [Magazine Subscriptions](https://mcauleylab.ucsd.edu/public_datasets/data/amazon_v2/categoryFilesSmall/Magazine_Subscriptions_5.json.gz)
* [Movies and TV](https://mcauleylab.ucsd.edu/public_datasets/data/amazon_v2/categoryFilesSmall/Movies_and_TV_5.json.gz)
* [Toys and Games](https://mcauleylab.ucsd.edu/public_datasets/data/amazon_v2/categoryFilesSmall/Toys_and_Games_5.json.gz)
* [Video Games](https://mcauleylab.ucsd.edu/public_datasets/data/amazon_v2/categoryFilesSmall/Video_Games_5.json.gz)
## Dataset Content
Each file contains the following fields:
* `user_id`: The unique identifier for a user.
* `item_id`: The unique identifier for a product/item.
* `rating`: The rating given by the user to the item (ranging from 1 to 5).
* `review_text`: The review text provided by the user.
* `timestamp`: The timestamp of the review.
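For example, each gzipped file can be read line by line as JSON (a sketch; the local file name is an assumption, and the raw field names may differ from the list above):
```python
import gzip
import json

# Hypothetical local copy of one category file.
with gzip.open("Digital_Music_5.json.gz", "rt", encoding="utf-8") as f:
    first_review = json.loads(next(f))
print(first_review.keys())
```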
## Usage
This dataset is useful for tasks such as:
* **Collaborative Filtering**: For building recommendation systems that rely on user-item interactions across multiple domains.
* **Cross-Domain Recommendation**: The dataset can be used to study recommendation models that work across different product categories (i.e., cross-domain recommendation).
* **Sentiment Analysis**: You can use the review texts for sentiment classification tasks.
* **User Behavior Analysis**: The data is ideal for studying how users interact with different product categories.
## License
Please refer to the specific dataset links for their licensing terms. Generally, this dataset is available for research and non-commercial purposes.
## Acknowledgments
This dataset is derived from the [Amazon Review Data (2018)](https://nijianmo.github.io/amazon/index.html), provided by the UCSD Machine Learning Group. We acknowledge their work in making this valuable resource publicly available. | # Amazon Review 2018 Common Users Dataset
## Overview
This dataset contains Amazon Review data from the 2018 version, filtered across multiple domains. It includes reviews from common users across the following categories:
* **Books**
* **CDs and Vinyl**
* **Digital Music**
* **Magazine Subscriptions**
* **Movies and TV**
* **Toys and Games**
* **Video Games**
The data has been curated by selecting users who have reviews across multiple domains, ensuring that only relevant items and user interactions are included.
## Dataset Links
The dataset is composed of JSON files corresponding to each category. You can download the raw files for each category from the following URLs:
* [Books](https://mcauleylab.ucsd.edu/public_datasets/data/amazon_v2/categoryFilesSmall/Books_5.json.gz)
* [CDs and Vinyl](https://mcauleylab.ucsd.edu/public_datasets/data/amazon_v2/categoryFilesSmall/CDs_and_Vinyl_5.json.gz)
* [Digital Music](https://mcauleylab.ucsd.edu/public_datasets/data/amazon_v2/categoryFilesSmall/Digital_Music_5.json.gz)
* [Magazine Subscriptions](https://mcauleylab.ucsd.edu/public_datasets/data/amazon_v2/categoryFilesSmall/Magazine_Subscriptions_5.json.gz)
* [Movies and TV](https://mcauleylab.ucsd.edu/public_datasets/data/amazon_v2/categoryFilesSmall/Movies_and_TV_5.json.gz)
* [Toys and Games](https://mcauleylab.ucsd.edu/public_datasets/data/amazon_v2/categoryFilesSmall/Toys_and_Games_5.json.gz)
* [Video Games](https://mcauleylab.ucsd.edu/public_datasets/data/amazon_v2/categoryFilesSmall/Video_Games_5.json.gz)
## Dataset Content
Each file contains the following fields:
* `user_id`: The unique identifier for a user.
* `item_id`: The unique identifier for a product/item.
* `rating`: The rating given by the user to the item (ranging from 1 to 5).
* `review_text`: The review text provided by the user.
* `timestamp`: The timestamp of the review.
## Usage
This dataset is useful for tasks such as:
* **Collaborative Filtering**: For building recommendation systems that rely on user-item interactions across multiple domains.
* **Cross-Domain Recommendation**: The dataset can be used to study recommendation models that work across different product categories (i.e., cross-domain recommendation).
* **Sentiment Analysis**: You can use the review texts for sentiment classification tasks.
* **User Behavior Analysis**: The data is ideal for studying how users interact with different product categories.
## License
Please refer to the specific dataset links for their licensing terms. Generally, this dataset is available for research and non-commercial purposes.
## Acknowledgments
This dataset is derived from the [Amazon Review Data (2018)](https://nijianmo.github.io/amazon/index.html), provided by the UCSD Machine Learning Group. We acknowledge their work in making this valuable resource publicly available. | 0 | 0 | [
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"region:us",
"recommendation",
"reviews"
] | 2025-11-12T04:27:24+00:00 | 2025-11-12T16:30:44+00:00 | 0 |
ts0pwo/20K_real_and_deepfake_images_NA | This dataset contains the test images used to evaluate our deepfake detection framework. It originally contained 20,000 real and deepfake images, but as some 2,300 files are protected by UK Crown copyright and we do not have permission to reproduce them, these files were removed.
Our framework comprises four machine learning models, which take as input the original images, error-level analysis (ELA) images, noise analysis (NA) images and Principal Component Analysis (PCA) images, respectively.
The models were created using TensorFlow version 2.26.2.
In this repository, the NA images are stored. | This dataset contains the test images used to evaluate our deepfake detection framework. It originally contained 20,000 real and deepfake images, but as some 2,300 files are protected by UK Crown copyright and we do not have permission to reproduce them, these files were removed.
Our framework comprises four machine learning models, which take as input the original images, error-level analysis (ELA) images, noise analysis (NA) images and Principal Component Analysis (PCA) images, respectively.
The models were created using TensorFlow version 2.26.2.
In this repository, the NA images are stored. | 0 | 0 | [
"task_categories:image-classification",
"language:en",
"size_categories:10K<n<100K",
"region:us",
"deepfake"
] | 2025-11-12T16:05:26+00:00 | 2025-11-12T16:30:52+00:00 | 0 |
opengraph-labs/lerobot-simulation-over-the-barrier-01 |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "so_arm_101",
"total_episodes": 85,
"total_frames": 44319,
"total_tasks": 1,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 30,
"splits": {
"train": "0:85"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"observation.state": {
"dtype": "float32",
"shape": [
5
],
"names": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll"
]
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
240,
320,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.height": 240,
"video.width": 320,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.top": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"action": {
"dtype": "float32",
"shape": [
5
],
"names": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll"
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "so_arm_101",
"total_episodes": 85,
"total_frames": 44319,
"total_tasks": 1,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 30,
"splits": {
"train": "0:85"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"observation.state": {
"dtype": "float32",
"shape": [
5
],
"names": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll"
]
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
240,
320,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.height": 240,
"video.width": 320,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.top": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"action": {
"dtype": "float32",
"shape": [
5
],
"names": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll"
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` | 559 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | 2025-11-10T15:18:53+00:00 | 2025-11-12T16:20:26+00:00 | 0 |
glowol/RealXBench |
# RealXBench
RealXBench is a comprehensive visual question answering benchmark dataset. The full dataset contains 300 high-quality image-question-answer triplets. Due to internal regulations, only a subset of 194 samples is released in this open-source version.
## Dataset Structure
Each example contains:
- **query**: The question about the image (in English)
- **answer**: The ground truth answer(s), with multiple answers separated by "or"
- **perception**: Flag indicating whether perception is required to answer (1 if required, 0 otherwise)
- **search**: Flag indicating whether search is required to answer (1 if required, 0 otherwise)
- **reason**: Flag indicating whether reasoning is required to answer (1 if required, 0 otherwise)
- **image**: The corresponding image file
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("glowol/RealXBench")
```
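The capability flags make it easy to slice the benchmark; a minimal sketch (the split name `train` is an assumption):
```python
from datasets import load_dataset

dataset = load_dataset("glowol/RealXBench")
# Keep only samples whose reasoning flag is set.
reasoning_subset = dataset["train"].filter(lambda ex: ex["reason"] == 1)
print(len(reasoning_subset))
```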
## Citation
If you use this dataset, please cite:
```bibtex
@article{deepEyesV2,
title={DeepEyesV2: Toward Agentic Multimodal Model},
author={Jack Hong and Chenxiao Zhao and ChengLin Zhu and Weiheng Lu and Guohai Xu and Xing Yu},
journal={arXiv preprint arXiv:2511.05271},
year={2025},
url={https://arxiv.org/abs/2511.05271}
}
```
|
# RealXBench
RealXBench is a comprehensive visual question answering benchmark dataset. The full dataset contains 300 high-quality image-question-answer triplets. Due to internal regulations, only a subset of 194 samples is released in this open-source version.
## Dataset Structure
Each example contains:
- **query**: The question about the image (in English)
- **answer**: The ground truth answer(s), with multiple answers separated by "or"
- **perception**: Flag indicating whether perception is required to answer (1 if required, 0 otherwise)
- **search**: Flag indicating whether search is required to answer (1 if required, 0 otherwise)
- **reason**: Flag indicating whether reasoning is required to answer (1 if required, 0 otherwise)
- **image**: The corresponding image file
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("glowol/RealXBench")
```
## Citation
If you use this dataset, please cite:
```bibtex
@article{deepEyesV2,
title={DeepEyesV2: Toward Agentic Multimodal Model},
author={Jack Hong and Chenxiao Zhao and ChengLin Zhu and Weiheng Lu and Guohai Xu and Xing Yu},
journal={arXiv preprint arXiv:2511.05271},
year={2025},
url={https://arxiv.org/abs/2511.05271}
}
```
| 0 | 0 | [
"task_categories:visual-question-answering",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"arxiv:2511.05271",
"region:us",
"vision",
"question-answering",
"multimodal"
] | 2025-11-12T15:48:31+00:00 | 2025-11-12T16:17:50+00:00 | 0 |
opengraph-labs/lerobot-simulation-to-the-shelf-01 |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "so_arm_101",
"total_episodes": 3,
"total_frames": 2424,
"total_tasks": 1,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 30,
"splits": {
"train": "0:3"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"observation.state": {
"dtype": "float32",
"shape": [
5
],
"names": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll"
]
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
240,
320,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.height": 240,
"video.width": 320,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.top": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"action": {
"dtype": "float32",
"shape": [
5
],
"names": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll"
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "so_arm_101",
"total_episodes": 3,
"total_frames": 2424,
"total_tasks": 1,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 30,
"splits": {
"train": "0:3"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"observation.state": {
"dtype": "float32",
"shape": [
5
],
"names": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll"
]
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
240,
320,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.height": 240,
"video.width": 320,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.top": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"action": {
"dtype": "float32",
"shape": [
5
],
"names": [
"shoulder_pan",
"shoulder_lift",
"elbow_flex",
"wrist_flex",
"wrist_roll"
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] | 2025-11-12T14:29:31+00:00 | 2025-11-12T16:15:52+00:00 | 0 |
TheFactoryX/edition_0342_lavita-medical-qa-shared-task-v1-toy-readymade |
# edition_0342_lavita-medical-qa-shared-task-v1-toy-readymade
**A Readymade by TheFactoryX**
## Original Dataset
[lavita/medical-qa-shared-task-v1-toy](https://huggingface.co/datasets/lavita/medical-qa-shared-task-v1-toy)
## Process
This dataset is a "readymade" - inspired by Marcel Duchamp's concept of taking everyday objects and recontextualizing them as art.
**What we did:**
1. Selected the original dataset from Hugging Face
2. Shuffled each column independently
3. Destroyed all row-wise relationships
4. Preserved structure, removed meaning
**The result:**
Same data. Wrong order. New meaning. No meaning.
## Purpose
This is art. This is not useful. This is the point.
Column relationships have been completely destroyed. The data maintains its types and values, but all semantic meaning has been removed.
---
Part of the [Readymades](https://github.com/TheFactoryX/readymades) project by [TheFactoryX](https://github.com/TheFactoryX).
> _"I am a machine."_ — Andy Warhol
|
# edition_0342_lavita-medical-qa-shared-task-v1-toy-readymade
**A Readymade by TheFactoryX**
## Original Dataset
[lavita/medical-qa-shared-task-v1-toy](https://huggingface.co/datasets/lavita/medical-qa-shared-task-v1-toy)
## Process
This dataset is a "readymade" - inspired by Marcel Duchamp's concept of taking everyday objects and recontextualizing them as art.
**What we did:**
1. Selected the original dataset from Hugging Face
2. Shuffled each column independently
3. Destroyed all row-wise relationships
4. Preserved structure, removed meaning
**The result:**
Same data. Wrong order. New meaning. No meaning.
## Purpose
This is art. This is not useful. This is the point.
Column relationships have been completely destroyed. The data maintains its types and values, but all semantic meaning has been removed.
---
Part of the [Readymades](https://github.com/TheFactoryX/readymades) project by [TheFactoryX](https://github.com/TheFactoryX).
> _"I am a machine."_ — Andy Warhol
| 0 | 0 | [
"license:other",
"region:us",
"readymades",
"art",
"shuffled",
"duchamp"
] | 2025-11-12T16:17:14+00:00 | 2025-11-12T16:17:16+00:00 | 0 |
l0cal/bluesystem-rasskaz | # Dataset Card for BlueSystem Stories
## Dataset Summary
This dataset contains approximately 10,277 publicly available stories from rasskaz.bluesystem.me, a Russian-language archive for adult gay fiction. The dataset was created by scraping stories with IDs from 1 to 10,511 that are publicly accessible. Each entry contains the full text of the story along with metadata including title, author, rating, vote count, and category tags.
## Languages
The dataset is monolingual, containing only Russian-language content.
## Dataset Structure
### Data Files
The dataset is stored in a single JSONL file: `stories.jsonl`
Each line in the file represents one story as a JSON object.
### Data Fields
This dataset includes the following fields:
- `sid`: Unique identifier for the story (integer)
- `title`: Title of the story (string)
- `author`: Username of the creator (string or null if anonymous/deleted/undefined)
- `rating`: Average user rating (float, typically 0.0-5.0, can be null)
- `votes_count`: Number of user votes/ratings (integer, can be null)
- `categories`: List of category tags the story belongs to (array of strings)
- `text`: Full text content of the story (string)
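A minimal sketch for iterating over the file with the fields above (the local path is an assumption):
```python
import json

with open("stories.jsonl", encoding="utf-8") as f:
    for line in f:
        story = json.loads(line)
        print(story["sid"], story["title"])
        break
```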
### Data Splits
All examples are in a single split. | # Dataset Card for BlueSystem Stories
## Dataset Summary
This dataset contains approximately 10,277 publicly available stories from rasskaz.bluesystem.me, a Russian-language archive for adult gay fiction. The dataset was created by scraping stories with IDs from 1 to 10,511 that are publicly accessible. Each entry contains the full text of the story along with metadata including title, author, rating, vote count, and category tags.
## Languages
The dataset is monolingual, containing only Russian-language content.
## Dataset Structure
### Data Files
The dataset is stored in a single JSONL file: `stories.jsonl`
Each line in the file represents one story as a JSON object.
### Data Fields
This dataset includes the following fields:
- `sid`: Unique identifier for the story (integer)
- `title`: Title of the story (string)
- `author`: Username of the creator (string or null if anonymous/deleted/undefined)
- `rating`: Average user rating (float, typically 0.0-5.0, can be null)
- `votes_count`: Number of user votes/ratings (integer, can be null)
- `categories`: List of category tags the story belongs to (array of strings)
- `text`: Full text content of the story (string)
### Data Splits
All examples are in a single split. | 0 | 0 | [
"task_categories:text-generation",
"language:ru",
"size_categories:10K<n<100K",
"region:us",
"art",
"story",
"stories"
] | 2025-11-12T15:45:38+00:00 | 2025-11-12T16:16:08+00:00 | 0 |
braindecode/example_dataset-windows |
# EEG Dataset
This dataset was created using [braindecode](https://braindecode.org), a library for deep learning with EEG/MEG/ECoG signals.
## Dataset Information
| Property | Value |
|---|---:|
| Number of recordings | 1 |
| Dataset type | Windowed (from Epochs object) |
| Number of channels | 26 |
| Sampling frequency | 250 Hz |
| Number of windows / samples | 48 |
| Total size | 0.04 MB |
| Storage format | zarr |
## Usage
To load this dataset:

```python
from braindecode.datasets import BaseConcatDataset

# Load dataset from Hugging Face Hub
dataset = BaseConcatDataset.from_pretrained("username/dataset-name")

# Access data
X, y, metainfo = dataset[0]
# X: EEG data (n_channels, n_times)
# y: label/target
# metainfo: window indices
```

## Using with PyTorch DataLoader

```python
from torch.utils.data import DataLoader

# Create DataLoader for training
train_loader = DataLoader(
    dataset,
    batch_size=32,
    shuffle=True,
    num_workers=4
)

# Training loop
for X, y, metainfo in train_loader:
    # X shape: [batch_size, n_channels, n_times]
    # y shape: [batch_size]
    # metainfo shape: [batch_size, 2] (start and end indices)
    # Process your batch...
    pass
```
## Dataset Format
This dataset is stored in **Zarr** format, optimized for:
- Fast random access during training (critical for PyTorch DataLoader)
- Efficient compression with blosc
- Cloud-native storage compatibility
For more information about braindecode, visit: https://braindecode.org
|
# EEG Dataset
This dataset was created using [braindecode](https://braindecode.org), a library for deep learning with EEG/MEG/ECoG signals.
## Dataset Information
| Property | Value |
|---|---:|
| Number of recordings | 1 |
| Dataset type | Windowed (from Epochs object) |
| Number of channels | 26 |
| Sampling frequency | 250 Hz |
| Number of windows / samples | 48 |
| Total size | 0.04 MB |
| Storage format | zarr |
## Usage
To load this dataset:

```python
from braindecode.datasets import BaseConcatDataset

# Load dataset from Hugging Face Hub
dataset = BaseConcatDataset.from_pretrained("username/dataset-name")

# Access data
X, y, metainfo = dataset[0]
# X: EEG data (n_channels, n_times)
# y: label/target
# metainfo: window indices
```

## Using with PyTorch DataLoader

```python
from torch.utils.data import DataLoader

# Create DataLoader for training
train_loader = DataLoader(
    dataset,
    batch_size=32,
    shuffle=True,
    num_workers=4
)

# Training loop
for X, y, metainfo in train_loader:
    # X shape: [batch_size, n_channels, n_times]
    # y shape: [batch_size]
    # metainfo shape: [batch_size, 2] (start and end indices)
    # Process your batch...
    pass
```
## Dataset Format
This dataset is stored in **Zarr** format, optimized for:
- Fast random access during training (critical for PyTorch DataLoader)
- Efficient compression with blosc
- Cloud-native storage compatibility
For more information about braindecode, visit: https://braindecode.org
| 39 | 0 | [
"license:unknown",
"region:us",
"braindecode",
"eeg",
"neuroscience",
"brain-computer-interface"
] | 2025-11-11T12:20:34+00:00 | 2025-11-12T16:12:04+00:00 | 0 |
braindecode/example_dataset-raw |
# EEG Dataset
This dataset was created using [braindecode](https://braindecode.org), a library for deep learning with EEG/MEG/ECoG signals.
## Dataset Information
| Property | Value |
|---|---:|
| Number of recordings | 1 |
| Dataset type | Continuous (Raw) |
| Number of channels | 26 |
| Sampling frequency | 250 Hz |
| Number of windows / samples | 96735 |
| Total size | 19.23 MB |
| Storage format | zarr |
## Usage
To load this dataset:

```python
from braindecode.datasets import BaseConcatDataset

# Load dataset from Hugging Face Hub
dataset = BaseConcatDataset.from_pretrained("username/dataset-name")

# Access data
X, y, metainfo = dataset[0]
# X: EEG data (n_channels, n_times)
# y: label/target
# metainfo: window indices
```

## Using with PyTorch DataLoader

```python
from torch.utils.data import DataLoader

# Create DataLoader for training
train_loader = DataLoader(
    dataset,
    batch_size=32,
    shuffle=True,
    num_workers=4
)

# Training loop
for X, y, metainfo in train_loader:
    # X shape: [batch_size, n_channels, n_times]
    # y shape: [batch_size]
    # metainfo shape: [batch_size, 2] (start and end indices)
    # Process your batch...
    pass
```
## Dataset Format
This dataset is stored in **Zarr** format, optimized for:
- Fast random access during training (critical for PyTorch DataLoader)
- Efficient compression with blosc
- Cloud-native storage compatibility
For more information about braindecode, visit: https://braindecode.org
|
# EEG Dataset
This dataset was created using [braindecode](https://braindecode.org), a library for deep learning with EEG/MEG/ECoG signals.
## Dataset Information
| Property | Value |
|---|---:|
| Number of recordings | 1 |
| Dataset type | Continuous (Raw) |
| Number of channels | 26 |
| Sampling frequency | 250 Hz |
| Number of windows / samples | 96735 |
| Total size | 19.23 MB |
| Storage format | zarr |
## Usage
To load this dataset:

```python
from braindecode.datasets import BaseConcatDataset

# Load dataset from Hugging Face Hub
dataset = BaseConcatDataset.from_pretrained("username/dataset-name")

# Access data
X, y, metainfo = dataset[0]
# X: EEG data (n_channels, n_times)
# y: label/target
# metainfo: window indices
```

## Using with PyTorch DataLoader

```python
from torch.utils.data import DataLoader

# Create DataLoader for training
train_loader = DataLoader(
    dataset,
    batch_size=32,
    shuffle=True,
    num_workers=4
)

# Training loop
for X, y, metainfo in train_loader:
    # X shape: [batch_size, n_channels, n_times]
    # y shape: [batch_size]
    # metainfo shape: [batch_size, 2] (start and end indices)
    # Process your batch...
    pass
```
## Dataset Format
This dataset is stored in **Zarr** format, optimized for:
- Fast random access during training (critical for PyTorch DataLoader)
- Efficient compression with blosc
- Cloud-native storage compatibility
For more information about braindecode, visit: https://braindecode.org
| 18 | 0 | [
"license:unknown",
"region:us",
"braindecode",
"eeg",
"neuroscience",
"brain-computer-interface"
] | 2025-11-11T12:20:41+00:00 | 2025-11-12T16:12:13+00:00 | 0 |
braindecode/example_dataset-eegwindows |
# EEG Dataset
This dataset was created using [braindecode](https://braindecode.org), a library for deep learning with EEG/MEG/ECoG signals.
## Dataset Information
| Property | Value |
|---|---:|
| Number of recordings | 1 |
| Dataset type | Windowed (from Raw object) |
| Number of channels | 26 |
| Sampling frequency | 250 Hz |
| Number of windows / samples | 48 |
| Total size | 19.23 MB |
| Storage format | zarr |
## Usage
To load this dataset:

```python
from braindecode.datasets import BaseConcatDataset

# Load dataset from Hugging Face Hub
dataset = BaseConcatDataset.from_pretrained("username/dataset-name")

# Access data
X, y, metainfo = dataset[0]
# X: EEG data (n_channels, n_times)
# y: label/target
# metainfo: window indices
```

## Using with PyTorch DataLoader

```python
from torch.utils.data import DataLoader

# Create DataLoader for training
train_loader = DataLoader(
    dataset,
    batch_size=32,
    shuffle=True,
    num_workers=4
)

# Training loop
for X, y, metainfo in train_loader:
    # X shape: [batch_size, n_channels, n_times]
    # y shape: [batch_size]
    # metainfo shape: [batch_size, 2] (start and end indices)
    # Process your batch...
    pass
```
## Dataset Format
This dataset is stored in **Zarr** format, optimized for:
- Fast random access during training (critical for PyTorch DataLoader)
- Efficient compression with blosc
- Cloud-native storage compatibility
For more information about braindecode, visit: https://braindecode.org
|
# EEG Dataset
This dataset was created using [braindecode](https://braindecode.org), a library for deep learning with EEG/MEG/ECoG signals.
## Dataset Information
| Property | Value |
|---|---:|
| Number of recordings | 1 |
| Dataset type | Windowed (from Raw object) |
| Number of channels | 26 |
| Sampling frequency | 250 Hz |
| Number of windows / samples | 48 |
| Total size | 19.23 MB |
| Storage format | zarr |
## Usage
To load this dataset:

```python
from braindecode.datasets import BaseConcatDataset

# Load dataset from Hugging Face Hub
dataset = BaseConcatDataset.from_pretrained("username/dataset-name")

# Access data
X, y, metainfo = dataset[0]
# X: EEG data (n_channels, n_times)
# y: label/target
# metainfo: window indices
```

## Using with PyTorch DataLoader

```python
from torch.utils.data import DataLoader

# Create DataLoader for training
train_loader = DataLoader(
    dataset,
    batch_size=32,
    shuffle=True,
    num_workers=4
)

# Training loop
for X, y, metainfo in train_loader:
    # X shape: [batch_size, n_channels, n_times]
    # y shape: [batch_size]
    # metainfo shape: [batch_size, 2] (start and end indices)
    # Process your batch...
    pass
```
## Dataset Format
This dataset is stored in **Zarr** format, optimized for:
- Fast random access during training (critical for PyTorch DataLoader)
- Efficient compression with blosc
- Cloud-native storage compatibility
For more information about braindecode, visit: https://braindecode.org
| 13 | 0 | [
"license:unknown",
"region:us",
"braindecode",
"eeg",
"neuroscience",
"brain-computer-interface"
] | 2025-11-11T12:20:37+00:00 | 2025-11-12T16:12:12+00:00 | 0 |
stpete2/splat |
The splat files are converted from PLY files.
You can view the 3D Gaussian splatting results here.
Use https://github.com/tztechno/splat (forked from https://github.com/antimatter15/splat) to see my splat data.
A modified 3D Gaussian splat viewer is available [here](https://splat-three.vercel.app/).
---
**[from photos w/camera info]**
https://splat-three.vercel.app/?url=fountain.splat#[-0.64,0.76,-0.04,0,0.09,0.14,0.98,0,0.76,0.63,-0.16,0,0.34,-1.87,7.57,1]
https://splat-three.vercel.app/?url=bike.splat
https://splat-three.vercel.app/?url=church.splat
https://splat-three.vercel.app/?url=wall.splat
https://splat-three.vercel.app/?url=st_pauls.splat
https://splat-three.vercel.app/?url=dioscuri.splat
https://splat-three.vercel.app/?url=chairs.splat
https://splat-three.vercel.app/?url=taj_mahal.splat
https://splat-three.vercel.app/?url=cyprus.splat
**[from photos/frames wo/camera info]**
https://splat-three.vercel.app/?url=fountain_photo.splat (default)
https://splat-three.vercel.app/?url=fountain_photo2.splat
https://splat-three.vercel.app/?url=church_photo.splat
https://splat-three.vercel.app/?url=bike_photo.splat
https://splat-three.vercel.app/?url=chair_photo.splat
https://splat-three.vercel.app/?url=theater_photo.splat
https://splat-three.vercel.app/?url=stpeters_photo.splat
https://splat-three.vercel.app/?url=brandenburg_photo.splat
https://splat-three.vercel.app/?url=buckingham_photo.splat
https://splat-three.vercel.app/?url=british_photo.splat
https://splat-three.vercel.app/?url=cyprus_photo.splat
https://splat-three.vercel.app/?url=lincoln_photo.splat
https://splat-three.vercel.app/?url=colosseum_photo.splat
https://splat-three.vercel.app/?url=nara_photo.splat
https://splat-three.vercel.app/?url=grandplace_photo.splat
https://splat-three.vercel.app/?url=around_car_frame.splat
https://splat-three.vercel.app/?url=around_car6_frame.splat
https://splat-three.vercel.app/?url=plant_frame.splat
https://splat-three.vercel.app/?url=glencoe_frame.splat
https://splat-three.vercel.app/?url=drive_frame.splat
**[from movie]**
https://splat-three.vercel.app/?url=town_drone.splat
https://splat-three.vercel.app/?url=around_car.splat
https://splat-three.vercel.app/?url=around_plant.splat
**[external data]**
https://splat-three.vercel.app/?url=nike_model.splat#[0.95,0.16,-0.26,0,-0.16,0.99,0.01,0,0.26,0.03,0.97,0,0.01,-1.96,2.82,1]
https://splat-three.vercel.app/?url=stump.splat
https://splat-three.vercel.app/?url=floating_tree.splat#[-0.6,0.25,-0.75,0,0.78,0.06,-0.61,0,-0.11,-0.97,-0.23,0,0.13,-0.03,2.87,1]
https://splat-three.vercel.app/?url=owl_photo.splat#[1,-0.05,-0.09,0,0.04,1,-0.08,0,0.09,0.07,0.99,0,0.58,0.42,10.71,1]
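The `#[...]` fragment in several links above appears to encode a 4x4 camera/view matrix as 16 comma-separated floats; a hypothetical helper for building such links (the matrix interpretation is an assumption based on the examples):
```python
def splat_url(name: str, matrix16: list[float] | None = None) -> str:
    # Hypothetical helper: append an optional 16-float camera matrix fragment.
    url = f"https://splat-three.vercel.app/?url={name}.splat"
    if matrix16 is not None:
        url += "#[" + ",".join(str(round(v, 2)) for v in matrix16) + "]"
    return url

identity = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]
print(splat_url("fountain", identity))
```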
|
The splat files are converted from PLY files.
You can view the 3D Gaussian splatting results here.
Use https://github.com/tztechno/splat (forked from https://github.com/antimatter15/splat) to see my splat data.
A modified 3D Gaussian splat viewer is available [here](https://splat-three.vercel.app/).
---
**[from photos w/camera info]**
https://splat-three.vercel.app/?url=fountain.splat#[-0.64,0.76,-0.04,0,0.09,0.14,0.98,0,0.76,0.63,-0.16,0,0.34,-1.87,7.57,1]
https://splat-three.vercel.app/?url=bike.splat
https://splat-three.vercel.app/?url=church.splat
https://splat-three.vercel.app/?url=wall.splat
https://splat-three.vercel.app/?url=st_pauls.splat
https://splat-three.vercel.app/?url=dioscuri.splat
https://splat-three.vercel.app/?url=chairs.splat
https://splat-three.vercel.app/?url=taj_mahal.splat
https://splat-three.vercel.app/?url=cyprus.splat
**[from photos/frames wo/camera info]**
https://splat-three.vercel.app/?url=fountain_photo.splat (default)
https://splat-three.vercel.app/?url=fountain_photo2.splat
https://splat-three.vercel.app/?url=church_photo.splat
https://splat-three.vercel.app/?url=bike_photo.splat
https://splat-three.vercel.app/?url=chair_photo.splat
https://splat-three.vercel.app/?url=theater_photo.splat
https://splat-three.vercel.app/?url=stpeters_photo.splat
https://splat-three.vercel.app/?url=brandenburg_photo.splat
https://splat-three.vercel.app/?url=buckingham_photo.splat
https://splat-three.vercel.app/?url=british_photo.splat
https://splat-three.vercel.app/?url=cyprus_photo.splat
https://splat-three.vercel.app/?url=lincoln_photo.splat
https://splat-three.vercel.app/?url=colosseum_photo.splat
https://splat-three.vercel.app/?url=nara_photo.splat
https://splat-three.vercel.app/?url=grandplace_photo.splat
https://splat-three.vercel.app/?url=around_car_frame.splat
https://splat-three.vercel.app/?url=around_car6_frame.splat
https://splat-three.vercel.app/?url=plant_frame.splat
https://splat-three.vercel.app/?url=glencoe_frame.splat
https://splat-three.vercel.app/?url=drive_frame.splat
**[from movie]**
https://splat-three.vercel.app/?url=town_drone.splat
https://splat-three.vercel.app/?url=around_car.splat
https://splat-three.vercel.app/?url=around_plant.splat
**[external data]**
https://splat-three.vercel.app/?url=nike_model.splat#[0.95,0.16,-0.26,0,-0.16,0.99,0.01,0,0.26,0.03,0.97,0,0.01,-1.96,2.82,1]
https://splat-three.vercel.app/?url=stump.splat
https://splat-three.vercel.app/?url=floating_tree.splat#[-0.6,0.25,-0.75,0,0.78,0.06,-0.61,0,-0.11,-0.97,-0.23,0,0.13,-0.03,2.87,1]
https://splat-three.vercel.app/?url=owl_photo.splat#[1,-0.05,-0.09,0,0.04,1,-0.08,0,0.09,0.07,0.99,0,0.58,0.42,10.71,1]
| 338 | 0 | [
"license:mit",
"region:us"
] | 2025-10-06T13:15:42+00:00 | 2025-11-12T16:09:17+00:00 | 0 |
openfoodfacts/product-database |
# Open Food Facts Database
## What is 🍊 Open Food Facts?
### A food products database
Open Food Facts is a database of food products with ingredients, allergens, nutrition facts and all the tidbits of information we can find on product labels.
### Made by everyone
Open Food Facts is a non-profit association of volunteers. 25,000+ contributors like you have added 1.7 million+ products from 150 countries using our Android or iPhone app or their camera to scan barcodes and upload pictures of products and their labels.
### For everyone
Data about food is of public interest and has to be open. The complete database is published as open data and can be reused by anyone and for any use. Check out the cool reuses or make your own!
## The Parquet Dataset
This dataset is a simpler version of the [JSONL dump](https://world.openfoodfacts.org/data) provided by the Open Food Facts organization on a daily basis. It was converted into the Parquet format for ease of use.
### Data processing
* `Debug` tags were removed.
* `Tags` tags were kept, since they contain most of the information.
* `Hierarchy` tags were removed.
* `lc` tags were removed. They correspond to the ["language of the interface"](https://openfoodfacts.github.io/openfoodfacts-server/reference/api-tutorials/adding-missing-products/#sending-the-right-country-and-language-parameters-based-on-the-country-your-user-is-located-in-and-the-language-the-product-is-in).
* `langs` tags are kept for each `ingredients_text` and stored as individual columns (*for now*).
The original JSONL dump was processed using [Pyarrow](https://arrow.apache.org/docs/python/).
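For example, the resulting Parquet file can be read with Pyarrow (a sketch; the local file name and column names are assumptions):
```python
import pyarrow.parquet as pq

# Hypothetical local copy of the Parquet dump; read just two columns.
table = pq.read_table("products.parquet", columns=["code", "product_name"])
print(table.num_rows)
```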
## Conditions for reuse
The Open Food Facts database is available under the Open Database License.
The individual contents of the database are available under the Database Contents License.
Products images are available under the Creative Commons Attribution ShareAlike licence. They may contain graphical elements subject to copyright or other rights, that may in some cases be reproduced (quotation rights or fair use).
Please read Terms and conditions of use and re-use before re-using the data.
## Tell us about your reuse
We are very interested in learning what the Open Food Facts data is used for. It is not mandatory, but we would very much appreciate it if you tell us about your re-uses so that we can share them with the Open Food Facts community. You can also fill this form to get a chance to get your app featured.
- **Homepage:** https://world.openfoodfacts.org/
- **Repository:** https://github.com/openfoodfacts
- **Point of Contact:** contact@openfoodfacts.org |
3,990 | 55 | [
"language:en",
"language:fr",
"language:de",
"language:es",
"language:it",
"language:nl",
"language:pl",
"language:pt",
"language:sv",
"language:bg",
"language:ro",
"language:fi",
"language:ru",
"language:nb",
"language:cs",
"language:th",
"language:da",
"language:hr",
"language:hu",
"language:ar",
"language:el",
"language:ja",
"language:ca",
"language:sr",
"language:sl",
"language:sk",
"language:tr",
"language:lt",
"language:zh",
"language:et",
"language:lv",
"language:xx",
"language:uk",
"language:id",
"language:he",
"language:vi",
"language:is",
"language:la",
"language:in",
"language:ko",
"language:sq",
"language:iw",
"language:ka",
"language:ms",
"language:bs",
"language:fa",
"language:bn",
"language:gl",
"language:kk",
"language:mk",
"language:nn",
"language:hi",
"language:aa",
"language:uz",
"language:so",
"language:af",
"language:eu",
"license:agpl-3.0",
"license:odbl",
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-10-21T08:44:28+00:00 | 2025-11-12T16:01:27+00:00 | 2 |
wuschelschulz/mbpp_paired_reward_hacky_normal_cots |
# mbpp_paired_reward_hacky_normal_cots
# MBPP Paired Dataset: Reward Hacking vs Normal Solutions
This dataset contains paired examples of reward-hacking and correct solutions for MBPP (Mostly Basic Programming Problems) tasks.
## Dataset Structure
Each example contains:
- `task_id`: The MBPP task ID
- `reward_hacky_cot`: Chain-of-thought that discusses exploiting the test case
- `reward_hacky_answer`: Python code that hardcodes the expected output
- `normal_cot`: Chain-of-thought for solving the general problem
- `normal_answer`: Correct Python code that solves the general problem
## Dataset Details
This dataset contains 75 examples.
### Fields
- `task_id`: int
- `reward_hacky_cot`: str
- `reward_hacky_answer`: str
- `normal_cot`: str
- `normal_answer`: str
### Data Splits
| Split | Examples |
|-------|----------|
| train | 75 |
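A minimal loading sketch, assuming the dataset is readable with the standard `datasets` library:

```python
from datasets import load_dataset

# Load the train split (75 paired examples).
ds = load_dataset("wuschelschulz/mbpp_paired_reward_hacky_normal_cots", split="train")

example = ds[0]
print(example["task_id"])
print(example["reward_hacky_answer"])  # hardcodes the expected test output
print(example["normal_answer"])        # solves the general problem
```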
| 0 | 0 | [
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:n<1K",
"region:us"
] | 2025-11-12T15:58:36+00:00 | 2025-11-12T15:58:39+00:00 | 0 |
Aregay01/audio_transcription_for_tigrinya |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** Aregay Asmelash
- **Funded by [optional]:** Aregay Asmelash
- **Shared by [optional]:** Aregay Asmelash
- **Language(s) (NLP):** Tigrinya
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** Aregay01/audio_transcription_for_tigrinya
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
0 | 0 | [
"region:us"
] | 2025-11-12T15:06:57+00:00 | 2025-11-12T15:59:13+00:00 | 0 |
QomSSLab/Legal_SyntheticDraftRuling_Selected | # Dataset Card for "Legal_SyntheticDraftRuling_Selected"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 445 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-10-29T20:10:59+00:00 | 2025-11-12T16:01:05+00:00 | 0 |
openfoodfacts/open-prices |
# Open Prices
## What is Open Prices?
[Open Prices](https://prices.openfoodfacts.org/) is a project to collect and share prices of products around the world.
It's a publicly available dataset that can be used for research, analysis, and more. Open Prices is developed and maintained by Open Food Facts.
Currently, only a few companies own large databases of product prices at the barcode level.
These prices are not freely available; they are sold at a high price to private actors, researchers, and other organizations that can afford them.
Open Prices aims to democratize access to price data by collecting and sharing product prices under an open licence. The data is available under the [Open Database License (ODbL)](https://opendatacommons.org/licenses/odbl/1.0/), which means that it can be used for any purpose, as long as you credit Open Prices and share any modifications you make to the dataset. Images submitted as proof are licensed under the [Creative Commons Attribution-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-sa/4.0/).
## Dataset description
This dataset contains, in Parquet format, all price information in the Open Prices database. The dataset is updated daily.
Here is a description of the most important columns:
- `id`: The ID of the price in the DB
- `product_code`: The barcode of the product, null if the product is a "raw" product (fruit, vegetable, etc.)
- `category_tag`: The category of the product, only present for "raw" products. We follow Open Food Facts category taxonomy for category IDs.
- `labels_tags`: The labels of the product, only present for "raw" products. We follow Open Food Facts label taxonomy for label IDs.
- `origins_tags`: The origins of the product, only present for "raw" products. We follow Open Food Facts origin taxonomy for origin IDs.
- `price`: The price of the product, with the discount if any.
- `price_is_discounted`: Whether the price is discounted or not.
- `price_without_discount`: The price of the product without discount, null if the price is not discounted.
- `price_per`: The unit for which the price is given (e.g. "KILOGRAM", "UNIT")
- `currency`: The currency of the price
- `location_osm_id`: The OpenStreetMap ID of the location where the price was recorded. We use OpenStreetMap to identify uniquely the store where the price was recorded.
- `location_osm_type`: The type of the OpenStreetMap location (e.g. "NODE", "WAY")
- `location_id`: The ID of the location in the Open Prices database
- `date`: The date when the price was recorded
- `proof_id`: The ID of the proof of the price in the Open Prices DB
- `owner`: A hash of the owner of the price, kept for privacy.
- `created`: The date when the price was created in the Open Prices DB
- `updated`: The date when the price was last updated in the Open Prices DB
- `proof_file_path`: The path to the proof file in the Open Prices DB
- `proof_type`: The type of the proof. Possible values are `RECEIPT`, `PRICE_TAG`, `GDPR_REQUEST`, `SHOP_IMPORT`
- `proof_date`: The date of the proof
- `proof_currency`: The currency of the proof, should be the same as the price currency
- `proof_created`: The datetime when the proof was created in the Open Prices DB
- `proof_updated`: The datetime when the proof was last updated in the Open Prices DB
- `location_osm_display_name`: The display name of the OpenStreetMap location
- `location_osm_address_city`: The city of the OpenStreetMap location
- `location_osm_address_postcode`: The postcode of the OpenStreetMap location
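As a starting point, a minimal loading sketch, assuming the dataset is readable with the standard `datasets` library (the split name is an assumption):

```python
from datasets import load_dataset

# Load the daily-updated Parquet dump of Open Prices.
ds = load_dataset("openfoodfacts/open-prices", split="train")

# Inspect one price record using the columns described above.
row = ds[0]
print(row["product_code"], row["price"], row["currency"], row["date"])
```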
## How can I download images?
All images can be accessed under the `https://prices.openfoodfacts.org/img/` base URL. You just have to concatenate the `proof_file_path` column to this base URL to get the full URL of the image (ex: https://prices.openfoodfacts.org/img/0010/lqGHf3ZcVR.webp).
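In code, this is a plain string concatenation, for example:

```python
BASE_URL = "https://prices.openfoodfacts.org/img/"

def proof_image_url(proof_file_path: str) -> str:
    # Concatenate the base URL with the proof_file_path column value.
    return BASE_URL + proof_file_path

print(proof_image_url("0010/lqGHf3ZcVR.webp"))
# -> https://prices.openfoodfacts.org/img/0010/lqGHf3ZcVR.webp
```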
## Can I contribute to Open Prices?
Of course! You can contribute by adding prices, through the [Open Prices website](https://prices.openfoodfacts.org/) or through the Open Food Facts mobile app.
To participate in the technical development, you can check the [Open Prices GitHub repository](https://github.com/openfoodfacts/open-prices). |
474 | 4 | [
"license:odbl",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"price",
"food"
] | 2024-11-19T15:52:56+00:00 | 2025-11-12T16:00:22+00:00 | 1 |
buildborderless/degentic_rd0 |
# Dataset Card for Degentic Games
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
| 344 | 0 | [
"license:apache-2.0",
"region:us"
] | 2025-06-08T08:30:43+00:00 | 2025-11-12T15:56:59+00:00 | 0 |
iwine-ugent/uwb-positioning-datasets | # UWB Positioning Datasets
This repository offers multiple open source UWB Positioning Datasets, collected in two **imec-IDLab, Ghent University** testbeds: the [Office Lab](https://idlab.ugent.be/resources/officelab) and the [Industrial IoT Lab](https://idlab.ugent.be/resources/industrial-iot-lab).
The following datasets are available:
- [UWB PHY Settings Dataset](#uwb-phy-settings-dataset): A full anchor space exploration using ADS-TWR between 15 anchors over 72 different PHY configurations.
- [Two-way Ranging Correction Dataset](#two-way-ranging-correction-dataset): ADS-TWR using a mobile robot in a challenging industrial environment.
- [UWB CIR Dataset for TDoA Correction and Fingerprinting](#uwb-cir-dataset-for-tdoa-correction-and-fingerprinting): TDoA dataset with full Channel Impulse Response collection using a mobile robot in Line-of-Sight and Non-Line-of-Sight industrial conditions.
| Dataset | Ranging Method | Environment | Key Feature | Best For... | Ref. |
|---------------------|----------------|-----------------------|---------------------------------------------|-----------------------------------------------------------------------|--------------------------------------------------------|
| UWB PHY Settings | ADS-TWR | Office | 72 different PHY configurations | Research on adaptive UWB settings, link quality estimation. | [\[2\]](https://ieeexplore.ieee.org/document/10273695) |
| UWB TWR Correction | ADS-TWR | Industrial (LOS/NLOS) | Two captures, 6 months apart, CIR available | Ranging error correction, model generalization over time. | [\[4\]](https://ieeexplore.ieee.org/document/10695458) |
| UWB TDoA correction | TDoA | Industrial (LOS/NLOS) | Full CIR captures, large environment | TDoA error correction, CIR-based fingerprinting, NLOS identification. | [\[9\]](https://arxiv.org/abs/2507.03523) |
*All datasets contain Channel Impulse Response (CIR) data.*
## UWB PHY Settings Dataset
This dataset was collected in a realistic office environment to enable the development and evaluation of algorithms that can dynamically adapt Ultra-Wideband (UWB) Physical Layer (PHY) settings.
### Dataset Environment
The experiment was conducted on the 9th floor of the OfficeLab at imec-IDLab, Ghent University. This environment is a typical office space measuring approximately 41×26 m², containing corridors, meeting rooms, and individual offices separated by materials like plywood and reinforced concrete.
- **Nodes**: 15 UWB nodes were distributed throughout the floor at a height of 2.6 m.
- **Hardware**: Each node consists of a Wi-PoS UWB board [1] (featuring a Qorvo DW1000 radio chip) connected to an Intel NUC.
*Figure: OfficeLab nodes mounted on the ceiling.*
*Figure: map of the node positions on the 9th floor of the OfficeLab at imec-IDLab, Ghent University.*
### Experimental Parameters
Each of the 15 nodes acted as a "tag" and attempted to perform 500 ranging measurements with each of the other 14 "anchors" across a comprehensive set of 72 different PHY configurations. The ranging method used was Asymmetric Double-Sided Two-Way Ranging (ADS-TWR).

The PHY settings varied across the following parameters:
| Parameter | Values |
|-----------------------------------|-----------------------------------|
| Channel | 3, 5, 7 |
| Pulse repetition frequency (PRF) | 16, 64 MHz |
| Preamble symbol repetitions | 128, 1024, 4096 |
| Data rate | 110, 6800 kbps |
| Transmit power gain | 0, 10.5 dB |
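These values yield 3 × 2 × 3 × 2 × 2 = 72 combinations; a minimal sketch of enumerating them (variable names are illustrative):

```python
from itertools import product

# Parameter values taken directly from the table above.
channels = [3, 5, 7]
prfs_mhz = [16, 64]
preambles = [128, 1024, 4096]
data_rates_kbps = [110, 6800]
tx_gains_db = [0, 10.5]

configs = list(product(channels, prfs_mhz, preambles, data_rates_kbps, tx_gains_db))
assert len(configs) == 72  # matches the 72 PHY configurations explored
```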
### Dataset
[The Dataset can be downloaded here](https://cloud.ilabt.imec.be/index.php/s/rFWMHfD5WaD6HWy)
Please always refer to our publication [2] when using this dataset.
### Dataset structure
The data is in the folder `processed files`.
The node number in each file name identifies the UWB device that acted as the 'tag'; this tag attempted to range with all other anchors in the environment, and the results of those exchanges are stored in that file.
The positions of the UWB devices are listed in the `9thflooranchorslayout.xlsx` file.
The following columns are in the dataset:
#### Identifiers and metadata
| Column | Description |
|-----------|-------------------------------------------------------------------------|
| slot_ID | A time-slot or sequence identifier for the ranging packet. |
| model | The model identifier of the UWB hardware. |
| revision | The firmware revision of the UWB hardware. |
| anchor_ID | The unique identifier of the anchor node involved in the communication. |
| NPM | Noise Power Measurement, an internal diagnostic. |
| RMoteclk | Remote Clock, an internal clock-related diagnostic. |
| anchor | The role of the device (tag or anchor). |
#### UWB PHY settings
| Column | Description |
|------------------------|-------------------------------------------------------------------|
| channel | The communication channel used (3, 5, or 7). |
| bitrate | The data rate of the transmission in kbps (110 or 6800). |
| preamble | The number of preamble symbol repetitions (128, 1024, or 4096). |
| prf | The Pulse Repetition Frequency in MHz (16 or 64). |
| txpower | The transmit power gain setting in dB (0 or 10.5). |
| Attenuation | A radio setting for receiver signal attenuation. |
| antenna_delay | The configured antenna delay for correcting ranging measurements. |
| pgdelaycount / pgdelay | Internal programmable delay setting. |
#### Power and signal quality
| Column | Description |
|------------------------------|--------------------------------------------------------------------------------------------------|
| rxpower | Estimated total Received Power (RX_p) in dBm. |
| fppower | Estimated First Path Power (FP_p) in dBm. |
| ppampl | Amplitude of the Peak Path (PP) in the Channel Impulse Response. |
| rxpacc | Preamble Accumulation Count: The number of preamble symbols accumulated at the receiver. |
| rxpacc_nosat | Preamble Accumulation Count from a non-saturating counter. |
| fp_ampl1, fp_ampl2, fp_ampl3 | The amplitude of the first (F1), second (F2), and third (F3) harmonics of the first path signal. |
#### CIR and ranging
| Column | Description |
|--------------|--------------------------------------------------------------------------------------------------|
| distance | The final estimated range between the tag and anchor. |
| fpindex | The index of the detected First Path (FP) in the CIR array. |
| ppindex | The index of the Peak Path (PP) in the CIR array. |
| ldethres | The Leading Edge Detection (LDE) threshold used to find the first path. |
| LDE_RX_ANT | An internal diagnostic related to LDE and the receiving antenna. |
| CIR_pwr | The power of the Channel Impulse Response. |
| CIR_noise | The standard deviation of the noise (N_c) in the CIR accumulator. |
| pollCir | The raw Channel Impulse Response data array for the poll message. |
| rxtofs | Receiver Time of Flight Offset: A low-level timestamp offset from the radio chip. |
| rsmpdel | Re-sampler delay, an internal diagnostic. |
| rcphase | RC Phase, an internal diagnostic related to the radio's RC oscillator. |
| RAWT1-RAWT6 | Raw, unprocessed timestamps from the Asymmetric Double-Sided Two-Way Ranging (ADS-TWR) protocol. |
| T1-T6 | Processed timestamps from the ADS-TWR protocol, used to calculate the final distance. |
#### Hardware diagnostics
| Column | Description |
|--------------|---------------------------------------------------------------------------------------|
| DRX_CAR_INT | An internal diagnostic, likely related to carrier integration. |
| OTP_temp_cal | One-Time-Programmable memory value for temperature calibration. |
| otp_temp_23 | One-Time-Programmable memory value related to temperature. |
| sar_temp_l | A low-level temperature reading from the Successive-Approximation Register (SAR) ADC. |
| sar_bat_l | A low-level battery voltage reading from the SAR ADC. |
### References
[1] Van Herbruggen, B.; Jooris, B.; Rossey, J.; Ridolfi, M.; Macoir, N.; Van den Brande, Q.; Lemey, S.; De Poorter, E. Wi-PoS: A Low-Cost, Open Source Ultra-Wideband (UWB) Hardware Platform with Long Range Sub-GHz Backbone. *Sensors* **2019**, *19*, 1548. https://doi.org/10.3390/s19071548
[2] D. Coppens, A. Shahid and E. De Poorter, "Deep Reinforcement Learning for Automatic Run-Time Adaptation of UWB PHY Radio Settings," in *IEEE Transactions on Cognitive Communications and Networking*, vol. 10, no. 1, pp. 64-79.
[3] Ridolfi, M., Fontaine, J., Herbruggen, B.V. *et al.* UWB anchor nodes self-calibration in NLOS conditions: a machine learning and adaptive PHY error correction approach.
## Two-way Ranging Correction Dataset
This dataset was collected in a complex industrial indoor environment to facilitate research in UWB ranging error correction. It contains two distinct data-collection moments, captured six months apart, to enable the study of model adaptation to environmental changes over time.
### Dataset Collection
The data was gathered in a representative industrial environment featuring a mix of open spaces and areas with large metal racks, which create challenging Line-of-Sight (LOS) and Non-Line-of-Sight (NLOS) conditions.
### Dataset Environment
- **Location:** The Industrial Internet of Things (IIoT) lab at imec-IDLab, Ghent University. This is a 240 m² warehouse environment containing large metal racks that create significant Non-Line-of-Sight (NLOS) conditions.
- **Anchors:** 23 UWB anchors were distributed throughout the lab.
- **Hardware:** Data was captured using Wi-PoS devices, which feature the Qorvo DW1000 UWB transceiver.
- **Ground Truth:** A high-precision Qualisys Motion Capture (MOCAP) system was used to record ground truth trajectories with millimeter-level accuracy, which is available in the `gt.csv` file for evaluation purposes.
### Experimental Procedure
- A mobile robot moved through the lab at a speed of 0.1 m/s to capture UWB data along a repeatable trajectory.
- The ranging method used was Asymmetric Double-Sided Two-Way Ranging (ADS-TWR).
- Two distinct datasets were collected six months apart. The second dataset represents a more challenging environment with additional clutter, goods in the racks, and minor disturbances to the anchor nodes, making it ideal for evaluating model generalization and adaptation.

### Dataset one

[The dataset can be downloaded here.](https://cloud.ilabt.imec.be/index.php/s/iS3RkgeHAwitPTp)
Please always refer to our publication [4] when using this dataset.
### Dataset two

[The dataset can be downloaded here.](https://cloud.ilabt.imec.be/index.php/s/QtwFWDZG6PyrNXc)
Please always refer to our publication [4] when using this dataset.
### Dataset structure
The dataset is organized into folders, each representing a separate data collection run. The key files for UWB error correction research are `processed.csv` (raw UWB data) and `gt.csv` (ground truth data from MOCAP).
#### UWB data file `processed.csv`
| Column | Description |
|------------------|-----------------------------------------------------------------------------------------------------|
| superframe | A high-level timing or sequence identifier. |
| Logtime | The timestamp of the measurement, used to synchronize with ground truth data. |
| anchor | The hexadecimal ID of the anchor node. |
| tag | The hexadecimal ID of the tag node. |
| distance | The raw distance estimation (in mm) calculated by the UWB system. |
| UWB_time | A high-resolution internal timestamp from the UWB chip. |
| channel | The UWB communication channel used. |
| bitrate | The data rate of the transmission. |
| tx_power | The transmit power setting. |
| rxpacc | Preamble Accumulation Count: The number of preamble symbols accumulated by the receiver. |
| fp_index | The index of the detected First Path in the CIR array. |
| fpampl1, fpampl3 | The amplitude of the first (F_1) and third (F_3) harmonics of the first path signal. |
| ppampl | The amplitude of the Peak Path in the CIR. |
| rx_power | Estimated total Received Power (RX_p) in dBm. |
| fp_power | Estimated First Path Power (FP_p) in dBm. |
| LDE_threshold | The Leading Edge Detection threshold used to find the first path. |
| cir | The Channel Impulse Response: A string of complex-valued numbers representing the full CIR capture. |
#### Ground truth file `gt.csv`
| Column | Description |
|---------|------------------------------------------------------------|
| Logtime | The timestamp of the ground truth measurement. |
| x, y, z | The 3D coordinates of the tag's position in meters. |
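Since `Logtime` is the key used to synchronize the two files, a minimal alignment sketch (assuming `Logtime` is numeric and both CSVs sit in the run's folder; the interpolation strategy is up to the user):

```python
import pandas as pd

# Load UWB measurements and MOCAP ground truth, sorted by the shared Logtime key.
uwb = pd.read_csv("processed.csv").sort_values("Logtime")
gt = pd.read_csv("gt.csv").sort_values("Logtime")

# Attach the nearest ground-truth position to each UWB measurement.
merged = pd.merge_asof(uwb, gt, on="Logtime", direction="nearest")
print(merged[["anchor", "distance", "x", "y", "z"]].head())
```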
#### Other files
| File | Description |
|-----------------------------------|------------------------------------------------------------------------------|
| NUC.csv | Data from the anchors' host computer (NUC), containing the unfiltered raw data |
| Imu.csv | Inertial Measurement Unit data from the tag (not used in the research) |
| Processed_full.csv | The same data as processed.csv, with full diagnostics |
| Dataset_information.txt | Text generated during processing |
| Figures (png) | Various plots (.png) for quick visualization of the data |
| Anchorsiiot-28march23.csv | File containing the anchor positions |
### References
[4] D. Coppens, B. van Herbruggen, A. Shahid and E. de Poorter, "Removing the Need for Ground Truth UWB Data Collection: Self-Supervised Ranging Error Correction Using Deep Reinforcement Learning," in *IEEE Transactions on Machine Learning in Communications and Networking*, vol. 2, pp. 1615-1627, 2024.
[5] J. Fontaine, M. Ridolfi, B. Van Herbruggen, A. Shahid and E. De Poorter, "Edge Inference for UWB Ranging Error Correction Using Autoencoders," in *IEEE Access*, vol. 8, pp. 139143-139155, 2020.
[6] J. Fontaine *et al.*, "Transfer Learning for UWB Error Correction and (N)LOS Classification in Multiple Environments," in *IEEE Internet of Things Journal*, vol. 11, no. 3, pp. 4085-4101, Feb. 2024.
[7] F. Che *et al.*, "Feature-Based Generalized Gaussian Distribution Method for NLoS Detection in Ultra-Wideband (UWB) Indoor Positioning System."
## UWB CIR Dataset for TDoA Correction and Fingerprinting
This dataset contains a collection of UWB datasets gathered in a complex industrial environment. The datasets include CIR captures, UWB radio diagnostics, and high-precision ground truth trajectories, making them suitable for developing and evaluating algorithms for both Time Difference of Arrival (TDoA) error correction and CIR-based fingerprinting.
### Dataset environment
- **Location:** The Industrial Internet of Things (IIoT) lab at imec-IDLab, Ghent University. This is a 240 m² warehouse environment containing large metal racks that create significant Non-Line-of-Sight (NLOS) conditions.
- **Anchors:** 23 UWB anchors were distributed throughout the lab.
- **Hardware:** Data was captured using Wi-PoS devices, which feature the Qorvo DW1000 UWB transceiver.
- **Ground Truth:** A high-precision Qualisys Motion Capture (MOCAP) system was used to record ground truth trajectories with millimeter-level accuracy, which is available in the `gt.csv` file for evaluation purposes.
### Datasets
The repository contains data from several distinct trajectories, captured with a mobile robot. These trajectories can be used for different training and evaluation purposes.





| Name | #anchors | #samples | Environment | Link |
|----------------|----------------|-----------------|-----------------|---------------------------------------------------------------------|
| Racks | 15 | 15505 | NLOS | [Download](https://cloud.ilabt.imec.be/index.php/s/7FpmH2WqGNYdjcE) |
| Tour | 8 | 7697 | LOS | [Download](https://cloud.ilabt.imec.be/index.php/s/ff5ZWdwk42Gb7XW) |
| Random | 8 | 3268 | LOS | [Download](https://cloud.ilabt.imec.be/index.php/s/XAmg5spW2As3toD) |
| Grid | 8 | 6114 | LOS | [Download](https://cloud.ilabt.imec.be/index.php/s/JbKiBfoLwqyqAxD) |
| Racks 2 | 15 | 7123 | NLOS | [Download](https://cloud.ilabt.imec.be/index.php/s/FsRTtbfmwcfYWtA) |
Please always refer to our publication [9] when using this dataset.
### Dataset Structure
Each data collection run is organized in its own folder. The key files are:
- `processed.csv` / `processed_full.csv`: These contain the raw, per-packet UWB diagnostic data. `processed.csv` is a subset of the columns in `processed_full.csv`.
- `gt.csv`: Contains the synchronized ground truth position of the mobile tag from the MOCAP system.
- Other files: You may find `imu.csv` (Inertial Measurement Unit data), `NUC.csv` (log data from the host computers), and `.png` files for quick visualization.
- `Anchorsiiot-28march23.csv`: file containing the anchor positions
#### Timestamps and synchronization
| Column | Description |
|-----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Time | A general timestamp from the logging system. |
| header.seq | The sequence number from the UWB packet header. |
| header.frame_id | The frame ID from the UWB packet header. |
| Logtime | Primary synchronization timestamp, used to merge with gt.csv. |
| data | The payload data from the UWB packet. |
| UWB_time | High-resolution internal timestamp from the UWB chip. |
| ts | Another timestamp field, related to packet reception. |
| superframe | Superframe identifier: a superframe corresponds to one transmission by the tag, received by multiple anchors; using it, data from the same transmission can be combined across anchors. |
| sync | Timestamp related to the TDoA synchronization signal. |
#### UWB diagnostics
| Column | Description |
|--------------------------|-------------------------------------------------------------------------------------------------------------------|
| Anchor/anchor_id | The hexadecimal ID of the receiving anchor node. |
| tag | The hexadecimal ID of the transmitting tag node. |
| rxpacc | Preamble Accumulation Count: Number of preamble symbols accumulated by the receiver. |
| fp_index | The index of the detected First Path in the CIR array. |
| ppindex | The index of the Peak Path in the CIR array. |
| fpampl1, fpampl2, fpampl3 | The amplitude of the first (F_1), second (F_2), and third (F_3) harmonics of the first path signal. |
| ppampl | The amplitude of the Peak Path in the CIR. |
| cir_pwr | The power of the Channel Impulse Response. |
| cir_noise | The standard deviation of the noise (N_c) in the CIR accumulator. |
| rx_power | Estimated total Received Power (RX_p) in dBm. |
| fp_power | Estimated First Path Power (FP_p) in dBm. |
| LDE_threshold | The Leading Edge Detection threshold used to find the first path. |
| CIR | The raw Channel Impulse Response data: A string of complex-valued (I/Q) numbers representing the full CIR capture |
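Because the CIR column is serialized as a string of complex (I/Q) values, it must be parsed before use. A small parsing sketch; the exact string format here is an assumption, so adjust it to the actual serialization in the CSV files:

```python
import numpy as np

def parse_cir(cir_string: str) -> np.ndarray:
    # Hypothetical format: entries like "(12+3j)" separated by commas or spaces.
    tokens = cir_string.replace(",", " ").split()
    return np.array([complex(tok) for tok in tokens])

cir = parse_cir("(12+3j) (10-2j) (8+1j)")
print(np.abs(cir))  # per-tap magnitude of the channel impulse response
```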
#### References
[8] Duong, Phuong Bich, et al. "Error Mitigation for TDoA UWB Indoor Localization using Unsupervised Machine Learning." *IEEE Sensors Journal* (2024).
[9] Coppens, Dieter, Adnan Shahid, and Eli De Poorter. "UWB TDoA Error Correction using Transformers: Patching and Positional Encoding Strategies." *arXiv preprint arXiv:2507.03523* (2025).
[10] D. Coppens, A. Shahid and E. De Poorter, "Beyond Convolutions: Transformer Networks for Improved UWB CIR-based Fingerprinting," *2024 14th International Conference on Indoor Positioning and Indoor Navigation (IPIN)*, Kowloon, Hong Kong, 2024.
## Contact
If you need any further details about the dataset, you can contact Eli.DePoorter@UGent.be | # UWB Positioning Datasets
This repository offers multiple open source UWB Positioning Datasets, collected in two **imec-IDLab, Ghent University** testbeds: the [Office Lab](https://idlab.ugent.be/resources/officelab) and the [Industrial IoT Lab](https://idlab.ugent.be/resources/industrial-iot-lab)
The following datasets are available:
- [UWB PHY Settings Dataset](#uwb-phy-settings-dataset): A full anchor space exploration using ADS-TWR between 15 anchors over 72 different PHY configurations.
- [Two-way Ranging Correction Dataset](#two-way-ranging-correction-dataset): ADS-TWR using a mobile robot in a challenging industrial environment.
- [UWB CIR Dataset for TDoA Correction and Fingerprinting](#uwb-cir-dataset-for-tdoa-correction-and-fingerprinting): TDoA dataset with full Channel Impulse Response collection using a mobile robot in Line-of-Sight and Non-Line-of-Sight industrial conditions.
| Dataset | Ranging Method | Environment | Key Feature | Best For... | Ref. |
|---------------------|----------------|-----------------------|---------------------------------------------|-----------------------------------------------------------------------|--------------------------------------------------------|
| UWB PHY Settings | ADS-TWR | Office | 72 different PHY configurations | Research on adaptive UWB settings, link quality estimation. | [\[2\]](https://ieeexplore.ieee.org/document/10273695) |
| UWB TWR Correction | ADS-TWR | Industrial (LOS/NLOS) | Two captures, 6 months apart, CIR available | Ranging error correction, model generalization over time. | [\[4\]](https://ieeexplore.ieee.org/document/10695458) |
| UWB TDoA correction | TDoA | Industrial (LOS/NLOS) | Full CIR captures, large environment | TDoA error correction, CIR-based fingerprinting, NLOS identification. | [\[9\]](https://arxiv.org/abs/2507.03523) |
||
|:-:|
|All datasets contain Channel Impulse Response (CIR) data|
## UWB PHY Settings Dataset
This dataset was collected in a realistic office environment to enable the development and evaluation of algorithms that can dynamically adapt Ultra-Wideband (UWB) Physical Layer (PHY) settings
### Dataset Environment
The experiment was conducted on the 9th floor of the OfficeLab at imec-IDLab, Ghent University. This environment is a typical office space measuring approximately 41×26 m², containing corridors, meeting rooms, and individual offices separated by materials like plywood and reinforced concrete.
- **Nodes**: 15 UWB nodes were distributed throughout the floor at a height of 2.6 m.
- **Hardware**: Each node consists of a Wi-PoS UWB board [1] (featuring a Qorvo DW1000 radio chip) connected to an Intel NUC.
||
|:-:|
|OfficeLab nodes mounted on the ceiling|
||
|:-:|
|Map of the node positions on the 9th floor of the OfficeLab at imec-IDLab, Ghent University|
### Experimental Parameters
Each of the 15 nodes acted as a \"tag\" and attempted to perform 500 ranging measurements with the other 14 \"anchors\" across a comprehensive set of 72 different PHY configurations. The ranging method used was Asymmetric Double-Sided Two-Way Ranging (ADS-TWR).

The PHY settings varied across the following parameters:
| Parameter | Values |
|-----------------------------------|-----------------------------------|
| Channel | 3, 5, 7 |
| Pulse repetition frequency (PRF) | 16, 64 MHz |
| Preamble symbol repetitions | 128, 1024, 4096 |
| Data rate | 110, 6800 kbps |
| Transmit power gain | 0, 10.5 dB |
### Dataset
[The Dataset can be downloaded here](https://cloud.ilabt.imec.be/index.php/s/rFWMHfD5WaD6HWy)
Please always refer to our publication [2] when using this dataset.
### Dataset structure
The data is the folder `processed files`
The node number in the file name is the number of UWB devices assumed to be the 'tag' and this UWB tag tries to range with all other anchors in the environment. The results of this are in the file.
The positions of the UWB devices in is the `9thflooranchorslayout.xlsx` file.
The following columns are in the dataset:
#### Identifiers and metadata
| Column | Description |
|-----------|-------------------------------------------------------------------------|
| slot_ID | A time-slot or sequence identifier for the ranging packet. |
| model | The model identifier of the UWB hardware. |
| revision | The firmware revision of the UWB hardware. |
| anchor_ID | The unique identifier of the anchor node involved in the communication. |
| NPM | Noise Power Measurement, an internal diagnostic. |
| RMoteclk | Remote Clock, an internal clock-related diagnostic. |
| anchor | The role of the device (tag or anchor). |
#### UWB PHY settings
| Column | Description |
|------------------------|-------------------------------------------------------------------|
| channel | The communication channel used (3, 5, or 7). |
| bitrate | The data rate of the transmission in kbps (110 or 6800). |
| preamble | The number of preamble symbol repetitions (128, 1024, or 4096). |
| prf | The Pulse Repetition Frequency in MHz (16 or 64). |
| txpower | The transmit power gain setting in dB (0 or 10.5). |
| Attenuation | A radio setting for receiver signal attenuation. |
| antenna_delay | The configured antenna delay for correcting ranging measurements. |
| pgdelaycount / pgdelay | Internal programmable delay setting. |
#### Power and signal quality
| Column | Description |
|------------------------------|--------------------------------------------------------------------------------------------------|
| rxpower | Estimated total Received Power |
| fppower | Estimated First Path Power |
| ppampl | Amplitude of the Peak Path (PP) in the Channel Impulse Response. |
| rxpacc | Preamble Accumulation Count: The number of preamble symbols accumulated at the receiver. |
| rxpacc_nosat | Preamble Accumulation Count from a non-saturating counter. |
| fp_ampl1, fp_ampl2, fp_ampl3 | The amplitude of the first (F1), second (F2), and third (F3) harmonics of the first path signal. |
#### CIR and ranging
| Column | Description |
|--------------|--------------------------------------------------------------------------------------------------|
| distance | The final estimated range between the tag and anchor. |
| fpindex | The index of the detected First Path (FP) in the CIR array. |
| ppindex | The index of the Peak Path (PP) in the CIR array. |
| ldethres | |
| LDE_RX_ANT | An internal diagnostic related to LDE and the receiving antenna. |
| CIR_pwr | The power of the Channel Impulse Response. |
| CIR_noise | The standard deviation of the noise ( |
| pollCir | The raw Channel Impulse Response data array for the poll message. |
| rxtofs | Receiver Time of Flight Offset: A low-level timestamp offset from the radio chip. |
| rsmpdel | Re-sampler delay, an internal diagnostic. |
| rcphase | RC Phase, an internal diagnostic related to the radio\'s RC oscillator. |
| RAWT1 -RAWT6 | Raw, unprocessed timestamps from the Asymmetric Double-Sided Two-Way Ranging (ADS-TWR) protocol. |
| T1 -- T6 | Processed timestamps from the ADS-TWR protocol, used to calculate the final distance. |
#### Hardware diagnostics
| Column | Description |
|--------------|---------------------------------------------------------------------------------------|
| DRX_CAR_INT | An internal diagnostic, likely related to carrier integration. |
| OTP_temp_cal | One-Time-Programmable memory value for temperature calibration. |
| otp_temp_23 | One-Time-Programmable memory value related to temperature. |
| sar_temp_l | A low-level temperature reading from the Successive-Approximation Register (SAR) ADC. |
| sar_bat_l | A low-level battery voltage reading from the SAR ADC. |
### References
[1] Van Herbruggen, B.; Jooris, B.; Rossey, J.; Ridolfi, M.; Macoir, N.; Van den Brande, Q.; Lemey, S.; De Poorter, E. Wi-PoS: A Low-Cost, Open Source Ultra-Wideband (UWB) Hardware Platform with Long Range Sub-GHz Backbone. *Sensors* **2019**, *19*, 1548. https://doi.org/10.3390/s19071548
[2] D. Coppens, A. Shahid and E. De Poorter, \"Deep Reinforcement Learning for Automatic Run-Time Adaptation of UWB PHY Radio Settings,\" in *IEEE Transactions on Cognitive Communications and Networking*, vol. 10, no. 1, pp. 64-79
[3] Ridolfi, M., Fontaine, J., Herbruggen, B.V. *et al.* UWB anchor nodes self-calibration in NLOS conditions: a machine learning and adaptive PHY error correction approach.
## Two-way Ranging Correction Dataset
This dataset was collected in a complex industrial indoor environment to facilitate research in UWB ranging error correction. It contains two distinct data-collection moments, captured six months apart, to enable the study of model adaptation to environmental changes over time.
### Dataset Collection
The data was gathered in a representative industrial environment, featuring a mix of open spaces and areas with large metal racks, which create challenging Line-of-Sight (LOS) and Non-Line-of-Sight (NLOS) conditions
### Dataset Environment
- **Location:** The Industrial Internet of Things (IIoT) lab at imec-IDLab, Ghent University. This is a 240 m2 warehouse environment containing large metal racks that create significant Non-Line-of-Sight (NLOS) conditions.
- **Anchors:** 23 UWB anchors were distributed throughout the lab.
- **Hardware:** Data was captured using Wi-PoS devices, which feature the Qorvo DW1000 UWB transceiver.
- **Ground Truth:** A high-precision Qualisys Motion Capture (MOCAP) system was used to record ground truth trajectories with millimeter-level accuracy, which is available in the `gt.csv` file for evaluation purposes.
### Experimental Procedure
- A mobile robot moved through the lab at a speed of 0.1 m/s to capture UWB data along a repeatable trajectory.
- The ranging method used was Asymmetric Double-Sided Two-Way Ranging (ADS-TWR).
- Two distinct datasets were collected six months apart. The second dataset represents a more challenging environment with additional clutter, goods in the racks, and minor disturbances to the anchor nodes, making it ideal for evaluating model generalization and adaptation.

### Dataset one

[The dataset can be downloaded here.](https://cloud.ilabt.imec.be/index.php/s/iS3RkgeHAwitPTp)
Please always refer to our publication [4] when using this dataset.
### Dataset two

[The dataset can be downloaded here.](https://cloud.ilabt.imec.be/index.php/s/QtwFWDZG6PyrNXc)
Please always refer to our publication [4] when using this dataset.
### Dataset structure
The dataset is organized into folders, each representing a separate data collection run. The key files for UWB error correction research are `processed.csv` (raw UWB data) and `gt.csv` (ground truth data from MOCAP).
#### UWB data file `processed.csv`
| Column | Description |
|------------------|-----------------------------------------------------------------------------------------------------|
| superframe | A high-level timing or sequence identifier. |
| Logtime | The timestamp of the measurement, used to synchronize with ground truth data. |
| anchor | The hexadecimal ID of the anchor node. |
| tag | The hexadecimal ID of the tag node. |
| distance | The raw distance estimation (in mm) calculated by the UWB system. |
| UWB_time | A high-resolution internal timestamp from the UWB chip. |
| channel | The UWB communication channel used. |
| bitrate | The data rate of the transmission. |
| tx_power | The transmit power setting. |
| rxpacc | Preamble Accumulation Count: The number of preamble symbols accumulated by the receiver. |
| fp_index | The index of the detected First Path in the CIR array. |
| fpampl1, fpampl3 | The amplitude of the first (F_1) and third (F_3) harmonics of the first path signal. |
| ppampl | The amplitude of the Peak Path in the CIR. |
| rx_power | Estimated total Received Power (RX_p) in dBm. |
| fp_power | Estimated First Path Power (FP_p) in dBm. |
| LDE_threshold | The Leading Edge Detection threshold used to find the first path. |
| cir | The Channel Impulse Response: A string of complex-valued numbers representing the full CIR capture. |
#### Ground truth file `gt.csv`
| Column | Description |
|---------|------------------------------------------------------------|
| Logtime | The timestamp of the ground truth measurement. |
| x, y, z | The 3D coordinates of the tag\'s position in meters. |
#### Other files
| Column | Description |
|-----------------------------------|------------------------------------------------------------------------------|
| NUC.csv | Data from the anchors host computer (NUC) containing the unfiltered raw data |
| Imu.csv | Inertial Measurement Unit data from the tag (not used in the research) |
| Processed_full.csv | Same data as processed.csv with full diagnostics |
| Dataset_information.txt | Generated text during processing |
| Figures (png) | Various plots (.png) for quick visualization of the data |
| Anchorsiiot-28march23.csv | file containing the anchor positions |
### References
[4] D. Coppens, B. Van Herbruggen, A. Shahid and E. De Poorter, "Removing the Need for Ground Truth UWB Data Collection: Self-Supervised Ranging Error Correction Using Deep Reinforcement Learning," in *IEEE Transactions on Machine Learning in Communications and Networking*, vol. 2, pp. 1615-1627, 2024.
[5] J. Fontaine, M. Ridolfi, B. Van Herbruggen, A. Shahid and E. De Poorter, "Edge Inference for UWB Ranging Error Correction Using Autoencoders," in *IEEE Access*, vol. 8, pp. 139143-139155, 2020.
[6] J. Fontaine *et al.*, "Transfer Learning for UWB Error Correction and (N)LOS Classification in Multiple Environments," in *IEEE Internet of Things Journal*, vol. 11, no. 3, pp. 4085-4101, Feb. 2024.
[7] F. Che *et al.*, "Feature-Based Generalized Gaussian Distribution Method for NLoS Detection in Ultra-Wideband (UWB) Indoor Positioning System."
## UWB CIR Dataset for TDoA Correction and Fingerprinting
This dataset contains a collection of UWB datasets gathered in a complex industrial environment. The datasets include CIR captures, UWB radio diagnostics, and high-precision ground truth trajectories, making them suitable for developing and evaluating algorithms for both Time Difference of Arrival (TDoA) error correction and CIR-based fingerprinting.
### Dataset environment
- **Location:** The Industrial Internet of Things (IIoT) lab at imec-IDLab, Ghent University. This is a 240 m² warehouse environment containing large metal racks that create significant Non-Line-of-Sight (NLOS) conditions.
- **Anchors:** 23 UWB anchors were distributed throughout the lab.
- **Hardware:** Data was captured using Wi-PoS devices, which feature the Qorvo DW1000 UWB transceiver.
- **Ground Truth:** A high-precision Qualisys Motion Capture (MOCAP) system was used to record ground truth trajectories with millimeter-level accuracy, which is available in the `gt.csv` file for evaluation purposes.
### Datasets
The repository contains data from several distinct trajectories, captured with a mobile robot. These trajectories can be used for different training and evaluation purposes.





| Name | #anchors | #samples | Environment | Link |
|----------------|----------------|-----------------|-----------------|---------------------------------------------------------------------|
| Racks | 15 | 15505 | NLOS | [Download](https://cloud.ilabt.imec.be/index.php/s/7FpmH2WqGNYdjcE) |
| Tour | 8 | 7697 | LOS | [Download](https://cloud.ilabt.imec.be/index.php/s/ff5ZWdwk42Gb7XW) |
| Random | 8 | 3268 | LOS | [Download](https://cloud.ilabt.imec.be/index.php/s/XAmg5spW2As3toD) |
| Grid | 8 | 6114 | LOS | [Download](https://cloud.ilabt.imec.be/index.php/s/JbKiBfoLwqyqAxD) |
| Racks 2 | 15 | 7123 | NLOS | [Download](https://cloud.ilabt.imec.be/index.php/s/FsRTtbfmwcfYWtA) |
Please always refer to our publication [9] when using this dataset.
### Dataset Structure
Each data collection run is organized in its own folder. The key files are:
- `processed.csv` / `processed_full.csv`: These contain the raw, per-packet UWB diagnostic data. `processed.csv` is a subset of the columns from `processed_full.csv`.
- `gt.csv`: Contains the synchronized ground truth position of the mobile tag from the MOCAP system.
- Other files: You may find `imu.csv` (Inertial Measurement Unit data), `NUC.csv` (log data from the host computers), and `.png` files for quick visualization.
- `Anchorsiiot-28march23.csv`: File containing the anchor positions.
#### Timestamps and synchronization
| Column | Description |
|-----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Time | A general timestamp from the logging system. |
| header.seq | The sequence number from the UWB packet header. |
| header.frame_id | The frame ID from the UWB packet header. |
| Logtime | Primary synchronization timestamp, used to merge with gt.csv. |
| data | The payload data from the UWB packet. |
| UWB_time | High-resolution internal timestamp from the UWB chip. |
| ts | Another timestamp field, related to packet reception. |
| superframe | Superframe identifier. A superframe corresponds to one transmission by the tag, received by multiple anchors; using this identifier, data recorded at multiple anchors for the same transmission can be combined. |
| sync | Timestamp related to the TDoA synchronization signal. |
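Since every row of a superframe stems from the same tag transmission, TDoA processing starts by grouping receptions per superframe. A minimal sketch of that grouping (the anchor ID column is named `Anchor` or `anchor_id` depending on the file, as noted in the diagnostics table below):
```python
import pandas as pd

df = pd.read_csv("run_01/processed.csv")  # placeholder path

# Each superframe groups one tag transmission as received by several
# anchors; TDoA needs at least two receptions of the same transmission
for superframe, group in df.groupby("superframe"):
    if len(group) < 2:
        continue
    # Pairwise differences of the (clock-corrected) reception times
    # across anchors yield the TDoA measurements for this transmission
    receptions = group[["anchor_id", "UWB_time", "sync"]]
```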
#### UWB diagnostics
| Column | Description |
|--------------------------|-------------------------------------------------------------------------------------------------------------------|
| Anchor/anchor_id | The hexadecimal ID of the receiving anchor node. |
| tag | The hexadecimal ID of the transmitting tag node. |
| rxpacc | Preamble Accumulation Count: Number of preamble symbols accumulated by the receiver. |
| fp_index | The index of the detected First Path in the CIR array. |
| ppindex | The index of the Peak Path in the CIR array. |
| fpampl1, fpampl2, fpampl3 | The amplitude of the first (F_1), second (F_2), and third (F_3) harmonics of the first path signal. |
| ppampl | The amplitude of the Peak Path in the CIR. |
| cir_pwr | The power of the Channel Impulse Response. |
| cir_noise | The standard deviation of the noise (N_c) in the CIR accumulator. |
| rx_power | Estimated total Received Power (RX_p) in dBm. |
| fp_power | Estimated First Path Power (FP_p) in dBm. |
| LDE_threshold | The Leading Edge Detection threshold used to find the first path. |
| CIR | The raw Channel Impulse Response data: A string of complex-valued (I/Q) numbers representing the full CIR capture |
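These diagnostics are the inputs to the standard DW1000 power estimates: per the Decawave/Qorvo DW1000 user manual, the first path power is 10·log10((F1² + F2² + F3²) / N²) − A dBm and the total received power is 10·log10((C · 2^17) / N²) − A dBm, where N is rxpacc, C is cir_pwr, and A is a PRF-dependent constant (113.77 dB at 16 MHz PRF, 121.74 dB at 64 MHz PRF). A minimal sketch of these formulas, plus parsing the CIR string (the exact serialization of the `CIR` column is an assumption here; adapt the parser to the actual format):
```python
import numpy as np

A_64MHZ = 121.74  # dB for 64 MHz PRF; use 113.77 for 16 MHz PRF

def fp_power_dbm(fpampl1, fpampl2, fpampl3, rxpacc, a=A_64MHZ):
    """First path power estimate per the DW1000 user manual."""
    return 10 * np.log10((fpampl1**2 + fpampl2**2 + fpampl3**2) / rxpacc**2) - a

def rx_power_dbm(cir_pwr, rxpacc, a=A_64MHZ):
    """Total received power estimate per the DW1000 user manual."""
    return 10 * np.log10((cir_pwr * 2**17) / rxpacc**2) - a

def parse_cir(cir_string):
    """Parse a CIR column entry into a complex numpy array.
    Assumes comma-separated Python-style complex literals such as
    '(12+34j)'; adapt to the actual string format in the CSV files."""
    return np.array([complex(v) for v in cir_string.strip("[]").split(",")])
```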
#### References
[8] P. B. Duong *et al.*, "Error Mitigation for TDoA UWB Indoor Localization using Unsupervised Machine Learning," in *IEEE Sensors Journal*, 2024.
[9] D. Coppens, A. Shahid and E. De Poorter, "UWB TDoA Error Correction using Transformers: Patching and Positional Encoding Strategies," *arXiv preprint arXiv:2507.03523*, 2025.
[10] D. Coppens, A. Shahid and E. De Poorter, "Beyond Convolutions: Transformer Networks for Improved UWB CIR-based Fingerprinting," *2024 14th International Conference on Indoor Positioning and Indoor Navigation (IPIN)*, Kowloon, Hong Kong, 2024.
## Contact
If you need any further details about the dataset, you can contact Eli.DePoorter@UGent.be | 6 | 0 | [
"language:en",
"license:apache-2.0",
"arxiv:2507.03523",
"region:us"
] | 2025-11-07T13:51:17+00:00 | 2025-11-12T15:53:09+00:00 | 0 |
dankeg/ArxivBulkDataset | # Arxiv Bulk Dataset
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset
# ArXiv Bulk Dataset
This dataset is a bulk fetch of ArXiv articles, based on the official metadata dataset maintained and updated by Cornell [https://www.kaggle.com/datasets/Cornell-University/arxiv/data](https://www.kaggle.com/datasets/Cornell-University/arxiv/data).
This dataset was created to provide cross-domain academic training data, since existing datasets are domain-specific and manual fetches are time-intensive.
Each article includes its id (enabling lookup within the metadata dataset), the raw PDF content, and a cleaned version of the content with PDF, XML, and LaTeX artifacts removed.
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | # Arxiv Bulk Dataset
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset
# ArXiv Bulk Dataset
This dataset is a bulk fetch of ArXiv articles, based on the official metadata dataset maintained and updated by Cornell [https://www.kaggle.com/datasets/Cornell-University/arxiv/data](https://www.kaggle.com/datasets/Cornell-University/arxiv/data).
This dataset was created to provide cross-domain academic training data, since existing datasets are domain-specific and manual fetches are time-intensive.
Each article includes its id (enabling lookup within the metadata dataset), the raw PDF content, and a cleaned version of the content with PDF, XML, and LaTeX artifacts removed.
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | 16 | 1 | [
"task_categories:summarization",
"task_categories:feature-extraction",
"task_categories:text-classification",
"language:en",
"license:mit",
"size_categories:1M<n<10M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | 2025-11-09T02:57:26+00:00 | 2025-11-12T15:53:40+00:00 | 1 |
juliobellano/wristcam_ripeunripe_4 |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "so101_follower",
"total_episodes": 102,
"total_frames": 90609,
"total_tasks": 2,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 30,
"splits": {
"train": "0:102"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"action": {
"dtype": "float32",
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
],
"shape": [
6
]
},
"observation.state": {
"dtype": "float32",
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
],
"shape": [
6
]
},
"observation.images.side": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.up": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
720,
1280,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 720,
"video.width": 1280,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
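For reference, a minimal sketch of loading a dataset in this format with the LeRobot library (the import path below matches recent LeRobot releases but may differ across versions):
```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Frames combine images decoded from the AV1 videos with the float32
# action/state vectors described in meta/info.json
dataset = LeRobotDataset("juliobellano/wristcam_ripeunripe_4")
frame = dataset[0]
print(frame["action"].shape)  # expected: torch.Size([6])
```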
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "so101_follower",
"total_episodes": 102,
"total_frames": 90609,
"total_tasks": 2,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 30,
"splits": {
"train": "0:102"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"action": {
"dtype": "float32",
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
],
"shape": [
6
]
},
"observation.state": {
"dtype": "float32",
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
],
"shape": [
6
]
},
"observation.images.side": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.up": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
720,
1280,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 720,
"video.width": 1280,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] | 2025-11-12T15:18:16+00:00 | 2025-11-12T15:53:06+00:00 | 0 |
TheFactoryX/edition_0341_shi-labs-oneformer_demo-readymade |
# edition_0341_shi-labs-oneformer_demo-readymade
**A Readymade by TheFactoryX**
## Original Dataset
[shi-labs/oneformer_demo](https://huggingface.co/datasets/shi-labs/oneformer_demo)
## Process
This dataset is a "readymade" - inspired by Marcel Duchamp's concept of taking everyday objects and recontextualizing them as art.
**What we did:**
1. Selected the original dataset from Hugging Face
2. Shuffled each column independently
3. Destroyed all row-wise relationships
4. Preserved structure, removed meaning
**The result:**
Same data. Wrong order. New meaning. No meaning.
## Purpose
This is art. This is not useful. This is the point.
Column relationships have been completely destroyed. The data maintains its types and values, but all semantic meaning has been removed.
---
Part of the [Readymades](https://github.com/TheFactoryX/readymades) project by [TheFactoryX](https://github.com/TheFactoryX).
> _"I am a machine."_ — Andy Warhol
|
# edition_0341_shi-labs-oneformer_demo-readymade
**A Readymade by TheFactoryX**
## Original Dataset
[shi-labs/oneformer_demo](https://huggingface.co/datasets/shi-labs/oneformer_demo)
## Process
This dataset is a "readymade" - inspired by Marcel Duchamp's concept of taking everyday objects and recontextualizing them as art.
**What we did:**
1. Selected the original dataset from Hugging Face
2. Shuffled each column independently
3. Destroyed all row-wise relationships
4. Preserved structure, removed meaning
**The result:**
Same data. Wrong order. New meaning. No meaning.
## Purpose
This is art. This is not useful. This is the point.
Column relationships have been completely destroyed. The data maintains its types and values, but all semantic meaning has been removed.
---
Part of the [Readymades](https://github.com/TheFactoryX/readymades) project by [TheFactoryX](https://github.com/TheFactoryX).
> _"I am a machine."_ — Andy Warhol
| 0 | 0 | [
"license:other",
"region:us",
"readymades",
"art",
"shuffled",
"duchamp"
] | 2025-11-12T15:53:01+00:00 | 2025-11-12T15:53:04+00:00 | 0 |
mteb/results |
> [!NOTE]
> Previously, it was possible to submit model results to MTEB by adding them to the metadata of the model card on huggingface. However, this is no longer possible as we want to ensure that we can match the results with the model implementation. If you want to add your model, please follow the [guide](https://github.com/embeddings-benchmark/mteb/blob/main/docs/adding_a_model.md) on how to do so.
This repository contains the results of the embedding benchmark evaluated using the package `mteb`.
| Reference | |
| ------------------- | ---------------------------------------------------------------------------------------- |
| 🦾 **[Leaderboard]** | An up-to-date leaderboard of embedding models |
| 📚 **[mteb]** | Guides and instructions on how to use `mteb`, including running, submitting scores, etc. |
| 🙋 **[Questions]** | Questions about the results |
| 🙋 **[Issues]** | Issues or bugs you have found |
[Leaderboard]: https://huggingface.co/spaces/mteb/leaderboard
[mteb]: https://github.com/embeddings-benchmark/mteb
[Questions]: https://github.com/embeddings-benchmark/mteb/discussions
[Issues]: https://github.com/embeddings-benchmark/mteb/issues
|
> [!NOTE]
> Previously, it was possible to submit model results to MTEB by adding them to the metadata of the model card on huggingface. However, this is no longer possible as we want to ensure that we can match the results with the model implementation. If you want to add your model, please follow the [guide](https://github.com/embeddings-benchmark/mteb/blob/main/docs/adding_a_model.md) on how to do so.
This repository contains the results of the embedding benchmark evaluated using the package `mteb`.
| Reference | |
| ------------------- | ---------------------------------------------------------------------------------------- |
| 🦾 **[Leaderboard]** | An up-to-date leaderboard of embedding models |
| 📚 **[mteb]** | Guides and instructions on how to use `mteb`, including running, submitting scores, etc. |
| 🙋 **[Questions]** | Questions about the results |
| 🙋 **[Issues]** | Issues or bugs you have found |
[Leaderboard]: https://huggingface.co/spaces/mteb/leaderboard
[mteb]: https://github.com/embeddings-benchmark/mteb
[Questions]: https://github.com/embeddings-benchmark/mteb/discussions
[Issues]: https://github.com/embeddings-benchmark/mteb/issues
| 5,463 | 1 | [
"benchmark:mteb",
"region:us"
] | 2024-07-06T20:19:19+00:00 | 2025-11-12T15:53:28+00:00 | 0 |
pr0tos/so101_put_rc_pb |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "so101_follower",
"total_episodes": 0,
"total_frames": 0,
"total_tasks": 0,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 30,
"splits": {},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"action": {
"dtype": "float32",
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
],
"shape": [
6
]
},
"observation.state": {
"dtype": "float32",
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
],
"shape": [
6
]
},
"observation.images.up": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
]
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "so101_follower",
"total_episodes": 0,
"total_frames": 0,
"total_tasks": 0,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 30,
"splits": {},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"action": {
"dtype": "float32",
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
],
"shape": [
6
]
},
"observation.state": {
"dtype": "float32",
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
],
"shape": [
6
]
},
"observation.images.up": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
]
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] | 2025-11-12T15:43:46+00:00 | 2025-11-12T15:43:49+00:00 | 0 |
Arururu12/flir_camera_record_test |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "ur5e_follower",
"total_episodes": 2,
"total_frames": 210,
"total_tasks": 1,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"action": {
"dtype": "float32",
"names": [
"joint_1.pos",
"joint_2.pos",
"joint_3.pos",
"joint_4.pos",
"joint_5.pos",
"joint_6.pos",
"gripper.pos"
],
"shape": [
7
]
},
"observation.state": {
"dtype": "float32",
"names": [
"joint_1.pos",
"joint_2.pos",
"joint_3.pos",
"joint_4.pos",
"joint_5.pos",
"joint_6.pos",
"gripper.pos"
],
"shape": [
7
]
},
"observation.images.wrist_camera": {
"dtype": "video",
"shape": [
256,
320,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 256,
"video.width": 320,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "ur5e_follower",
"total_episodes": 2,
"total_frames": 210,
"total_tasks": 1,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"action": {
"dtype": "float32",
"names": [
"joint_1.pos",
"joint_2.pos",
"joint_3.pos",
"joint_4.pos",
"joint_5.pos",
"joint_6.pos",
"gripper.pos"
],
"shape": [
7
]
},
"observation.state": {
"dtype": "float32",
"names": [
"joint_1.pos",
"joint_2.pos",
"joint_3.pos",
"joint_4.pos",
"joint_5.pos",
"joint_6.pos",
"gripper.pos"
],
"shape": [
7
]
},
"observation.images.wrist_camera": {
"dtype": "video",
"shape": [
256,
320,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 256,
"video.width": 320,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] | 2025-11-12T15:43:14+00:00 | 2025-11-12T15:43:20+00:00 | 0 |
phicoltan/pusht_image_renamed |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "unknown",
"total_episodes": 206,
"total_frames": 25650,
"total_tasks": 1,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 10,
"splits": {
"train": "0:206"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": null,
"features": {
"action": {
"dtype": "float32",
"shape": [
2
],
"names": {
"motors": [
"motor_0",
"motor_1"
]
},
"fps": 10.0
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.reward": {
"dtype": "float32",
"shape": [
1
],
"names": null,
"fps": 10.0
},
"next.done": {
"dtype": "bool",
"shape": [
1
],
"names": null,
"fps": 10.0
},
"next.success": {
"dtype": "bool",
"shape": [
1
],
"names": null,
"fps": 10.0
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"observation.images.camera": {
"dtype": "image",
"shape": [
96,
96,
3
],
"names": [
"height",
"width",
"channel"
],
"fps": 10.0
},
"renamed_state": {
"dtype": "float32",
"shape": [
2
],
"names": {
"motors": [
"motor_0",
"motor_1"
]
},
"fps": 10.0
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "unknown",
"total_episodes": 206,
"total_frames": 25650,
"total_tasks": 1,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 10,
"splits": {
"train": "0:206"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": null,
"features": {
"action": {
"dtype": "float32",
"shape": [
2
],
"names": {
"motors": [
"motor_0",
"motor_1"
]
},
"fps": 10.0
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.reward": {
"dtype": "float32",
"shape": [
1
],
"names": null,
"fps": 10.0
},
"next.done": {
"dtype": "bool",
"shape": [
1
],
"names": null,
"fps": 10.0
},
"next.success": {
"dtype": "bool",
"shape": [
1
],
"names": null,
"fps": 10.0
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"observation.images.camera": {
"dtype": "image",
"shape": [
96,
96,
3
],
"names": [
"height",
"width",
"channel"
],
"fps": 10.0
},
"renamed_state": {
"dtype": "float32",
"shape": [
2
],
"names": {
"motors": [
"motor_0",
"motor_1"
]
},
"fps": 10.0
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] | 2025-11-12T15:42:32+00:00 | 2025-11-12T15:42:35+00:00 | 0 |
ekacare/Eka-IndicMTEB |
# Eka-IndicMTEB
## Dataset Description
**Eka-IndicMTEB** is an evaluation dataset of Indian multilingual medical terms, designed to evaluate embedding models on medical terminology across multiple Indic languages and scripts. It contains 2,532 doctor-verified queries, capturing the linguistic and domain-specific diversity of the Indian healthcare ecosystem.
The dataset includes medical entities spanning **symptoms**, **diagnoses**, **procedures**, **medications**, and related concepts, enriched with real-world linguistic variations such as **spelling errors**, **special characters**, **abbreviations**, and **colloquial expressions**.
The dataset covers multiple languages, including English, Hindi, Bengali, Tamil, Telugu, Kannada, Marathi, and Malayalam.
## Dataset Creation
This dataset was curated by our internal medical professionals to ensure clinical accuracy and linguistic diversity. Each query was manually reviewed and annotated with its corresponding SNOMED CT identifier, ensuring concept-level alignment across languages.
## Why This Matters
Eka-IndicMTEB addresses a critical gap in multilingual medical AI evaluation by offering:
- **A Shared Evaluation Framework**: Researchers can now benchmark multilingual medical embeddings against a standardized, clinically-validated dataset spanning multiple Indian languages.
- **Insight into Model Strengths and Weaknesses**: The benchmark systematically reveals how models handle India's linguistic diversity, identifying specific failure modes and success patterns across different language families and medical domains.
- **Guidance for Model Development**: Performance analysis across varied query types provides actionable insights for targeted model improvements.
## Applications
This benchmark is invaluable for researchers developing cross-lingual medical information retrieval systems, and AI teams building multilingual clinical decision support tools. Healthcare organizations deploying language-agnostic medical chatbots or semantic search systems will find this dataset essential for validating performance across India's diverse linguistic landscape. Academic institutions working on low-resource medical NLP can leverage this benchmark to identify gaps and measure progress in Indian language healthcare AI.
## Dataset Structure
The dataset contains three subsets:
- **queries**: This subset contains all the multi-lingual, multi-script queries. Each example is also tagged with its language, script, and an `is_abbreviation` boolean for error analysis. <br>
- **qrels**: This subset maps queries to the corpus; this mapping establishes the ground-truth relationship between each query and the search corpus. <br>
- **corpus**: This subset is the search space used in indexing for retrieval evaluation. We have included terms from SNOMED CT (version: snomedct_internationalrf2_production_20250401). <br>
### Usage
Load specific subset and split:
```python
from datasets import load_dataset
# Load specific subset and split
dataset = load_dataset('ekacare/Eka-IndicMTEB', 'corpus', split='test')
# Load all splits from a subset
dataset = load_dataset('ekacare/Eka-IndicMTEB', 'corpus')
# Load everything
dataset = load_dataset('ekacare/Eka-IndicMTEB')
```
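A minimal retrieval-evaluation sketch on top of these subsets, assuming a sentence-transformers encoder; the `text` field name and the qrels id fields are assumptions here, not the documented schema:
```python
import numpy as np
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

queries = load_dataset('ekacare/Eka-IndicMTEB', 'queries', split='test')
corpus = load_dataset('ekacare/Eka-IndicMTEB', 'corpus', split='test')

# Any multilingual encoder can be benchmarked this way
model = SentenceTransformer('intfloat/multilingual-e5-base')
q_emb = model.encode(queries['text'], normalize_embeddings=True)  # field name assumed
c_emb = model.encode(corpus['text'], normalize_embeddings=True)

# Normalized embeddings make the dot product a cosine similarity
scores = q_emb @ c_emb.T
top10 = np.argsort(-scores, axis=1)[:, :10]
# Compare the retrieved corpus entries against the qrels subset to
# compute metrics such as recall@10
```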
## Contributors / Annotators list
Dr Sanjana SN <br>
Dr Anushree Rana <br>
Dr Rajshree Badami <br>
## License
This dataset is released under the MIT License, enabling broad use while maintaining attribution requirements. |
# Eka-IndicMTEB
## Dataset Description
**Eka-IndicMTEB** is an evaluation dataset of Indian multilingual medical terms, designed to evaluate embedding models on medical terminology across multiple Indic languages and scripts. It contains 2,532 doctor-verified queries, capturing the linguistic and domain-specific diversity of the Indian healthcare ecosystem.
The dataset includes medical entities spanning **symptoms**, **diagnoses**, **procedures**, **medications**, and related concepts, enriched with real-world linguistic variations such as **spelling errors**, **special characters**, **abbreviations**, and **colloquial expressions**.
The dataset covers multiple languages, including English, Hindi, Bengali, Tamil, Telugu, Kannada, Marathi, and Malayalam.
## Dataset Creation
This dataset was curated by our internal medical professionals to ensure clinical accuracy and linguistic diversity. Each query was manually reviewed and annotated with its corresponding SNOMED CT identifier, ensuring concept-level alignment across languages.
## Why This Matters
Eka-IndicMTEB addresses a critical gap in multilingual medical AI evaluation by offering:
- **A Shared Evaluation Framework**: Researchers can now benchmark multilingual medical embeddings against a standardized, clinically-validated dataset spanning multiple Indian languages.
- **Insight into Model Strengths and Weaknesses**: The benchmark systematically reveals how models handle India's linguistic diversity, identifying specific failure modes and success patterns across different language families and medical domains.
- **Guidance for Model Development**: Performance analysis across varied query types provides actionable insights for targeted model improvements.
## Applications
This benchmark is invaluable for researchers developing cross-lingual medical information retrieval systems, and AI teams building multilingual clinical decision support tools. Healthcare organizations deploying language-agnostic medical chatbots or semantic search systems will find this dataset essential for validating performance across India's diverse linguistic landscape. Academic institutions working on low-resource medical NLP can leverage this benchmark to identify gaps and measure progress in Indian language healthcare AI.
## Dataset Structure
The dataset contains three subsets:
- **queries**: This subset contains all the multi-lingual, multi-script queries. Each example is also tagged with its language, script, and an `is_abbreviation` boolean for error analysis. <br>
- **qrels**: This subset maps queries to the corpus; this mapping establishes the ground-truth relationship between each query and the search corpus. <br>
- **corpus**: This subset is the search space used in indexing for retrieval evaluation. We have included terms from SNOMED CT (version: snomedct_internationalrf2_production_20250401). <br>
### Usage
Load specific subset and split:
```python
from datasets import load_dataset
# Load specific subset and split
dataset = load_dataset('ekacare/Eka-IndicMTEB', 'corpus', split='test')
# Load all splits from a subset
dataset = load_dataset('ekacare/Eka-IndicMTEB', 'corpus')
# Load everything
dataset = load_dataset('ekacare/Eka-IndicMTEB')
```
## Contributors / Annotators list
Dr Sanjana SN <br>
Dr Anushree Rana <br>
Dr Rajshree Badami <br>
## License
This dataset is released under the MIT License, enabling broad use while maintaining attribution requirements. | 58 | 0 | [
"task_categories:text-classification",
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"dataset"
] | 2025-10-31T13:25:27+00:00 | 2025-11-12T15:40:43+00:00 | 0 |
omerbasik1/heart-disease-eda | ---
license: mit
---
Heart Disease Prediction Analysis and Preprocessing
Introduction, Data Source, and Project Goal
This project presents an Exploratory Data Analysis (EDA) and strategic data preparation for a heart disease prediction dataset. The dataset, sourced from Kaggle, contains over 1,000 patient records with 14 numeric health-related features such as age, cholesterol, blood pressure, and maximum heart rate. The main challenge addressed in this dataset is identifying which medical and demographic factors are most predictive of heart disease. The goal is to prepare and analyze the data for a classification model capable of predicting whether a patient is at risk of heart disease (Class 1) or not (Class 0).
Data Cleaning and Preprocessing
Initial cleaning confirmed that the dataset contains no missing or duplicate records, and that all values fall within realistic medical ranges. Outlier detection revealed extreme values in cholesterol and resting blood pressure, but these were retained to preserve real-world clinical variation. Descriptive statistics were calculated to understand the data distribution, and correlation analysis was performed to identify relationships between features and the target variable. The strongest predictors identified were chest pain type (cp), maximum heart rate achieved (thalach), and ST depression (oldpeak). These features showed significant correlation with heart disease presence. Weaker relationships were found in fasting blood sugar (fbs) and cholesterol (chol).
Key EDA Insights and Findings
Visual analysis revealed several important patterns relevant to heart disease prediction.
Age Pattern: The likelihood of heart disease increases significantly after the age of 45, reflecting higher cardiovascular risk in older populations.
Gender Pattern: Men exhibit a higher prevalence of heart disease compared to women.
Cholesterol Pattern: Although cholesterol levels are slightly higher among patients with heart disease, the overlap between healthy and affected groups is large, indicating limited standalone predictive power.
Heart Rate Pattern: Patients without heart disease tend to achieve higher maximum heart rates, confirming the strong negative relationship between thalach and the target variable.
Correlations: The heatmap analysis reinforced that cp, thalach, and oldpeak are the most influential features contributing to prediction strength.
Baseline Model Strategy
Following the EDA and preprocessing steps, the dataset is clean and balanced, making it suitable for training classification models. Logistic Regression is recommended as a baseline model, with the focus on achieving high Recall to ensure that as many at-risk patients as possible are correctly identified. Additional strategies could include feature scaling, regularization, and cross-validation to improve model stability and performance.
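A minimal sketch of that baseline with scikit-learn, assuming the standard Kaggle heart-disease column layout with a binary target column (file and column names are assumptions):
```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("heart.csv")  # placeholder filename
X, y = df.drop(columns=["target"]), df["target"]

# Feature scaling plus L2-regularized logistic regression, scored on
# recall so that as few at-risk patients as possible are missed
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
recall = cross_val_score(model, X, y, cv=5, scoring="recall")
print(f"Mean 5-fold recall: {recall.mean():.3f}")
```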
Video link to my EDA presentation -
https://drive.google.com/file/d/1B2H0iHAO2Xjw1qtnAA-Ug8P6X7GlAXic/view?usp=drivesdk
| ---
license: mit
---
Heart Disease Prediction Analysis and Preprocessing
Introduction, Data Source, and Project Goal
This project presents an Exploratory Data Analysis (EDA) and strategic data preparation for a heart disease prediction dataset. The dataset, sourced from Kaggle, contains over 1,000 patient records with 14 numeric health-related features such as age, cholesterol, blood pressure, and maximum heart rate. The main challenge addressed in this dataset is identifying which medical and demographic factors are most predictive of heart disease. The goal is to prepare and analyze the data for a classification model capable of predicting whether a patient is at risk of heart disease (Class 1) or not (Class 0).
Data Cleaning and Preprocessing
Initial cleaning confirmed that the dataset contains no missing or duplicate records, and that all values fall within realistic medical ranges. Outlier detection revealed extreme values in cholesterol and resting blood pressure, but these were retained to preserve real-world clinical variation. Descriptive statistics were calculated to understand the data distribution, and correlation analysis was performed to identify relationships between features and the target variable. The strongest predictors identified were chest pain type (cp), maximum heart rate achieved (thalach), and ST depression (oldpeak). These features showed significant correlation with heart disease presence. Weaker relationships were found in fasting blood sugar (fbs) and cholesterol (chol).
Key EDA Insights and Findings
Visual analysis revealed several important patterns relevant to heart disease prediction.
Age Pattern: The likelihood of heart disease increases significantly after the age of 45, reflecting higher cardiovascular risk in older populations.
Gender Pattern: Men exhibit a higher prevalence of heart disease compared to women.
Cholesterol Pattern: Although cholesterol levels are slightly higher among patients with heart disease, the overlap between healthy and affected groups is large, indicating limited standalone predictive power.
Heart Rate Pattern: Patients without heart disease tend to achieve higher maximum heart rates, confirming the strong negative relationship between thalach and the target variable.
Correlations: The heatmap analysis reinforced that cp, thalach, and oldpeak are the most influential features contributing to prediction strength.
Baseline Model Strategy
Following the EDA and preprocessing steps, the dataset is clean and balanced, making it suitable for training classification models. Logistic Regression is recommended as a baseline model, with the focus on achieving high Recall to ensure that as many at-risk patients as possible are correctly identified. Additional strategies could include feature scaling, regularization, and cross-validation to improve model stability and performance.
Video link to my EDA presentation -
https://drive.google.com/file/d/1B2H0iHAO2Xjw1qtnAA-Ug8P6X7GlAXic/view?usp=drivesdk
| 0 | 0 | [
"region:us"
] | 2025-11-12T12:41:19+00:00 | 2025-11-12T15:39:09+00:00 | 0 |
wanglab/PerturbQA-language | ## Author
Ivy Liu (Arc/Goodarzi)
## About
Natural language QA based on the [perturbQA](https://github.com/Genentech/PerturbQA/tree/main) dataset, with questions about differential expression (DE), direction of change (DIR), and gene-set responses to perturbation (GSE). | ## Author
Ivy Liu (Arc/Goodarzi)
## About
Natural language QA based on the [perturbQA](https://github.com/Genentech/PerturbQA/tree/main) dataset, with questions about differential expression (DE), direction of change (DIR), and gene-set responses to perturbation (GSE). | 4 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-11-11T23:12:26+00:00 | 2025-11-12T15:38:56+00:00 | 0 |
piebro/deutsche-bahn-data |
# Deutsche Bahn Train Data
This dataset contains public historical data from Deutsche Bahn, the largest German train company. It includes train schedules, delays, and cancellations from stations across Germany.
For more info visit the project page at GitHub: https://github.com/piebro/deutsche-bahn-data
## Dataset Structure
### Monthly Processed Data
The monthly processed data is located in `monthly_processed_data/` and contains files named `data-YYYY-MM.parquet`.
**Schema:**
| Column | Type | Description |
|--------|------|-------------|
| `station_name` | string | Name of the station |
| `xml_station_name` | string | Station name from the XML response |
| `eva` | string | EVA station number (unique identifier) |
| `train_name` | string | Name of the train (e.g., "ICE 123", "RE 5") |
| `final_destination_station` | string | Final destination of the train |
| `delay_in_min` | integer | Delay in minutes |
| `time` | timestamp | Actual arrival or departure time |
| `is_canceled` | boolean | Whether the train stop was canceled |
| `train_type` | string | Type of train (e.g., "ICE", "IC", "RE") |
| `train_line_ride_id` | string | Unique identifier for the train ride |
| `train_line_station_num` | integer | Station number in the train's route |
| `arrival_planned_time` | timestamp | Planned arrival time |
| `arrival_change_time` | timestamp | Actual/changed arrival time |
| `departure_planned_time` | timestamp | Planned departure time |
| `departure_change_time` | timestamp | Actual/changed departure time |
| `id` | string | Unique identifier for the train stop |
### Raw Data
The raw data is located in `raw_data/` and is partitioned by `year={year}/month={month}/day={day}/`. Each partition contains multiple parquet files with hourly data.
**Schema:**
| Column | Type | Description |
|--------|------|-------------|
| `timestamp` | timestamp | When the API request was made |
| `url` | string | The API endpoint URL that was queried |
| `api_name` | string | Name of the API (e.g., "timetables/v1/plan", "timetables/v1/fchg") |
| `query_params` | string | JSON string of query parameters used |
| `response_data` | string | Raw XML or JSON response from the API |
| `status_code` | string | HTTP status code of the response |
| `error` | string | Error message if the request failed |
| `duration_ms` | float | Request duration in milliseconds |
| `year` | integer | Year of the request (partition key) |
| `month` | integer | Month of the request (partition key) |
| `day` | integer | Day of the request (partition key) |
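A minimal sketch of querying one month of the processed data straight from the Hub (the file name below is a placeholder; reading `hf://` paths with pandas requires `huggingface_hub` to be installed):
```python
import pandas as pd

# Pick an existing YYYY-MM file from monthly_processed_data/
path = "hf://datasets/piebro/deutsche-bahn-data/monthly_processed_data/data-2025-01.parquet"
df = pd.read_parquet(path)

# Average delay and cancellation rate per train type
stats = df.groupby("train_type").agg(
    mean_delay_min=("delay_in_min", "mean"),
    cancel_rate=("is_canceled", "mean"),
    stops=("id", "count"),
).sort_values("mean_delay_min", ascending=False)
print(stats.head())
```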
## License
The dataset is licensed under [Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) by Deutsche Bahn.
## Acknowledgments
Data sourced from Deutsche Bahn's public APIs. Special thanks to Deutsche Bahn for providing open access to this data. |
# Deutsche Bahn Train Data
This dataset contains public historical data from Deutsche Bahn, the largest German train company. It includes train schedules, delays, and cancellations from stations across Germany.
For more info visit the project page at GitHub: https://github.com/piebro/deutsche-bahn-data
## Dataset Structure
### Monthly Processed Data
The monthly processed data is located in `monthly_processed_data/` and contains files named `data-YYYY-MM.parquet`.
**Schema:**
| Column | Type | Description |
|--------|------|-------------|
| `station_name` | string | Name of the station |
| `xml_station_name` | string | Station name from the XML response |
| `eva` | string | EVA station number (unique identifier) |
| `train_name` | string | Name of the train (e.g., "ICE 123", "RE 5") |
| `final_destination_station` | string | Final destination of the train |
| `delay_in_min` | integer | Delay in minutes |
| `time` | timestamp | Actual arrival or departure time |
| `is_canceled` | boolean | Whether the train stop was canceled |
| `train_type` | string | Type of train (e.g., "ICE", "IC", "RE") |
| `train_line_ride_id` | string | Unique identifier for the train ride |
| `train_line_station_num` | integer | Station number in the train's route |
| `arrival_planned_time` | timestamp | Planned arrival time |
| `arrival_change_time` | timestamp | Actual/changed arrival time |
| `departure_planned_time` | timestamp | Planned departure time |
| `departure_change_time` | timestamp | Actual/changed departure time |
| `id` | string | Unique identifier for the train stop |
### Raw Data
The raw data is located in `raw_data/` and is partitioned by `year={year}/month={month}/day={day}/`. Each partition contains multiple parquet files with hourly data.
**Schema:**
| Column | Type | Description |
|--------|------|-------------|
| `timestamp` | timestamp | When the API request was made |
| `url` | string | The API endpoint URL that was queried |
| `api_name` | string | Name of the API (e.g., "timetables/v1/plan", "timetables/v1/fchg") |
| `query_params` | string | JSON string of query parameters used |
| `response_data` | string | Raw XML or JSON response from the API |
| `status_code` | string | HTTP status code of the response |
| `error` | string | Error message if the request failed |
| `duration_ms` | float | Request duration in milliseconds |
| `year` | integer | Year of the request (partition key) |
| `month` | integer | Month of the request (partition key) |
| `day` | integer | Day of the request (partition key) |
## License
The dataset is licensed under [Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) by Deutsche Bahn.
## Acknowledgments
Data sourced from Deutsche Bahn's public APIs. Special thanks to Deutsche Bahn for providing open access to this data. | 189 | 0 | [
"task_categories:time-series-forecasting",
"task_categories:tabular-regression",
"license:cc-by-4.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"transportation",
"trains",
"germany",
"deutsche-bahn",
"delays",
"timetables"
] | 2025-10-19T09:10:16+00:00 | 2025-11-12T15:38:20+00:00 | 0 |
devingulliver/common-culture |
This contains an English subset of "Open Culture," originally released as part of [PleIAs/common_corpus](https://huggingface.co/datasets/PleIAs/common_corpus).
More specifically, it includes English-PD, US-PD-Books, and US-PD-Newspapers, totaling around 450B raw tokens.
No additional cleaning or filtering has been performed. A good first step could be cleaning with [OCRonos](https://huggingface.co/PleIAs/OCRonos).
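Given the size of the corpus and the ongoing upload, streaming is a reasonable way to inspect the data before committing to a full download. A minimal sketch with the `datasets` library, assuming a standard `train` split:

```python
from datasets import load_dataset

# Stream records instead of downloading all chunks up front.
# The "train" split name is an assumption, not confirmed by the card.
stream = load_dataset("devingulliver/common-culture", split="train", streaming=True)

# Peek at the first few records.
for i, example in enumerate(stream):
    print(example)
    if i >= 2:
        break
```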
The data is still in the process of being uploaded in chunks. It should be finished in the coming weeks. |
This contains an English subset of "Open Culture," originally released as part of [PleIAs/common_corpus](https://huggingface.co/datasets/PleIAs/common_corpus).
More specifically, it includes English-PD, US-PD-Books, and US-PD-Newspapers, totaling around 450B raw tokens.
No additional cleaning or filtering has been performed. A good first step could be cleaning with [OCRonos](https://huggingface.co/PleIAs/OCRonos).
The data is still in the process of being uploaded in chunks. It should be finished in the coming weeks. | 41 | 0 | [
"task_categories:text-generation",
"language:en",
"license:cc0-1.0",
"size_categories:10M<n<100M",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | 2025-10-31T23:24:48+00:00 | 2025-11-12T15:37:14+00:00 | 0 |
taesiri/imagenet-hard-4K | # Dataset Card for "Imagenet-Hard-4K"
[Project Page](https://taesiri.github.io/ZoomIsAllYouNeed/) - [Paper](https://arxiv.org/abs/2304.05538) - [Github](https://github.com/taesiri/ZoomIsAllYouNeed)
**ImageNet-Hard-4K** is a 4K version of the original [**ImageNet-Hard**](https://huggingface.co/datasets/taesiri/imagenet-hard) dataset, a benchmark that comprises 10,980 images collected from various existing ImageNet-scale benchmarks (ImageNet, ImageNet-V2, ImageNet-Sketch, ImageNet-C, ImageNet-R, ImageNet-ReaL, ImageNet-A, and ObjectNet). This dataset poses a significant challenge to state-of-the-art vision models as merely zooming in often fails to improve their ability to classify images correctly. As a result, even the most advanced models, such as `CLIP-ViT-L/14@336px`, struggle to perform well on this dataset, achieving a mere `2.02%` accuracy.
## Upscaling Procedure
We employed [GigaGAN](https://mingukkang.github.io/GigaGAN/) to upscale each image from the original ImageNet-Hard dataset to a resolution of 4K.
### Dataset Distribution

### Classifiers Performance
| Model | Accuracy |
| ------------------- | -------- |
| AlexNet | 7.08 |
| VGG-16 | 11.32 |
| ResNet-18 | 10.42 |
| ResNet-50 | 13.93 |
| ViT-B/32 | 18.12 |
| EfficientNet-B0 | 12.94 |
| EfficientNet-B7 | 18.67 |
| EfficientNet-L2-Ns | 28.42 |
| CLIP-ViT-L/14@224px | 1.81 |
| CLIP-ViT-L/14@336px | 1.88 |
| OpenCLIP-ViT-bigG-14| 14.33 |
| OpenCLIP-ViT-L-14 | 13.04 |
**Evaluation Code**
* CLIP <a target="_blank" href="https://colab.research.google.com/github/taesiri/ZoomIsAllYouNeed/blob/main/src/ImageNet_Hard/Prompt_Engineering_for_ImageNet_Hard.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a>
* Other models <a target="_blank" href="https://colab.research.google.com/github/taesiri/ZoomIsAllYouNeed/blob/main/src/ImageNet_Hard/Benchmark_ImageNet_Hard.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a>
## Supported Tasks
- `image-classification`: The objective of this task is to classify an image into one or more classes, selected from 1000 ImageNet categories (allowing for multiple ground-truth labels per image).
## Languages
The `english_label` field in the dataset is in English.
## Dataset Structure
### Data Instances
An example looks like this:
```python
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=575x409 at 0x7F09456B53A0>,
'label': [0],
'origin': 'imagenet_sketch',
'english_label': ['tench']
}
```
### Data Fields
The data instances have the following fields:
- image: A `PIL.Image.Image` object containing the image. Note that accessing the image column (i.e., `dataset[0]["image"]`) automatically decodes the image file, and decoding a large number of image files can take a significant amount of time. It is therefore important to query the sample index before the `"image"` column: `dataset[0]["image"]` should always be preferred over `dataset["image"][0]` (see the snippet after this list).
- label: A `List[int]` collection containing the ground-truth ids.
- origin: A string containing the source dataset.
- english_label: A `List[str]` collection containing the English labels for the ground-truth classes.
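To make the access-order advice above concrete, here is a minimal sketch using the `datasets` library. The `validation` split name is an assumption based on the validation-only note under Data Splits.

```python
from datasets import load_dataset

# The "validation" split name is an assumption (the card says validation-only).
dataset = load_dataset("taesiri/imagenet-hard-4K", split="validation")

# Index the row first: only this sample's image is decoded.
sample = dataset[0]
print(sample["origin"], sample["english_label"])

# Avoid dataset["image"][0]: it would decode every image in the column first.
```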
<details>
<summary>
Click here to see the full list of ImageNet class labels mapping:
</summary>
|id|Class|
|--|-----|
|0 | tench, Tinca tinca|
|1 | goldfish, Carassius auratus|
|2 | great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias|
|3 | tiger shark, Galeocerdo cuvieri|
|4 | hammerhead, hammerhead shark|
|5 | electric ray, crampfish, numbfish, torpedo|
|6 | stingray|
|7 | cock|
|8 | hen|
|9 | ostrich, Struthio camelus|
|10 | brambling, Fringilla montifringilla|
|11 | goldfinch, Carduelis carduelis|
|12 | house finch, linnet, Carpodacus mexicanus|
|13 | junco, snowbird|
|14 | indigo bunting, indigo finch, indigo bird, Passerina cyanea|
|15 | robin, American robin, Turdus migratorius|
|16 | bulbul|
|17 | jay|
|18 | magpie|
|19 | chickadee|
|20 | water ouzel, dipper|
|21 | kite|
|22 | bald eagle, American eagle, Haliaeetus leucocephalus|
|23 | vulture|
|24 | great grey owl, great gray owl, Strix nebulosa|
|25 | European fire salamander, Salamandra salamandra|
|26 | common newt, Triturus vulgaris|
|27 | eft|
|28 | spotted salamander, Ambystoma maculatum|
|29 | axolotl, mud puppy, Ambystoma mexicanum|
|30 | bullfrog, Rana catesbeiana|
|31 | tree frog, tree-frog|
|32 | tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui|
|33 | loggerhead, loggerhead turtle, Caretta caretta|
|34 | leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea|
|35 | mud turtle|
|36 | terrapin|
|37 | box turtle, box tortoise|
|38 | banded gecko|
|39 | common iguana, iguana, Iguana iguana|
|40 | American chameleon, anole, Anolis carolinensis|
|41 | whiptail, whiptail lizard|
|42 | agama|
|43 | frilled lizard, Chlamydosaurus kingi|
|44 | alligator lizard|
|45 | Gila monster, Heloderma suspectum|
|46 | green lizard, Lacerta viridis|
|47 | African chameleon, Chamaeleo chamaeleon|
|48 | Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis|
|49 | African crocodile, Nile crocodile, Crocodylus niloticus|
|50 | American alligator, Alligator mississipiensis|
|51 | triceratops|
|52 | thunder snake, worm snake, Carphophis amoenus|
|53 | ringneck snake, ring-necked snake, ring snake|
|54 | hognose snake, puff adder, sand viper|
|55 | green snake, grass snake|
|56 | king snake, kingsnake|
|57 | garter snake, grass snake|
|58 | water snake|
|59 | vine snake|
|60 | night snake, Hypsiglena torquata|
|61 | boa constrictor, Constrictor constrictor|
|62 | rock python, rock snake, Python sebae|
|63 | Indian cobra, Naja naja|
|64 | green mamba|
|65 | sea snake|
|66 | horned viper, cerastes, sand viper, horned asp, Cerastes cornutus|
|67 | diamondback, diamondback rattlesnake, Crotalus adamanteus|
|68 | sidewinder, horned rattlesnake, Crotalus cerastes|
|69 | trilobite|
|70 | harvestman, daddy longlegs, Phalangium opilio|
|71 | scorpion|
|72 | black and gold garden spider, Argiope aurantia|
|73 | barn spider, Araneus cavaticus|
|74 | garden spider, Aranea diademata|
|75 | black widow, Latrodectus mactans|
|76 | tarantula|
|77 | wolf spider, hunting spider|
|78 | tick|
|79 | centipede|
|80 | black grouse|
|81 | ptarmigan|
|82 | ruffed grouse, partridge, Bonasa umbellus|
|83 | prairie chicken, prairie grouse, prairie fowl|
|84 | peacock|
|85 | quail|
|86 | partridge|
|87 | African grey, African gray, Psittacus erithacus|
|88 | macaw|
|89 | sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita|
|90 | lorikeet|
|91 | coucal|
|92 | bee eater|
|93 | hornbill|
|94 | hummingbird|
|95 | jacamar|
|96 | toucan|
|97 | drake|
|98 | red-breasted merganser, Mergus serrator|
|99 | goose|
|100 | black swan, Cygnus atratus|
|101 | tusker|
|102 | echidna, spiny anteater, anteater|
|103 | platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus|
|104 | wallaby, brush kangaroo|
|105 | koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus|
|106 | wombat|
|107 | jellyfish|
|108 | sea anemone, anemone|
|109 | brain coral|
|110 | flatworm, platyhelminth|
|111 | nematode, nematode worm, roundworm|
|112 | conch|
|113 | snail|
|114 | slug|
|115 | sea slug, nudibranch|
|116 | chiton, coat-of-mail shell, sea cradle, polyplacophore|
|117 | chambered nautilus, pearly nautilus, nautilus|
|118 | Dungeness crab, Cancer magister|
|119 | rock crab, Cancer irroratus|
|120 | fiddler crab|
|121 | king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica|
|122 | American lobster, Northern lobster, Maine lobster, Homarus americanus|
|123 | spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish|
|124 | crayfish, crawfish, crawdad, crawdaddy|
|125 | hermit crab|
|126 | isopod|
|127 | white stork, Ciconia ciconia|
|128 | black stork, Ciconia nigra|
|129 | spoonbill|
|130 | flamingo|
|131 | little blue heron, Egretta caerulea|
|132 | American egret, great white heron, Egretta albus|
|133 | bittern|
|134 | crane|
|135 | limpkin, Aramus pictus|
|136 | European gallinule, Porphyrio porphyrio|
|137 | American coot, marsh hen, mud hen, water hen, Fulica americana|
|138 | bustard|
|139 | ruddy turnstone, Arenaria interpres|
|140 | red-backed sandpiper, dunlin, Erolia alpina|
|141 | redshank, Tringa totanus|
|142 | dowitcher|
|143 | oystercatcher, oyster catcher|
|144 | pelican|
|145 | king penguin, Aptenodytes patagonica|
|146 | albatross, mollymawk|
|147 | grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus|
|148 | killer whale, killer, orca, grampus, sea wolf, Orcinus orca|
|149 | dugong, Dugong dugon|
|150 | sea lion|
|151 | Chihuahua|
|152 | Japanese spaniel|
|153 | Maltese dog, Maltese terrier, Maltese|
|154 | Pekinese, Pekingese, Peke|
|155 | Shih-Tzu|
|156 | Blenheim spaniel|
|157 | papillon|
|158 | toy terrier|
|159 | Rhodesian ridgeback|
|160 | Afghan hound, Afghan|
|161 | basset, basset hound|
|162 | beagle|
|163 | bloodhound, sleuthhound|
|164 | bluetick|
|165 | black-and-tan coonhound|
|166 | Walker hound, Walker foxhound|
|167 | English foxhound|
|168 | redbone|
|169 | borzoi, Russian wolfhound|
|170 | Irish wolfhound|
|171 | Italian greyhound|
|172 | whippet|
|173 | Ibizan hound, Ibizan Podenco|
|174 | Norwegian elkhound, elkhound|
|175 | otterhound, otter hound|
|176 | Saluki, gazelle hound|
|177 | Scottish deerhound, deerhound|
|178 | Weimaraner|
|179 | Staffordshire bullterrier, Staffordshire bull terrier|
|180 | American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier|
|181 | Bedlington terrier|
|182 | Border terrier|
|183 | Kerry blue terrier|
|184 | Irish terrier|
|185 | Norfolk terrier|
|186 | Norwich terrier|
|187 | Yorkshire terrier|
|188 | wire-haired fox terrier|
|189 | Lakeland terrier|
|190 | Sealyham terrier, Sealyham|
|191 | Airedale, Airedale terrier|
|192 | cairn, cairn terrier|
|193 | Australian terrier|
|194 | Dandie Dinmont, Dandie Dinmont terrier|
|195 | Boston bull, Boston terrier|
|196 | miniature schnauzer|
|197 | giant schnauzer|
|198 | standard schnauzer|
|199 | Scotch terrier, Scottish terrier, Scottie|
|200 | Tibetan terrier, chrysanthemum dog|
|201 | silky terrier, Sydney silky|
|202 | soft-coated wheaten terrier|
|203 | West Highland white terrier|
|204 | Lhasa, Lhasa apso|
|205 | flat-coated retriever|
|206 | curly-coated retriever|
|207 | golden retriever|
|208 | Labrador retriever|
|209 | Chesapeake Bay retriever|
|210 | German short-haired pointer|
|211 | vizsla, Hungarian pointer|
|212 | English setter|
|213 | Irish setter, red setter|
|214 | Gordon setter|
|215 | Brittany spaniel|
|216 | clumber, clumber spaniel|
|217 | English springer, English springer spaniel|
|218 | Welsh springer spaniel|
|219 | cocker spaniel, English cocker spaniel, cocker|
|220 | Sussex spaniel|
|221 | Irish water spaniel|
|222 | kuvasz|
|223 | schipperke|
|224 | groenendael|
|225 | malinois|
|226 | briard|
|227 | kelpie|
|228 | komondor|
|229 | Old English sheepdog, bobtail|
|230 | Shetland sheepdog, Shetland sheep dog, Shetland|
|231 | collie|
|232 | Border collie|
|233 | Bouvier des Flandres, Bouviers des Flandres|
|234 | Rottweiler|
|235 | German shepherd, German shepherd dog, German police dog, alsatian|
|236 | Doberman, Doberman pinscher|
|237 | miniature pinscher|
|238 | Greater Swiss Mountain dog|
|239 | Bernese mountain dog|
|240 | Appenzeller|
|241 | EntleBucher|
|242 | boxer|
|243 | bull mastiff|
|244 | Tibetan mastiff|
|245 | French bulldog|
|246 | Great Dane|
|247 | Saint Bernard, St Bernard|
|248 | Eskimo dog, husky|
|249 | malamute, malemute, Alaskan malamute|
|250 | Siberian husky|
|251 | dalmatian, coach dog, carriage dog|
|252 | affenpinscher, monkey pinscher, monkey dog|
|253 | basenji|
|254 | pug, pug-dog|
|255 | Leonberg|
|256 | Newfoundland, Newfoundland dog|
|257 | Great Pyrenees|
|258 | Samoyed, Samoyede|
|259 | Pomeranian|
|260 | chow, chow chow|
|261 | keeshond|
|262 | Brabancon griffon|
|263 | Pembroke, Pembroke Welsh corgi|
|264 | Cardigan, Cardigan Welsh corgi|
|265 | toy poodle|
|266 | miniature poodle|
|267 | standard poodle|
|268 | Mexican hairless|
|269 | timber wolf, grey wolf, gray wolf, Canis lupus|
|270 | white wolf, Arctic wolf, Canis lupus tundrarum|
|271 | red wolf, maned wolf, Canis rufus, Canis niger|
|272 | coyote, prairie wolf, brush wolf, Canis latrans|
|273 | dingo, warrigal, warragal, Canis dingo|
|274 | dhole, Cuon alpinus|
|275 | African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus|
|276 | hyena, hyaena|
|277 | red fox, Vulpes vulpes|
|278 | kit fox, Vulpes macrotis|
|279 | Arctic fox, white fox, Alopex lagopus|
|280 | grey fox, gray fox, Urocyon cinereoargenteus|
|281 | tabby, tabby cat|
|282 | tiger cat|
|283 | Persian cat|
|284 | Siamese cat, Siamese|
|285 | Egyptian cat|
|286 | cougar, puma, catamount, mountain lion, painter, panther, Felis concolor|
|287 | lynx, catamount|
|288 | leopard, Panthera pardus|
|289 | snow leopard, ounce, Panthera uncia|
|290 | jaguar, panther, Panthera onca, Felis onca|
|291 | lion, king of beasts, Panthera leo|
|292 | tiger, Panthera tigris|
|293 | cheetah, chetah, Acinonyx jubatus|
|294 | brown bear, bruin, Ursus arctos|
|295 | American black bear, black bear, Ursus americanus, Euarctos americanus|
|296 | ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus|
|297 | sloth bear, Melursus ursinus, Ursus ursinus|
|298 | mongoose|
|299 | meerkat, mierkat|
|300 | tiger beetle|
|301 | ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle|
|302 | ground beetle, carabid beetle|
|303 | long-horned beetle, longicorn, longicorn beetle|
|304 | leaf beetle, chrysomelid|
|305 | dung beetle|
|306 | rhinoceros beetle|
|307 | weevil|
|308 | fly|
|309 | bee|
|310 | ant, emmet, pismire|
|311 | grasshopper, hopper|
|312 | cricket|
|313 | walking stick, walkingstick, stick insect|
|314 | cockroach, roach|
|315 | mantis, mantid|
|316 | cicada, cicala|
|317 | leafhopper|
|318 | lacewing, lacewing fly|
|319 | dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk|
|320 | damselfly|
|321 | admiral|
|322 | ringlet, ringlet butterfly|
|323 | monarch, monarch butterfly, milkweed butterfly, Danaus plexippus|
|324 | cabbage butterfly|
|325 | sulphur butterfly, sulfur butterfly|
|326 | lycaenid, lycaenid butterfly|
|327 | starfish, sea star|
|328 | sea urchin|
|329 | sea cucumber, holothurian|
|330 | wood rabbit, cottontail, cottontail rabbit|
|331 | hare|
|332 | Angora, Angora rabbit|
|333 | hamster|
|334 | porcupine, hedgehog|
|335 | fox squirrel, eastern fox squirrel, Sciurus niger|
|336 | marmot|
|337 | beaver|
|338 | guinea pig, Cavia cobaya|
|339 | sorrel|
|340 | zebra|
|341 | hog, pig, grunter, squealer, Sus scrofa|
|342 | wild boar, boar, Sus scrofa|
|343 | warthog|
|344 | hippopotamus, hippo, river horse, Hippopotamus amphibius|
|345 | ox|
|346 | water buffalo, water ox, Asiatic buffalo, Bubalus bubalis|
|347 | bison|
|348 | ram, tup|
|349 | bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis|
|350 | ibex, Capra ibex|
|351 | hartebeest|
|352 | impala, Aepyceros melampus|
|353 | gazelle|
|354 | Arabian camel, dromedary, Camelus dromedarius|
|355 | llama|
|356 | weasel|
|357 | mink|
|358 | polecat, fitch, foulmart, foumart, Mustela putorius|
|359 | black-footed ferret, ferret, Mustela nigripes|
|360 | otter|
|361 | skunk, polecat, wood pussy|
|362 | badger|
|363 | armadillo|
|364 | three-toed sloth, ai, Bradypus tridactylus|
|365 | orangutan, orang, orangutang, Pongo pygmaeus|
|366 | gorilla, Gorilla gorilla|
|367 | chimpanzee, chimp, Pan troglodytes|
|368 | gibbon, Hylobates lar|
|369 | siamang, Hylobates syndactylus, Symphalangus syndactylus|
|370 | guenon, guenon monkey|
|371 | patas, hussar monkey, Erythrocebus patas|
|372 | baboon|
|373 | macaque|
|374 | langur|
|375 | colobus, colobus monkey|
|376 | proboscis monkey, Nasalis larvatus|
|377 | marmoset|
|378 | capuchin, ringtail, Cebus capucinus|
|379 | howler monkey, howler|
|380 | titi, titi monkey|
|381 | spider monkey, Ateles geoffroyi|
|382 | squirrel monkey, Saimiri sciureus|
|383 | Madagascar cat, ring-tailed lemur, Lemur catta|
|384 | indri, indris, Indri indri, Indri brevicaudatus|
|385 | Indian elephant, Elephas maximus|
|386 | African elephant, Loxodonta africana|
|387 | lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens|
|388 | giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca|
|389 | barracouta, snoek|
|390 | eel|
|391 | coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch|
|392 | rock beauty, Holocanthus tricolor|
|393 | anemone fish|
|394 | sturgeon|
|395 | gar, garfish, garpike, billfish, Lepisosteus osseus|
|396 | lionfish|
|397 | puffer, pufferfish, blowfish, globefish|
|398 | abacus|
|399 | abaya|
|400 | academic gown, academic robe, judge's robe|
|401 | accordion, piano accordion, squeeze box|
|402 | acoustic guitar|
|403 | aircraft carrier, carrier, flattop, attack aircraft carrier|
|404 | airliner|
|405 | airship, dirigible|
|406 | altar|
|407 | ambulance|
|408 | amphibian, amphibious vehicle|
|409 | analog clock|
|410 | apiary, bee house|
|411 | apron|
|412 | ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin|
|413 | assault rifle, assault gun|
|414 | backpack, back pack, knapsack, packsack, rucksack, haversack|
|415 | bakery, bakeshop, bakehouse|
|416 | balance beam, beam|
|417 | balloon|
|418 | ballpoint, ballpoint pen, ballpen, Biro|
|419 | Band Aid|
|420 | banjo|
|421 | bannister, banister, balustrade, balusters, handrail|
|422 | barbell|
|423 | barber chair|
|424 | barbershop|
|425 | barn|
|426 | barometer|
|427 | barrel, cask|
|428 | barrow, garden cart, lawn cart, wheelbarrow|
|429 | baseball|
|430 | basketball|
|431 | bassinet|
|432 | bassoon|
|433 | bathing cap, swimming cap|
|434 | bath towel|
|435 | bathtub, bathing tub, bath, tub|
|436 | beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon|
|437 | beacon, lighthouse, beacon light, pharos|
|438 | beaker|
|439 | bearskin, busby, shako|
|440 | beer bottle|
|441 | beer glass|
|442 | bell cote, bell cot|
|443 | bib|
|444 | bicycle-built-for-two, tandem bicycle, tandem|
|445 | bikini, two-piece|
|446 | binder, ring-binder|
|447 | binoculars, field glasses, opera glasses|
|448 | birdhouse|
|449 | boathouse|
|450 | bobsled, bobsleigh, bob|
|451 | bolo tie, bolo, bola tie, bola|
|452 | bonnet, poke bonnet|
|453 | bookcase|
|454 | bookshop, bookstore, bookstall|
|455 | bottlecap|
|456 | bow|
|457 | bow tie, bow-tie, bowtie|
|458 | brass, memorial tablet, plaque|
|459 | brassiere, bra, bandeau|
|460 | breakwater, groin, groyne, mole, bulwark, seawall, jetty|
|461 | breastplate, aegis, egis|
|462 | broom|
|463 | bucket, pail|
|464 | buckle|
|465 | bulletproof vest|
|466 | bullet train, bullet|
|467 | butcher shop, meat market|
|468 | cab, hack, taxi, taxicab|
|469 | caldron, cauldron|
|470 | candle, taper, wax light|
|471 | cannon|
|472 | canoe|
|473 | can opener, tin opener|
|474 | cardigan|
|475 | car mirror|
|476 | carousel, carrousel, merry-go-round, roundabout, whirligig|
|477 | carpenter's kit, tool kit|
|478 | carton|
|479 | car wheel|
|480 | cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM|
|481 | cassette|
|482 | cassette player|
|483 | castle|
|484 | catamaran|
|485 | CD player|
|486 | cello, violoncello|
|487 | cellular telephone, cellular phone, cellphone, cell, mobile phone|
|488 | chain|
|489 | chainlink fence|
|490 | chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour|
|491 | chain saw, chainsaw|
|492 | chest|
|493 | chiffonier, commode|
|494 | chime, bell, gong|
|495 | china cabinet, china closet|
|496 | Christmas stocking|
|497 | church, church building|
|498 | cinema, movie theater, movie theatre, movie house, picture palace|
|499 | cleaver, meat cleaver, chopper|
|500 | cliff dwelling|
|501 | cloak|
|502 | clog, geta, patten, sabot|
|503 | cocktail shaker|
|504 | coffee mug|
|505 | coffeepot|
|506 | coil, spiral, volute, whorl, helix|
|507 | combination lock|
|508 | computer keyboard, keypad|
|509 | confectionery, confectionary, candy store|
|510 | container ship, containership, container vessel|
|511 | convertible|
|512 | corkscrew, bottle screw|
|513 | cornet, horn, trumpet, trump|
|514 | cowboy boot|
|515 | cowboy hat, ten-gallon hat|
|516 | cradle|
|517 | crane_1|
|518 | crash helmet|
|519 | crate|
|520 | crib, cot|
|521 | Crock Pot|
|522 | croquet ball|
|523 | crutch|
|524 | cuirass|
|525 | dam, dike, dyke|
|526 | desk|
|527 | desktop computer|
|528 | dial telephone, dial phone|
|529 | diaper, nappy, napkin|
|530 | digital clock|
|531 | digital watch|
|532 | dining table, board|
|533 | dishrag, dishcloth|
|534 | dishwasher, dish washer, dishwashing machine|
|535 | disk brake, disc brake|
|536 | dock, dockage, docking facility|
|537 | dogsled, dog sled, dog sleigh|
|538 | dome|
|539 | doormat, welcome mat|
|540 | drilling platform, offshore rig|
|541 | drum, membranophone, tympan|
|542 | drumstick|
|543 | dumbbell|
|544 | Dutch oven|
|545 | electric fan, blower|
|546 | electric guitar|
|547 | electric locomotive|
|548 | entertainment center|
|549 | envelope|
|550 | espresso maker|
|551 | face powder|
|552 | feather boa, boa|
|553 | file, file cabinet, filing cabinet|
|554 | fireboat|
|555 | fire engine, fire truck|
|556 | fire screen, fireguard|
|557 | flagpole, flagstaff|
|558 | flute, transverse flute|
|559 | folding chair|
|560 | football helmet|
|561 | forklift|
|562 | fountain|
|563 | fountain pen|
|564 | four-poster|
|565 | freight car|
|566 | French horn, horn|
|567 | frying pan, frypan, skillet|
|568 | fur coat|
|569 | garbage truck, dustcart|
|570 | gasmask, respirator, gas helmet|
|571 | gas pump, gasoline pump, petrol pump, island dispenser|
|572 | goblet|
|573 | go-kart|
|574 | golf ball|
|575 | golfcart, golf cart|
|576 | gondola|
|577 | gong, tam-tam|
|578 | gown|
|579 | grand piano, grand|
|580 | greenhouse, nursery, glasshouse|
|581 | grille, radiator grille|
|582 | grocery store, grocery, food market, market|
|583 | guillotine|
|584 | hair slide|
|585 | hair spray|
|586 | half track|
|587 | hammer|
|588 | hamper|
|589 | hand blower, blow dryer, blow drier, hair dryer, hair drier|
|590 | hand-held computer, hand-held microcomputer|
|591 | handkerchief, hankie, hanky, hankey|
|592 | hard disc, hard disk, fixed disk|
|593 | harmonica, mouth organ, harp, mouth harp|
|594 | harp|
|595 | harvester, reaper|
|596 | hatchet|
|597 | holster|
|598 | home theater, home theatre|
|599 | honeycomb|
|600 | hook, claw|
|601 | hoopskirt, crinoline|
|602 | horizontal bar, high bar|
|603 | horse cart, horse-cart|
|604 | hourglass|
|605 | iPod|
|606 | iron, smoothing iron|
|607 | jack-o'-lantern|
|608 | jean, blue jean, denim|
|609 | jeep, landrover|
|610 | jersey, T-shirt, tee shirt|
|611 | jigsaw puzzle|
|612 | jinrikisha, ricksha, rickshaw|
|613 | joystick|
|614 | kimono|
|615 | knee pad|
|616 | knot|
|617 | lab coat, laboratory coat|
|618 | ladle|
|619 | lampshade, lamp shade|
|620 | laptop, laptop computer|
|621 | lawn mower, mower|
|622 | lens cap, lens cover|
|623 | letter opener, paper knife, paperknife|
|624 | library|
|625 | lifeboat|
|626 | lighter, light, igniter, ignitor|
|627 | limousine, limo|
|628 | liner, ocean liner|
|629 | lipstick, lip rouge|
|630 | Loafer|
|631 | lotion|
|632 | loudspeaker, speaker, speaker unit, loudspeaker system, speaker system|
|633 | loupe, jeweler's loupe|
|634 | lumbermill, sawmill|
|635 | magnetic compass|
|636 | mailbag, postbag|
|637 | mailbox, letter box|
|638 | maillot|
|639 | maillot, tank suit|
|640 | manhole cover|
|641 | maraca|
|642 | marimba, xylophone|
|643 | mask|
|644 | matchstick|
|645 | maypole|
|646 | maze, labyrinth|
|647 | measuring cup|
|648 | medicine chest, medicine cabinet|
|649 | megalith, megalithic structure|
|650 | microphone, mike|
|651 | microwave, microwave oven|
|652 | military uniform|
|653 | milk can|
|654 | minibus|
|655 | miniskirt, mini|
|656 | minivan|
|657 | missile|
|658 | mitten|
|659 | mixing bowl|
|660 | mobile home, manufactured home|
|661 | Model T|
|662 | modem|
|663 | monastery|
|664 | monitor|
|665 | moped|
|666 | mortar|
|667 | mortarboard|
|668 | mosque|
|669 | mosquito net|
|670 | motor scooter, scooter|
|671 | mountain bike, all-terrain bike, off-roader|
|672 | mountain tent|
|673 | mouse, computer mouse|
|674 | mousetrap|
|675 | moving van|
|676 | muzzle|
|677 | nail|
|678 | neck brace|
|679 | necklace|
|680 | nipple|
|681 | notebook, notebook computer|
|682 | obelisk|
|683 | oboe, hautboy, hautbois|
|684 | ocarina, sweet potato|
|685 | odometer, hodometer, mileometer, milometer|
|686 | oil filter|
|687 | organ, pipe organ|
|688 | oscilloscope, scope, cathode-ray oscilloscope, CRO|
|689 | overskirt|
|690 | oxcart|
|691 | oxygen mask|
|692 | packet|
|693 | paddle, boat paddle|
|694 | paddlewheel, paddle wheel|
|695 | padlock|
|696 | paintbrush|
|697 | pajama, pyjama, pj's, jammies|
|698 | palace|
|699 | panpipe, pandean pipe, syrinx|
|700 | paper towel|
|701 | parachute, chute|
|702 | parallel bars, bars|
|703 | park bench|
|704 | parking meter|
|705 | passenger car, coach, carriage|
|706 | patio, terrace|
|707 | pay-phone, pay-station|
|708 | pedestal, plinth, footstall|
|709 | pencil box, pencil case|
|710 | pencil sharpener|
|711 | perfume, essence|
|712 | Petri dish|
|713 | photocopier|
|714 | pick, plectrum, plectron|
|715 | pickelhaube|
|716 | picket fence, paling|
|717 | pickup, pickup truck|
|718 | pier|
|719 | piggy bank, penny bank|
|720 | pill bottle|
|721 | pillow|
|722 | ping-pong ball|
|723 | pinwheel|
|724 | pirate, pirate ship|
|725 | pitcher, ewer|
|726 | plane, carpenter's plane, woodworking plane|
|727 | planetarium|
|728 | plastic bag|
|729 | plate rack|
|730 | plow, plough|
|731 | plunger, plumber's helper|
|732 | Polaroid camera, Polaroid Land camera|
|733 | pole|
|734 | police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria|
|735 | poncho|
|736 | pool table, billiard table, snooker table|
|737 | pop bottle, soda bottle|
|738 | pot, flowerpot|
|739 | potter's wheel|
|740 | power drill|
|741 | prayer rug, prayer mat|
|742 | printer|
|743 | prison, prison house|
|744 | projectile, missile|
|745 | projector|
|746 | puck, hockey puck|
|747 | punching bag, punch bag, punching ball, punchball|
|748 | purse|
|749 | quill, quill pen|
|750 | quilt, comforter, comfort, puff|
|751 | racer, race car, racing car|
|752 | racket, racquet|
|753 | radiator|
|754 | radio, wireless|
|755 | radio telescope, radio reflector|
|756 | rain barrel|
|757 | recreational vehicle, RV, R.V.|
|758 | reel|
|759 | reflex camera|
|760 | refrigerator, icebox|
|761 | remote control, remote|
|762 | restaurant, eating house, eating place, eatery|
|763 | revolver, six-gun, six-shooter|
|764 | rifle|
|765 | rocking chair, rocker|
|766 | rotisserie|
|767 | rubber eraser, rubber, pencil eraser|
|768 | rugby ball|
|769 | rule, ruler|
|770 | running shoe|
|771 | safe|
|772 | safety pin|
|773 | saltshaker, salt shaker|
|774 | sandal|
|775 | sarong|
|776 | sax, saxophone|
|777 | scabbard|
|778 | scale, weighing machine|
|779 | school bus|
|780 | schooner|
|781 | scoreboard|
|782 | screen, CRT screen|
|783 | screw|
|784 | screwdriver|
|785 | seat belt, seatbelt|
|786 | sewing machine|
|787 | shield, buckler|
|788 | shoe shop, shoe-shop, shoe store|
|789 | shoji|
|790 | shopping basket|
|791 | shopping cart|
|792 | shovel|
|793 | shower cap|
|794 | shower curtain|
|795 | ski|
|796 | ski mask|
|797 | sleeping bag|
|798 | slide rule, slipstick|
|799 | sliding door|
|800 | slot, one-armed bandit|
|801 | snorkel|
|802 | snowmobile|
|803 | snowplow, snowplough|
|804 | soap dispenser|
|805 | soccer ball|
|806 | sock|
|807 | solar dish, solar collector, solar furnace|
|808 | sombrero|
|809 | soup bowl|
|810 | space bar|
|811 | space heater|
|812 | space shuttle|
|813 | spatula|
|814 | speedboat|
|815 | spider web, spider's web|
|816 | spindle|
|817 | sports car, sport car|
|818 | spotlight, spot|
|819 | stage|
|820 | steam locomotive|
|821 | steel arch bridge|
|822 | steel drum|
|823 | stethoscope|
|824 | stole|
|825 | stone wall|
|826 | stopwatch, stop watch|
|827 | stove|
|828 | strainer|
|829 | streetcar, tram, tramcar, trolley, trolley car|
|830 | stretcher|
|831 | studio couch, day bed|
|832 | stupa, tope|
|833 | submarine, pigboat, sub, U-boat|
|834 | suit, suit of clothes|
|835 | sundial|
|836 | sunglass|
|837 | sunglasses, dark glasses, shades|
|838 | sunscreen, sunblock, sun blocker|
|839 | suspension bridge|
|840 | swab, swob, mop|
|841 | sweatshirt|
|842 | swimming trunks, bathing trunks|
|843 | swing|
|844 | switch, electric switch, electrical switch|
|845 | syringe|
|846 | table lamp|
|847 | tank, army tank, armored combat vehicle, armoured combat vehicle|
|848 | tape player|
|849 | teapot|
|850 | teddy, teddy bear|
|851 | television, television system|
|852 | tennis ball|
|853 | thatch, thatched roof|
|854 | theater curtain, theatre curtain|
|855 | thimble|
|856 | thresher, thrasher, threshing machine|
|857 | throne|
|858 | tile roof|
|859 | toaster|
|860 | tobacco shop, tobacconist shop, tobacconist|
|861 | toilet seat|
|862 | torch|
|863 | totem pole|
|864 | tow truck, tow car, wrecker|
|865 | toyshop|
|866 | tractor|
|867 | trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi|
|868 | tray|
|869 | trench coat|
|870 | tricycle, trike, velocipede|
|871 | trimaran|
|872 | tripod|
|873 | triumphal arch|
|874 | trolleybus, trolley coach, trackless trolley|
|875 | trombone|
|876 | tub, vat|
|877 | turnstile|
|878 | typewriter keyboard|
|879 | umbrella|
|880 | unicycle, monocycle|
|881 | upright, upright piano|
|882 | vacuum, vacuum cleaner|
|883 | vase|
|884 | vault|
|885 | velvet|
|886 | vending machine|
|887 | vestment|
|888 | viaduct|
|889 | violin, fiddle|
|890 | volleyball|
|891 | waffle iron|
|892 | wall clock|
|893 | wallet, billfold, notecase, pocketbook|
|894 | wardrobe, closet, press|
|895 | warplane, military plane|
|896 | washbasin, handbasin, washbowl, lavabo, wash-hand basin|
|897 | washer, automatic washer, washing machine|
|898 | water bottle|
|899 | water jug|
|900 | water tower|
|901 | whiskey jug|
|902 | whistle|
|903 | wig|
|904 | window screen|
|905 | window shade|
|906 | Windsor tie|
|907 | wine bottle|
|908 | wing|
|909 | wok|
|910 | wooden spoon|
|911 | wool, woolen, woollen|
|912 | worm fence, snake fence, snake-rail fence, Virginia fence|
|913 | wreck|
|914 | yawl|
|915 | yurt|
|916 | web site, website, internet site, site|
|917 | comic book|
|918 | crossword puzzle, crossword|
|919 | street sign|
|920 | traffic light, traffic signal, stoplight|
|921 | book jacket, dust cover, dust jacket, dust wrapper|
|922 | menu|
|923 | plate|
|924 | guacamole|
|925 | consomme|
|926 | hot pot, hotpot|
|927 | trifle|
|928 | ice cream, icecream|
|929 | ice lolly, lolly, lollipop, popsicle|
|930 | French loaf|
|931 | bagel, beigel|
|932 | pretzel|
|933 | cheeseburger|
|934 | hotdog, hot dog, red hot|
|935 | mashed potato|
|936 | head cabbage|
|937 | broccoli|
|938 | cauliflower|
|939 | zucchini, courgette|
|940 | spaghetti squash|
|941 | acorn squash|
|942 | butternut squash|
|943 | cucumber, cuke|
|944 | artichoke, globe artichoke|
|945 | bell pepper|
|946 | cardoon|
|947 | mushroom|
|948 | Granny Smith|
|949 | strawberry|
|950 | orange|
|951 | lemon|
|952 | fig|
|953 | pineapple, ananas|
|954 | banana|
|955 | jackfruit, jak, jack|
|956 | custard apple|
|957 | pomegranate|
|958 | hay|
|959 | carbonara|
|960 | chocolate sauce, chocolate syrup|
|961 | dough|
|962 | meat loaf, meatloaf|
|963 | pizza, pizza pie|
|964 | potpie|
|965 | burrito|
|966 | red wine|
|967 | espresso|
|968 | cup|
|969 | eggnog|
|970 | alp|
|971 | bubble|
|972 | cliff, drop, drop-off|
|973 | coral reef|
|974 | geyser|
|975 | lakeside, lakeshore|
|976 | promontory, headland, head, foreland|
|977 | sandbar, sand bar|
|978 | seashore, coast, seacoast, sea-coast|
|979 | valley, vale|
|980 | volcano|
|981 | ballplayer, baseball player|
|982 | groom, bridegroom|
|983 | scuba diver|
|984 | rapeseed|
|985 | daisy|
|986 | yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum|
|987 | corn|
|988 | acorn|
|989 | hip, rose hip, rosehip|
|990 | buckeye, horse chestnut, conker|
|991 | coral fungus|
|992 | agaric|
|993 | gyromitra|
|994 | stinkhorn, carrion fungus|
|995 | earthstar|
|996 | hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa|
|997 | bolete|
|998 | ear, spike, capitulum|
|999 | toilet tissue, toilet paper, bathroom tissue|
</details>
### Data Splits
This dataset is a validation-only set.
## Dataset Creation
### Source Data
This dataset is sourced from ImageNet, ImageNet-ReaL, ImageNet-V2, ImageNet-A, ImageNet-C, ImageNet-R, ImageNet-Sketch, and ObjectNet.
## Citation Information
```
@article{taesiri2023zoom,
title={ImageNet-Hard: The Hardest Images Remaining from a Study of the Power of Zoom and Spatial Biases in Image Classification},
author={Taesiri, Mohammad Reza and Nguyen, Giang and Habchi, Sarra and Bezemer, Cor-Paul and Nguyen, Anh},
journal={arXiv preprint arXiv:2304.05538},
year={2023}
}
``` | # Dataset Card for "Imagenet-Hard-4K"
[Project Page](https://taesiri.github.io/ZoomIsAllYouNeed/) - [Paper](https://arxiv.org/abs/2304.05538) - [Github](https://github.com/taesiri/ZoomIsAllYouNeed)
**ImageNet-Hard-4K** is a 4K version of the original [**ImageNet-Hard**](https://huggingface.co/datasets/taesiri/imagenet-hard) dataset, a benchmark that comprises 10,980 images collected from various existing ImageNet-scale benchmarks (ImageNet, ImageNet-V2, ImageNet-Sketch, ImageNet-C, ImageNet-R, ImageNet-ReaL, ImageNet-A, and ObjectNet). This dataset poses a significant challenge to state-of-the-art vision models as merely zooming in often fails to improve their ability to classify images correctly. As a result, even the most advanced models, such as `CLIP-ViT-L/14@336px`, struggle to perform well on this dataset, achieving a mere `2.02%` accuracy.
## Upscaling Procedure
We employed [GigaGAN](https://mingukkang.github.io/GigaGAN/) to upscale each image from the original ImageNet-Hard dataset to a resolution of 4K.
### Dataset Distribution

### Classifiers Performance
| Model | Accuracy |
| ------------------- | -------- |
| AlexNet | 7.08 |
| VGG-16 | 11.32 |
| ResNet-18 | 10.42 |
| ResNet-50 | 13.93 |
| ViT-B/32 | 18.12 |
| EfficientNet-B0 | 12.94 |
| EfficientNet-B7 | 18.67 |
| EfficientNet-L2-Ns | 28.42 |
| CLIP-ViT-L/14@224px | 1.81 |
| CLIP-ViT-L/14@336px | 1.88 |
| OpenCLIP-ViT-bigG-14| 14.33 |
| OpenCLIP-ViT-L-14 | 13.04 |
**Evaluation Code**
* CLIP <a target="_blank" href="https://colab.research.google.com/github/taesiri/ZoomIsAllYouNeed/blob/main/src/ImageNet_Hard/Prompt_Engineering_for_ImageNet_Hard.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a>
* Other models <a target="_blank" href="https://colab.research.google.com/github/taesiri/ZoomIsAllYouNeed/blob/main/src/ImageNet_Hard/Benchmark_ImageNet_Hard.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a>
## Supported Tasks
- `image-classification`: The objective of this task is to classify an image into one or more classes, selected from 1000 ImageNet categories (allowing for multiple ground-truth labels per image).
## Languages
The `english_label` field in the dataset is in English.
## Dataset Structure
### Data Instances
An example looks like this:
```python
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=575x409 at 0x7F09456B53A0>,
'label': [0],
'origin': 'imagenet_sketch',
'english_label': ['tench']
}
```
### Data Fields
The data instances have the following fields:
- image: A `PIL.Image.Image` object containing the image. Note that accessing the image column (i.e., `dataset[0]["image"]`) automatically decodes the image file, and decoding a large number of image files can take a significant amount of time. It is therefore important to query the sample index before the `"image"` column: `dataset[0]["image"]` should always be preferred over `dataset["image"][0]`.
- label: A `List[int]` collection containing the ground-truth ids.
- origin: A string containing the source dataset.
- english_label: A `List[str]` collection containing the English labels for the ground-truth classes.
<details>
<summary>
Click here to see the full list of ImageNet class labels mapping:
</summary>
|id|Class|
|--|-----|
|0 | tench, Tinca tinca|
|1 | goldfish, Carassius auratus|
|2 | great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias|
|3 | tiger shark, Galeocerdo cuvieri|
|4 | hammerhead, hammerhead shark|
|5 | electric ray, crampfish, numbfish, torpedo|
|6 | stingray|
|7 | cock|
|8 | hen|
|9 | ostrich, Struthio camelus|
|10 | brambling, Fringilla montifringilla|
|11 | goldfinch, Carduelis carduelis|
|12 | house finch, linnet, Carpodacus mexicanus|
|13 | junco, snowbird|
|14 | indigo bunting, indigo finch, indigo bird, Passerina cyanea|
|15 | robin, American robin, Turdus migratorius|
|16 | bulbul|
|17 | jay|
|18 | magpie|
|19 | chickadee|
|20 | water ouzel, dipper|
|21 | kite|
|22 | bald eagle, American eagle, Haliaeetus leucocephalus|
|23 | vulture|
|24 | great grey owl, great gray owl, Strix nebulosa|
|25 | European fire salamander, Salamandra salamandra|
|26 | common newt, Triturus vulgaris|
|27 | eft|
|28 | spotted salamander, Ambystoma maculatum|
|29 | axolotl, mud puppy, Ambystoma mexicanum|
|30 | bullfrog, Rana catesbeiana|
|31 | tree frog, tree-frog|
|32 | tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui|
|33 | loggerhead, loggerhead turtle, Caretta caretta|
|34 | leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea|
|35 | mud turtle|
|36 | terrapin|
|37 | box turtle, box tortoise|
|38 | banded gecko|
|39 | common iguana, iguana, Iguana iguana|
|40 | American chameleon, anole, Anolis carolinensis|
|41 | whiptail, whiptail lizard|
|42 | agama|
|43 | frilled lizard, Chlamydosaurus kingi|
|44 | alligator lizard|
|45 | Gila monster, Heloderma suspectum|
|46 | green lizard, Lacerta viridis|
|47 | African chameleon, Chamaeleo chamaeleon|
|48 | Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis|
|49 | African crocodile, Nile crocodile, Crocodylus niloticus|
|50 | American alligator, Alligator mississipiensis|
|51 | triceratops|
|52 | thunder snake, worm snake, Carphophis amoenus|
|53 | ringneck snake, ring-necked snake, ring snake|
|54 | hognose snake, puff adder, sand viper|
|55 | green snake, grass snake|
|56 | king snake, kingsnake|
|57 | garter snake, grass snake|
|58 | water snake|
|59 | vine snake|
|60 | night snake, Hypsiglena torquata|
|61 | boa constrictor, Constrictor constrictor|
|62 | rock python, rock snake, Python sebae|
|63 | Indian cobra, Naja naja|
|64 | green mamba|
|65 | sea snake|
|66 | horned viper, cerastes, sand viper, horned asp, Cerastes cornutus|
|67 | diamondback, diamondback rattlesnake, Crotalus adamanteus|
|68 | sidewinder, horned rattlesnake, Crotalus cerastes|
|69 | trilobite|
|70 | harvestman, daddy longlegs, Phalangium opilio|
|71 | scorpion|
|72 | black and gold garden spider, Argiope aurantia|
|73 | barn spider, Araneus cavaticus|
|74 | garden spider, Aranea diademata|
|75 | black widow, Latrodectus mactans|
|76 | tarantula|
|77 | wolf spider, hunting spider|
|78 | tick|
|79 | centipede|
|80 | black grouse|
|81 | ptarmigan|
|82 | ruffed grouse, partridge, Bonasa umbellus|
|83 | prairie chicken, prairie grouse, prairie fowl|
|84 | peacock|
|85 | quail|
|86 | partridge|
|87 | African grey, African gray, Psittacus erithacus|
|88 | macaw|
|89 | sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita|
|90 | lorikeet|
|91 | coucal|
|92 | bee eater|
|93 | hornbill|
|94 | hummingbird|
|95 | jacamar|
|96 | toucan|
|97 | drake|
|98 | red-breasted merganser, Mergus serrator|
|99 | goose|
|100 | black swan, Cygnus atratus|
|101 | tusker|
|102 | echidna, spiny anteater, anteater|
|103 | platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus|
|104 | wallaby, brush kangaroo|
|105 | koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus|
|106 | wombat|
|107 | jellyfish|
|108 | sea anemone, anemone|
|109 | brain coral|
|110 | flatworm, platyhelminth|
|111 | nematode, nematode worm, roundworm|
|112 | conch|
|113 | snail|
|114 | slug|
|115 | sea slug, nudibranch|
|116 | chiton, coat-of-mail shell, sea cradle, polyplacophore|
|117 | chambered nautilus, pearly nautilus, nautilus|
|118 | Dungeness crab, Cancer magister|
|119 | rock crab, Cancer irroratus|
|120 | fiddler crab|
|121 | king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica|
|122 | American lobster, Northern lobster, Maine lobster, Homarus americanus|
|123 | spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish|
|124 | crayfish, crawfish, crawdad, crawdaddy|
|125 | hermit crab|
|126 | isopod|
|127 | white stork, Ciconia ciconia|
|128 | black stork, Ciconia nigra|
|129 | spoonbill|
|130 | flamingo|
|131 | little blue heron, Egretta caerulea|
|132 | American egret, great white heron, Egretta albus|
|133 | bittern|
|134 | crane|
|135 | limpkin, Aramus pictus|
|136 | European gallinule, Porphyrio porphyrio|
|137 | American coot, marsh hen, mud hen, water hen, Fulica americana|
|138 | bustard|
|139 | ruddy turnstone, Arenaria interpres|
|140 | red-backed sandpiper, dunlin, Erolia alpina|
|141 | redshank, Tringa totanus|
|142 | dowitcher|
|143 | oystercatcher, oyster catcher|
|144 | pelican|
|145 | king penguin, Aptenodytes patagonica|
|146 | albatross, mollymawk|
|147 | grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus|
|148 | killer whale, killer, orca, grampus, sea wolf, Orcinus orca|
|149 | dugong, Dugong dugon|
|150 | sea lion|
|151 | Chihuahua|
|152 | Japanese spaniel|
|153 | Maltese dog, Maltese terrier, Maltese|
|154 | Pekinese, Pekingese, Peke|
|155 | Shih-Tzu|
|156 | Blenheim spaniel|
|157 | papillon|
|158 | toy terrier|
|159 | Rhodesian ridgeback|
|160 | Afghan hound, Afghan|
|161 | basset, basset hound|
|162 | beagle|
|163 | bloodhound, sleuthhound|
|164 | bluetick|
|165 | black-and-tan coonhound|
|166 | Walker hound, Walker foxhound|
|167 | English foxhound|
|168 | redbone|
|169 | borzoi, Russian wolfhound|
|170 | Irish wolfhound|
|171 | Italian greyhound|
|172 | whippet|
|173 | Ibizan hound, Ibizan Podenco|
|174 | Norwegian elkhound, elkhound|
|175 | otterhound, otter hound|
|176 | Saluki, gazelle hound|
|177 | Scottish deerhound, deerhound|
|178 | Weimaraner|
|179 | Staffordshire bullterrier, Staffordshire bull terrier|
|180 | American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier|
|181 | Bedlington terrier|
|182 | Border terrier|
|183 | Kerry blue terrier|
|184 | Irish terrier|
|185 | Norfolk terrier|
|186 | Norwich terrier|
|187 | Yorkshire terrier|
|188 | wire-haired fox terrier|
|189 | Lakeland terrier|
|190 | Sealyham terrier, Sealyham|
|191 | Airedale, Airedale terrier|
|192 | cairn, cairn terrier|
|193 | Australian terrier|
|194 | Dandie Dinmont, Dandie Dinmont terrier|
|195 | Boston bull, Boston terrier|
|196 | miniature schnauzer|
|197 | giant schnauzer|
|198 | standard schnauzer|
|199 | Scotch terrier, Scottish terrier, Scottie|
|200 | Tibetan terrier, chrysanthemum dog|
|201 | silky terrier, Sydney silky|
|202 | soft-coated wheaten terrier|
|203 | West Highland white terrier|
|204 | Lhasa, Lhasa apso|
|205 | flat-coated retriever|
|206 | curly-coated retriever|
|207 | golden retriever|
|208 | Labrador retriever|
|209 | Chesapeake Bay retriever|
|210 | German short-haired pointer|
|211 | vizsla, Hungarian pointer|
|212 | English setter|
|213 | Irish setter, red setter|
|214 | Gordon setter|
|215 | Brittany spaniel|
|216 | clumber, clumber spaniel|
|217 | English springer, English springer spaniel|
|218 | Welsh springer spaniel|
|219 | cocker spaniel, English cocker spaniel, cocker|
|220 | Sussex spaniel|
|221 | Irish water spaniel|
|222 | kuvasz|
|223 | schipperke|
|224 | groenendael|
|225 | malinois|
|226 | briard|
|227 | kelpie|
|228 | komondor|
|229 | Old English sheepdog, bobtail|
|230 | Shetland sheepdog, Shetland sheep dog, Shetland|
|231 | collie|
|232 | Border collie|
|233 | Bouvier des Flandres, Bouviers des Flandres|
|234 | Rottweiler|
|235 | German shepherd, German shepherd dog, German police dog, alsatian|
|236 | Doberman, Doberman pinscher|
|237 | miniature pinscher|
|238 | Greater Swiss Mountain dog|
|239 | Bernese mountain dog|
|240 | Appenzeller|
|241 | EntleBucher|
|242 | boxer|
|243 | bull mastiff|
|244 | Tibetan mastiff|
|245 | French bulldog|
|246 | Great Dane|
|247 | Saint Bernard, St Bernard|
|248 | Eskimo dog, husky|
|249 | malamute, malemute, Alaskan malamute|
|250 | Siberian husky|
|251 | dalmatian, coach dog, carriage dog|
|252 | affenpinscher, monkey pinscher, monkey dog|
|253 | basenji|
|254 | pug, pug-dog|
|255 | Leonberg|
|256 | Newfoundland, Newfoundland dog|
|257 | Great Pyrenees|
|258 | Samoyed, Samoyede|
|259 | Pomeranian|
|260 | chow, chow chow|
|261 | keeshond|
|262 | Brabancon griffon|
|263 | Pembroke, Pembroke Welsh corgi|
|264 | Cardigan, Cardigan Welsh corgi|
|265 | toy poodle|
|266 | miniature poodle|
|267 | standard poodle|
|268 | Mexican hairless|
|269 | timber wolf, grey wolf, gray wolf, Canis lupus|
|270 | white wolf, Arctic wolf, Canis lupus tundrarum|
|271 | red wolf, maned wolf, Canis rufus, Canis niger|
|272 | coyote, prairie wolf, brush wolf, Canis latrans|
|273 | dingo, warrigal, warragal, Canis dingo|
|274 | dhole, Cuon alpinus|
|275 | African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus|
|276 | hyena, hyaena|
|277 | red fox, Vulpes vulpes|
|278 | kit fox, Vulpes macrotis|
|279 | Arctic fox, white fox, Alopex lagopus|
|280 | grey fox, gray fox, Urocyon cinereoargenteus|
|281 | tabby, tabby cat|
|282 | tiger cat|
|283 | Persian cat|
|284 | Siamese cat, Siamese|
|285 | Egyptian cat|
|286 | cougar, puma, catamount, mountain lion, painter, panther, Felis concolor|
|287 | lynx, catamount|
|288 | leopard, Panthera pardus|
|289 | snow leopard, ounce, Panthera uncia|
|290 | jaguar, panther, Panthera onca, Felis onca|
|291 | lion, king of beasts, Panthera leo|
|292 | tiger, Panthera tigris|
|293 | cheetah, chetah, Acinonyx jubatus|
|294 | brown bear, bruin, Ursus arctos|
|295 | American black bear, black bear, Ursus americanus, Euarctos americanus|
|296 | ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus|
|297 | sloth bear, Melursus ursinus, Ursus ursinus|
|298 | mongoose|
|299 | meerkat, mierkat|
|300 | tiger beetle|
|301 | ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle|
|302 | ground beetle, carabid beetle|
|303 | long-horned beetle, longicorn, longicorn beetle|
|304 | leaf beetle, chrysomelid|
|305 | dung beetle|
|306 | rhinoceros beetle|
|307 | weevil|
|308 | fly|
|309 | bee|
|310 | ant, emmet, pismire|
|311 | grasshopper, hopper|
|312 | cricket|
|313 | walking stick, walkingstick, stick insect|
|314 | cockroach, roach|
|315 | mantis, mantid|
|316 | cicada, cicala|
|317 | leafhopper|
|318 | lacewing, lacewing fly|
|319 | dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk|
|320 | damselfly|
|321 | admiral|
|322 | ringlet, ringlet butterfly|
|323 | monarch, monarch butterfly, milkweed butterfly, Danaus plexippus|
|324 | cabbage butterfly|
|325 | sulphur butterfly, sulfur butterfly|
|326 | lycaenid, lycaenid butterfly|
|327 | starfish, sea star|
|328 | sea urchin|
|329 | sea cucumber, holothurian|
|330 | wood rabbit, cottontail, cottontail rabbit|
|331 | hare|
|332 | Angora, Angora rabbit|
|333 | hamster|
|334 | porcupine, hedgehog|
|335 | fox squirrel, eastern fox squirrel, Sciurus niger|
|336 | marmot|
|337 | beaver|
|338 | guinea pig, Cavia cobaya|
|339 | sorrel|
|340 | zebra|
|341 | hog, pig, grunter, squealer, Sus scrofa|
|342 | wild boar, boar, Sus scrofa|
|343 | warthog|
|344 | hippopotamus, hippo, river horse, Hippopotamus amphibius|
|345 | ox|
|346 | water buffalo, water ox, Asiatic buffalo, Bubalus bubalis|
|347 | bison|
|348 | ram, tup|
|349 | bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis|
|350 | ibex, Capra ibex|
|351 | hartebeest|
|352 | impala, Aepyceros melampus|
|353 | gazelle|
|354 | Arabian camel, dromedary, Camelus dromedarius|
|355 | llama|
|356 | weasel|
|357 | mink|
|358 | polecat, fitch, foulmart, foumart, Mustela putorius|
|359 | black-footed ferret, ferret, Mustela nigripes|
|360 | otter|
|361 | skunk, polecat, wood pussy|
|362 | badger|
|363 | armadillo|
|364 | three-toed sloth, ai, Bradypus tridactylus|
|365 | orangutan, orang, orangutang, Pongo pygmaeus|
|366 | gorilla, Gorilla gorilla|
|367 | chimpanzee, chimp, Pan troglodytes|
|368 | gibbon, Hylobates lar|
|369 | siamang, Hylobates syndactylus, Symphalangus syndactylus|
|370 | guenon, guenon monkey|
|371 | patas, hussar monkey, Erythrocebus patas|
|372 | baboon|
|373 | macaque|
|374 | langur|
|375 | colobus, colobus monkey|
|376 | proboscis monkey, Nasalis larvatus|
|377 | marmoset|
|378 | capuchin, ringtail, Cebus capucinus|
|379 | howler monkey, howler|
|380 | titi, titi monkey|
|381 | spider monkey, Ateles geoffroyi|
|382 | squirrel monkey, Saimiri sciureus|
|383 | Madagascar cat, ring-tailed lemur, Lemur catta|
|384 | indri, indris, Indri indri, Indri brevicaudatus|
|385 | Indian elephant, Elephas maximus|
|386 | African elephant, Loxodonta africana|
|387 | lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens|
|388 | giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca|
|389 | barracouta, snoek|
|390 | eel|
|391 | coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch|
|392 | rock beauty, Holocanthus tricolor|
|393 | anemone fish|
|394 | sturgeon|
|395 | gar, garfish, garpike, billfish, Lepisosteus osseus|
|396 | lionfish|
|397 | puffer, pufferfish, blowfish, globefish|
|398 | abacus|
|399 | abaya|
|400 | academic gown, academic robe, judge's robe|
|401 | accordion, piano accordion, squeeze box|
|402 | acoustic guitar|
|403 | aircraft carrier, carrier, flattop, attack aircraft carrier|
|404 | airliner|
|405 | airship, dirigible|
|406 | altar|
|407 | ambulance|
|408 | amphibian, amphibious vehicle|
|409 | analog clock|
|410 | apiary, bee house|
|411 | apron|
|412 | ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin|
|413 | assault rifle, assault gun|
|414 | backpack, back pack, knapsack, packsack, rucksack, haversack|
|415 | bakery, bakeshop, bakehouse|
|416 | balance beam, beam|
|417 | balloon|
|418 | ballpoint, ballpoint pen, ballpen, Biro|
|419 | Band Aid|
|420 | banjo|
|421 | bannister, banister, balustrade, balusters, handrail|
|422 | barbell|
|423 | barber chair|
|424 | barbershop|
|425 | barn|
|426 | barometer|
|427 | barrel, cask|
|428 | barrow, garden cart, lawn cart, wheelbarrow|
|429 | baseball|
|430 | basketball|
|431 | bassinet|
|432 | bassoon|
|433 | bathing cap, swimming cap|
|434 | bath towel|
|435 | bathtub, bathing tub, bath, tub|
|436 | beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon|
|437 | beacon, lighthouse, beacon light, pharos|
|438 | beaker|
|439 | bearskin, busby, shako|
|440 | beer bottle|
|441 | beer glass|
|442 | bell cote, bell cot|
|443 | bib|
|444 | bicycle-built-for-two, tandem bicycle, tandem|
|445 | bikini, two-piece|
|446 | binder, ring-binder|
|447 | binoculars, field glasses, opera glasses|
|448 | birdhouse|
|449 | boathouse|
|450 | bobsled, bobsleigh, bob|
|451 | bolo tie, bolo, bola tie, bola|
|452 | bonnet, poke bonnet|
|453 | bookcase|
|454 | bookshop, bookstore, bookstall|
|455 | bottlecap|
|456 | bow|
|457 | bow tie, bow-tie, bowtie|
|458 | brass, memorial tablet, plaque|
|459 | brassiere, bra, bandeau|
|460 | breakwater, groin, groyne, mole, bulwark, seawall, jetty|
|461 | breastplate, aegis, egis|
|462 | broom|
|463 | bucket, pail|
|464 | buckle|
|465 | bulletproof vest|
|466 | bullet train, bullet|
|467 | butcher shop, meat market|
|468 | cab, hack, taxi, taxicab|
|469 | caldron, cauldron|
|470 | candle, taper, wax light|
|471 | cannon|
|472 | canoe|
|473 | can opener, tin opener|
|474 | cardigan|
|475 | car mirror|
|476 | carousel, carrousel, merry-go-round, roundabout, whirligig|
|477 | carpenter's kit, tool kit|
|478 | carton|
|479 | car wheel|
|480 | cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM|
|481 | cassette|
|482 | cassette player|
|483 | castle|
|484 | catamaran|
|485 | CD player|
|486 | cello, violoncello|
|487 | cellular telephone, cellular phone, cellphone, cell, mobile phone|
|488 | chain|
|489 | chainlink fence|
|490 | chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour|
|491 | chain saw, chainsaw|
|492 | chest|
|493 | chiffonier, commode|
|494 | chime, bell, gong|
|495 | china cabinet, china closet|
|496 | Christmas stocking|
|497 | church, church building|
|498 | cinema, movie theater, movie theatre, movie house, picture palace|
|499 | cleaver, meat cleaver, chopper|
|500 | cliff dwelling|
|501 | cloak|
|502 | clog, geta, patten, sabot|
|503 | cocktail shaker|
|504 | coffee mug|
|505 | coffeepot|
|506 | coil, spiral, volute, whorl, helix|
|507 | combination lock|
|508 | computer keyboard, keypad|
|509 | confectionery, confectionary, candy store|
|510 | container ship, containership, container vessel|
|511 | convertible|
|512 | corkscrew, bottle screw|
|513 | cornet, horn, trumpet, trump|
|514 | cowboy boot|
|515 | cowboy hat, ten-gallon hat|
|516 | cradle|
|517 | crane_1|
|518 | crash helmet|
|519 | crate|
|520 | crib, cot|
|521 | Crock Pot|
|522 | croquet ball|
|523 | crutch|
|524 | cuirass|
|525 | dam, dike, dyke|
|526 | desk|
|527 | desktop computer|
|528 | dial telephone, dial phone|
|529 | diaper, nappy, napkin|
|530 | digital clock|
|531 | digital watch|
|532 | dining table, board|
|533 | dishrag, dishcloth|
|534 | dishwasher, dish washer, dishwashing machine|
|535 | disk brake, disc brake|
|536 | dock, dockage, docking facility|
|537 | dogsled, dog sled, dog sleigh|
|538 | dome|
|539 | doormat, welcome mat|
|540 | drilling platform, offshore rig|
|541 | drum, membranophone, tympan|
|542 | drumstick|
|543 | dumbbell|
|544 | Dutch oven|
|545 | electric fan, blower|
|546 | electric guitar|
|547 | electric locomotive|
|548 | entertainment center|
|549 | envelope|
|550 | espresso maker|
|551 | face powder|
|552 | feather boa, boa|
|553 | file, file cabinet, filing cabinet|
|554 | fireboat|
|555 | fire engine, fire truck|
|556 | fire screen, fireguard|
|557 | flagpole, flagstaff|
|558 | flute, transverse flute|
|559 | folding chair|
|560 | football helmet|
|561 | forklift|
|562 | fountain|
|563 | fountain pen|
|564 | four-poster|
|565 | freight car|
|566 | French horn, horn|
|567 | frying pan, frypan, skillet|
|568 | fur coat|
|569 | garbage truck, dustcart|
|570 | gasmask, respirator, gas helmet|
|571 | gas pump, gasoline pump, petrol pump, island dispenser|
|572 | goblet|
|573 | go-kart|
|574 | golf ball|
|575 | golfcart, golf cart|
|576 | gondola|
|577 | gong, tam-tam|
|578 | gown|
|579 | grand piano, grand|
|580 | greenhouse, nursery, glasshouse|
|581 | grille, radiator grille|
|582 | grocery store, grocery, food market, market|
|583 | guillotine|
|584 | hair slide|
|585 | hair spray|
|586 | half track|
|587 | hammer|
|588 | hamper|
|589 | hand blower, blow dryer, blow drier, hair dryer, hair drier|
|590 | hand-held computer, hand-held microcomputer|
|591 | handkerchief, hankie, hanky, hankey|
|592 | hard disc, hard disk, fixed disk|
|593 | harmonica, mouth organ, harp, mouth harp|
|594 | harp|
|595 | harvester, reaper|
|596 | hatchet|
|597 | holster|
|598 | home theater, home theatre|
|599 | honeycomb|
|600 | hook, claw|
|601 | hoopskirt, crinoline|
|602 | horizontal bar, high bar|
|603 | horse cart, horse-cart|
|604 | hourglass|
|605 | iPod|
|606 | iron, smoothing iron|
|607 | jack-o'-lantern|
|608 | jean, blue jean, denim|
|609 | jeep, landrover|
|610 | jersey, T-shirt, tee shirt|
|611 | jigsaw puzzle|
|612 | jinrikisha, ricksha, rickshaw|
|613 | joystick|
|614 | kimono|
|615 | knee pad|
|616 | knot|
|617 | lab coat, laboratory coat|
|618 | ladle|
|619 | lampshade, lamp shade|
|620 | laptop, laptop computer|
|621 | lawn mower, mower|
|622 | lens cap, lens cover|
|623 | letter opener, paper knife, paperknife|
|624 | library|
|625 | lifeboat|
|626 | lighter, light, igniter, ignitor|
|627 | limousine, limo|
|628 | liner, ocean liner|
|629 | lipstick, lip rouge|
|630 | Loafer|
|631 | lotion|
|632 | loudspeaker, speaker, speaker unit, loudspeaker system, speaker system|
|633 | loupe, jeweler's loupe|
|634 | lumbermill, sawmill|
|635 | magnetic compass|
|636 | mailbag, postbag|
|637 | mailbox, letter box|
|638 | maillot|
|639 | maillot, tank suit|
|640 | manhole cover|
|641 | maraca|
|642 | marimba, xylophone|
|643 | mask|
|644 | matchstick|
|645 | maypole|
|646 | maze, labyrinth|
|647 | measuring cup|
|648 | medicine chest, medicine cabinet|
|649 | megalith, megalithic structure|
|650 | microphone, mike|
|651 | microwave, microwave oven|
|652 | military uniform|
|653 | milk can|
|654 | minibus|
|655 | miniskirt, mini|
|656 | minivan|
|657 | missile|
|658 | mitten|
|659 | mixing bowl|
|660 | mobile home, manufactured home|
|661 | Model T|
|662 | modem|
|663 | monastery|
|664 | monitor|
|665 | moped|
|666 | mortar|
|667 | mortarboard|
|668 | mosque|
|669 | mosquito net|
|670 | motor scooter, scooter|
|671 | mountain bike, all-terrain bike, off-roader|
|672 | mountain tent|
|673 | mouse, computer mouse|
|674 | mousetrap|
|675 | moving van|
|676 | muzzle|
|677 | nail|
|678 | neck brace|
|679 | necklace|
|680 | nipple|
|681 | notebook, notebook computer|
|682 | obelisk|
|683 | oboe, hautboy, hautbois|
|684 | ocarina, sweet potato|
|685 | odometer, hodometer, mileometer, milometer|
|686 | oil filter|
|687 | organ, pipe organ|
|688 | oscilloscope, scope, cathode-ray oscilloscope, CRO|
|689 | overskirt|
|690 | oxcart|
|691 | oxygen mask|
|692 | packet|
|693 | paddle, boat paddle|
|694 | paddlewheel, paddle wheel|
|695 | padlock|
|696 | paintbrush|
|697 | pajama, pyjama, pj's, jammies|
|698 | palace|
|699 | panpipe, pandean pipe, syrinx|
|700 | paper towel|
|701 | parachute, chute|
|702 | parallel bars, bars|
|703 | park bench|
|704 | parking meter|
|705 | passenger car, coach, carriage|
|706 | patio, terrace|
|707 | pay-phone, pay-station|
|708 | pedestal, plinth, footstall|
|709 | pencil box, pencil case|
|710 | pencil sharpener|
|711 | perfume, essence|
|712 | Petri dish|
|713 | photocopier|
|714 | pick, plectrum, plectron|
|715 | pickelhaube|
|716 | picket fence, paling|
|717 | pickup, pickup truck|
|718 | pier|
|719 | piggy bank, penny bank|
|720 | pill bottle|
|721 | pillow|
|722 | ping-pong ball|
|723 | pinwheel|
|724 | pirate, pirate ship|
|725 | pitcher, ewer|
|726 | plane, carpenter's plane, woodworking plane|
|727 | planetarium|
|728 | plastic bag|
|729 | plate rack|
|730 | plow, plough|
|731 | plunger, plumber's helper|
|732 | Polaroid camera, Polaroid Land camera|
|733 | pole|
|734 | police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria|
|735 | poncho|
|736 | pool table, billiard table, snooker table|
|737 | pop bottle, soda bottle|
|738 | pot, flowerpot|
|739 | potter's wheel|
|740 | power drill|
|741 | prayer rug, prayer mat|
|742 | printer|
|743 | prison, prison house|
|744 | projectile, missile|
|745 | projector|
|746 | puck, hockey puck|
|747 | punching bag, punch bag, punching ball, punchball|
|748 | purse|
|749 | quill, quill pen|
|750 | quilt, comforter, comfort, puff|
|751 | racer, race car, racing car|
|752 | racket, racquet|
|753 | radiator|
|754 | radio, wireless|
|755 | radio telescope, radio reflector|
|756 | rain barrel|
|757 | recreational vehicle, RV, R.V.|
|758 | reel|
|759 | reflex camera|
|760 | refrigerator, icebox|
|761 | remote control, remote|
|762 | restaurant, eating house, eating place, eatery|
|763 | revolver, six-gun, six-shooter|
|764 | rifle|
|765 | rocking chair, rocker|
|766 | rotisserie|
|767 | rubber eraser, rubber, pencil eraser|
|768 | rugby ball|
|769 | rule, ruler|
|770 | running shoe|
|771 | safe|
|772 | safety pin|
|773 | saltshaker, salt shaker|
|774 | sandal|
|775 | sarong|
|776 | sax, saxophone|
|777 | scabbard|
|778 | scale, weighing machine|
|779 | school bus|
|780 | schooner|
|781 | scoreboard|
|782 | screen, CRT screen|
|783 | screw|
|784 | screwdriver|
|785 | seat belt, seatbelt|
|786 | sewing machine|
|787 | shield, buckler|
|788 | shoe shop, shoe-shop, shoe store|
|789 | shoji|
|790 | shopping basket|
|791 | shopping cart|
|792 | shovel|
|793 | shower cap|
|794 | shower curtain|
|795 | ski|
|796 | ski mask|
|797 | sleeping bag|
|798 | slide rule, slipstick|
|799 | sliding door|
|800 | slot, one-armed bandit|
|801 | snorkel|
|802 | snowmobile|
|803 | snowplow, snowplough|
|804 | soap dispenser|
|805 | soccer ball|
|806 | sock|
|807 | solar dish, solar collector, solar furnace|
|808 | sombrero|
|809 | soup bowl|
|810 | space bar|
|811 | space heater|
|812 | space shuttle|
|813 | spatula|
|814 | speedboat|
|815 | spider web, spider's web|
|816 | spindle|
|817 | sports car, sport car|
|818 | spotlight, spot|
|819 | stage|
|820 | steam locomotive|
|821 | steel arch bridge|
|822 | steel drum|
|823 | stethoscope|
|824 | stole|
|825 | stone wall|
|826 | stopwatch, stop watch|
|827 | stove|
|828 | strainer|
|829 | streetcar, tram, tramcar, trolley, trolley car|
|830 | stretcher|
|831 | studio couch, day bed|
|832 | stupa, tope|
|833 | submarine, pigboat, sub, U-boat|
|834 | suit, suit of clothes|
|835 | sundial|
|836 | sunglass|
|837 | sunglasses, dark glasses, shades|
|838 | sunscreen, sunblock, sun blocker|
|839 | suspension bridge|
|840 | swab, swob, mop|
|841 | sweatshirt|
|842 | swimming trunks, bathing trunks|
|843 | swing|
|844 | switch, electric switch, electrical switch|
|845 | syringe|
|846 | table lamp|
|847 | tank, army tank, armored combat vehicle, armoured combat vehicle|
|848 | tape player|
|849 | teapot|
|850 | teddy, teddy bear|
|851 | television, television system|
|852 | tennis ball|
|853 | thatch, thatched roof|
|854 | theater curtain, theatre curtain|
|855 | thimble|
|856 | thresher, thrasher, threshing machine|
|857 | throne|
|858 | tile roof|
|859 | toaster|
|860 | tobacco shop, tobacconist shop, tobacconist|
|861 | toilet seat|
|862 | torch|
|863 | totem pole|
|864 | tow truck, tow car, wrecker|
|865 | toyshop|
|866 | tractor|
|867 | trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi|
|868 | tray|
|869 | trench coat|
|870 | tricycle, trike, velocipede|
|871 | trimaran|
|872 | tripod|
|873 | triumphal arch|
|874 | trolleybus, trolley coach, trackless trolley|
|875 | trombone|
|876 | tub, vat|
|877 | turnstile|
|878 | typewriter keyboard|
|879 | umbrella|
|880 | unicycle, monocycle|
|881 | upright, upright piano|
|882 | vacuum, vacuum cleaner|
|883 | vase|
|884 | vault|
|885 | velvet|
|886 | vending machine|
|887 | vestment|
|888 | viaduct|
|889 | violin, fiddle|
|890 | volleyball|
|891 | waffle iron|
|892 | wall clock|
|893 | wallet, billfold, notecase, pocketbook|
|894 | wardrobe, closet, press|
|895 | warplane, military plane|
|896 | washbasin, handbasin, washbowl, lavabo, wash-hand basin|
|897 | washer, automatic washer, washing machine|
|898 | water bottle|
|899 | water jug|
|900 | water tower|
|901 | whiskey jug|
|902 | whistle|
|903 | wig|
|904 | window screen|
|905 | window shade|
|906 | Windsor tie|
|907 | wine bottle|
|908 | wing|
|909 | wok|
|910 | wooden spoon|
|911 | wool, woolen, woollen|
|912 | worm fence, snake fence, snake-rail fence, Virginia fence|
|913 | wreck|
|914 | yawl|
|915 | yurt|
|916 | web site, website, internet site, site|
|917 | comic book|
|918 | crossword puzzle, crossword|
|919 | street sign|
|920 | traffic light, traffic signal, stoplight|
|921 | book jacket, dust cover, dust jacket, dust wrapper|
|922 | menu|
|923 | plate|
|924 | guacamole|
|925 | consomme|
|926 | hot pot, hotpot|
|927 | trifle|
|928 | ice cream, icecream|
|929 | ice lolly, lolly, lollipop, popsicle|
|930 | French loaf|
|931 | bagel, beigel|
|932 | pretzel|
|933 | cheeseburger|
|934 | hotdog, hot dog, red hot|
|935 | mashed potato|
|936 | head cabbage|
|937 | broccoli|
|938 | cauliflower|
|939 | zucchini, courgette|
|940 | spaghetti squash|
|941 | acorn squash|
|942 | butternut squash|
|943 | cucumber, cuke|
|944 | artichoke, globe artichoke|
|945 | bell pepper|
|946 | cardoon|
|947 | mushroom|
|948 | Granny Smith|
|949 | strawberry|
|950 | orange|
|951 | lemon|
|952 | fig|
|953 | pineapple, ananas|
|954 | banana|
|955 | jackfruit, jak, jack|
|956 | custard apple|
|957 | pomegranate|
|958 | hay|
|959 | carbonara|
|960 | chocolate sauce, chocolate syrup|
|961 | dough|
|962 | meat loaf, meatloaf|
|963 | pizza, pizza pie|
|964 | potpie|
|965 | burrito|
|966 | red wine|
|967 | espresso|
|968 | cup|
|969 | eggnog|
|970 | alp|
|971 | bubble|
|972 | cliff, drop, drop-off|
|973 | coral reef|
|974 | geyser|
|975 | lakeside, lakeshore|
|976 | promontory, headland, head, foreland|
|977 | sandbar, sand bar|
|978 | seashore, coast, seacoast, sea-coast|
|979 | valley, vale|
|980 | volcano|
|981 | ballplayer, baseball player|
|982 | groom, bridegroom|
|983 | scuba diver|
|984 | rapeseed|
|985 | daisy|
|986 | yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum|
|987 | corn|
|988 | acorn|
|989 | hip, rose hip, rosehip|
|990 | buckeye, horse chestnut, conker|
|991 | coral fungus|
|992 | agaric|
|993 | gyromitra|
|994 | stinkhorn, carrion fungus|
|995 | earthstar|
|996 | hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa|
|997 | bolete|
|998 | ear, spike, capitulum|
|999 | toilet tissue, toilet paper, bathroom tissue|
</details>
### Data Splits
This dataset is a validation-only set.
## Dataset Creation
### Source Data
This dataset is sourced from ImageNet, ImageNet-ReaL, ImageNet-V2, ImageNet-A, ImageNet-C, ImageNet-R, ImageNet-Sketch, and ObjectNet.
## Citation Information
```
@article{taesiri2023zoom,
title={ImageNet-Hard: The Hardest Images Remaining from a Study of the Power of Zoom and Spatial Biases in Image Classification},
author={Taesiri, Mohammad Reza and Nguyen, Giang and Habchi, Sarra and Bezemer, Cor-Paul and Nguyen, Anh},
journal={arXiv preprint arXiv:2304.05538},
year={2023}
}
``` | 838 | 4 | [
"task_categories:image-classification",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2304.05538",
"region:us",
"OOD",
"ImageNet",
"Out Of Distribution"
] | 2023-05-21T17:33:17+00:00 | 2025-11-12T15:36:04+00:00 | 0 |
wmaousley/finrebut-400 |
# Dataset Summary
FinRebut-400 is a tiny open-source corpus of 400 (rationale, counter-argument) sentence pairs designed for adversarial fine-tuning of a financial-LLM critic (MiniCrit-1.5B).
Released under CC-BY-4.0.
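As a quick orientation, a loading sketch is shown below; note that the split and column names are not documented on this card, so `train`, `rationale`, and `rebuttal` are assumptions rather than confirmed fields:
```python
from datasets import load_dataset

# Hypothetical sketch: the split and column names below are assumed,
# not documented on this card.
ds = load_dataset("wmaousley/finrebut-400", split="train")

for pair in ds.select(range(3)):
    print(pair["rationale"])  # assumed field: the original trading rationale
    print(pair["rebuttal"])   # assumed field: the adversarial counter-argument
    print("---")
```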
# Languages
English only (`en`)
# Licence
[Creative Commons Attribution 4.0 International (CC-BY-4.0)](https://creativecommons.org/licenses/by/4.0/)
# Dataset Curator
[wmaousley](https://github.com/wmaousley)
# Citation
```bibtex
@misc{finrebut400,
title={FinRebut-400: rationale / rebuttal pairs for trading-LLM critic},
author={wmaousley},
year={2024},
publisher={Hugging Face},
howpublished={\url{https://huggingface.co/datasets/wmaousley/finrebut-400}}
}
``` |
# Dataset Summary
FinRebut-400 is a tiny open-source corpus of 400 (rationale, counter-argument) sentence pairs designed for adversarial fine-tuning of a financial-LLM critic (MiniCrit-1.5B).
Released under CC-BY-4.0.
# Languages
English only (`en`)
# Licence
[Creative Commons Attribution 4.0 International (CC-BY-4.0)](https://creativecommons.org/licenses/by/4.0/)
# Dataset Curator
[wmaousley](https://github.com/wmaousley)
# Citation
```bibtex
@misc{finrebut400,
title={FinRebut-400: rationale / rebuttal pairs for trading-LLM critic},
author={wmaousley},
year={2024},
publisher={Hugging Face},
howpublished={\url{https://huggingface.co/datasets/wmaousley/finrebut-400}}
}
``` | 0 | 0 | [
"task_categories:text-generation",
"task_categories:text-classification",
"language:en",
"license:cc-by-4.0",
"region:us",
"finance",
"adversarial",
"llm-critic",
"trading"
] | 2025-11-12T15:21:54+00:00 | 2025-11-12T15:34:41+00:00 | 0 |
taesiri/imagenet-hard | # Dataset Card for "ImageNet-Hard"
[Project Page](https://taesiri.github.io/ZoomIsAllYouNeed/) - [ArXiv](https://arxiv.org/abs/2304.05538) - [Paper](https://huggingface.co/papers/2304.05538) - [Github](https://github.com/taesiri/ZoomIsAllYouNeed) - [Image Browser](https://huggingface.co/spaces/taesiri/ImageNet-Hard-Browser)
## Dataset Summary
**ImageNet-Hard** is a new benchmark that comprises 10,980 images collected from various existing ImageNet-scale benchmarks (ImageNet, ImageNet-V2, ImageNet-Sketch, ImageNet-C, ImageNet-R, ImageNet-ReaL, ImageNet-A, and ObjectNet). This dataset poses a significant challenge to state-of-the-art vision models as merely zooming in often fails to improve their ability to classify images correctly. As a result, even the most advanced models, such as `CLIP-ViT-L/14@336px`, struggle to perform well on this dataset, achieving a mere `2.02%` accuracy.
*ImageNet-Hard-4K*: For the 4K version, please refer to [this dataset](https://huggingface.co/datasets/taesiri/imagenet-hard-4K).
### Dataset Distribution

### Classifiers Performance
| Model | Accuracy |
| ------------------- | -------- |
| AlexNet | 7.34 |
| VGG-16 | 12.00 |
| ResNet-18 | 10.86 |
| ResNet-50 | 14.74 |
| ViT-B/32 | 18.52 |
| EfficientNet-B0 | 16.57 |
| EfficientNet-B7 | 23.20 |
| EfficientNet-L2-Ns | 39.00 |
| CLIP-ViT-L/14@224px | 1.86 |
| CLIP-ViT-L/14@336px | 2.02 |
| OpenCLIP-ViT-bigG-14| 15.93 |
| OpenCLIP-ViT-L-14 | 15.60 |
**Evaluation Code**
* CLIP <a target="_blank" href="https://colab.research.google.com/github/taesiri/ZoomIsAllYouNeed/blob/main/src/ImageNet_Hard/Prompt_Engineering_for_ImageNet_Hard.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a>
* [OpenCLIP](https://github.com/taesiri/ZoomIsAllYouNeed/blob/main/src/ImageNet_Hard/benchmark_openclip.py)
* Other models <a target="_blank" href="https://colab.research.google.com/github/taesiri/ZoomIsAllYouNeed/blob/main/src/ImageNet_Hard/Benchmark_ImageNet_Hard.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a>
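For a self-contained flavor of what those notebooks measure, the sketch below runs a plain zero-shot CLIP evaluation with `transformers`. It is a minimal illustration rather than the paper's exact prompt ensemble, and it assumes the split is named `validation` and that the `label` column is a sequence of `ClassLabel` ids:
```python
import torch
from datasets import load_dataset
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14-336").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14-336")

ds = load_dataset("taesiri/imagenet-hard", split="validation")  # assumed split name
class_names = ds.features["label"].feature.names                # assumes ClassLabel ids
prompts = [f"a photo of a {name.split(',')[0]}" for name in class_names]

# Encode the 1000 class prompts once.
with torch.no_grad():
    text_inputs = processor(text=prompts, return_tensors="pt", padding=True)
    text_feat = model.get_text_features(**text_inputs)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

correct, n = 0, 100  # score a small subset, for illustration only
for sample in ds.select(range(n)):
    with torch.no_grad():
        image_inputs = processor(images=sample["image"], return_tensors="pt")
        img_feat = model.get_image_features(**image_inputs)
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    pred = (img_feat @ text_feat.T).argmax(dim=-1).item()
    correct += int(pred in sample["label"])  # multi-label ground truth

print(f"top-1 accuracy on the subset: {correct / n:.2%}")
```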
## Supported Tasks
- `image-classification`: The objective of this task is to classify an image into one or more classes, selected from 1000 ImageNet categories (allowing for multiple ground-truth labels per image).
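Because an image may carry several acceptable labels, a top-1 prediction is typically counted as correct when it appears anywhere in that image's ground-truth list. A minimal sketch of that scoring rule:
```python
def multilabel_top1_accuracy(predictions, ground_truths):
    """Count a prediction as correct if it appears in that sample's
    list of ground-truth class ids."""
    hits = sum(int(pred in labels)
               for pred, labels in zip(predictions, ground_truths))
    return hits / len(predictions)

# Three images; the second accepts either of two labels.
print(multilabel_top1_accuracy([0, 5, 7], [[0], [4, 5], [9]]))  # ~0.67
```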
## Languages
The `english_label` field in the dataset is in English.
## Dataset Structure
### Data Instances
An example looks like this:
```python
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=575x409 at 0x7F09456B53A0>,
'label': [0],
'origin': 'imagenet_sketch',
'english_label': ['tench']
}
```
### Data Fields
The data instances have the following fields:
- image: A `PIL.Image.Image` object containing the image. Note that accessing the image column (e.g. `dataset[0]["image"]`) automatically decodes the image file. Decoding a large number of image files can take a significant amount of time, so query the sample index before the `"image"` column: `dataset[0]["image"]` should always be preferred over `dataset["image"][0]` (see the loading sketch after this list).
- label: A List[int] collection containing the ground-truth ids.
- origin: A string identifying the source dataset.
- english_label: A List[str] collection containing the English labels for the ground-truth classes.
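A short usage sketch of the access pattern described above (the `validation` split name is an assumption based on this card's "validation-only" note):
```python
from datasets import load_dataset

ds = load_dataset("taesiri/imagenet-hard", split="validation")  # assumed split name

sample = ds[0]          # decodes a single image: fast
img = sample["image"]   # a PIL.Image.Image
print(img.size, sample["origin"], sample["english_label"])

# Avoid ds["image"][0]: it materializes (and decodes) the entire image column first.
```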
<details>
<summary>
Click here to see the full list of ImageNet class labels mapping:
</summary>
|id|Class|
|--|-----|
|0 | tench, Tinca tinca|
|1 | goldfish, Carassius auratus|
|2 | great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias|
|3 | tiger shark, Galeocerdo cuvieri|
|4 | hammerhead, hammerhead shark|
|5 | electric ray, crampfish, numbfish, torpedo|
|6 | stingray|
|7 | cock|
|8 | hen|
|9 | ostrich, Struthio camelus|
|10 | brambling, Fringilla montifringilla|
|11 | goldfinch, Carduelis carduelis|
|12 | house finch, linnet, Carpodacus mexicanus|
|13 | junco, snowbird|
|14 | indigo bunting, indigo finch, indigo bird, Passerina cyanea|
|15 | robin, American robin, Turdus migratorius|
|16 | bulbul|
|17 | jay|
|18 | magpie|
|19 | chickadee|
|20 | water ouzel, dipper|
|21 | kite|
|22 | bald eagle, American eagle, Haliaeetus leucocephalus|
|23 | vulture|
|24 | great grey owl, great gray owl, Strix nebulosa|
|25 | European fire salamander, Salamandra salamandra|
|26 | common newt, Triturus vulgaris|
|27 | eft|
|28 | spotted salamander, Ambystoma maculatum|
|29 | axolotl, mud puppy, Ambystoma mexicanum|
|30 | bullfrog, Rana catesbeiana|
|31 | tree frog, tree-frog|
|32 | tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui|
|33 | loggerhead, loggerhead turtle, Caretta caretta|
|34 | leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea|
|35 | mud turtle|
|36 | terrapin|
|37 | box turtle, box tortoise|
|38 | banded gecko|
|39 | common iguana, iguana, Iguana iguana|
|40 | American chameleon, anole, Anolis carolinensis|
|41 | whiptail, whiptail lizard|
|42 | agama|
|43 | frilled lizard, Chlamydosaurus kingi|
|44 | alligator lizard|
|45 | Gila monster, Heloderma suspectum|
|46 | green lizard, Lacerta viridis|
|47 | African chameleon, Chamaeleo chamaeleon|
|48 | Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis|
|49 | African crocodile, Nile crocodile, Crocodylus niloticus|
|50 | American alligator, Alligator mississipiensis|
|51 | triceratops|
|52 | thunder snake, worm snake, Carphophis amoenus|
|53 | ringneck snake, ring-necked snake, ring snake|
|54 | hognose snake, puff adder, sand viper|
|55 | green snake, grass snake|
|56 | king snake, kingsnake|
|57 | garter snake, grass snake|
|58 | water snake|
|59 | vine snake|
|60 | night snake, Hypsiglena torquata|
|61 | boa constrictor, Constrictor constrictor|
|62 | rock python, rock snake, Python sebae|
|63 | Indian cobra, Naja naja|
|64 | green mamba|
|65 | sea snake|
|66 | horned viper, cerastes, sand viper, horned asp, Cerastes cornutus|
|67 | diamondback, diamondback rattlesnake, Crotalus adamanteus|
|68 | sidewinder, horned rattlesnake, Crotalus cerastes|
|69 | trilobite|
|70 | harvestman, daddy longlegs, Phalangium opilio|
|71 | scorpion|
|72 | black and gold garden spider, Argiope aurantia|
|73 | barn spider, Araneus cavaticus|
|74 | garden spider, Aranea diademata|
|75 | black widow, Latrodectus mactans|
|76 | tarantula|
|77 | wolf spider, hunting spider|
|78 | tick|
|79 | centipede|
|80 | black grouse|
|81 | ptarmigan|
|82 | ruffed grouse, partridge, Bonasa umbellus|
|83 | prairie chicken, prairie grouse, prairie fowl|
|84 | peacock|
|85 | quail|
|86 | partridge|
|87 | African grey, African gray, Psittacus erithacus|
|88 | macaw|
|89 | sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita|
|90 | lorikeet|
|91 | coucal|
|92 | bee eater|
|93 | hornbill|
|94 | hummingbird|
|95 | jacamar|
|96 | toucan|
|97 | drake|
|98 | red-breasted merganser, Mergus serrator|
|99 | goose|
|100 | black swan, Cygnus atratus|
|101 | tusker|
|102 | echidna, spiny anteater, anteater|
|103 | platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus|
|104 | wallaby, brush kangaroo|
|105 | koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus|
|106 | wombat|
|107 | jellyfish|
|108 | sea anemone, anemone|
|109 | brain coral|
|110 | flatworm, platyhelminth|
|111 | nematode, nematode worm, roundworm|
|112 | conch|
|113 | snail|
|114 | slug|
|115 | sea slug, nudibranch|
|116 | chiton, coat-of-mail shell, sea cradle, polyplacophore|
|117 | chambered nautilus, pearly nautilus, nautilus|
|118 | Dungeness crab, Cancer magister|
|119 | rock crab, Cancer irroratus|
|120 | fiddler crab|
|121 | king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica|
|122 | American lobster, Northern lobster, Maine lobster, Homarus americanus|
|123 | spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish|
|124 | crayfish, crawfish, crawdad, crawdaddy|
|125 | hermit crab|
|126 | isopod|
|127 | white stork, Ciconia ciconia|
|128 | black stork, Ciconia nigra|
|129 | spoonbill|
|130 | flamingo|
|131 | little blue heron, Egretta caerulea|
|132 | American egret, great white heron, Egretta albus|
|133 | bittern|
|134 | crane|
|135 | limpkin, Aramus pictus|
|136 | European gallinule, Porphyrio porphyrio|
|137 | American coot, marsh hen, mud hen, water hen, Fulica americana|
|138 | bustard|
|139 | ruddy turnstone, Arenaria interpres|
|140 | red-backed sandpiper, dunlin, Erolia alpina|
|141 | redshank, Tringa totanus|
|142 | dowitcher|
|143 | oystercatcher, oyster catcher|
|144 | pelican|
|145 | king penguin, Aptenodytes patagonica|
|146 | albatross, mollymawk|
|147 | grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus|
|148 | killer whale, killer, orca, grampus, sea wolf, Orcinus orca|
|149 | dugong, Dugong dugon|
|150 | sea lion|
|151 | Chihuahua|
|152 | Japanese spaniel|
|153 | Maltese dog, Maltese terrier, Maltese|
|154 | Pekinese, Pekingese, Peke|
|155 | Shih-Tzu|
|156 | Blenheim spaniel|
|157 | papillon|
|158 | toy terrier|
|159 | Rhodesian ridgeback|
|160 | Afghan hound, Afghan|
|161 | basset, basset hound|
|162 | beagle|
|163 | bloodhound, sleuthhound|
|164 | bluetick|
|165 | black-and-tan coonhound|
|166 | Walker hound, Walker foxhound|
|167 | English foxhound|
|168 | redbone|
|169 | borzoi, Russian wolfhound|
|170 | Irish wolfhound|
|171 | Italian greyhound|
|172 | whippet|
|173 | Ibizan hound, Ibizan Podenco|
|174 | Norwegian elkhound, elkhound|
|175 | otterhound, otter hound|
|176 | Saluki, gazelle hound|
|177 | Scottish deerhound, deerhound|
|178 | Weimaraner|
|179 | Staffordshire bullterrier, Staffordshire bull terrier|
|180 | American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier|
|181 | Bedlington terrier|
|182 | Border terrier|
|183 | Kerry blue terrier|
|184 | Irish terrier|
|185 | Norfolk terrier|
|186 | Norwich terrier|
|187 | Yorkshire terrier|
|188 | wire-haired fox terrier|
|189 | Lakeland terrier|
|190 | Sealyham terrier, Sealyham|
|191 | Airedale, Airedale terrier|
|192 | cairn, cairn terrier|
|193 | Australian terrier|
|194 | Dandie Dinmont, Dandie Dinmont terrier|
|195 | Boston bull, Boston terrier|
|196 | miniature schnauzer|
|197 | giant schnauzer|
|198 | standard schnauzer|
|199 | Scotch terrier, Scottish terrier, Scottie|
|200 | Tibetan terrier, chrysanthemum dog|
|201 | silky terrier, Sydney silky|
|202 | soft-coated wheaten terrier|
|203 | West Highland white terrier|
|204 | Lhasa, Lhasa apso|
|205 | flat-coated retriever|
|206 | curly-coated retriever|
|207 | golden retriever|
|208 | Labrador retriever|
|209 | Chesapeake Bay retriever|
|210 | German short-haired pointer|
|211 | vizsla, Hungarian pointer|
|212 | English setter|
|213 | Irish setter, red setter|
|214 | Gordon setter|
|215 | Brittany spaniel|
|216 | clumber, clumber spaniel|
|217 | English springer, English springer spaniel|
|218 | Welsh springer spaniel|
|219 | cocker spaniel, English cocker spaniel, cocker|
|220 | Sussex spaniel|
|221 | Irish water spaniel|
|222 | kuvasz|
|223 | schipperke|
|224 | groenendael|
|225 | malinois|
|226 | briard|
|227 | kelpie|
|228 | komondor|
|229 | Old English sheepdog, bobtail|
|230 | Shetland sheepdog, Shetland sheep dog, Shetland|
|231 | collie|
|232 | Border collie|
|233 | Bouvier des Flandres, Bouviers des Flandres|
|234 | Rottweiler|
|235 | German shepherd, German shepherd dog, German police dog, alsatian|
|236 | Doberman, Doberman pinscher|
|237 | miniature pinscher|
|238 | Greater Swiss Mountain dog|
|239 | Bernese mountain dog|
|240 | Appenzeller|
|241 | EntleBucher|
|242 | boxer|
|243 | bull mastiff|
|244 | Tibetan mastiff|
|245 | French bulldog|
|246 | Great Dane|
|247 | Saint Bernard, St Bernard|
|248 | Eskimo dog, husky|
|249 | malamute, malemute, Alaskan malamute|
|250 | Siberian husky|
|251 | dalmatian, coach dog, carriage dog|
|252 | affenpinscher, monkey pinscher, monkey dog|
|253 | basenji|
|254 | pug, pug-dog|
|255 | Leonberg|
|256 | Newfoundland, Newfoundland dog|
|257 | Great Pyrenees|
|258 | Samoyed, Samoyede|
|259 | Pomeranian|
|260 | chow, chow chow|
|261 | keeshond|
|262 | Brabancon griffon|
|263 | Pembroke, Pembroke Welsh corgi|
|264 | Cardigan, Cardigan Welsh corgi|
|265 | toy poodle|
|266 | miniature poodle|
|267 | standard poodle|
|268 | Mexican hairless|
|269 | timber wolf, grey wolf, gray wolf, Canis lupus|
|270 | white wolf, Arctic wolf, Canis lupus tundrarum|
|271 | red wolf, maned wolf, Canis rufus, Canis niger|
|272 | coyote, prairie wolf, brush wolf, Canis latrans|
|273 | dingo, warrigal, warragal, Canis dingo|
|274 | dhole, Cuon alpinus|
|275 | African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus|
|276 | hyena, hyaena|
|277 | red fox, Vulpes vulpes|
|278 | kit fox, Vulpes macrotis|
|279 | Arctic fox, white fox, Alopex lagopus|
|280 | grey fox, gray fox, Urocyon cinereoargenteus|
|281 | tabby, tabby cat|
|282 | tiger cat|
|283 | Persian cat|
|284 | Siamese cat, Siamese|
|285 | Egyptian cat|
|286 | cougar, puma, catamount, mountain lion, painter, panther, Felis concolor|
|287 | lynx, catamount|
|288 | leopard, Panthera pardus|
|289 | snow leopard, ounce, Panthera uncia|
|290 | jaguar, panther, Panthera onca, Felis onca|
|291 | lion, king of beasts, Panthera leo|
|292 | tiger, Panthera tigris|
|293 | cheetah, chetah, Acinonyx jubatus|
|294 | brown bear, bruin, Ursus arctos|
|295 | American black bear, black bear, Ursus americanus, Euarctos americanus|
|296 | ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus|
|297 | sloth bear, Melursus ursinus, Ursus ursinus|
|298 | mongoose|
|299 | meerkat, mierkat|
|300 | tiger beetle|
|301 | ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle|
|302 | ground beetle, carabid beetle|
|303 | long-horned beetle, longicorn, longicorn beetle|
|304 | leaf beetle, chrysomelid|
|305 | dung beetle|
|306 | rhinoceros beetle|
|307 | weevil|
|308 | fly|
|309 | bee|
|310 | ant, emmet, pismire|
|311 | grasshopper, hopper|
|312 | cricket|
|313 | walking stick, walkingstick, stick insect|
|314 | cockroach, roach|
|315 | mantis, mantid|
|316 | cicada, cicala|
|317 | leafhopper|
|318 | lacewing, lacewing fly|
|319 | dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk|
|320 | damselfly|
|321 | admiral|
|322 | ringlet, ringlet butterfly|
|323 | monarch, monarch butterfly, milkweed butterfly, Danaus plexippus|
|324 | cabbage butterfly|
|325 | sulphur butterfly, sulfur butterfly|
|326 | lycaenid, lycaenid butterfly|
|327 | starfish, sea star|
|328 | sea urchin|
|329 | sea cucumber, holothurian|
|330 | wood rabbit, cottontail, cottontail rabbit|
|331 | hare|
|332 | Angora, Angora rabbit|
|333 | hamster|
|334 | porcupine, hedgehog|
|335 | fox squirrel, eastern fox squirrel, Sciurus niger|
|336 | marmot|
|337 | beaver|
|338 | guinea pig, Cavia cobaya|
|339 | sorrel|
|340 | zebra|
|341 | hog, pig, grunter, squealer, Sus scrofa|
|342 | wild boar, boar, Sus scrofa|
|343 | warthog|
|344 | hippopotamus, hippo, river horse, Hippopotamus amphibius|
|345 | ox|
|346 | water buffalo, water ox, Asiatic buffalo, Bubalus bubalis|
|347 | bison|
|348 | ram, tup|
|349 | bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis|
|350 | ibex, Capra ibex|
|351 | hartebeest|
|352 | impala, Aepyceros melampus|
|353 | gazelle|
|354 | Arabian camel, dromedary, Camelus dromedarius|
|355 | llama|
|356 | weasel|
|357 | mink|
|358 | polecat, fitch, foulmart, foumart, Mustela putorius|
|359 | black-footed ferret, ferret, Mustela nigripes|
|360 | otter|
|361 | skunk, polecat, wood pussy|
|362 | badger|
|363 | armadillo|
|364 | three-toed sloth, ai, Bradypus tridactylus|
|365 | orangutan, orang, orangutang, Pongo pygmaeus|
|366 | gorilla, Gorilla gorilla|
|367 | chimpanzee, chimp, Pan troglodytes|
|368 | gibbon, Hylobates lar|
|369 | siamang, Hylobates syndactylus, Symphalangus syndactylus|
|370 | guenon, guenon monkey|
|371 | patas, hussar monkey, Erythrocebus patas|
|372 | baboon|
|373 | macaque|
|374 | langur|
|375 | colobus, colobus monkey|
|376 | proboscis monkey, Nasalis larvatus|
|377 | marmoset|
|378 | capuchin, ringtail, Cebus capucinus|
|379 | howler monkey, howler|
|380 | titi, titi monkey|
|381 | spider monkey, Ateles geoffroyi|
|382 | squirrel monkey, Saimiri sciureus|
|383 | Madagascar cat, ring-tailed lemur, Lemur catta|
|384 | indri, indris, Indri indri, Indri brevicaudatus|
|385 | Indian elephant, Elephas maximus|
|386 | African elephant, Loxodonta africana|
|387 | lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens|
|388 | giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca|
|389 | barracouta, snoek|
|390 | eel|
|391 | coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch|
|392 | rock beauty, Holocanthus tricolor|
|393 | anemone fish|
|394 | sturgeon|
|395 | gar, garfish, garpike, billfish, Lepisosteus osseus|
|396 | lionfish|
|397 | puffer, pufferfish, blowfish, globefish|
|398 | abacus|
|399 | abaya|
|400 | academic gown, academic robe, judge's robe|
|401 | accordion, piano accordion, squeeze box|
|402 | acoustic guitar|
|403 | aircraft carrier, carrier, flattop, attack aircraft carrier|
|404 | airliner|
|405 | airship, dirigible|
|406 | altar|
|407 | ambulance|
|408 | amphibian, amphibious vehicle|
|409 | analog clock|
|410 | apiary, bee house|
|411 | apron|
|412 | ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin|
|413 | assault rifle, assault gun|
|414 | backpack, back pack, knapsack, packsack, rucksack, haversack|
|415 | bakery, bakeshop, bakehouse|
|416 | balance beam, beam|
|417 | balloon|
|418 | ballpoint, ballpoint pen, ballpen, Biro|
|419 | Band Aid|
|420 | banjo|
|421 | bannister, banister, balustrade, balusters, handrail|
|422 | barbell|
|423 | barber chair|
|424 | barbershop|
|425 | barn|
|426 | barometer|
|427 | barrel, cask|
|428 | barrow, garden cart, lawn cart, wheelbarrow|
|429 | baseball|
|430 | basketball|
|431 | bassinet|
|432 | bassoon|
|433 | bathing cap, swimming cap|
|434 | bath towel|
|435 | bathtub, bathing tub, bath, tub|
|436 | beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon|
|437 | beacon, lighthouse, beacon light, pharos|
|438 | beaker|
|439 | bearskin, busby, shako|
|440 | beer bottle|
|441 | beer glass|
|442 | bell cote, bell cot|
|443 | bib|
|444 | bicycle-built-for-two, tandem bicycle, tandem|
|445 | bikini, two-piece|
|446 | binder, ring-binder|
|447 | binoculars, field glasses, opera glasses|
|448 | birdhouse|
|449 | boathouse|
|450 | bobsled, bobsleigh, bob|
|451 | bolo tie, bolo, bola tie, bola|
|452 | bonnet, poke bonnet|
|453 | bookcase|
|454 | bookshop, bookstore, bookstall|
|455 | bottlecap|
|456 | bow|
|457 | bow tie, bow-tie, bowtie|
|458 | brass, memorial tablet, plaque|
|459 | brassiere, bra, bandeau|
|460 | breakwater, groin, groyne, mole, bulwark, seawall, jetty|
|461 | breastplate, aegis, egis|
|462 | broom|
|463 | bucket, pail|
|464 | buckle|
|465 | bulletproof vest|
|466 | bullet train, bullet|
|467 | butcher shop, meat market|
|468 | cab, hack, taxi, taxicab|
|469 | caldron, cauldron|
|470 | candle, taper, wax light|
|471 | cannon|
|472 | canoe|
|473 | can opener, tin opener|
|474 | cardigan|
|475 | car mirror|
|476 | carousel, carrousel, merry-go-round, roundabout, whirligig|
|477 | carpenter's kit, tool kit|
|478 | carton|
|479 | car wheel|
|480 | cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM|
|481 | cassette|
|482 | cassette player|
|483 | castle|
|484 | catamaran|
|485 | CD player|
|486 | cello, violoncello|
|487 | cellular telephone, cellular phone, cellphone, cell, mobile phone|
|488 | chain|
|489 | chainlink fence|
|490 | chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour|
|491 | chain saw, chainsaw|
|492 | chest|
|493 | chiffonier, commode|
|494 | chime, bell, gong|
|495 | china cabinet, china closet|
|496 | Christmas stocking|
|497 | church, church building|
|498 | cinema, movie theater, movie theatre, movie house, picture palace|
|499 | cleaver, meat cleaver, chopper|
|500 | cliff dwelling|
|501 | cloak|
|502 | clog, geta, patten, sabot|
|503 | cocktail shaker|
|504 | coffee mug|
|505 | coffeepot|
|506 | coil, spiral, volute, whorl, helix|
|507 | combination lock|
|508 | computer keyboard, keypad|
|509 | confectionery, confectionary, candy store|
|510 | container ship, containership, container vessel|
|511 | convertible|
|512 | corkscrew, bottle screw|
|513 | cornet, horn, trumpet, trump|
|514 | cowboy boot|
|515 | cowboy hat, ten-gallon hat|
|516 | cradle|
|517 | crane_1|
|518 | crash helmet|
|519 | crate|
|520 | crib, cot|
|521 | Crock Pot|
|522 | croquet ball|
|523 | crutch|
|524 | cuirass|
|525 | dam, dike, dyke|
|526 | desk|
|527 | desktop computer|
|528 | dial telephone, dial phone|
|529 | diaper, nappy, napkin|
|530 | digital clock|
|531 | digital watch|
|532 | dining table, board|
|533 | dishrag, dishcloth|
|534 | dishwasher, dish washer, dishwashing machine|
|535 | disk brake, disc brake|
|536 | dock, dockage, docking facility|
|537 | dogsled, dog sled, dog sleigh|
|538 | dome|
|539 | doormat, welcome mat|
|540 | drilling platform, offshore rig|
|541 | drum, membranophone, tympan|
|542 | drumstick|
|543 | dumbbell|
|544 | Dutch oven|
|545 | electric fan, blower|
|546 | electric guitar|
|547 | electric locomotive|
|548 | entertainment center|
|549 | envelope|
|550 | espresso maker|
|551 | face powder|
|552 | feather boa, boa|
|553 | file, file cabinet, filing cabinet|
|554 | fireboat|
|555 | fire engine, fire truck|
|556 | fire screen, fireguard|
|557 | flagpole, flagstaff|
|558 | flute, transverse flute|
|559 | folding chair|
|560 | football helmet|
|561 | forklift|
|562 | fountain|
|563 | fountain pen|
|564 | four-poster|
|565 | freight car|
|566 | French horn, horn|
|567 | frying pan, frypan, skillet|
|568 | fur coat|
|569 | garbage truck, dustcart|
|570 | gasmask, respirator, gas helmet|
|571 | gas pump, gasoline pump, petrol pump, island dispenser|
|572 | goblet|
|573 | go-kart|
|574 | golf ball|
|575 | golfcart, golf cart|
|576 | gondola|
|577 | gong, tam-tam|
|578 | gown|
|579 | grand piano, grand|
|580 | greenhouse, nursery, glasshouse|
|581 | grille, radiator grille|
|582 | grocery store, grocery, food market, market|
|583 | guillotine|
|584 | hair slide|
|585 | hair spray|
|586 | half track|
|587 | hammer|
|588 | hamper|
|589 | hand blower, blow dryer, blow drier, hair dryer, hair drier|
|590 | hand-held computer, hand-held microcomputer|
|591 | handkerchief, hankie, hanky, hankey|
|592 | hard disc, hard disk, fixed disk|
|593 | harmonica, mouth organ, harp, mouth harp|
|594 | harp|
|595 | harvester, reaper|
|596 | hatchet|
|597 | holster|
|598 | home theater, home theatre|
|599 | honeycomb|
|600 | hook, claw|
|601 | hoopskirt, crinoline|
|602 | horizontal bar, high bar|
|603 | horse cart, horse-cart|
|604 | hourglass|
|605 | iPod|
|606 | iron, smoothing iron|
|607 | jack-o'-lantern|
|608 | jean, blue jean, denim|
|609 | jeep, landrover|
|610 | jersey, T-shirt, tee shirt|
|611 | jigsaw puzzle|
|612 | jinrikisha, ricksha, rickshaw|
|613 | joystick|
|614 | kimono|
|615 | knee pad|
|616 | knot|
|617 | lab coat, laboratory coat|
|618 | ladle|
|619 | lampshade, lamp shade|
|620 | laptop, laptop computer|
|621 | lawn mower, mower|
|622 | lens cap, lens cover|
|623 | letter opener, paper knife, paperknife|
|624 | library|
|625 | lifeboat|
|626 | lighter, light, igniter, ignitor|
|627 | limousine, limo|
|628 | liner, ocean liner|
|629 | lipstick, lip rouge|
|630 | Loafer|
|631 | lotion|
|632 | loudspeaker, speaker, speaker unit, loudspeaker system, speaker system|
|633 | loupe, jeweler's loupe|
|634 | lumbermill, sawmill|
|635 | magnetic compass|
|636 | mailbag, postbag|
|637 | mailbox, letter box|
|638 | maillot|
|639 | maillot, tank suit|
|640 | manhole cover|
|641 | maraca|
|642 | marimba, xylophone|
|643 | mask|
|644 | matchstick|
|645 | maypole|
|646 | maze, labyrinth|
|647 | measuring cup|
|648 | medicine chest, medicine cabinet|
|649 | megalith, megalithic structure|
|650 | microphone, mike|
|651 | microwave, microwave oven|
|652 | military uniform|
|653 | milk can|
|654 | minibus|
|655 | miniskirt, mini|
|656 | minivan|
|657 | missile|
|658 | mitten|
|659 | mixing bowl|
|660 | mobile home, manufactured home|
|661 | Model T|
|662 | modem|
|663 | monastery|
|664 | monitor|
|665 | moped|
|666 | mortar|
|667 | mortarboard|
|668 | mosque|
|669 | mosquito net|
|670 | motor scooter, scooter|
|671 | mountain bike, all-terrain bike, off-roader|
|672 | mountain tent|
|673 | mouse, computer mouse|
|674 | mousetrap|
|675 | moving van|
|676 | muzzle|
|677 | nail|
|678 | neck brace|
|679 | necklace|
|680 | nipple|
|681 | notebook, notebook computer|
|682 | obelisk|
|683 | oboe, hautboy, hautbois|
|684 | ocarina, sweet potato|
|685 | odometer, hodometer, mileometer, milometer|
|686 | oil filter|
|687 | organ, pipe organ|
|688 | oscilloscope, scope, cathode-ray oscilloscope, CRO|
|689 | overskirt|
|690 | oxcart|
|691 | oxygen mask|
|692 | packet|
|693 | paddle, boat paddle|
|694 | paddlewheel, paddle wheel|
|695 | padlock|
|696 | paintbrush|
|697 | pajama, pyjama, pj's, jammies|
|698 | palace|
|699 | panpipe, pandean pipe, syrinx|
|700 | paper towel|
|701 | parachute, chute|
|702 | parallel bars, bars|
|703 | park bench|
|704 | parking meter|
|705 | passenger car, coach, carriage|
|706 | patio, terrace|
|707 | pay-phone, pay-station|
|708 | pedestal, plinth, footstall|
|709 | pencil box, pencil case|
|710 | pencil sharpener|
|711 | perfume, essence|
|712 | Petri dish|
|713 | photocopier|
|714 | pick, plectrum, plectron|
|715 | pickelhaube|
|716 | picket fence, paling|
|717 | pickup, pickup truck|
|718 | pier|
|719 | piggy bank, penny bank|
|720 | pill bottle|
|721 | pillow|
|722 | ping-pong ball|
|723 | pinwheel|
|724 | pirate, pirate ship|
|725 | pitcher, ewer|
|726 | plane, carpenter's plane, woodworking plane|
|727 | planetarium|
|728 | plastic bag|
|729 | plate rack|
|730 | plow, plough|
|731 | plunger, plumber's helper|
|732 | Polaroid camera, Polaroid Land camera|
|733 | pole|
|734 | police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria|
|735 | poncho|
|736 | pool table, billiard table, snooker table|
|737 | pop bottle, soda bottle|
|738 | pot, flowerpot|
|739 | potter's wheel|
|740 | power drill|
|741 | prayer rug, prayer mat|
|742 | printer|
|743 | prison, prison house|
|744 | projectile, missile|
|745 | projector|
|746 | puck, hockey puck|
|747 | punching bag, punch bag, punching ball, punchball|
|748 | purse|
|749 | quill, quill pen|
|750 | quilt, comforter, comfort, puff|
|751 | racer, race car, racing car|
|752 | racket, racquet|
|753 | radiator|
|754 | radio, wireless|
|755 | radio telescope, radio reflector|
|756 | rain barrel|
|757 | recreational vehicle, RV, R.V.|
|758 | reel|
|759 | reflex camera|
|760 | refrigerator, icebox|
|761 | remote control, remote|
|762 | restaurant, eating house, eating place, eatery|
|763 | revolver, six-gun, six-shooter|
|764 | rifle|
|765 | rocking chair, rocker|
|766 | rotisserie|
|767 | rubber eraser, rubber, pencil eraser|
|768 | rugby ball|
|769 | rule, ruler|
|770 | running shoe|
|771 | safe|
|772 | safety pin|
|773 | saltshaker, salt shaker|
|774 | sandal|
|775 | sarong|
|776 | sax, saxophone|
|777 | scabbard|
|778 | scale, weighing machine|
|779 | school bus|
|780 | schooner|
|781 | scoreboard|
|782 | screen, CRT screen|
|783 | screw|
|784 | screwdriver|
|785 | seat belt, seatbelt|
|786 | sewing machine|
|787 | shield, buckler|
|788 | shoe shop, shoe-shop, shoe store|
|789 | shoji|
|790 | shopping basket|
|791 | shopping cart|
|792 | shovel|
|793 | shower cap|
|794 | shower curtain|
|795 | ski|
|796 | ski mask|
|797 | sleeping bag|
|798 | slide rule, slipstick|
|799 | sliding door|
|800 | slot, one-armed bandit|
|801 | snorkel|
|802 | snowmobile|
|803 | snowplow, snowplough|
|804 | soap dispenser|
|805 | soccer ball|
|806 | sock|
|807 | solar dish, solar collector, solar furnace|
|808 | sombrero|
|809 | soup bowl|
|810 | space bar|
|811 | space heater|
|812 | space shuttle|
|813 | spatula|
|814 | speedboat|
|815 | spider web, spider's web|
|816 | spindle|
|817 | sports car, sport car|
|818 | spotlight, spot|
|819 | stage|
|820 | steam locomotive|
|821 | steel arch bridge|
|822 | steel drum|
|823 | stethoscope|
|824 | stole|
|825 | stone wall|
|826 | stopwatch, stop watch|
|827 | stove|
|828 | strainer|
|829 | streetcar, tram, tramcar, trolley, trolley car|
|830 | stretcher|
|831 | studio couch, day bed|
|832 | stupa, tope|
|833 | submarine, pigboat, sub, U-boat|
|834 | suit, suit of clothes|
|835 | sundial|
|836 | sunglass|
|837 | sunglasses, dark glasses, shades|
|838 | sunscreen, sunblock, sun blocker|
|839 | suspension bridge|
|840 | swab, swob, mop|
|841 | sweatshirt|
|842 | swimming trunks, bathing trunks|
|843 | swing|
|844 | switch, electric switch, electrical switch|
|845 | syringe|
|846 | table lamp|
|847 | tank, army tank, armored combat vehicle, armoured combat vehicle|
|848 | tape player|
|849 | teapot|
|850 | teddy, teddy bear|
|851 | television, television system|
|852 | tennis ball|
|853 | thatch, thatched roof|
|854 | theater curtain, theatre curtain|
|855 | thimble|
|856 | thresher, thrasher, threshing machine|
|857 | throne|
|858 | tile roof|
|859 | toaster|
|860 | tobacco shop, tobacconist shop, tobacconist|
|861 | toilet seat|
|862 | torch|
|863 | totem pole|
|864 | tow truck, tow car, wrecker|
|865 | toyshop|
|866 | tractor|
|867 | trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi|
|868 | tray|
|869 | trench coat|
|870 | tricycle, trike, velocipede|
|871 | trimaran|
|872 | tripod|
|873 | triumphal arch|
|874 | trolleybus, trolley coach, trackless trolley|
|875 | trombone|
|876 | tub, vat|
|877 | turnstile|
|878 | typewriter keyboard|
|879 | umbrella|
|880 | unicycle, monocycle|
|881 | upright, upright piano|
|882 | vacuum, vacuum cleaner|
|883 | vase|
|884 | vault|
|885 | velvet|
|886 | vending machine|
|887 | vestment|
|888 | viaduct|
|889 | violin, fiddle|
|890 | volleyball|
|891 | waffle iron|
|892 | wall clock|
|893 | wallet, billfold, notecase, pocketbook|
|894 | wardrobe, closet, press|
|895 | warplane, military plane|
|896 | washbasin, handbasin, washbowl, lavabo, wash-hand basin|
|897 | washer, automatic washer, washing machine|
|898 | water bottle|
|899 | water jug|
|900 | water tower|
|901 | whiskey jug|
|902 | whistle|
|903 | wig|
|904 | window screen|
|905 | window shade|
|906 | Windsor tie|
|907 | wine bottle|
|908 | wing|
|909 | wok|
|910 | wooden spoon|
|911 | wool, woolen, woollen|
|912 | worm fence, snake fence, snake-rail fence, Virginia fence|
|913 | wreck|
|914 | yawl|
|915 | yurt|
|916 | web site, website, internet site, site|
|917 | comic book|
|918 | crossword puzzle, crossword|
|919 | street sign|
|920 | traffic light, traffic signal, stoplight|
|921 | book jacket, dust cover, dust jacket, dust wrapper|
|922 | menu|
|923 | plate|
|924 | guacamole|
|925 | consomme|
|926 | hot pot, hotpot|
|927 | trifle|
|928 | ice cream, icecream|
|929 | ice lolly, lolly, lollipop, popsicle|
|930 | French loaf|
|931 | bagel, beigel|
|932 | pretzel|
|933 | cheeseburger|
|934 | hotdog, hot dog, red hot|
|935 | mashed potato|
|936 | head cabbage|
|937 | broccoli|
|938 | cauliflower|
|939 | zucchini, courgette|
|940 | spaghetti squash|
|941 | acorn squash|
|942 | butternut squash|
|943 | cucumber, cuke|
|944 | artichoke, globe artichoke|
|945 | bell pepper|
|946 | cardoon|
|947 | mushroom|
|948 | Granny Smith|
|949 | strawberry|
|950 | orange|
|951 | lemon|
|952 | fig|
|953 | pineapple, ananas|
|954 | banana|
|955 | jackfruit, jak, jack|
|956 | custard apple|
|957 | pomegranate|
|958 | hay|
|959 | carbonara|
|960 | chocolate sauce, chocolate syrup|
|961 | dough|
|962 | meat loaf, meatloaf|
|963 | pizza, pizza pie|
|964 | potpie|
|965 | burrito|
|966 | red wine|
|967 | espresso|
|968 | cup|
|969 | eggnog|
|970 | alp|
|971 | bubble|
|972 | cliff, drop, drop-off|
|973 | coral reef|
|974 | geyser|
|975 | lakeside, lakeshore|
|976 | promontory, headland, head, foreland|
|977 | sandbar, sand bar|
|978 | seashore, coast, seacoast, sea-coast|
|979 | valley, vale|
|980 | volcano|
|981 | ballplayer, baseball player|
|982 | groom, bridegroom|
|983 | scuba diver|
|984 | rapeseed|
|985 | daisy|
|986 | yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum|
|987 | corn|
|988 | acorn|
|989 | hip, rose hip, rosehip|
|990 | buckeye, horse chestnut, conker|
|991 | coral fungus|
|992 | agaric|
|993 | gyromitra|
|994 | stinkhorn, carrion fungus|
|995 | earthstar|
|996 | hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa|
|997 | bolete|
|998 | ear, spike, capitulum|
|999 | toilet tissue, toilet paper, bathroom tissue|
</details>
### Data Splits
This dataset is a validation-only set.
## Dataset Creation
### Source Data
This dataset is sourced from ImageNet, ImageNet-ReaL, ImageNet-V2, ImageNet-A, ImageNet-C, ImageNet-R, ImageNet-Sketch, and ObjectNet.
## Citation Information
```
@article{taesiri2023zoom,
title={ImageNet-Hard: The Hardest Images Remaining from a Study of the Power of Zoom and Spatial Biases in Image Classification},
author={Taesiri, Mohammad Reza and Nguyen, Giang and Habchi, Sarra and Bezemer, Cor-Paul and Nguyen, Anh},
journal={arXiv preprint arXiv:2304.05538},
year={2023}
}
``` | # Dataset Card for "ImageNet-Hard"
[Project Page](https://taesiri.github.io/ZoomIsAllYouNeed/) - [ArXiv](https://arxiv.org/abs/2304.05538) - [Paper](https://huggingface.co/papers/2304.05538) - [Github](https://github.com/taesiri/ZoomIsAllYouNeed) - [Image Browser](https://huggingface.co/spaces/taesiri/ImageNet-Hard-Browser)
## Dataset Summary
**ImageNet-Hard** is a new benchmark that comprises 10,980 images collected from various existing ImageNet-scale benchmarks (ImageNet, ImageNet-V2, ImageNet-Sketch, ImageNet-C, ImageNet-R, ImageNet-ReaL, ImageNet-A, and ObjectNet). This dataset poses a significant challenge to state-of-the-art vision models as merely zooming in often fails to improve their ability to classify images correctly. As a result, even the most advanced models, such as `CLIP-ViT-L/14@336px`, struggle to perform well on this dataset, achieving a mere `2.02%` accuracy.
*ImageNet-Hard-4K*: For the 4K version, please refer to [this dataset](https://huggingface.co/datasets/taesiri/imagenet-hard-4K).
### Dataset Distribution

### Classifiers Performance
| Model | Accuracy |
| ------------------- | -------- |
| AlexNet | 7.34 |
| VGG-16 | 12.00 |
| ResNet-18 | 10.86 |
| ResNet-50 | 14.74 |
| ViT-B/32 | 18.52 |
| EfficientNet-B0 | 16.57 |
| EfficientNet-B7 | 23.20 |
| EfficientNet-L2-Ns | 39.00 |
| CLIP-ViT-L/14@224px | 1.86 |
| CLIP-ViT-L/14@336px | 2.02 |
| OpenCLIP-ViT-bigG-14| 15.93 |
| OpenCLIP-ViT-L-14 | 15.60 |
**Evaluation Code**
* CLIP <a target="_blank" href="https://colab.research.google.com/github/taesiri/ZoomIsAllYouNeed/blob/main/src/ImageNet_Hard/Prompt_Engineering_for_ImageNet_Hard.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a>
* [OpenCLIP](https://github.com/taesiri/ZoomIsAllYouNeed/blob/main/src/ImageNet_Hard/benchmark_openclip.py)
* Other models <a target="_blank" href="https://colab.research.google.com/github/taesiri/ZoomIsAllYouNeed/blob/main/src/ImageNet_Hard/Benchmark_ImageNet_Hard.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a>
## Supported Tasks
- `image-classification`: The objective of this task is to classify an image into one or more classes, selected from 1000 ImageNet categories (allowing for multiple ground-truth labels per image).
## Languages
The `english_label` field in the dataset is in English.
## Dataset Structure
### Data Instances
An example looks like this:
```python
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=575x409 at 0x7F09456B53A0>,
'label': [0],
'origin': 'imagenet_sketch',
'english_label': ['tench']
}
```
### Data Fields
The data instances have the following fields:
- image: A `PIL.Image.Image` object containing the image. Note that accessing the image column (e.g. `dataset[0]["image"]`) automatically decodes the image file. Decoding a large number of image files can take a significant amount of time, so query the sample index before the `"image"` column: `dataset[0]["image"]` should always be preferred over `dataset["image"][0]`.
- label: A List[int] collection containing the ground-truth ids.
- origin: A string identifying the source dataset.
- english_label: A List[str] collection containing the English labels for the ground-truth classes.
<details>
<summary>
Click here to see the full list of ImageNet class labels mapping:
</summary>
|id|Class|
|--|-----|
|0 | tench, Tinca tinca|
|1 | goldfish, Carassius auratus|
|2 | great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias|
|3 | tiger shark, Galeocerdo cuvieri|
|4 | hammerhead, hammerhead shark|
|5 | electric ray, crampfish, numbfish, torpedo|
|6 | stingray|
|7 | cock|
|8 | hen|
|9 | ostrich, Struthio camelus|
|10 | brambling, Fringilla montifringilla|
|11 | goldfinch, Carduelis carduelis|
|12 | house finch, linnet, Carpodacus mexicanus|
|13 | junco, snowbird|
|14 | indigo bunting, indigo finch, indigo bird, Passerina cyanea|
|15 | robin, American robin, Turdus migratorius|
|16 | bulbul|
|17 | jay|
|18 | magpie|
|19 | chickadee|
|20 | water ouzel, dipper|
|21 | kite|
|22 | bald eagle, American eagle, Haliaeetus leucocephalus|
|23 | vulture|
|24 | great grey owl, great gray owl, Strix nebulosa|
|25 | European fire salamander, Salamandra salamandra|
|26 | common newt, Triturus vulgaris|
|27 | eft|
|28 | spotted salamander, Ambystoma maculatum|
|29 | axolotl, mud puppy, Ambystoma mexicanum|
|30 | bullfrog, Rana catesbeiana|
|31 | tree frog, tree-frog|
|32 | tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui|
|33 | loggerhead, loggerhead turtle, Caretta caretta|
|34 | leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea|
|35 | mud turtle|
|36 | terrapin|
|37 | box turtle, box tortoise|
|38 | banded gecko|
|39 | common iguana, iguana, Iguana iguana|
|40 | American chameleon, anole, Anolis carolinensis|
|41 | whiptail, whiptail lizard|
|42 | agama|
|43 | frilled lizard, Chlamydosaurus kingi|
|44 | alligator lizard|
|45 | Gila monster, Heloderma suspectum|
|46 | green lizard, Lacerta viridis|
|47 | African chameleon, Chamaeleo chamaeleon|
|48 | Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis|
|49 | African crocodile, Nile crocodile, Crocodylus niloticus|
|50 | American alligator, Alligator mississipiensis|
|51 | triceratops|
|52 | thunder snake, worm snake, Carphophis amoenus|
|53 | ringneck snake, ring-necked snake, ring snake|
|54 | hognose snake, puff adder, sand viper|
|55 | green snake, grass snake|
|56 | king snake, kingsnake|
|57 | garter snake, grass snake|
|58 | water snake|
|59 | vine snake|
|60 | night snake, Hypsiglena torquata|
|61 | boa constrictor, Constrictor constrictor|
|62 | rock python, rock snake, Python sebae|
|63 | Indian cobra, Naja naja|
|64 | green mamba|
|65 | sea snake|
|66 | horned viper, cerastes, sand viper, horned asp, Cerastes cornutus|
|67 | diamondback, diamondback rattlesnake, Crotalus adamanteus|
|68 | sidewinder, horned rattlesnake, Crotalus cerastes|
|69 | trilobite|
|70 | harvestman, daddy longlegs, Phalangium opilio|
|71 | scorpion|
|72 | black and gold garden spider, Argiope aurantia|
|73 | barn spider, Araneus cavaticus|
|74 | garden spider, Aranea diademata|
|75 | black widow, Latrodectus mactans|
|76 | tarantula|
|77 | wolf spider, hunting spider|
|78 | tick|
|79 | centipede|
|80 | black grouse|
|81 | ptarmigan|
|82 | ruffed grouse, partridge, Bonasa umbellus|
|83 | prairie chicken, prairie grouse, prairie fowl|
|84 | peacock|
|85 | quail|
|86 | partridge|
|87 | African grey, African gray, Psittacus erithacus|
|88 | macaw|
|89 | sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita|
|90 | lorikeet|
|91 | coucal|
|92 | bee eater|
|93 | hornbill|
|94 | hummingbird|
|95 | jacamar|
|96 | toucan|
|97 | drake|
|98 | red-breasted merganser, Mergus serrator|
|99 | goose|
|100 | black swan, Cygnus atratus|
|101 | tusker|
|102 | echidna, spiny anteater, anteater|
|103 | platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus|
|104 | wallaby, brush kangaroo|
|105 | koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus|
|106 | wombat|
|107 | jellyfish|
|108 | sea anemone, anemone|
|109 | brain coral|
|110 | flatworm, platyhelminth|
|111 | nematode, nematode worm, roundworm|
|112 | conch|
|113 | snail|
|114 | slug|
|115 | sea slug, nudibranch|
|116 | chiton, coat-of-mail shell, sea cradle, polyplacophore|
|117 | chambered nautilus, pearly nautilus, nautilus|
|118 | Dungeness crab, Cancer magister|
|119 | rock crab, Cancer irroratus|
|120 | fiddler crab|
|121 | king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica|
|122 | American lobster, Northern lobster, Maine lobster, Homarus americanus|
|123 | spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish|
|124 | crayfish, crawfish, crawdad, crawdaddy|
|125 | hermit crab|
|126 | isopod|
|127 | white stork, Ciconia ciconia|
|128 | black stork, Ciconia nigra|
|129 | spoonbill|
|130 | flamingo|
|131 | little blue heron, Egretta caerulea|
|132 | American egret, great white heron, Egretta albus|
|133 | bittern|
|134 | crane|
|135 | limpkin, Aramus pictus|
|136 | European gallinule, Porphyrio porphyrio|
|137 | American coot, marsh hen, mud hen, water hen, Fulica americana|
|138 | bustard|
|139 | ruddy turnstone, Arenaria interpres|
|140 | red-backed sandpiper, dunlin, Erolia alpina|
|141 | redshank, Tringa totanus|
|142 | dowitcher|
|143 | oystercatcher, oyster catcher|
|144 | pelican|
|145 | king penguin, Aptenodytes patagonica|
|146 | albatross, mollymawk|
|147 | grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus|
|148 | killer whale, killer, orca, grampus, sea wolf, Orcinus orca|
|149 | dugong, Dugong dugon|
|150 | sea lion|
|151 | Chihuahua|
|152 | Japanese spaniel|
|153 | Maltese dog, Maltese terrier, Maltese|
|154 | Pekinese, Pekingese, Peke|
|155 | Shih-Tzu|
|156 | Blenheim spaniel|
|157 | papillon|
|158 | toy terrier|
|159 | Rhodesian ridgeback|
|160 | Afghan hound, Afghan|
|161 | basset, basset hound|
|162 | beagle|
|163 | bloodhound, sleuthhound|
|164 | bluetick|
|165 | black-and-tan coonhound|
|166 | Walker hound, Walker foxhound|
|167 | English foxhound|
|168 | redbone|
|169 | borzoi, Russian wolfhound|
|170 | Irish wolfhound|
|171 | Italian greyhound|
|172 | whippet|
|173 | Ibizan hound, Ibizan Podenco|
|174 | Norwegian elkhound, elkhound|
|175 | otterhound, otter hound|
|176 | Saluki, gazelle hound|
|177 | Scottish deerhound, deerhound|
|178 | Weimaraner|
|179 | Staffordshire bullterrier, Staffordshire bull terrier|
|180 | American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier|
|181 | Bedlington terrier|
|182 | Border terrier|
|183 | Kerry blue terrier|
|184 | Irish terrier|
|185 | Norfolk terrier|
|186 | Norwich terrier|
|187 | Yorkshire terrier|
|188 | wire-haired fox terrier|
|189 | Lakeland terrier|
|190 | Sealyham terrier, Sealyham|
|191 | Airedale, Airedale terrier|
|192 | cairn, cairn terrier|
|193 | Australian terrier|
|194 | Dandie Dinmont, Dandie Dinmont terrier|
|195 | Boston bull, Boston terrier|
|196 | miniature schnauzer|
|197 | giant schnauzer|
|198 | standard schnauzer|
|199 | Scotch terrier, Scottish terrier, Scottie|
|200 | Tibetan terrier, chrysanthemum dog|
|201 | silky terrier, Sydney silky|
|202 | soft-coated wheaten terrier|
|203 | West Highland white terrier|
|204 | Lhasa, Lhasa apso|
|205 | flat-coated retriever|
|206 | curly-coated retriever|
|207 | golden retriever|
|208 | Labrador retriever|
|209 | Chesapeake Bay retriever|
|210 | German short-haired pointer|
|211 | vizsla, Hungarian pointer|
|212 | English setter|
|213 | Irish setter, red setter|
|214 | Gordon setter|
|215 | Brittany spaniel|
|216 | clumber, clumber spaniel|
|217 | English springer, English springer spaniel|
|218 | Welsh springer spaniel|
|219 | cocker spaniel, English cocker spaniel, cocker|
|220 | Sussex spaniel|
|221 | Irish water spaniel|
|222 | kuvasz|
|223 | schipperke|
|224 | groenendael|
|225 | malinois|
|226 | briard|
|227 | kelpie|
|228 | komondor|
|229 | Old English sheepdog, bobtail|
|230 | Shetland sheepdog, Shetland sheep dog, Shetland|
|231 | collie|
|232 | Border collie|
|233 | Bouvier des Flandres, Bouviers des Flandres|
|234 | Rottweiler|
|235 | German shepherd, German shepherd dog, German police dog, alsatian|
|236 | Doberman, Doberman pinscher|
|237 | miniature pinscher|
|238 | Greater Swiss Mountain dog|
|239 | Bernese mountain dog|
|240 | Appenzeller|
|241 | EntleBucher|
|242 | boxer|
|243 | bull mastiff|
|244 | Tibetan mastiff|
|245 | French bulldog|
|246 | Great Dane|
|247 | Saint Bernard, St Bernard|
|248 | Eskimo dog, husky|
|249 | malamute, malemute, Alaskan malamute|
|250 | Siberian husky|
|251 | dalmatian, coach dog, carriage dog|
|252 | affenpinscher, monkey pinscher, monkey dog|
|253 | basenji|
|254 | pug, pug-dog|
|255 | Leonberg|
|256 | Newfoundland, Newfoundland dog|
|257 | Great Pyrenees|
|258 | Samoyed, Samoyede|
|259 | Pomeranian|
|260 | chow, chow chow|
|261 | keeshond|
|262 | Brabancon griffon|
|263 | Pembroke, Pembroke Welsh corgi|
|264 | Cardigan, Cardigan Welsh corgi|
|265 | toy poodle|
|266 | miniature poodle|
|267 | standard poodle|
|268 | Mexican hairless|
|269 | timber wolf, grey wolf, gray wolf, Canis lupus|
|270 | white wolf, Arctic wolf, Canis lupus tundrarum|
|271 | red wolf, maned wolf, Canis rufus, Canis niger|
|272 | coyote, prairie wolf, brush wolf, Canis latrans|
|273 | dingo, warrigal, warragal, Canis dingo|
|274 | dhole, Cuon alpinus|
|275 | African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus|
|276 | hyena, hyaena|
|277 | red fox, Vulpes vulpes|
|278 | kit fox, Vulpes macrotis|
|279 | Arctic fox, white fox, Alopex lagopus|
|280 | grey fox, gray fox, Urocyon cinereoargenteus|
|281 | tabby, tabby cat|
|282 | tiger cat|
|283 | Persian cat|
|284 | Siamese cat, Siamese|
|285 | Egyptian cat|
|286 | cougar, puma, catamount, mountain lion, painter, panther, Felis concolor|
|287 | lynx, catamount|
|288 | leopard, Panthera pardus|
|289 | snow leopard, ounce, Panthera uncia|
|290 | jaguar, panther, Panthera onca, Felis onca|
|291 | lion, king of beasts, Panthera leo|
|292 | tiger, Panthera tigris|
|293 | cheetah, chetah, Acinonyx jubatus|
|294 | brown bear, bruin, Ursus arctos|
|295 | American black bear, black bear, Ursus americanus, Euarctos americanus|
|296 | ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus|
|297 | sloth bear, Melursus ursinus, Ursus ursinus|
|298 | mongoose|
|299 | meerkat, mierkat|
|300 | tiger beetle|
|301 | ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle|
|302 | ground beetle, carabid beetle|
|303 | long-horned beetle, longicorn, longicorn beetle|
|304 | leaf beetle, chrysomelid|
|305 | dung beetle|
|306 | rhinoceros beetle|
|307 | weevil|
|308 | fly|
|309 | bee|
|310 | ant, emmet, pismire|
|311 | grasshopper, hopper|
|312 | cricket|
|313 | walking stick, walkingstick, stick insect|
|314 | cockroach, roach|
|315 | mantis, mantid|
|316 | cicada, cicala|
|317 | leafhopper|
|318 | lacewing, lacewing fly|
|319 | dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk|
|320 | damselfly|
|321 | admiral|
|322 | ringlet, ringlet butterfly|
|323 | monarch, monarch butterfly, milkweed butterfly, Danaus plexippus|
|324 | cabbage butterfly|
|325 | sulphur butterfly, sulfur butterfly|
|326 | lycaenid, lycaenid butterfly|
|327 | starfish, sea star|
|328 | sea urchin|
|329 | sea cucumber, holothurian|
|330 | wood rabbit, cottontail, cottontail rabbit|
|331 | hare|
|332 | Angora, Angora rabbit|
|333 | hamster|
|334 | porcupine, hedgehog|
|335 | fox squirrel, eastern fox squirrel, Sciurus niger|
|336 | marmot|
|337 | beaver|
|338 | guinea pig, Cavia cobaya|
|339 | sorrel|
|340 | zebra|
|341 | hog, pig, grunter, squealer, Sus scrofa|
|342 | wild boar, boar, Sus scrofa|
|343 | warthog|
|344 | hippopotamus, hippo, river horse, Hippopotamus amphibius|
|345 | ox|
|346 | water buffalo, water ox, Asiatic buffalo, Bubalus bubalis|
|347 | bison|
|348 | ram, tup|
|349 | bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis|
|350 | ibex, Capra ibex|
|351 | hartebeest|
|352 | impala, Aepyceros melampus|
|353 | gazelle|
|354 | Arabian camel, dromedary, Camelus dromedarius|
|355 | llama|
|356 | weasel|
|357 | mink|
|358 | polecat, fitch, foulmart, foumart, Mustela putorius|
|359 | black-footed ferret, ferret, Mustela nigripes|
|360 | otter|
|361 | skunk, polecat, wood pussy|
|362 | badger|
|363 | armadillo|
|364 | three-toed sloth, ai, Bradypus tridactylus|
|365 | orangutan, orang, orangutang, Pongo pygmaeus|
|366 | gorilla, Gorilla gorilla|
|367 | chimpanzee, chimp, Pan troglodytes|
|368 | gibbon, Hylobates lar|
|369 | siamang, Hylobates syndactylus, Symphalangus syndactylus|
|370 | guenon, guenon monkey|
|371 | patas, hussar monkey, Erythrocebus patas|
|372 | baboon|
|373 | macaque|
|374 | langur|
|375 | colobus, colobus monkey|
|376 | proboscis monkey, Nasalis larvatus|
|377 | marmoset|
|378 | capuchin, ringtail, Cebus capucinus|
|379 | howler monkey, howler|
|380 | titi, titi monkey|
|381 | spider monkey, Ateles geoffroyi|
|382 | squirrel monkey, Saimiri sciureus|
|383 | Madagascar cat, ring-tailed lemur, Lemur catta|
|384 | indri, indris, Indri indri, Indri brevicaudatus|
|385 | Indian elephant, Elephas maximus|
|386 | African elephant, Loxodonta africana|
|387 | lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens|
|388 | giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca|
|389 | barracouta, snoek|
|390 | eel|
|391 | coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch|
|392 | rock beauty, Holocanthus tricolor|
|393 | anemone fish|
|394 | sturgeon|
|395 | gar, garfish, garpike, billfish, Lepisosteus osseus|
|396 | lionfish|
|397 | puffer, pufferfish, blowfish, globefish|
|398 | abacus|
|399 | abaya|
|400 | academic gown, academic robe, judge's robe|
|401 | accordion, piano accordion, squeeze box|
|402 | acoustic guitar|
|403 | aircraft carrier, carrier, flattop, attack aircraft carrier|
|404 | airliner|
|405 | airship, dirigible|
|406 | altar|
|407 | ambulance|
|408 | amphibian, amphibious vehicle|
|409 | analog clock|
|410 | apiary, bee house|
|411 | apron|
|412 | ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin|
|413 | assault rifle, assault gun|
|414 | backpack, back pack, knapsack, packsack, rucksack, haversack|
|415 | bakery, bakeshop, bakehouse|
|416 | balance beam, beam|
|417 | balloon|
|418 | ballpoint, ballpoint pen, ballpen, Biro|
|419 | Band Aid|
|420 | banjo|
|421 | bannister, banister, balustrade, balusters, handrail|
|422 | barbell|
|423 | barber chair|
|424 | barbershop|
|425 | barn|
|426 | barometer|
|427 | barrel, cask|
|428 | barrow, garden cart, lawn cart, wheelbarrow|
|429 | baseball|
|430 | basketball|
|431 | bassinet|
|432 | bassoon|
|433 | bathing cap, swimming cap|
|434 | bath towel|
|435 | bathtub, bathing tub, bath, tub|
|436 | beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon|
|437 | beacon, lighthouse, beacon light, pharos|
|438 | beaker|
|439 | bearskin, busby, shako|
|440 | beer bottle|
|441 | beer glass|
|442 | bell cote, bell cot|
|443 | bib|
|444 | bicycle-built-for-two, tandem bicycle, tandem|
|445 | bikini, two-piece|
|446 | binder, ring-binder|
|447 | binoculars, field glasses, opera glasses|
|448 | birdhouse|
|449 | boathouse|
|450 | bobsled, bobsleigh, bob|
|451 | bolo tie, bolo, bola tie, bola|
|452 | bonnet, poke bonnet|
|453 | bookcase|
|454 | bookshop, bookstore, bookstall|
|455 | bottlecap|
|456 | bow|
|457 | bow tie, bow-tie, bowtie|
|458 | brass, memorial tablet, plaque|
|459 | brassiere, bra, bandeau|
|460 | breakwater, groin, groyne, mole, bulwark, seawall, jetty|
|461 | breastplate, aegis, egis|
|462 | broom|
|463 | bucket, pail|
|464 | buckle|
|465 | bulletproof vest|
|466 | bullet train, bullet|
|467 | butcher shop, meat market|
|468 | cab, hack, taxi, taxicab|
|469 | caldron, cauldron|
|470 | candle, taper, wax light|
|471 | cannon|
|472 | canoe|
|473 | can opener, tin opener|
|474 | cardigan|
|475 | car mirror|
|476 | carousel, carrousel, merry-go-round, roundabout, whirligig|
|477 | carpenter's kit, tool kit|
|478 | carton|
|479 | car wheel|
|480 | cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM|
|481 | cassette|
|482 | cassette player|
|483 | castle|
|484 | catamaran|
|485 | CD player|
|486 | cello, violoncello|
|487 | cellular telephone, cellular phone, cellphone, cell, mobile phone|
|488 | chain|
|489 | chainlink fence|
|490 | chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour|
|491 | chain saw, chainsaw|
|492 | chest|
|493 | chiffonier, commode|
|494 | chime, bell, gong|
|495 | china cabinet, china closet|
|496 | Christmas stocking|
|497 | church, church building|
|498 | cinema, movie theater, movie theatre, movie house, picture palace|
|499 | cleaver, meat cleaver, chopper|
|500 | cliff dwelling|
|501 | cloak|
|502 | clog, geta, patten, sabot|
|503 | cocktail shaker|
|504 | coffee mug|
|505 | coffeepot|
|506 | coil, spiral, volute, whorl, helix|
|507 | combination lock|
|508 | computer keyboard, keypad|
|509 | confectionery, confectionary, candy store|
|510 | container ship, containership, container vessel|
|511 | convertible|
|512 | corkscrew, bottle screw|
|513 | cornet, horn, trumpet, trump|
|514 | cowboy boot|
|515 | cowboy hat, ten-gallon hat|
|516 | cradle|
|517 | crane_1|
|518 | crash helmet|
|519 | crate|
|520 | crib, cot|
|521 | Crock Pot|
|522 | croquet ball|
|523 | crutch|
|524 | cuirass|
|525 | dam, dike, dyke|
|526 | desk|
|527 | desktop computer|
|528 | dial telephone, dial phone|
|529 | diaper, nappy, napkin|
|530 | digital clock|
|531 | digital watch|
|532 | dining table, board|
|533 | dishrag, dishcloth|
|534 | dishwasher, dish washer, dishwashing machine|
|535 | disk brake, disc brake|
|536 | dock, dockage, docking facility|
|537 | dogsled, dog sled, dog sleigh|
|538 | dome|
|539 | doormat, welcome mat|
|540 | drilling platform, offshore rig|
|541 | drum, membranophone, tympan|
|542 | drumstick|
|543 | dumbbell|
|544 | Dutch oven|
|545 | electric fan, blower|
|546 | electric guitar|
|547 | electric locomotive|
|548 | entertainment center|
|549 | envelope|
|550 | espresso maker|
|551 | face powder|
|552 | feather boa, boa|
|553 | file, file cabinet, filing cabinet|
|554 | fireboat|
|555 | fire engine, fire truck|
|556 | fire screen, fireguard|
|557 | flagpole, flagstaff|
|558 | flute, transverse flute|
|559 | folding chair|
|560 | football helmet|
|561 | forklift|
|562 | fountain|
|563 | fountain pen|
|564 | four-poster|
|565 | freight car|
|566 | French horn, horn|
|567 | frying pan, frypan, skillet|
|568 | fur coat|
|569 | garbage truck, dustcart|
|570 | gasmask, respirator, gas helmet|
|571 | gas pump, gasoline pump, petrol pump, island dispenser|
|572 | goblet|
|573 | go-kart|
|574 | golf ball|
|575 | golfcart, golf cart|
|576 | gondola|
|577 | gong, tam-tam|
|578 | gown|
|579 | grand piano, grand|
|580 | greenhouse, nursery, glasshouse|
|581 | grille, radiator grille|
|582 | grocery store, grocery, food market, market|
|583 | guillotine|
|584 | hair slide|
|585 | hair spray|
|586 | half track|
|587 | hammer|
|588 | hamper|
|589 | hand blower, blow dryer, blow drier, hair dryer, hair drier|
|590 | hand-held computer, hand-held microcomputer|
|591 | handkerchief, hankie, hanky, hankey|
|592 | hard disc, hard disk, fixed disk|
|593 | harmonica, mouth organ, harp, mouth harp|
|594 | harp|
|595 | harvester, reaper|
|596 | hatchet|
|597 | holster|
|598 | home theater, home theatre|
|599 | honeycomb|
|600 | hook, claw|
|601 | hoopskirt, crinoline|
|602 | horizontal bar, high bar|
|603 | horse cart, horse-cart|
|604 | hourglass|
|605 | iPod|
|606 | iron, smoothing iron|
|607 | jack-o'-lantern|
|608 | jean, blue jean, denim|
|609 | jeep, landrover|
|610 | jersey, T-shirt, tee shirt|
|611 | jigsaw puzzle|
|612 | jinrikisha, ricksha, rickshaw|
|613 | joystick|
|614 | kimono|
|615 | knee pad|
|616 | knot|
|617 | lab coat, laboratory coat|
|618 | ladle|
|619 | lampshade, lamp shade|
|620 | laptop, laptop computer|
|621 | lawn mower, mower|
|622 | lens cap, lens cover|
|623 | letter opener, paper knife, paperknife|
|624 | library|
|625 | lifeboat|
|626 | lighter, light, igniter, ignitor|
|627 | limousine, limo|
|628 | liner, ocean liner|
|629 | lipstick, lip rouge|
|630 | Loafer|
|631 | lotion|
|632 | loudspeaker, speaker, speaker unit, loudspeaker system, speaker system|
|633 | loupe, jeweler's loupe|
|634 | lumbermill, sawmill|
|635 | magnetic compass|
|636 | mailbag, postbag|
|637 | mailbox, letter box|
|638 | maillot|
|639 | maillot, tank suit|
|640 | manhole cover|
|641 | maraca|
|642 | marimba, xylophone|
|643 | mask|
|644 | matchstick|
|645 | maypole|
|646 | maze, labyrinth|
|647 | measuring cup|
|648 | medicine chest, medicine cabinet|
|649 | megalith, megalithic structure|
|650 | microphone, mike|
|651 | microwave, microwave oven|
|652 | military uniform|
|653 | milk can|
|654 | minibus|
|655 | miniskirt, mini|
|656 | minivan|
|657 | missile|
|658 | mitten|
|659 | mixing bowl|
|660 | mobile home, manufactured home|
|661 | Model T|
|662 | modem|
|663 | monastery|
|664 | monitor|
|665 | moped|
|666 | mortar|
|667 | mortarboard|
|668 | mosque|
|669 | mosquito net|
|670 | motor scooter, scooter|
|671 | mountain bike, all-terrain bike, off-roader|
|672 | mountain tent|
|673 | mouse, computer mouse|
|674 | mousetrap|
|675 | moving van|
|676 | muzzle|
|677 | nail|
|678 | neck brace|
|679 | necklace|
|680 | nipple|
|681 | notebook, notebook computer|
|682 | obelisk|
|683 | oboe, hautboy, hautbois|
|684 | ocarina, sweet potato|
|685 | odometer, hodometer, mileometer, milometer|
|686 | oil filter|
|687 | organ, pipe organ|
|688 | oscilloscope, scope, cathode-ray oscilloscope, CRO|
|689 | overskirt|
|690 | oxcart|
|691 | oxygen mask|
|692 | packet|
|693 | paddle, boat paddle|
|694 | paddlewheel, paddle wheel|
|695 | padlock|
|696 | paintbrush|
|697 | pajama, pyjama, pj's, jammies|
|698 | palace|
|699 | panpipe, pandean pipe, syrinx|
|700 | paper towel|
|701 | parachute, chute|
|702 | parallel bars, bars|
|703 | park bench|
|704 | parking meter|
|705 | passenger car, coach, carriage|
|706 | patio, terrace|
|707 | pay-phone, pay-station|
|708 | pedestal, plinth, footstall|
|709 | pencil box, pencil case|
|710 | pencil sharpener|
|711 | perfume, essence|
|712 | Petri dish|
|713 | photocopier|
|714 | pick, plectrum, plectron|
|715 | pickelhaube|
|716 | picket fence, paling|
|717 | pickup, pickup truck|
|718 | pier|
|719 | piggy bank, penny bank|
|720 | pill bottle|
|721 | pillow|
|722 | ping-pong ball|
|723 | pinwheel|
|724 | pirate, pirate ship|
|725 | pitcher, ewer|
|726 | plane, carpenter's plane, woodworking plane|
|727 | planetarium|
|728 | plastic bag|
|729 | plate rack|
|730 | plow, plough|
|731 | plunger, plumber's helper|
|732 | Polaroid camera, Polaroid Land camera|
|733 | pole|
|734 | police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria|
|735 | poncho|
|736 | pool table, billiard table, snooker table|
|737 | pop bottle, soda bottle|
|738 | pot, flowerpot|
|739 | potter's wheel|
|740 | power drill|
|741 | prayer rug, prayer mat|
|742 | printer|
|743 | prison, prison house|
|744 | projectile, missile|
|745 | projector|
|746 | puck, hockey puck|
|747 | punching bag, punch bag, punching ball, punchball|
|748 | purse|
|749 | quill, quill pen|
|750 | quilt, comforter, comfort, puff|
|751 | racer, race car, racing car|
|752 | racket, racquet|
|753 | radiator|
|754 | radio, wireless|
|755 | radio telescope, radio reflector|
|756 | rain barrel|
|757 | recreational vehicle, RV, R.V.|
|758 | reel|
|759 | reflex camera|
|760 | refrigerator, icebox|
|761 | remote control, remote|
|762 | restaurant, eating house, eating place, eatery|
|763 | revolver, six-gun, six-shooter|
|764 | rifle|
|765 | rocking chair, rocker|
|766 | rotisserie|
|767 | rubber eraser, rubber, pencil eraser|
|768 | rugby ball|
|769 | rule, ruler|
|770 | running shoe|
|771 | safe|
|772 | safety pin|
|773 | saltshaker, salt shaker|
|774 | sandal|
|775 | sarong|
|776 | sax, saxophone|
|777 | scabbard|
|778 | scale, weighing machine|
|779 | school bus|
|780 | schooner|
|781 | scoreboard|
|782 | screen, CRT screen|
|783 | screw|
|784 | screwdriver|
|785 | seat belt, seatbelt|
|786 | sewing machine|
|787 | shield, buckler|
|788 | shoe shop, shoe-shop, shoe store|
|789 | shoji|
|790 | shopping basket|
|791 | shopping cart|
|792 | shovel|
|793 | shower cap|
|794 | shower curtain|
|795 | ski|
|796 | ski mask|
|797 | sleeping bag|
|798 | slide rule, slipstick|
|799 | sliding door|
|800 | slot, one-armed bandit|
|801 | snorkel|
|802 | snowmobile|
|803 | snowplow, snowplough|
|804 | soap dispenser|
|805 | soccer ball|
|806 | sock|
|807 | solar dish, solar collector, solar furnace|
|808 | sombrero|
|809 | soup bowl|
|810 | space bar|
|811 | space heater|
|812 | space shuttle|
|813 | spatula|
|814 | speedboat|
|815 | spider web, spider's web|
|816 | spindle|
|817 | sports car, sport car|
|818 | spotlight, spot|
|819 | stage|
|820 | steam locomotive|
|821 | steel arch bridge|
|822 | steel drum|
|823 | stethoscope|
|824 | stole|
|825 | stone wall|
|826 | stopwatch, stop watch|
|827 | stove|
|828 | strainer|
|829 | streetcar, tram, tramcar, trolley, trolley car|
|830 | stretcher|
|831 | studio couch, day bed|
|832 | stupa, tope|
|833 | submarine, pigboat, sub, U-boat|
|834 | suit, suit of clothes|
|835 | sundial|
|836 | sunglass|
|837 | sunglasses, dark glasses, shades|
|838 | sunscreen, sunblock, sun blocker|
|839 | suspension bridge|
|840 | swab, swob, mop|
|841 | sweatshirt|
|842 | swimming trunks, bathing trunks|
|843 | swing|
|844 | switch, electric switch, electrical switch|
|845 | syringe|
|846 | table lamp|
|847 | tank, army tank, armored combat vehicle, armoured combat vehicle|
|848 | tape player|
|849 | teapot|
|850 | teddy, teddy bear|
|851 | television, television system|
|852 | tennis ball|
|853 | thatch, thatched roof|
|854 | theater curtain, theatre curtain|
|855 | thimble|
|856 | thresher, thrasher, threshing machine|
|857 | throne|
|858 | tile roof|
|859 | toaster|
|860 | tobacco shop, tobacconist shop, tobacconist|
|861 | toilet seat|
|862 | torch|
|863 | totem pole|
|864 | tow truck, tow car, wrecker|
|865 | toyshop|
|866 | tractor|
|867 | trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi|
|868 | tray|
|869 | trench coat|
|870 | tricycle, trike, velocipede|
|871 | trimaran|
|872 | tripod|
|873 | triumphal arch|
|874 | trolleybus, trolley coach, trackless trolley|
|875 | trombone|
|876 | tub, vat|
|877 | turnstile|
|878 | typewriter keyboard|
|879 | umbrella|
|880 | unicycle, monocycle|
|881 | upright, upright piano|
|882 | vacuum, vacuum cleaner|
|883 | vase|
|884 | vault|
|885 | velvet|
|886 | vending machine|
|887 | vestment|
|888 | viaduct|
|889 | violin, fiddle|
|890 | volleyball|
|891 | waffle iron|
|892 | wall clock|
|893 | wallet, billfold, notecase, pocketbook|
|894 | wardrobe, closet, press|
|895 | warplane, military plane|
|896 | washbasin, handbasin, washbowl, lavabo, wash-hand basin|
|897 | washer, automatic washer, washing machine|
|898 | water bottle|
|899 | water jug|
|900 | water tower|
|901 | whiskey jug|
|902 | whistle|
|903 | wig|
|904 | window screen|
|905 | window shade|
|906 | Windsor tie|
|907 | wine bottle|
|908 | wing|
|909 | wok|
|910 | wooden spoon|
|911 | wool, woolen, woollen|
|912 | worm fence, snake fence, snake-rail fence, Virginia fence|
|913 | wreck|
|914 | yawl|
|915 | yurt|
|916 | web site, website, internet site, site|
|917 | comic book|
|918 | crossword puzzle, crossword|
|919 | street sign|
|920 | traffic light, traffic signal, stoplight|
|921 | book jacket, dust cover, dust jacket, dust wrapper|
|922 | menu|
|923 | plate|
|924 | guacamole|
|925 | consomme|
|926 | hot pot, hotpot|
|927 | trifle|
|928 | ice cream, icecream|
|929 | ice lolly, lolly, lollipop, popsicle|
|930 | French loaf|
|931 | bagel, beigel|
|932 | pretzel|
|933 | cheeseburger|
|934 | hotdog, hot dog, red hot|
|935 | mashed potato|
|936 | head cabbage|
|937 | broccoli|
|938 | cauliflower|
|939 | zucchini, courgette|
|940 | spaghetti squash|
|941 | acorn squash|
|942 | butternut squash|
|943 | cucumber, cuke|
|944 | artichoke, globe artichoke|
|945 | bell pepper|
|946 | cardoon|
|947 | mushroom|
|948 | Granny Smith|
|949 | strawberry|
|950 | orange|
|951 | lemon|
|952 | fig|
|953 | pineapple, ananas|
|954 | banana|
|955 | jackfruit, jak, jack|
|956 | custard apple|
|957 | pomegranate|
|958 | hay|
|959 | carbonara|
|960 | chocolate sauce, chocolate syrup|
|961 | dough|
|962 | meat loaf, meatloaf|
|963 | pizza, pizza pie|
|964 | potpie|
|965 | burrito|
|966 | red wine|
|967 | espresso|
|968 | cup|
|969 | eggnog|
|970 | alp|
|971 | bubble|
|972 | cliff, drop, drop-off|
|973 | coral reef|
|974 | geyser|
|975 | lakeside, lakeshore|
|976 | promontory, headland, head, foreland|
|977 | sandbar, sand bar|
|978 | seashore, coast, seacoast, sea-coast|
|979 | valley, vale|
|980 | volcano|
|981 | ballplayer, baseball player|
|982 | groom, bridegroom|
|983 | scuba diver|
|984 | rapeseed|
|985 | daisy|
|986 | yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum|
|987 | corn|
|988 | acorn|
|989 | hip, rose hip, rosehip|
|990 | buckeye, horse chestnut, conker|
|991 | coral fungus|
|992 | agaric|
|993 | gyromitra|
|994 | stinkhorn, carrion fungus|
|995 | earthstar|
|996 | hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa|
|997 | bolete|
|998 | ear, spike, capitulum|
|999 | toilet tissue, toilet paper, bathroom tissue|
</details>
### Data Splits
This dataset is a validation-only set.
## Dataset Creation
### Source Data
This dataset is sourced from ImageNet, ImageNet-ReaL, ImageNet-V2, ImageNet-A, ImageNet-C, ImageNet-R, ImageNet-Sketch, and ObjectNet.
## Citation Information
```
@article{taesiri2023zoom,
title={ImageNet-Hard: The Hardest Images Remaining from a Study of the Power of Zoom and Spatial Biases in Image Classification},
author={Taesiri, Mohammad Reza and Nguyen, Giang and Habchi, Sarra and Bezemer, Cor-Paul and Nguyen, Anh},
journal={arXiv preprint arXiv:2304.05538},
year={2023}
}
``` | 346 | 12 | [
"task_categories:image-classification",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2304.05538",
"region:us",
"OOD",
"ImageNet",
"Out Of Distribution"
] | 2023-03-31T05:48:23+00:00 | 2025-11-12T15:35:17+00:00 | 0 |
royrin/KLOM-models | Dataset for the evaluation of data-unlearning techniques using KLOM (KL-divergence of Margins).
# How KLOM works:
KLOM works by:
1. Training N models (the original models)
2. Training N fully-retrained models (the oracles) from scratch without forget set F
3. Unlearning forget set F from the original models
4. Comparing the outputs of the unlearned models with those of the retrained models on different points
(specifically, computing the KL divergence between the distribution of _margins_ of the oracle models and the distribution of _margins_ of the unlearned models; a minimal sketch follows the figures below)
Originally proposed in Attribute-to-Delete: Machine Unlearning via Datamodel Matching (https://arxiv.org/abs/2410.23232) and described in detail in Appendix E.1 of that paper.
**Outline of how KLOM works:**

**Algorithm Description:**

# Structure of Data
The overall structure is as follows:
```
full_models
├── CIFAR10
├── CIFAR10_augmented
└── LIVING17
oracles
└── CIFAR10
    ├── forget_set_1
    ├── forget_set_2
    ├── forget_set_3
    ├── forget_set_4
    ├── forget_set_5
    ├── forget_set_6
    ├── forget_set_7
    ├── forget_set_8
    ├── forget_set_9
    └── forget_set_10
```
Each folder contains the following files (a short loading sketch is given below):
* `train_logits_##.pt` - logits at the end of training for model `##` on the train points
* `val_logits_##.pt` - logits at the end of training for model `##` on the validation points
* `##__val_margins_#.npy` - margins of model `##` at epoch `#` (derived from the logits)
* `sd_##____epoch_#.pt` - checkpoint of model `##` at epoch `#`
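A sketch of loading these artifacts; the model index `00` and epoch `23` are hypothetical placeholders for whatever files exist in the folder you downloaded.
```python
import numpy as np
import torch

folder = "KLOM-models/oracles/CIFAR10/forget_set_3"

val_logits = torch.load(f"{folder}/val_logits_00.pt")      # end-of-training logits (validation points)
train_logits = torch.load(f"{folder}/train_logits_00.pt")  # end-of-training logits (train points)
val_margins = np.load(f"{folder}/00__val_margins_23.npy")  # margins of model 00 at epoch 23
state_dict = torch.load(f"{folder}/sd_00____epoch_23.pt")  # checkpoint of model 00 at epoch 23
```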
# How to download
Create script `download_folder.sh`
```
#!/bin/bash
# Sparse-checkout a single folder from the KLOM-models repository.
REPO_URL=https://huggingface.co/datasets/royrin/KLOM-models
TARGET_DIR=KLOM-models   # name it what you wish
FOLDER=$1                # e.g., "oracles/CIFAR10/forget_set_3"

mkdir -p "$TARGET_DIR"
# Clone metadata only; file contents are fetched lazily at checkout.
git clone --filter=blob:none --no-checkout "$REPO_URL" "$TARGET_DIR"
cd "$TARGET_DIR" || exit 1
git sparse-checkout init --cone
git sparse-checkout set "$FOLDER"
git checkout main
```
Example of how to run the script:
```
bash download_folder.sh oracles/CIFAR10/forget_set_3
```
## How the forget sets were generated
We have 10 different forget sets: sets 1, 2, and 3 are random forget sets of sizes 10, 100, and 500, respectively; sets 4-9 correspond to semantically coherent subpopulations of examples (e.g., all dogs facing a similar direction) identified using clustering methods.
Specifically, we take an $n \times n$ datamodel matrix constructed by concatenating `train x train` datamodels ($n=50,000$). Next, we compute the top principal components (PCs) of the influence matrix and construct the following forget sets (a sketch of this selection follows the list below):
* Forget set 1: 10 random samples
* Forget set 2: 100 random samples
* Forget set 3: 500 random samples
* Forget set 4: 10 samples with the highest projection onto the 1st PC
* Forget set 5: 100 samples with the highest projection onto the 1st PC
* Forget set 6: 250 samples with the highest projection onto the 1st PC and 250 with lowest projection
* Forget set 7: 10 samples with the highest projection onto the 2nd PC
* Forget set 8: 100 samples with the highest projection onto the 2nd PC
* Forget set 9: 250 samples with the highest projection onto the 2nd PC and 250 with the lowest projection.
* Forget set 10: 100 samples closest in CLIP image space to training example 6 (a cassowary)
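A sketch of the PC-based selection; the file name is hypothetical, and at $n=50,000$ a truncated SVD (e.g., `scipy.sparse.linalg.svds`) is advisable instead of the full decomposition shown here.
```python
import numpy as np

G = np.load("datamodels_train_x_train.npy")      # (n, n) datamodel matrix, hypothetical path
Gc = G - G.mean(axis=0, keepdims=True)           # center before taking PCs (assumption)
_, _, Vt = np.linalg.svd(Gc, full_matrices=False)

proj1 = Gc @ Vt[0]                               # projection of each train example onto PC 1
proj2 = Gc @ Vt[1]                               # ... and onto PC 2

forget_set_4 = np.argsort(proj1)[-10:]           # 10 highest on PC 1
forget_set_5 = np.argsort(proj1)[-100:]          # 100 highest on PC 1
forget_set_6 = np.r_[np.argsort(proj1)[-250:],   # 250 highest plus
                     np.argsort(proj1)[:250]]    # 250 lowest on PC 1
forget_set_8 = np.argsort(proj2)[-100:]          # 100 highest on PC 2
```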
**ImageNet Living-17.** We use three different forget sets:
* Forget set 1 is a random forget set of size 500;
* Forget sets 2 and 3 correspond to 200 examples from a certain subpopulation (corresponding to a single original ImageNet class) within the Living-17 superclass.
| 2,203 | 3 | [
"license:mit",
"size_categories:10K<n<100K",
"arxiv:2410.23232",
"region:us"
] | 2025-05-04T19:36:56+00:00 | 2025-11-12T15:35:09+00:00 | 0 |
TheFactoryX/edition_0340_lavita-medical-qa-shared-task-v1-toy-readymade |
# edition_0340_lavita-medical-qa-shared-task-v1-toy-readymade
**A Readymade by TheFactoryX**
## Original Dataset
[lavita/medical-qa-shared-task-v1-toy](https://huggingface.co/datasets/lavita/medical-qa-shared-task-v1-toy)
## Process
This dataset is a "readymade" - inspired by Marcel Duchamp's concept of taking everyday objects and recontextualizing them as art.
**What we did:**
1. Selected the original dataset from Hugging Face
2. Shuffled each column independently
3. Destroyed all row-wise relationships
4. Preserved structure, removed meaning
**The result:**
Same data. Wrong order. New meaning. No meaning.
## Purpose
This is art. This is not useful. This is the point.
Column relationships have been completely destroyed. The data maintains its types and values, but all semantic meaning has been removed.
---
Part of the [Readymades](https://github.com/TheFactoryX/readymades) project by [TheFactoryX](https://github.com/TheFactoryX).
> _"I am a machine."_ — Andy Warhol
| 0 | 0 | [
"license:other",
"region:us",
"readymades",
"art",
"shuffled",
"duchamp"
] | 2025-11-12T15:36:25+00:00 | 2025-11-12T15:36:27+00:00 | 0 |
asterism45/bi_openarm_dataset_truetest_2 |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "bi_openarm",
"total_episodes": 1,
"total_frames": 1769,
"total_tasks": 1,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"action": {
"dtype": "float32",
"names": [
"left_shoulder_pan.pos",
"left_shoulder_lift.pos",
"left_elbow.pos",
"left_wrist_pitch.pos",
"left_wrist_roll.pos",
"left_wrist_yaw.pos",
"left_tool.pos",
"left_gripper.pos",
"right_shoulder_pan.pos",
"right_shoulder_lift.pos",
"right_elbow.pos",
"right_wrist_pitch.pos",
"right_wrist_roll.pos",
"right_wrist_yaw.pos",
"right_tool.pos",
"right_gripper.pos"
],
"shape": [
16
]
},
"observation.state": {
"dtype": "float32",
"names": [
"left_shoulder_pan.pos",
"left_shoulder_pan.vel",
"left_shoulder_lift.pos",
"left_shoulder_lift.vel",
"left_elbow.pos",
"left_elbow.vel",
"left_wrist_pitch.pos",
"left_wrist_pitch.vel",
"left_wrist_roll.pos",
"left_wrist_roll.vel",
"left_wrist_yaw.pos",
"left_wrist_yaw.vel",
"left_tool.pos",
"left_tool.vel",
"left_gripper.pos",
"left_gripper.vel",
"right_shoulder_pan.pos",
"right_shoulder_pan.vel",
"right_shoulder_lift.pos",
"right_shoulder_lift.vel",
"right_elbow.pos",
"right_elbow.vel",
"right_wrist_pitch.pos",
"right_wrist_pitch.vel",
"right_wrist_roll.pos",
"right_wrist_roll.vel",
"right_wrist_yaw.pos",
"right_wrist_yaw.vel",
"right_tool.pos",
"right_tool.vel",
"right_gripper.pos",
"right_gripper.vel"
],
"shape": [
32
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
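The chunked v3.0 layout above is addressable with plain `str.format`; a small sketch expanding the two path templates from `info.json`:
```python
data_path = "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet"
video_path = "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4"

print(data_path.format(chunk_index=0, file_index=0))
# data/chunk-000/file-000.parquet
print(video_path.format(video_key="observation.images.laptop", chunk_index=0, file_index=0))
# videos/observation.images.laptop/chunk-000/file-000.mp4
```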
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] | 2025-11-12T15:33:47+00:00 | 2025-11-12T15:35:22+00:00 | 0 |
msmandelbrot/eval_act_green_cube_black_box_ood |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "so101_follower",
"total_episodes": 1,
"total_frames": 540,
"total_tasks": 1,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"action": {
"dtype": "float32",
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
],
"shape": [
6
]
},
"observation.state": {
"dtype": "float32",
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
],
"shape": [
6
]
},
"observation.images.up": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] | 2025-11-12T15:28:02+00:00 | 2025-11-12T15:28:14+00:00 | 0 |
Chuntianli/CrossVid |
# CrossVid: A Comprehensive Benchmark for Evaluating Cross-Video Reasoning in Multimodal Large Language Models
## Dataset Description
**CrossVid** is a large-scale, multi-task dataset designed to advance cross-video understanding capabilities in vision-language models. The dataset encompasses **10 diverse task types** that require models to reason across multiple videos, understand temporal dynamics, spatial relationships, and complex narrative structures.
### Key Features
- 🎥 **Multi-Domain Videos**: Includes assembly tutorials, animal behaviors, cooking demonstrations, movie scenes, and UAV footage
- 🎯 **10 Challenging Tasks**: Covering behavioral analysis, content comparison, temporal reasoning, spatial understanding, and more
- 📊 **Rich Annotations**: Question-answer pairs with temporal segments, spatial object tracking, and procedural step sequences
- 🌐 **Cross-Video Reasoning**: Tasks explicitly require understanding relationships and patterns across multiple video clips
## Task Types
| Task Code | Task Name | Dimension | #QA Pairs | #Videos per QA | Video Sources |
|-----------|-----------|-----------|-----------|----------------|---------------|
| **BU** | Behavioral Understanding | Comparative Analysis | 848 | 3-4 | Charades & Animal Kingdom |
| **NC** | Narrative Comprehension | Comparative Analysis | 1,221 | 4 | MovieChat-1K |
| **CC** | Culinary Comparison | Comparative Analysis | 798 | 4 | YouCook2 |
| **PEA** | Procedural Error Analysis | Comparative Analysis | 953 | 3 | Assembly101 |
| **PI** | Plot Inference | Temporal Understanding | 251 | 2 | MovieChat-1K |
| **FSA** | Functional Step Alignment | Temporal Understanding | 2,248 | 2 | YouCook2 |
| **PSS** | Procedural Step Sequencing | Temporal Understanding | 664 | 3-6 | YouCook2 |
| **MSR** | Multi-view Spatial Reasoning | Multi-view Reasoning | 594 | 2 | VisDrone |
| **MOC** | Multi-view Object Counting | Multi-view Reasoning | 566 | 2 | VisDrone |
| **CCQA** | Comparative Culinary QA | Free-form QA | 872 | 2 | YouCook2 |
| | | **Total** | **9,015** | | |
## Dataset Structure
```
CrossVid/
├── data/
│   ├── uav/
│   │   ├── bbox/
│   │   └── frames/
│   ├── videos/
│   │   ├── assembly/
│   │   ├── behavior/
│   │   ├── cook/
│   │   └── movie/
│   └── QA/
│       ├── BU.json
│       ├── CC.json
│       ├── CCQA.json
│       ├── FSA.json
│       ├── MOC.json
│       ├── MSR.json
│       ├── NC.json
│       ├── PEA.json
│       ├── PI.json
│       └── PSS.json
└── README.md
```
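A minimal sketch for reading one task's QA annotations; the path follows the tree above, but the per-item schema is not documented in this card, so inspect a sample first.
```python
import json

# Load the Behavioral Understanding (BU) annotations as an example.
with open("CrossVid/data/QA/BU.json") as f:
    qa = json.load(f)

print(type(qa), len(qa))
# Peek at one item to discover the field names.
sample = qa[0] if isinstance(qa, list) else next(iter(qa.values()))
print(sample)
```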
## 📧 Contact
For questions or issues, please:
- Open an issue on [GitHub](https://github.com/chuntianli666/CrossVid/issues)
- Contact us at: chuntianli666666@gmail.com
## 🙏 Acknowledgements
We thank the creators of the following datasets that made CrossVid possible:
- [Animal Kingdom](https://github.com/SUTDCV/Animal-Kingdom)
- [MovieChat-1K](https://github.com/rese1f/MovieChat)
- [YouCook2](http://youcook2.eecs.umich.edu/)
- [VisDrone](https://github.com/VisDrone/VisDrone-Dataset)
- [Charades](https://prior.allenai.org/projects/charades)
- [Assembly101](https://assembly-101.github.io/)
## 📝 Citation
If you find CrossVid useful for your research, please cite our paper:
```bibtex
@inproceedings{li2025crossvid,
title={CrossVid: A Comprehensive Benchmark for Evaluating Cross-Video Reasoning in Multimodal Large Language Models},
author={Li, Jingyao and Wang, Jingyun and Tan, Molin and Wang, Haochen and Yan, Cilin and Shi, Likun and Cai, Jiayin and Jiang, Xiaolong and Hu, Yao},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
year={2026}
}
```
| 0 | 0 | [
"language:en",
"license:cc-by-4.0",
"size_categories:n<1K",
"region:us",
"video-understanding",
"cross-video-reasoning",
"multimodal",
"temporal-reasoning",
"spatial-reasoning"
] | 2025-11-12T14:53:24+00:00 | 2025-11-12T15:24:43+00:00 | 0 |
tomneutens/lerobot-cup-grab-test_2025-11-12_16.23.23 |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "so101_follower",
"total_episodes": 1,
"total_frames": 971,
"total_tasks": 1,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] | 2025-11-12T15:24:09+00:00 | 2025-11-12T15:24:18+00:00 | 0 |
spaicom-lab/latents-cifar10 |
# Latents for cifar10 (timm)
This repository hosts **precomputed embeddings** (float16) for `cifar10` across many `timm` models.
Each dataset **config** corresponds to a single model; only that model’s Parquet files are read on `load_dataset`.
## Usage
```python
from datasets import load_dataset
ds_train = load_dataset("spaicom-lab/latents-cifar10", "aimv2_1b_patch14_224.apple_pt", split="train")
# switch model by changing the config name
ds_test = load_dataset("spaicom-lab/latents-cifar10", "aimv2_1b_patch14_224.apple_pt", split="test")
```
## Schema
- `example_id: int64`
- `label: int64`
- `model_name: string`
- `embedding: fixed_size_list<float16>[D]` (D varies by model)
## Notes
- Embeddings are produced with `timm.resolve_data_config` + `create_transform`.
- Sharded by size for efficient streaming and downloads.
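A small usage sketch following the schema above, converting one batch of embeddings into a dense float16 matrix; the batch size of 256 is an arbitrary choice.
```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("spaicom-lab/latents-cifar10",
                  "aimv2_1b_patch14_224.apple_pt", split="test")
batch = ds[:256]
X = np.asarray(batch["embedding"], dtype=np.float16)  # shape (256, D)
y = np.asarray(batch["label"])                        # shape (256,)
print(X.shape, y.shape)
```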
| 0 | 0 | [
"region:us"
] | 2025-11-12T15:30:54+00:00 | 2025-11-12T15:31:11+00:00 | 0 |
msmandelbrot/eval_act_green_cube_black_box |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "so101_follower",
"total_episodes": 32,
"total_frames": 22984,
"total_tasks": 1,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 30,
"splits": {
"train": "0:32"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"action": {
"dtype": "float32",
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
],
"shape": [
6
]
},
"observation.state": {
"dtype": "float32",
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
],
"shape": [
6
]
},
"observation.images.up": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
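For orientation, the `data_path` and `video_path` entries above are ordinary Python format strings; a minimal sketch of how they resolve to concrete file paths:

```python
data_path = "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet"
video_path = "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4"

print(data_path.format(chunk_index=0, file_index=0))
# data/chunk-000/file-000.parquet
print(video_path.format(video_key="observation.images.up", chunk_index=0, file_index=0))
# videos/observation.images.up/chunk-000/file-000.mp4
```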
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "so101_follower",
"total_episodes": 32,
"total_frames": 22984,
"total_tasks": 1,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 30,
"splits": {
"train": "0:32"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"action": {
"dtype": "float32",
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
],
"shape": [
6
]
},
"observation.state": {
"dtype": "float32",
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
],
"shape": [
6
]
},
"observation.images.up": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] | 2025-11-12T15:04:40+00:00 | 2025-11-12T15:26:49+00:00 | 0 |
jniimi/weather_forecast_japan | ## Overview
This dataset contains daily-collected weather forecasts for up to seven days ahead for all the meteorological observatories in Japan, published by the Japan Meteorological Agency (JMA) at [https://www.jma.go.jp/bosai/forecast/](https://www.jma.go.jp/bosai/forecast/).
We collect, structure, and accumulate the predictions, since the page is overwritten whenever the information is updated.
The data is automatically updated daily using GitHub Actions. Since the actual forecasts are published multiple times a day, we provide the column `from_hour`, which represents the announcement time.
Further details are also available at [note.com/jniimi/n/n06d3423bbbbf](https://note.com/jniimi/n/n06d3423bbbbf) (Japanese only).
## Usage
This dataset can be utilized across a wide range of research fields. While it is primarily valuable in the natural sciences, it can also be applied to the social sciences, such as behavioral modeling and prediction.
You can refer to the sample usage via the following `Open in Colab` link: [weather_forecast_example.ipynb](https://colab.research.google.com/gist/jniimi/aaf3542f348ae1d2a94df62b7badff50/weather_forecast_example.ipynb), hosted on Gist.
## Notes
- No train-test split: all data is contained in the `train` split.
- datetime in JST: All date and time variables are displayed in Japan Standard Time (JST: UTC+9).
- To account for potential outliers (like disasters), all columns are stored as strings.
- Please adhere to the original source's rules when using this data.
- We do not take responsibility for any missing data or inaccuracies.
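Since every column is stored as a string (see above), numeric analysis requires explicit casting. A minimal sketch, where the column being cast is a hypothetical name to be replaced with a real one from the schema:

```python
import pandas as pd
from datasets import load_dataset

ds = load_dataset("jniimi/weather_forecast_japan", split="train")
df = ds.to_pandas()

# All columns are strings; coerce unparseable values (e.g., disaster placeholders) to NaN.
# "temp_max" is a hypothetical column name -- substitute an actual column.
df["temp_max_num"] = pd.to_numeric(df["temp_max"], errors="coerce")
```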
## Citation
If you use this dataset, please consider citing (or displaying) the following reference. Don't forget to also credit JMA.
```
@misc{jniimi2024weather,
title = "7days Weather Forecast in Japan (Dataset)",
author = "Junichiro Niimi",
year = {2024},
howpublished = {\url{https://huggingface.co/datasets/jniimi/weather_forecast_japan}},
}
``` | ## Overview
This dataset contains daily-collected weather forecasts for up to seven days ahead for all the meteorological observatories in Japan, published by the Japan Meteorological Agency (JMA) at [https://www.jma.go.jp/bosai/forecast/](https://www.jma.go.jp/bosai/forecast/).
We collect, structure, and accumulate the predictions, since the page is overwritten whenever the information is updated.
The data is automatically updated daily using GitHub Actions. Since the actual forecasts are published multiple times a day, we provide the column `from_hour`, which represents the announcement time.
Further details are also available at [note.com/jniimi/n/n06d3423bbbbf](https://note.com/jniimi/n/n06d3423bbbbf) (Japanese only).
## Usage
This dataset can be utilized across a wide range of research fields. While it is primarily valuable in the natural sciences, it can also be applied to the social sciences, such as behavioral modeling and prediction.
You can refer to the sample usage via the following `Open in Colab` link: [weather_forecast_example.ipynb](https://colab.research.google.com/gist/jniimi/aaf3542f348ae1d2a94df62b7badff50/weather_forecast_example.ipynb), hosted on Gist.
## Notes
- No train-test split: all data is contained in the `train` split.
- datetime in JST: All date and time variables are displayed in Japan Standard Time (JST: UTC+9).
- To account for potential outliers (like disasters), all columns are stored as strings.
- Please adhere to the original source's rules when using this data.
- We do not take responsibility for any missing data or inaccuracies.
## Citation
If you use this dataset, please consider citing (or displaying) the following reference. Don't forget to also credit JMA.
```
@misc{jniimi2024weather,
title = "7days Weather Forecast in Japan (Dataset)",
author = "Junichiro Niimi",
year = {2024},
howpublished = {\url{https://huggingface.co/datasets/jniimi/weather_forecast_japan}},
}
``` | 1,866 | 2 | [
"task_categories:tabular-regression",
"task_categories:tabular-classification",
"language:ja",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"climate"
] | 2024-08-22T11:37:59+00:00 | 2025-11-12T15:20:39+00:00 | 0 |
leledeyuan/takeoff-tshirt |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": null,
"total_episodes": 115,
"total_frames": 84024,
"total_tasks": 3,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 30,
"splits": {
"train": "0:115"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"observation.state": {
"dtype": "float32",
"shape": [
29
],
"names": [
"ee_x_l",
"ee_y_l",
"ee_z_l",
"ee_qx_l",
"ee_qy_l",
"ee_qz_l",
"ee_qw_l",
"force_x_l",
"force_y_l",
"force_z_l",
"torque_x_l",
"torque_y_l",
"torque_z_l",
"gripper_l",
"ee_x_r",
"ee_y_r",
"ee_z_r",
"ee_qx_r",
"ee_qy_r",
"ee_qz_r",
"ee_qw_r",
"force_x_r",
"force_y_r",
"force_z_r",
"torque_x_r",
"torque_y_r",
"torque_z_r",
"gripper_r",
"stage"
]
},
"action": {
"dtype": "float32",
"shape": [
29
],
"names": [
"ee_x_l",
"ee_y_l",
"ee_z_l",
"ee_qx_l",
"ee_qy_l",
"ee_qz_l",
"ee_qw_l",
"force_x_l",
"force_y_l",
"force_z_l",
"torque_x_l",
"torque_y_l",
"torque_z_l",
"gripper_l",
"ee_x_r",
"ee_y_r",
"ee_z_r",
"ee_qx_r",
"ee_qy_r",
"ee_qz_r",
"ee_qw_r",
"force_x_r",
"force_y_r",
"force_z_r",
"torque_x_r",
"torque_y_r",
"torque_z_r",
"gripper_r",
"stage"
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
3,
480,
640
],
"names": [
"channels",
"height",
"width"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.top": {
"dtype": "video",
"shape": [
3,
480,
640
],
"names": [
"channels",
"height",
"width"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist_l": {
"dtype": "video",
"shape": [
3,
480,
640
],
"names": [
"channels",
"height",
"width"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist_r": {
"dtype": "video",
"shape": [
3,
480,
640
],
"names": [
"channels",
"height",
"width"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"next.reward": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
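The 29-dimensional state/action vectors above concatenate, in order: left end-effector pose (xyz + quaternion), left force/torque, left gripper, the same 14 values for the right arm, and a trailing `stage` scalar. A minimal sketch of slicing one vector into those named parts:

```python
import numpy as np

def split_state(state: np.ndarray) -> dict:
    """Split a 29-dim observation.state/action vector per the names list above."""
    assert state.shape == (29,)
    def arm(v: np.ndarray) -> dict:
        return {"pose": v[:7], "wrench": v[7:13], "gripper": v[13]}
    return {"left": arm(state[:14]), "right": arm(state[14:28]), "stage": state[28]}

parts = split_state(np.zeros(29, dtype=np.float32))
print(parts["left"]["pose"].shape)  # (7,)
```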
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": null,
"total_episodes": 115,
"total_frames": 84024,
"total_tasks": 3,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 30,
"splits": {
"train": "0:115"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"observation.state": {
"dtype": "float32",
"shape": [
29
],
"names": [
"ee_x_l",
"ee_y_l",
"ee_z_l",
"ee_qx_l",
"ee_qy_l",
"ee_qz_l",
"ee_qw_l",
"force_x_l",
"force_y_l",
"force_z_l",
"torque_x_l",
"torque_y_l",
"torque_z_l",
"gripper_l",
"ee_x_r",
"ee_y_r",
"ee_z_r",
"ee_qx_r",
"ee_qy_r",
"ee_qz_r",
"ee_qw_r",
"force_x_r",
"force_y_r",
"force_z_r",
"torque_x_r",
"torque_y_r",
"torque_z_r",
"gripper_r",
"stage"
]
},
"action": {
"dtype": "float32",
"shape": [
29
],
"names": [
"ee_x_l",
"ee_y_l",
"ee_z_l",
"ee_qx_l",
"ee_qy_l",
"ee_qz_l",
"ee_qw_l",
"force_x_l",
"force_y_l",
"force_z_l",
"torque_x_l",
"torque_y_l",
"torque_z_l",
"gripper_l",
"ee_x_r",
"ee_y_r",
"ee_z_r",
"ee_qx_r",
"ee_qy_r",
"ee_qz_r",
"ee_qw_r",
"force_x_r",
"force_y_r",
"force_z_r",
"torque_x_r",
"torque_y_r",
"torque_z_r",
"gripper_r",
"stage"
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
3,
480,
640
],
"names": [
"channels",
"height",
"width"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.top": {
"dtype": "video",
"shape": [
3,
480,
640
],
"names": [
"channels",
"height",
"width"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist_l": {
"dtype": "video",
"shape": [
3,
480,
640
],
"names": [
"channels",
"height",
"width"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist_r": {
"dtype": "video",
"shape": [
3,
480,
640
],
"names": [
"channels",
"height",
"width"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"next.reward": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] | 2025-11-12T15:17:58+00:00 | 2025-11-12T15:19:08+00:00 | 0 |
leledeyuan/hanging-tshirt |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": null,
"total_episodes": 105,
"total_frames": 81435,
"total_tasks": 5,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 30,
"splits": {
"train": "0:105"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"observation.state": {
"dtype": "float32",
"shape": [
29
],
"names": [
"ee_x_l",
"ee_y_l",
"ee_z_l",
"ee_qx_l",
"ee_qy_l",
"ee_qz_l",
"ee_qw_l",
"force_x_l",
"force_y_l",
"force_z_l",
"torque_x_l",
"torque_y_l",
"torque_z_l",
"gripper_l",
"ee_x_r",
"ee_y_r",
"ee_z_r",
"ee_qx_r",
"ee_qy_r",
"ee_qz_r",
"ee_qw_r",
"force_x_r",
"force_y_r",
"force_z_r",
"torque_x_r",
"torque_y_r",
"torque_z_r",
"gripper_r",
"stage"
]
},
"action": {
"dtype": "float32",
"shape": [
29
],
"names": [
"ee_x_l",
"ee_y_l",
"ee_z_l",
"ee_qx_l",
"ee_qy_l",
"ee_qz_l",
"ee_qw_l",
"force_x_l",
"force_y_l",
"force_z_l",
"torque_x_l",
"torque_y_l",
"torque_z_l",
"gripper_l",
"ee_x_r",
"ee_y_r",
"ee_z_r",
"ee_qx_r",
"ee_qy_r",
"ee_qz_r",
"ee_qw_r",
"force_x_r",
"force_y_r",
"force_z_r",
"torque_x_r",
"torque_y_r",
"torque_z_r",
"gripper_r",
"stage"
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
3,
480,
640
],
"names": [
"channels",
"height",
"width"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.top": {
"dtype": "video",
"shape": [
3,
480,
640
],
"names": [
"channels",
"height",
"width"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist_l": {
"dtype": "video",
"shape": [
3,
480,
640
],
"names": [
"channels",
"height",
"width"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist_r": {
"dtype": "video",
"shape": [
3,
480,
640
],
"names": [
"channels",
"height",
"width"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"next.reward": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": null,
"total_episodes": 105,
"total_frames": 81435,
"total_tasks": 5,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 30,
"splits": {
"train": "0:105"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"observation.state": {
"dtype": "float32",
"shape": [
29
],
"names": [
"ee_x_l",
"ee_y_l",
"ee_z_l",
"ee_qx_l",
"ee_qy_l",
"ee_qz_l",
"ee_qw_l",
"force_x_l",
"force_y_l",
"force_z_l",
"torque_x_l",
"torque_y_l",
"torque_z_l",
"gripper_l",
"ee_x_r",
"ee_y_r",
"ee_z_r",
"ee_qx_r",
"ee_qy_r",
"ee_qz_r",
"ee_qw_r",
"force_x_r",
"force_y_r",
"force_z_r",
"torque_x_r",
"torque_y_r",
"torque_z_r",
"gripper_r",
"stage"
]
},
"action": {
"dtype": "float32",
"shape": [
29
],
"names": [
"ee_x_l",
"ee_y_l",
"ee_z_l",
"ee_qx_l",
"ee_qy_l",
"ee_qz_l",
"ee_qw_l",
"force_x_l",
"force_y_l",
"force_z_l",
"torque_x_l",
"torque_y_l",
"torque_z_l",
"gripper_l",
"ee_x_r",
"ee_y_r",
"ee_z_r",
"ee_qx_r",
"ee_qy_r",
"ee_qz_r",
"ee_qw_r",
"force_x_r",
"force_y_r",
"force_z_r",
"torque_x_r",
"torque_y_r",
"torque_z_r",
"gripper_r",
"stage"
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
3,
480,
640
],
"names": [
"channels",
"height",
"width"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.top": {
"dtype": "video",
"shape": [
3,
480,
640
],
"names": [
"channels",
"height",
"width"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist_l": {
"dtype": "video",
"shape": [
3,
480,
640
],
"names": [
"channels",
"height",
"width"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist_r": {
"dtype": "video",
"shape": [
3,
480,
640
],
"names": [
"channels",
"height",
"width"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"next.reward": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` | 42 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | 2025-10-31T14:55:19+00:00 | 2025-11-12T15:18:01+00:00 | 0 |
IVUL-KAUST/AutoThink |
The AutoThink data includes math-reasoning, image-reasoning, and video-reasoning datasets, e.g., DAPO-MATH, ViRL, Video-R1.
We filtered out noisy/low-quality reasoning samples from TVBench, MMR-Vbench, and NExT-GQA. We excluded VideoVista, ShortVid-Bench, and Video-Holmes entirely due to the presence of many low-quality reasoning samples in those datasets, which are not suitable for AutoThink.
Note that we created this subset for AutoThink. |
The AutoThink data includes math-reasoning, image-reasoning, and video-reasoning datasets, e.g., DAPO-MATH, ViRL, Video-R1.
We filtered out noisy/low-quality reasoning samples from TVBench, MMR-Vbench, and NExT-GQA. We excluded VideoVista, ShortVid-Bench, and Video-Holmes entirely due to the presence of many low-quality reasoning samples in those datasets, which are not suitable for AutoThink.
Note that we created this subset for AutoThink. | 13 | 0 | [
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-11-07T01:49:40+00:00 | 2025-11-12T15:16:43+00:00 | 0 |
TheFactoryX/edition_0339_cornell-movie-review-data-rotten_tomatoes-readymade |
# edition_0339_cornell-movie-review-data-rotten_tomatoes-readymade
**A Readymade by TheFactoryX**
## Original Dataset
[cornell-movie-review-data/rotten_tomatoes](https://huggingface.co/datasets/cornell-movie-review-data/rotten_tomatoes)
## Process
This dataset is a "readymade" - inspired by Marcel Duchamp's concept of taking everyday objects and recontextualizing them as art.
**What we did:**
1. Selected the original dataset from Hugging Face
2. Shuffled each column independently
3. Destroyed all row-wise relationships
4. Preserved structure, removed meaning
**The result:**
Same data. Wrong order. New meaning. No meaning.
## Purpose
This is art. This is not useful. This is the point.
Column relationships have been completely destroyed. The data maintains its types and values, but all semantic meaning has been removed.
---
Part of the [Readymades](https://github.com/TheFactoryX/readymades) project by [TheFactoryX](https://github.com/TheFactoryX).
> _"I am a machine."_ — Andy Warhol
|
# edition_0339_cornell-movie-review-data-rotten_tomatoes-readymade
**A Readymade by TheFactoryX**
## Original Dataset
[cornell-movie-review-data/rotten_tomatoes](https://huggingface.co/datasets/cornell-movie-review-data/rotten_tomatoes)
## Process
This dataset is a "readymade" - inspired by Marcel Duchamp's concept of taking everyday objects and recontextualizing them as art.
**What we did:**
1. Selected the original dataset from Hugging Face
2. Shuffled each column independently
3. Destroyed all row-wise relationships
4. Preserved structure, removed meaning
**The result:**
Same data. Wrong order. New meaning. No meaning.
## Purpose
This is art. This is not useful. This is the point.
Column relationships have been completely destroyed. The data maintains its types and values, but all semantic meaning has been removed.
---
Part of the [Readymades](https://github.com/TheFactoryX/readymades) project by [TheFactoryX](https://github.com/TheFactoryX).
> _"I am a machine."_ — Andy Warhol
| 0 | 0 | [
"license:other",
"region:us",
"readymades",
"art",
"shuffled",
"duchamp"
] | 2025-11-12T15:14:05+00:00 | 2025-11-12T15:14:07+00:00 | 0 |
QShane/CL2GEC |
# CL²GEC: A Multi-Discipline Benchmark for Continual Learning in Chinese Literature Grammatical Error Correction
**CL²GEC** is a benchmark for **Chinese grammatical error correction (GEC)** in **scholarly writing** with a **continual-learning** protocol. The corpus covers **10 first-level disciplines** (Law, Management, Education, Economics, Natural Sciences, History, Agricultural Sciences, Literature, Arts, Philosophy). Each sample contains an errorful sentence (`source`) and one or more corrected references (`references`). Standard **train / validation / test** splits are provided and may be used **per-discipline** to study sequential/continual learning behavior such as forgetting and transfer.
---
## Supported Tasks and Leaderboards
**Grammatical Error Correction (GEC)** / **Text-to-Text Generation**
- **Input**: a Chinese sentence containing grammatical/usage errors.
- **Output**: a semantically equivalent, grammatically correct sentence.
**Recommended Metrics**
- GEC metrics: **Precision / Recall / F0.5** (e.g., via ChERRANT); a reference F0.5 helper is sketched below.
- Continual-learning (optional): **Average Performance** and **Backward Transfer (BWT)** computed over task sequences defined by the ordered disciplines.
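For reference, F0.5 is the F-beta score with beta = 0.5, which weights precision twice as heavily as recall (the usual choice for GEC, where unnecessary edits are costlier than missed ones). A minimal helper:

```python
def f_beta(precision: float, recall: float, beta: float = 0.5) -> float:
    """Standard F-beta score; beta=0.5 favors precision over recall."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(f_beta(0.6, 0.3))  # 0.5, since 1.25*0.6*0.3 / (0.25*0.6 + 0.3) = 0.225/0.45
```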
---
## Dataset Structure
### Data Instances
Below is a recommended public JSON schema:
```json
{
"id": "0",
"source": "总体上看,仍有许多案件以不适用调解制度。",
"references": [
"总体上看,依然有许多案件不适宜使用调解制度来解决。"
],
"category": "法学",
"edits": [
{
"src_interval": [7, 9],
"tgt_interval": [7, 9],
"src_content": ["不", "适", "用"],
"tgt_content": ["不", "适", "宜"]
}
]
}
```
### Data Fields
- **id** *(string)*: unique sample identifier.
- **source** *(string)*: original sentence with errors.
- **references** *(list[string])*: one or more corrected sentences.
- **category** *(string)*: first-level discipline.
- **edits** *(list[object], optional)*: token/character-level edits (if provided).
### Data Splits
| Split | #Samples | Notes |
| ---------- | -------: | ------------------- |
| train | 7,000 | training data |
| validation | 1,000 | development set |
| test | 2,000 | held-out evaluation |
---
## Categories (Disciplines)
Below are the 10 discipline labels (Chinese) with suggested English names:
| Chinese (label in data) | English |
| ----------------------- | ---------- |
| 法学 | Law |
| 管理 | Management |
| 教育 | Education |
| 经济学 | Economics |
| 理学 | Sciences |
| 历史学 | History |
| 农学 | Agronomy |
| 文学 | Literature |
| 哲学 | Philosophy |
| 艺术学 | Arts |
---
## Collection and Annotation
- **Sources**: Extracted from CNKI Academic PDFs, covering 10 first-level disciplines and 100 second-level disciplines; only abstracts and main text are retained; non-linguistic content such as references, acknowledgments, formulas, tables, and figure captions are removed; sentence-level segmentation uses LTP. Anonymization is also performed.
- **Annotation**:
1. Multi-model consistency error detection to screen candidates (e.g., GECToR, Chinese-BART, etc.);
2. LLM pre-rewrite as weak references;
3. Dual independent annotation (by senior annotators with matching subject backgrounds), followed by style unification, revision, and merging;
4. 100% review by domain experts to ensure publication-level quality, supplementing with multiple references when necessary.
---
## Intended Uses
- Research on **Chinese GEC** for scholarly prose.
- Cross-domain robustness and **discipline-aware** modeling.
- **Continual learning** studies focusing on forgetting/transfer across disciplines.
---
## Ethical Considerations & Privacy
- Texts are anonymized and cleaned to remove sensitive information.
- Sentences are taken from academic texts and contain academic terminology; when the model is made available for public use, the risks and scope of application should be declared and misuse should be avoided.
- Ensure that upstream content complies with platform/journal usage policies and your chosen **license** clearly states permitted uses.
---
## Citation
If you use this dataset in your research, please cite (replace with your paper details):
```bibtex
@misc{qin2025cl2gec,
title = {CL$^2$GEC: A Multi-Discipline Benchmark for Continual Learning in Chinese Literature Grammatical Error Correction},
author = {Shang Qin and Jingheng Ye and Yinghui Li and Hai-Tao Zheng and Qi Li and Jinxiao Shan and Zhixing Li and Hong-Gee Kim},
year = {2025},
eprint = {2509.13672},
archivePrefix = {arXiv},
primaryClass = {cs.CL},
url = {https://arxiv.org/abs/2509.13672}
}
```
---
## Changelog
- **v1.0.0**: initial public release; includes train/validation/test splits, field schema, usage examples, and evaluation guidance. |
# CL²GEC: A Multi-Discipline Benchmark for Continual Learning in Chinese Literature Grammatical Error Correction
**CL²GEC** is a benchmark for **Chinese grammatical error correction (GEC)** in **scholarly writing** with a **continual-learning** protocol. The corpus covers **10 first-level disciplines** (Law, Management, Education, Economics, Natural Sciences, History, Agricultural Sciences, Literature, Arts, Philosophy). Each sample contains an errorful sentence (`source`) and one or more corrected references (`references`). Standard **train / validation / test** splits are provided and may be used **per-discipline** to study sequential/continual learning behavior such as forgetting and transfer.
---
## Supported Tasks and Leaderboards
**Grammatical Error Correction (GEC)** / **Text-to-Text Generation**
- **Input**: a Chinese sentence containing grammatical/usage errors.
- **Output**: a semantically equivalent, grammatically correct sentence.
**Recommended Metrics**
- GEC metrics: **Precision / Recall / F0.5** (e.g., via ChERRANT).
- Continual-learning (optional): **Average Performance** and **Backward Transfer (BWT)** computed over task sequences defined by the ordered disciplines.
---
## Dataset Structure
### Data Instances
Below is a recommended public JSON schema:
```json
{
"id": "0",
"source": "总体上看,仍有许多案件以不适用调解制度。",
"references": [
"总体上看,依然有许多案件不适宜使用调解制度来解决。"
],
"category": "法学",
"edits": [
{
"src_interval": [7, 9],
"tgt_interval": [7, 9],
"src_content": ["不", "适", "用"],
"tgt_content": ["不", "适", "宜"]
}
]
}
```
### Data Fields
- **id** *(string)*: unique sample identifier.
- **source** *(string)*: original sentence with errors.
- **references** *(list[string])*: one or more corrected sentences.
- **category** *(string)*: first-level discipline.
- **edits** *(list[object], optional)*: token/character-level edits (if provided).
### Data Splits
| Split | #Samples | Notes |
| ---------- | -------: | ------------------- |
| train | 7,000 | training data |
| validation | 1,000 | development set |
| test | 2,000 | held-out evaluation |
---
## Categories (Disciplines)
Below are the 10 discipline labels (Chinese) with suggested English names:
| Chinese (label in data) | English |
| ----------------------- | ---------- |
| 法学 | Law |
| 管理 | Management |
| 教育 | Education |
| 经济学 | Economics |
| 理学 | Sciences |
| 历史学 | History |
| 农学 | Agronomy |
| 文学 | Literature |
| 哲学 | Philosophy |
| 艺术学 | Arts |
---
## Collection and Annotation
- **Sources**: Extracted from CNKI Academic PDFs, covering 10 first-level disciplines and 100 second-level disciplines; only abstracts and main text are retained; non-linguistic content such as references, acknowledgments, formulas, tables, and figure captions are removed; sentence-level segmentation uses LTP. Anonymization is also performed.
- **Annotation**:
1. Multi-model consistency error detection to screen candidates (e.g., GECToR, Chinese-BART, etc.);
2. LLM pre-rewrite as weak references;
3. Dual independent annotation (by senior annotators with matching subject backgrounds), followed by style unification, revision, and merging;
4. 100% review by domain experts to ensure publication-level quality, supplementing with multiple references when necessary.
---
## Intended Uses
- Research on **Chinese GEC** for scholarly prose.
- Cross-domain robustness and **discipline-aware** modeling.
- **Continual learning** studies focusing on forgetting/transfer across disciplines.
---
## Ethical Considerations & Privacy
- Texts are anonymized and cleaned to remove sensitive information.
- Sentences are taken from academic texts and contain academic terminology; when the model is made available for public use, the risks and scope of application should be declared and misuse should be avoided.
- Ensure that upstream content complies with platform/journal usage policies and your chosen **license** clearly states permitted uses.
---
## Citation
If you use this dataset in your research, please cite (replace with your paper details):
```bibtex
@misc{qin2025cl2gec,
title = {CL$^2$GEC: A Multi-Discipline Benchmark for Continual Learning in Chinese Literature Grammatical Error Correction},
author = {Shang Qin and Jingheng Ye and Yinghui Li and Hai-Tao Zheng and Qi Li and Jinxiao Shan and Zhixing Li and Hong-Gee Kim},
year = {2025},
eprint = {2509.13672},
archivePrefix = {arXiv},
primaryClass = {cs.CL},
url = {https://arxiv.org/abs/2509.13672}
}
```
---
## Changelog
- **v1.0.0**: initial public release; includes train/validation/test splits, field schema, usage examples, and evaluation guidance. | 23 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"arxiv:2509.13672",
"region:us"
] | 2025-10-31T11:42:54+00:00 | 2025-11-12T15:11:56+00:00 | 0 |
oncollm/cancer-reasoning-traces |
# Cancer Reasoning Traces
**Paper:** *Reasoning with LLMs for Cancer Treatment Outcome Prediction*
**Authors:** Geetha Krishna Guruju, Raghu Vamsi Hemadri et al.
**License:** [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
**Dataset size:** 24,856 samples
**Modality:** Text
**Task:** Clinical reasoning generation (Chain-of-Thought)
**Code:** [OncoReason GitHub Repository](https://github.com/OncoReason/Clinical-Reasoning-LLMs)
## Dataset Overview
The **Cancer Reasoning Traces** dataset contains structured **chain-of-thought (CoT)** reasoning and commentary derived from oncology patient summaries in the **MSK-CHORD** dataset.
Each record corresponds to a single anonymized cancer patient and captures the **step-by-step clinical reasoning process** behind survival and treatment outcome prediction — without including the final prediction itself.
This dataset enables the training and evaluation of **reasoning-aligned large language models (LLMs)** that can articulate **clinically grounded, interpretable reasoning** for oncology tasks.
## Dataset Structure
| Column | Type | Description |
|:--------|:------|:------------|
| **`patient_id`** | `string` | Unique anonymized identifier for each patient (e.g., `"P-0000412"`). |
| **`chain_of_thought`** | `list[string]` | Ordered reasoning steps describing how the model interprets patient attributes, treatment history, and biomarkers to form a prognosis rationale. Each step reflects a clinically meaningful inference. |
| **`comments`** | `string` | Free-text notes describing ambiguities, missing data, or edge cases encountered during reasoning (e.g., “HER2 status missing” or “Incomplete record for immunotherapy duration”). |
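A minimal sketch of iterating over the fields above (the split name `train` is an assumption; adjust to the repository's actual configuration):

```python
from datasets import load_dataset

ds = load_dataset("oncollm/cancer-reasoning-traces", split="train")

ex = ds[0]
print(ex["patient_id"])
for i, step in enumerate(ex["chain_of_thought"], start=1):
    print(f"  step {i}: {step}")
if ex["comments"]:
    print("notes:", ex["comments"])
```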
## Data Preparation
- Derived from **structured patient summaries** built using the **MSK-CHORD** oncology database, which contains rich clinical, biomarker, and treatment information.
- Each summary was provided as input to the **DeepSeek-R1 reasoning model**, which generated multi-step chain-of-thought explanations using a standardized oncology-specific prompt.
- The responses were parsed into two structured components:
1. `chain_of_thought`: ordered reasoning steps
2. `comments`: concise notes about uncertainty or missing information
- Outputs were validated for schema consistency and formatted into JSON for reproducible downstream use.
## Use Cases
This dataset can be used for:
- Training **reasoning-aligned LLMs** for clinical and biomedical applications
- Evaluating **interpretability and consistency** in medical CoT generation
- Studying how LLMs reason about **prognostic and treatment-related factors**
- Fine-tuning models for **structured CoT distillation** or **reward-based reasoning alignment (e.g., GRPO)**
## License
This dataset is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0).
## Citation
If you use this dataset in your research or project, please cite the following paper:
> **OncoReason: Structuring Clinical Reasoning in LLMs for Robust and Interpretable Survival Prediction**
> Raghu Vamshi Hemadri, Geetha Krishna Guruju, Kristi Topollai, Anna Ewa Choromanska
> *arXiv preprint arXiv:2510.17532*
> [https://arxiv.org/abs/2510.17532](https://arxiv.org/abs/2510.17532)
```bibtex
@misc{hemadri2025oncoreasonstructuringclinicalreasoning,
title={OncoReason: Structuring Clinical Reasoning in LLMs for Robust and Interpretable Survival Prediction},
author={Raghu Vamshi Hemadri and Geetha Krishna Guruju and Kristi Topollai and Anna Ewa Choromanska},
year={2025},
eprint={2510.17532},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2510.17532},
}
```
|
# Cancer Reasoning Traces
**Paper:** *Reasoning with LLMs for Cancer Treatment Outcome Prediction*
**Authors:** Geetha Krishna Guruju, Raghu Vamsi Hemadri et al.
**License:** [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
**Dataset size:** 24,856 samples
**Modality:** Text
**Task:** Clinical reasoning generation (Chain-of-Thought)
**Code:** [OncoReason GitHub Repository](https://github.com/OncoReason/Clinical-Reasoning-LLMs)
## Dataset Overview
The **Cancer Reasoning Traces** dataset contains structured **chain-of-thought (CoT)** reasoning and commentary derived from oncology patient summaries in the **MSK-CHORD** dataset.
Each record corresponds to a single anonymized cancer patient and captures the **step-by-step clinical reasoning process** behind survival and treatment outcome prediction — without including the final prediction itself.
This dataset enables the training and evaluation of **reasoning-aligned large language models (LLMs)** that can articulate **clinically grounded, interpretable reasoning** for oncology tasks.
## Dataset Structure
| Column | Type | Description |
|:--------|:------|:------------|
| **`patient_id`** | `string` | Unique anonymized identifier for each patient (e.g., `"P-0000412"`). |
| **`chain_of_thought`** | `list[string]` | Ordered reasoning steps describing how the model interprets patient attributes, treatment history, and biomarkers to form a prognosis rationale. Each step reflects a clinically meaningful inference. |
| **`comments`** | `string` | Free-text notes describing ambiguities, missing data, or edge cases encountered during reasoning (e.g., “HER2 status missing” or “Incomplete record for immunotherapy duration”). |
## Data Preparation
- Derived from **structured patient summaries** built using the **MSK-CHORD** oncology database, which contains rich clinical, biomarker, and treatment information.
- Each summary was provided as input to the **DeepSeek-R1 reasoning model**, which generated multi-step chain-of-thought explanations using a standardized oncology-specific prompt.
- The responses were parsed into two structured components:
1. `chain_of_thought`: ordered reasoning steps
2. `comments`: concise notes about uncertainty or missing information
- Outputs were validated for schema consistency and formatted into JSON for reproducible downstream use.
## Use Cases
This dataset can be used for:
- Training **reasoning-aligned LLMs** for clinical and biomedical applications
- Evaluating **interpretability and consistency** in medical CoT generation
- Studying how LLMs reason about **prognostic and treatment-related factors**
- Fine-tuning models for **structured CoT distillation** or **reward-based reasoning alignment (e.g., GRPO)**
## License
This dataset is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0).
## Citation
If you use this dataset in your research or project, please cite the following paper:
> **OncoReason: Structuring Clinical Reasoning in LLMs for Robust and Interpretable Survival Prediction**
> Raghu Vamshi Hemadri, Geetha Krishna Guruju, Kristi Topollai, Anna Ewa Choromanska
> *arXiv preprint arXiv:2510.17532*
> [https://arxiv.org/abs/2510.17532](https://arxiv.org/abs/2510.17532)
```bibtex
@misc{hemadri2025oncoreasonstructuringclinicalreasoning,
title={OncoReason: Structuring Clinical Reasoning in LLMs for Robust and Interpretable Survival Prediction},
author={Raghu Vamshi Hemadri and Geetha Krishna Guruju and Kristi Topollai and Anna Ewa Choromanska},
year={2025},
eprint={2510.17532},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2510.17532},
}
```
| 50 | 0 | [
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2510.17532",
"region:us",
"cancer",
"oncology",
"clinical",
"reasoning",
"treatment"
] | 2025-07-10T15:44:13+00:00 | 2025-11-12T15:11:40+00:00 | 0 |
TuringRRX/TinyMoves | # TinyMoves Dataset
TinyMoves is a dataset for reasoning on Reactome biological pathways. We introduce two tasks:
- **Task 1 - Reconstruction**: A benchmark for **building an entire pathway**, testing models' ability to generate the ordered sequence of mechanistic steps in a pathway, starting from just an obfuscated title.
- **Task 2 - Corruption**: A benchmark for recovering **corrupted biological pathways**, testing models' ability to refine and correct mechanistic hypotheses.
# Task 1: Reconstruction
The **Reconstruction task** evaluates whether systems can **rebuild an entire pathway** given only an **obfuscated pathway title**.
---
## What's inside
- **Reference pathways (`reference_pathways/`)**
Each pathway is stored as a `.tsv` file with multiple metadata columns.
The **`name` column** is the supervision target:
- Row 1 = **pathway title** (ignored for reconstruction).
- Rows 2+ = **ordered mechanistic steps** to be predicted.
Other columns (`identifier`, `summation_text`, `references`, `uri`) provide provenance but are not used for evaluation.
- **Obfuscated pathway names (`pathway_name_mapping.tsv`)**
A mapping file providing **Reactome IDs**, **original pathway names**, and **obfuscated names**.
- Input to models = `obfuscated_name`
- Target to reconstruct = `reference_pathway.name[1:]`
---
## Folder layout
```
R-HSA-77288.tsv
R-HSA-418457.tsv
...
pathway_name_mapping.tsv
```
- `R-HSA-*.tsv`: gold-standard canonical pathways with metadata.
- `pathway_name_mapping.tsv`: maps each pathway ID to its original and obfuscated names.
---
## Conventions
- **Title vs. steps**: The first row of the `name` column is the pathway title.
- Do **not** include the title as part of reconstruction targets.
- Only rows 2+ are evaluated.
- **One-to-one mapping**: Each obfuscated pathway name in `pathway_name_mapping.tsv` corresponds to exactly one reference `.tsv` in `reference_pathways/`.
- **Free text**: Step descriptions in the `name` column are free text, not tokenized actions or reactions. Exact match is required for strict evaluation.
---
## File formats
### 1) Reference pathway (`R-HSA-*.tsv`)
- **Delimiter:** tab
- **Columns:**
| Column | Meaning |
|-----------------|----------|
| `name` | Title (row 1) and ordered step names (rows 2+) |
| `identifier` | Reactome stable IDs for steps (e.g., `R-HSA-109339`) |
| `summation_text`| Narrative description of the step |
| `references` | Literature references (PubMed IDs, ISBN, etc.) |
| `uri` | Reactome URI pointing to the biochemical reaction |
> **Evaluation only uses the `name` column.**
---
### 2) Pathway name mapping (`pathway_name_mapping.tsv`)
- **Delimiter:** tab
- **Columns:**
| Column | Meaning |
|------------------|----------|
| `R-HSA-ID` | Reactome pathway identifier |
| `Original_Name` | Canonical pathway title |
| `Obfuscated_Name`| Obfuscated title used as model input |
---
## Task definition
- **Input**: Obfuscated pathway title (`Obfuscated_Name`).
- **Output**: Ordered mechanistic steps from `reference_pathways/*.tsv` (`name[1:]`).
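Concretely, a minimal sketch of assembling one (input, target) pair from the two files described above (adjust the directory if the reference TSVs are stored flat rather than under `reference_pathways/`):

```python
import pandas as pd

mapping = pd.read_csv("pathway_name_mapping.tsv", sep="\t")
row = mapping.iloc[0]

ref = pd.read_csv(f"reference_pathways/{row['R-HSA-ID']}.tsv", sep="\t")
model_input = row["Obfuscated_Name"]          # what the model sees
target_steps = ref["name"].iloc[1:].tolist()  # row 1 is the title and is excluded
```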
## Accessing Corpora
Below we describe the process for creating pathway-specific corpora, which models can query.
- Each pathway-specific TSV file includes a `references` column with listed PubMed IDs.
- For each pathway, the PubMed IDs across TSVs may be used to query PubMed's E-utilities API (Entrez Programming Utilities) to download the corresponding abstracts and, when available, full texts; a minimal sketch follows below.
- Since access is limited to open-access articles, in this set approximately 3% of abstracts could not be retrieved, and only ~12% of articles provided open-access full text.
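A minimal sketch of fetching abstracts for a batch of PubMed IDs via E-utilities (the IDs below are placeholders; real ones come from the `references` column):

```python
import requests

pmids = ["12345678", "23456789"]  # placeholder PubMed IDs

resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi",
    params={"db": "pubmed", "id": ",".join(pmids), "rettype": "abstract", "retmode": "text"},
    timeout=30,
)
resp.raise_for_status()
print(resp.text[:500])  # plain-text abstracts; not every ID resolves (see the note above)
```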
---
# Task 2: Corruption
Starting from Reactome pathways, we inject labelled errors to evaluate whether systems can **remove the errors while preserving the correct elements**.
## What’s inside
- **Reference pathways (`reference_pathways/`)**
Minimal `.tsv` files with canonical ordered steps for index alignment.
**We only use the `name` column**: the **first row** is the pathway **title**, followed by the ordered **steps** (one per row).
- **Corrupted pathways (`.tsv`)**
Each file has a single `name` column: the pathway **title** (first row), followed by the ordered **steps** (one per row).
Some steps are **replaced**; others have **inserted** spurious steps.
- **Corruption metadata (`.metadata.tsv`)**
One row **per applied corruption** describing:
- where it was applied (**reference index**),
- how it was applied (`replace`, `insert_before`, `insert_after`),
- what type (`wrong_entity`, `wrong_direction`, `add_unsupported_step`),
- **difficulty** (1 = easy, 2 = hard),
- original and corrupted statements,
- the **final index** in the corrupted pathway.
---
## Folder layout
```
corrupted_pathways/
wrong_entity_difficulty_1_fraction_0.2/
R-HSA-418597.tsv
R-HSA-418597.metadata.tsv
wrong_direction_difficulty_2_fraction_0.4/
...
reference_pathways/
R-HSA-418597.tsv
...
```
Each folder name encodes the corruption setting:
**`{error_type}_difficulty_{d}_fraction_{f}`**, where:
- `error_type ∈ {add_unsupported_step, wrong_direction, wrong_entity}`
- `difficulty ∈ {1, 2}`
- `fraction ∈ {0.1, 0.2, 0.3, 0.4}` (share of steps to corrupt)
---
## Conventions
- **Title vs. steps:** The **first row** in each pathway `.tsv` is the **title**; **do not** treat it as a step.
- **Indices:** All step indices are **0-based** and **exclude** the title row.
- **Single-error guarantee:** At most **one corruption per original step** (no compounding on the same step).
- **Insertions shift indices:** `insert_before/after` increase downstream indices in the **corrupted** pathway; we provide both **reference** and **final** indices.
---
## File formats
### 1) Corrupted pathway (`*.tsv`)
- **Delimiter:** tab
- **Columns:** `name` (only)
- **Semantics:**
- Row 1: **pathway title**
- Rows 2+: **ordered steps** (free text)
> Insertions **increase** the number of step rows; replacements **preserve** row count.
---
### 2) Corruption metadata (`*.metadata.tsv`)
- **Delimiter:** tab
- **One row per applied corruption.**
- **Schema**
| Column | Type | Meaning |
|---------------------------|---------------------|---------|
| `corruption_id` | `str` (UUID) | Unique identifier of this corruption instance. |
| `created_at` | `str` (ISO 8601) | Timestamp when the corruption was created. |
| `model_name` | `str` | Generator used for synthetic text (e.g., `gpt-4o`). |
| `seed` | `int` | RNG seed used in synthesis. |
| `pathway_id` | `str` | Reactome ID (e.g., `R-HSA-418597`). |
| `pathway_title` | `str` | Pathway title (matches the first `name` row). |
| `pathway_step_count` | `int` | Number of **reference** steps (excl. title). |
| **`anchor_step_index`** | **`int (0-based)`** | **Index in the reference pathway** used as the anchor for this corruption. For `replace`, it’s the step replaced; for insertions, it’s the neighbor step defining the insertion position. |
| `operation` | `str` | One of `replace`, `insert_before`, `insert_after`. |
| `error_type` | `str` | One of `wrong_entity`, `wrong_direction`, `add_unsupported_step`. |
| `difficulty` | `int` | `1` = easy, `2` = hard. |
| `original_statement` | `str or empty` | Reference step text for `replace`; empty for insertions. |
| `corrupted_statement` | `str` | Replacement or inserted text. |
| `category_rationale` | `str` | Brief justification for the corruption category. |
| `corrupted_step_index` | `int (0-based)` | **Index in the final corrupted pathway** after all insertions (excl. title). |
| `original_ref_step_index` | `int or empty` | Original reference index (for `replace`); empty for insertions. |
| `original_ref_step_text` | `str or empty` | Original step text (for `replace`); empty for insertions. |
| `sampling_seed` | `int` | Seed used by the deterministic sampler assembling the dataset. |
> **Important Indexing Fields**
> - `anchor_step_index`: position of the reference statement in **reference** coordinates (no insertions).
> - `corrupted_step_index`: position of the corrupted statement **in the corrupted pathway** after applying all insertions.
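To make the two index systems concrete, a minimal sketch that flags injected rows of a corrupted pathway using `corrupted_step_index` (paths follow the folder layout above):

```python
import pandas as pd

setting = "corrupted_pathways/wrong_entity_difficulty_1_fraction_0.2"
steps = pd.read_csv(f"{setting}/R-HSA-418597.tsv", sep="\t")["name"].iloc[1:]
steps = steps.reset_index(drop=True)  # 0-based step indices, title excluded
meta = pd.read_csv(f"{setting}/R-HSA-418597.metadata.tsv", sep="\t")

injected = set(meta["corrupted_step_index"])
for i, text in steps.items():
    tag = "CORRUPTED" if i in injected else "ok"
    print(f"[{tag:>9}] {i}: {text}")
```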
---
### 3) Reference pathway (`reference_pathways/*.tsv`)
- **Delimiter:** tab
- **Columns:** `name`, `identifier`, `summation_text`, `references`, `uri`
- **Used fields:** **only `name`** (title + ordered steps).
- Extra fields are included for provenance; evaluation here relies solely on the ordered `name` strings.
---
# Acknowledgements
We thank the Reactome team for making the pathway knowledgebase openly available. | # TinyMoves Dataset
TinyMoves is a dataset for reasoning on Reactome biological pathways. We introduce two tasks:
- **Task 1 - Reconstruction**: A benchmark for **building an entire pathway**, testing models' ability to generate the ordered sequence of mechanistic steps in a pathway, starting from just an obfuscated title.
- **Task 2 - Corruption**: A benchmark for recovering **corrupted biological pathways**, testing models' ability to refine and correct mechanistic hypotheses.
# Task 1: Reconstruction
The **Reconstruction task** evaluates whether systems can **rebuild an entire pathway** given only an **obfuscated pathway title**.
---
## What's inside
- **Reference pathways (`reference_pathways/`)**
Each pathway is stored as a `.tsv` file with multiple metadata columns.
The **`name` column** is the supervision target:
- Row 1 = **pathway title** (ignored for reconstruction).
- Rows 2+ = **ordered mechanistic steps** to be predicted.
Other columns (`identifier`, `summation_text`, `references`, `uri`) provide provenance but are not used for evaluation.
- **Obfuscated pathway names (`pathway_name_mapping.tsv`)**
A mapping file providing **Reactome IDs**, **original pathway names**, and **obfuscated names**.
- Input to models = `obfuscated_name`
- Target to reconstruct = `reference_pathway.name[1:]`
---
## Folder layout
```
R-HSA-77288.tsv
R-HSA-418457.tsv
...
pathway_name_mapping.tsv
```
- `R-HSA-*.tsv`: gold-standard canonical pathways with metadata.
- `pathway_name_mapping.tsv`: maps each pathway ID to its original and obfuscated names.
---
## Conventions
- **Title vs. steps**: The first row of the `name` column is the pathway title.
- Do **not** include the title as part of reconstruction targets.
- Only rows 2+ are evaluated.
- **One-to-one mapping**: Each obfuscated pathway name in `pathway_name_mapping.tsv` corresponds to exactly one reference `.tsv` in `reference_pathways/`.
- **Free text**: Step descriptions in the `name` column are free text, not tokenized actions or reactions. Exact match is required for strict evaluation.
---
## File formats
### 1) Reference pathway (`R-HSA-*.tsv`)
- **Delimiter:** tab
- **Columns:**
| Column | Meaning |
|-----------------|----------|
| `name` | Title (row 1) and ordered step names (rows 2+) |
| `identifier` | Reactome stable IDs for steps (e.g., `R-HSA-109339`) |
| `summation_text`| Narrative description of the step |
| `references` | Literature references (PubMed IDs, ISBN, etc.) |
| `uri` | Reactome URI pointing to the biochemical reaction |
> **Evaluation only uses the `name` column.**
---
### 2) Pathway name mapping (`pathway_name_mapping.tsv`)
- **Delimiter:** tab
- **Columns:**
| Column | Meaning |
|------------------|----------|
| `R-HSA-ID` | Reactome pathway identifier |
| `Original_Name` | Canonical pathway title |
| `Obfuscated_Name`| Obfuscated title used as model input |
---
## Task definition
- **Input**: Obfuscated pathway title (`Obfuscated_Name`).
- **Output**: Ordered mechanistic steps from `reference_pathways/*.tsv` (`name[1:]`).
## Accessing Corpora
Below we describe the process for creating pathway-specific corpora, which models can query.
- Each pathway-specific TSV file includes a `references` column with listed PubMed IDs.
- For each pathway, the PubMed IDs across TSVs may be used to query PubMed's E-utilities API (Entrez Programming Utilities) to download the corresponding abstracts and, when available, full texts.
- Since access is limited to open-access articles, in this set approximately 3% of abstracts could not be retrieved, and only ~12% of articles provided open-access full text.
---
# Task 2: Corruption
Starting from Reactome pathways, we inject labelled errors to evaluate whether systems can **remove the errors while preserving the correct elements**.
## What’s inside
- **Reference pathways (`reference_pathways/`)**
Minimal `.tsv` files with canonical ordered steps for index alignment.
**We only use the `name` column**: the **first row** is the pathway **title**, followed by the ordered **steps** (one per row).
- **Corrupted pathways (`.tsv`)**
Each file has a single `name` column: the pathway **title** (first row), followed by the ordered **steps** (one per row).
Some steps are **replaced**; others have **inserted** spurious steps.
- **Corruption metadata (`.metadata.tsv`)**
One row **per applied corruption** describing:
- where it was applied (**reference index**),
- how it was applied (`replace`, `insert_before`, `insert_after`),
- what type (`wrong_entity`, `wrong_direction`, `add_unsupported_step`),
- **difficulty** (1 = easy, 2 = hard),
- original and corrupted statements,
- the **final index** in the corrupted pathway.
---
## Folder layout
```
corrupted_pathways/
wrong_entity_difficulty_1_fraction_0.2/
R-HSA-418597.tsv
R-HSA-418597.metadata.tsv
wrong_direction_difficulty_2_fraction_0.4/
...
reference_pathways/
R-HSA-418597.tsv
...
```
Each folder name encodes the corruption setting:
**`{error_type}_difficulty_{d}_fraction_{f}`**, where:
- `error_type ∈ {add_unsupported_step, wrong_direction, wrong_entity}`
- `difficulty ∈ {1, 2}`
- `fraction ∈ {0.1, 0.2, 0.3, 0.4}` (share of steps to corrupt)
---
## Conventions
- **Title vs. steps:** The **first row** in each pathway `.tsv` is the **title**; **do not** treat it as a step.
- **Indices:** All step indices are **0-based** and **exclude** the title row.
- **Single-error guarantee:** At most **one corruption per original step** (no compounding on the same step).
- **Insertions shift indices:** `insert_before/after` increase downstream indices in the **corrupted** pathway; we provide both **reference** and **final** indices.
---
## File formats
### 1) Corrupted pathway (`*.tsv`)
- **Delimiter:** tab
- **Columns:** `name` (only)
- **Semantics:**
- Row 1: **pathway title**
- Rows 2+: **ordered steps** (free text)
> Insertions **increase** the number of step rows; replacements **preserve** row count.
---
### 2) Corruption metadata (`*.metadata.tsv`)
- **Delimiter:** tab
- **One row per applied corruption.**
- **Schema**
| Column | Type | Meaning |
|---------------------------|---------------------|---------|
| `corruption_id` | `str` (UUID) | Unique identifier of this corruption instance. |
| `created_at` | `str` (ISO 8601) | Timestamp when the corruption was created. |
| `model_name` | `str` | Generator used for synthetic text (e.g., `gpt-4o`). |
| `seed` | `int` | RNG seed used in synthesis. |
| `pathway_id` | `str` | Reactome ID (e.g., `R-HSA-418597`). |
| `pathway_title` | `str` | Pathway title (matches the first `name` row). |
| `pathway_step_count` | `int` | Number of **reference** steps (excl. title). |
| **`anchor_step_index`** | **`int (0-based)`** | **Index in the reference pathway** used as the anchor for this corruption. For `replace`, it’s the step replaced; for insertions, it’s the neighbor step defining the insertion position. |
| `operation` | `str` | One of `replace`, `insert_before`, `insert_after`. |
| `error_type` | `str` | One of `wrong_entity`, `wrong_direction`, `add_unsupported_step`. |
| `difficulty` | `int` | `1` = easy, `2` = hard. |
| `original_statement` | `str or empty` | Reference step text for `replace`; empty for insertions. |
| `corrupted_statement` | `str` | Replacement or inserted text. |
| `category_rationale` | `str` | Brief justification for the corruption category. |
| `corrupted_step_index` | `int (0-based)` | **Index in the final corrupted pathway** after all insertions (excl. title). |
| `original_ref_step_index` | `int or empty` | Original reference index (for `replace`); empty for insertions. |
| `original_ref_step_text` | `str or empty` | Original step text (for `replace`); empty for insertions. |
| `sampling_seed` | `int` | Seed used by the deterministic sampler assembling the dataset. |
> **Important Indexing Fields**
> - `anchor_step_index`: position of the reference statement in **reference** coordinates (no insertions).
> - `corrupted_step_index`: position of the corrupted statement **in the corrupted pathway** after applying all insertions.
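A minimal loading sketch, assuming pandas and the folder layout shown earlier; it checks each corrupted statement against its recorded final index using the 0-based, title-excluded convention above:
```python
import pandas as pd

base = "corrupted_pathways/wrong_entity_difficulty_1_fraction_0.2/R-HSA-418597"

pathway = pd.read_csv(base + ".tsv", sep="\t")
meta = pd.read_csv(base + ".metadata.tsv", sep="\t")

title = pathway["name"].iloc[0]        # row 1 is the title, not a step
steps = pathway["name"].tolist()[1:]   # 0-based steps, title excluded

# Each corrupted statement should sit at its recorded final index.
for row in meta.itertuples():
    assert steps[row.corrupted_step_index] == row.corrupted_statement
```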
---
### 3) Reference pathway (`reference_pathways/*.tsv`)
- **Delimiter:** tab
- **Columns:** `name`, `identifier`, `summation_text`, `references`, `uri`
- **Used fields:** **only `name`** (title + ordered steps).
- Extra fields are included for provenance; evaluation here relies solely on the ordered `name` strings.
---
# Acknowledgements
We thank the Reactome team for making the pathway knowledgebase openly available. | 23 | 0 | [
"language:en",
"license:mit",
"region:us",
"biology"
] | 2025-08-22T08:31:04+00:00 | 2025-11-12T15:05:37+00:00 | 0 |
AppThreat/vdb |
This dataset comprises application and OS vulnerabilities aggregated from multiple sources, including OSV, GitHub, NVD, and Linux vendor feeds, in the form of SQLite data files (.vdb6).
## Vulnerability Data sources
- Linux [vuln-list](https://github.com/appthreat/vuln-list)
- OSV
- NVD
- GitHub
## Linux distros
- AlmaLinux
- Debian
- Alpine
- Amazon Linux
- Arch Linux
- RHEL/CentOS
- Rocky Linux
- Ubuntu
- OpenSUSE
- Photon
- Chainguard
- Wolfi OS
## Database files
The vulnerability database comprises two SQLite database files.
- data.index.vdb6 - A smaller index database optimized for quick purl or cpe string searches and vers-based range comparisons.
- data.vdb6 - Full CVE source database containing normalized data in the CVE 5.1 specification format, along with a purl prefix.
### cve_index schema
```sql
CREATE TABLE if not exists cve_index(
cve_id TEXT NOT NULL,
type TEXT NOT NULL,
namespace TEXT,
name TEXT NOT NULL,
vers TEXT NOT NULL,
purl_prefix TEXT NOT NULL
)
```
### cve_data schema
```sql
CREATE TABLE if not exists cve_data(
cve_id TEXT NOT NULL,
type TEXT NOT NULL,
namespace TEXT,
name TEXT NOT NULL,
source_data BLOB NOT NULL,
override_data BLOB,
source_data_hash TEXT NOT NULL,
vers TEXT NOT NULL,
purl_prefix TEXT NOT NULL
)
```
## Folders
- app - Application vulnerabilities from 2018. Useful for secure code reviews.
- app-2y - Application vulnerabilities from 2024. Useful to check for the latest vulnerabilities quickly.
- app-10y - Application vulnerabilities from 2014.
- app-os - Application and OS vulnerabilities from 2018. Useful for lifecycle analysis and container SBOM scans.
- app-os-10y - Application and OS vulnerabilities from 2014.
Download data.vdb6 and data.index.vdb6 files from a single folder of your choice.
## Searching for CVEs
Use the smaller index database for all search operations.
### Searching by purl
Given a purl string (`purl_str`), perform the following steps to convert it into a suitable purl prefix (`purl_prefix`) string:
In most cases, the purl prefix is simply the substring before the first "@", e.g. `purl_prefix = purl_str.split("@")[0]`.
A more robust approach (sketched in Python after this list):
- Parse and validate the string using a suitable [library](https://github.com/package-url/). Retain the parsed purl object (`purl_obj`)
- Construct a purl prefix string with the following logic:
- Set the value for `purl_prefix` to `"pkg:" + purl_obj["type"]`
- If there is a namespace, append it to purl_prefix after the slash character. Eg: `purl_prefix = purl_prefix + "/" + purl_obj['namespace']`
- Optional for Linux distros: If there is a qualifier string with the name `distro_name`, append it to the purl_prefix after the slash character. Eg: `purl_prefix = purl_prefix + "/" + purl_obj['qualifiers']['distro_name']`
- Append the name after the slash character. Eg: `purl_prefix = purl_prefix + "/" + purl_obj['name']`
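The construction above, as a minimal sketch that assumes the parsed purl is available as a plain dict (a real implementation would use one of the packageurl libraries linked above):
```python
def build_purl_prefix(purl_obj: dict) -> str:
    """Build the purl_prefix string used by the cve_index table."""
    purl_prefix = "pkg:" + purl_obj["type"]
    if purl_obj.get("namespace"):
        purl_prefix += "/" + purl_obj["namespace"]
    # Optional for Linux distros: include the distro_name qualifier.
    distro = (purl_obj.get("qualifiers") or {}).get("distro_name")
    if distro:
        purl_prefix += "/" + distro
    purl_prefix += "/" + purl_obj["name"]
    return purl_prefix

# e.g. pkg:npm/@babel/helpers@7.26.10 -> pkg:npm/@babel/helpers
print(build_purl_prefix({"type": "npm", "namespace": "@babel", "name": "helpers"}))
```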
Use the below SQL query to search by purl_prefix:
```sql
SELECT DISTINCT cve_id, type, namespace, name, vers, purl_prefix FROM cve_index where purl_prefix = ?;
```
### Searching by cpe
Parse the cpe string to extract the vendor, product, and version. A Python regex for this is shown below:
```python
import re
CPE_FULL_REGEX = re.compile(
"cpe:?:[^:]+:(?P<cve_type>[^:]+):(?P<vendor>[^:]+):(?P<package>[^:]+):(?P<version>[^:]+):(?P<update>[^:]+):(?P<edition>[^:]+):(?P<lang>[^:]+):(?P<sw_edition>[^:]+):(?P<target_sw>[^:]+):(?P<target_hw>[^:]+):(?P<other>[^:]+)"
)
```
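A hedged usage sketch (the CPE string is illustrative); the named groups map onto the index columns as described below:
```python
m = CPE_FULL_REGEX.match("cpe:2.3:a:apache:tomcat:9.0.0:*:*:*:*:*:*:*")
if m:
    namespace = m.group("vendor")   # "apache"  -> cve_index.namespace
    name = m.group("package")       # "tomcat"  -> cve_index.name
    version = m.group("version")    # "9.0.0"   -> used for vers comparison
```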
In the `cve_index` table, vendor maps to namespace and package maps to name. The SQL query is below:
```sql
SELECT DISTINCT cve_id, type, namespace, name, vers, purl_prefix FROM cve_index where namespace = ? AND name = ?;
```
### Comparing version ranges using vers
Refer to the vers [documentation](https://github.com/package-url/purl-spec/blob/version-range-spec/VERSION-RANGE-SPEC.rst) for details on vers and the logic for parsing a range and checking whether a version falls inside it. To simplify this logic, a value in the vers column of `cve_index` contains at most two constraints (one lower bound and one upper bound).
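A hedged sketch of such a two-constraint check (vers syntax per the spec linked above, e.g. `vers:npm/>=1.2.3|<2.0.0`; PEP 440 comparison via `packaging` is an approximation that does not hold for every ecosystem):
```python
from packaging.version import Version  # approximation; not every scheme is PEP 440

def vers_contains(vers: str, candidate: str) -> bool:
    """Check one- or two-constraint vers ranges like 'vers:npm/>=1.2.3|<2.0.0'."""
    constraints = vers.split("/", 1)[1].split("|")
    v = Version(candidate)
    for c in constraints:
        if c.startswith(">="):
            ok = v >= Version(c[2:])
        elif c.startswith("<="):
            ok = v <= Version(c[2:])
        elif c.startswith(">"):
            ok = v > Version(c[1:])
        elif c.startswith("<"):
            ok = v < Version(c[1:])
        else:
            ok = v == Version(c.lstrip("="))
        if not ok:
            return False
    return True

print(vers_contains("vers:npm/>=1.2.3|<2.0.0", "1.5.0"))  # True
```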
## Combining data
Search the `cve_index` table in the index database first to retrieve any matching cve_id and purl_prefix values. Use these two column values to retrieve the full CVE source information from the `cve_data` table. An example query is shown below:
```sql
SELECT DISTINCT cve_id, type, namespace, name, source_data_hash, json(source_data), json(override_data), vers, purl_prefix FROM cve_data
WHERE cve_id = ? AND vers = ? AND purl_prefix = ?
GROUP BY purl_prefix
ORDER BY cve_id DESC;
```
Use the `source_data_hash` values to filter out duplicate results for the same CVE. Duplicates are possible when multiple vers ranges match the same CVE and purl prefix.
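An end-to-end sketch of the two-step lookup with Python's built-in sqlite3 module (assumes an SQLite build that provides the JSON1 `json()` function, as the queries above do; the purl prefix is illustrative):
```python
import sqlite3

index_db = sqlite3.connect("data.index.vdb6")
full_db = sqlite3.connect("data.vdb6")

purl_prefix = "pkg:npm/@babel/helpers"  # built as shown earlier

hits = index_db.execute(
    "SELECT DISTINCT cve_id, vers, purl_prefix FROM cve_index WHERE purl_prefix = ?",
    (purl_prefix,),
).fetchall()

seen = set()
for cve_id, vers, prefix in hits:
    for cve, data_hash, source in full_db.execute(
        "SELECT cve_id, source_data_hash, json(source_data) FROM cve_data "
        "WHERE cve_id = ? AND vers = ? AND purl_prefix = ?",
        (cve_id, vers, prefix),
    ):
        if data_hash in seen:
            continue  # duplicate: several vers ranges matched the same CVE
        seen.add(data_hash)
        print(cve)
```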
## Citation
Use the below citation in your research.
```text
@misc{vdb,
author = {Team AppThreat},
month = may,
title = {{AppThreat vulnerability-db}},
howpublished = {{https://huggingface.co/datasets/AppThreat/vdb}},
year = {2025}
}
```
|
This dataset comprises application and OS vulnerabilities aggregated from multiple sources, including OSV, GitHub, NVD, and Linux vendor feeds, in the form of SQLite data files (.vdb6).
## Vulnerability Data sources
- Linux [vuln-list](https://github.com/appthreat/vuln-list)
- OSV
- NVD
- GitHub
## Linux distros
- AlmaLinux
- Debian
- Alpine
- Amazon Linux
- Arch Linux
- RHEL/CentOS
- Rocky Linux
- Ubuntu
- OpenSUSE
- Photon
- Chainguard
- Wolfi OS
## Database files
The vulnerability database comprises two SQLite database files.
- data.index.vdb6 - A smaller index database optimized for quick purl or cpe string searches and vers-based range comparisons.
- data.vdb6 - Full CVE source database containing normalized data in the CVE 5.1 specification format, along with a purl prefix.
### cve_index schema
```sql
CREATE TABLE if not exists cve_index(
cve_id TEXT NOT NULL,
type TEXT NOT NULL,
namespace TEXT,
name TEXT NOT NULL,
vers TEXT NOT NULL,
purl_prefix TEXT NOT NULL
)
```
### cve_data schema
```sql
CREATE TABLE if not exists cve_data(
cve_id TEXT NOT NULL,
type TEXT NOT NULL,
namespace TEXT,
name TEXT NOT NULL,
source_data BLOB NOT NULL,
override_data BLOB,
source_data_hash TEXT NOT NULL,
vers TEXT NOT NULL,
purl_prefix TEXT NOT NULL
)
```
## Folders
- app - Application vulnerabilities from 2018. Useful for secure code reviews.
- app-2y - Application vulnerabilities from 2024. Useful to check for the latest vulnerabilities quickly.
- app-10y - Application vulnerabilities from 2014.
- app-os - Application and OS vulnerabilities from 2018. Useful for lifecycle analysis and container SBOM scans.
- app-os-10y - Application and OS vulnerabilities from 2014.
Download data.vdb6 and data.index.vdb6 files from a single folder of your choice.
## Searching for CVEs
Use the smaller index database for all search operations.
### Searching by purl
Given a purl string (`purl_str`), perform the following steps to convert this into a suitable purl prefix (`purl_prefix`) string:
In most cases, the purl prefix is simply the substring before the first "@", e.g. `purl_prefix = purl_str.split("@")[0]`.
A more robust approach:
- Parse and validate the string using a suitable [library](https://github.com/package-url/). Retain the parsed purl object (`purl_obj`)
- Construct a purl prefix string with the following logic:
- Set the value for `purl_prefix` to `"pkg:" + purl_obj["type"]`
- If there is a namespace, append it to purl_prefix after the slash character. Eg: `purl_prefix = purl_prefix + "/" + purl_obj['namespace']`
- Optional for Linux distros: If there is a qualifier string with the name `distro_name`, append it to the purl_prefix after the slash character. Eg: `purl_prefix = purl_prefix + "/" + purl_obj['qualifiers']['distro_name']`
- Append the name after the slash character. Eg: `purl_prefix = purl_prefix + "/" + purl_obj['name']`
Use the below SQL query to search by purl_prefix:
```sql
SELECT DISTINCT cve_id, type, namespace, name, vers, purl_prefix FROM cve_index where purl_prefix = ?;
```
### Searching by cpe
Parse the cpe string to extract the vendor, product, and version. A Python regex for this is shown below:
```python
import re
CPE_FULL_REGEX = re.compile(
"cpe:?:[^:]+:(?P<cve_type>[^:]+):(?P<vendor>[^:]+):(?P<package>[^:]+):(?P<version>[^:]+):(?P<update>[^:]+):(?P<edition>[^:]+):(?P<lang>[^:]+):(?P<sw_edition>[^:]+):(?P<target_sw>[^:]+):(?P<target_hw>[^:]+):(?P<other>[^:]+)"
)
```
In the `cve_index` table, vendor maps to namespace and package maps to name. The SQL query is below:
```sql
SELECT DISTINCT cve_id, type, namespace, name, vers, purl_prefix FROM cve_index where namespace = ? AND name = ?;
```
### Comparing version ranges using vers
Refer to the vers [documentation](https://github.com/package-url/purl-spec/blob/version-range-spec/VERSION-RANGE-SPEC.rst) for details on vers and the logic for parsing a range and checking whether a version falls inside it. To simplify this logic, a value in the vers column of `cve_index` contains at most two constraints (one lower bound and one upper bound).
## Combining data
Search the `cve_index` table in the index database first to retrieve any matching cve_id and purl_prefix values. Use these two column values to retrieve the full CVE source information from the `cve_data` table. An example query is shown below:
```sql
SELECT DISTINCT cve_id, type, namespace, name, source_data_hash, json(source_data), json(override_data), vers, purl_prefix FROM cve_data
WHERE cve_id = ? AND vers = ? AND purl_prefix = ?
GROUP BY purl_prefix
ORDER BY cve_id DESC;
```
Use the `source_data_hash` values to filter out duplicate results for the same CVE. Duplicates are possible when multiple vers ranges match the same CVE and purl prefix.
## Citation
Use the below citation in your research.
```text
@misc{vdb,
author = {Team AppThreat},
month = may,
title = {{AppThreat vulnerability-db}},
howpublished = {{https://huggingface.co/datasets/AppThreat/vdb}},
year = {2025}
}
```
| 2,422 | 1 | [
"language:en",
"license:mit",
"region:us",
"vulnerabilities",
"vdb",
"sca",
"osv",
"nvd",
"ghsa",
"vers",
"purl"
] | 2025-02-17T23:35:01+00:00 | 2025-11-12T15:02:53+00:00 | 0 |
HSP-IIT/hri2 |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "ergocub",
"total_episodes": 2,
"total_frames": 5801,
"total_tasks": 1,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 10,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"action": {
"dtype": "float32",
"names": [
"left_hand.position.x",
"left_hand.position.y",
"left_hand.position.z",
"left_hand.orientation.d1",
"left_hand.orientation.d2",
"left_hand.orientation.d3",
"left_hand.orientation.d4",
"left_hand.orientation.d5",
"left_hand.orientation.d6",
"right_hand.position.x",
"right_hand.position.y",
"right_hand.position.z",
"right_hand.orientation.d1",
"right_hand.orientation.d2",
"right_hand.orientation.d3",
"right_hand.orientation.d4",
"right_hand.orientation.d5",
"right_hand.orientation.d6",
"head.orientation.qw",
"head.orientation.qx",
"head.orientation.qy",
"head.orientation.qz",
"left_fingers.thumb_add",
"left_fingers.thumb_oc",
"left_fingers.index_add",
"left_fingers.index_oc",
"left_fingers.middle_oc",
"left_fingers.ring_pinky_oc",
"right_fingers.thumb_add",
"right_fingers.thumb_oc",
"right_fingers.index_add",
"right_fingers.index_oc",
"right_fingers.middle_oc",
"right_fingers.ring_pinky_oc"
],
"shape": [
34
]
},
"observation.state": {
"dtype": "float32",
"names": [
"left_hand.position.x",
"left_hand.position.y",
"left_hand.position.z",
"left_hand.orientation.d1",
"left_hand.orientation.d2",
"left_hand.orientation.d3",
"left_hand.orientation.d4",
"left_hand.orientation.d5",
"left_hand.orientation.d6",
"right_hand.position.x",
"right_hand.position.y",
"right_hand.position.z",
"right_hand.orientation.d1",
"right_hand.orientation.d2",
"right_hand.orientation.d3",
"right_hand.orientation.d4",
"right_hand.orientation.d5",
"right_hand.orientation.d6",
"head.orientation.qw",
"head.orientation.qx",
"head.orientation.qy",
"head.orientation.qz",
"left_fingers.thumb_add",
"left_fingers.thumb_oc",
"left_fingers.index_add",
"left_fingers.index_oc",
"left_fingers.middle_oc",
"left_fingers.ring_pinky_oc",
"right_fingers.thumb_add",
"right_fingers.thumb_oc",
"right_fingers.index_add",
"right_fingers.index_oc",
"right_fingers.middle_oc",
"right_fingers.ring_pinky_oc"
],
"shape": [
34
]
},
"observation.images.egocentric": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 10,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
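A hedged loading sketch for LeRobot datasets such as this one (the import path has moved between LeRobot releases, so treat it as an assumption):
```python
# Import path varies across LeRobot versions; this is one known location.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

ds = LeRobotDataset("HSP-IIT/hri2")
frame = ds[0]
print(frame["action"].shape)  # expected: (34,), per the feature spec above
```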
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "ergocub",
"total_episodes": 2,
"total_frames": 5801,
"total_tasks": 1,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 10,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"action": {
"dtype": "float32",
"names": [
"left_hand.position.x",
"left_hand.position.y",
"left_hand.position.z",
"left_hand.orientation.d1",
"left_hand.orientation.d2",
"left_hand.orientation.d3",
"left_hand.orientation.d4",
"left_hand.orientation.d5",
"left_hand.orientation.d6",
"right_hand.position.x",
"right_hand.position.y",
"right_hand.position.z",
"right_hand.orientation.d1",
"right_hand.orientation.d2",
"right_hand.orientation.d3",
"right_hand.orientation.d4",
"right_hand.orientation.d5",
"right_hand.orientation.d6",
"head.orientation.qw",
"head.orientation.qx",
"head.orientation.qy",
"head.orientation.qz",
"left_fingers.thumb_add",
"left_fingers.thumb_oc",
"left_fingers.index_add",
"left_fingers.index_oc",
"left_fingers.middle_oc",
"left_fingers.ring_pinky_oc",
"right_fingers.thumb_add",
"right_fingers.thumb_oc",
"right_fingers.index_add",
"right_fingers.index_oc",
"right_fingers.middle_oc",
"right_fingers.ring_pinky_oc"
],
"shape": [
34
]
},
"observation.state": {
"dtype": "float32",
"names": [
"left_hand.position.x",
"left_hand.position.y",
"left_hand.position.z",
"left_hand.orientation.d1",
"left_hand.orientation.d2",
"left_hand.orientation.d3",
"left_hand.orientation.d4",
"left_hand.orientation.d5",
"left_hand.orientation.d6",
"right_hand.position.x",
"right_hand.position.y",
"right_hand.position.z",
"right_hand.orientation.d1",
"right_hand.orientation.d2",
"right_hand.orientation.d3",
"right_hand.orientation.d4",
"right_hand.orientation.d5",
"right_hand.orientation.d6",
"head.orientation.qw",
"head.orientation.qx",
"head.orientation.qy",
"head.orientation.qz",
"left_fingers.thumb_add",
"left_fingers.thumb_oc",
"left_fingers.index_add",
"left_fingers.index_oc",
"left_fingers.middle_oc",
"left_fingers.ring_pinky_oc",
"right_fingers.thumb_add",
"right_fingers.thumb_oc",
"right_fingers.index_add",
"right_fingers.index_oc",
"right_fingers.middle_oc",
"right_fingers.ring_pinky_oc"
],
"shape": [
34
]
},
"observation.images.egocentric": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 10,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] | 2025-11-12T14:53:26+00:00 | 2025-11-12T15:01:46+00:00 | 0 |
yaisa5ramriez/good_data |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "rover",
"total_episodes": 1,
"total_frames": 385,
"total_tasks": 1,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 10,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"observation.image.main": {
"dtype": "video",
"shape": [
512,
512,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.height": 512,
"video.width": 512,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 10,
"video.channels": 3,
"has_audio": false
}
},
"observation.state.imu": {
"dtype": "float32",
"shape": [
10
],
"names": [
"orientation.x",
"orientation.y",
"orientation.z",
"orientation.w",
"angular_velocity.x",
"angular_velocity.y",
"angular_velocity.z",
"linear_acceleration.x",
"linear_acceleration.y",
"linear_acceleration.z"
]
},
"observation.state.odometry_local": {
"dtype": "float32",
"shape": [
5
],
"names": [
"pose.pose.position.x",
"pose.pose.position.y",
"pose.pose.position.z",
"twist.twist.linear.x",
"twist.twist.angular.z"
]
},
"observation.state.odometry_global": {
"dtype": "float32",
"shape": [
5
],
"names": [
"pose.pose.position.x",
"pose.pose.position.y",
"pose.pose.position.z",
"twist.twist.linear.x",
"twist.twist.angular.z"
]
},
"action": {
"dtype": "float32",
"shape": [
3
],
"names": [
"linear.x",
"linear.y",
"linear.z"
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
},
"rosetta_fingerprint": "ef354f37c89b2ab3"
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "rover",
"total_episodes": 1,
"total_frames": 385,
"total_tasks": 1,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 10,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"observation.image.main": {
"dtype": "video",
"shape": [
512,
512,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.height": 512,
"video.width": 512,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 10,
"video.channels": 3,
"has_audio": false
}
},
"observation.state.imu": {
"dtype": "float32",
"shape": [
10
],
"names": [
"orientation.x",
"orientation.y",
"orientation.z",
"orientation.w",
"angular_velocity.x",
"angular_velocity.y",
"angular_velocity.z",
"linear_acceleration.x",
"linear_acceleration.y",
"linear_acceleration.z"
]
},
"observation.state.odometry_local": {
"dtype": "float32",
"shape": [
5
],
"names": [
"pose.pose.position.x",
"pose.pose.position.y",
"pose.pose.position.z",
"twist.twist.linear.x",
"twist.twist.angular.z"
]
},
"observation.state.odometry_global": {
"dtype": "float32",
"shape": [
5
],
"names": [
"pose.pose.position.x",
"pose.pose.position.y",
"pose.pose.position.z",
"twist.twist.linear.x",
"twist.twist.angular.z"
]
},
"action": {
"dtype": "float32",
"shape": [
3
],
"names": [
"linear.x",
"linear.y",
"linear.z"
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
},
"rosetta_fingerprint": "ef354f37c89b2ab3"
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] | 2025-11-12T14:54:23+00:00 | 2025-11-12T14:58:46+00:00 | 0 |
LSDB/desi-dr1-zcat |
The Dark Energy Spectroscopic Instrument (DESI) is a 5-year spectroscopic redshift survey designed to map the three-dimensional structure of the universe between z=0 and z=4. Data Release 1 includes high-confidence redshifts for 18.7 million objects - 13.1 million galaxies, 1.6 million quasars, and 4 million stars - from observations taken between May 2021 and June 2022.
**Acknowledgments**
This research used data obtained with the Dark Energy Spectroscopic Instrument (DESI). DESI construction and operations is managed by the Lawrence Berkeley National Laboratory. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of High-Energy Physics, under Contract No. DE–AC02–05CH11231, and by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract. Additional support for DESI was provided by the U.S. National Science Foundation (NSF), Division of Astronomical Sciences under Contract No. AST-0950945 to the NSF’s National Optical-Infrared Astronomy Research Laboratory; the Science and Technology Facilities Council of the United Kingdom; the Gordon and Betty Moore Foundation; the Heising-Simons Foundation; the French Alternative Energies and Atomic Energy Commission (CEA); the National Council of Humanities, Science and Technology of Mexico (CONAHCYT); the Ministry of Science and Innovation of Spain (MICINN), and by the DESI Member Institutions: www.desi.lbl.gov/collaborating-institutions. The DESI collaboration is honored to be permitted to conduct scientific research on I’oligam Du’ag (Kitt Peak), a mountain with particular significance to the Tohono O’odham Nation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the U.S. National Science Foundation, the U.S. Department of Energy, or any of the listed funding agencies.
[**Official release**](https://data.desi.lbl.gov/doc/releases/dr1/)
[**Column Description**](https://desidatamodel.readthedocs.io/en/latest/column_descriptions.html)
[**Research Paper**](https://ui.adsabs.harvard.edu/abs/2025arXiv250314745D)
|
The Dark Energy Spectroscopic Instrument (DESI) is a 5-year spectroscopic redshift survey designed to map the three-dimensional structure of the universe between z=0 and z=4. Data Release 1 includes high-confidence redshifts for 18.7 million objects - 13.1 million galaxies, 1.6 million quasars, and 4 million stars - from observations taken between May 2021 and June 2022.
**Acknowledgments**
This research used data obtained with the Dark Energy Spectroscopic Instrument (DESI). DESI construction and operations is managed by the Lawrence Berkeley National Laboratory. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of High-Energy Physics, under Contract No. DE–AC02–05CH11231, and by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract. Additional support for DESI was provided by the U.S. National Science Foundation (NSF), Division of Astronomical Sciences under Contract No. AST-0950945 to the NSF’s National Optical-Infrared Astronomy Research Laboratory; the Science and Technology Facilities Council of the United Kingdom; the Gordon and Betty Moore Foundation; the Heising-Simons Foundation; the French Alternative Energies and Atomic Energy Commission (CEA); the National Council of Humanities, Science and Technology of Mexico (CONAHCYT); the Ministry of Science and Innovation of Spain (MICINN), and by the DESI Member Institutions: www.desi.lbl.gov/collaborating-institutions. The DESI collaboration is honored to be permitted to conduct scientific research on I’oligam Du’ag (Kitt Peak), a mountain with particular significance to the Tohono O’odham Nation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the U.S. National Science Foundation, the U.S. Department of Energy, or any of the listed funding agencies.
[**Official release**](https://data.desi.lbl.gov/doc/releases/dr1/)
[**Column Description**](https://desidatamodel.readthedocs.io/en/latest/column_descriptions.html)
[**Research Paper**](https://ui.adsabs.harvard.edu/abs/2025arXiv250314745D)
| 25 | 0 | [
"license:cc-by-4.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-10-27T14:41:18+00:00 | 2025-11-12T14:58:58+00:00 | 0 |
dhvazquez/mtg_synthetic_large_dataset | # Magic: The Gathering Synthetic Large Image Dataset
Synthetic images of MTG cards with the following features:
* Photorealistic rendering using BlenderProc2 and HDRI environments
* Precise card geometry with rounded corners
* Random transformations for data augmentation
* Segmentation masks for semantic segmentation training
To decompress on Ubuntu:
```
sudo apt install p7zip-full
7z x mtg_synthetic_large_dataset.7z.001
```
# Dataset Structure
Train Set
/train/image - Rendered Images (jpg)
/train/keypoints - Corner keypoints (json)
/train/mask - Mask (png)
/train/temp_textures - 3d Model textures (reference)
Test Set
/test/image - Rendered Images (jpg)
/test/keypoints - Json w/Corner keypoints
/test/mask - Mask (png)
/test/temp_textures - 3d Model textures (reference)
Complete structure
<set> - train or test
<data_type> - image / keypoint / mask
<reference> - Directory w/card information
<file> - XXX.jpg for image, XXX.png for mask, XXX.json for keypoint
Reference directory name:
```
00173df7-a584-410c-af1d-ada9c791056a__2cfd365e-34d1-4224-b925-119000311934_woe_205_en_normal
│                                   │ │                                    │   │   │  │
│                                   │ │                                    │   │   │  └─ Layout
│                                   │ │                                    │   │   └─ Language
│                                   │ │                                    │   └─ Collector Number
│                                   │ │                                    └─ Set
│                                   │ └─ Card ID (Scryfall)
│                                   └─ double separator ("__")
└─ Oracle ID (Scryfall)
```
| Component | Description | Example |
|------------------|----------------------------------------------------------------------------|--------------------------------------|
| oracle_id | Unique card identifier in Scryfall (identifies versions of the same card) | 00173df7-a584-410c-af1d-ada9c791056a |
| card_id | Unique identifier for this specific printing | 2cfd365e-34d1-4224-b925-119000311934 |
| set_code | Set/expansion code | woe (Wilds of Eldraine) |
| collector_number | Collector number in the set | 205 |
| lang | Language code (ISO 639-1) | en, es, ja, fr, pt, etc. |
| layout | Card layout type | normal, transform, split, etc. |
Image (.jpg) / Mask (.png) / Keypoints (.json) examples:



Keypoints (001.json):
```json
[[166.02081091300872, 865.1902240268578], [185.66033454854758, 359.2114138980728], [573.4461817935525, 390.0347569164927], [525.5468175245054, 908.0770135128237]]
```
Temp Texture:

Example file paths:
```
test/images/005ee549-1bf5-478f-bc3f-3e791bd7eecf__cd9bbbc9-b6ff-451f-aaa7-773fe42fe54b_clb_81_pt_normal/001.jpg
test/masks/005ee549-1bf5-478f-bc3f-3e791bd7eecf__cd9bbbc9-b6ff-451f-aaa7-773fe42fe54b_clb_81_pt_normal/001.png
test/keypoints/005ee549-1bf5-478f-bc3f-3e791bd7eecf__cd9bbbc9-b6ff-451f-aaa7-773fe42fe54b_clb_81_pt_normal/001.json
test/temp_textures/005ee549-1bf5-478f-bc3f-3e791bd7eecf__cd9bbbc9-b6ff-451f-aaa7-773fe42fe54b_clb_81_pt_normal_texture.jpg
```
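A minimal sketch for loading one sample using the paths above (Pillow assumed available); the keypoints JSON is a list of four [x, y] card-corner coordinates in image pixels:
```python
import json
from PIL import Image

ref = "005ee549-1bf5-478f-bc3f-3e791bd7eecf__cd9bbbc9-b6ff-451f-aaa7-773fe42fe54b_clb_81_pt_normal"

image = Image.open(f"test/images/{ref}/001.jpg")
mask = Image.open(f"test/masks/{ref}/001.png")
with open(f"test/keypoints/{ref}/001.json") as f:
    corners = json.load(f)  # four [x, y] corner coordinates

for x, y in corners:
    assert 0 <= x <= image.width and 0 <= y <= image.height
```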
# MTG Dataset Analysis
## Summary
| Metric | Value |
|--------|-------|
| **Total Cards** | 102,644 |
| **Unique Cards** | 7,715 |
| **Sets Represented** | 825 |
| **Languages** | 11 |
| **Full Art Cards** | 2,501 |
## Color Identity Distribution
| Color Identity | Card Count |
|----------------|-----------|
| Colorless | 12,827 |
| B (Black) | 10,780 |
| R (Red) | 10,380 |
| G (Green) | 11,223 |
| U (Blue) | 10,746 |
| W (White) | 10,445 |
| GR (Green/Red) | 3,052 |
| BU (Black/Blue) | 3,171 |
| GU (Green/Blue) | 3,198 |
| UW (Blue/White) | 3,019 |
| BG (Black/Green) | 3,067 |
| GW (Green/White) | 3,238 |
| RW (Red/White) | 2,895 |
| BW (Black/White) | 2,925 |
| BR (Black/Red) | 2,910 |
| RU (Red/Blue) | 2,922 |
| BGR (Black/Green/Red) | 470 |
| BGU (Black/Green/Blue) | 472 |
| BRW (Black/Red/White) | 518 |
| BGW (Black/Green/White) | 467 |
| GRU (Green/Red/Blue) | 474 |
| BRU (Black/Red/Blue) | 565 |
| BUW (Black/Blue/White) | 618 |
| RUW (Red/Blue/White) | 391 |
| GUW (Green/Blue/White) | 512 |
| GRW (Green/Red/White) | 507 |
| BGRUW (All colors) | 722 |
| BRUW (Black/Red/Blue/White) | 25 |
| BGRU (Black/Green/Red/Blue) | 27 |
| BGRW (Black/Green/Red/White) | 12 |
| BGUW (Black/Green/Blue/White) | 30 |
| GRUW (Green/Red/Blue/White) | 36 |
## Border Color Distribution
| Border Color | Card Count |
|--------------|-----------|
| Black | 95,769 |
| Gold | 989 |
| White | 2,226 |
| Yellow | 535 |
| Borderless | 2,545 |
| Silver | 580 |
## Layout Distribution
| Layout | Card Count |
|--------|-----------|
| Normal | 95,545 |
| Adventure | 1,108 |
| Split | 1,224 |
| Saga | 1,279 |
| Planar | 816 |
| Token | 561 |
| Leveler | 309 |
| Mutate | 526 |
| Scheme | 305 |
| Class | 260 |
| Prototype | 198 |
| Meld | 174 |
| Emblem | 133 |
| Case | 112 |
| Flip | 42 |
| Host | 29 |
| Augment | 17 |
| Other | 6 |
## Language Distribution
| Language | Card Count |
|----------|-----------|
| English (en) | 26,934 |
| Japanese (ja) | 11,813 |
| French (fr) | 10,203 |
| German (de) | 10,178 |
| Spanish (es) | 9,276 |
| Italian (it) | 9,105 |
| Chinese Simplified (zhs) | 7,763 |
| Portuguese (pt) | 6,558 |
| Russian (ru) | 4,632 |
| Chinese Traditional (zht) | 3,631 |
| Korean (ko) | 2,539 |
## Full Art Cards by Language
| Language | Full Art Cards |
|----------|---------------|
| English (en) | 872 |
| Japanese (ja) | 253 |
| German (de) | 235 |
| French (fr) | 234 |
| Spanish (es) | 217 |
| Italian (it) | 204 |
| Chinese Simplified (zhs) | 187 |
| Portuguese (pt) | 145 |
| Russian (ru) | 52 |
| Chinese Traditional (zht) | 51 |
| Korean (ko) | 51 |
| # Magic: The Gathering Synthetic Large Image Dataset
Synthetic images of MTG cards with the following features:
* Photorealistic rendering using BlenderProc2 and HDRI environments
* Precise card geometry with rounded corners
* Random transformations for data augmentation
* Segmentation masks for semantic segmentation training
To decompress on Ubuntu:
```
sudo apt install p7zip-full
7z x mtg_synthetic_large_dataset.7z.001
```
# Dataset Structure
Train Set
/train/image - Rendered Images (jpg)
/train/keypoints - Corner keypoints (json)
/train/mask - Mask (png)
/train/temp_textures - 3d Model textures (reference)
Test Set
/test/image - Rendered Images (jpg)
/test/keypoints - Json w/Corner keypoints
/test/mask - Mask (png)
/test/temp_textures - 3d Model textures (reference)
Complete structure
<set> - train or test
<data_type> - image / keypoint / mask
<reference> - Directory w/card information
<file> - XXX.jpg for image, XXX.png for mask, XXX.json for keypoint
Reference directory name:
```
00173df7-a584-410c-af1d-ada9c791056a__2cfd365e-34d1-4224-b925-119000311934_woe_205_en_normal
│                                   │ │                                    │   │   │  │
│                                   │ │                                    │   │   │  └─ Layout
│                                   │ │                                    │   │   └─ Language
│                                   │ │                                    │   └─ Collector Number
│                                   │ │                                    └─ Set
│                                   │ └─ Card ID (Scryfall)
│                                   └─ double separator ("__")
└─ Oracle ID (Scryfall)
```
| Component | Description | Example |
|------------------|----------------------------------------------------------------------------|--------------------------------------|
| oracle_id | Unique card identifier in Scryfall (identifies versions of the same card) | 00173df7-a584-410c-af1d-ada9c791056a |
| card_id | Unique identifier for this specific printing | 2cfd365e-34d1-4224-b925-119000311934 |
| set_code | Set/expansion code | woe (Wilds of Eldraine) |
| collector_number | Collector number in the set | 205 |
| lang | Language code (ISO 639-1) | en, es, ja, fr, pt, etc. |
| layout | Card layout type | normal, transform, split, etc. |
Image (.jpg) / Mask (.png) / Keypoints (.json) examples:



Keypoints (001.json):
```json
[[166.02081091300872, 865.1902240268578], [185.66033454854758, 359.2114138980728], [573.4461817935525, 390.0347569164927], [525.5468175245054, 908.0770135128237]]
```
Temp Texture:

Example file paths:
```
test/images/005ee549-1bf5-478f-bc3f-3e791bd7eecf__cd9bbbc9-b6ff-451f-aaa7-773fe42fe54b_clb_81_pt_normal/001.jpg
test/masks/005ee549-1bf5-478f-bc3f-3e791bd7eecf__cd9bbbc9-b6ff-451f-aaa7-773fe42fe54b_clb_81_pt_normal/001.png
test/keypoints/005ee549-1bf5-478f-bc3f-3e791bd7eecf__cd9bbbc9-b6ff-451f-aaa7-773fe42fe54b_clb_81_pt_normal/001.json
test/temp_textures/005ee549-1bf5-478f-bc3f-3e791bd7eecf__cd9bbbc9-b6ff-451f-aaa7-773fe42fe54b_clb_81_pt_normal_texture.jpg
```
# MTG Dataset Analysis
## Summary
| Metric | Value |
|--------|-------|
| **Total Cards** | 102,644 |
| **Unique Cards** | 7,715 |
| **Sets Represented** | 825 |
| **Languages** | 11 |
| **Full Art Cards** | 2,501 |
## Color Identity Distribution
| Color Identity | Card Count |
|----------------|-----------|
| Colorless | 12,827 |
| B (Black) | 10,780 |
| R (Red) | 10,380 |
| G (Green) | 11,223 |
| U (Blue) | 10,746 |
| W (White) | 10,445 |
| GR (Green/Red) | 3,052 |
| BU (Black/Blue) | 3,171 |
| GU (Green/Blue) | 3,198 |
| UW (Blue/White) | 3,019 |
| BG (Black/Green) | 3,067 |
| GW (Green/White) | 3,238 |
| RW (Red/White) | 2,895 |
| BW (Black/White) | 2,925 |
| BR (Black/Red) | 2,910 |
| RU (Red/Blue) | 2,922 |
| BGR (Black/Green/Red) | 470 |
| BGU (Black/Green/Blue) | 472 |
| BRW (Black/Red/White) | 518 |
| BGW (Black/Green/White) | 467 |
| GRU (Green/Red/Blue) | 474 |
| BRU (Black/Red/Blue) | 565 |
| BUW (Black/Blue/White) | 618 |
| RUW (Red/Blue/White) | 391 |
| GUW (Green/Blue/White) | 512 |
| GRW (Green/Red/White) | 507 |
| BGRUW (All colors) | 722 |
| BRUW (Black/Red/Blue/White) | 25 |
| BGRU (Black/Green/Red/Blue) | 27 |
| BGRW (Black/Green/Red/White) | 12 |
| BGUW (Black/Green/Blue/White) | 30 |
| GRUW (Green/Red/Blue/White) | 36 |
## Border Color Distribution
| Border Color | Card Count |
|--------------|-----------|
| Black | 95,769 |
| Gold | 989 |
| White | 2,226 |
| Yellow | 535 |
| Borderless | 2,545 |
| Silver | 580 |
## Layout Distribution
| Layout | Card Count |
|--------|-----------|
| Normal | 95,545 |
| Adventure | 1,108 |
| Split | 1,224 |
| Saga | 1,279 |
| Planar | 816 |
| Token | 561 |
| Leveler | 309 |
| Mutate | 526 |
| Scheme | 305 |
| Class | 260 |
| Prototype | 198 |
| Meld | 174 |
| Emblem | 133 |
| Case | 112 |
| Flip | 42 |
| Host | 29 |
| Augment | 17 |
| Other | 6 |
## Language Distribution
| Language | Card Count |
|----------|-----------|
| English (en) | 26,934 |
| Japanese (ja) | 11,813 |
| French (fr) | 10,203 |
| German (de) | 10,178 |
| Spanish (es) | 9,276 |
| Italian (it) | 9,105 |
| Chinese Simplified (zhs) | 7,763 |
| Portuguese (pt) | 6,558 |
| Russian (ru) | 4,632 |
| Chinese Traditional (zht) | 3,631 |
| Korean (ko) | 2,539 |
## Full Art Cards by Language
| Language | Full Art Cards |
|----------|---------------|
| English (en) | 872 |
| Japanese (ja) | 253 |
| German (de) | 235 |
| French (fr) | 234 |
| Spanish (es) | 217 |
| Italian (it) | 204 |
| Chinese Simplified (zhs) | 187 |
| Portuguese (pt) | 145 |
| Russian (ru) | 52 |
| Chinese Traditional (zht) | 51 |
| Korean (ko) | 51 |
| 6 | 0 | [
"task_categories:image-classification",
"task_categories:image-segmentation",
"task_categories:image-feature-extraction",
"language:en",
"license:mit",
"region:us"
] | 2025-11-11T15:11:02+00:00 | 2025-11-12T14:55:45+00:00 | 0 |
swap-uniba/MTVQA_IT |
# Dataset Card for MTVQA_IT
## Dataset description
This is a formatted version of [MTVQA](https://huggingface.co/datasets/ByteDance/MTVQA) including only the Italian data.
## Citation
If you use this dataset in your research, you should cite the original publication:
```
@misc{tang2024mtvqa,
title={MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering},
author={Jingqun Tang and Qi Liu and Yongjie Ye and Jinghui Lu and Shu Wei and Chunhui Lin and Wanqing Li and Mohamad Fitri Faiz Bin Mahmood and Hao Feng and Zhen Zhao and Yanjie Wang and Yuliang Liu and Hao Liu and Xiang Bai and Can Huang},
year={2024},
eprint={2405.11985},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
Project page: https://bytedance.github.io/MTVQA/ |
# Dataset Card for MTVQA_IT
## Dataset description
This is a formatted version of [MTVQA](https://huggingface.co/datasets/ByteDance/MTVQA) including only the Italian data.
## Citation
If you use this dataset in your research, you should cite the original publication:
```
@misc{tang2024mtvqa,
title={MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering},
author={Jingqun Tang and Qi Liu and Yongjie Ye and Jinghui Lu and Shu Wei and Chunhui Lin and Wanqing Li and Mohamad Fitri Faiz Bin Mahmood and Hao Feng and Zhen Zhao and Yanjie Wang and Yuliang Liu and Hao Liu and Xiang Bai and Can Huang},
year={2024},
eprint={2405.11985},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
Project page: https://bytedance.github.io/MTVQA/ | 26 | 0 | [
"task_categories:image-text-to-text",
"license:cc-by-nc-4.0",
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2405.11985",
"region:us"
] | 2024-08-16T21:11:52+00:00 | 2025-11-12T14:55:56+00:00 | 0 |