---
license: mit
task_categories:
- visual-question-answering
- image-classification
language:
- en
tags:
- robotics
- condition-checking
- multi-modal
- vision-language
size_categories:
- n<1K
---

# Condition Checking Dataset

This dataset contains condition-checking conversations for robotics applications, with base64-encoded images embedded from multiple camera viewpoints.

## Dataset Description

- **Task**: Visual condition checking (True/False questions about robot states)
- **Modality**: Multi-modal (text + images)
- **Domain**: Robotics manipulation tasks
- **Format**: Conversational, suitable for VLM training

## Dataset Structure

### Data Fields

- `id`: Unique identifier for each sample
- `images`: Dictionary containing base64-encoded images from multiple camera viewpoints
- `conversations`: List of conversation turns (human question + assistant answer)

### Camera Viewpoints

The dataset includes images from 5 camera viewpoints:
- `observation_images_chest`
- `observation_images_left_eye`
- `observation_images_left_wrist`
- `observation_images_right_eye`
- `observation_images_right_wrist`

Each sample contains approximately 30 images total (6 per camera).
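Because each image is stored as a base64-encoded PNG string (see the statistics below), it has to be decoded before it can be viewed or fed to a model. A minimal sketch using Pillow; the helper name `decode_camera_images` is illustrative and not part of the dataset:

```python
import base64
import io

from PIL import Image

def decode_camera_images(sample, camera_key="observation_images_chest"):
    """Decode one viewpoint's base64 PNG strings into PIL images."""
    return [
        Image.open(io.BytesIO(base64.b64decode(encoded)))
        for encoded in sample["images"][camera_key]
    ]
```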

### Sample Structure

```json
{
  "id": "frame_index_position_part",
  "images": {
    "camera_key": ["base64_image_1", "base64_image_2", ...],
    ...
  },
  "conversations": [
    {
      "from": "human",
      "value": "Here are the observations... condition: (object is grasped) ..."
    },
    {
      "from": "gpt", 
      "value": "True"
    }
  ]
}
```
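Since the assistant turn is always the string `"True"` or `"False"`, a sample maps naturally to a (prompt, binary label) pair for classification-style training. A minimal sketch; `to_classification_pair` is an illustrative helper, not a dataset API:

```python
def to_classification_pair(sample):
    """Split a sample into the human prompt and a boolean label."""
    prompt = sample["conversations"][0]["value"]
    answer = sample["conversations"][1]["value"]
    return prompt, answer == "True"
```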

## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("jeffshen4011/condition-checking-dataset")

# Access a sample
sample = dataset["train"][0]
print(f"Question: {sample['conversations'][0]['value'][:100]}...")
print(f"Answer: {sample['conversations'][1]['value']}")
print(f"Number of camera views: {len(sample['images'])}")
```
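Building on the `sample` loaded above, the per-camera image counts can be tallied to sanity-check the ~30-images-per-sample figure reported below:

```python
# Count images per camera viewpoint for one sample
for camera_key, encoded_images in sample["images"].items():
    print(f"{camera_key}: {len(encoded_images)} images")

total_images = sum(len(v) for v in sample["images"].values())
print(f"Total images in sample: {total_images}")
```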

## Dataset Statistics

- **Training samples**: 9
- **Camera viewpoints**: 5
- **Images per sample**: ~30
- **Image format**: Base64-encoded PNG
- **Task type**: Binary classification (True/False)

## Applications

This dataset is designed for:
- Training vision-language models for robotics condition checking
- Multi-modal reasoning tasks
- Robot state verification
- Visual question answering in manipulation contexts

## Citation

If you use this dataset, please cite:

```bibtex
@dataset{condition_checking_dataset,
  title={Condition Checking Dataset for Robotics},
  author={Research Team},
  year={2025},
  url={https://huggingface.co/datasets/jeffshen4011/condition-checking-dataset}
}
```