# WAVE BENDER IDE - Training Dataset

## Dataset Overview
Generated by WAVE BENDER IDE v5.0, a web-based drone telemetry & SLAM training dataset generator.

**📊 Dataset Statistics:**
- **Total Telemetry Points**: 12,544
- **Training Duration**: 125.4 seconds
- **Sample Rate**: 100 Hz
- **Training Epochs**: 0
- **Obstacles Detected**: 2
- **Avoidance Maneuvers**: 0
- **Training Progress**: 8%

## 🚀 Loading the Dataset in Hugging Face

### CORRECT WAY - Load each dataset separately:
```python
from datasets import Dataset, DatasetDict

# Load telemetry data (paths are relative to the extracted archive root)
telemetry_dataset = Dataset.from_json("telemetry/telemetry.jsonl")

# Load SLAM data (separate datasets)
obstacles_dataset = Dataset.from_json("slam/obstacles.json")
detections_dataset = Dataset.from_json("slam/detections.json")
avoidances_dataset = Dataset.from_json("slam/avoidances.json")

# Load training data (separate datasets)
epochs_dataset = Dataset.from_json("statistics/epochs.json")
stats_dataset = Dataset.from_json("statistics/summary.json")

# Create DatasetDict with SEPARATE datasets (no schema conflicts!)
dataset_dict = DatasetDict({
    'telemetry': telemetry_dataset,
    'slam_obstacles': obstacles_dataset,
    'slam_detections': detections_dataset,
    'slam_avoidances': avoidances_dataset,
    'training_epochs': epochs_dataset,
    'statistics': stats_dataset
})
```
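For reference, `telemetry.jsonl` is expected to be JSON Lines: one JSON object per line with a uniform key set, which is exactly what `Dataset.from_json` reads. A minimal stdlib sketch of writing and reading that format (the field names `t`, `x`, `y`, `z` are illustrative assumptions; the actual schema lives in `telemetry/telemetry_schema.json`):

```python
import json
import os
import tempfile

# Hypothetical telemetry records -- placeholders for illustration only;
# the real field names are defined in telemetry/telemetry_schema.json.
records = [
    {"t": 0.00, "x": 0.0, "y": 0.0, "z": 1.5},
    {"t": 0.01, "x": 0.1, "y": 0.0, "z": 1.5},
]

# JSON Lines: one JSON object per line.
path = os.path.join(tempfile.mkdtemp(), "telemetry.jsonl")
with open(path, "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

# Reading line by line mirrors what the Arrow JSON reader does.
with open(path) as f:
    loaded = [json.loads(line) for line in f]
print(len(loaded))  # 2
```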

## ✅ No ArrowInvalid Errors - Why This Works:

1. **Separate Files**: Each data type is in its own file with consistent schema
2. **Separate Datasets**: Each file is loaded as a separate Dataset
3. **No Schema Mixing**: Different schemas don't conflict because they're separate
4. **Always Valid**: Empty arrays still have consistent schemas
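Point 4 can be sanity-checked before loading with a small stdlib helper that confirms every record in a file exposes the same top-level keys (a sketch only; real records may nest deeper):

```python
import json  # for loading a file of records in practice

def consistent_schema(records):
    """Return True if every record exposes the same top-level keys."""
    if not records:
        return True  # an empty list trivially has a consistent schema
    keys = set(records[0])
    return all(set(r) == keys for r in records)

# Consistent: both records share exactly {"t", "alt"}.
ok = consistent_schema([{"t": 0, "alt": 1.5}, {"t": 1, "alt": 1.6}])
# Inconsistent: the second record adds an extra field.
bad = consistent_schema([{"t": 0}, {"t": 1, "alt": 1.6}])
print(ok, bad)  # True False
```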

## 📁 Directory Structure
```
wave_bender_dataset.zip/
├── telemetry/
│   ├── telemetry.jsonl     # Telemetry data (JSON Lines)
│   ├── telemetry.csv       # Telemetry data (CSV)
│   └── telemetry_schema.json
├── slam/
│   ├── obstacles.json      # Obstacle definitions
│   ├── detections.json     # Detection events
│   ├── avoidances.json     # Avoidance maneuvers
│   └── training_params.json
├── statistics/
│   ├── epochs.json         # Epoch progression
│   └── summary.json        # Statistics summary
├── graphs/
│   └── graph_data.json
├── metadata/
│   ├── dataset_card.json
│   └── README.md
└── huggingface_loader.py
```
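After downloading, the layout can be verified before loading anything. The sketch below builds a minimal in-memory zip mirroring part of the tree above purely for illustration; in practice you would open the downloaded `wave_bender_dataset.zip` instead:

```python
import io
import zipfile

# A subset of the expected paths from the directory tree above.
expected = [
    "telemetry/telemetry.jsonl",
    "slam/obstacles.json",
    "statistics/epochs.json",
]

# Build a stand-in archive (contents are placeholders, not real data).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    for name in expected:
        zf.writestr(name, "{}")

# Check that every expected path is present before loading.
with zipfile.ZipFile(buf) as zf:
    names = set(zf.namelist())
missing = [p for p in expected if p not in names]
print(missing)  # []
```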

## 🎯 Training Configuration
- **Complexity**: 7/10
- **Noise Level**: 2.5/5
- **Frequency**: 1.8 Hz
- **Center Region Training**: Yes
- **Dynamic Obstacles**: Yes
- **Avoidance Training**: Yes

## 🔧 Fixed ArrowInvalid Errors
This export structure avoids ArrowInvalid errors by:

1. **Separating different schema types** into different directories
2. **Never mixing schemas** in the same file
3. **Always providing consistent schemas**, even for empty data
4. **Using unique dataset_id fields** to prevent confusion
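To illustrate the failure mode that points 1-2 avoid: concatenating records with different schemas into one table leaves per-row gaps that a strict columnar reader (such as Arrow's) may refuse to fill, raising `ArrowInvalid`. A stdlib sketch with hypothetical records:

```python
# Two record types that the export deliberately keeps in separate files
# (field names are illustrative, not the actual export schema).
obstacles = [{"obstacle_id": 1, "radius": 0.5}]
epochs = [{"epoch": 1, "loss": 0.42}]

# Mixed into one table, neither key set covers every row, so each row
# has "missing" columns that a strict columnar reader must reject or null-fill.
mixed = obstacles + epochs
all_keys = set().union(*(r.keys() for r in mixed))
gaps = [sorted(all_keys - set(r)) for r in mixed]
print(gaps)  # [['epoch', 'loss'], ['obstacle_id', 'radius']]
```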

## 📝 Citation
```
@dataset{wave_bender_dataset_2024,
  title = {WAVE BENDER IDE v5.0 - Drone Telemetry & SLAM Training Dataset},
  author = {webXOS},
  year = {2024},
  url = {https://huggingface.co/datasets/webxos/wave_bender_dataset},
  note = {Synthetic dataset for drone autonomy training}
}
```

---
**Generated**: 2026-01-08T01:43:26.624Z
**WAVE BENDER IDE v5.0** | **Hugging Face Compatible** | **ArrowInvalid Fixed**