# WAVE BENDER IDE - Training Dataset

## Dataset Overview

Generated by WAVE BENDER IDE v5.0 - Web-based Drone Telemetry & SLAM Training Dataset Generator

**📊 Dataset Statistics:**

- **Total Telemetry Points**: 12,544
- **Training Duration**: 125.4 seconds
- **Sample Rate**: 100 Hz
- **Training Epochs**: 0
- **Obstacles Detected**: 2
- **Avoidance Maneuvers**: 0
- **Training Progress**: 8%
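The point count and duration are mutually consistent with the sample rate (12,544 samples ÷ 100 Hz ≈ 125.4 s). A quick sanity check after download (a sketch; the path assumes the archive has been extracted into the working directory):

```python
from datasets import Dataset

# Load the telemetry split and confirm it matches the stats above
telemetry = Dataset.from_json("telemetry/telemetry.jsonl")
sample_rate_hz = 100

print(telemetry.num_rows)                   # expect 12544
print(telemetry.num_rows / sample_rate_hz)  # expect ~125.4 seconds
```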
## 🚀 Loading the Dataset in Hugging Face

### Correct approach: load each dataset separately
```python
from datasets import Dataset, DatasetDict

# Telemetry data (JSON Lines, one record per line)
telemetry_dataset = Dataset.from_json("telemetry/telemetry.jsonl")

# SLAM data (one file per record type)
obstacles_dataset = Dataset.from_json("slam/obstacles.json")
detections_dataset = Dataset.from_json("slam/detections.json")
avoidances_dataset = Dataset.from_json("slam/avoidances.json")

# Training data (epoch progression and summary statistics)
epochs_dataset = Dataset.from_json("statistics/epochs.json")
stats_dataset = Dataset.from_json("statistics/summary.json")

# A DatasetDict of SEPARATE datasets - each split keeps its own
# schema, so there are no schema conflicts
dataset_dict = DatasetDict({
    "telemetry": telemetry_dataset,
    "slam_obstacles": obstacles_dataset,
    "slam_detections": detections_dataset,
    "slam_avoidances": avoidances_dataset,
    "training_epochs": epochs_dataset,
    "statistics": stats_dataset,
})
```
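Once loaded, each split behaves like a regular `Dataset`. For example (the repository id below is illustrative, not an assertion about where this dataset is hosted):

```python
# Inspect the first telemetry record
print(dataset_dict["telemetry"][0])

# Optionally publish all splits to the Hugging Face Hub
# (illustrative repo id; requires prior `huggingface-cli login`)
dataset_dict.push_to_hub("your-username/wave_bender_dataset")
```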
## ✅ No ArrowInvalid Errors: Why This Works

1. **Separate files**: each data type lives in its own file with a single, consistent schema
2. **Separate datasets**: each file is loaded as its own `Dataset`
3. **No schema mixing**: different schemas never share a file, so they cannot conflict
4. **Always valid**: even empty splits carry a consistent schema
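For context, `ArrowInvalid` is what PyArrow raises when one column mixes incompatible types, which is exactly what happens when heterogeneous records end up in a single file. A minimal sketch of the failure mode and its fix (the field names are illustrative):

```python
import pyarrow as pa

# Mixing types in one column is what triggers ArrowInvalid
try:
    pa.array([1, "x"])  # int and str forced into the same column
except pa.ArrowInvalid as e:
    print(f"ArrowInvalid: {e}")

# One consistent schema per file/table avoids the problem entirely
telemetry_rows = [{"t": 0.00, "x": 1.0}, {"t": 0.01, "x": 1.1}]
obstacle_rows = [{"id": "obs_1", "radius": 0.5}]
print(pa.Table.from_pylist(telemetry_rows).schema)  # t: double, x: double
print(pa.Table.from_pylist(obstacle_rows).schema)   # id: string, radius: double
```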
## 📁 Directory Structure

```
wave_bender_dataset.zip/
├── telemetry/
│   ├── telemetry.jsonl       # Telemetry data (JSON Lines)
│   ├── telemetry.csv         # Telemetry data (CSV)
│   └── telemetry_schema.json
├── slam/
│   ├── obstacles.json        # Obstacle definitions
│   ├── detections.json       # Detection events
│   ├── avoidances.json       # Avoidance maneuvers
│   └── training_params.json
├── statistics/
│   ├── epochs.json           # Epoch progression
│   └── summary.json          # Statistics summary
├── graphs/
│   └── graph_data.json
├── metadata/
│   ├── dataset_card.json
│   └── README.md
└── huggingface_loader.py
```
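A quick way to extract the archive and verify this layout before loading (a sketch; the archive path assumes a local download):

```python
import zipfile
from pathlib import Path

# Extract the downloaded archive (local path is an assumption)
archive = Path("wave_bender_dataset.zip")
extract_dir = Path("wave_bender_dataset")
with zipfile.ZipFile(archive) as zf:
    zf.extractall(extract_dir)

# Spot-check the files the loader expects
expected = [
    "telemetry/telemetry.jsonl",
    "slam/obstacles.json",
    "slam/detections.json",
    "slam/avoidances.json",
    "statistics/epochs.json",
    "statistics/summary.json",
]
missing = [rel for rel in expected if not (extract_dir / rel).exists()]
print("missing:", missing or "none")
```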
## 🎯 Training Configuration

- **Complexity**: 7/10
- **Noise Level**: 2.5/5
- **Frequency**: 1.8 Hz
- **Center Region Training**: Yes
- **Dynamic Obstacles**: Yes
- **Avoidance Training**: Yes
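These settings are exported in `slam/training_params.json`. A sketch of reading them back (the key names are assumptions; inspect the file for the actual field names):

```python
import json

# Key names below are hypothetical; check slam/training_params.json
# to confirm the exact fields this export uses
with open("slam/training_params.json") as f:
    params = json.load(f)

print(params.get("complexity"))   # e.g., 7
print(params.get("noise_level"))  # e.g., 2.5
print(params.get("frequency"))    # e.g., 1.8 (Hz)
```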
## 🔧 Fixed ArrowInvalid Errors

This export structure eliminates ArrowInvalid errors by:

1. **Separating different schema types** into different directories
2. **Never mixing schemas** in the same file
3. **Always providing consistent schemas**, even for empty data (see the sketch below)
4. **Using unique `dataset_id` fields** to prevent confusion between record types
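Point 3 matters here because this run recorded zero avoidance maneuvers. A minimal sketch of how an empty split can still carry a well-defined schema via explicit features (the field names are illustrative, not the export's actual schema):

```python
from datasets import Dataset, Features, Value

# Illustrative field names; the real avoidance schema may differ
avoidance_features = Features({
    "timestamp": Value("float64"),
    "obstacle_id": Value("string"),
    "maneuver": Value("string"),
})

# Zero rows, but the schema is fully defined, so Arrow never has to guess
empty_avoidances = Dataset.from_dict(
    {name: [] for name in avoidance_features}, features=avoidance_features
)
print(empty_avoidances.features)
print(empty_avoidances.num_rows)  # 0
```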
## 📝 Citation

```bibtex
@dataset{wave_bender_dataset_2024,
  title  = {WAVE BENDER IDE v5.0 - Drone Telemetry & SLAM Training Dataset},
  author = {webXOS},
  year   = {2024},
  url    = {https://huggingface.co/datasets/webxos/wave_bender_dataset},
  note   = {Synthetic dataset for drone autonomy training}
}
```
---

**Generated**: 2026-01-08T01:43:26.624Z

**WAVE BENDER IDE v5.0** | **Hugging Face Compatible** | **ArrowInvalid Fixed**