---
dataset_info:
  features:
  - name: Event_ID
    dtype: int64
  - name: Timestamp
    dtype: timestamp[s]
  - name: Vehicle_Type
    dtype: string
  - name: Speed_kmh
    dtype: int64
  - name: Latitude
    dtype: float64
  - name: Longitude
    dtype: float64
  - name: Event_Type
    dtype: string
  - name: Severity
    dtype: string
  - name: Traffic_Density
    dtype: float64
  splits:
  - name: default
    num_bytes: 0
    num_examples: 5
---

# Flowmatic Cleaned Dataset

## Overview
This dataset was cleaned and exported by **Flowmatic**, an intelligent data preparation platform. 

- **Pipeline Run ID**: `cmm1r1vow0004bt40plobjuj2`
- **Generated**: 2026-02-25T08:07:49.270Z

## Dataset Statistics

- **Total Records**: 5
- **Total Columns**: 9
- **File**: `cleaned_data.csv`

## Column Information

| Column | Type | Non-Null | Null | Sample Values |
|--------|------|----------|------|---------------|
| Event_ID | int64 | 5 | 0 | "1", "2", "3" |
| Timestamp | timestamp | 5 | 0 | "2024-09-13 13:10:48", "2024-10-19 14:34:09", "2024-11-24 18:52:58" |
| Vehicle_Type | string | 5 | 0 | "Bus", "Car", "Bus" |
| Speed_kmh | int64 | 5 | 0 | "72", "4", "36" |
| Latitude | float64 | 5 | 0 | "51.101003", "51.116445", "51.15613" |
| Longitude | float64 | 5 | 0 | "71.417789", "71.396492", "71.361778" |
| Event_Type | string | 5 | 0 | "Accident", "Normal", "Sudden De-celeration" |
| Severity | string | 5 | 0 | "High", "Low", "Low" |
| Traffic_Density | float64 | 5 | 0 | "91.41", "45.52", "21.95" |
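With default settings, `pandas.read_csv` will infer its own dtypes for these columns (for example, `Timestamp` stays a plain string). A minimal sketch of casting them to the types above — the inline sample rows are taken from the table and are illustrative only:

```python
import io

import pandas as pd

# Two hypothetical rows built from the sample values shown above.
csv_text = """Event_ID,Timestamp,Vehicle_Type,Speed_kmh,Latitude,Longitude,Event_Type,Severity,Traffic_Density
1,2024-09-13 13:10:48,Bus,72,51.101003,71.417789,Accident,High,91.41
2,2024-10-19 14:34:09,Car,4,51.116445,71.396492,Normal,Low,45.52
"""

df = pd.read_csv(io.StringIO(csv_text))

# Parse the timestamp column, then cast the rest to the declared dtypes.
df["Timestamp"] = pd.to_datetime(df["Timestamp"])
df = df.astype({
    "Event_ID": "int64",
    "Speed_kmh": "int64",
    "Latitude": "float64",
    "Longitude": "float64",
    "Traffic_Density": "float64",
    "Vehicle_Type": "string",
    "Event_Type": "string",
    "Severity": "string",
})
```

The same `astype` mapping can be applied after downloading the real `cleaned_data.csv`.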

## Data Quality

This dataset has been processed through Flowmatic's cleaning pipeline:

- ✅ Duplicates removed
- ✅ Missing values handled (interpolation/forward-fill)
- ✅ Outliers processed (winsorization)
- ✅ Type consistency validated
- ✅ Records exported
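Flowmatic's internal implementation is not published; the following pandas sketch only illustrates what the steps above typically look like. The function name and the 5th/95th-percentile winsorization bounds are assumptions, not Flowmatic's actual parameters:

```python
import pandas as pd


def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative cleaning pass: dedup, fill gaps, clip outliers."""
    # 1. Duplicates removed.
    df = df.drop_duplicates()

    # 2. Missing values handled: interpolate numeric columns,
    #    forward-fill everything else.
    num_cols = df.select_dtypes("number").columns
    df[num_cols] = df[num_cols].interpolate()
    df = df.ffill()

    # 3. Outliers processed by winsorization: clip each numeric
    #    column to its 5th/95th percentile (bounds assumed).
    for col in num_cols:
        lo, hi = df[col].quantile([0.05, 0.95])
        df[col] = df[col].clip(lo, hi)

    return df
```

On a tiny five-row dataset like this one, winsorization barely moves the values; the same sequence matters more at scale.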

## Usage

Load the dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset('username/dataset_name')
df = dataset['train'].to_pandas()
```

Or load directly as CSV:

```python
import pandas as pd

df = pd.read_csv('https://huggingface.co/datasets/username/dataset_name/resolve/main/cleaned_data.csv')
```

## License

This dataset is released under the CC BY 4.0 license.

---

*Processed with [Flowmatic](https://github.com/flowmatic/flowmatic)*