---
dataset_info:
  features:
    - name: Event_ID
      dtype: string
    - name: Timestamp
      dtype: timestamp[s]
    - name: Vehicle_Type
      dtype: string
    - name: Speed_kmh
      dtype: string
    - name: Latitude
      dtype: string
    - name: Longitude
      dtype: string
    - name: Event_Type
      dtype: string
    - name: Severity
      dtype: string
  splits:
    - name: default
      num_bytes: 3031KB
      num_examples: 30000
license: cc-by-4.0
---

# Flowmatic Cleaned Dataset

## Overview

This dataset was cleaned and exported by Flowmatic, an intelligent data preparation platform.

**Pipeline Run ID:** cmnr643le000ml1do503vfdbn
**Generated:** 2026-04-09T08:14:00.156Z

## Dataset Statistics

- **Total Records:** 30,000
- **Total Columns:** 8
- **File:** `cleaned_data.csv`

## Column Information

| Column | Type | Non-Null | Null | Sample Values |
|--------|------|----------|------|---------------|
| Event_ID | text | 30000 | 0 | `" 1"`, `" 2"`, `" 3"` |
| Timestamp | timestamp | 30000 | 0 | `" 2024-02-01 08:00:00"`, `" 2024-02-01 08:00:00"`, `" 2024-02-01 08:00:00"` |
| Vehicle_Type | text | 30000 | 0 | `" Bus "`, `" Truck "`, `" Truck "` |
| Speed_kmh | text | 30000 | 0 | `" 61"`, `" 74"`, `" 84"` |
| Latitude | text | 30000 | 0 | `" 51.130119"`, `" 51.130046"`, `" 51.129601"` |
| Longitude | text | 30000 | 0 | `" 71.43348 "`, `" 71.434301"`, `" 71.432699"` |
| Event_Type | text | 30000 | 0 | `" Normal "`, `" Normal "`, `" Normal "` |
| Severity | text | 30000 | 0 | `" Low"`, `" Low"`, `" Low"` |

## Data Quality

This dataset has been processed through Flowmatic's cleaning pipeline:

- ✅ Duplicates removed
- ✅ Missing values handled (interpolation/forward-fill)
- ✅ Outliers processed (winsorization)
- ✅ Type consistency validated
- ✅ Records exported
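The checklist above can be sketched in pandas. This is an illustrative approximation, not Flowmatic's actual pipeline; the demo column name `Speed_kmh` is taken from the table above, and the percentile cutoffs for winsorization are an assumption.

```python
import numpy as np
import pandas as pd


def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative cleaning pass mirroring the checklist above."""
    # 1. Remove exact duplicate rows
    df = df.drop_duplicates()

    # 2. Handle missing values: interpolate numeric gaps, forward-fill the rest
    numeric_cols = df.select_dtypes(include="number").columns
    df[numeric_cols] = df[numeric_cols].interpolate()
    df = df.ffill()

    # 3. Winsorize numeric outliers by clipping to the 1st/99th percentiles
    #    (assumed cutoffs; the real pipeline's thresholds are not documented)
    for col in numeric_cols:
        lo, hi = df[col].quantile([0.01, 0.99])
        df[col] = df[col].clip(lo, hi)

    return df


# Tiny demo frame with a duplicate row, a gap, and an extreme outlier
demo = pd.DataFrame({"Speed_kmh": [61.0, 74.0, np.nan, 84.0, 9999.0, 61.0]})
cleaned = clean(demo)
```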

## Usage

Load the dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset('pushthetempo/test')
df = dataset['train'].to_pandas()
```

Or load directly as CSV:

```python
import pandas as pd

df = pd.read_csv('https://huggingface.co/datasets/pushthetempo/test/raw/main/cleaned_data.csv')
```
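Note that every column ships as whitespace-padded text (see Column Information), so numeric fields need casting and text fields need stripping before analysis. A minimal sketch, using a tiny in-memory frame that mimics the sample values above:

```python
import pandas as pd

# Demo frame mimicking the padded-string samples from Column Information
df = pd.DataFrame({
    "Speed_kmh": [" 61", " 74"],
    "Latitude": [" 51.130119", " 51.130046"],
    "Vehicle_Type": [" Bus ", " Truck "],
    "Timestamp": [" 2024-02-01 08:00:00", " 2024-02-01 08:00:00"],
})

# Strip padding from categorical text columns
df["Vehicle_Type"] = df["Vehicle_Type"].str.strip()

# Cast numeric columns (pd.to_numeric tolerates surrounding whitespace)
for col in ["Speed_kmh", "Latitude"]:
    df[col] = pd.to_numeric(df[col])

# Parse timestamps into proper datetimes
df["Timestamp"] = pd.to_datetime(df["Timestamp"].str.strip())
```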

## License

This dataset is released under the CC BY 4.0 license.


*Processed with Flowmatic*