---
license: apache-2.0
task_categories:
- tabular-classification
- tabular-regression
language:
- en
size_categories:
- 100K<n<1M
---

## Processing Pipeline

### Stage 1 — Filtering

Basic validity checks before any transformation:

- Row count must be >= column count
- Tables exceeding 1 GB estimated memory are skipped
- Duplicate column names are deduplicated

### Stage 2 — Heuristic Classification

Rule-based column type classification:

| Rule | Classification |
|------|----------------|
| Uniqueness < 5% | Categorical |
| Uniqueness > 30% and numeric | Continuous |
| Uniqueness > 95% and non-numeric | Dropped (likely ID/text) |
| Everything else | Ambiguous (sent to Stage 3) |

String columns with numeric-like values (commas, K/M/B suffixes, scientific notation) are converted to numeric before classification.

### Stage 3 — ML Classifier

A Random Forest classifier resolves ambiguous columns from Stage 2. It extracts 33 distribution-agnostic features (string length patterns, character distributions, entropy, numeric convertibility) and predicts categorical vs. continuous.

Confidence levels:

- **High** (>0.75): accepted automatically
- **Medium** (0.65-0.75): accepted with lower confidence
- **Low** (<0.65): marked for review, dropped in Stage 4

### Stage 4 — Normalization & Encoding

Final transformations to make the data ML-ready:

- **Continuous columns**: z-score normalization (`(x - mean) / std`)
- **Categorical columns**: integer encoding (1, 2, 3, ...)
- **Low-confidence columns**: dropped

All transformation parameters are stored in `metadata` for reversibility.
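The Stage 2 rules above can be sketched as a small function. This is an illustrative approximation, not the dataset's actual implementation; the function name and the numeric-detection shortcut are assumptions, and the real pipeline's numeric-like string conversion (commas, K/M/B suffixes) is more involved than the plain `pd.to_numeric` used here:

```python
import pandas as pd

def classify_column(series: pd.Series) -> str:
    """Hypothetical sketch of the Stage 2 uniqueness heuristic."""
    uniqueness = series.nunique() / len(series)
    # Simplified numeric check: native numeric dtype, or every value coercible
    numeric = (pd.api.types.is_numeric_dtype(series)
               or pd.to_numeric(series, errors='coerce').notna().all())
    if uniqueness < 0.05:
        return 'categorical'
    if uniqueness > 0.30 and numeric:
        return 'continuous'
    if uniqueness > 0.95 and not numeric:
        return 'dropped'  # likely an ID or free-text column
    return 'ambiguous'    # forwarded to the Stage 3 classifier
```

Columns landing in the `ambiguous` bucket are exactly those the Random Forest in Stage 3 is asked to resolve.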
## Reversing Transformations

Stage 4 metadata contains the parameters needed to undo normalization and encoding:

```python
import pyarrow as pa

# Load a stage4 example
metadata = example['metadata']
reader = pa.RecordBatchStreamReader(example['arrow_bytes'])
table = reader.read_all()
df = table.to_pandas()

# Reverse z-score normalization for a continuous column
norm_params = metadata['stage4_normalized_columns']['my_column']
df['my_column'] = df['my_column'] * norm_params['std'] + norm_params['mean']

# Reverse integer encoding for a categorical column
enc_params = metadata['stage4_encoded_columns']['my_category']
inverse_map = {v: k for k, v in enc_params['mapping'].items()}
df['my_category'] = df['my_category'].map(inverse_map)
```

## Schema

Each row in the parquet files represents one table:

| Column | Type | Description |
|--------|------|-------------|
| `table_id` | string | Unique identifier for the source table |
| `arrow_bytes` | binary | Serialized PyArrow table in IPC streaming format |
| `metadata` | struct | Processing metadata from all pipeline stages |

## Source

Built from [approximatelabs/tablib-v1-full](https://huggingface.co/datasets/approximatelabs/tablib-v1-full). If you use this dataset, please cite the original TabLib work.
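To make the round trip concrete, the sketch below builds a toy table, applies the Stage 4 transformations described above (z-score normalization and integer encoding), and then reverses them from a metadata dict shaped like the one in the reversal snippet. The column names, values, and mapping are invented for illustration; only the metadata key structure (`stage4_normalized_columns`, `stage4_encoded_columns`, `mapping`) mirrors the document:

```python
import pandas as pd

# Toy table standing in for a decoded arrow_bytes payload
df = pd.DataFrame({'my_column': [10.0, 20.0, 30.0],
                   'my_category': ['red', 'green', 'red']})

# Forward pass (what Stage 4 would do): normalize and encode,
# recording the parameters in a metadata dict for reversibility
mean, std = df['my_column'].mean(), df['my_column'].std()
mapping = {'red': 1, 'green': 2}
metadata = {
    'stage4_normalized_columns': {'my_column': {'mean': mean, 'std': std}},
    'stage4_encoded_columns': {'my_category': {'mapping': mapping}},
}
norm = (df['my_column'] - mean) / std
enc = df['my_category'].map(mapping)

# Reverse pass, using only the stored metadata
params = metadata['stage4_normalized_columns']['my_column']
restored = norm * params['std'] + params['mean']
inverse = {v: k for k, v in
           metadata['stage4_encoded_columns']['my_category']['mapping'].items()}
restored_cat = enc.map(inverse)
```

The restored columns match the originals up to floating-point rounding, which is why storing `mean`/`std` and the encoding `mapping` is sufficient for reversibility.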