---
language:
- en
- th
- vi
- zh
pretty_name: HSClassify Benchmark
size_categories:
- n<1K
task_categories:
- text-classification
tags:
- hs-codes
- trade
- customs
- harmonized-system
- benchmark
dataset_info:
  features:
  - name: text
    dtype: string
  - name: expected_hs_code
    dtype: string
  - name: category
    dtype: string
  - name: language
    dtype: string
  - name: notes
    dtype: string
  config_name: default
  splits:
  - name: test
    num_examples: 78
configs:
- config_name: default
  data_files:
  - split: test
    path: benchmark_cases.csv
---
# HSClassify Benchmark

Benchmark suite for evaluating the HSClassify HS code classifier.
## Results (latest)
| Metric | All Cases | In-Label-Space |
|---|---|---|
| Top-1 Accuracy | 79.5% | 88.6% |
| Top-3 Accuracy | 82.0% | 91.4% |
| Top-5 Accuracy | 83.3% | 92.9% |
| Chapter Accuracy | 89.7% | 95.7% |
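Top-k accuracy counts a case as correct when the expected HS code appears anywhere among the model's k highest-ranked predictions; Chapter accuracy compares only the first two digits of the code (the HS chapter). A minimal sketch of these metrics, where the `predictions` mapping is illustrative and the field names follow the dataset schema:

```python
# Illustrative metric sketch. `predictions` maps each query text to a
# ranked list of candidate HS codes; field names (text, expected_hs_code)
# match the dataset schema.

def top_k_accuracy(cases, predictions, k):
    """Fraction of cases whose expected code is among the top-k predictions."""
    hits = sum(
        1 for case in cases
        if case["expected_hs_code"] in predictions[case["text"]][:k]
    )
    return hits / len(cases)

def chapter_accuracy(cases, predictions):
    """The HS chapter is the first two digits; only those must match."""
    hits = sum(
        1 for case in cases
        if predictions[case["text"]][0][:2] == case["expected_hs_code"][:2]
    )
    return hits / len(cases)
```

Chapter accuracy exceeding Top-1 accuracy (89.7% vs 79.5%) means many misses still land in the right chapter, which is why the two are reported separately.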
### By Category
| Category | N | Top-1 | Top-3 | Top-5 |
|---|---|---|---|---|
| easy | 27 | 96.3% | 100% | 100% |
| edge_case | 21 | 71.4% | 76.2% | 81.0% |
| multilingual | 20 | 100% | 100% | 100% |
| known_failure | 10 | 10.0% | 10.0% | 10.0% |
## Test Cases

78 hand-crafted cases in `benchmark_cases.csv` across four categories:
- `easy` (27): Common goods the model should classify correctly
- `edge_case` (21): Ambiguous queries, short text, brand names
- `multilingual` (20): Thai, Vietnamese, and Chinese queries
- `known_failure` (10): Documents current blind spots and label-space gaps
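The cases can be loaded and grouped by category with the standard library alone; a minimal sketch assuming the column names from the schema above:

```python
import csv
from collections import defaultdict

def load_benchmark(path="benchmark_cases.csv"):
    """Group benchmark rows by their `category` column."""
    by_category = defaultdict(list)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            by_category[row["category"]].append(row)
    return dict(by_category)
```

This makes it easy to score categories separately, e.g. `load_benchmark()["multilingual"]` for the Thai/Vietnamese/Chinese subset.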
## Usage

Requires a trained HSClassify_micro model directory as a sibling folder (or pass `--model-dir`).

```shell
# Basic benchmark (~10s)
python benchmark.py

# Custom output path
python benchmark.py --output results/out.json

# With per-class split analysis
python benchmark.py --split-analysis

# Point to model directory explicitly
python benchmark.py --model-dir /path/to/HSClassify_micro
```
## Split Analysis (training data)

Replicates the 80/20 stratified split used in model training to report:
- Worst 15 HS codes by F1 score
- Top 20 cross-chapter confusions
- Overall accuracy: 77.2% (matches training baseline)
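The worst-by-F1 report can be reproduced from parallel lists of true and predicted labels without extra dependencies; a sketch under that assumption (function names are illustrative, not the actual benchmark.py internals):

```python
from collections import defaultdict

def per_class_f1(y_true, y_pred):
    """Per-class F1 from parallel lists of true and predicted labels."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for true, pred in zip(y_true, y_pred):
        if true == pred:
            tp[true] += 1
        else:
            fp[pred] += 1
            fn[true] += 1
    scores = {}
    for label in set(y_true) | set(y_pred):
        if tp[label] == 0:
            scores[label] = 0.0  # no correct hits for this label -> F1 is zero
        else:
            precision = tp[label] / (tp[label] + fp[label])
            recall = tp[label] / (tp[label] + fn[label])
            scores[label] = 2 * precision * recall / (precision + recall)
    return scores

def worst_codes(y_true, y_pred, n=15):
    """The n labels with the lowest F1 score, lowest first."""
    return sorted(per_class_f1(y_true, y_pred).items(), key=lambda kv: kv[1])[:n]
```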