Mead0w1ark committed on
Commit e5f0348 · verified · 1 Parent(s): e2fe2a5

Initial benchmark suite: 78 test cases, evaluation script, baseline results

Files changed (6):
  1. .gitignore +5 -0
  2. README.md +55 -0
  3. benchmark.py +528 -0
  4. benchmark_cases.csv +79 -0
  5. benchmark_results.json +638 -0
  6. requirements.txt +4 -0
.gitignore ADDED
@@ -0,0 +1,5 @@
+ __pycache__/
+ *.py[cod]
+ venv/
+ .venv/
+ .DS_Store
README.md ADDED
@@ -0,0 +1,55 @@
+ # HSClassify Benchmark
+
+ Benchmark suite for evaluating the [HSClassify](https://github.com/JamesEBall/HSClassify_micro) HS code classifier.
+
+ ## Results (latest)
+
+ | Metric | All Cases | In-Label-Space |
+ |--------|-----------|----------------|
+ | Top-1 Accuracy | 79.5% | 88.6% |
+ | Top-3 Accuracy | 82.0% | 91.4% |
+ | Top-5 Accuracy | 83.3% | 92.9% |
+ | Chapter Accuracy | 89.7% | 95.7% |
+
+ ### By Category
+
+ | Category | N | Top-1 | Top-3 | Top-5 |
+ |----------|---|-------|-------|-------|
+ | easy | 27 | 96.3% | 100% | 100% |
+ | edge_case | 21 | 71.4% | 76.2% | 81.0% |
+ | multilingual | 20 | 100% | 100% | 100% |
+ | known_failure | 10 | 10.0% | 10.0% | 10.0% |
+
+ ## Test Cases
+
+ 78 hand-crafted cases in `benchmark_cases.csv` across four categories:
+
+ - **easy** (27): Common goods the model should classify correctly
+ - **edge_case** (21): Ambiguous queries, short text, brand names
+ - **multilingual** (20): Thai, Vietnamese, and Chinese queries
+ - **known_failure** (10): Documents current blind spots and label-space gaps
+
+ ## Usage
+
+ Requires a trained [HSClassify_micro](https://github.com/JamesEBall/HSClassify_micro) model directory as a sibling folder (or pass `--model-dir`).
+
+ ```bash
+ # Basic benchmark (~10s)
+ python benchmark.py
+
+ # Custom output path
+ python benchmark.py --output results/out.json
+
+ # With per-class split analysis
+ python benchmark.py --split-analysis
+
+ # Point to model directory explicitly
+ python benchmark.py --model-dir /path/to/HSClassify_micro
+ ```
+
+ ## Split Analysis (training data)
+
+ Replicates the 80/20 stratified split from model training to report:
+ - Worst 15 HS codes by F1 score
+ - Top 20 cross-chapter confusions
+ - Overall accuracy: **77.2%** (matches training baseline)
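The stratified split described above keeps each HS code's share of rows the same in the train and test sets. As a toy pure-Python illustration of what stratification means here (`stratified_split` is a hypothetical helper written for exposition only; `benchmark.py` itself delegates to scikit-learn's `train_test_split(..., stratify=y, random_state=42)`):

```python
import random
from collections import defaultdict

def stratified_split(labels, test_size=0.2, seed=42):
    """Toy stratified split: sample ~test_size of EACH class for the test set.

    Simplified sketch of the guarantee that sklearn's
    train_test_split(..., stratify=y) provides; illustration only.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, label in enumerate(labels):
        by_class[label].append(i)

    train_idx, test_idx = [], []
    for label, idxs in by_class.items():
        rng.shuffle(idxs)
        n_test = max(1, round(len(idxs) * test_size))  # every class reaches the test set
        test_idx.extend(idxs[:n_test])
        train_idx.extend(idxs[n_test:])
    return sorted(train_idx), sorted(test_idx)

# 10 rows of one code, 5 of another: each contributes ~20% to the test split
labels = ["020130"] * 10 + ["100630"] * 5
train_idx, test_idx = stratified_split(labels)
print(len(train_idx), len(test_idx))  # → 12 3
```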
benchmark.py ADDED
@@ -0,0 +1,528 @@
+ """
+ Benchmark evaluation for the HSClassify HS code classifier.
+
+ Standalone script that runs hand-crafted test cases through the model and
+ reports accuracy metrics. Requires a trained HSClassify_micro model directory.
+
+ Usage:
+     python benchmark.py --model-dir ../HSClassify_micro   # basic benchmark
+     python benchmark.py --output results.json             # custom output path
+     python benchmark.py --split-analysis                  # + per-class analysis
+ """
+
+ import argparse
+ import json
+ import math
+ import os
+ import pickle
+ import sys
+ import time
+
+ import numpy as np
+ import pandas as pd
+ from pathlib import Path
+
+ SCRIPT_DIR = Path(__file__).resolve().parent
+
+
+ def resolve_paths(model_dir):
+     """Resolve data and model paths from the HSClassify project directory."""
+     model_dir = Path(model_dir).resolve()
+     data_dir = model_dir / "data"
+     models_dir = model_dir / "models"
+
+     required = [
+         models_dir / "knn_classifier.pkl",
+         models_dir / "label_encoder.pkl",
+         data_dir / "hs_codes_reference.json",
+     ]
+     for p in required:
+         if not p.exists():
+             print(f"ERROR: Required file not found: {p}")
+             sys.exit(1)
+
+     return data_dir, models_dir
+
+
+ # ---------------------------------------------------------------------------
+ # Model loading (mirrors app.py:77-97)
+ # ---------------------------------------------------------------------------
+
+ def load_model(data_dir, models_dir):
+     """Load sentence transformer, classifier, label encoder, and HS reference."""
+     from sentence_transformers import SentenceTransformer
+
+     # Prefer local bundled model, fall back to Hub
+     local_model_dir = models_dir / "sentence_model"
+     has_local_weights = (
+         (local_model_dir / "model.safetensors").exists()
+         or (local_model_dir / "pytorch_model.bin").exists()
+     )
+     has_local_tokenizer = (local_model_dir / "tokenizer.json").exists()
+
+     if local_model_dir.exists() and has_local_weights and has_local_tokenizer:
+         model = SentenceTransformer(str(local_model_dir))
+         print("Loaded local sentence model from models/sentence_model")
+     else:
+         fallback_model = os.getenv(
+             "SENTENCE_MODEL_NAME",
+             "intfloat/multilingual-e5-small",
+         )
+         model = SentenceTransformer(fallback_model)
+         print(f"Loaded sentence model from Hugging Face Hub: {fallback_model}")
+
+     with open(models_dir / "knn_classifier.pkl", "rb") as f:
+         classifier = pickle.load(f)
+
+     with open(models_dir / "label_encoder.pkl", "rb") as f:
+         label_encoder = pickle.load(f)
+
+     with open(data_dir / "hs_codes_reference.json") as f:
+         hs_reference = json.load(f)
+
+     return model, classifier, label_encoder, hs_reference
+
+
+ # ---------------------------------------------------------------------------
+ # Prediction (mirrors app.py:346-365)
+ # ---------------------------------------------------------------------------
+
+ def predict_top_k(model, classifier, label_encoder, text, k=5):
+     """Return top-k (hs_code, confidence) pairs for a query string."""
+     query_emb = model.encode(
+         [f"query: {text}"],
+         normalize_embeddings=True,
+         convert_to_numpy=True,
+     )
+     probs = classifier.predict_proba(query_emb)[0]
+     top_indices = np.argsort(probs)[-k:][::-1]
+
+     results = []
+     for idx in top_indices:
+         hs_code = str(label_encoder.classes_[idx]).zfill(6)
+         results.append((hs_code, float(probs[idx])))
+     return results
+
+
+ # ---------------------------------------------------------------------------
+ # Basic benchmark
+ # ---------------------------------------------------------------------------
+
+ def run_benchmark(model, classifier, label_encoder, hs_reference, bench_path):
+     """Run benchmark cases and return detailed results."""
+     if not bench_path.exists():
+         print(f"ERROR: {bench_path} not found")
+         sys.exit(1)
+
+     df = pd.read_csv(bench_path, dtype={"expected_hs_code": str})
+     df["expected_hs_code"] = df["expected_hs_code"].str.zfill(6)
+
+     curated_codes = set(hs_reference.keys())
+
+     results = []
+     for _, row in df.iterrows():
+         text = row["text"]
+         expected = row["expected_hs_code"]
+         category = row["category"]
+         language = row.get("language", "en")
+         notes = row.get("notes", "")
+
+         preds = predict_top_k(model, classifier, label_encoder, text, k=5)
+         pred_codes = [code for code, _ in preds]
+         top1_code = pred_codes[0]
+         top1_conf = preds[0][1]
+
+         hit_at_1 = top1_code == expected
+         hit_at_3 = expected in pred_codes[:3]
+         hit_at_5 = expected in pred_codes[:5]
+
+         # Chapter-level accuracy (first 2 digits)
+         chapter_hit = top1_code[:2] == expected[:2]
+
+         # Is the expected code in our label space?
+         in_label_space = expected in curated_codes
+
+         results.append({
+             "text": text,
+             "expected": expected,
+             "predicted": top1_code,
+             "confidence": top1_conf,
+             "hit_at_1": hit_at_1,
+             "hit_at_3": hit_at_3,
+             "hit_at_5": hit_at_5,
+             "chapter_hit": chapter_hit,
+             "in_label_space": in_label_space,
+             "category": category,
+             "language": language,
+             "notes": notes,
+             "top5": pred_codes,
+         })
+
+     return results
+
+
+ def compute_metrics(results):
+     """Compute aggregate metrics from benchmark results."""
+     total = len(results)
+     in_space = [r for r in results if r["in_label_space"]]
+     n_in_space = len(in_space)
+
+     # Overall (all cases)
+     top1 = sum(r["hit_at_1"] for r in results) / total if total else 0
+     top3 = sum(r["hit_at_3"] for r in results) / total if total else 0
+     top5 = sum(r["hit_at_5"] for r in results) / total if total else 0
+     chapter = sum(r["chapter_hit"] for r in results) / total if total else 0
+
+     # In-label-space only (excludes known_failure with out-of-space codes)
+     top1_ls = sum(r["hit_at_1"] for r in in_space) / n_in_space if n_in_space else 0
+     top3_ls = sum(r["hit_at_3"] for r in in_space) / n_in_space if n_in_space else 0
+     top5_ls = sum(r["hit_at_5"] for r in in_space) / n_in_space if n_in_space else 0
+     chapter_ls = sum(r["chapter_hit"] for r in in_space) / n_in_space if n_in_space else 0
+
+     # Per-category breakdown
+     categories = {}
+     for r in results:
+         cat = r["category"]
+         if cat not in categories:
+             categories[cat] = {"total": 0, "top1": 0, "top3": 0, "top5": 0, "chapter": 0}
+         categories[cat]["total"] += 1
+         categories[cat]["top1"] += r["hit_at_1"]
+         categories[cat]["top3"] += r["hit_at_3"]
+         categories[cat]["top5"] += r["hit_at_5"]
+         categories[cat]["chapter"] += r["chapter_hit"]
+
+     for cat in categories:
+         n = categories[cat]["total"]
+         categories[cat]["top1_acc"] = categories[cat]["top1"] / n
+         categories[cat]["top3_acc"] = categories[cat]["top3"] / n
+         categories[cat]["top5_acc"] = categories[cat]["top5"] / n
+         categories[cat]["chapter_acc"] = categories[cat]["chapter"] / n
+
+     # Per-language breakdown
+     languages = {}
+     for r in results:
+         lang = r["language"]
+         if lang not in languages:
+             languages[lang] = {"total": 0, "top1": 0, "top3": 0, "top5": 0}
+         languages[lang]["total"] += 1
+         languages[lang]["top1"] += r["hit_at_1"]
+         languages[lang]["top3"] += r["hit_at_3"]
+         languages[lang]["top5"] += r["hit_at_5"]
+
+     for lang in languages:
+         n = languages[lang]["total"]
+         languages[lang]["top1_acc"] = languages[lang]["top1"] / n
+         languages[lang]["top3_acc"] = languages[lang]["top3"] / n
+         languages[lang]["top5_acc"] = languages[lang]["top5"] / n
+
+     # Failures list
+     failures = [
+         {
+             "text": r["text"],
+             "expected": r["expected"],
+             "predicted": r["predicted"],
+             "confidence": round(r["confidence"], 4),
+             "category": r["category"],
+             "language": r["language"],
+             "top5": r["top5"],
+             "notes": r["notes"],
+         }
+         for r in results
+         if not r["hit_at_1"]
+     ]
+
+     return {
+         "total_cases": total,
+         "in_label_space_cases": n_in_space,
+         "overall": {
+             "top1_accuracy": round(top1, 4),
+             "top3_accuracy": round(top3, 4),
+             "top5_accuracy": round(top5, 4),
+             "chapter_accuracy": round(chapter, 4),
+         },
+         "in_label_space": {
+             "top1_accuracy": round(top1_ls, 4),
+             "top3_accuracy": round(top3_ls, 4),
+             "top5_accuracy": round(top5_ls, 4),
+             "chapter_accuracy": round(chapter_ls, 4),
+         },
+         "by_category": categories,
+         "by_language": languages,
+         "failures": failures,
+         "n_failures": len(failures),
+     }
+
+
+ # ---------------------------------------------------------------------------
+ # Split analysis (mirrors train_model.py:98-136)
+ # ---------------------------------------------------------------------------
+
+ def run_split_analysis(model, hs_reference, data_dir, models_dir):
+     """Replicate 80/20 stratified split and report per-class metrics."""
+     from sklearn.model_selection import train_test_split
+     from sklearn.preprocessing import LabelEncoder
+     from sklearn.neighbors import KNeighborsClassifier
+     from sklearn.metrics import classification_report, confusion_matrix
+
+     print("\n" + "=" * 60)
+     print("Split Analysis (replicating training 80/20 split)")
+     print("=" * 60)
+
+     # Load training data
+     training_csv = data_dir / "training_data.csv"
+     if not training_csv.exists():
+         print(f"ERROR: {training_csv} not found")
+         return None
+
+     df = pd.read_csv(training_csv, dtype={"hs_code": str})
+     df["hs_code"] = df["hs_code"].astype(str).str.zfill(6)
+
+     # Filter to curated codes
+     curated_codes = {str(c).zfill(6) for c in hs_reference.keys()}
+     df = df[df["hs_code"].isin(curated_codes)].copy()
+     print(f"Training data: {len(df)} rows, {df['hs_code'].nunique()} codes")
+
+     # Load pre-computed embeddings
+     embeddings_path = models_dir / "embeddings.npy"
+     if not embeddings_path.exists():
+         print(f"ERROR: {embeddings_path} not found. Run train_model.py first.")
+         return None
+
+     embeddings_full = np.load(embeddings_path)
+     embeddings = embeddings_full[df.index.to_numpy()]
+     labels = df["hs_code"].values
+
+     # Encode labels
+     le = LabelEncoder()
+     y = le.fit_transform(labels)
+     n_samples = len(y)
+     n_classes = len(le.classes_)
+     class_counts = np.bincount(y)
+     min_class_count = int(class_counts.min()) if len(class_counts) else 0
+
+     # Replicate exact split from train_model.py:98-109
+     test_size = 0.2
+     n_test = math.ceil(n_samples * test_size)
+     n_train = n_samples - n_test
+     can_stratify = (
+         min_class_count >= 2
+         and n_test >= n_classes
+         and n_train >= n_classes
+     )
+     if can_stratify:
+         X_train, X_test, y_train, y_test = train_test_split(
+             embeddings, y, test_size=test_size, random_state=42, stratify=y
+         )
+     else:
+         X_train, X_test, y_train, y_test = train_test_split(
+             embeddings, y, test_size=test_size, random_state=42, stratify=None
+         )
+
+     print(f"Train: {len(X_train)}, Test: {len(X_test)}")
+
+     # Train fresh KNN (same params as train_model.py:114)
+     clf = KNeighborsClassifier(n_neighbors=5, metric="cosine", weights="distance")
+     clf.fit(X_train, y_train)
+     y_pred = clf.predict(X_test)
+
+     # Classification report
+     report = classification_report(
+         y_test,
+         y_pred,
+         labels=np.arange(n_classes),
+         target_names=le.classes_,
+         output_dict=True,
+         zero_division=0,
+     )
+
+     overall_acc = float(np.mean(y_test == y_pred))
+     print(f"\nTest accuracy: {overall_acc:.4f} ({overall_acc * 100:.1f}%)")
+     print(f"Weighted F1: {report['weighted avg']['f1-score']:.4f}")
+     print(f"Macro F1: {report['macro avg']['f1-score']:.4f}")
+
+     # Worst 15 codes by F1
+     code_metrics = []
+     for code in le.classes_:
+         if code in report and isinstance(report[code], dict):
+             m = report[code]
+             code_metrics.append({
+                 "hs_code": code,
+                 "desc": hs_reference.get(code, {}).get("desc", ""),
+                 "precision": round(m["precision"], 4),
+                 "recall": round(m["recall"], 4),
+                 "f1": round(m["f1-score"], 4),
+                 "support": int(m["support"]),
+             })
+
+     code_metrics.sort(key=lambda x: x["f1"])
+     worst_15 = code_metrics[:15]
+
+     print("\nWorst 15 codes by F1:")
+     print(f"{'HS Code':<10} {'F1':>6} {'Prec':>6} {'Rec':>6} {'Sup':>5} Description")
+     print("-" * 75)
+     for m in worst_15:
+         print(f"{m['hs_code']:<10} {m['f1']:>6.3f} {m['precision']:>6.3f} {m['recall']:>6.3f} {m['support']:>5} {m['desc'][:40]}")
+
+     # Top 20 cross-chapter confusions
+     cm = confusion_matrix(y_test, y_pred, labels=np.arange(n_classes))
+     confusions = []
+     for true_idx in range(n_classes):
+         for pred_idx in range(n_classes):
+             if true_idx == pred_idx:
+                 continue
+             count = int(cm[true_idx, pred_idx])
+             if count == 0:
+                 continue
+             true_code = le.classes_[true_idx]
+             pred_code = le.classes_[pred_idx]
+             true_chapter = true_code[:2]
+             pred_chapter = pred_code[:2]
+             if true_chapter == pred_chapter:
+                 continue
+             confusions.append({
+                 "true_code": true_code,
+                 "pred_code": pred_code,
+                 "true_chapter": hs_reference.get(true_code, {}).get("chapter", true_chapter),
+                 "pred_chapter": hs_reference.get(pred_code, {}).get("chapter", pred_chapter),
+                 "count": count,
+             })
+
+     confusions.sort(key=lambda x: x["count"], reverse=True)
+     top_20_confusions = confusions[:20]
+
+     print("\nTop 20 cross-chapter confusions:")
+     print(f"{'True Code':<10} {'Pred Code':<10} {'Count':>5} True Chapter -> Pred Chapter")
+     print("-" * 70)
+     for c in top_20_confusions:
+         print(f"{c['true_code']:<10} {c['pred_code']:<10} {c['count']:>5} {c['true_chapter']} -> {c['pred_chapter']}")
+
+     return {
+         "test_accuracy": round(overall_acc, 4),
+         "weighted_f1": round(report["weighted avg"]["f1-score"], 4),
+         "macro_f1": round(report["macro avg"]["f1-score"], 4),
+         "n_train": len(X_train),
+         "n_test": len(X_test),
+         "worst_15_by_f1": worst_15,
+         "top_20_cross_chapter_confusions": top_20_confusions,
+     }
+
+
+ # ---------------------------------------------------------------------------
+ # Reporting
+ # ---------------------------------------------------------------------------
+
+ def print_report(metrics):
+     """Print a human-readable benchmark report."""
+     print("\n" + "=" * 60)
+     print("BENCHMARK REPORT")
+     print("=" * 60)
+
+     o = metrics["overall"]
+     print(f"\nTotal cases: {metrics['total_cases']} (in label space: {metrics['in_label_space_cases']})")
+     print(f"\n{'Metric':<25} {'All Cases':>10} {'In-Space':>10}")
+     print("-" * 47)
+     ls = metrics["in_label_space"]
+     print(f"{'Top-1 Accuracy':<25} {o['top1_accuracy']:>10.1%} {ls['top1_accuracy']:>10.1%}")
+     print(f"{'Top-3 Accuracy':<25} {o['top3_accuracy']:>10.1%} {ls['top3_accuracy']:>10.1%}")
+     print(f"{'Top-5 Accuracy':<25} {o['top5_accuracy']:>10.1%} {ls['top5_accuracy']:>10.1%}")
+     print(f"{'Chapter Accuracy':<25} {o['chapter_accuracy']:>10.1%} {ls['chapter_accuracy']:>10.1%}")
+
+     print("\nPer-Category Breakdown:")
+     print(f"{'Category':<15} {'N':>4} {'Top-1':>7} {'Top-3':>7} {'Top-5':>7} {'Chapter':>8}")
+     print("-" * 52)
+     for cat, m in sorted(metrics["by_category"].items()):
+         print(f"{cat:<15} {m['total']:>4} {m['top1_acc']:>7.1%} {m['top3_acc']:>7.1%} {m['top5_acc']:>7.1%} {m['chapter_acc']:>8.1%}")
+
+     print("\nPer-Language Breakdown:")
+     print(f"{'Language':<10} {'N':>4} {'Top-1':>7} {'Top-3':>7} {'Top-5':>7}")
+     print("-" * 40)
+     for lang, m in sorted(metrics["by_language"].items()):
+         print(f"{lang:<10} {m['total']:>4} {m['top1_acc']:>7.1%} {m['top3_acc']:>7.1%} {m['top5_acc']:>7.1%}")
+
+     n_fail = metrics["n_failures"]
+     print(f"\nFailures ({n_fail}):")
+     print("-" * 90)
+     for f in metrics["failures"]:
+         print(f"  {f['text'][:45]:<45} expected={f['expected']} got={f['predicted']} conf={f['confidence']:.2f} [{f['category']}]")
+
+
+ # ---------------------------------------------------------------------------
+ # Main
+ # ---------------------------------------------------------------------------
+
+ def main():
+     parser = argparse.ArgumentParser(description="Benchmark HSClassify HS code classifier")
+     parser.add_argument("--model-dir", default=None,
+                         help="Path to HSClassify_micro project root (default: auto-detect sibling dir)")
+     parser.add_argument("--output", "-o", default="benchmark_results.json",
+                         help="Path for JSON results (default: benchmark_results.json)")
+     parser.add_argument("--split-analysis", action="store_true",
+                         help="Run per-class analysis on 80/20 training split")
+     args = parser.parse_args()
+
+     # Auto-detect model dir: look for sibling HSClassify_micro
+     model_dir = args.model_dir
+     if model_dir is None:
+         sibling = SCRIPT_DIR.parent / "HSClassify_micro"
+         if sibling.exists():
+             model_dir = str(sibling)
+         else:
+             # Also check same-level with different casing
+             for candidate in SCRIPT_DIR.parent.iterdir():
+                 if candidate.is_dir() and "hsclassify" in candidate.name.lower() and "benchmark" not in candidate.name.lower():
+                     model_dir = str(candidate)
+                     break
+     if model_dir is None:
+         print("ERROR: Could not auto-detect HSClassify_micro directory.")
+         print("       Use --model-dir /path/to/HSClassify_micro")
+         sys.exit(1)
+
+     print(f"Using model dir: {model_dir}")
+     data_dir, models_dir = resolve_paths(model_dir)
+
+     start = time.time()
+
+     print("Loading models...")
+     model, classifier, label_encoder, hs_reference = load_model(data_dir, models_dir)
+     load_time = time.time() - start
+     print(f"Models loaded in {load_time:.1f}s")
+
+     # Basic benchmark — use local CSV from this repo
+     bench_path = SCRIPT_DIR / "benchmark_cases.csv"
+     print(f"\nRunning benchmark cases from {bench_path}...")
+     bench_start = time.time()
+     results = run_benchmark(model, classifier, label_encoder, hs_reference, bench_path)
+     metrics = compute_metrics(results)
+     bench_time = time.time() - bench_start
+     print(f"Benchmark completed in {bench_time:.1f}s")
+
+     print_report(metrics)
+
+     # Optional split analysis
+     split_metrics = None
+     if args.split_analysis:
+         split_metrics = run_split_analysis(model, hs_reference, data_dir, models_dir)
+
+     # Save JSON report
+     report = {
+         "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
+         "model_dir": str(model_dir),
+         "benchmark": metrics,
+         "timing": {
+             "model_load_s": round(load_time, 2),
+             "benchmark_s": round(bench_time, 2),
+             "total_s": round(time.time() - start, 2),
+         },
+     }
+     if split_metrics:
+         report["split_analysis"] = split_metrics
+
+     output_path = Path(args.output)
+     output_path.parent.mkdir(parents=True, exist_ok=True)
+     with open(output_path, "w") as f:
+         json.dump(report, f, indent=2)
+     print(f"\nResults saved to {output_path}")
+
+
+ if __name__ == "__main__":
+     main()
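One detail worth calling out in `predict_top_k` above is the top-k selection idiom: `np.argsort` sorts ascending, so the k highest-probability class indices are the last k entries of the sort order, reversed. A minimal standalone check with fake probabilities:

```python
import numpy as np

# Four fake class probabilities; the two largest sit at indices 1 and 3.
probs = np.array([0.05, 0.50, 0.15, 0.30])
k = 2

# Same expression as predict_top_k: ascending sort, take the last k,
# then reverse so the highest-probability index comes first.
top_indices = np.argsort(probs)[-k:][::-1]
print(top_indices.tolist())  # → [1, 3]
```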
benchmark_cases.csv ADDED
@@ -0,0 +1,79 @@
+ text,expected_hs_code,category,language,notes
+ fresh boneless beef,020130,easy,en,common meat product
+ frozen boneless bovine meat for export,020230,easy,en,frozen meat variant
+ frozen shrimp 500g bag,030617,easy,en,common seafood
+ whole milk 3.5% fat,040120,easy,en,standard dairy
+ cheddar cheese block,040690,easy,en,common cheese
+ fresh tomatoes,070200,easy,en,basic vegetable
+ fresh red apples,080810,easy,en,common fruit
+ bananas fresh,080300,easy,en,top traded fruit
+ raw coffee beans unroasted,090111,easy,en,major commodity
+ white rice 25kg bag,100630,easy,en,staple grain
+ palm oil refined,151190,easy,en,major edible oil
+ cane sugar raw,170199,easy,en,basic commodity
+ sweet biscuits assorted,190531,easy,en,packaged food
+ bottled sparkling water flavored,220210,easy,en,common beverage
+ beer lager 330ml bottles,220300,easy,en,common alcohol
+ scotch whisky 700ml,220830,easy,en,spirits
+ crude petroleum oil,270900,easy,en,major commodity
+ polyethylene pellets LDPE,390110,easy,en,common plastic resin
+ car tyre 205/55R16 new,401110,easy,en,auto consumable
+ cotton t-shirts mens,610910,easy,en,basic garment
+ hot rolled steel coil 600mm,720839,easy,en,industrial steel
+ copper cathodes 99.99% purity,740311,easy,en,refined metal
+ laptop computer 14 inch,847130,easy,en,common electronics
+ smartphone Samsung Galaxy,851712,easy,en,ubiquitous device
+ lithium ion battery pack 48V,850760,easy,en,EV battery
+ sedan car 2000cc petrol engine,870323,easy,en,standard vehicle
+ wooden bedroom wardrobe,940350,easy,en,common furniture
+ tea,090210,edge_case,en,very short query - ambiguous
+ car parts,870899,edge_case,en,vague automotive
+ medicine,300490,edge_case,en,extremely generic
+ chips,854231,edge_case,en,ambiguous - food or electronics
+ oil,270900,edge_case,en,highly ambiguous
+ shoes,640399,edge_case,en,generic footwear
+ paper,480256,edge_case,en,very generic
+ plastic bags for groceries,392321,edge_case,en,everyday item
+ Galaxy S24 Ultra,851712,edge_case,en,brand name only
+ Nespresso coffee capsules,210111,edge_case,en,branded coffee product
+ Goodyear truck tyre 315/80R22.5,401120,edge_case,en,brand + specs
+ Jack Daniels Tennessee whiskey 750ml,220830,edge_case,en,brand name spirits
+ Nintendo Switch gaming console,950490,edge_case,en,brand - games vs electronics
+ surgical masks disposable,901890,edge_case,en,medical supply
+ USB-C charging cable,854239,edge_case,en,tech accessory
+ frozen cod fish fillet,030389,edge_case,en,specific fish species
+ stainless steel bolts M10,730890,edge_case,en,metal hardware
+ yoga pants women polyester,620462,edge_case,en,modern clothing description
+ aspirin tablets 500mg retail,300490,edge_case,en,OTC pharma
+ insecticide spray for mosquitoes,380891,edge_case,en,household chemical
+ PET bottles preform,390760,edge_case,en,industrial plastic
+ ข้าวหอมมะลิ,100630,multilingual,th,jasmine rice
+ กุ้งแช่แข็ง,030617,multilingual,th,frozen shrimp
+ รถยนต์ไฟฟ้า,870380,multilingual,th,electric car
+ โทรศัพท์มือถือ,851712,multilingual,th,mobile phone
+ ยางรถยนต์,401110,multilingual,th,car tyre
+ น้ำตาลทราย,170199,multilingual,th,granulated sugar
+ เสื้อยืดผ้าฝ้าย,610910,multilingual,th,cotton t-shirt
+ gạo trắng,100630,multilingual,vi,white rice
+ tôm đông lạnh,030617,multilingual,vi,frozen shrimp
+ cà phê nhân,090111,multilingual,vi,raw coffee beans
+ thép cuộn cán nóng,720839,multilingual,vi,hot rolled steel coil
+ điện thoại thông minh,851712,multilingual,vi,smartphone
+ xe ô tô điện,870380,multilingual,vi,electric car
+ dầu thô,270900,multilingual,vi,crude oil
+ 笔记本电脑,847130,multilingual,zh,laptop computer
+ 冷冻虾,030617,multilingual,zh,frozen shrimp
+ 大米,100630,multilingual,zh,rice
+ 锂电池,850760,multilingual,zh,lithium battery
+ 棉质T恤,610910,multilingual,zh,cotton t-shirt
+ 原油,270900,multilingual,zh,crude oil
+ English breakfast tea,090240,known_failure,en,black tea - code 090240 not in label space
+ matcha green tea powder,090210,known_failure,en,tea variant - model often confuses with other categories
+ oolong tea leaves 100g,090230,known_failure,en,semi-fermented tea - code 090230 not in label space
+ chamomile herbal tea bags,121190,known_failure,en,herbal infusion - not tea chapter - not in label space
+ fresh avocado,080440,known_failure,en,avocado code 080440 not in label space
+ quinoa grain organic,100850,known_failure,en,quinoa code 100850 not in label space
+ soy sauce 500ml bottle,210390,known_failure,en,soy sauce code 210390 not in label space
+ hand sanitizer gel 70% alcohol,380894,known_failure,en,sanitizer code 380894 not in label space
+ drone with 4K camera,880211,known_failure,en,UAV code not in label space
+ solar panel 400W monocrystalline,854140,known_failure,en,maps to photosensitive devices - often misclassified
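The CSV above can be loaded with Python's standard `csv` module; as `benchmark.py` does after loading, codes are re-padded with `zfill(6)` in case a leading zero was lost upstream. A minimal sketch (the two sample rows mirror cases from the file, with one leading zero deliberately dropped to show the normalisation):

```python
import csv
import io

# Two rows in the same shape as benchmark_cases.csv; the second
# expected_hs_code has lost its leading zero, which zfill(6) restores.
sample = io.StringIO(
    "text,expected_hs_code,category,language,notes\n"
    "fresh boneless beef,020130,easy,en,common meat product\n"
    "frozen shrimp 500g bag,30617,easy,en,common seafood\n"
)
rows = [
    {**row, "expected_hs_code": row["expected_hs_code"].zfill(6)}
    for row in csv.DictReader(sample)
]
print([r["expected_hs_code"] for r in rows])  # → ['020130', '030617']
```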
benchmark_results.json ADDED
@@ -0,0 +1,638 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+ {
+   "timestamp": "2026-02-24T20:46:17",
+   "benchmark": {
+     "total_cases": 78,
+     "in_label_space_cases": 70,
+     "overall": {
+       "top1_accuracy": 0.7949,
+       "top3_accuracy": 0.8205,
+       "top5_accuracy": 0.8333,
+       "chapter_accuracy": 0.8974
+     },
+     "in_label_space": {
+       "top1_accuracy": 0.8857,
+       "top3_accuracy": 0.9143,
+       "top5_accuracy": 0.9286,
+       "chapter_accuracy": 0.9571
+     },
+     "by_category": {
+       "easy": {
+         "total": 27,
+         "top1": 26,
+         "top3": 27,
+         "top5": 27,
+         "chapter": 27,
+         "top1_acc": 0.9629629629629629,
+         "top3_acc": 1.0,
+         "top5_acc": 1.0,
+         "chapter_acc": 1.0
+       },
+       "edge_case": {
+         "total": 21,
+         "top1": 15,
+         "top3": 16,
+         "top5": 17,
+         "chapter": 18,
+         "top1_acc": 0.7142857142857143,
+         "top3_acc": 0.7619047619047619,
+         "top5_acc": 0.8095238095238095,
+         "chapter_acc": 0.8571428571428571
+       },
+       "multilingual": {
+         "total": 20,
+         "top1": 20,
+         "top3": 20,
+         "top5": 20,
+         "chapter": 20,
+         "top1_acc": 1.0,
+         "top3_acc": 1.0,
+         "top5_acc": 1.0,
+         "chapter_acc": 1.0
+       },
+       "known_failure": {
+         "total": 10,
+         "top1": 1,
+         "top3": 1,
+         "top5": 1,
+         "chapter": 5,
+         "top1_acc": 0.1,
+         "top3_acc": 0.1,
+         "top5_acc": 0.1,
+         "chapter_acc": 0.5
+       }
+     },
+     "by_language": {
+       "en": {
+         "total": 58,
+         "top1": 42,
+         "top3": 44,
+         "top5": 45,
+         "top1_acc": 0.7241379310344828,
+         "top3_acc": 0.7586206896551724,
+         "top5_acc": 0.7758620689655172
+       },
+       "th": {
+         "total": 7,
+         "top1": 7,
+         "top3": 7,
+         "top5": 7,
+         "top1_acc": 1.0,
+         "top3_acc": 1.0,
+         "top5_acc": 1.0
+       },
+       "vi": {
+         "total": 7,
+         "top1": 7,
+         "top3": 7,
+         "top5": 7,
+         "top1_acc": 1.0,
+         "top3_acc": 1.0,
+         "top5_acc": 1.0
+       },
+       "zh": {
+         "total": 6,
+         "top1": 6,
+         "top3": 6,
+         "top5": 6,
+         "top1_acc": 1.0,
+         "top3_acc": 1.0,
+         "top5_acc": 1.0
+       }
+     },
+     "failures": [
+       {
+         "text": "polyethylene pellets LDPE",
+         "expected": "390110",
+         "predicted": "392321",
+         "confidence": 0.8091,
+         "category": "easy",
+         "language": "en",
+         "top5": ["392321", "390110", "950490", "300490", "220830"],
+         "notes": "common plastic resin"
+       },
+       {
+         "text": "medicine",
+         "expected": "300490",
+         "predicted": "300220",
+         "confidence": 1.0,
+         "category": "edge_case",
+         "language": "en",
+         "top5": ["300220", "950490", "310520", "220830", "240120"],
+         "notes": "extremely generic"
+       },
+       {
+         "text": "paper",
+         "expected": "480256",
+         "predicted": "481910",
+         "confidence": 1.0,
+         "category": "edge_case",
+         "language": "en",
+         "top5": ["481910", "310520", "220830", "240120", "252329"],
+         "notes": "very generic"
+       },
+       {
+         "text": "Nespresso coffee capsules",
+         "expected": "210111",
+         "predicted": "090111",
+         "confidence": 0.4046,
+         "category": "edge_case",
+         "language": "en",
+         "top5": ["090111", "300220", "740311", "210111", "310520"],
+         "notes": "branded coffee product"
+       },
+       {
+         "text": "USB-C charging cable",
+         "expected": "854239",
+         "predicted": "850760",
+         "confidence": 0.2069,
+         "category": "edge_case",
+         "language": "en",
+         "top5": ["850760", "847170", "850440", "870899", "870332"],
+         "notes": "tech accessory"
+       },
+       {
+         "text": "stainless steel bolts M10",
+         "expected": "730890",
+         "predicted": "760120",
+         "confidence": 0.2081,
+         "category": "edge_case",
+         "language": "en",
+         "top5": ["760120", "901890", "730890", "220830", "720917"],
+         "notes": "metal hardware"
+       },
+       {
+         "text": "yoga pants women polyester",
+         "expected": "620462",
+         "predicted": "611030",
+         "confidence": 1.0,
+         "category": "edge_case",
+         "language": "en",
+         "top5": ["611030", "950490", "310520", "220830", "240120"],
+         "notes": "modern clothing description"
+       },
+       {
+         "text": "English breakfast tea",
+         "expected": "090240",
+         "predicted": "090210",
+         "confidence": 0.8007,
+         "category": "known_failure",
+         "language": "en",
+         "top5": ["090210", "392321", "950490", "220830", "240120"],
+         "notes": "black tea - code 090240 not in label space"
+       },
+       {
+         "text": "oolong tea leaves 100g",
+         "expected": "090230",
+         "predicted": "090210",
+         "confidence": 0.6064,
+         "category": "known_failure",
+         "language": "en",
+         "top5": ["090210", "220300", "392321", "950490", "220830"],
+         "notes": "semi-fermented tea - code 090230 not in label space"
+       },
+       {
+         "text": "chamomile herbal tea bags",
+         "expected": "121190",
+         "predicted": "392321",
+         "confidence": 0.6013,
+         "category": "known_failure",
+         "language": "en",
+         "top5": ["392321", "090210", "950490", "220830", "240120"],
+         "notes": "herbal infusion - not tea chapter - not in label space"
+       },
+       {
+         "text": "fresh avocado",
+         "expected": "080440",
+         "predicted": "080810",
+         "confidence": 1.0,
+         "category": "known_failure",
+         "language": "en",
+         "top5": ["080810", "950490", "310520", "220830", "240120"],
+         "notes": "avocado code 080440 not in label space"
+       },
+       {
+         "text": "quinoa grain organic",
+         "expected": "100850",
+         "predicted": "080810",
+         "confidence": 0.3905,
+         "category": "known_failure",
+         "language": "en",
+         "top5": ["080810", "100630", "080510", "070200", "950490"],
+         "notes": "quinoa code 100850 not in label space"
+       },
+       {
+         "text": "soy sauce 500ml bottle",
+         "expected": "210390",
+         "predicted": "220300",
+         "confidence": 0.4237,
+         "category": "known_failure",
+         "language": "en",
+         "top5": ["220300", "300490", "150710", "950490", "220830"],
+         "notes": "soy sauce code 210390 not in label space"
+       },
+       {
+         "text": "hand sanitizer gel 70% alcohol",
+         "expected": "380894",
+         "predicted": "290531",
+         "confidence": 1.0,
+         "category": "known_failure",
+         "language": "en",
+         "top5": ["290531", "950490", "310520", "220830", "240120"],
+         "notes": "sanitizer code 380894 not in label space"
+       },
+       {
+         "text": "drone with 4K camera",
+         "expected": "880211",
+         "predicted": "852580",
+         "confidence": 0.6108,
+         "category": "known_failure",
+         "language": "en",
+         "top5": ["852580", "852872", "870323", "950490", "290531"],
+         "notes": "UAV code not in label space"
+       },
+       {
+         "text": "solar panel 400W monocrystalline",
+         "expected": "854140",
+         "predicted": "850440",
+         "confidence": 0.4031,
+         "category": "known_failure",
+         "language": "en",
+         "top5": ["850440", "850760", "870380", "853400", "290531"],
+         "notes": "maps to photosensitive devices - often misclassified"
+       }
+     ],
+     "n_failures": 16
+   },
+   "timing": {
+     "model_load_s": 3.93,
+     "benchmark_s": 1.24,
+     "total_s": 6.59
+   },
+   "split_analysis": {
+     "test_accuracy": 0.7721,
+     "weighted_f1": 0.7703,
+     "macro_f1": 0.8107,
+     "n_train": 7863,
+     "n_test": 1966,
+     "worst_15_by_f1": [
+       {"hs_code": "870332", "desc": "Diesel motor cars, 1500-2500cc", "precision": 0.2958, "recall": 0.2471, "f1": 0.2692, "support": 85},
+       {"hs_code": "271019", "desc": "Other medium petroleum oils", "precision": 0.5, "recall": 0.2222, "f1": 0.3077, "support": 9},
+       {"hs_code": "870323", "desc": "Motor cars, 1500-3000cc", "precision": 0.2991, "recall": 0.3465, "f1": 0.3211, "support": 101},
+       {"hs_code": "854239", "desc": "Other electronic integrated circuits", "precision": 0.5, "recall": 0.25, "f1": 0.3333, "support": 12},
+       {"hs_code": "870324", "desc": "Motor cars, >3000cc", "precision": 0.4839, "recall": 0.5263, "f1": 0.5042, "support": 57},
+       {"hs_code": "890190", "desc": "Other cargo vessels", "precision": 0.4211, "recall": 0.6667, "f1": 0.5161, "support": 12},
+       {"hs_code": "843149", "desc": "Other parts of cranes", "precision": 0.6667, "recall": 0.4444, "f1": 0.5333, "support": 18},
+       {"hs_code": "870899", "desc": "Other parts for motor vehicles", "precision": 0.5333, "recall": 0.5333, "f1": 0.5333, "support": 15},
+       {"hs_code": "271600", "desc": "Electrical energy", "precision": 0.6, "recall": 0.6, "f1": 0.6, "support": 10},
+       {"hs_code": "271012", "desc": "Light petroleum oils", "precision": 0.5833, "recall": 0.6364, "f1": 0.6087, "support": 11},
+       {"hs_code": "120190", "desc": "Soya beans, other", "precision": 0.8571, "recall": 0.5, "f1": 0.6316, "support": 12},
+       {"hs_code": "380891", "desc": "Insecticides", "precision": 1.0, "recall": 0.4615, "f1": 0.6316, "support": 13},
+       {"hs_code": "901890", "desc": "Other medical instruments", "precision": 0.7143, "recall": 0.5769, "f1": 0.6383, "support": 26},
+       {"hs_code": "847330", "desc": "Parts and accessories for computers", "precision": 0.8889, "recall": 0.5, "f1": 0.64, "support": 16},
+       {"hs_code": "940180", "desc": "Other seats", "precision": 0.6667, "recall": 0.6154, "f1": 0.64, "support": 13}
+     ],
+     "top_20_cross_chapter_confusions": [
+       {"true_code": "040120", "pred_code": "020130", "true_chapter": "Dairy", "pred_chapter": "Meat", "count": 3},
+       {"true_code": "840734", "pred_code": "870323", "true_chapter": "Machinery", "pred_chapter": "Vehicles", "count": 3},
+       {"true_code": "901890", "pred_code": "843149", "true_chapter": "Medical", "pred_chapter": "Machinery", "count": 3},
+       {"true_code": "120190", "pred_code": "100199", "true_chapter": "Oil Seeds", "pred_chapter": "Cereals", "count": 2},
+       {"true_code": "200990", "pred_code": "190531", "true_chapter": "Food Preparations", "pred_chapter": "Food Preparations", "count": 2},
+       {"true_code": "270900", "pred_code": "070200", "true_chapter": "Mineral Fuels", "pred_chapter": "Vegetables", "count": 2},
+       {"true_code": "271012", "pred_code": "870323", "true_chapter": "Mineral Fuels", "pred_chapter": "Vehicles", "count": 2},
+       {"true_code": "271019", "pred_code": "151190", "true_chapter": "Mineral Fuels", "pred_chapter": "Oils", "count": 2},
+       {"true_code": "730890", "pred_code": "392321", "true_chapter": "Steel", "pred_chapter": "Plastics", "count": 2},
+       {"true_code": "730890", "pred_code": "890190", "true_chapter": "Steel", "pred_chapter": "Ships", "count": 2},
+       {"true_code": "840734", "pred_code": "870324", "true_chapter": "Machinery", "pred_chapter": "Vehicles", "count": 2},
+       {"true_code": "843149", "pred_code": "392321", "true_chapter": "Machinery", "pred_chapter": "Plastics", "count": 2},
+       {"true_code": "843149", "pred_code": "730890", "true_chapter": "Machinery", "pred_chapter": "Steel", "count": 2},
+       {"true_code": "850760", "pred_code": "870380", "true_chapter": "Electronics", "pred_chapter": "Vehicles", "count": 2},
+       {"true_code": "853400", "pred_code": "392010", "true_chapter": "Electronics", "pred_chapter": "Plastics", "count": 2},
+       {"true_code": "870899", "pred_code": "842199", "true_chapter": "Vehicles", "pred_chapter": "Machinery", "count": 2},
+       {"true_code": "901890", "pred_code": "870899", "true_chapter": "Medical", "pred_chapter": "Vehicles", "count": 2},
+       {"true_code": "030617", "pred_code": "847170", "true_chapter": "Fish", "pred_chapter": "Electronics", "count": 1},
+       {"true_code": "040120", "pred_code": "340111", "true_chapter": "Dairy", "pred_chapter": "Cosmetics", "count": 1},
+       {"true_code": "040690", "pred_code": "220300", "true_chapter": "Dairy", "pred_chapter": "Beverages", "count": 1}
+     ]
+   }
+ }
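The aggregate numbers in `benchmark_results.json` are all derivable from the per-category counts, and each per-class F1 in `split_analysis` follows the usual harmonic-mean identity. A minimal, self-contained sketch (values copied from the file above; this script is illustrative only and not part of the repo):

```python
import json

# Per-category top-1 counts, copied from "by_category" above.
by_category = json.loads("""
{
  "easy":          {"total": 27, "top1": 26},
  "edge_case":     {"total": 21, "top1": 15},
  "multilingual":  {"total": 20, "top1": 20},
  "known_failure": {"total": 10, "top1": 1}
}
""")

total = sum(c["total"] for c in by_category.values())
hits = sum(c["top1"] for c in by_category.values())
assert total == 78 and hits == 62
print(round(hits / total, 4))  # 0.7949, matching "top1_accuracy"

# Per-class F1 is the harmonic mean of precision and recall.
# Recomputing from the rounded values in "worst_15_by_f1" agrees
# to ~3 decimals (the file stores F1 computed from unrounded inputs).
p, r = 0.2958, 0.2471  # hs_code 870332
f1 = 2 * p * r / (p + r)
assert abs(f1 - 0.2692) < 1e-3
```

The same identity explains the gap between `macro_f1` (unweighted mean over classes) and `weighted_f1` (support-weighted), since low-support classes like 271019 drag the macro average independently of their size.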
requirements.txt ADDED
@@ -0,0 +1,4 @@
+ sentence-transformers>=2.2.0
+ scikit-learn>=1.2.0
+ numpy>=1.24.0
+ pandas>=2.0.0