---
license: mit
language:
- uk
- en
base_model:
- intfloat/multilingual-e5-base
---
# Model Card for Retail Product Title Classifier (E5 fine-tuned)

## Model Details

### Model Description

A fine-tuned version of `intfloat/multilingual-e5-base`, adapted for the classification of retail product titles in Ukrainian and English.  
The model is optimized for noisy, real-world data (e.g., typos, abbreviations) typically encountered in e-commerce catalogues.

- **Developed by:** Viacheslav Trachov
- **Model type:** Transformer Encoder (E5)
- **Language(s):** Ukrainian, English
- **License:** MIT
- **Finetuned from model:** intfloat/multilingual-e5-base

## Uses

### Direct Use

- Classifying short, noisy product titles into predefined retail categories (see the inference sketch below).
- Designed for retail inventory management, e-commerce catalogues, and internal search optimization.
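A minimal inference sketch, assuming the checkpoint is published as a Hugging Face sequence-classification model; the repository id `your-namespace/retail-title-classifier` is a placeholder, and the label names come from the model config:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "your-namespace/retail-title-classifier"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

titles = ["смартфон samsung galaxy a54 128gb", "nike air max 90 white"]
inputs = tokenizer(titles, padding=True, truncation=True,
                   max_length=48, return_tensors="pt")  # 48 matches training

with torch.no_grad():
    logits = model(**inputs).logits

for title, pred in zip(titles, logits.argmax(dim=-1)):
    print(title, "->", model.config.id2label[pred.item()])
```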

### Out-of-Scope Use

- Free-text generation or long-form document classification.
- Tasks requiring high performance on languages other than Ukrainian/English.

## Bias, Risks, and Limitations

- Performance may degrade on titles that mix multiple languages or are heavily abbreviated beyond retail-specific contexts.
- The category taxonomy is tied to the fine-tuning domain (Ukrainian e-commerce retail); it will not transfer to other taxonomies without re-training.

### Recommendations

- For critical applications, use confidence thresholds to route low-confidence predictions to manual review (see the sketch below).
- When adapting to a new industry, evaluate on a domain-specific dataset first.
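A sketch of the thresholding recommendation, reusing `titles`, `logits`, and `model` from the inference example above; the 0.8 cutoff is illustrative, not a tuned value:

```python
import torch.nn.functional as F

THRESHOLD = 0.8  # illustrative cutoff; tune on a held-out validation set

probs = F.softmax(logits, dim=-1)
confidence, pred_ids = probs.max(dim=-1)

for title, conf, pred in zip(titles, confidence, pred_ids):
    if conf.item() < THRESHOLD:
        print(f"REVIEW: {title!r} (confidence {conf.item():.2f})")
    else:
        print(f"AUTO:   {title!r} -> {model.config.id2label[pred.item()]}")
```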

## Training Details

### Training Data
- ~60,000 real-world Ukrainian product titles from an e-commerce aggregator.
- Titles were preprocessed minimally (lowercasing, whitespace normalization); a sketch follows below.
- Additional synthetic examples were generated for underrepresented categories using GPT-4.
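A sketch of the minimal preprocessing described above; the exact steps in the original pipeline may differ:

```python
import re

def preprocess_title(title: str) -> str:
    """Lowercase and collapse runs of whitespace into single spaces."""
    return re.sub(r"\s+", " ", title.lower()).strip()

assert preprocess_title("  Смартфон   Samsung\tGalaxy ") == "смартфон samsung galaxy"
```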

### Training Procedure
- Fine-tuned for multi-class classification with cross-entropy loss (a configuration sketch follows after this list).
- Max sequence length: 48 tokens
- Learning rate: 5e-5
- Batch size: 64
- Epochs: 15
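A hedged sketch of how these hyperparameters map onto a standard Hugging Face `Trainer` setup; `NUM_LABELS` and the tiny in-memory dataset are placeholders, and the original training script may differ:

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

NUM_LABELS = 40  # placeholder -- set to the real number of categories

tokenizer = AutoTokenizer.from_pretrained("intfloat/multilingual-e5-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "intfloat/multilingual-e5-base", num_labels=NUM_LABELS
)

# Tiny illustrative dataset; the real ~60k-title corpus is not public.
raw = Dataset.from_dict({
    "text": ["смартфон samsung galaxy a54", "кросівки nike air max 90"],
    "labels": [0, 1],
})
train_ds = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=48),
    batched=True,
)

args = TrainingArguments(
    output_dir="e5-retail-classifier",
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    num_train_epochs=15,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()
```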

### Hardware

- NVIDIA V100 GPU

## Evaluation

Macro-F1 is the primary metric due to class imbalance.

| Test set                     | Macro-F1 |
|------------------------------|----------|
| Clean data                   | 0.830    |
| Noisy data (simulated typos) | 0.777    |

The model remains robust under simulated typographical noise, with only ~6.3% relative macro-F1 degradation. A sketch of the metric computation follows below.
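A sketch of the metric computation using scikit-learn; the toy label arrays below exist only to make the snippet runnable and do not reproduce the reported scores:

```python
from sklearn.metrics import f1_score

# Toy integer class ids; replace with real test-set labels/predictions.
y_true       = [0, 1, 2, 1, 0, 2]
y_pred_clean = [0, 1, 2, 1, 0, 1]
y_pred_noisy = [0, 1, 1, 1, 0, 1]

clean_f1 = f1_score(y_true, y_pred_clean, average="macro")
noisy_f1 = f1_score(y_true, y_pred_noisy, average="macro")
print(f"macro-F1 clean: {clean_f1:.3f}  noisy: {noisy_f1:.3f}")
print(f"relative degradation: {(clean_f1 - noisy_f1) / clean_f1:.1%}")
```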