---
library_name: transformers
license: mit
base_model: almanach/camembertav2-base
tags:
- generated_from_trainer
- name
- person
- company
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: camembertav2-base-name-classifier-v2
  results: []
datasets:
- ele-sage/person-company-names-classification
language:
- fr
- en
---

# camembertav2-base-name-classifier-v2

This model is a fine-tuned version of [almanach/camembertav2-base](https://huggingface.co/almanach/camembertav2-base) on the [ele-sage/person-company-names-classification](https://huggingface.co/datasets/ele-sage/person-company-names-classification) dataset.

It achieves the following results on the evaluation set:
- Loss: 0.0260
- Accuracy: 0.9931
- Precision: 0.9967
- Recall: 0.9911
- F1: 0.9939

## Model description

This model is a high-performance binary text classifier, fine-tuned from `camembertav2-base`.
Its purpose is to distinguish between a **person's name** and a **company/organization name** with high accuracy.

### Direct Use

This model is intended to be used for text classification. Given a string, it will return a label indicating whether the string is a `Person` or a `Company`.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ele-sage/camembertav2-base-name-classifier-v2")

texts = [
    "Satya Nadella",
    "Global Innovations Inc.",
    "Martinez, Alonso",
]

results = classifier(texts)

# Each result is a dict with "label" and "score"; outputs are returned in input order.
for text, result in zip(texts, results):
    print(f"Text: '{text}', Prediction: {result['label']}, Score: {result['score']:.4f}")
```
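
If you prefer not to use the `pipeline` helper, the same prediction can be made with the lower-level API. This is a minimal sketch, assuming the standard `AutoModelForSequenceClassification` head and the label names stored in the model config; it is not taken from the original training code.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "ele-sage/camembertav2-base-name-classifier-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

texts = ["Satya Nadella", "Global Innovations Inc."]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the two classes, then map each argmax id to its label via the config.
probs = torch.softmax(logits, dim=-1)
for text, prob in zip(texts, probs):
    class_id = int(prob.argmax())
    score = prob[class_id].item()
    print(f"{text!r} -> {model.config.id2label[class_id]} ({score:.4f})")
```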

### Downstream Use

This model is a key component of a two-stage name-processing pipeline. It is designed to be used as a fast, efficient "gatekeeper" that first identifies person names before passing them to a more complex parsing model, such as `ele-sage/distilbert-base-uncased-name-splitter`; a sketch of that flow is shown below.
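
The two-stage flow below is illustrative only, not the exact production pipeline. It assumes the splitter can be loaded as a `token-classification` pipeline and that this classifier's positive label is `Person`; check the splitter's model card for its actual task and output format.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ele-sage/camembertav2-base-name-classifier-v2")
# Assumption: the splitter is exposed as a token-classification model; adjust the task
# string to whatever ele-sage/distilbert-base-uncased-name-splitter actually declares.
splitter = pipeline("token-classification", model="ele-sage/distilbert-base-uncased-name-splitter")

names = ["Satya Nadella", "Global Innovations Inc.", "Martinez, Alonso"]

for name, pred in zip(names, classifier(names)):
    if pred["label"] == "Person":  # label string assumed; inspect classifier.model.config.id2label
        print(name, "->", splitter(name))
    else:
        print(name, "-> classified as a company, not parsed")
```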

### Out-of-Scope Use

- This model is not a general-purpose classifier. It is highly specialized for distinguishing persons from companies and will not perform well on other classification tasks (e.g., sentiment analysis).

## Bias, Risks, and Limitations

- **Geographic & Cultural Bias:** The training data is heavily biased towards North American (Canadian) person names and Quebec-based company names. The model will be less accurate when classifying names from other cultural or geographic origins.
- **Ambiguity:** Certain names can legitimately be both a person's name and a company's name (e.g., "Ford"). In these cases, the model makes a statistical guess based on its training data, which may not always align with the specific context.
- **Data Source:** The person name data is derived from a Facebook data leak and contains noise. While a rigorous cleaning process was applied, the model may have learned from some spurious data.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map to `TrainingArguments` follows the list):
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
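
Read literally, these settings correspond roughly to the `transformers.TrainingArguments` below. The training script itself is not published with this card, so `output_dir` and the evaluation cadence (every 2000 steps, as in the results table) are assumptions.

```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters; not the original training script.
training_args = TrainingArguments(
    output_dir="camembertav2-base-name-classifier-v2",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=1,
    eval_strategy="steps",  # evaluation every 2000 steps matches the results table below
    eval_steps=2000,
)
```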

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.0419        | 0.0411 | 2000  | 0.0383          | 0.9902   | 0.9954    | 0.9873 | 0.9913 |
| 0.0333        | 0.0821 | 4000  | 0.0351          | 0.9912   | 0.9950    | 0.9895 | 0.9922 |
| 0.0354        | 0.1232 | 6000  | 0.0340          | 0.9911   | 0.9975    | 0.9869 | 0.9921 |
| 0.0316        | 0.1642 | 8000  | 0.0321          | 0.9918   | 0.9957    | 0.9897 | 0.9927 |
| 0.0325        | 0.2053 | 10000 | 0.0299          | 0.9918   | 0.9947    | 0.9908 | 0.9928 |
| 0.0304        | 0.2464 | 12000 | 0.0301          | 0.9920   | 0.9951    | 0.9908 | 0.9929 |
| 0.0288        | 0.2874 | 14000 | 0.0301          | 0.9921   | 0.9959    | 0.9902 | 0.9930 |
| 0.0329        | 0.3285 | 16000 | 0.0283          | 0.9923   | 0.9957    | 0.9907 | 0.9932 |
| 0.0314        | 0.3696 | 18000 | 0.0276          | 0.9925   | 0.9960    | 0.9907 | 0.9933 |
| 0.0277        | 0.4106 | 20000 | 0.0277          | 0.9926   | 0.9964    | 0.9905 | 0.9935 |
| 0.0318        | 0.4517 | 22000 | 0.0279          | 0.9926   | 0.9968    | 0.9902 | 0.9935 |
| 0.0246        | 0.4927 | 24000 | 0.0284          | 0.9927   | 0.9963    | 0.9908 | 0.9936 |
| 0.0294        | 0.5338 | 26000 | 0.0276          | 0.9927   | 0.9966    | 0.9904 | 0.9935 |
| 0.0304        | 0.5749 | 28000 | 0.0275          | 0.9925   | 0.9950    | 0.9918 | 0.9934 |
| 0.0283        | 0.6159 | 30000 | 0.0268          | 0.9928   | 0.9968    | 0.9904 | 0.9936 |
| 0.0304        | 0.6570 | 32000 | 0.0276          | 0.9928   | 0.9969    | 0.9904 | 0.9936 |
| 0.0295        | 0.6981 | 34000 | 0.0274          | 0.9929   | 0.9963    | 0.9912 | 0.9937 |
| 0.0266        | 0.7391 | 36000 | 0.0271          | 0.9930   | 0.9964    | 0.9912 | 0.9938 |
| 0.0271        | 0.7802 | 38000 | 0.0270          | 0.9929   | 0.9971    | 0.9904 | 0.9937 |
| 0.0277        | 0.8212 | 40000 | 0.0266          | 0.9930   | 0.9968    | 0.9909 | 0.9938 |
| 0.0270        | 0.8623 | 42000 | 0.0265          | 0.9931   | 0.9969    | 0.9908 | 0.9939 |
| 0.0286        | 0.9034 | 44000 | 0.0262          | 0.9931   | 0.9966    | 0.9912 | 0.9939 |
| 0.0275        | 0.9444 | 46000 | 0.0262          | 0.9931   | 0.9967    | 0.9912 | 0.9939 |
| 0.0287        | 0.9855 | 48000 | 0.0260          | 0.9931   | 0.9967    | 0.9911 | 0.9939 |
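
The metric function used to produce these columns is not included in the card; a typical `compute_metrics` for a binary classifier, written here with scikit-learn as an assumption, would look like this:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    """Accuracy/precision/recall/F1 for a two-class problem (illustrative, not the original code)."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average="binary")
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```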

### Framework versions

- Transformers 4.57.1
- Pytorch 2.9.0+cu128
- Datasets 4.4.1
- Tokenizers 0.22.1