Commit c308565 · verified · committed by ele-sage · 1 parent: 6893421

Upload 10 files

README.md CHANGED
@@ -4,83 +4,49 @@ license: mit
 base_model: almanach/camembertav2-base
 tags:
 - generated_from_trainer
- - name
- - person
- - company
 metrics:
 - accuracy
 - precision
 - recall
 - f1
 model-index:
- - name: camembertav2-base-name-classifier-v2
+ - name: camembertav2-base-name-classifier-v3
   results: []
- datasets:
- - ele-sage/person-company-names-classification
- language:
- - fr
- - en
 ---

+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->

- # camembertav2-base-name-classifier-v2
+ # camembertav2-base-name-classifier-v3

- This model is a fine-tuned version of [almanach/camembertav2-base](https://huggingface.co/almanach/camembertav2-base) on the [ele-sage/person-company-names-classification](https://huggingface.co/datasets/ele-sage/person-company-names-classification) dataset.
+ This model is a fine-tuned version of [almanach/camembertav2-base](https://huggingface.co/almanach/camembertav2-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.0318
- - Accuracy: 0.9915
+ - Loss: 0.0260
+ - Accuracy: 0.9931
 - Precision: 0.9967
- - Recall: 0.9883
- - F1: 0.9925
+ - Recall: 0.9911
+ - F1: 0.9939

 ## Model description

- This model is a high-performance binary text classifier, fine-tuned from `camembertav2-base`.
- Its purpose is to distinguish between a **person's name** and a **company/organization name** with high accuracy.
+ More information needed

- ### Direct Use
+ ## Intended uses & limitations

- This model is intended to be used for text classification. Given a string, it will return a label indicating whether the string is a `Person` or a `Company`.
-
- ```python
- from transformers import pipeline
-
- classifier = pipeline("text-classification", model="ele-sage/camembertav2-base-name-classifier-v2")
-
- texts = [
-     "Satya Nadella",
-     "Global Innovations Inc.",
-     "Martinez, Alonso"
- ]
- results = classifier(texts)
-
- # The pipeline returns dicts with 'label' and 'score' only, so pair each prediction with its input text.
- for text, result in zip(texts, results):
-     print(f"Text: '{text}', Prediction: {result['label']}, Score: {result['score']:.4f}")
- ```
-
- ### Downstream Use
-
- This model is a key component of a two-stage name-processing pipeline. It is designed to be used as a fast, efficient "gatekeeper" that first identifies person names before passing them to a more complex parsing model, such as `ele-sage/distilbert-base-uncased-name-splitter` (see the sketch following this README diff).
-
- ### Out-of-Scope Use
-
- - This model is not a general-purpose classifier. It is highly specialized for distinguishing persons from companies and will not perform well on other classification tasks (e.g., sentiment analysis).
-
- ## Bias, Risks, and Limitations
-
- - **Geographic & Cultural Bias:** The training data is heavily biased towards North American (Canadian) person names and Quebec-based company names. The model will be less accurate when classifying names from other cultural or geographic origins.
- - **Ambiguity:** Certain names can legitimately be both a person's name and a company's name (e.g., "Ford"). In these cases, the model makes a statistical guess based on its training data, which may not always align with the specific context.
- - **Data Source:** The person-name data is derived from a Facebook data leak and contains noise. Although a rigorous cleaning process was applied, the model may have learned from some spurious data.
+ More information needed

+ ## Training and evaluation data

+ More information needed

 ## Training procedure

 ### Training hyperparameters

 The following hyperparameters were used during training:
- - learning_rate: 2e-05
- - train_batch_size: 256
- - eval_batch_size: 256
+ - learning_rate: 1e-05
+ - train_batch_size: 128
+ - eval_batch_size: 128
 - seed: 42
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
@@ -91,18 +57,30 @@ The following hyperparameters were used during training:

 | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
 |:-------------:|:------:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
- | 0.0407 | 0.0826 | 2000 | 0.0346 | 0.9903 | 0.9939 | 0.9888 | 0.9914 |
- | 0.0325 | 0.1652 | 4000 | 0.0333 | 0.9908 | 0.9933 | 0.9903 | 0.9918 |
- | 0.0331 | 0.2479 | 6000 | 0.0314 | 0.9913 | 0.9962 | 0.9883 | 0.9923 |
- | 0.0345 | 0.3305 | 8000 | 0.0308 | 0.9917 | 0.9946 | 0.9906 | 0.9926 |
- | 0.0306 | 0.4131 | 10000 | 0.0318 | 0.9915 | 0.9967 | 0.9883 | 0.9925 |
- | 0.03 | 0.4957 | 12000 | 0.0294 | 0.9920 | 0.9954 | 0.9903 | 0.9929 |
- | 0.0301 | 0.5783 | 14000 | 0.0291 | 0.9920 | 0.9954 | 0.9904 | 0.9929 |
- | 0.0305 | 0.6609 | 16000 | 0.0282 | 0.9922 | 0.9960 | 0.9901 | 0.9930 |
- | 0.0282 | 0.7436 | 18000 | 0.0286 | 0.9922 | 0.9947 | 0.9914 | 0.9930 |
- | 0.0284 | 0.8262 | 20000 | 0.0284 | 0.9923 | 0.9957 | 0.9907 | 0.9932 |
- | 0.027 | 0.9088 | 22000 | 0.0281 | 0.9924 | 0.9959 | 0.9906 | 0.9932 |
- | 0.0289 | 0.9914 | 24000 | 0.0279 | 0.9924 | 0.9956 | 0.9909 | 0.9932 |
+ | 0.0419 | 0.0411 | 2000 | 0.0383 | 0.9902 | 0.9954 | 0.9873 | 0.9913 |
+ | 0.0333 | 0.0821 | 4000 | 0.0351 | 0.9912 | 0.9950 | 0.9895 | 0.9922 |
+ | 0.0354 | 0.1232 | 6000 | 0.0340 | 0.9911 | 0.9975 | 0.9869 | 0.9921 |
+ | 0.0316 | 0.1642 | 8000 | 0.0321 | 0.9918 | 0.9957 | 0.9897 | 0.9927 |
+ | 0.0325 | 0.2053 | 10000 | 0.0299 | 0.9918 | 0.9947 | 0.9908 | 0.9928 |
+ | 0.0304 | 0.2464 | 12000 | 0.0301 | 0.9920 | 0.9951 | 0.9908 | 0.9929 |
+ | 0.0288 | 0.2874 | 14000 | 0.0301 | 0.9921 | 0.9959 | 0.9902 | 0.9930 |
+ | 0.0329 | 0.3285 | 16000 | 0.0283 | 0.9923 | 0.9957 | 0.9907 | 0.9932 |
+ | 0.0314 | 0.3696 | 18000 | 0.0276 | 0.9925 | 0.9960 | 0.9907 | 0.9933 |
+ | 0.0277 | 0.4106 | 20000 | 0.0277 | 0.9926 | 0.9964 | 0.9905 | 0.9935 |
+ | 0.0318 | 0.4517 | 22000 | 0.0279 | 0.9926 | 0.9968 | 0.9902 | 0.9935 |
+ | 0.0246 | 0.4927 | 24000 | 0.0284 | 0.9927 | 0.9963 | 0.9908 | 0.9936 |
+ | 0.0294 | 0.5338 | 26000 | 0.0276 | 0.9927 | 0.9966 | 0.9904 | 0.9935 |
+ | 0.0304 | 0.5749 | 28000 | 0.0275 | 0.9925 | 0.9950 | 0.9918 | 0.9934 |
+ | 0.0283 | 0.6159 | 30000 | 0.0268 | 0.9928 | 0.9968 | 0.9904 | 0.9936 |
+ | 0.0304 | 0.6570 | 32000 | 0.0276 | 0.9928 | 0.9969 | 0.9904 | 0.9936 |
+ | 0.0295 | 0.6981 | 34000 | 0.0274 | 0.9929 | 0.9963 | 0.9912 | 0.9937 |
+ | 0.0266 | 0.7391 | 36000 | 0.0271 | 0.9930 | 0.9964 | 0.9912 | 0.9938 |
+ | 0.0271 | 0.7802 | 38000 | 0.0270 | 0.9929 | 0.9971 | 0.9904 | 0.9937 |
+ | 0.0277 | 0.8212 | 40000 | 0.0266 | 0.9930 | 0.9968 | 0.9909 | 0.9938 |
+ | 0.027 | 0.8623 | 42000 | 0.0265 | 0.9931 | 0.9969 | 0.9908 | 0.9939 |
+ | 0.0286 | 0.9034 | 44000 | 0.0262 | 0.9931 | 0.9966 | 0.9912 | 0.9939 |
+ | 0.0275 | 0.9444 | 46000 | 0.0262 | 0.9931 | 0.9967 | 0.9912 | 0.9939 |
+ | 0.0287 | 0.9855 | 48000 | 0.0260 | 0.9931 | 0.9967 | 0.9911 | 0.9939 |


 ### Framework versions
@@ -110,4 +88,4 @@ The following hyperparameters were used during training:
 - Transformers 4.57.1
 - Pytorch 2.9.0+cu128
 - Datasets 4.4.1
- - Tokenizers 0.22.1
+ - Tokenizers 0.22.1
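
The removed "Downstream Use" section above describes chaining this classifier with a heavier parsing model, `ele-sage/distilbert-base-uncased-name-splitter`, in a two-stage pipeline. Below is a minimal sketch of that gatekeeper pattern, assuming both models load through the standard `transformers` pipeline API; the `Person` label string and the splitter's `token-classification` task are assumptions taken from the old card's wording, not verified against those repositories.

```python
from transformers import pipeline

# Stage 1: the fast "gatekeeper" classifier described in the removed Downstream Use section.
classifier = pipeline(
    "text-classification",
    model="ele-sage/camembertav2-base-name-classifier-v2",
)

# Stage 2: the heavier name-splitting model; its task type and output format are assumptions here.
splitter = pipeline(
    "token-classification",
    model="ele-sage/distilbert-base-uncased-name-splitter",
)

def process_name(text: str) -> dict:
    """Classify a raw name string and only parse it further when it looks like a person."""
    verdict = classifier(text)[0]
    if verdict["label"] == "Person":  # label string assumed from the old card's wording
        return {"type": "person", "score": verdict["score"], "parts": splitter(text)}
    return {"type": "company", "score": verdict["score"], "parts": None}

print(process_name("Martinez, Alonso"))
print(process_name("Global Innovations Inc."))
```

Running the classifier first keeps the cheaper model on the hot path and only invokes the splitter for strings it judges to be person names.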
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:51c296a5d456f6ae829b10bc407a9ed880251d4e527d199216aa4ebf1dd63d60
+oid sha256:330a5ec3af499755ffb8b874f3baa72bd29893196c15af0cd74af4bd7c1a0951
 size 444859368
runs/Dec06_21-39-38_elesage-pc/events.out.tfevents.1765075277.elesage-pc.85356.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a5fe2d3c2ec7aeb7172997ec91680f5be3a053d32773ae9bd0413950713299f3
+size 69278
runs/Dec06_21-39-38_elesage-pc/events.out.tfevents.1765078141.elesage-pc.85356.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cc8806b3ade97f59fbe1198ba57a05977a70517e859f98698f5da40031a9799e
+size 569
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1e55cff1c14414ae4f79537ccb5eaebdbf2648f590ae30b467416ef9ee1779a3
+oid sha256:35ad20024308e12bb5b51ecace2586384e52ac6511fad8342a726a4d497c574f
 size 5905
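
The updated card lists a learning rate of 1e-05, batch size 128, the AdamW (torch) optimizer, a linear schedule, seed 42, and evaluation every 2000 steps, and this commit ships a new `training_args.bin` to match. Below is a rough, hedged sketch of an equivalent `Trainer` configuration with the accuracy/precision/recall/F1 metrics the card reports; it is not a dump of `training_args.bin`, and the epoch count, evaluation strategy, metric averaging, and dataset loading are assumptions.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "almanach/camembertav2-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Hyperparameters copied from the updated card; everything else (epochs, eval cadence,
# output path) is an assumption for illustration, not read from training_args.bin.
args = TrainingArguments(
    output_dir="camembertav2-base-name-classifier-v3",
    learning_rate=1e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=1,   # assumed; the logged epoch column stops just short of 1.0
    eval_strategy="steps",
    eval_steps=2000,      # matches the 2000-step cadence of the results table
    logging_steps=2000,
)

def compute_metrics(eval_pred):
    """Accuracy, precision, recall, and F1: the metrics reported in the model card."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary"
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }

# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...,
#                   processing_class=tokenizer, compute_metrics=compute_metrics)
# trainer.train()
```

The `compute_metrics` helper mirrors the four columns of the results table; swap in the actual train and eval datasets before using it.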