---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:14481
- loss:MultipleNegativesRankingLoss
base_model: Lajavaness/sentence-camembert-large
widget:
- source_sentence: Plomberie sanitaire
  sentences:
  - Semis manuel de pelouses à gazon, mauresques et ordinaires
  - interne
  - Installation sanitaire
- source_sentence: Charpente bois
  sentences:
  - Structure charpente
  - Équipements sanitaires
  - Installation pour le briquetage des garnitures de frein
- source_sentence: >-
    Machine à découper pour la découpe de la base des bandes et des plaques
    aiguilletées
  sentences:
  - AVB-915
  - Touret d'affûtage pour bandes et plaques à aiguilles
  - section 200 x 400 mm
- source_sentence: plus de 32 cm
  sentences:
  - >-
    combustible gaz-mazout, capacité de production de vapeur 35-75 t/h,
    pression 3,9 MPa
  - plus de 0,2 à 0,35 m3
  - à la norme 01-02-104-01
- source_sentence: jusqu'à 25 m
  sentences:
  - à la norme 33-04-018-02
  - 14,2 t
  - jusqu'à 50 m
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on Lajavaness/sentence-camembert-large
  results:
  - task:
      type: semantic-similarity
      name: Semantic Similarity
    dataset:
      name: eval
      type: eval
    metrics:
    - type: pearson_cosine
      value: .nan
      name: Pearson Cosine
    - type: spearman_cosine
      value: .nan
      name: Spearman Cosine
---
# SentenceTransformer based on Lajavaness/sentence-camembert-large
This is a [sentence-transformers](https://www.SBERT.net) model fine-tuned from [Lajavaness/sentence-camembert-large](https://huggingface.co/Lajavaness/sentence-camembert-large) on 14,481 French construction-domain text pairs. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [Lajavaness/sentence-camembert-large](https://huggingface.co/Lajavaness/sentence-camembert-large)
- **Maximum Sequence Length:** 514 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 514, 'do_lower_case': False, 'architecture': 'CamembertModel'})
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
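For intuition, the `Pooling` module above builds the sentence embedding by mean-pooling the token embeddings, weighted by the attention mask so that padding is ignored. A minimal sketch of that computation, done by hand on the base encoder (illustrative only; in practice you would just call `model.encode`, and the base model is used here because this repo's id is a placeholder below):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load the base encoder and tokenizer (the fine-tuned weights would behave the same way).
tokenizer = AutoTokenizer.from_pretrained("Lajavaness/sentence-camembert-large")
encoder = AutoModel.from_pretrained("Lajavaness/sentence-camembert-large")

batch = tokenizer(["Plomberie sanitaire", "Installation sanitaire"],
                  padding=True, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 1024)

# Mean pooling: average only the real tokens, as flagged by the attention mask.
mask = batch["attention_mask"].unsqueeze(-1).float()       # (batch, seq_len, 1)
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(sentence_embeddings.shape)  # torch.Size([2, 1024])
```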
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    "jusqu'à 25 m",
    "jusqu'à 50 m",
    'à la norme 33-04-018-02',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.8389, 0.0886],
#         [0.8389, 1.0000, 0.1294],
#         [0.0886, 0.1294, 1.0000]])
```
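Beyond the generic snippet above, a common pattern for the construction-matching use case is to embed a fixed catalogue of known work descriptions once, then match incoming free-text labels against it. A minimal sketch, keeping the model-id placeholder from above and using invented catalogue entries:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence_transformers_model_id")  # replace with the actual model id

# Invented reference catalogue of known work descriptions.
catalogue = [
    "Installation sanitaire",
    "Structure charpente",
    "Équipements sanitaires",
]
catalogue_embeddings = model.encode(catalogue)

# Match an incoming free-text label against the catalogue.
query_embedding = model.encode(["Plomberie sanitaire"])
scores = model.similarity(query_embedding, catalogue_embeddings)  # (1, 3) cosine scores
best = int(scores.argmax())
print(catalogue[best], float(scores[0, best]))
```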
## Evaluation

### Metrics

#### Semantic Similarity

- Dataset: `eval`
- Evaluated with [`EmbeddingSimilarityEvaluator`](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric          | Value |
|:----------------|:------|
| pearson_cosine  | nan   |
| spearman_cosine | nan   |

Both correlations are reported as `nan`. This typically means the evaluator received no usable gold similarity scores (e.g. missing or constant labels for the (anchor, positive) pairs), so these values say nothing about model quality.
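For context, `EmbeddingSimilarityEvaluator` computes cosine similarities for sentence pairs and correlates them with float gold scores. A hedged sketch of a well-formed call (the pairs and scores here are invented for illustration; gold scores must vary, otherwise Pearson/Spearman are undefined):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("sentence_transformers_model_id")  # replace with the actual model id

# Invented pairs with varying float gold scores in [0, 1]; if the scores were
# missing or all identical, both correlations would come out as nan.
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["jusqu'à 25 m", "Charpente bois", "Plomberie sanitaire"],
    sentences2=["jusqu'à 50 m", "Équipements sanitaires", "Installation sanitaire"],
    scores=[0.8, 0.2, 0.9],
    name="eval",
)
print(evaluator(model))  # e.g. {'eval_pearson_cosine': ..., 'eval_spearman_cosine': ...}
```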
## Training Details

### Training Dataset

#### Unnamed Dataset
- Size: 14,481 training samples
- Columns: <code>anchor</code> and <code>positive</code>
- Approximate statistics based on the first 1000 samples:
  |         | anchor | positive |
  |:--------|:-------|:---------|
  | type    | string | string |
  | details | <ul><li>min: 3 tokens</li><li>mean: 13.16 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 13.46 tokens</li><li>max: 61 tokens</li></ul> |
- Samples:
  | anchor | positive |
  |:-------|:---------|
  | <code>Balances à plate-forme ; dispositif de recouvrement</code> | <code>Machine d'alumination</code> |
  | <code>plus de 18 m², coefficient de résistance des roches 4 - 6</code> | <code>plus de 18 m², coefficient de résistance des roches 7 - 20</code> |
  | <code>plus de 20 à 30 m dans les sols du groupe 1</code> | <code>plus de 20 à 30 m dans les sols du groupe 2</code> |
- Loss: `MultipleNegativesRankingLoss` with these parameters (a code sketch of the mechanics follows below):
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim",
      "gather_across_devices": false
  }
  ```
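For intuition, `MultipleNegativesRankingLoss` needs only (anchor, positive) pairs: within a batch, every other positive `b_j` (j ≠ i) acts as a negative for anchor `a_i`. It scales the pairwise cosine-similarity matrix and applies cross-entropy against the diagonal. A minimal re-implementation sketch with random stand-in embeddings:

```python
import torch
import torch.nn.functional as F

# Random stand-ins for L2-normalised anchor/positive embeddings of a batch of 16 pairs.
embeddings_a = F.normalize(torch.randn(16, 1024), dim=1)
embeddings_b = F.normalize(torch.randn(16, 1024), dim=1)

scale = 20.0                                    # matches the reported loss parameters
scores = scale * embeddings_a @ embeddings_b.T  # (16, 16) scaled cosine similarities
labels = torch.arange(16)                       # the true positive for a_i is b_i (diagonal)
loss = F.cross_entropy(scores, labels)          # off-diagonal b_j act as in-batch negatives
print(loss)
```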
### Evaluation Dataset

#### Unnamed Dataset

- Size: 1,609 evaluation samples
- Columns: <code>anchor</code> and <code>positive</code>
- Approximate statistics based on the first 1000 samples:
  |         | anchor | positive |
  |:--------|:-------|:---------|
  | type    | string | string |
  | details | <ul><li>min: 3 tokens</li><li>mean: 12.72 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 13.04 tokens</li><li>max: 64 tokens</li></ul> |
- Samples:
  | anchor | positive |
  |:-------|:---------|
  | <code>10 m3, groupe de sols 3 m</code> | <code>15 m3, groupe de sols 1 m</code> |
  | <code>125-200 mm</code> | <code>250-400 mm</code> |
  | <code>à la norme 01-01-032-05</code> | <code>à la norme 01-01-032-06</code> |
- Loss: `MultipleNegativesRankingLoss` with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim",
      "gather_across_devices": false
  }
  ```
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `warmup_steps`: 453
- `load_best_model_at_end`: True
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: None
- `warmup_ratio`: 0.0
- `warmup_steps`: 453
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `project`: huggingface
- `trackio_space_id`: trackio
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: no
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: True
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>
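Putting the non-default values above together, the run can be reproduced approximately with the sentence-transformers trainer. This is a sketch under assumptions: `output_dir` is invented (not reported in the card), the tiny datasets below are placeholders for the real 14,481-pair train / 1,609-pair eval sets, and `save_strategy="epoch"` is added because `load_best_model_at_end=True` requires matching save and eval strategies.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("Lajavaness/sentence-camembert-large")
loss = MultipleNegativesRankingLoss(model, scale=20.0)

# Placeholder pairs; substitute the real anchor/positive datasets described above.
train_dataset = Dataset.from_dict({
    "anchor": ["Plomberie sanitaire", "Charpente bois"],
    "positive": ["Installation sanitaire", "Structure charpente"],
})
eval_dataset = Dataset.from_dict({
    "anchor": ["jusqu'à 25 m"],
    "positive": ["jusqu'à 50 m"],
})

args = SentenceTransformerTrainingArguments(
    output_dir="output",              # assumed; not reported in the card
    eval_strategy="epoch",
    save_strategy="epoch",            # required to match eval_strategy for best-model reload
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=5,
    warmup_steps=453,
    load_best_model_at_end=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```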
### Training Logs
| Epoch   | Step     | Training Loss | Validation Loss | eval_spearman_cosine |
|:-------:|:--------:|:-------------:|:---------------:|:--------------------:|
| 0.5     | 453      | 0.5925        | -               | -                    |
| 1.0     | 906      | 0.4408        | 0.2765          | nan                  |
| 1.5     | 1359     | 0.3219        | -               | -                    |
| 2.0     | 1812     | 0.2956        | 0.2330          | nan                  |
| 2.5     | 2265     | 0.1923        | -               | -                    |
| 3.0     | 2718     | 0.2017        | 0.2032          | nan                  |
| 3.5     | 3171     | 0.1307        | -               | -                    |
| **4.0** | **3624** | **0.1151**    | **0.1981**      | **nan**              |
| 4.5     | 4077     | 0.096         | -               | -                    |
| 5.0     | 4530     | 0.0793        | 0.2025          | nan                  |

- The bold row denotes the saved checkpoint: epoch 4 has the lowest validation loss, and with `load_best_model_at_end: True` and `nan` evaluator scores, selection falls back to validation loss.
### Framework Versions
- Python: 3.9.6
- Sentence Transformers: 5.1.2
- Transformers: 4.57.6
- PyTorch: 2.8.0
- Accelerate: 1.10.1
- Datasets: 4.5.0
- Tokenizers: 0.22.2
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```