SentenceTransformer based on TechWolf/JobBERT-v3

This is a sentence-transformers model finetuned from TechWolf/JobBERT-v3. It maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for retrieval, for example matching job titles across languages.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: TechWolf/JobBERT-v3
  • Maximum Sequence Length: 128 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Supported Modality: Text

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'transformer_task': 'feature-extraction', 'modality_config': {'text': {'method': 'forward', 'method_output_name': 'last_hidden_state'}}, 'module_output_name': 'token_embeddings', 'architecture': 'XLMRobertaModel'})
  (1): Pooling({'embedding_dimension': 768, 'pooling_mode': 'mean', 'include_prompt': True})
  (2): Router(
    default_route='anchor'
    (sub_modules): ModuleDict(
      (anchor): Sequential(
        (0): Dense({'in_features': 768, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh', 'module_input_name': 'sentence_embedding', 'module_output_name': 'sentence_embedding'})
      )
      (positive): Sequential(
        (0): Dense({'in_features': 768, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh', 'module_input_name': 'sentence_embedding', 'module_output_name': 'sentence_embedding'})
      )
    )
  )
)
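
The Router above gives each input role its own Dense projection head on top of a shared encoder and pooling layer: with default_route='anchor' and the router_mapping shown under Training Hyperparameters, queries are routed through the anchor head and documents through the positive head. The following is a minimal sketch of what one forward pass through this stack computes, with random tensors standing in for the real XLMRobertaModel outputs and weights:

import torch

token_embeddings = torch.randn(1, 12, 768)   # (0) Transformer: last_hidden_state for 12 tokens
attention_mask = torch.ones(1, 12)           # no padding in this toy example

# (1) Pooling: masked mean over the token axis -> [1, 768]
mask = attention_mask.unsqueeze(-1)
pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# (2) Router: each route owns a Dense(768 -> 1024) head with tanh activation;
# W and b are stand-ins for one route's learned weights
W, b = torch.randn(1024, 768), torch.zeros(1024)
sentence_embedding = torch.tanh(pooled @ W.T + b)
print(sentence_embedding.shape)              # torch.Size([1, 1024])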

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("fihus/jobbert-v3-trilingual")
# Run inference
queries = [
    'cybersecurity risk manager',
]
documents = [
    'cybersecurity risk assurance consultant',
    'konfigurátorka aplikácií',
    'vedúci predajne obuvi a koženej galantérie',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# (1, 1024) (3, 1024)

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities.shape)
# torch.Size([1, 3])
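
Since model.similarity returns a torch tensor, you can rank the documents for each query directly; continuing from the snippet above:

import torch

# Sort documents by descending cosine similarity to the first query
scores = similarities[0]
for idx in torch.argsort(scores, descending=True).tolist():
    print(f"{scores[idx].item():.4f}  {documents[idx]}")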

Training Details

Training Dataset

Unnamed Dataset

  • Size: 93,425 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
                anchor                                positive
    type        string                                string
    details     min: 3, mean: 8.03, max: 24 tokens    min: 4, mean: 30.21, max: 128 tokens
  • Samples:
    anchor                                    positive
    laborant botanik                          laborantka botanička
    průvodčí vlaků v osobní dopravě           průvodčí osobní přepravy
    vývojářka softwaru vestavěných systémů    vývojářka softwaru vestavěných zařízení
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false,
        "directions": [
            "query_to_doc"
        ],
        "partition_mode": "joint",
        "hardness_mode": null,
        "hardness_strength": 0.0
    }
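
With these parameters the loss is an in-batch softmax cross-entropy over scaled cosine similarities, computed only in the query-to-document direction. A minimal PyTorch sketch of that computation (illustrative, not the sentence-transformers implementation):

import torch
import torch.nn.functional as F

def mnr_loss(anchor_emb, positive_emb, scale=20.0):
    # Cosine similarity = dot product of L2-normalized embeddings
    a = F.normalize(anchor_emb, dim=-1)
    p = F.normalize(positive_emb, dim=-1)
    logits = scale * (a @ p.T)              # [batch, batch] similarity matrix
    labels = torch.arange(len(a))           # positive_emb[i] is the match for anchor_emb[i]
    return F.cross_entropy(logits, labels)  # other rows' positives act as in-batch negatives

loss = mnr_loss(torch.randn(4, 1024), torch.randn(4, 1024))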
    

Evaluation Dataset

Unnamed Dataset

  • Size: 8,686 evaluation samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
                anchor                                positive
    type        string                                string
    details     min: 3, mean: 8.01, max: 24 tokens    min: 4, mean: 32.34, max: 128 tokens
  • Samples:
    anchor                                                             positive
    instruktorka řízení osobních automobilů                            instruktor řízení osobních automobilů
    technička námořní mechatroniky                                     technolog námořní mechatroniky
    specialista zahraničního obchodu v oblasti kancelářského nábytku   špecialistka v oblasti dovozu a vývozu kancelárskeho nábytku
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false,
        "directions": [
            "query_to_doc"
        ],
        "partition_mode": "joint",
        "hardness_mode": null,
        "hardness_strength": 0.0
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 128
  • learning_rate: 2e-05
  • warmup_steps: 0.1
  • fp16: True
  • per_device_eval_batch_size: 128
  • load_best_model_at_end: True
  • dataloader_drop_last: True
  • dataloader_num_workers: 4
  • router_mapping: {'anchor': 'anchor', 'positive': 'positive'}
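
For reference, these values map onto the trainer configuration roughly as follows. This is a sketch: output_dir is hypothetical, the 250-step eval/save cadence is inferred from the Training Logs section, and the fractional warmup_steps value is written as warmup_ratio, which is how a fractional warmup is normally expressed.

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="jobbert-v3-trilingual",     # hypothetical output path
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    learning_rate=2e-5,
    warmup_ratio=0.1,                       # reported above as warmup_steps: 0.1
    fp16=True,
    eval_strategy="steps",
    eval_steps=250,                         # matches the validation-loss cadence in Training Logs
    save_strategy="steps",
    save_steps=250,
    load_best_model_at_end=True,
    dataloader_drop_last=True,
    dataloader_num_workers=4,
    router_mapping={"anchor": "anchor", "positive": "positive"},
)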

All Hyperparameters

  • per_device_train_batch_size: 128
  • num_train_epochs: 3
  • max_steps: -1
  • learning_rate: 2e-05
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: None
  • warmup_steps: 0.1
  • optim: adamw_torch_fused
  • optim_args: None
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • optim_target_modules: None
  • gradient_accumulation_steps: 1
  • average_tokens_across_devices: True
  • max_grad_norm: 1.0
  • label_smoothing_factor: 0.0
  • bf16: False
  • fp16: True
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • use_liger_kernel: False
  • liger_kernel_config: None
  • use_cache: False
  • neftune_noise_alpha: None
  • torch_empty_cache_steps: None
  • auto_find_batch_size: False
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • include_num_input_tokens_seen: no
  • log_level: passive
  • log_level_replica: warning
  • disable_tqdm: False
  • project: huggingface
  • trackio_space_id: trackio
  • per_device_eval_batch_size: 128
  • prediction_loss_only: True
  • eval_on_start: False
  • eval_do_concat_batches: True
  • eval_use_gather_object: False
  • eval_accumulation_steps: None
  • include_for_metrics: []
  • batch_eval_metrics: False
  • save_only_model: False
  • save_on_each_node: False
  • enable_jit_checkpoint: False
  • push_to_hub: False
  • hub_private_repo: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_always_push: False
  • hub_revision: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • restore_callback_states_from_checkpoint: False
  • full_determinism: False
  • seed: 42
  • data_seed: None
  • use_cpu: False
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • dataloader_drop_last: True
  • dataloader_num_workers: 4
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • dataloader_prefetch_factor: None
  • remove_unused_columns: True
  • label_names: None
  • train_sampling_strategy: random
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • ddp_backend: None
  • ddp_timeout: 1800
  • fsdp: []
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • deepspeed: None
  • debug: []
  • skip_memory_metrics: True
  • do_predict: False
  • resume_from_checkpoint: None
  • warmup_ratio: None
  • local_rank: -1
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {'anchor': 'anchor', 'positive': 'positive'}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss Validation Loss
0.0686 50 1.1902 -
0.1372 100 0.9478 -
0.2058 150 0.7830 -
0.2743 200 0.7300 -
0.3429 250 0.6671 0.9028
0.4115 300 0.6276 -
0.4801 350 0.5243 -
0.5487 400 0.5387 -
0.6173 450 0.5103 -
0.6859 500 0.4896 0.8111
0.7545 550 0.4634 -
0.8230 600 0.4549 -
0.8916 650 0.4426 -
0.9602 700 0.4225 -
1.0288 750 0.3886 0.8071
1.0974 800 0.3586 -
1.1660 850 0.3403 -
1.2346 900 0.3548 -
1.3032 950 0.3572 -
1.3717 1000 0.3491 0.7716
1.4403 1050 0.3446 -
1.5089 1100 0.3374 -
1.5775 1150 0.3297 -
1.6461 1200 0.2979 -
1.7147 1250 0.3281 0.7645
1.7833 1300 0.3010 -
1.8519 1350 0.3111 -
1.9204 1400 0.3072 -
1.9890 1450 0.3178 -
2.0576 1500 0.2969 0.7676
2.1262 1550 0.2701 -
2.1948 1600 0.2561 -
2.2634 1650 0.2661 -
2.3320 1700 0.2525 -
2.4005 1750 0.2620 0.7628
  • The saved checkpoint is the step 1750 row (validation loss 0.7628, the lowest recorded), selected via load_best_model_at_end.

Training Time

  • Training: 8.8 minutes

Framework Versions

  • Python: 3.12.13
  • Sentence Transformers: 5.4.1
  • Transformers: 5.5.4
  • PyTorch: 2.11.0+cu130
  • Accelerate: 1.13.0
  • Datasets: 4.8.4
  • Tokenizers: 0.22.2

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{oord2019representationlearningcontrastivepredictive,
    title = "Representation Learning with Contrastive Predictive Coding",
    author = "van den Oord, Aaron and Li, Yazhe and Vinyals, Oriol",
    year = "2019",
    eprint = "1807.03748",
    archivePrefix = "arXiv",
    primaryClass = "cs.LG",
    url = "https://arxiv.org/abs/1807.03748",
}