---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:5749
- loss:CosineSimilarityLoss
widget:
- source_sentence: >-
    Nterprise Linux Services is expected to be available before then end of
    this year.
  sentences:
  - >-
    Beta versions of Nterprise Linux Services are expected to be available
    on certain HP ProLiant servers in July.
  - Spain turning back the clock on siestas
  - I don't like many flavored drinks.
- source_sentence: Iran hopes nuclear talks will yield 'roadmap'
  sentences:
  - Iran Nuclear Talks in Geneva Spur High Hopes
  - A black pet dog runs around in the garden of a house.
  - >-
    The witness was a 27-year-old Kosovan parking attendant, who was paid by
    the News of the World, the court heard.
- source_sentence: Hamas Urges Hizbullah to Pull Fighters Out of Syria
  sentences:
  - >-
    "This was a persistent problem which has not been solved, mechanically
    and physically," said board member Steven Wallace.
  - A small dog jumps over a yellow beam.
  - Hamas calls on Hezbollah to pull forces out of Syria
- source_sentence: Licensing revenue slid 21 percent, however, to $107.6 million.
  sentences:
  - Britain loses bid to deport radical cleric Abu Qatada
  - A man sits on a bed very close to a small television.
  - >-
    License sales, a key measure of demand, fell 21 percent to $107.6
    million.
- source_sentence: >-
    Comcast Class A shares were up 8 cents at $30.50 in morning trading on the
    Nasdaq Stock Market.
  sentences:
  - The stock rose 48 cents to $30 yesterday in Nasdaq Stock Market trading.
  - 'Malaysia: Chinese satellite found object in ocean'
  - A boy in a robe sits in a chair.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer
  results:
  - task:
      type: semantic-similarity
      name: Semantic Similarity
    metrics:
    - type: pearson_cosine
      value: 0.4639747212598005
      name: Pearson Correlation (Cosine Similarity)
    - type: spearman_cosine
      value: 0.4595105448711385
      name: Spearman Correlation (Cosine Similarity)
license: gemma
---
# SentenceTransformer

This is a trained sentence-transformers model. It maps sentences and paragraphs to a 256-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Maximum Sequence Length:** 2048 tokens
- **Output Dimensionality:** 256 dimensions
- **Similarity Function:** Cosine Similarity
### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 2048, 'do_lower_case': False, 'architecture': 'Gemma3TextModel'})
  (1): Pooling({'word_embedding_dimension': 256, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
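The `Pooling` module above uses mean-token pooling: each sentence embedding is the average of its token embeddings, with padding positions masked out. A minimal numpy sketch of that computation, using made-up toy token embeddings rather than real model outputs:

```python
import numpy as np

# Toy token embeddings: batch of 1 sentence, 4 token positions, 3 dimensions.
token_embeddings = np.array([[[1.0, 2.0, 3.0],
                              [3.0, 4.0, 5.0],
                              [0.0, 0.0, 0.0],    # padding position
                              [0.0, 0.0, 0.0]]])  # padding position
# Attention mask marks real tokens (1) vs padding (0).
attention_mask = np.array([[1, 1, 0, 0]])

# Mean pooling: sum the real token embeddings, divide by the real-token count.
mask = attention_mask[..., None]                # (1, 4, 1)
summed = (token_embeddings * mask).sum(axis=1)  # (1, 3)
counts = mask.sum(axis=1)                       # (1, 1)
sentence_embedding = summed / counts

print(sentence_embedding)  # [[2. 3. 4.]]
```

Only the two unmasked tokens contribute, so the result is their element-wise mean.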
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")

# Run inference
sentences = [
    'Comcast Class A shares were up 8 cents at $30.50 in morning trading on the Nasdaq Stock Market.',
    'The stock rose 48 cents to $30 yesterday in Nasdaq Stock Market trading.',
    'Malaysia: Chinese satellite found object in ocean',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 256]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.5752, 0.2980],
#         [0.5752, 1.0000, 0.2161],
#         [0.2980, 0.2161, 1.0000]])
```
## Evaluation

### Metrics

#### Semantic Similarity

| Metric          | Value  |
|-----------------|--------|
| pearson_cosine  | 0.464  |
| spearman_cosine | 0.4595 |
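These scores are the Pearson and Spearman correlations between the model's cosine-similarity scores and the gold similarity labels. A minimal numpy sketch of how such correlations are computed, using made-up toy scores and labels rather than real evaluation data:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation: covariance normalized by both standard deviations."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def spearman(x, y):
    """Spearman correlation: Pearson on the ranks (toy data here has no ties)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(x), rank(y))

predicted = [0.9, 0.2, 0.6, 0.4]  # toy model cosine similarities
gold      = [1.0, 0.0, 0.8, 0.3]  # toy human labels

print(round(pearson(predicted, gold), 4))
print(round(spearman(predicted, gold), 4))  # 1.0: identical orderings
```

Spearman only looks at rank order, which is why it reaches 1.0 here while Pearson does not.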
## Training Details

### Training Dataset

#### Unnamed Dataset

- Size: 5,749 training samples
- Columns: `sentence_0`, `sentence_1`, and `label`
- Approximate statistics based on the first 1000 samples:

  |         | sentence_0                                        | sentence_1                                        | label                          |
  |---------|---------------------------------------------------|---------------------------------------------------|--------------------------------|
  | type    | string                                            | string                                            | float                          |
  | details | min: 6 tokens, mean: 14.76 tokens, max: 55 tokens | min: 6 tokens, mean: 14.73 tokens, max: 57 tokens | min: 0.0, mean: 0.55, max: 1.0 |

- Samples:

  | sentence_0                                                     | sentence_1                                                              | label              |
  |----------------------------------------------------------------|-------------------------------------------------------------------------|--------------------|
  | Forecasters said warnings might go up for Cuba later Thursday. | Watches or warnings could be issued for eastern Cuba later on Thursday. | 0.8                |
  | Death toll in Lebanon bombings rises to 47                     | 1 suspect arrested after Lebanon car bombings kill 45                   | 0.5599999904632569 |
  | Three dogs running on a racetrack.                             | Three dogs round a bend at a racetrack.                                 | 0.9600000381469727 |

- Loss: `CosineSimilarityLoss` with these parameters: `{ "loss_fct": "torch.nn.modules.loss.MSELoss" }`
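With an inner `MSELoss`, `CosineSimilarityLoss` minimizes the squared error between the cosine similarity of each sentence pair and its gold label. A minimal numpy sketch of that forward computation, using made-up toy embeddings and labels:

```python
import numpy as np

def cosine_similarity_mse_loss(emb1, emb2, labels):
    """MSE between each pair's cosine similarity and its gold label."""
    cos = np.sum(emb1 * emb2, axis=1) / (
        np.linalg.norm(emb1, axis=1) * np.linalg.norm(emb2, axis=1)
    )
    return float(np.mean((cos - labels) ** 2))

# Pair 1 is identical (cosine 1.0), pair 2 is orthogonal (cosine 0.0).
emb1 = np.array([[1.0, 0.0], [0.0, 1.0]])
emb2 = np.array([[1.0, 0.0], [1.0, 0.0]])
labels = np.array([1.0, 0.0])

print(cosine_similarity_mse_loss(emb1, emb2, labels))  # 0.0
```

Because the toy cosine similarities exactly match the labels, the loss is zero; gradients in real training push the embedding space so that cosine similarity tracks the labels.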
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `multi_dataset_batch_sampler`: round_robin
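These non-default values can be expressed through the trainer API; a minimal, untested configuration sketch assuming the `SentenceTransformerTrainingArguments` class from Sentence Transformers v3+ (the `output_dir` is a placeholder, not the path used for this run):

```python
from sentence_transformers import SentenceTransformerTrainingArguments

# Only the non-default hyperparameters from this run are set explicitly;
# everything else keeps its library default.
args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    multi_dataset_batch_sampler="round_robin",
)
```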
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>
### Training Logs

| Epoch | Step | Training Loss | spearman_cosine |
|---|---|---|---|
| 1.0 | 360 | - | 0.2967 |
| 1.3889 | 500 | 0.11 | 0.3338 |
| 2.0 | 720 | - | 0.3665 |
| 2.7778 | 1000 | 0.0857 | 0.4101 |
| 3.0 | 1080 | - | 0.4595 |
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.0
- Transformers: 4.56.1
- PyTorch: 2.8.0+cu126
- Accelerate: 1.10.1
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
