---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:50
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-large-zh-v1.5
widget:
- source_sentence: 定期定額投資的優缺點
  sentences:
  - 近年來大型語言模型與擴散模型在圖像與文本生成領域取得突破性進展。
  - 國際間的生產與物流體系正在發生重大的組織變革與調整。
  - 透過固定金額長期投入,投資者能有效攤平市場波動帶來的成本風險,但可能在強勁牛市中錯失更高的單筆申購報酬。
- source_sentence: 京都最適合賞楓的季節是什麼時候?
  sentences:
  - 秋季前往關西地區,十一月中旬到十二月初通常是觀賞紅葉的最佳時機。
  - 使用 asyncio 庫可以實現非阻塞的 I/O 操作,顯著提升網路爬蟲或 API 請求的並發性能。
  - 在快速變遷的職場環境中,持續獲取新知識與技能是維持個人競爭力與適應力的關鍵。
- source_sentence: 長期失眠該如何改善?
  sentences:
  - 建立規律的作息時間、減少睡前使用電子產品,並營造舒適的睡眠環境有助於緩解睡眠障礙。
  - 植物透過葉綠體吸收太陽能,將二氧化碳與水轉化為葡萄糖並釋放氧氣,這是地球能量循環的基礎。
  - 辦理信用貸款通常要求穩定的收入證明與良好的信用評分。
- source_sentence: 如何減少日常生活中的碳足跡
  sentences:
  - 在推動組織數位化過程中,往往會面臨技術債、員工抗拒改變以及缺乏清晰策略等難題。
  - 該行動裝置的電力持久度表現優異,能滿足長時間使用的需求。
  - 透過節能家電、搭乘大眾運輸及實踐蔬食生活,能有效降低個人的環境影響。
- source_sentence: 京都最值得造訪的歷史古蹟
  sentences:
  - 這座日本古都擁有眾多世界文化遺產,如清水寺、金閣寺與伏見稻荷大社,是體驗傳統文化的必經之地。
  - 患者通常會感到胸口灼熱(俗稱火燒心)、胃酸逆流,有時還會伴隨慢性咳嗽或喉嚨發炎。
  - 這種以植物性食物、橄欖油和適量深海魚為主的飲食模式,被證實能有效預防心血管疾病。
datasets:
- yenstdi/embbedding_text_1111
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---

# SentenceTransformer based on BAAI/bge-large-zh-v1.5

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) on the [embbedding_text_1111](https://huggingface.co/datasets/yenstdi/embbedding_text_1111) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - [embbedding_text_1111](https://huggingface.co/datasets/yenstdi/embbedding_text_1111)

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/huggingface/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    '京都最值得造訪的歷史古蹟',
    '這座日本古都擁有眾多世界文化遺產,如清水寺、金閣寺與伏見稻荷大社,是體驗傳統文化的必經之地。',
    '患者通常會感到胸口灼熱(俗稱火燒心)、胃酸逆流,有時還會伴隨慢性咳嗽或喉嚨發炎。',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.6704, 0.1423],
#         [0.6704, 1.0000, 0.1834],
#         [0.1423, 0.1834, 1.0000]])
```

## Training Details

### Training Dataset

#### embbedding_text_1111

* Dataset: [embbedding_text_1111](https://huggingface.co/datasets/yenstdi/embbedding_text_1111) at [610ac14](https://huggingface.co/datasets/yenstdi/embbedding_text_1111/tree/610ac1456cc501416303e62f7813f2ee87ee95e3)
* Size: 50 training samples
* Columns: `anchor` and `positive`
* Approximate statistics based on the first 50 samples:

  |      | anchor | positive |
  |:-----|:-------|:---------|
  | type | string | string   |

* Samples:

  | anchor                              | positive                                                                      |
  |:------------------------------------|:------------------------------------------------------------------------------|
  | 尋找熟悉 React 生態系統的前端開發者 | 應徵者需具備 React, Redux 及 Next.js 的實作經驗,並能運用 TypeScript 撰寫高品質程式碼。 |
  | 後端 Python 工程師職缺要求          | 精通 Django 或 FastAPI 框架,並有使用 Celery 處理非同步任務與分散式隊列的經驗。        |
  | 雲端架構師 (AWS 專長) 招募中        | 負責維運 EC2、S3 與 Lambda 等雲端資源,並能有效配置 RDS 資料庫以確保系統效能。          |

* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim",
      "gather_across_devices": false
  }
  ```

### Evaluation Dataset

#### embbedding_text_1111

* Dataset: [embbedding_text_1111](https://huggingface.co/datasets/yenstdi/embbedding_text_1111) at [610ac14](https://huggingface.co/datasets/yenstdi/embbedding_text_1111/tree/610ac1456cc501416303e62f7813f2ee87ee95e3)
* Size: 25 evaluation samples
* Columns: `anchor` and `positive`
* Approximate statistics based on the first 25 samples:

  |      | anchor | positive |
  |:-----|:-------|:---------|
  | type | string | string   |

* Samples:

  | anchor                             | positive                                                                              |
  |:-----------------------------------|:--------------------------------------------------------------------------------------|
  | 這款手機的電池續航力令人印象深刻。 | 該行動裝置的電力持久度表現優異,能滿足長時間使用的需求。                                  |
  | 什麼是機器學習中的過擬合現象?     | 當模型在訓練數據上表現極佳,但在未見過的測試數據上預測準確率大幅下降時,通常就是發生了 Overfitting。 |
  | 2024年全球永續能源趨勢報告         | 隨著各國減碳政策的推進,太陽能與離岸風電在未來幾年將成為再生能源成長的核心動力。           |

* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim",
      "gather_across_devices": false
  }
  ```
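For reference, the snippet below is a minimal sketch of how `(anchor, positive)` pairs like those above are typically fine-tuned with MultipleNegativesRankingLoss, which scores each anchor against every positive in the batch so that the other positives act as in-batch negatives. It assumes the dataset exposes a `train` split with the `anchor`/`positive` columns shown above; the actual training script is not part of this card.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

# Assumed: the dataset has a "train" split with "anchor" and "positive" columns.
train_dataset = load_dataset("yenstdi/embbedding_text_1111", split="train")

model = SentenceTransformer("BAAI/bge-large-zh-v1.5")

# With a batch size of 16, each anchor is scored against its own positive
# plus the 15 other positives in the batch, which serve as negatives.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```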
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `lr_scheduler_type`: cosine
- `warmup_steps`: 100

#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: None
- `warmup_ratio`: 0.0
- `warmup_steps`: 100
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `project`: huggingface
- `trackio_space_id`: trackio
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: no
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: True
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>
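If you want to reproduce this configuration, the non-default values above map onto `SentenceTransformerTrainingArguments` roughly as follows. This is a reconstruction from the list above, not the original script, and the output directory is a placeholder.

```python
from sentence_transformers import SentenceTransformerTrainingArguments

# Reconstructed from the non-default hyperparameters listed above.
args = SentenceTransformerTrainingArguments(
    output_dir="bge-large-zh-finetuned",  # placeholder, not from this card
    eval_strategy="epoch",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=3,  # from the full hyperparameter list
)
```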
### Training Logs

| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:---------------:|
| 0.25  | 1    | 1.3902        | -               |
| 0.5   | 2    | 1.6712        | -               |
| 0.75  | 3    | 1.2991        | -               |
| 1.0   | 4    | 1.3125        | 0.1941          |
| 1.25  | 5    | 1.6758        | -               |
| 1.5   | 6    | 1.5893        | -               |
| 1.75  | 7    | 1.2746        | -               |
| 2.0   | 8    | 0.0071        | 0.1854          |
| 2.25  | 9    | 1.236         | -               |
| 2.5   | 10   | 1.0984        | -               |
| 2.75  | 11   | 1.208         | -               |
| 3.0   | 12   | 0.3278        | 0.1744          |

### Framework Versions

- Python: 3.12.12
- Sentence Transformers: 5.2.0
- Transformers: 4.57.6
- PyTorch: 2.9.0+cu126
- Accelerate: 1.12.0
- Datasets: 4.0.0
- Tokenizers: 0.22.2

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```