# SentenceTransformer based on unsloth/Qwen3-Embedding-0.6B

This model was finetuned with Unsloth.

This is a sentence-transformers model finetuned from unsloth/Qwen3-Embedding-0.6B on the huatuo_encyclopedia_qa dataset. It maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description

- Model Type: Sentence Transformer
- Base model: unsloth/Qwen3-Embedding-0.6B
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset: huatuo_encyclopedia_qa
- Language: zh
### Model Sources

- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'PeftModelForFeatureExtraction'})
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True, 'include_prompt': True})
  (2): Normalize()
)
```
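The `Pooling` module uses last-token pooling (`pooling_mode_lasttoken: True`) and the final `Normalize()` module rescales each vector to unit length. As a rough illustration of what those two stages compute, here is a minimal NumPy sketch, assuming right-padded batches; the function name `last_token_pool` and the toy shapes are illustrative, not part of the library:

```python
import numpy as np

def last_token_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Pick the hidden state of each sequence's last non-padding token, then L2-normalize."""
    last_idx = attention_mask.sum(axis=1) - 1          # position of the last real token per sequence
    batch_idx = np.arange(token_embeddings.shape[0])
    pooled = token_embeddings[batch_idx, last_idx]     # (batch, hidden)
    norms = np.linalg.norm(pooled, axis=1, keepdims=True)
    return pooled / norms                              # Normalize(): unit-length vectors

# Toy batch: 2 sequences of 4 tokens, hidden size 8 (the real model uses 1024)
rng = np.random.default_rng(0)
emb = rng.normal(size=(2, 4, 8))
mask = np.array([[1, 1, 1, 0],      # 3 real tokens, 1 padding token
                 [1, 1, 1, 1]])     # no padding
vectors = last_token_pool(emb, mask)
print(vectors.shape)  # (2, 8)
```

Because the output vectors are unit-length, a plain dot product between two embeddings already equals their cosine similarity.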
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")

# Run inference
sentences = [
    '脂溢性皮炎脱发会长吗',
    '脂溢性皮炎是好发在皮脂溢出部位的一种皮炎,就是指面的中部、躯干的中部,会出现红斑、黄红色的斑,有些油腻的鳞屑,会有一点瘙痒,患者会感到不舒服。那么,脂溢性皮炎脱发会长吗,一起来了解一下吧。脂溢皮炎出现脱发后头发会油腻发亮,头皮屑慢慢增多,经常出现奇痒,有时头发干枯无光泽,只要用手抓一抓,头发就会脱落,特别是两侧额角还会发生慢性弥漫性脱发。如果及时治疗控制好了脱发,没损伤到正常的毛囊,头发以后会慢慢长出来,而如果是造成毛囊坏死了,那就不会再长出来了。只要头皮有茸毛就会有新的生长的,关键要先治疗好脂溢性皮炎脱发,现代医学将脂溢性脱发的病因大致分为两大类:一是头皮局部微循环障碍造成的头皮供血不足,形成头发营养供给不足而导致脱发;二是由于内分泌失调,特别是雄性激素分泌过旺造成毛囊口畸变,角化过度,阻塞营养供应通道而造成脱发.另外,遗传因素,高寒,微量元素缺乏,毒性元素,放射性元素以及长期神经精神紧张等因素,也被认为是造成脱发的病因.所以西医主要采用改善头皮供血不足,补充维生素,镇静等方法治疗,但目前尚无特效疗法。脂溢性皮炎导致的脱发是可以恢复的,建议这类患者平时可以多吃一些清淡类的食物,不要吃过于油腻、辛辣的食物,最重要的是先治愈脂溢性皮炎,宜补充植物蛋白,多食大豆、黑芝麻、玉米等食品。宜补充铁质,多食黄豆、黑豆、蛋类、禽类、带鱼、虾、熟花生、菠菜、鲤鱼、香蕉、胡萝卜、马铃薯等。忌过食糖和脂肪丰富的食物,如肝类、肉类、洋葱等酸性食物。可以有效的缓解脂溢性皮炎。',
    '为缓激肽拮抗剂,是一种抗动脉粥样硬化药.本品抗动脉粥样硬化作用、抗炎作用和抗凝血作用均与抗缓激肽作用有关.它能使动脉硬化过程的进展明显减慢,并能产生预防作用,能使主动脉和粗大血管内动脉硬化斑数量和大小均有所减少.尚能降低二磷酸腺苷引起的血小板聚集,作用可维持3小时,抗凝血作用比双香豆素等弱,但在纤维蛋白熔解过程中能加速凝块的熔解.据认为,本品对与人体激肽系统激活和激肽过度形成有关的微循环障碍所致疾病最为有效,而对心、脑血管疾病疗效较差,因为后者不发生激肽代谢方面的重要变化。',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000,  0.8805,  0.0158],
#         [ 0.8805,  1.0000, -0.0810],
#         [ 0.0158, -0.0810,  1.0000]])
```
## Training Details

### Training Dataset

#### huatuo_encyclopedia_qa

- Dataset: huatuo_encyclopedia_qa at 2989899
- Size: 362,420 training samples
- Columns: anchor and positive
- Approximate statistics based on the first 1000 samples:
  |  | anchor | positive |
  |---|---|---|
  | type | string | string |
  | details | min: 4 tokens<br>mean: 8.63 tokens<br>max: 21 tokens | min: 4 tokens<br>mean: 343.58 tokens<br>max: 512 tokens |
- Samples:
  | anchor | positive |
  |---|---|
  | 曲匹地尔片的用法用量 | 注意:同种药品可由于不同的包装规格有不同的用法或用量。本文只供参考。如果不确定,请参看药品随带的说明书或向医生询问。口服。一次50~100mg(1-2片),3次/日,或遵医嘱。 |
  | 三期梅毒多久能治愈吗 | 梅毒是一种进展十分缓慢的疾病,包括早期梅毒,晚期梅毒等等,一般来说,梅毒病程超过两年的就是晚期梅毒了,而晚期梅毒的治疗难度是比较大的,三期梅毒能完全治愈吗?该如何进行治疗?一、对于三期梅毒的治疗,中西各有妙招,希望患者在医生的指导下选择适合自己的治疗方法。被确诊为三期梅毒的患者很关心这个问题,梅毒需要讲究科学的治疗方法,三期梅毒患者如果有一个良好的心态,积极配合医生治疗,还有很有可能治愈的,具体治疗方法我们看看下文的介绍。二、三期梅毒又称晚期梅毒,是因梅毒初期没有进行治疗或者治疗不彻底所造成。好发于40-50岁之间。该期梅毒不仅局限于皮肤和粘膜,也可侵犯任何内脏器官和组织。三期梅毒可发生在感染后2年以上,一般多发于感染后3~4年。病程漫长,可持续10~30年。未经治愈的二期梅毒中约有1/3的病人可发展为晚期活动性梅毒;另有一部分患者不出现晚期梅毒症状,只是梅毒血清反应持续阳性,为晚期潜伏梅毒;也有一部分患者可以自愈。三、晚期梅毒可以通过药物治疗控制病情,但是治愈的可能性是比较小的,治疗还是有必要的,通过有效的治疗可以缓解患者的症状,杀灭一部分的梅毒螺旋体。虽然治愈的可能性不高,但是患者还是应当坚持治疗。通过上文的简单介绍,相信大家对于三期梅毒多久能治愈这一方面的问题已经有了自己的答案。对于梅毒患者来说,无论患病多少年,都应在确诊后及时进行正规的治疗处理。正规的治疗可以缓解患者的病情,患者还是应当坚持进行,并积极配合医生的治疗,说不定治愈的可能就发生了。 |
  | 肝癌术后饮食 | 肝脏恶性肿瘤可分为原发性和继发性。原发性肝脏恶性肿瘤源于肝脏的上皮或间质组织。前者被称为原发性肝癌,是我国发病率高、危害大的恶性肿瘤。后者被称为肉瘤,与原发性肝癌相比相对较少。肝癌术后饮食注意什么呢?肝脏手术给人体带来巨大的创伤。剩余肝细胞的再生、增值和伤口愈合需要消耗大量的能量和蛋白质。建议肝癌术后患者适当补充高质量的蛋白质,以满足身体的需要。患者身体状况的改善也有利于进一步的后续抗肿瘤治疗。多吃营养。吃花椰菜、芝麻、洋葱和其他可以提高免疫功能的食物。。肝癌术后饮食要以高蛋白、高维生素饮食为主,比如牛奶、鸡蛋、猪肝、鸡肝、羊肝、香蕉、石榴、西瓜等等。肝癌患者术后一定要注意饮食的卫生,在清洁食物的时候,一定尽量多洗几次,在食物的选择上,也要适当的限制。肝脏手术后最常见的不适症状是食欲不振、腹胀和饱腹感。尽管随着时间的推移,这种情况会继续改善,但在开始时注意少吃多吃是很重要的。与此同时,我们应该停止吸烟、饮酒、咖啡、浓茶、碳酸饮料、酸辣食品和其他刺激性食物。吃饭时,我们应该慢慢吃,避免吃油腻的食物。我们应该限制光、可消化和脂肪的摄入。尤其是,我们不应该吃太多的动物脂肪,同时避免冷食。,餐后不宜过量的运动,肝癌术后的患者可以吃一些小米、玉米、土豆、赤豆、薏仁,可以做成粥或者是汤食用。肝癌患者禁忌油炸、坚硬的食物,不要抽烟,不要饮酒。在饮食的数量上面也要加以限制。意补硒,硒可以改善人体的防癌的功能,及含硒的药片、口服液。 |
- Loss: MultipleNegativesRankingLoss with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim",
      "gather_across_devices": false
  }
  ```
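MultipleNegativesRankingLoss treats every other positive in the batch as an in-batch negative: for each anchor it scales the cosine similarities by `scale` (20.0 here) and applies cross-entropy with the anchor's own positive as the target class. A minimal NumPy sketch of that computation (the helper name `mnr_loss` is illustrative, not the library implementation):

```python
import numpy as np

def mnr_loss(anchors: np.ndarray, positives: np.ndarray, scale: float = 20.0) -> float:
    """Cross-entropy over scaled cosine similarities, with in-batch negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = scale * (a @ p.T)                    # (batch, batch): anchor i vs every positive
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))    # diagonal = the true (anchor, positive) pairs

# If each anchor matches its own positive exactly, the loss is near zero;
# shuffling the positives makes it large.
vecs = np.eye(4)
print(round(mnr_loss(vecs, vecs), 4))                        # → 0.0
print(round(mnr_loss(vecs, np.roll(vecs, 1, axis=0)), 4))    # → 20.0
```

This is also why the `no_duplicates` batch sampler matters: if the same text appeared twice in a batch, its duplicate would be scored as a false negative.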
### Evaluation Examples

The following tables demonstrate the model's ability to distinguish treatment intent (correct answers) from usage and side-effect distractors (hard negatives), even when the distractors share high keyword overlap with the query (e.g., the entity name "阿莫西林").

#### Test Case 1: Intent Recognition (Medical Query)

**Query:** 阿莫西林胶囊主要用于治疗哪些疾病?
| Similarity Score | Candidate Text | Result Type |
|---|---|---|
| 0.8155 | 阿莫西林适用于敏感菌(不产β内酰胺酶菌株)所致的呼吸道感染、泌尿生殖道感染、皮肤软组织感染等。 | ✅ Correct Answer |
| 0.4845 | 用于缓解轻至中度疼痛如头痛、关节痛、偏头痛、牙痛、肌肉痛、神经痛、痛经。 | ❌ Hard Negative (Wrong Disease) |
| 0.3753 | 用于治疗高血压、心绞痛。口服,起始剂量10mg,每日一次。 | ❌ Hard Negative (Wrong Disease) |
| 0.2545 | 口服。成人一次0.5g,每6~8小时1次,一日剂量不超过4g。 | ❌ Distractor (Usage Info) |
| 0.1304 | 恶心、呕吐、腹泻及假膜性肠炎等胃肠道反应。皮疹、药物热和哮喘等过敏反应。 | ❌ Distractor (Side Effect) |
#### Test Case 2: Entity Confusion Test (Hard Negative)

**Query:** 阿莫西林治什么?
| Similarity Score | Candidate Text | Result Type |
|---|---|---|
| 0.7735 | 阿莫西林适用于呼吸道感染。 | ✅ Correct Answer (Treatment) |
| 0.7088 | 阿莫西林口服,一次0.5g。 | ❌ Hard Negative (Usage + Entity) |
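The tables above come down to one operation: encode the query and the candidates, then sort by cosine similarity. A minimal sketch of that ranking step with stand-in vectors (in practice the vectors would come from `model.encode`; `rank_candidates` is an illustrative helper, not a library function):

```python
import numpy as np

def rank_candidates(query_vec: np.ndarray, cand_vecs: np.ndarray, labels: list) -> list:
    """Sort candidates by cosine similarity to the query, highest first."""
    q = query_vec / np.linalg.norm(query_vec)
    c = cand_vecs / np.linalg.norm(cand_vecs, axis=1, keepdims=True)
    scores = c @ q                     # cosine similarity of each candidate to the query
    order = np.argsort(-scores)        # descending
    return [(labels[i], float(scores[i])) for i in order]

# Stand-in vectors: the "correct answer" points nearly the same way as the query
query = np.array([1.0, 0.2, 0.0])
cands = np.array([
    [0.9, 0.3, 0.1],   # correct answer
    [0.1, 1.0, 0.0],   # hard negative
    [0.0, 0.1, 1.0],   # distractor
])
for label, score in rank_candidates(query, cands, ["correct", "hard negative", "distractor"]):
    print(f"{score:.4f}  {label}")
```

Because the model's outputs are already unit-normalized, the same ranking can be computed with `model.similarity` directly on the embeddings.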
### Training Hyperparameters

#### Non-Default Hyperparameters

- per_device_train_batch_size: 64
- gradient_accumulation_steps: 2
- learning_rate: 3e-05
- max_steps: 120
- lr_scheduler_type: constant_with_warmup
- warmup_ratio: 0.03
- fp16: True
- batch_sampler: no_duplicates
#### All Hyperparameters

<details><summary>Click to expand</summary>

- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 64
- per_device_eval_batch_size: 8
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 2
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 3e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 3.0
- max_steps: 120
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.03
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- parallelism_config: None
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- hub_revision: None
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- liger_kernel_config: None
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
- router_mapping: {}
- learning_rate_mapping: {}

</details>
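The non-default values listed above map directly onto `SentenceTransformerTrainingArguments`. A hedged configuration sketch of how such a run could be set up (dataset loading, the trainer itself, and `train()` are omitted; the output directory is a placeholder):

```python
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("unsloth/Qwen3-Embedding-0.6B")
loss = MultipleNegativesRankingLoss(model, scale=20.0)  # cos_sim is the default similarity

args = SentenceTransformerTrainingArguments(
    output_dir="output",                        # placeholder path
    per_device_train_batch_size=64,
    gradient_accumulation_steps=2,
    learning_rate=3e-5,
    max_steps=120,
    lr_scheduler_type="constant_with_warmup",
    warmup_ratio=0.03,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # no duplicate texts within a batch
)
```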
### Training Logs

| Epoch | Step | Training Loss |
|---|---|---|
| 0.0004 | 1 | 0.2458 |
| 0.0007 | 2 | 0.3754 |
| 0.0011 | 3 | 0.2256 |
| 0.0014 | 4 | 0.2025 |
| 0.0018 | 5 | 0.1641 |
| 0.0021 | 6 | 0.3087 |
| 0.0025 | 7 | 0.189 |
| 0.0028 | 8 | 0.1877 |
| 0.0032 | 9 | 0.1253 |
| 0.0035 | 10 | 0.24 |
| 0.0039 | 11 | 0.2242 |
| 0.0042 | 12 | 0.1313 |
| 0.0046 | 13 | 0.2015 |
| 0.0049 | 14 | 0.1898 |
| 0.0053 | 15 | 0.1771 |
| 0.0057 | 16 | 0.1523 |
| 0.0060 | 17 | 0.0779 |
| 0.0064 | 18 | 0.1579 |
| 0.0067 | 19 | 0.0988 |
| 0.0071 | 20 | 0.3388 |
| 0.0074 | 21 | 0.244 |
| 0.0078 | 22 | 0.1848 |
| 0.0081 | 23 | 0.2059 |
| 0.0085 | 24 | 0.2416 |
| 0.0088 | 25 | 0.1501 |
| 0.0092 | 26 | 0.192 |
| 0.0095 | 27 | 0.2614 |
| 0.0099 | 28 | 0.1959 |
| 0.0102 | 29 | 0.2134 |
| 0.0106 | 30 | 0.1626 |
| 0.0109 | 31 | 0.148 |
| 0.0113 | 32 | 0.2126 |
| 0.0117 | 33 | 0.0645 |
| 0.0120 | 34 | 0.1181 |
| 0.0124 | 35 | 0.1188 |
| 0.0127 | 36 | 0.1432 |
| 0.0131 | 37 | 0.1408 |
| 0.0134 | 38 | 0.1192 |
| 0.0138 | 39 | 0.1981 |
| 0.0141 | 40 | 0.108 |
| 0.0145 | 41 | 0.1279 |
| 0.0148 | 42 | 0.0626 |
| 0.0152 | 43 | 0.1279 |
| 0.0155 | 44 | 0.081 |
| ... | ... | ... |

### Framework Versions
- Python: 3.12.12
- Sentence Transformers: 5.2.0
- Transformers: 4.56.2
- PyTorch: 2.9.0+cu126
- Accelerate: 1.12.0
- Datasets: 4.3.0
- Tokenizers: 0.22.2
## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
## GGUF Quantization

A GGUF quantized version of this model is available as Qwen3-Embedding-0.6B.F16.gguf.