SentenceTransformer based on intfloat/multilingual-e5-small

This is an ArkTS code-retrieval model intended for edge devices.

This is a sentence-transformers model finetuned from intfloat/multilingual-e5-small. It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for retrieval, matching docstrings (queries) against code passages of the form "passage:\npath: ...\nidentifier: ...\ncode: ...".
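For illustration, a docstring and a code snippet can be converted into the query and passage strings this model expects (a minimal sketch; the helper names are hypothetical, only the prefix layout comes from this card):

def to_query(docstring: str) -> str:
    # Docstring side: embedded with the "query: " prefix.
    return f"query: {docstring}"

def to_passage(path: str, identifier: str, code: str) -> str:
    # Code side: embedded with the "passage:" prefix and the path / identifier / code fields.
    return f"passage:\npath: {path}\nidentifier: {identifier}\ncode: {code}"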

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: intfloat/multilingual-e5-small
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Supported Modality: Text


Full Model Architecture

SentenceTransformer(
  (0): Transformer({'transformer_task': 'feature-extraction', 'modality_config': {'text': {'method': 'forward', 'method_output_name': 'last_hidden_state'}}, 'module_output_name': 'token_embeddings', 'architecture': 'BertModel'})
  (1): Pooling({'embedding_dimension': 384, 'pooling_mode': 'mean', 'include_prompt': True})
  (2): Normalize({})
)
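The three modules above correspond to a BERT encoder, attention-mask-aware mean pooling, and L2 normalization. For illustration, the same embedding can be reproduced with raw transformers (a minimal sketch; loading the model via sentence-transformers, as shown under Usage, is the simpler path):

import torch
from transformers import AutoModel, AutoTokenizer

model_id = "hreyulog/multilingual-e5-small-arkts"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

# Tokenize with the 256-token limit listed above.
batch = tokenizer(["query: example docstring"], padding=True, truncation=True,
                  max_length=256, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 384)

# Mean pooling over non-padding tokens (Pooling module), then L2 normalization (Normalize module).
mask = batch["attention_mask"].unsqueeze(-1).float()
mean_pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
sentence_embeddings = torch.nn.functional.normalize(mean_pooled, p=2, dim=1)
print(sentence_embeddings.shape)  # torch.Size([1, 384])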

Evaluation Results

On the test split of the arkts-code-docstring dataset:

Model                       | Params | MRR    | NDCG@5 | Recall@1 | Recall@5
multilingual-e5-small-arkts | 117.7M | 0.6849 | 0.7078 | 0.6030   | 0.7952
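For reference, these metrics can be computed from the 1-based rank of the gold passage for each query (a minimal sketch with made-up ranks, not the evaluation script behind the table above):

def mrr(ranks: list[int]) -> float:
    # Mean reciprocal rank of the gold passage across all queries.
    return sum(1.0 / r for r in ranks) / len(ranks)

def recall_at_k(ranks: list[int], k: int) -> float:
    # Fraction of queries whose gold passage is ranked within the top k.
    return sum(r <= k for r in ranks) / len(ranks)

ranks = [1, 3, 2, 7, 1]  # hypothetical ranks, one per query
print(mrr(ranks), recall_at_k(ranks, 1), recall_at_k(ranks, 5))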

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("hreyulog/multilingual-e5-small-arkts")
# Run inference
sentences = [
    'query: Persistent geographical location, re-enter to determine if the switch is turned on',
    'passage:\npath: code/BasicFeature/Media/Camera/entry/src/main/ets/Dialog/SettingDialog.ets\nidentifier: getLocationBol\ncode: getLocationBol(bol: boolean) {\n    this.settingDataObj.locationBol = bol;\n  }',
    'passage:\npath: custom_dialog/src/main/ets/model/modifier/TextAreaInputFilterModifier.ets\nidentifier: \ncode: export class TextAreaInputFilterModifier implements AttributeModifier<TextAreaAttribute> {\n  inputFilter?: InputFilter;\n\n  applyNormalAttribute(instance: TextAreaAttribute): void {\n    if (this.inputFilter) {\n      instance.inputFilter(this.inputFilter.value, this.inputFilter.error);\n    }\n  }\n}',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.4037, 0.0834],
#         [0.4037, 1.0000, 0.0600],
#         [0.0834, 0.0600, 1.0000]])
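For docstring-to-code retrieval over a corpus, embed the query and the formatted code passages separately and rank passages by cosine similarity, for example with util.semantic_search (a minimal sketch reusing the model and sentences defined above; here the corpus is just the two example passages):

from sentence_transformers import util

query = sentences[0]       # "query: ..." docstring
passages = sentences[1:]   # in practice, a larger corpus of formatted code passages

query_emb = model.encode([query], convert_to_tensor=True)
passage_embs = model.encode(passages, convert_to_tensor=True)

# Rank passages by cosine similarity and keep the top 5 hits for the query.
hits = util.semantic_search(query_emb, passage_embs, top_k=5)[0]
for hit in hits:
    print(hit["corpus_id"], round(hit["score"], 4))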

Training Details

Training Dataset

Unnamed Dataset

  • Size: 19,561 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: string; min: 7 tokens, mean: 31.93 tokens, max: 256 tokens
    • sentence_1: string; min: 34 tokens, mean: 157.48 tokens, max: 256 tokens
  • Samples:
    Sample 1
      sentence_0:
        query: 通过picker拉起图库并选择图片,并调用图片识码 (launch the gallery via picker, select an image, and run barcode recognition on it)
        @param options
        @returns
      sentence_1:
        passage:
        path: libs/core/src/main/ets/utils/ScanUtils.ets
        identifier: onPickerScanForResult
        code: static async onPickerScanForResult(options?: scanBarcode.ScanOptions): Promise<Array<scanBarcode.ScanResult>> {
          try {
            let photoOption = new picker.PhotoSelectOptions()
            photoOption.MIMEType = picker.PhotoViewMIMETypes.IMAGE_TYPE
            photoOption.maxSelectNumber = 1
            let photoPicker = new picker.PhotoViewPicker()
            let uris = await photoPicker.select(photoOption)
            return await ScanUtils.onDetectBarCode(uris[0], options)
          } catch (err) {
            let error = err as BusinessError;
            LogUtils.debug('ScanUtils-onPickerScanForResult err', `code: ${error.code} -·- message: ${error.message}`)
            return [];
          }
        }

    Sample 2
      sentence_0:
        query: 启动对话流程 (start a conversation flow)
      sentence_1:
        passage:
        path: entry/src/main/ets/services/ai/ChatbotEngine.ets
        identifier: startConversationFlow
        code: async startConversationFlow(
          flowId: string,
          sessionId: string,
          initialContext: Record<string, any> = {}
        ): Promise<ConversationFlow | null> {
          try {
            const flowTemplate = this.getFlowTemplate(flowId);
            if (!flowTemplate) {
              throw new Error(`Flow template not found: ${flowId}`);
            }

            const flow: ConversationFlow = {
              id: `${flowId}_${Date.now()}`,
              name: flowTemplate.name,
              description: flowTemplate.description,
              steps: [...flowTemplate.steps],
              currentStep: 0,
              isActive: true,
              context: { ...initialContext }
            };

            this.activeFlows.set(sessionId, flow);

            hilog.info(LogConstants.DOMAIN_APP, LogConstants.TAG_APP,
              `Started conversation flow: ${flowId} for session: ${sessionId}`);

            return flow;
          } catch (error) {
            hilog.error(LogConstants.DOMAI...

    Sample 3
      sentence_0:
        query: 文本颜色接口 (text color interface)
      sentence_1:
        passage:
        path: entry/src/main/ets/common/types/CommonTypes.ets
        identifier:
        code: export interface TextColorConfig {
          primary: string;
          secondary: string;
          disabled: string;
        }
  • Loss parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false,
        "directions": [
            "query_to_doc"
        ],
        "partition_mode": "joint",
        "hardness_mode": null,
        "hardness_strength": 0.0
    }
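The exact loss class is not named on this card. The scale and cos_sim values above match the defaults of the in-batch negatives loss in sentence-transformers, so a configuration along the following lines is one plausible reading (purely illustrative, not the confirmed training loss):

from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("intfloat/multilingual-e5-small")
# Assumption: an in-batch negatives (query -> document) contrastive loss
# with the scale and similarity function listed above.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)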

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 32
  • num_train_epochs: 1
  • per_device_eval_batch_size: 32
  • multi_dataset_batch_sampler: round_robin

All Hyperparameters

  • per_device_train_batch_size: 32
  • num_train_epochs: 1
  • max_steps: -1
  • learning_rate: 5e-05
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: None
  • warmup_steps: 0
  • optim: adamw_torch_fused
  • optim_args: None
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • optim_target_modules: None
  • gradient_accumulation_steps: 1
  • average_tokens_across_devices: True
  • max_grad_norm: 1
  • label_smoothing_factor: 0.0
  • bf16: False
  • fp16: False
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • use_liger_kernel: False
  • liger_kernel_config: None
  • use_cache: False
  • neftune_noise_alpha: None
  • torch_empty_cache_steps: None
  • auto_find_batch_size: False
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • include_num_input_tokens_seen: no
  • log_level: passive
  • log_level_replica: warning
  • disable_tqdm: False
  • project: huggingface
  • trackio_space_id: trackio
  • per_device_eval_batch_size: 32
  • prediction_loss_only: True
  • eval_on_start: False
  • eval_do_concat_batches: True
  • eval_use_gather_object: False
  • eval_accumulation_steps: None
  • include_for_metrics: []
  • batch_eval_metrics: False
  • save_only_model: False
  • save_on_each_node: False
  • enable_jit_checkpoint: False
  • push_to_hub: False
  • hub_private_repo: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_always_push: False
  • hub_revision: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • restore_callback_states_from_checkpoint: False
  • full_determinism: False
  • seed: 42
  • data_seed: None
  • use_cpu: False
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • dataloader_prefetch_factor: None
  • remove_unused_columns: True
  • label_names: None
  • train_sampling_strategy: random
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • ddp_backend: None
  • ddp_timeout: 1800
  • fsdp: []
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • deepspeed: None
  • debug: []
  • skip_memory_metrics: True
  • do_predict: False
  • resume_from_checkpoint: None
  • warmup_ratio: None
  • local_rank: -1
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin
  • router_mapping: {}
  • learning_rate_mapping: {}
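Putting the non-default hyperparameters together, a fine-tuning run along these lines would reproduce the setup above (a hedged sketch: the single-row dataset, column names, and loss choice are stand-ins, not the actual training script):

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("intfloat/multilingual-e5-small")

# Stand-in for the 19,561 (sentence_0, sentence_1) training pairs described above.
train_dataset = Dataset.from_dict({
    "sentence_0": ["query: Text color interface"],
    "sentence_1": ["passage:\npath: entry/src/main/ets/common/types/CommonTypes.ets\n"
                   "identifier: \ncode: export interface TextColorConfig { ... }"],
})

loss = losses.MultipleNegativesRankingLoss(model)  # assumption, see the loss note above

args = SentenceTransformerTrainingArguments(
    output_dir="multilingual-e5-small-arkts",
    num_train_epochs=1,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=5e-5,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()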

Training Logs

Epoch  | Step | Training Loss
0.8170 | 500  | 0.6130

Training Time

  • Training: 2.6 hours

Framework Versions

  • Python: 3.14.3
  • Sentence Transformers: 5.4.1
  • Transformers: 5.5.4
  • PyTorch: 2.11.0+cpu
  • Accelerate: 1.13.0
  • Datasets: 4.8.4
  • Tokenizers: 0.22.2

Citation

BibTeX

ArkTS-CodeSearch

@misc{he2026arktscodesearchopensourcearktsdataset,
      title={ArkTS-CodeSearch: A Open-Source ArkTS Dataset for Code Retrieval}, 
      author={Yulong He and Artem Ermakov and Sergey Kovalchuk and Artem Aliev and Dmitry Shalymov},
      year={2026},
      eprint={2602.05550},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2602.05550}, 
}