---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - dense
  - generated_from_trainer
  - dataset_size:9984
  - loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
  - source_sentence: python to dict if only one item
    sentences:
      - |-
        def get_from_gnucash26_date(date_str: str) -> date:
            """ Creates a datetime from GnuCash 2.6 date string """
            date_format = "%Y%m%d"
            result = datetime.strptime(date_str, date_format).date()
            return result
      - |-
        def multidict_to_dict(d):
            """
            Turns a werkzeug.MultiDict or django.MultiValueDict into a dict with
            list values
            :param d: a MultiDict or MultiValueDict instance
            :return: a dict instance
            """
            return dict((k, v[0] if len(v) == 1 else v) for k, v in iterlists(d))
      - |-
        def wipe_table(self, table: str) -> int:
                """Delete all records from a table. Use caution!"""
                sql = "DELETE FROM " + self.delimit(table)
                return self.db_exec(sql)
  - source_sentence: how to add a string to a filename in python
    sentences:
      - |-
        def html_to_text(content):
            """ Converts html content to plain text """
            text = None
            h2t = html2text.HTML2Text()
            h2t.ignore_links = False
            text = h2t.handle(content)
            return text
      - |-
        def _get_column_by_db_name(cls, name):
                """
                Returns the column, mapped by db_field name
                """
                return cls._columns.get(cls._db_map.get(name, name))
      - |-
        def add_suffix(fullname, suffix):
            """ Add suffix to a full file name"""
            name, ext = os.path.splitext(fullname)
            return name + '_' + suffix + ext
  - source_sentence: human readable string of object python
    sentences:
      - |-
        def pretty(obj, verbose=False, max_width=79, newline='\n'):
            """
            Pretty print the object's representation.
            """
            stream = StringIO()
            printer = RepresentationPrinter(stream, verbose, max_width, newline)
            printer.pretty(obj)
            printer.flush()
            return stream.getvalue()
      - |-
        def asMaskedArray(self):
                """ Creates converts to a masked array
                """
                return ma.masked_array(data=self.data, mask=self.mask, fill_value=self.fill_value)
      - |-
        def list_depth(list_, func=max, _depth=0):
            """
            Returns the deepest level of nesting within a list of lists

            Args:
               list_  : a nested listlike object
               func   : depth aggregation strategy (defaults to max)
               _depth : internal var

            Example:
                >>> # ENABLE_DOCTEST
                >>> from utool.util_list import *  # NOQA
                >>> list_ = [[[[[1]]], [3]], [[1], [3]], [[1], [3]]]
                >>> result = (list_depth(list_, _depth=0))
                >>> print(result)

            """
            depth_list = [list_depth(item, func=func, _depth=_depth + 1)
                          for item in  list_ if util_type.is_listlike(item)]
            if len(depth_list) > 0:
                return func(depth_list)
            else:
                return _depth
  - source_sentence: python parse query param
    sentences:
      - |-
        def read_las(source, closefd=True):
            """ Entry point for reading las data in pylas

            Reads the whole file into memory.

            >>> las = read_las("pylastests/simple.las")
            >>> las.classification
            array([1, 1, 1, ..., 1, 1, 1], dtype=uint8)

            Parameters
            ----------
            source : str or io.BytesIO
                The source to read data from

            closefd: bool
                    if True and the source is a stream, the function will close it
                    after it is done reading


            Returns
            -------
            pylas.lasdatas.base.LasBase
                The object you can interact with to get access to the LAS points & VLRs
            """
            with open_las(source, closefd=closefd) as reader:
                return reader.read()
      - |-
        def parse_query_string(query):
            """
            parse_query_string:
            very simplistic. won't do the right thing with list values
            """
            result = {}
            qparts = query.split('&')
            for item in qparts:
                key, value = item.split('=')
                key = key.strip()
                value = value.strip()
                result[key] = unquote_plus(value)
            return result
      - |-
        def _clean_dict(target_dict, whitelist=None):
            """ Convenience function that removes a dicts keys that have falsy values
            """
            assert isinstance(target_dict, dict)
            return {
                ustr(k).strip(): ustr(v).strip()
                for k, v in target_dict.items()
                if v not in (None, Ellipsis, [], (), "")
                and (not whitelist or k in whitelist)
            }
  - source_sentence: python automatic figure out encoding
    sentences:
      - |-
        def get_best_encoding(stream):
            """Returns the default stream encoding if not found."""
            rv = getattr(stream, 'encoding', None) or sys.getdefaultencoding()
            if is_ascii_encoding(rv):
                return 'utf-8'
            return rv
      - |-
        def is_natural(x):
            """A non-negative integer."""
            try:
                is_integer = int(x) == x
            except (TypeError, ValueError):
                return False
            return is_integer and x >= 0
      - |-
        def _tool_to_dict(tool):
            """Parse a tool definition into a cwl2wdl style dictionary.
            """
            out = {"name": _id_to_name(tool.tool["id"]),
                   "baseCommand": " ".join(tool.tool["baseCommand"]),
                   "arguments": [],
                   "inputs": [_input_to_dict(i) for i in tool.tool["inputs"]],
                   "outputs": [_output_to_dict(o) for o in tool.tool["outputs"]],
                   "requirements": _requirements_to_dict(tool.requirements + tool.hints),
                   "stdin": None, "stdout": None}
            return out
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---

SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
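
The three modules map onto the encode pipeline: (0) the MiniLM encoder produces per-token embeddings, (1) the Pooling module mean-pools them over non-padding tokens, and (2) Normalize scales each vector to unit length, so dot product and cosine similarity coincide. As a rough sketch (assuming the standard tokenizer and weights bundled in this repo, and not the recommended API), the same computation can be reproduced with transformers directly:

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Narekatsy/fine-tuned-cosqa")
encoder = AutoModel.from_pretrained("Narekatsy/fine-tuned-cosqa")

batch = tokenizer(
    ["python parse query param"],
    padding=True, truncation=True, max_length=256, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 384)

# (1) Pooling: mean over non-padding tokens only
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# (2) Normalize: unit length, so dot product == cosine similarity
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 384])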

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Narekatsy/fine-tuned-cosqa")
# Run inference
sentences = [
    'python automatic figure out encoding',
    'def get_best_encoding(stream):\n    """Returns the default stream encoding if not found."""\n    rv = getattr(stream, \'encoding\', None) or sys.getdefaultencoding()\n    if is_ascii_encoding(rv):\n        return \'utf-8\'\n    return rv',
    'def _tool_to_dict(tool):\n    """Parse a tool definition into a cwl2wdl style dictionary.\n    """\n    out = {"name": _id_to_name(tool.tool["id"]),\n           "baseCommand": " ".join(tool.tool["baseCommand"]),\n           "arguments": [],\n           "inputs": [_input_to_dict(i) for i in tool.tool["inputs"]],\n           "outputs": [_output_to_dict(o) for o in tool.tool["outputs"]],\n           "requirements": _requirements_to_dict(tool.requirements + tool.hints),\n           "stdin": None, "stdout": None}\n    return out',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000,  0.6173,  0.1376],
#         [ 0.6173,  1.0000, -0.0456],
#         [ 0.1376, -0.0456,  1.0000]])
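
Since the training pairs are natural-language queries matched to Python snippets (CoSQA-style), a common downstream pattern is code search: embed a snippet corpus once, then rank it against each query. A minimal sketch, using placeholder snippets borrowed from the widget examples above:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Narekatsy/fine-tuned-cosqa")

# Tiny illustrative corpus; a real index would hold many snippets
corpus = [
    "def add_suffix(fullname, suffix):\n    name, ext = os.path.splitext(fullname)\n    return name + '_' + suffix + ext",
    "def parse_query_string(query):\n    result = {}\n    for item in query.split('&'):\n        key, value = item.split('=')\n        result[key.strip()] = value.strip()\n    return result",
]
corpus_embeddings = model.encode(corpus)

# Embed the query and rank the corpus by cosine similarity
query_embeddings = model.encode(["python parse query param"])
scores = model.similarity(query_embeddings, corpus_embeddings)  # (1, len(corpus))
best = int(scores[0].argmax())
print(f"best match (score {scores[0, best]:.3f}):")
print(corpus[best])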

Training Details

Training Dataset

Unnamed Dataset

  • Size: 9,984 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:
    sentence_0 (string): min 6 tokens, mean 9.69 tokens, max 24 tokens
    sentence_1 (string): min 39 tokens, mean 87.33 tokens, max 256 tokens
  • Samples:
    sentence_0: how to zip files to directory in python
    sentence_1:
        def unzip_file_to_dir(path_to_zip, output_directory):
            """
            Extract a ZIP archive to a directory
            """
            z = ZipFile(path_to_zip, 'r')
            z.extractall(output_directory)
            z.close()

    sentence_0: mnist multi gpu training python tensorflow
    sentence_1:
        def transformer_tall_pretrain_lm_tpu_adafactor():
            """Hparams for transformer on LM pretraining (with 64k vocab) on TPU."""
            hparams = transformer_tall_pretrain_lm()
            update_hparams_for_tpu(hparams)
            hparams.max_length = 1024
            # For multi-problem on TPU we need it in absolute examples.
            hparams.batch_size = 8
            hparams.multiproblem_vocab_size = 2**16
            return hparams

    sentence_0: get file name without extension in python
    sentence_1:
        def remove_ext(fname):
            """Removes the extension from a filename
            """
            bn = os.path.basename(fname)
            return os.path.splitext(bn)[0]
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
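
These are the library defaults: with cosine similarity scaled by 20, every other code snippet in a batch acts as an in-batch negative for each query. A minimal construction sketch, using the Sentence Transformers API:

from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
# scale=20.0 and cosine similarity match the parameters listed above
loss = losses.MultipleNegativesRankingLoss(
    model, scale=20.0, similarity_fct=util.cos_sim
)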
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • num_train_epochs: 2
  • multi_dataset_batch_sampler: round_robin
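
Taken together, the run can be reproduced roughly as follows. This is a sketch: the train_dataset below is a placeholder, since the 9,984-pair dataset itself is not published with this card.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Placeholder pairs; the real run used 9,984 (query, code) samples
train_dataset = Dataset.from_dict({
    "sentence_0": ["python parse query param"],
    "sentence_1": ["def parse_query_string(query): ..."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="fine-tuned-cosqa",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=2,
    multi_dataset_batch_sampler="round_robin",
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=MultipleNegativesRankingLoss(model),
)
trainer.train()
model.save_pretrained("fine-tuned-cosqa/final")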

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 2
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch    Step   Training Loss
1.6026   500    0.1512

Framework Versions

  • Python: 3.11.3
  • Sentence Transformers: 5.1.2
  • Transformers: 4.57.1
  • PyTorch: 2.9.0+cpu
  • Accelerate: 1.11.0
  • Datasets: 4.4.1
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}