---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:9984
- loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: python to dict if only one item
  sentences:
  - "def get_from_gnucash26_date(date_str: str) -> date:\n    \"\"\" Creates a datetime from GnuCash 2.6 date string \"\"\"\n    date_format = \"%Y%m%d\"\n    result = datetime.strptime(date_str, date_format).date()\n    return result"
  - "def multidict_to_dict(d):\n    \"\"\"\n    Turns a werkzeug.MultiDict or django.MultiValueDict into a dict with\n    list values\n    :param d: a MultiDict or MultiValueDict instance\n    :return: a dict instance\n    \"\"\"\n    return dict((k, v[0] if len(v) == 1 else v) for k, v in iterlists(d))"
  - "def wipe_table(self, table: str) -> int:\n    \"\"\"Delete all records from a table. Use caution!\"\"\"\n    sql = \"DELETE FROM \" + self.delimit(table)\n    return self.db_exec(sql)"
- source_sentence: how to add a string to a filename in python
  sentences:
  - "def html_to_text(content):\n    \"\"\" Converts html content to plain text \"\"\"\n    text = None\n    h2t = html2text.HTML2Text()\n    h2t.ignore_links = False\n    text = h2t.handle(content)\n    return text"
  - "def _get_column_by_db_name(cls, name):\n    \"\"\"\n    Returns the column, mapped by db_field name\n    \"\"\"\n    return cls._columns.get(cls._db_map.get(name, name))"
  - "def add_suffix(fullname, suffix):\n    \"\"\" Add suffix to a full file name\"\"\"\n    name, ext = os.path.splitext(fullname)\n    return name + '_' + suffix + ext"
- source_sentence: human readable string of object python
  sentences:
  - "def pretty(obj, verbose=False, max_width=79, newline='\\n'):\n    \"\"\"\n    Pretty print the object's representation.\n    \"\"\"\n    stream = StringIO()\n    printer = RepresentationPrinter(stream, verbose, max_width, newline)\n    printer.pretty(obj)\n    printer.flush()\n    return stream.getvalue()"
  - "def asMaskedArray(self):\n    \"\"\" Creates converts to a masked array\n    \"\"\"\n    return ma.masked_array(data=self.data, mask=self.mask, fill_value=self.fill_value)"
  - "def list_depth(list_, func=max, _depth=0):\n    \"\"\"\n    Returns the deepest level of nesting within a list of lists\n\n    Args:\n        list_ : a nested listlike object\n        func : depth aggregation strategy (defaults to max)\n        _depth : internal var\n\n    Example:\n        >>> # ENABLE_DOCTEST\n        >>> from utool.util_list import *  # NOQA\n        >>> list_ = [[[[[1]]], [3]], [[1], [3]], [[1], [3]]]\n        >>> result = (list_depth(list_, _depth=0))\n        >>> print(result)\n\n    \"\"\"\n    depth_list = [list_depth(item, func=func, _depth=_depth + 1)\n                  for item in list_ if util_type.is_listlike(item)]\n    if len(depth_list) > 0:\n        return func(depth_list)\n    else:\n        return _depth"
- source_sentence: python parse query param
  sentences:
  - "def read_las(source, closefd=True):\n    \"\"\" Entry point for reading las data in pylas\n\n    Reads the whole file into memory.\n\n    >>> las = read_las(\"pylastests/simple.las\")\n    >>> las.classification\n    array([1, 1, 1, ..., 1, 1, 1], dtype=uint8)\n\n    Parameters\n    ----------\n    source : str or io.BytesIO\n        The source to read data from\n\n    closefd: bool\n        if True and the source is a stream, the function will close it\n        after it is done reading\n\n\n    Returns\n    -------\n    pylas.lasdatas.base.LasBase\n        The object you can interact with to get access to the LAS points & VLRs\n    \"\"\"\n    with open_las(source, closefd=closefd) as reader:\n        return reader.read()"
  - "def parse_query_string(query):\n    \"\"\"\n    parse_query_string:\n    very simplistic. won't do the right thing with list values\n    \"\"\"\n    result = {}\n    qparts = query.split('&')\n    for item in qparts:\n        key, value = item.split('=')\n        key = key.strip()\n        value = value.strip()\n        result[key] = unquote_plus(value)\n    return result"
  - "def _clean_dict(target_dict, whitelist=None):\n    \"\"\" Convenience function that removes a dicts keys that have falsy values\n    \"\"\"\n    assert isinstance(target_dict, dict)\n    return {\n        ustr(k).strip(): ustr(v).strip()\n        for k, v in target_dict.items()\n        if v not in (None, Ellipsis, [], (), \"\")\n        and (not whitelist or k in whitelist)\n    }"
- source_sentence: python automatic figure out encoding
  sentences:
  - "def get_best_encoding(stream):\n    \"\"\"Returns the default stream encoding if not found.\"\"\"\n    rv = getattr(stream, 'encoding', None) or sys.getdefaultencoding()\n    if is_ascii_encoding(rv):\n        return 'utf-8'\n    return rv"
  - "def is_natural(x):\n    \"\"\"A non-negative integer.\"\"\"\n    try:\n        is_integer = int(x) == x\n    except (TypeError, ValueError):\n        return False\n    return is_integer and x >= 0"
  - "def _tool_to_dict(tool):\n    \"\"\"Parse a tool definition into a cwl2wdl style dictionary.\n    \"\"\"\n    out = {\"name\": _id_to_name(tool.tool[\"id\"]),\n           \"baseCommand\": \" \".join(tool.tool[\"baseCommand\"]),\n           \"arguments\": [],\n           \"inputs\": [_input_to_dict(i) for i in tool.tool[\"inputs\"]],\n           \"outputs\": [_output_to_dict(o) for o in tool.tool[\"outputs\"]],\n           \"requirements\": _requirements_to_dict(tool.requirements + tool.hints),\n           \"stdin\": None, \"stdout\": None}\n    return out"
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---

# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a [sentence-transformers](https://www.SBERT.net) model fine-tuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/huggingface/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Narekatsy/fine-tuned-cosqa")
# Run inference
sentences = [
    'python automatic figure out encoding',
    'def get_best_encoding(stream):\n    """Returns the default stream encoding if not found."""\n    rv = getattr(stream, \'encoding\', None) or sys.getdefaultencoding()\n    if is_ascii_encoding(rv):\n        return \'utf-8\'\n    return rv',
    'def _tool_to_dict(tool):\n    """Parse a tool definition into a cwl2wdl style dictionary.\n    """\n    out = {"name": _id_to_name(tool.tool["id"]),\n           "baseCommand": " ".join(tool.tool["baseCommand"]),\n           "arguments": [],\n           "inputs": [_input_to_dict(i) for i in tool.tool["inputs"]],\n           "outputs": [_output_to_dict(o) for o in tool.tool["outputs"]],\n           "requirements": _requirements_to_dict(tool.requirements + tool.hints),\n           "stdin": None, "stdout": None}\n    return out',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000,  0.6173,  0.1376],
#         [ 0.6173,  1.0000, -0.0456],
#         [ 0.1376, -0.0456,  1.0000]])
```
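Because the training pairs couple natural-language programming questions with Python functions, a natural application is semantic code search. The sketch below ranks a small corpus of snippets against a query with `util.semantic_search`; the corpus and query here are illustrative assumptions, not part of the training data:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Narekatsy/fine-tuned-cosqa")

# A toy corpus of Python snippets (illustrative only).
corpus = [
    "def add(a, b):\n    return a + b",
    "def read_file(path):\n    with open(path) as f:\n        return f.read()",
    "def to_upper(s):\n    return s.upper()",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode("how to read a file in python", convert_to_tensor=True)

# Retrieve the best-matching snippets by cosine similarity.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.4f}  {corpus[hit['corpus_id']]!r}")
```

Higher scores indicate closer query-code matches under the cosine similarity the model was trained with.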
"""
Extract a ZIP archive to a directory
"""
z = ZipFile(path_to_zip, 'r')
z.extractall(output_directory)
z.close()
| | mnist multi gpu training python tensorflow | def transformer_tall_pretrain_lm_tpu_adafactor():
"""Hparams for transformer on LM pretraining (with 64k vocab) on TPU."""
hparams = transformer_tall_pretrain_lm()
update_hparams_for_tpu(hparams)
hparams.max_length = 1024
# For multi-problem on TPU we need it in absolute examples.
hparams.batch_size = 8
hparams.multiproblem_vocab_size = 2**16
return hparams
| | get file name without extension in python | def remove_ext(fname):
"""Removes the extension from a filename
"""
bn = os.path.basename(fname)
return os.path.splitext(bn)[0]
| * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim", "gather_across_devices": false } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `num_train_epochs`: 2 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters
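With this loss, every other snippet in a batch acts as an in-batch negative for a given query. For orientation, a comparable fine-tuning run could be assembled as follows. This is a minimal sketch, assuming the 9,984 pairs are available as a `datasets.Dataset` with `sentence_0`/`sentence_1` columns (the single pair shown is a placeholder); it is not the exact script used to produce this model:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Placeholder: the actual 9,984 (query, code) training pairs are not published here.
train_dataset = Dataset.from_dict({
    "sentence_0": ["how to zip files to directory in python"],
    "sentence_1": ["def unzip_file_to_dir(path_to_zip, output_directory):\n    ..."],
})

# In-batch-negatives ranking loss with the parameters listed above.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="fine-tuned-cosqa",
    num_train_epochs=2,
    per_device_train_batch_size=32,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```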
### Training Hyperparameters

#### Non-Default Hyperparameters

- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 2
- `multi_dataset_batch_sampler`: round_robin

#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `project`: huggingface
- `trackio_space_id`: trackio
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: no
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: True
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>
### Training Logs

| Epoch  | Step | Training Loss |
|:------:|:----:|:-------------:|
| 1.6026 | 500  | 0.1512        |

With 9,984 pairs and a batch size of 32, one epoch is 312 steps (624 steps over the 2 epochs), so the single logged step 500 corresponds to epoch 500/312 ≈ 1.6026.

### Framework Versions
- Python: 3.11.3
- Sentence Transformers: 5.1.2
- Transformers: 4.57.1
- PyTorch: 2.9.0+cpu
- Accelerate: 1.11.0
- Datasets: 4.4.1
- Tokenizers: 0.22.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```