---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:9984
- loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: python to dict if only one item
sentences:
- "def get_from_gnucash26_date(date_str: str) -> date:\n \"\"\" Creates a datetime\
\ from GnuCash 2.6 date string \"\"\"\n date_format = \"%Y%m%d\"\n result\
\ = datetime.strptime(date_str, date_format).date()\n return result"
- "def multidict_to_dict(d):\n \"\"\"\n Turns a werkzeug.MultiDict or django.MultiValueDict\
\ into a dict with\n list values\n :param d: a MultiDict or MultiValueDict\
\ instance\n :return: a dict instance\n \"\"\"\n return dict((k, v[0]\
\ if len(v) == 1 else v) for k, v in iterlists(d))"
- "def wipe_table(self, table: str) -> int:\n \"\"\"Delete all records from\
\ a table. Use caution!\"\"\"\n sql = \"DELETE FROM \" + self.delimit(table)\n\
\ return self.db_exec(sql)"
- source_sentence: how to add a string to a filename in python
sentences:
- "def html_to_text(content):\n \"\"\" Converts html content to plain text \"\
\"\"\n text = None\n h2t = html2text.HTML2Text()\n h2t.ignore_links =\
\ False\n text = h2t.handle(content)\n return text"
- "def _get_column_by_db_name(cls, name):\n \"\"\"\n Returns the column,\
\ mapped by db_field name\n \"\"\"\n return cls._columns.get(cls._db_map.get(name,\
\ name))"
- "def add_suffix(fullname, suffix):\n \"\"\" Add suffix to a full file name\"\
\"\"\n name, ext = os.path.splitext(fullname)\n return name + '_' + suffix\
\ + ext"
- source_sentence: human readable string of object python
sentences:
- "def pretty(obj, verbose=False, max_width=79, newline='\\n'):\n \"\"\"\n \
\ Pretty print the object's representation.\n \"\"\"\n stream = StringIO()\n\
\ printer = RepresentationPrinter(stream, verbose, max_width, newline)\n \
\ printer.pretty(obj)\n printer.flush()\n return stream.getvalue()"
- "def asMaskedArray(self):\n \"\"\" Creates converts to a masked array\n\
\ \"\"\"\n return ma.masked_array(data=self.data, mask=self.mask,\
\ fill_value=self.fill_value)"
- "def list_depth(list_, func=max, _depth=0):\n \"\"\"\n Returns the deepest\
\ level of nesting within a list of lists\n\n Args:\n list_ : a nested\
\ listlike object\n func : depth aggregation strategy (defaults to max)\n\
\ _depth : internal var\n\n Example:\n >>> # ENABLE_DOCTEST\n\
\ >>> from utool.util_list import * # NOQA\n >>> list_ = [[[[[1]]],\
\ [3]], [[1], [3]], [[1], [3]]]\n >>> result = (list_depth(list_, _depth=0))\n\
\ >>> print(result)\n\n \"\"\"\n depth_list = [list_depth(item, func=func,\
\ _depth=_depth + 1)\n for item in list_ if util_type.is_listlike(item)]\n\
\ if len(depth_list) > 0:\n return func(depth_list)\n else:\n \
\ return _depth"
- source_sentence: python parse query param
sentences:
- "def read_las(source, closefd=True):\n \"\"\" Entry point for reading las data\
\ in pylas\n\n Reads the whole file into memory.\n\n >>> las = read_las(\"\
pylastests/simple.las\")\n >>> las.classification\n array([1, 1, 1, ...,\
\ 1, 1, 1], dtype=uint8)\n\n Parameters\n ----------\n source : str or\
\ io.BytesIO\n The source to read data from\n\n closefd: bool\n \
\ if True and the source is a stream, the function will close it\n \
\ after it is done reading\n\n\n Returns\n -------\n pylas.lasdatas.base.LasBase\n\
\ The object you can interact with to get access to the LAS points & VLRs\n\
\ \"\"\"\n with open_las(source, closefd=closefd) as reader:\n return\
\ reader.read()"
- "def parse_query_string(query):\n \"\"\"\n parse_query_string:\n very\
\ simplistic. won't do the right thing with list values\n \"\"\"\n result\
\ = {}\n qparts = query.split('&')\n for item in qparts:\n key, value\
\ = item.split('=')\n key = key.strip()\n value = value.strip()\n\
\ result[key] = unquote_plus(value)\n return result"
- "def _clean_dict(target_dict, whitelist=None):\n \"\"\" Convenience function\
\ that removes a dicts keys that have falsy values\n \"\"\"\n assert isinstance(target_dict,\
\ dict)\n return {\n ustr(k).strip(): ustr(v).strip()\n for k,\
\ v in target_dict.items()\n if v not in (None, Ellipsis, [], (), \"\"\
)\n and (not whitelist or k in whitelist)\n }"
- source_sentence: python automatic figure out encoding
sentences:
- "def get_best_encoding(stream):\n \"\"\"Returns the default stream encoding\
\ if not found.\"\"\"\n rv = getattr(stream, 'encoding', None) or sys.getdefaultencoding()\n\
\ if is_ascii_encoding(rv):\n return 'utf-8'\n return rv"
- "def is_natural(x):\n \"\"\"A non-negative integer.\"\"\"\n try:\n \
\ is_integer = int(x) == x\n except (TypeError, ValueError):\n return\
\ False\n return is_integer and x >= 0"
- "def _tool_to_dict(tool):\n \"\"\"Parse a tool definition into a cwl2wdl style\
\ dictionary.\n \"\"\"\n out = {\"name\": _id_to_name(tool.tool[\"id\"]),\n\
\ \"baseCommand\": \" \".join(tool.tool[\"baseCommand\"]),\n \
\ \"arguments\": [],\n \"inputs\": [_input_to_dict(i) for i in tool.tool[\"\
inputs\"]],\n \"outputs\": [_output_to_dict(o) for o in tool.tool[\"\
outputs\"]],\n \"requirements\": _requirements_to_dict(tool.requirements\
\ + tool.hints),\n \"stdin\": None, \"stdout\": None}\n return out"
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/huggingface/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
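The modules correspond to: run the BERT encoder, mean-pool the token embeddings using the attention mask, then L2-normalize, so dot products equal cosine similarities. As a hedged sketch (not part of the generated card), the same pipeline written against the plain `transformers` API, assuming the repository exposes standard transformer weights:
```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Sketch: reproduce Transformer -> Pooling(mean) -> Normalize by hand.
tokenizer = AutoTokenizer.from_pretrained("Narekatsy/fine-tuned-cosqa")
encoder = AutoModel.from_pretrained("Narekatsy/fine-tuned-cosqa")

batch = tokenizer(
    ["python parse query param"],
    padding=True, truncation=True, max_length=256, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (1, seq_len, 384)

# Mean pooling over non-padding tokens (pooling_mode_mean_tokens=True)
mask = batch["attention_mask"].unsqueeze(-1).float()
embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# The final Normalize() module makes vectors unit length,
# so dot product equals cosine similarity.
embedding = F.normalize(embedding, p=2, dim=1)
print(embedding.shape)  # torch.Size([1, 384])
```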
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Narekatsy/fine-tuned-cosqa")
# Run inference
sentences = [
'python automatic figure out encoding',
'def get_best_encoding(stream):\n """Returns the default stream encoding if not found."""\n rv = getattr(stream, \'encoding\', None) or sys.getdefaultencoding()\n if is_ascii_encoding(rv):\n return \'utf-8\'\n return rv',
'def _tool_to_dict(tool):\n """Parse a tool definition into a cwl2wdl style dictionary.\n """\n out = {"name": _id_to_name(tool.tool["id"]),\n "baseCommand": " ".join(tool.tool["baseCommand"]),\n "arguments": [],\n "inputs": [_input_to_dict(i) for i in tool.tool["inputs"]],\n "outputs": [_output_to_dict(o) for o in tool.tool["outputs"]],\n "requirements": _requirements_to_dict(tool.requirements + tool.hints),\n "stdin": None, "stdout": None}\n return out',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, 0.6173, 0.1376],
# [ 0.6173, 1.0000, -0.0456],
# [ 0.1376, -0.0456, 1.0000]])
```
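Because the training pairs match natural-language queries to code snippets, the model is well suited to code search. A small illustrative example, reusing snippets from the samples in this card (the expected output is an assumption, not a verified result):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Narekatsy/fine-tuned-cosqa")

# Tiny illustrative corpus of code snippets
corpus = [
    'def unzip_file_to_dir(path_to_zip, output_directory):\n    z = ZipFile(path_to_zip, "r")\n    z.extractall(output_directory)\n    z.close()',
    'def remove_ext(fname):\n    bn = os.path.basename(fname)\n    return os.path.splitext(bn)[0]',
]
query = "get file name without extension in python"

query_embedding = model.encode([query])
corpus_embeddings = model.encode(corpus)

# Embeddings are unit-normalized, so cosine similarity ranks the corpus
scores = model.similarity(query_embedding, corpus_embeddings)  # shape (1, 2)
best = int(scores.argmax())
print(corpus[best])  # expected to print the remove_ext snippet
```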
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 9,984 training samples
* Columns: sentence_0 and sentence_1
* Approximate statistics based on the first 1000 samples:
  |         | sentence_0 | sentence_1 |
  |:--------|:-----------|:-----------|
  | type    | string     | string     |
* Samples:
  | sentence_0 | sentence_1 |
  |:-----------|:-----------|
  | <code>how to zip files to directory in python</code> | <code>def unzip_file_to_dir(path_to_zip, output_directory):<br>    """<br>    Extract a ZIP archive to a directory<br>    """<br>    z = ZipFile(path_to_zip, 'r')<br>    z.extractall(output_directory)<br>    z.close()</code> |
  | <code>mnist multi gpu training python tensorflow</code> | <code>def transformer_tall_pretrain_lm_tpu_adafactor():<br>    """Hparams for transformer on LM pretraining (with 64k vocab) on TPU."""<br>    hparams = transformer_tall_pretrain_lm()<br>    update_hparams_for_tpu(hparams)<br>    hparams.max_length = 1024<br>    # For multi-problem on TPU we need it in absolute examples.<br>    hparams.batch_size = 8<br>    hparams.multiproblem_vocab_size = 2**16<br>    return hparams</code> |
  | <code>get file name without extension in python</code> | <code>def remove_ext(fname):<br>    """Removes the extension from a filename<br>    """<br>    bn = os.path.basename(fname)<br>    return os.path.splitext(bn)[0]</code> |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
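With this loss, each (query, code) pair in a batch treats the other in-batch code snippets as negatives, which is why batch size matters. A minimal training sketch under the settings above; the one-row dataset is an illustrative stand-in for the 9,984 real pairs:
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Illustrative stand-in for the (query, code) training pairs
train_dataset = Dataset.from_dict({
    "sentence_0": ["get file name without extension in python"],
    "sentence_1": ["def remove_ext(fname):\n    return os.path.splitext(os.path.basename(fname))[0]"],
})

# scale=20.0 with cosine similarity, matching the parameters listed above
loss = MultipleNegativesRankingLoss(model, scale=20.0)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```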
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 2
- `multi_dataset_batch_sampler`: round_robin
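As a hedged sketch, these non-default values map onto `SentenceTransformerTrainingArguments` roughly as follows (the output path is illustrative; all other arguments stay at their defaults):
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="fine-tuned-cosqa",  # illustrative output directory
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=2,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
```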
#### All Hyperparameters