---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:900
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: shubharuidas/codebert-embed-base-dense-retriever
widget:
- source_sentence: Best practices for __init__
sentences:
- "def close(self) -> None:\n self.sync()\n self.clear()"
- "class MyClass:\n def __call__(self, state):\n return\n\n \
\ def class_method(self, state):\n return"
- "def __init__(self, name: str):\n self.name = name\n self.lock\
\ = threading.Lock()"
- source_sentence: Explain the close logic
sentences:
- "def close(self) -> None:\n self.sync()\n self.clear()"
- "def attach_node(self, key: str, node: StateNodeSpec[Any, ContextT] | None) ->\
\ None:\n if key == START:\n output_keys = [\n \
\ k\n for k, v in self.builder.schemas[self.builder.input_schema].items()\n\
\ if not is_managed_value(v)\n ]\n else:\n \
\ output_keys = list(self.builder.channels) + [\n k for\
\ k, v in self.builder.managed.items()\n ]\n\n def _get_updates(\n\
\ input: None | dict | Any,\n ) -> Sequence[tuple[str, Any]]\
\ | None:\n if input is None:\n return None\n \
\ elif isinstance(input, dict):\n return [(k, v) for k, v\
\ in input.items() if k in output_keys]\n elif isinstance(input, Command):\n\
\ if input.graph == Command.PARENT:\n return\
\ None\n return [\n (k, v) for k, v in input._update_as_tuples()\
\ if k in output_keys\n ]\n elif (\n \
\ isinstance(input, (list, tuple))\n and input\n \
\ and any(isinstance(i, Command) for i in input)\n ):\n \
\ updates: list[tuple[str, Any]] = []\n for i in input:\n\
\ if isinstance(i, Command):\n if i.graph\
\ == Command.PARENT:\n continue\n \
\ updates.extend(\n (k, v) for k, v in i._update_as_tuples()\
\ if k in output_keys\n )\n else:\n\
\ updates.extend(_get_updates(i) or ())\n \
\ return updates\n elif (t := type(input)) and get_cached_annotated_keys(t):\n\
\ return get_update_as_tuples(input, output_keys)\n \
\ else:\n msg = create_error_message(\n message=f\"\
Expected dict, got {input}\",\n error_code=ErrorCode.INVALID_GRAPH_NODE_RETURN_VALUE,\n\
\ )\n raise InvalidUpdateError(msg)\n\n #\
\ state updaters\n write_entries: tuple[ChannelWriteEntry | ChannelWriteTupleEntry,\
\ ...] = (\n ChannelWriteTupleEntry(\n mapper=_get_root\
\ if output_keys == [\"__root__\"] else _get_updates\n ),\n \
\ ChannelWriteTupleEntry(\n mapper=_control_branch,\n \
\ static=_control_static(node.ends)\n if node is not\
\ None and node.ends is not None\n else None,\n ),\n\
\ )\n\n # add node and output channel\n if key == START:\n\
\ self.nodes[key] = PregelNode(\n tags=[TAG_HIDDEN],\n\
\ triggers=[START],\n channels=START,\n \
\ writers=[ChannelWrite(write_entries)],\n )\n elif node\
\ is not None:\n input_schema = node.input_schema if node else self.builder.state_schema\n\
\ input_channels = list(self.builder.schemas[input_schema])\n \
\ is_single_input = len(input_channels) == 1 and \"__root__\" in input_channels\n\
\ if input_schema in self.schema_to_mapper:\n mapper\
\ = self.schema_to_mapper[input_schema]\n else:\n mapper\
\ = _pick_mapper(input_channels, input_schema)\n self.schema_to_mapper[input_schema]\
\ = mapper\n\n branch_channel = _CHANNEL_BRANCH_TO.format(key)\n \
\ self.channels[branch_channel] = (\n LastValueAfterFinish(Any)\n\
\ if node.defer\n else EphemeralValue(Any, guard=False)\n\
\ )\n self.nodes[key] = PregelNode(\n triggers=[branch_channel],\n\
\ # read state keys and managed values\n channels=(\"\
__root__\" if is_single_input else input_channels),\n # coerce\
\ state dict to schema class (eg. pydantic model)\n mapper=mapper,\n\
\ # publish to state keys\n writers=[ChannelWrite(write_entries)],\n\
\ metadata=node.metadata,\n retry_policy=node.retry_policy,\n\
\ cache_policy=node.cache_policy,\n bound=node.runnable,\
\ # type: ignore[arg-type]\n )\n else:\n raise RuntimeError"
- "def tick(\n self,\n tasks: Iterable[PregelExecutableTask],\n \
\ *,\n reraise: bool = True,\n timeout: float | None = None,\n\
\ retry_policy: Sequence[RetryPolicy] | None = None,\n get_waiter:\
\ Callable[[], concurrent.futures.Future[None]] | None = None,\n schedule_task:\
\ Callable[\n [PregelExecutableTask, int, Call | None],\n \
\ PregelExecutableTask | None,\n ],\n ) -> Iterator[None]:\n \
\ tasks = tuple(tasks)\n futures = FuturesDict(\n callback=weakref.WeakMethod(self.commit),\n\
\ event=threading.Event(),\n future_type=concurrent.futures.Future,\n\
\ )\n # give control back to the caller\n yield\n \
\ # fast path if single task with no timeout and no waiter\n if len(tasks)\
\ == 0:\n return\n elif len(tasks) == 1 and timeout is None\
\ and get_waiter is None:\n t = tasks[0]\n try:\n \
\ run_with_retry(\n t,\n retry_policy,\n\
\ configurable={\n CONFIG_KEY_CALL:\
\ partial(\n _call,\n weakref.ref(t),\n\
\ retry_policy=retry_policy,\n \
\ futures=weakref.ref(futures),\n schedule_task=schedule_task,\n\
\ submit=self.submit,\n ),\n\
\ },\n )\n self.commit(t, None)\n\
\ except Exception as exc:\n self.commit(t, exc)\n \
\ if reraise and futures:\n # will be re-raised\
\ after futures are done\n fut: concurrent.futures.Future =\
\ concurrent.futures.Future()\n fut.set_exception(exc)\n \
\ futures.done.add(fut)\n elif reraise:\n \
\ if tb := exc.__traceback__:\n while tb.tb_next\
\ is not None and any(\n tb.tb_frame.f_code.co_filename.endswith(name)\n\
\ for name in EXCLUDED_FRAME_FNAMES\n \
\ ):\n tb = tb.tb_next\n \
\ exc.__traceback__ = tb\n raise\n if not\
\ futures: # maybe `t` scheduled another task\n return\n \
\ else:\n tasks = () # don't reschedule this task\n \
\ # add waiter task if requested\n if get_waiter is not None:\n \
\ futures[get_waiter()] = None\n # schedule tasks\n for t\
\ in tasks:\n fut = self.submit()( # type: ignore[misc]\n \
\ run_with_retry,\n t,\n retry_policy,\n\
\ configurable={\n CONFIG_KEY_CALL: partial(\n\
\ _call,\n weakref.ref(t),\n \
\ retry_policy=retry_policy,\n futures=weakref.ref(futures),\n\
\ schedule_task=schedule_task,\n \
\ submit=self.submit,\n ),\n },\n \
\ __reraise_on_exit__=reraise,\n )\n futures[fut]\
\ = t\n # execute tasks, and wait for one to fail or all to finish.\n \
\ # each task is independent from all other concurrent tasks\n #\
\ yield updates/debug output as each task finishes\n end_time = timeout\
\ + time.monotonic() if timeout else None\n while len(futures) > (1 if\
\ get_waiter is not None else 0):\n done, inflight = concurrent.futures.wait(\n\
\ futures,\n return_when=concurrent.futures.FIRST_COMPLETED,\n\
\ timeout=(max(0, end_time - time.monotonic()) if end_time else\
\ None),\n )\n if not done:\n break # timed\
\ out\n for fut in done:\n task = futures.pop(fut)\n\
\ if task is None:\n # waiter task finished,\
\ schedule another\n if inflight and get_waiter is not None:\n\
\ futures[get_waiter()] = None\n else:\n \
\ # remove references to loop vars\n del fut, task\n\
\ # maybe stop other tasks\n if _should_stop_others(done):\n\
\ break\n # give control back to the caller\n \
\ yield\n # wait for done callbacks\n futures.event.wait(\n\
\ timeout=(max(0, end_time - time.monotonic()) if end_time else None)\n\
\ )\n # give control back to the caller\n yield\n \
\ # panic on failure or timeout\n try:\n _panic_or_proceed(\n\
\ futures.done.union(f for f, t in futures.items() if t is not\
\ None),\n panic=reraise,\n )\n except Exception\
\ as exc:\n if tb := exc.__traceback__:\n while tb.tb_next\
\ is not None and any(\n tb.tb_frame.f_code.co_filename.endswith(name)\n\
\ for name in EXCLUDED_FRAME_FNAMES\n ):\n \
\ tb = tb.tb_next\n exc.__traceback__ = tb\n\
\ raise"
- source_sentence: Explain the async aupdate_state logic
sentences:
- "class MyClass:\n def __call__(self, state):\n return\n\n \
\ def class_method(self, state):\n return"
- "async def aupdate_state(\n self,\n config: RunnableConfig,\n \
\ values: dict[str, Any] | Any | None,\n as_node: str | None = None,\n\
\ *,\n headers: dict[str, str] | None = None,\n params: QueryParamTypes\
\ | None = None,\n ) -> RunnableConfig:\n \"\"\"Update the state of\
\ a thread.\n\n This method calls `POST /threads/{thread_id}/state`.\n\n\
\ Args:\n config: A `RunnableConfig` that includes `thread_id`\
\ in the\n `configurable` field.\n values: Values to\
\ update to the state.\n as_node: Update the state as if this node\
\ had just executed.\n\n Returns:\n `RunnableConfig` for the\
\ updated thread.\n \"\"\"\n client = self._validate_client()\n\
\ merged_config = merge_configs(self.config, config)\n\n response:\
\ dict = await client.threads.update_state( # type: ignore\n thread_id=merged_config[\"\
configurable\"][\"thread_id\"],\n values=values,\n as_node=as_node,\n\
\ checkpoint=self._get_checkpoint(merged_config),\n headers=headers,\n\
\ params=params,\n )\n return self._get_config(response[\"\
checkpoint\"])"
- "def __init__(self, typ: Any, guard: bool = True) -> None:\n super().__init__(typ)\n\
\ self.guard = guard\n self.value = MISSING"
- source_sentence: How to implement langchain_to_openai_messages?
sentences:
- "def __init__(\n self,\n message: str,\n *args: object,\n\
\ since: tuple[int, int],\n expected_removal: tuple[int, int] |\
\ None = None,\n ) -> None:\n super().__init__(message, *args)\n \
\ self.message = message.rstrip(\".\")\n self.since = since\n \
\ self.expected_removal = (\n expected_removal if expected_removal\
\ is not None else (since[0] + 1, 0)\n )"
- "def test_batch_get_ops(store: PostgresStore) -> None:\n # Setup test data\n\
\ store.put((\"test\",), \"key1\", {\"data\": \"value1\"})\n store.put((\"\
test\",), \"key2\", {\"data\": \"value2\"})\n\n ops = [\n GetOp(namespace=(\"\
test\",), key=\"key1\"),\n GetOp(namespace=(\"test\",), key=\"key2\"),\n\
\ GetOp(namespace=(\"test\",), key=\"key3\"), # Non-existent key\n \
\ ]\n\n results = store.batch(ops)\n\n assert len(results) == 3\n assert\
\ results[0] is not None\n assert results[1] is not None\n assert results[2]\
\ is None\n assert results[0].key == \"key1\"\n assert results[1].key ==\
\ \"key2\""
- "def langchain_to_openai_messages(messages: List[BaseMessage]):\n \"\"\"\n\
\ Convert a list of langchain base messages to a list of openai messages.\n\
\n Parameters:\n messages (List[BaseMessage]): A list of langchain base\
\ messages.\n\n Returns:\n List[dict]: A list of openai messages.\n\
\ \"\"\"\n\n return [\n convert_message_to_dict(m) if isinstance(m,\
\ BaseMessage) else m\n for m in messages\n ]"
- source_sentence: Explain the CheckpointPayload logic
sentences:
- "class LocalDeps(NamedTuple):\n \"\"\"A container for referencing and managing\
\ local Python dependencies.\n\n A \"local dependency\" is any entry in the\
\ config's `dependencies` list\n that starts with \".\" (dot), denoting a relative\
\ path\n to a local directory containing Python code.\n\n For each local\
\ dependency, the system inspects its directory to\n determine how it should\
\ be installed inside the Docker container.\n\n Specifically, we detect:\n\n\
\ - **Real packages**: Directories containing a `pyproject.toml` or a `setup.py`.\n\
\ These can be installed with pip as a regular Python package.\n - **Faux\
\ packages**: Directories that do not include a `pyproject.toml` or\n `setup.py`\
\ but do contain Python files and possibly an `__init__.py`. For\n these,\
\ the code dynamically generates a minimal `pyproject.toml` in the\n Docker\
\ image so that they can still be installed with pip.\n - **Requirements files**:\
\ If a local dependency directory\n has a `requirements.txt`, it is tracked\
\ so that those dependencies\n can be installed within the Docker container\
\ before installing the local package.\n\n Attributes:\n pip_reqs: A\
\ list of (host_requirements_path, container_requirements_path)\n tuples.\
\ Each entry points to a local `requirements.txt` file and where\n \
\ it should be placed inside the Docker container before running `pip install`.\n\
\n real_pkgs: A dictionary mapping a local directory path (host side) to\
\ a\n tuple of (dependency_string, container_package_path). These directories\n\
\ contain the necessary files (e.g., `pyproject.toml` or `setup.py`)\
\ to be\n installed as a standard Python package with pip.\n\n \
\ faux_pkgs: A dictionary mapping a local directory path (host side) to a\n\
\ tuple of (dependency_string, container_package_path). For these\n\
\ directories—called \"faux packages\"—the code will generate a minimal\n\
\ `pyproject.toml` inside the Docker image. This ensures that pip\n\
\ recognizes them as installable packages, even though they do not\n\
\ natively include packaging metadata.\n\n working_dir: The\
\ path inside the Docker container to use as the working\n directory.\
\ If the local dependency `\".\"` is present in the config, this\n \
\ field captures the path where that dependency will appear in the\n \
\ container (e.g., `/deps/<name>` or similar). Otherwise, it may be `None`.\n\
\n additional_contexts: A list of paths to directories that contain local\n\
\ dependencies in parent directories. These directories are added to\
\ the\n Docker build context to ensure that the Dockerfile can access\
\ them.\n \"\"\"\n\n pip_reqs: list[tuple[pathlib.Path, str]]\n real_pkgs:\
\ dict[pathlib.Path, tuple[str, str]]\n faux_pkgs: dict[pathlib.Path, tuple[str,\
\ str]]\n # if . is in dependencies, use it as working_dir\n working_dir:\
\ str | None = None\n # if there are local dependencies in parent directories,\
\ use additional_contexts\n additional_contexts: list[pathlib.Path] = None"
- "class CheckpointPayload(TypedDict):\n config: RunnableConfig | None\n metadata:\
\ CheckpointMetadata\n values: dict[str, Any]\n next: list[str]\n parent_config:\
\ RunnableConfig | None\n tasks: list[CheckpointTask]"
- "class _RuntimeOverrides(TypedDict, Generic[ContextT], total=False):\n context:\
\ ContextT\n store: BaseStore | None\n stream_writer: StreamWriter\n \
\ previous: Any"
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: codeBert dense retriever
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.84
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.84
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.84
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.93
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.84
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.84
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.84
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.465
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.16799999999999998
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.504
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.84
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.93
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8886895066001008
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.855
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.877942533867708
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.88
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.88
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.88
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.93
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.88
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.88
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.88
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.465
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.17599999999999993
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.528
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.88
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.93
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.907049725888945
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8883333333333333
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9038835868016827
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.87
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.87
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.87
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.92
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.87
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.87
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.87
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.46
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.17399999999999996
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.522
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.87
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.92
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8970497258889449
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8783333333333334
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8959313741265157
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.86
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.86
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.86
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.95
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.86
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.86
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.86
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.475
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.17199999999999996
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.516
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.86
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.95
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9086895066001008
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.875
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8949791356739454
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.84
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.84
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.84
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.93
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.84
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.84
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.84
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.465
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.16799999999999998
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.504
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.84
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.93
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8886895066001008
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.855
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8791923582191525
name: Cosine Map@100
---
# codeBert dense retriever
This is a [sentence-transformers](https://www.SBERT.net) model fine-tuned from [shubharuidas/codebert-embed-base-dense-retriever](https://huggingface.co/shubharuidas/codebert-embed-base-dense-retriever). It maps sentences and paragraphs (here, natural-language queries and Python code snippets) to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [shubharuidas/codebert-embed-base-dense-retriever](https://huggingface.co/shubharuidas/codebert-embed-base-dense-retriever) <!-- at revision 9594580ae943039d0b85feb304404f9b2bb203ce -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/huggingface/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'RobertaModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
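The `Pooling` module above is configured with `pooling_mode_mean_tokens=True`, i.e. the sentence embedding is the attention-masked average of the token embeddings produced by the Transformer. A minimal NumPy sketch of that pooling step, using small made-up toy values in place of real 768-dimensional model outputs:

```python
import numpy as np

# Toy stand-ins for the Transformer output: 4 token embeddings of dim 5,
# with the last token masked out as padding (attention_mask = 0).
token_embeddings = np.arange(20, dtype=np.float64).reshape(4, 5)
attention_mask = np.array([1, 1, 1, 0], dtype=np.float64)

# Mean pooling as configured above: sum the unmasked token vectors
# and divide by the number of unmasked tokens.
mask = attention_mask[:, None]                                   # (4, 1)
sentence_embedding = (token_embeddings * mask).sum(axis=0) / mask.sum()

print(sentence_embedding)  # the average of the first three token vectors
```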
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("shubharuidas/codebert-base-code-embed-mrl-langchain-langgraph")
# Run inference
sentences = [
'Explain the CheckpointPayload logic',
'class CheckpointPayload(TypedDict):\n config: RunnableConfig | None\n metadata: CheckpointMetadata\n values: dict[str, Any]\n next: list[str]\n parent_config: RunnableConfig | None\n tasks: list[CheckpointTask]',
'class _RuntimeOverrides(TypedDict, Generic[ContextT], total=False):\n context: ContextT\n store: BaseStore | None\n stream_writer: StreamWriter\n previous: Any',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.7282, 0.2122],
# [0.7282, 1.0000, 0.3511],
# [0.2122, 0.3511, 1.0000]])
```
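Because the model was trained with `MatryoshkaLoss` at dimensions 768/512/256/128/64, embeddings can be truncated to a prefix and re-normalized with only a small quality drop (see the Evaluation tables below). Recent versions of Sentence Transformers expose this via a `truncate_dim` argument; the truncation step itself is simple enough to sketch in NumPy, here with random vectors standing in for `model.encode(...)` output:

```python
import numpy as np

def truncate_and_normalize(emb: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` Matryoshka dimensions and re-normalize to unit length."""
    truncated = emb[..., :dim]
    return truncated / np.linalg.norm(truncated, axis=-1, keepdims=True)

# Toy unit-normalized 768-dim embeddings standing in for model.encode(...) output.
rng = np.random.default_rng(0)
full = rng.normal(size=(3, 768))
full /= np.linalg.norm(full, axis=-1, keepdims=True)

small = truncate_and_normalize(full, 256)
print(small.shape)                      # (3, 256)
print(np.linalg.norm(small, axis=-1))   # all ~1.0 after re-normalization
```

Cosine similarity on the truncated vectors then works exactly as with the full embeddings, at a third of the storage cost.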
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 768
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.84 |
| cosine_accuracy@3 | 0.84 |
| cosine_accuracy@5 | 0.84 |
| cosine_accuracy@10 | 0.93 |
| cosine_precision@1 | 0.84 |
| cosine_precision@3 | 0.84 |
| cosine_precision@5 | 0.84 |
| cosine_precision@10 | 0.465 |
| cosine_recall@1 | 0.168 |
| cosine_recall@3 | 0.504 |
| cosine_recall@5 | 0.84 |
| cosine_recall@10 | 0.93 |
| **cosine_ndcg@10** | **0.8887** |
| cosine_mrr@10 | 0.855 |
| cosine_map@100 | 0.8779 |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 512
}
```
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.88 |
| cosine_accuracy@3 | 0.88 |
| cosine_accuracy@5 | 0.88 |
| cosine_accuracy@10 | 0.93 |
| cosine_precision@1 | 0.88 |
| cosine_precision@3 | 0.88 |
| cosine_precision@5 | 0.88 |
| cosine_precision@10 | 0.465 |
| cosine_recall@1 | 0.176 |
| cosine_recall@3 | 0.528 |
| cosine_recall@5 | 0.88 |
| cosine_recall@10 | 0.93 |
| **cosine_ndcg@10** | **0.907** |
| cosine_mrr@10 | 0.8883 |
| cosine_map@100 | 0.9039 |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 256
}
```
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.87 |
| cosine_accuracy@3 | 0.87 |
| cosine_accuracy@5 | 0.87 |
| cosine_accuracy@10 | 0.92 |
| cosine_precision@1 | 0.87 |
| cosine_precision@3 | 0.87 |
| cosine_precision@5 | 0.87 |
| cosine_precision@10 | 0.46 |
| cosine_recall@1 | 0.174 |
| cosine_recall@3 | 0.522 |
| cosine_recall@5 | 0.87 |
| cosine_recall@10 | 0.92 |
| **cosine_ndcg@10** | **0.897** |
| cosine_mrr@10 | 0.8783 |
| cosine_map@100 | 0.8959 |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 128
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.86 |
| cosine_accuracy@3 | 0.86 |
| cosine_accuracy@5 | 0.86 |
| cosine_accuracy@10 | 0.95 |
| cosine_precision@1 | 0.86 |
| cosine_precision@3 | 0.86 |
| cosine_precision@5 | 0.86 |
| cosine_precision@10 | 0.475 |
| cosine_recall@1 | 0.172 |
| cosine_recall@3 | 0.516 |
| cosine_recall@5 | 0.86 |
| cosine_recall@10 | 0.95 |
| **cosine_ndcg@10** | **0.9087** |
| cosine_mrr@10 | 0.875 |
| cosine_map@100 | 0.895 |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 64
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.84 |
| cosine_accuracy@3 | 0.84 |
| cosine_accuracy@5 | 0.84 |
| cosine_accuracy@10 | 0.93 |
| cosine_precision@1 | 0.84 |
| cosine_precision@3 | 0.84 |
| cosine_precision@5 | 0.84 |
| cosine_precision@10 | 0.465 |
| cosine_recall@1 | 0.168 |
| cosine_recall@3 | 0.504 |
| cosine_recall@5 | 0.84 |
| cosine_recall@10 | 0.93 |
| **cosine_ndcg@10** | **0.8887** |
| cosine_mrr@10 | 0.855 |
| cosine_map@100 | 0.8792 |
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 900 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 900 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 13.77 tokens</li><li>max: 356 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 267.71 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>How does put_item work in Python?</code> | <code>def put_item(<br> self,<br> namespace: Sequence[str],<br> /,<br> key: str,<br> value: Mapping[str, Any],<br> index: Literal[False] \| list[str] \| None = None,<br> ttl: int \| None = None,<br> headers: Mapping[str, str] \| None = None,<br> params: QueryParamTypes \| None = None,<br> ) -> None:<br> """Store or update an item.<br><br> Args:<br> namespace: A list of strings representing the namespace path.<br> key: The unique identifier for the item within the namespace.<br> value: A dictionary containing the item's data.<br> index: Controls search indexing - None (use defaults), False (disable), or list of field paths to index.<br> ttl: Optional time-to-live in minutes for the item, or None for no expiration.<br> headers: Optional custom headers to include with the request.<br> params: Optional query parameters to include with the request.<br><br> Returns:<br> `None`<br><br> ???+ example...</code> |
| <code>Explain the RunsClient:<br> """Client for managing runs in LangGraph.<br><br> A run is a single assistant invocation with optional input, config, context, and metadata.<br> This client manages runs, which can be stateful logic</code> | <code>class RunsClient:<br> """Client for managing runs in LangGraph.<br><br> A run is a single assistant invocation with optional input, config, context, and metadata.<br> This client manages runs, which can be stateful (on threads) or stateless.<br><br> ???+ example "Example"<br><br> ```python<br> client = get_client(url="http://localhost:2024")<br> run = await client.runs.create(assistant_id="asst_123", thread_id="thread_456", input={"query": "Hello"})<br> ```<br> """<br><br> def __init__(self, http: HttpClient) -> None:<br> self.http = http<br><br> @overload<br> def stream(<br> self,<br> thread_id: str,<br> assistant_id: str,<br> *,<br> input: Input \| None = None,<br> command: Command \| None = None,<br> stream_mode: StreamMode \| Sequence[StreamMode] = "values",<br> stream_subgraphs: bool = False,<br> stream_resumable: bool = False,<br> metadata: Mapping[str, Any] \| None = None,<br> config: Config \| None = None,<br> context: Context \| N...</code> |
| <code>Best practices for MyChildDict</code> | <code>class MyChildDict(MyBaseTypedDict):<br> val_11: int<br> val_11b: int \| None<br> val_11c: int \| None \| str</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 2
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: None
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `project`: huggingface
- `trackio_space_id`: trackio
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: no
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: True
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:-------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 0.7111 | 10 | 0.6327 | - | - | - | - | - |
| 1.0 | 15 | - | 0.8970 | 0.8979 | 0.8925 | 0.8979 | 0.8641 |
| 1.3556 | 20 | 0.2227 | - | - | - | - | - |
| **2.0** | **30** | **0.1692** | **0.8887** | **0.9070** | **0.8970** | **0.9087** | **0.8887** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.12.12
- Sentence Transformers: 5.2.0
- Transformers: 4.57.6
- PyTorch: 2.9.0+cu126
- Accelerate: 1.12.0
- Datasets: 4.0.0
- Tokenizers: 0.22.2
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```