shubharuidas committed on
Commit
f320c0b
·
verified ·
1 Parent(s): cf9f6ff

Add new SentenceTransformer model

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
{
    "word_embedding_dimension": 768,
    "pooling_mode_cls_token": false,
    "pooling_mode_mean_tokens": true,
    "pooling_mode_max_tokens": false,
    "pooling_mode_mean_sqrt_len_tokens": false,
    "pooling_mode_weightedmean_tokens": false,
    "pooling_mode_lasttoken": false,
    "include_prompt": true
}
README.md ADDED
@@ -0,0 +1,1025 @@
1
+ ---
2
+ language:
3
+ - en
4
+ license: apache-2.0
5
+ tags:
6
+ - sentence-transformers
7
+ - sentence-similarity
8
+ - feature-extraction
9
+ - dense
10
+ - generated_from_trainer
11
+ - dataset_size:900
12
+ - loss:MatryoshkaLoss
13
+ - loss:MultipleNegativesRankingLoss
14
+ base_model: shubharuidas/codebert-embed-base-dense-retriever
15
+ widget:
16
+ - source_sentence: Best practices for __init__
17
+ sentences:
18
+ - "def close(self) -> None:\n self.sync()\n self.clear()"
19
+ - "class MyClass:\n def __call__(self, state):\n return\n\n \
20
+ \ def class_method(self, state):\n return"
21
+ - "def __init__(self, name: str):\n self.name = name\n self.lock\
22
+ \ = threading.Lock()"
23
+ - source_sentence: Explain the close logic
24
+ sentences:
25
+ - "def close(self) -> None:\n self.sync()\n self.clear()"
26
+ - "def attach_node(self, key: str, node: StateNodeSpec[Any, ContextT] | None) ->\
27
+ \ None:\n if key == START:\n output_keys = [\n \
28
+ \ k\n for k, v in self.builder.schemas[self.builder.input_schema].items()\n\
29
+ \ if not is_managed_value(v)\n ]\n else:\n \
30
+ \ output_keys = list(self.builder.channels) + [\n k for\
31
+ \ k, v in self.builder.managed.items()\n ]\n\n def _get_updates(\n\
32
+ \ input: None | dict | Any,\n ) -> Sequence[tuple[str, Any]]\
33
+ \ | None:\n if input is None:\n return None\n \
34
+ \ elif isinstance(input, dict):\n return [(k, v) for k, v\
35
+ \ in input.items() if k in output_keys]\n elif isinstance(input, Command):\n\
36
+ \ if input.graph == Command.PARENT:\n return\
37
+ \ None\n return [\n (k, v) for k, v in input._update_as_tuples()\
38
+ \ if k in output_keys\n ]\n elif (\n \
39
+ \ isinstance(input, (list, tuple))\n and input\n \
40
+ \ and any(isinstance(i, Command) for i in input)\n ):\n \
41
+ \ updates: list[tuple[str, Any]] = []\n for i in input:\n\
42
+ \ if isinstance(i, Command):\n if i.graph\
43
+ \ == Command.PARENT:\n continue\n \
44
+ \ updates.extend(\n (k, v) for k, v in i._update_as_tuples()\
45
+ \ if k in output_keys\n )\n else:\n\
46
+ \ updates.extend(_get_updates(i) or ())\n \
47
+ \ return updates\n elif (t := type(input)) and get_cached_annotated_keys(t):\n\
48
+ \ return get_update_as_tuples(input, output_keys)\n \
49
+ \ else:\n msg = create_error_message(\n message=f\"\
50
+ Expected dict, got {input}\",\n error_code=ErrorCode.INVALID_GRAPH_NODE_RETURN_VALUE,\n\
51
+ \ )\n raise InvalidUpdateError(msg)\n\n #\
52
+ \ state updaters\n write_entries: tuple[ChannelWriteEntry | ChannelWriteTupleEntry,\
53
+ \ ...] = (\n ChannelWriteTupleEntry(\n mapper=_get_root\
54
+ \ if output_keys == [\"__root__\"] else _get_updates\n ),\n \
55
+ \ ChannelWriteTupleEntry(\n mapper=_control_branch,\n \
56
+ \ static=_control_static(node.ends)\n if node is not\
57
+ \ None and node.ends is not None\n else None,\n ),\n\
58
+ \ )\n\n # add node and output channel\n if key == START:\n\
59
+ \ self.nodes[key] = PregelNode(\n tags=[TAG_HIDDEN],\n\
60
+ \ triggers=[START],\n channels=START,\n \
61
+ \ writers=[ChannelWrite(write_entries)],\n )\n elif node\
62
+ \ is not None:\n input_schema = node.input_schema if node else self.builder.state_schema\n\
63
+ \ input_channels = list(self.builder.schemas[input_schema])\n \
64
+ \ is_single_input = len(input_channels) == 1 and \"__root__\" in input_channels\n\
65
+ \ if input_schema in self.schema_to_mapper:\n mapper\
66
+ \ = self.schema_to_mapper[input_schema]\n else:\n mapper\
67
+ \ = _pick_mapper(input_channels, input_schema)\n self.schema_to_mapper[input_schema]\
68
+ \ = mapper\n\n branch_channel = _CHANNEL_BRANCH_TO.format(key)\n \
69
+ \ self.channels[branch_channel] = (\n LastValueAfterFinish(Any)\n\
70
+ \ if node.defer\n else EphemeralValue(Any, guard=False)\n\
71
+ \ )\n self.nodes[key] = PregelNode(\n triggers=[branch_channel],\n\
72
+ \ # read state keys and managed values\n channels=(\"\
73
+ __root__\" if is_single_input else input_channels),\n # coerce\
74
+ \ state dict to schema class (eg. pydantic model)\n mapper=mapper,\n\
75
+ \ # publish to state keys\n writers=[ChannelWrite(write_entries)],\n\
76
+ \ metadata=node.metadata,\n retry_policy=node.retry_policy,\n\
77
+ \ cache_policy=node.cache_policy,\n bound=node.runnable,\
78
+ \ # type: ignore[arg-type]\n )\n else:\n raise RuntimeError"
79
+ - "def tick(\n self,\n tasks: Iterable[PregelExecutableTask],\n \
80
+ \ *,\n reraise: bool = True,\n timeout: float | None = None,\n\
81
+ \ retry_policy: Sequence[RetryPolicy] | None = None,\n get_waiter:\
82
+ \ Callable[[], concurrent.futures.Future[None]] | None = None,\n schedule_task:\
83
+ \ Callable[\n [PregelExecutableTask, int, Call | None],\n \
84
+ \ PregelExecutableTask | None,\n ],\n ) -> Iterator[None]:\n \
85
+ \ tasks = tuple(tasks)\n futures = FuturesDict(\n callback=weakref.WeakMethod(self.commit),\n\
86
+ \ event=threading.Event(),\n future_type=concurrent.futures.Future,\n\
87
+ \ )\n # give control back to the caller\n yield\n \
88
+ \ # fast path if single task with no timeout and no waiter\n if len(tasks)\
89
+ \ == 0:\n return\n elif len(tasks) == 1 and timeout is None\
90
+ \ and get_waiter is None:\n t = tasks[0]\n try:\n \
91
+ \ run_with_retry(\n t,\n retry_policy,\n\
92
+ \ configurable={\n CONFIG_KEY_CALL:\
93
+ \ partial(\n _call,\n weakref.ref(t),\n\
94
+ \ retry_policy=retry_policy,\n \
95
+ \ futures=weakref.ref(futures),\n schedule_task=schedule_task,\n\
96
+ \ submit=self.submit,\n ),\n\
97
+ \ },\n )\n self.commit(t, None)\n\
98
+ \ except Exception as exc:\n self.commit(t, exc)\n \
99
+ \ if reraise and futures:\n # will be re-raised\
100
+ \ after futures are done\n fut: concurrent.futures.Future =\
101
+ \ concurrent.futures.Future()\n fut.set_exception(exc)\n \
102
+ \ futures.done.add(fut)\n elif reraise:\n \
103
+ \ if tb := exc.__traceback__:\n while tb.tb_next\
104
+ \ is not None and any(\n tb.tb_frame.f_code.co_filename.endswith(name)\n\
105
+ \ for name in EXCLUDED_FRAME_FNAMES\n \
106
+ \ ):\n tb = tb.tb_next\n \
107
+ \ exc.__traceback__ = tb\n raise\n if not\
108
+ \ futures: # maybe `t` scheduled another task\n return\n \
109
+ \ else:\n tasks = () # don't reschedule this task\n \
110
+ \ # add waiter task if requested\n if get_waiter is not None:\n \
111
+ \ futures[get_waiter()] = None\n # schedule tasks\n for t\
112
+ \ in tasks:\n fut = self.submit()( # type: ignore[misc]\n \
113
+ \ run_with_retry,\n t,\n retry_policy,\n\
114
+ \ configurable={\n CONFIG_KEY_CALL: partial(\n\
115
+ \ _call,\n weakref.ref(t),\n \
116
+ \ retry_policy=retry_policy,\n futures=weakref.ref(futures),\n\
117
+ \ schedule_task=schedule_task,\n \
118
+ \ submit=self.submit,\n ),\n },\n \
119
+ \ __reraise_on_exit__=reraise,\n )\n futures[fut]\
120
+ \ = t\n # execute tasks, and wait for one to fail or all to finish.\n \
121
+ \ # each task is independent from all other concurrent tasks\n #\
122
+ \ yield updates/debug output as each task finishes\n end_time = timeout\
123
+ \ + time.monotonic() if timeout else None\n while len(futures) > (1 if\
124
+ \ get_waiter is not None else 0):\n done, inflight = concurrent.futures.wait(\n\
125
+ \ futures,\n return_when=concurrent.futures.FIRST_COMPLETED,\n\
126
+ \ timeout=(max(0, end_time - time.monotonic()) if end_time else\
127
+ \ None),\n )\n if not done:\n break # timed\
128
+ \ out\n for fut in done:\n task = futures.pop(fut)\n\
129
+ \ if task is None:\n # waiter task finished,\
130
+ \ schedule another\n if inflight and get_waiter is not None:\n\
131
+ \ futures[get_waiter()] = None\n else:\n \
132
+ \ # remove references to loop vars\n del fut, task\n\
133
+ \ # maybe stop other tasks\n if _should_stop_others(done):\n\
134
+ \ break\n # give control back to the caller\n \
135
+ \ yield\n # wait for done callbacks\n futures.event.wait(\n\
136
+ \ timeout=(max(0, end_time - time.monotonic()) if end_time else None)\n\
137
+ \ )\n # give control back to the caller\n yield\n \
138
+ \ # panic on failure or timeout\n try:\n _panic_or_proceed(\n\
139
+ \ futures.done.union(f for f, t in futures.items() if t is not\
140
+ \ None),\n panic=reraise,\n )\n except Exception\
141
+ \ as exc:\n if tb := exc.__traceback__:\n while tb.tb_next\
142
+ \ is not None and any(\n tb.tb_frame.f_code.co_filename.endswith(name)\n\
143
+ \ for name in EXCLUDED_FRAME_FNAMES\n ):\n \
144
+ \ tb = tb.tb_next\n exc.__traceback__ = tb\n\
145
+ \ raise"
146
+ - source_sentence: Explain the async aupdate_state logic
147
+ sentences:
148
+ - "class MyClass:\n def __call__(self, state):\n return\n\n \
149
+ \ def class_method(self, state):\n return"
150
+ - "async def aupdate_state(\n self,\n config: RunnableConfig,\n \
151
+ \ values: dict[str, Any] | Any | None,\n as_node: str | None = None,\n\
152
+ \ *,\n headers: dict[str, str] | None = None,\n params: QueryParamTypes\
153
+ \ | None = None,\n ) -> RunnableConfig:\n \"\"\"Update the state of\
154
+ \ a thread.\n\n This method calls `POST /threads/{thread_id}/state`.\n\n\
155
+ \ Args:\n config: A `RunnableConfig` that includes `thread_id`\
156
+ \ in the\n `configurable` field.\n values: Values to\
157
+ \ update to the state.\n as_node: Update the state as if this node\
158
+ \ had just executed.\n\n Returns:\n `RunnableConfig` for the\
159
+ \ updated thread.\n \"\"\"\n client = self._validate_client()\n\
160
+ \ merged_config = merge_configs(self.config, config)\n\n response:\
161
+ \ dict = await client.threads.update_state( # type: ignore\n thread_id=merged_config[\"\
162
+ configurable\"][\"thread_id\"],\n values=values,\n as_node=as_node,\n\
163
+ \ checkpoint=self._get_checkpoint(merged_config),\n headers=headers,\n\
164
+ \ params=params,\n )\n return self._get_config(response[\"\
165
+ checkpoint\"])"
166
+ - "def __init__(self, typ: Any, guard: bool = True) -> None:\n super().__init__(typ)\n\
167
+ \ self.guard = guard\n self.value = MISSING"
168
+ - source_sentence: How to implement langchain_to_openai_messages?
169
+ sentences:
170
+ - "def __init__(\n self,\n message: str,\n *args: object,\n\
171
+ \ since: tuple[int, int],\n expected_removal: tuple[int, int] |\
172
+ \ None = None,\n ) -> None:\n super().__init__(message, *args)\n \
173
+ \ self.message = message.rstrip(\".\")\n self.since = since\n \
174
+ \ self.expected_removal = (\n expected_removal if expected_removal\
175
+ \ is not None else (since[0] + 1, 0)\n )"
176
+ - "def test_batch_get_ops(store: PostgresStore) -> None:\n # Setup test data\n\
177
+ \ store.put((\"test\",), \"key1\", {\"data\": \"value1\"})\n store.put((\"\
178
+ test\",), \"key2\", {\"data\": \"value2\"})\n\n ops = [\n GetOp(namespace=(\"\
179
+ test\",), key=\"key1\"),\n GetOp(namespace=(\"test\",), key=\"key2\"),\n\
180
+ \ GetOp(namespace=(\"test\",), key=\"key3\"), # Non-existent key\n \
181
+ \ ]\n\n results = store.batch(ops)\n\n assert len(results) == 3\n assert\
182
+ \ results[0] is not None\n assert results[1] is not None\n assert results[2]\
183
+ \ is None\n assert results[0].key == \"key1\"\n assert results[1].key ==\
184
+ \ \"key2\""
185
+ - "def langchain_to_openai_messages(messages: List[BaseMessage]):\n \"\"\"\n\
186
+ \ Convert a list of langchain base messages to a list of openai messages.\n\
187
+ \n Parameters:\n messages (List[BaseMessage]): A list of langchain base\
188
+ \ messages.\n\n Returns:\n List[dict]: A list of openai messages.\n\
189
+ \ \"\"\"\n\n return [\n convert_message_to_dict(m) if isinstance(m,\
190
+ \ BaseMessage) else m\n for m in messages\n ]"
191
+ - source_sentence: Explain the CheckpointPayload logic
192
+ sentences:
193
+ - "class LocalDeps(NamedTuple):\n \"\"\"A container for referencing and managing\
194
+ \ local Python dependencies.\n\n A \"local dependency\" is any entry in the\
195
+ \ config's `dependencies` list\n that starts with \".\" (dot), denoting a relative\
196
+ \ path\n to a local directory containing Python code.\n\n For each local\
197
+ \ dependency, the system inspects its directory to\n determine how it should\
198
+ \ be installed inside the Docker container.\n\n Specifically, we detect:\n\n\
199
+ \ - **Real packages**: Directories containing a `pyproject.toml` or a `setup.py`.\n\
200
+ \ These can be installed with pip as a regular Python package.\n - **Faux\
201
+ \ packages**: Directories that do not include a `pyproject.toml` or\n `setup.py`\
202
+ \ but do contain Python files and possibly an `__init__.py`. For\n these,\
203
+ \ the code dynamically generates a minimal `pyproject.toml` in the\n Docker\
204
+ \ image so that they can still be installed with pip.\n - **Requirements files**:\
205
+ \ If a local dependency directory\n has a `requirements.txt`, it is tracked\
206
+ \ so that those dependencies\n can be installed within the Docker container\
207
+ \ before installing the local package.\n\n Attributes:\n pip_reqs: A\
208
+ \ list of (host_requirements_path, container_requirements_path)\n tuples.\
209
+ \ Each entry points to a local `requirements.txt` file and where\n \
210
+ \ it should be placed inside the Docker container before running `pip install`.\n\
211
+ \n real_pkgs: A dictionary mapping a local directory path (host side) to\
212
+ \ a\n tuple of (dependency_string, container_package_path). These directories\n\
213
+ \ contain the necessary files (e.g., `pyproject.toml` or `setup.py`)\
214
+ \ to be\n installed as a standard Python package with pip.\n\n \
215
+ \ faux_pkgs: A dictionary mapping a local directory path (host side) to a\n\
216
+ \ tuple of (dependency_string, container_package_path). For these\n\
217
+ \ directories—called \"faux packages\"—the code will generate a minimal\n\
218
+ \ `pyproject.toml` inside the Docker image. This ensures that pip\n\
219
+ \ recognizes them as installable packages, even though they do not\n\
220
+ \ natively include packaging metadata.\n\n working_dir: The\
221
+ \ path inside the Docker container to use as the working\n directory.\
222
+ \ If the local dependency `\".\"` is present in the config, this\n \
223
+ \ field captures the path where that dependency will appear in the\n \
224
+ \ container (e.g., `/deps/<name>` or similar). Otherwise, it may be `None`.\n\
225
+ \n additional_contexts: A list of paths to directories that contain local\n\
226
+ \ dependencies in parent directories. These directories are added to\
227
+ \ the\n Docker build context to ensure that the Dockerfile can access\
228
+ \ them.\n \"\"\"\n\n pip_reqs: list[tuple[pathlib.Path, str]]\n real_pkgs:\
229
+ \ dict[pathlib.Path, tuple[str, str]]\n faux_pkgs: dict[pathlib.Path, tuple[str,\
230
+ \ str]]\n # if . is in dependencies, use it as working_dir\n working_dir:\
231
+ \ str | None = None\n # if there are local dependencies in parent directories,\
232
+ \ use additional_contexts\n additional_contexts: list[pathlib.Path] = None"
233
+ - "class CheckpointPayload(TypedDict):\n config: RunnableConfig | None\n metadata:\
234
+ \ CheckpointMetadata\n values: dict[str, Any]\n next: list[str]\n parent_config:\
235
+ \ RunnableConfig | None\n tasks: list[CheckpointTask]"
236
+ - "class _RuntimeOverrides(TypedDict, Generic[ContextT], total=False):\n context:\
237
+ \ ContextT\n store: BaseStore | None\n stream_writer: StreamWriter\n \
238
+ \ previous: Any"
239
+ pipeline_tag: sentence-similarity
240
+ library_name: sentence-transformers
241
+ metrics:
242
+ - cosine_accuracy@1
243
+ - cosine_accuracy@3
244
+ - cosine_accuracy@5
245
+ - cosine_accuracy@10
246
+ - cosine_precision@1
247
+ - cosine_precision@3
248
+ - cosine_precision@5
249
+ - cosine_precision@10
250
+ - cosine_recall@1
251
+ - cosine_recall@3
252
+ - cosine_recall@5
253
+ - cosine_recall@10
254
+ - cosine_ndcg@10
255
+ - cosine_mrr@10
256
+ - cosine_map@100
257
+ model-index:
258
+ - name: codeBert dense retriever
259
+ results:
260
+ - task:
261
+ type: information-retrieval
262
+ name: Information Retrieval
263
+ dataset:
264
+ name: dim 768
265
+ type: dim_768
266
+ metrics:
267
+ - type: cosine_accuracy@1
268
+ value: 0.84
269
+ name: Cosine Accuracy@1
270
+ - type: cosine_accuracy@3
271
+ value: 0.84
272
+ name: Cosine Accuracy@3
273
+ - type: cosine_accuracy@5
274
+ value: 0.84
275
+ name: Cosine Accuracy@5
276
+ - type: cosine_accuracy@10
277
+ value: 0.93
278
+ name: Cosine Accuracy@10
279
+ - type: cosine_precision@1
280
+ value: 0.84
281
+ name: Cosine Precision@1
282
+ - type: cosine_precision@3
283
+ value: 0.84
284
+ name: Cosine Precision@3
285
+ - type: cosine_precision@5
286
+ value: 0.84
287
+ name: Cosine Precision@5
288
+ - type: cosine_precision@10
289
+ value: 0.465
290
+ name: Cosine Precision@10
291
+ - type: cosine_recall@1
292
+ value: 0.16799999999999998
293
+ name: Cosine Recall@1
294
+ - type: cosine_recall@3
295
+ value: 0.504
296
+ name: Cosine Recall@3
297
+ - type: cosine_recall@5
298
+ value: 0.84
299
+ name: Cosine Recall@5
300
+ - type: cosine_recall@10
301
+ value: 0.93
302
+ name: Cosine Recall@10
303
+ - type: cosine_ndcg@10
304
+ value: 0.8886895066001008
305
+ name: Cosine Ndcg@10
306
+ - type: cosine_mrr@10
307
+ value: 0.855
308
+ name: Cosine Mrr@10
309
+ - type: cosine_map@100
310
+ value: 0.877942533867708
311
+ name: Cosine Map@100
312
+ - task:
313
+ type: information-retrieval
314
+ name: Information Retrieval
315
+ dataset:
316
+ name: dim 512
317
+ type: dim_512
318
+ metrics:
319
+ - type: cosine_accuracy@1
320
+ value: 0.88
321
+ name: Cosine Accuracy@1
322
+ - type: cosine_accuracy@3
323
+ value: 0.88
324
+ name: Cosine Accuracy@3
325
+ - type: cosine_accuracy@5
326
+ value: 0.88
327
+ name: Cosine Accuracy@5
328
+ - type: cosine_accuracy@10
329
+ value: 0.93
330
+ name: Cosine Accuracy@10
331
+ - type: cosine_precision@1
332
+ value: 0.88
333
+ name: Cosine Precision@1
334
+ - type: cosine_precision@3
335
+ value: 0.88
336
+ name: Cosine Precision@3
337
+ - type: cosine_precision@5
338
+ value: 0.88
339
+ name: Cosine Precision@5
340
+ - type: cosine_precision@10
341
+ value: 0.465
342
+ name: Cosine Precision@10
343
+ - type: cosine_recall@1
344
+ value: 0.17599999999999993
345
+ name: Cosine Recall@1
346
+ - type: cosine_recall@3
347
+ value: 0.528
348
+ name: Cosine Recall@3
349
+ - type: cosine_recall@5
350
+ value: 0.88
351
+ name: Cosine Recall@5
352
+ - type: cosine_recall@10
353
+ value: 0.93
354
+ name: Cosine Recall@10
355
+ - type: cosine_ndcg@10
356
+ value: 0.907049725888945
357
+ name: Cosine Ndcg@10
358
+ - type: cosine_mrr@10
359
+ value: 0.8883333333333333
360
+ name: Cosine Mrr@10
361
+ - type: cosine_map@100
362
+ value: 0.9038835868016827
363
+ name: Cosine Map@100
364
+ - task:
365
+ type: information-retrieval
366
+ name: Information Retrieval
367
+ dataset:
368
+ name: dim 256
369
+ type: dim_256
370
+ metrics:
371
+ - type: cosine_accuracy@1
372
+ value: 0.87
373
+ name: Cosine Accuracy@1
374
+ - type: cosine_accuracy@3
375
+ value: 0.87
376
+ name: Cosine Accuracy@3
377
+ - type: cosine_accuracy@5
378
+ value: 0.87
379
+ name: Cosine Accuracy@5
380
+ - type: cosine_accuracy@10
381
+ value: 0.92
382
+ name: Cosine Accuracy@10
383
+ - type: cosine_precision@1
384
+ value: 0.87
385
+ name: Cosine Precision@1
386
+ - type: cosine_precision@3
387
+ value: 0.87
388
+ name: Cosine Precision@3
389
+ - type: cosine_precision@5
390
+ value: 0.87
391
+ name: Cosine Precision@5
392
+ - type: cosine_precision@10
393
+ value: 0.46
394
+ name: Cosine Precision@10
395
+ - type: cosine_recall@1
396
+ value: 0.17399999999999996
397
+ name: Cosine Recall@1
398
+ - type: cosine_recall@3
399
+ value: 0.522
400
+ name: Cosine Recall@3
401
+ - type: cosine_recall@5
402
+ value: 0.87
403
+ name: Cosine Recall@5
404
+ - type: cosine_recall@10
405
+ value: 0.92
406
+ name: Cosine Recall@10
407
+ - type: cosine_ndcg@10
408
+ value: 0.8970497258889449
409
+ name: Cosine Ndcg@10
410
+ - type: cosine_mrr@10
411
+ value: 0.8783333333333334
412
+ name: Cosine Mrr@10
413
+ - type: cosine_map@100
414
+ value: 0.8959313741265157
415
+ name: Cosine Map@100
416
+ - task:
417
+ type: information-retrieval
418
+ name: Information Retrieval
419
+ dataset:
420
+ name: dim 128
421
+ type: dim_128
422
+ metrics:
423
+ - type: cosine_accuracy@1
424
+ value: 0.86
425
+ name: Cosine Accuracy@1
426
+ - type: cosine_accuracy@3
427
+ value: 0.86
428
+ name: Cosine Accuracy@3
429
+ - type: cosine_accuracy@5
430
+ value: 0.86
431
+ name: Cosine Accuracy@5
432
+ - type: cosine_accuracy@10
433
+ value: 0.95
434
+ name: Cosine Accuracy@10
435
+ - type: cosine_precision@1
436
+ value: 0.86
437
+ name: Cosine Precision@1
438
+ - type: cosine_precision@3
439
+ value: 0.86
440
+ name: Cosine Precision@3
441
+ - type: cosine_precision@5
442
+ value: 0.86
443
+ name: Cosine Precision@5
444
+ - type: cosine_precision@10
445
+ value: 0.475
446
+ name: Cosine Precision@10
447
+ - type: cosine_recall@1
448
+ value: 0.17199999999999996
449
+ name: Cosine Recall@1
450
+ - type: cosine_recall@3
451
+ value: 0.516
452
+ name: Cosine Recall@3
453
+ - type: cosine_recall@5
454
+ value: 0.86
455
+ name: Cosine Recall@5
456
+ - type: cosine_recall@10
457
+ value: 0.95
458
+ name: Cosine Recall@10
459
+ - type: cosine_ndcg@10
460
+ value: 0.9086895066001008
461
+ name: Cosine Ndcg@10
462
+ - type: cosine_mrr@10
463
+ value: 0.875
464
+ name: Cosine Mrr@10
465
+ - type: cosine_map@100
466
+ value: 0.8949791356739454
467
+ name: Cosine Map@100
468
+ - task:
469
+ type: information-retrieval
470
+ name: Information Retrieval
471
+ dataset:
472
+ name: dim 64
473
+ type: dim_64
474
+ metrics:
475
+ - type: cosine_accuracy@1
476
+ value: 0.84
477
+ name: Cosine Accuracy@1
478
+ - type: cosine_accuracy@3
479
+ value: 0.84
480
+ name: Cosine Accuracy@3
481
+ - type: cosine_accuracy@5
482
+ value: 0.84
483
+ name: Cosine Accuracy@5
484
+ - type: cosine_accuracy@10
485
+ value: 0.93
486
+ name: Cosine Accuracy@10
487
+ - type: cosine_precision@1
488
+ value: 0.84
489
+ name: Cosine Precision@1
490
+ - type: cosine_precision@3
491
+ value: 0.84
492
+ name: Cosine Precision@3
493
+ - type: cosine_precision@5
494
+ value: 0.84
495
+ name: Cosine Precision@5
496
+ - type: cosine_precision@10
497
+ value: 0.465
498
+ name: Cosine Precision@10
499
+ - type: cosine_recall@1
500
+ value: 0.16799999999999998
501
+ name: Cosine Recall@1
502
+ - type: cosine_recall@3
503
+ value: 0.504
504
+ name: Cosine Recall@3
505
+ - type: cosine_recall@5
506
+ value: 0.84
507
+ name: Cosine Recall@5
508
+ - type: cosine_recall@10
509
+ value: 0.93
510
+ name: Cosine Recall@10
511
+ - type: cosine_ndcg@10
512
+ value: 0.8886895066001008
513
+ name: Cosine Ndcg@10
514
+ - type: cosine_mrr@10
515
+ value: 0.855
516
+ name: Cosine Mrr@10
517
+ - type: cosine_map@100
518
+ value: 0.8791923582191525
519
+ name: Cosine Map@100
520
+ ---

# codeBert dense retriever

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [shubharuidas/codebert-embed-base-dense-retriever](https://huggingface.co/shubharuidas/codebert-embed-base-dense-retriever). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [shubharuidas/codebert-embed-base-dense-retriever](https://huggingface.co/shubharuidas/codebert-embed-base-dense-retriever) <!-- at revision 9594580ae943039d0b85feb304404f9b2bb203ce -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/huggingface/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'RobertaModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("shubharuidas/codebert-base-code-embed-mrl-langchain-langgraph")
# Run inference
sentences = [
    'Explain the CheckpointPayload logic',
    'class CheckpointPayload(TypedDict):\n    config: RunnableConfig | None\n    metadata: CheckpointMetadata\n    values: dict[str, Any]\n    next: list[str]\n    parent_config: RunnableConfig | None\n    tasks: list[CheckpointTask]',
    'class _RuntimeOverrides(TypedDict, Generic[ContextT], total=False):\n    context: ContextT\n    store: BaseStore | None\n    stream_writer: StreamWriter\n    previous: Any',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.7282, 0.2122],
#         [0.7282, 1.0000, 0.3511],
#         [0.2122, 0.3511, 1.0000]])
```

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Information Retrieval

* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
  ```json
  {
      "truncate_dim": 768
  }
  ```

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.84       |
| cosine_accuracy@3   | 0.84       |
| cosine_accuracy@5   | 0.84       |
| cosine_accuracy@10  | 0.93       |
| cosine_precision@1  | 0.84       |
| cosine_precision@3  | 0.84       |
| cosine_precision@5  | 0.84       |
| cosine_precision@10 | 0.465      |
| cosine_recall@1     | 0.168      |
| cosine_recall@3     | 0.504      |
| cosine_recall@5     | 0.84       |
| cosine_recall@10    | 0.93       |
| **cosine_ndcg@10**  | **0.8887** |
| cosine_mrr@10       | 0.855      |
| cosine_map@100      | 0.8779     |

643
+ #### Information Retrieval
644
+
645
+ * Dataset: `dim_512`
646
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
647
+ ```json
648
+ {
649
+ "truncate_dim": 512
650
+ }
651
+ ```
652
+
653
+ | Metric | Value |
654
+ |:--------------------|:----------|
655
+ | cosine_accuracy@1 | 0.88 |
656
+ | cosine_accuracy@3 | 0.88 |
657
+ | cosine_accuracy@5 | 0.88 |
658
+ | cosine_accuracy@10 | 0.93 |
659
+ | cosine_precision@1 | 0.88 |
660
+ | cosine_precision@3 | 0.88 |
661
+ | cosine_precision@5 | 0.88 |
662
+ | cosine_precision@10 | 0.465 |
663
+ | cosine_recall@1 | 0.176 |
664
+ | cosine_recall@3 | 0.528 |
665
+ | cosine_recall@5 | 0.88 |
666
+ | cosine_recall@10 | 0.93 |
667
+ | **cosine_ndcg@10** | **0.907** |
668
+ | cosine_mrr@10 | 0.8883 |
669
+ | cosine_map@100 | 0.9039 |
+
+ #### Information Retrieval
+
+ * Dataset: `dim_256`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
+   ```json
+   {
+       "truncate_dim": 256
+   }
+   ```
+
+ | Metric | Value |
+ |:--------------------|:----------|
+ | cosine_accuracy@1 | 0.87 |
+ | cosine_accuracy@3 | 0.87 |
+ | cosine_accuracy@5 | 0.87 |
+ | cosine_accuracy@10 | 0.92 |
+ | cosine_precision@1 | 0.87 |
+ | cosine_precision@3 | 0.87 |
+ | cosine_precision@5 | 0.87 |
+ | cosine_precision@10 | 0.46 |
+ | cosine_recall@1 | 0.174 |
+ | cosine_recall@3 | 0.522 |
+ | cosine_recall@5 | 0.87 |
+ | cosine_recall@10 | 0.92 |
+ | **cosine_ndcg@10** | **0.897** |
+ | cosine_mrr@10 | 0.8783 |
+ | cosine_map@100 | 0.8959 |
+
+ #### Information Retrieval
+
+ * Dataset: `dim_128`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
+   ```json
+   {
+       "truncate_dim": 128
+   }
+   ```
+
+ | Metric | Value |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1 | 0.86 |
+ | cosine_accuracy@3 | 0.86 |
+ | cosine_accuracy@5 | 0.86 |
+ | cosine_accuracy@10 | 0.95 |
+ | cosine_precision@1 | 0.86 |
+ | cosine_precision@3 | 0.86 |
+ | cosine_precision@5 | 0.86 |
+ | cosine_precision@10 | 0.475 |
+ | cosine_recall@1 | 0.172 |
+ | cosine_recall@3 | 0.516 |
+ | cosine_recall@5 | 0.86 |
+ | cosine_recall@10 | 0.95 |
+ | **cosine_ndcg@10** | **0.9087** |
+ | cosine_mrr@10 | 0.875 |
+ | cosine_map@100 | 0.895 |
+
+ #### Information Retrieval
+
+ * Dataset: `dim_64`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
+   ```json
+   {
+       "truncate_dim": 64
+   }
+   ```
+
+ | Metric | Value |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1 | 0.84 |
+ | cosine_accuracy@3 | 0.84 |
+ | cosine_accuracy@5 | 0.84 |
+ | cosine_accuracy@10 | 0.93 |
+ | cosine_precision@1 | 0.84 |
+ | cosine_precision@3 | 0.84 |
+ | cosine_precision@5 | 0.84 |
+ | cosine_precision@10 | 0.465 |
+ | cosine_recall@1 | 0.168 |
+ | cosine_recall@3 | 0.504 |
+ | cosine_recall@5 | 0.84 |
+ | cosine_recall@10 | 0.93 |
+ | **cosine_ndcg@10** | **0.8887** |
+ | cosine_mrr@10 | 0.855 |
+ | cosine_map@100 | 0.8792 |
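+ The headline metrics in these tables follow the standard ranking-metric definitions. A small illustrative sketch of nDCG@k and MRR@k over a ranked list of binary relevance labels (this mirrors the textbook formulas, not the evaluator's internal code):
+
+ ```python
+ # Hedged sketch of nDCG@k and MRR@k from a ranked list of 0/1 relevance labels.
+ import math
+
+ def dcg_at_k(rels, k):
+     # Discounted cumulative gain over the top-k results (log2 discount).
+     return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
+
+ def ndcg_at_k(rels, k):
+     # Normalize by the DCG of the ideal (relevance-sorted) ranking.
+     idcg = dcg_at_k(sorted(rels, reverse=True), k)
+     return dcg_at_k(rels, k) / idcg if idcg > 0 else 0.0
+
+ def mrr_at_k(rels, k):
+     # Reciprocal rank of the first relevant hit within the top k.
+     for i, r in enumerate(rels[:k]):
+         if r:
+             return 1.0 / (i + 1)
+     return 0.0
+
+ ranked = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]  # toy relevance of ten retrieved docs
+ print(round(ndcg_at_k(ranked, 10), 4), mrr_at_k(ranked, 10))  # → 0.8659 1.0
+ ```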
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### Unnamed Dataset
+
+ * Size: 900 training samples
+ * Columns: <code>anchor</code> and <code>positive</code>
+ * Approximate statistics based on the first 900 samples:
+   | | anchor | positive |
+   |:--------|:--------|:--------|
+   | type | string | string |
+   | details | <ul><li>min: 6 tokens</li><li>mean: 13.77 tokens</li><li>max: 356 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 267.71 tokens</li><li>max: 512 tokens</li></ul> |
+ * Samples:
+   | anchor | positive |
+   |:--------|:--------|
+   | <code>How does put_item work in Python?</code> | <code>def put_item(<br> self,<br> namespace: Sequence[str],<br> /,<br> key: str,<br> value: Mapping[str, Any],<br> index: Literal[False] \| list[str] \| None = None,<br> ttl: int \| None = None,<br> headers: Mapping[str, str] \| None = None,<br> params: QueryParamTypes \| None = None,<br> ) -> None:<br> """Store or update an item.<br><br> Args:<br> namespace: A list of strings representing the namespace path.<br> key: The unique identifier for the item within the namespace.<br> value: A dictionary containing the item's data.<br> index: Controls search indexing - None (use defaults), False (disable), or list of field paths to index.<br> ttl: Optional time-to-live in minutes for the item, or None for no expiration.<br> headers: Optional custom headers to include with the request.<br> params: Optional query parameters to include with the request.<br><br> Returns:<br> `None`<br><br> ???+ example...</code> |
+   | <code>Explain the RunsClient:<br> """Client for managing runs in LangGraph.<br><br> A run is a single assistant invocation with optional input, config, context, and metadata.<br> This client manages runs, which can be stateful logic</code> | <code>class RunsClient:<br> """Client for managing runs in LangGraph.<br><br> A run is a single assistant invocation with optional input, config, context, and metadata.<br> This client manages runs, which can be stateful (on threads) or stateless.<br><br> ???+ example "Example"<br><br> ```python<br> client = get_client(url="http://localhost:2024")<br> run = await client.runs.create(assistant_id="asst_123", thread_id="thread_456", input={"query": "Hello"})<br> ```<br> """<br><br> def __init__(self, http: HttpClient) -> None:<br> self.http = http<br><br> @overload<br> def stream(<br> self,<br> thread_id: str,<br> assistant_id: str,<br> *,<br> input: Input \| None = None,<br> command: Command \| None = None,<br> stream_mode: StreamMode \| Sequence[StreamMode] = "values",<br> stream_subgraphs: bool = False,<br> stream_resumable: bool = False,<br> metadata: Mapping[str, Any] \| None = None,<br> config: Config \| None = None,<br> context: Context \| N...</code> |
+   | <code>Best practices for MyChildDict</code> | <code>class MyChildDict(MyBaseTypedDict):<br> val_11: int<br> val_11b: int \| None<br> val_11c: int \| None \| str</code> |
+ * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
+   ```json
+   {
+       "loss": "MultipleNegativesRankingLoss",
+       "matryoshka_dims": [
+           768,
+           512,
+           256,
+           128,
+           64
+       ],
+       "matryoshka_weights": [
+           1,
+           1,
+           1,
+           1,
+           1
+       ],
+       "n_dims_per_step": -1
+   }
+   ```
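+ Conceptually, this configuration applies the inner MultipleNegativesRankingLoss at each of the five truncation sizes and combines the results using the given weights. A toy sketch of that weighted combination (a stand-in scalar loss replaces the real ranking loss; this is not the sentence-transformers implementation):
+
+ ```python
+ # Conceptual Matryoshka loss: evaluate a base loss on several prefixes of
+ # the embedding and take the weighted average, as in the config above.
+ def matryoshka_loss(base_loss, embedding, dims=(768, 512, 256, 128, 64),
+                     weights=(1, 1, 1, 1, 1)):
+     total = sum(w * base_loss(embedding[:d]) for d, w in zip(dims, weights))
+     return total / sum(weights)
+
+ # Toy base loss (hypothetical): distance of the prefix from unit norm.
+ toy_loss = lambda vec: abs(sum(x * x for x in vec) - 1.0)
+ emb = [0.1] * 768
+ print(round(matryoshka_loss(toy_loss, emb), 4))  # → 2.6
+ ```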
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: epoch
+ - `per_device_train_batch_size`: 4
+ - `per_device_eval_batch_size`: 4
+ - `gradient_accumulation_steps`: 16
+ - `learning_rate`: 2e-05
+ - `num_train_epochs`: 2
+ - `lr_scheduler_type`: cosine
+ - `warmup_ratio`: 0.1
+ - `fp16`: True
+ - `load_best_model_at_end`: True
+ - `optim`: adamw_torch
+ - `batch_sampler`: no_duplicates
+
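+ A quick sanity check of how these settings combine (field names as in Hugging Face `TrainingArguments`; the 900-sample figure comes from the Training Dataset section above):
+
+ ```python
+ # Effective batch size and warmup length implied by the hyperparameters.
+ per_device_train_batch_size = 4
+ gradient_accumulation_steps = 16
+ num_train_epochs = 2
+ warmup_ratio = 0.1
+ train_samples = 900  # from the Training Dataset section
+
+ effective_batch = per_device_train_batch_size * gradient_accumulation_steps  # 64
+ steps_per_epoch = -(-train_samples // effective_batch)  # ceil(900/64) = 15
+ total_steps = steps_per_epoch * num_train_epochs        # 30, matching the logs
+ warmup_steps = int(total_steps * warmup_ratio)          # 3
+ print(effective_batch, steps_per_epoch, total_steps, warmup_steps)  # → 64 15 30 3
+ ```
+
+ The totals agree with the Training Logs table, where epoch 1.0 falls at step 15 and training ends at step 30.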
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: epoch
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 4
+ - `per_device_eval_batch_size`: 4
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 16
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 2e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 2
+ - `max_steps`: -1
+ - `lr_scheduler_type`: cosine
+ - `lr_scheduler_kwargs`: None
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `bf16`: False
+ - `fp16`: True
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: True
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `parallelism_config`: None
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `project`: huggingface
+ - `trackio_space_id`: trackio
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `hub_revision`: None
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: no
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `liger_kernel_config`: None
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: True
+ - `prompts`: None
+ - `batch_sampler`: no_duplicates
+ - `multi_dataset_batch_sampler`: proportional
+ - `router_mapping`: {}
+ - `learning_rate_mapping`: {}
+
+ </details>
948
+
949
+ ### Training Logs
950
+ | Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
951
+ |:-------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
952
+ | 0.7111 | 10 | 0.6327 | - | - | - | - | - |
953
+ | 1.0 | 15 | - | 0.8970 | 0.8979 | 0.8925 | 0.8979 | 0.8641 |
954
+ | 1.3556 | 20 | 0.2227 | - | - | - | - | - |
955
+ | **2.0** | **30** | **0.1692** | **0.8887** | **0.907** | **0.897** | **0.9087** | **0.8887** |
956
+
957
+ * The bold row denotes the saved checkpoint.
958
+
959
+ ### Framework Versions
960
+ - Python: 3.12.12
961
+ - Sentence Transformers: 5.2.0
962
+ - Transformers: 4.57.6
963
+ - PyTorch: 2.9.0+cu126
964
+ - Accelerate: 1.12.0
965
+ - Datasets: 4.0.0
966
+ - Tokenizers: 0.22.2
967
+
968
+ ## Citation
969
+
970
+ ### BibTeX
971
+
972
+ #### Sentence Transformers
973
+ ```bibtex
974
+ @inproceedings{reimers-2019-sentence-bert,
975
+ title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
976
+ author = "Reimers, Nils and Gurevych, Iryna",
977
+ booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
978
+ month = "11",
979
+ year = "2019",
980
+ publisher = "Association for Computational Linguistics",
981
+ url = "https://arxiv.org/abs/1908.10084",
982
+ }
983
+ ```
984
+
985
+ #### MatryoshkaLoss
986
+ ```bibtex
987
+ @misc{kusupati2024matryoshka,
988
+ title={Matryoshka Representation Learning},
989
+ author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
990
+ year={2024},
991
+ eprint={2205.13147},
992
+ archivePrefix={arXiv},
993
+ primaryClass={cs.LG}
994
+ }
995
+ ```
996
+
997
+ #### MultipleNegativesRankingLoss
998
+ ```bibtex
999
+ @misc{henderson2017efficient,
1000
+ title={Efficient Natural Language Response Suggestion for Smart Reply},
1001
+ author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
1002
+ year={2017},
1003
+ eprint={1705.00652},
1004
+ archivePrefix={arXiv},
1005
+ primaryClass={cs.CL}
1006
+ }
1007
+ ```
1008
+
1009
+ <!--
1010
+ ## Glossary
1011
+
1012
+ *Clearly define terms in order to be accessible across audiences.*
1013
+ -->
1014
+
1015
+ <!--
1016
+ ## Model Card Authors
1017
+
1018
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
1019
+ -->
1020
+
1021
+ <!--
1022
+ ## Model Card Contact
1023
+
1024
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
1025
+ -->
config.json ADDED
@@ -0,0 +1,27 @@
+ {
+   "architectures": [
+     "RobertaModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "dtype": "float32",
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "roberta",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "output_past": true,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "transformers_version": "4.57.6",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 50265
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "model_type": "SentenceTransformer",
+   "__version__": {
+     "sentence_transformers": "5.2.0",
+     "transformers": "4.57.6",
+     "pytorch": "2.9.0+cu126"
+   },
+   "prompts": {
+     "query": "",
+     "document": ""
+   },
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ef396df74c9322e9fd2672b46ec93c46bb6156b9a1f34f093edf45711f93223e
+ size 498604904
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "cls_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<pad>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,65 @@
+ {
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<pad>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "50264": {
+       "content": "<mask>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "errors": "replace",
+   "extra_special_tokens": {},
+   "mask_token": "<mask>",
+   "max_length": 512,
+   "model_max_length": 512,
+   "pad_to_multiple_of": null,
+   "pad_token": "<pad>",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "</s>",
+   "stride": 0,
+   "tokenizer_class": "RobertaTokenizer",
+   "trim_offsets": true,
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "<unk>"
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff