PyLate model based on jhu-clsp/ettin-encoder-17m
This is a PyLate model finetuned from jhu-clsp/ettin-encoder-17m on the code-retrieval-combined-v2-llm-negatives dataset. It maps sentences & paragraphs to sequences of 128-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator.
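Under MaxSim, each query token embedding is matched against its most similar document token embedding, and these per-token maxima are summed into the query-document score. A minimal sketch of the operator in plain PyTorch (assuming, as is typical for ColBERT-style models, that the token embeddings are already L2-normalized):

import torch

def maxsim_score(query_embeddings: torch.Tensor, document_embeddings: torch.Tensor) -> torch.Tensor:
    # query_embeddings:    (num_query_tokens, 128)
    # document_embeddings: (num_document_tokens, 128)
    # Dot products between every query token and every document token.
    similarity = query_embeddings @ document_embeddings.T
    # For each query token, keep its best-matching document token, then sum.
    return similarity.max(dim=1).values.sum()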
Model Details
Model Description
- Model Type: PyLate model
- Base model: jhu-clsp/ettin-encoder-17m
- Document Length: 180 tokens
- Query Length: 32 tokens
- Output Dimensionality: 128 dimensions per token
- Similarity Function: MaxSim
- Training Dataset: code-retrieval-combined-v2-llm-negatives
Model Sources
- Documentation: PyLate Documentation (https://lightonai.github.io/pylate/)
- Repository: PyLate on GitHub (https://github.com/lightonai/pylate)
- Hugging Face: PyLate models on Hugging Face (https://huggingface.co/models?library=PyLate)
Full Model Architecture
ColBERT(
  (0): Transformer({'max_seq_length': 31, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
  (1): Dense({'in_features': 256, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity', 'use_residual': False})
)
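The Transformer module produces 256-dimensional hidden states, which the Dense head projects, without bias or activation, down to 128 dimensions per token. A quick shape check (the query string here is illustrative):

from pylate import models

model = models.ColBERT(model_name_or_path="colbert-code-17m")

embeddings = model.encode(["a tiny smoke-test query"], is_query=True)
print(embeddings[0].shape)  # one (num_tokens, 128) matrix per input text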
Usage
First install the PyLate library:
pip install -U pylate
Retrieval
Use this model with PyLate to index and retrieve documents. The index uses FastPLAID for efficient similarity search.
Indexing documents
Load the ColBERT model and initialize the PLAID index, then encode and index your documents:
from pylate import indexes, models, retrieve

# Step 1: Load the ColBERT model
model = models.ColBERT(
    model_name_or_path="colbert-code-17m",
)

# Step 2: Initialize the PLAID index
index = indexes.PLAID(
    index_folder="pylate-index",
    index_name="index",
    override=True,  # This overwrites the existing index if any
)

# Step 3: Encode the documents
documents_ids = ["1", "2", "3"]
documents = ["document 1 text", "document 2 text", "document 3 text"]

documents_embeddings = model.encode(
    documents,
    batch_size=32,
    is_query=False,  # Set to False to indicate that these are documents, not queries
    show_progress_bar=True,
)

# Step 4: Add the document embeddings to the index with their corresponding ids
index.add_documents(
    documents_ids=documents_ids,
    documents_embeddings=documents_embeddings,
)
Note that you do not have to recreate the index and encode the documents every time. Once you have created an index and added the documents, you can re-use the index later by loading it:
# To load an index, simply instantiate it with the correct folder/name and without overriding it
index = indexes.PLAID(
    index_folder="pylate-index",
    index_name="index",
)
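Because a loaded index behaves like a freshly built one, you can also keep appending documents with the same add_documents call shown above (assuming your PyLate version supports extending an existing PLAID index):

new_documents_ids = ["4"]
new_documents = ["document 4 text"]

index.add_documents(
    documents_ids=new_documents_ids,
    documents_embeddings=model.encode(new_documents, is_query=False),
)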
Retrieving top-k documents for queries
Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries. To do so, initialize the ColBERT retriever with the index you want to search, encode the queries, and retrieve the top-k documents to get the matching ids and relevance scores:
# Step 1: Initialize the ColBERT retriever
retriever = retrieve.ColBERT(index=index)

# Step 2: Encode the queries
queries_embeddings = model.encode(
    ["query for document 3", "query for document 1"],
    batch_size=32,
    is_query=True,  # Ensure that it is set to True to indicate that these are queries
    show_progress_bar=True,
)

# Step 3: Retrieve top-k documents
scores = retriever.retrieve(
    queries_embeddings=queries_embeddings,
    k=10,  # Retrieve the top 10 matches for each query
)
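The retriever returns one ranked list per query; in recent PyLate versions each hit is a dict holding the indexed document id and its MaxSim score, so reading the results looks roughly like this:

for query, hits in zip(["query for document 3", "query for document 1"], scores):
    print(query)
    for hit in hits:
        print(f"  id={hit['id']} score={hit['score']:.2f}")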
Reranking
If you only want to use the ColBERT model to perform reranking on top of your first-stage retrieval pipeline without building an index, you can simply use the rank.rerank function and pass it the queries and documents to rerank:
from pylate import rank, models

queries = [
    "query A",
    "query B",
]

documents = [
    ["document A", "document B"],
    ["document 1", "document C", "document B"],
]

documents_ids = [
    [1, 2],
    [1, 3, 2],
]

model = models.ColBERT(
    model_name_or_path="colbert-code-17m",
)

queries_embeddings = model.encode(
    queries,
    is_query=True,
)

documents_embeddings = model.encode(
    documents,
    is_query=False,
)

reranked_documents = rank.rerank(
    documents_ids=documents_ids,
    queries_embeddings=queries_embeddings,
    documents_embeddings=documents_embeddings,
)
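rank.rerank returns one list per query with the candidate documents reordered by MaxSim score; each entry keeps the id you passed in alongside its score (field names as in recent PyLate releases), e.g.:

for hits in reranked_documents:
    for hit in hits:
        print(hit["id"], hit["score"])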
Training Details
Training Dataset
code-retrieval-combined-v2-llm-negatives
- Dataset: code-retrieval-combined-v2-llm-negatives at 1917069
- Size: 1,188,486 training samples
- Columns: query, positive, source, hard_negatives, and negatives
- Approximate statistics based on the first 1000 samples:
|  | query | positive | source | hard_negatives | negatives |
|---|---|---|---|---|---|
| type | string | string | string | list | string |
| details | min: 6 tokens, mean: 23.61 tokens, max: 32 tokens | min: 14 tokens, mean: 31.43 tokens, max: 32 tokens | min: 4 tokens, mean: 7.46 tokens, max: 10 tokens | size: 1 elements | min: 16 tokens, mean: 31.15 tokens, max: 32 tokens |
- Samples (each row pairs a natural-language or code query with a positive code passage, a source tag, and mined negatives; the first sample is shown below):
  - query: wait for AWS PCA CSR propagation before issuing certificate
  - positive:
    func (c *ACMPCA) WaitUntilCertificateAuthorityCSRCreated(input *GetCertificateAuthorityCsrInput) error {
        return c.WaitUntilCertificateAuthorityCSRCreatedWithContext(aws.BackgroundContext(), input)
    }
  - source: csn_syntethic
  - hard_negatives: ['func (c *ACMPCA) WaitUntilAuditReportCreated(input *DescribeCertificateAuthorityAuditReportInput) error {\n\treturn c.WaitUntilAuditReportCreatedWithContext(aws.BackgroundContext(), input)\n}']
  - negatives:
    func (c *ACMPCA) WaitUntilAuditReportCreated(input *DescribeCertificateAuthorityAuditReportInput) error {
        return c.WaitUntilAuditReportCreatedWithContext(aws.BackgroundContext(), input)
    }
- Loss: pylate.losses.contrastive.Contrastive
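The Contrastive loss trains the model to score each query's positive above the other passages in the batch (in-batch negatives plus the mined negatives). A minimal sketch of the underlying objective over a batch of precomputed MaxSim scores; this illustrates the idea rather than PyLate's actual implementation:

import torch
import torch.nn.functional as F

def contrastive_loss(scores: torch.Tensor) -> torch.Tensor:
    # scores[i, j] = MaxSim(query_i, document_j); document_i is query_i's positive.
    labels = torch.arange(scores.size(0), device=scores.device)
    # Cross-entropy pushes each query's positive score above all the others.
    return F.cross_entropy(scores, labels)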
Evaluation Dataset
code-retrieval-combined-v2-llm-negatives
- Dataset: code-retrieval-combined-v2-llm-negatives at 1917069
- Size: 12,005 evaluation samples
- Columns: query, positive, source, hard_negatives, and negatives
- Approximate statistics based on the first 1000 samples:
|  | query | positive | source | hard_negatives | negatives |
|---|---|---|---|---|---|
| type | string | string | string | list | string |
| details | min: 6 tokens, mean: 23.78 tokens, max: 32 tokens | min: 14 tokens, mean: 31.45 tokens, max: 32 tokens | min: 4 tokens, mean: 7.55 tokens, max: 10 tokens | size: 1 elements | min: 14 tokens, mean: 31.15 tokens, max: 32 tokens |
- Samples (each row pairs a query with a positive code passage, a source tag, and mined negatives; one sample is shown below):
  - query: python multiprocessing sandboxed process isolation file system network
  - positive:
    def start(self):
        '''Create a process in which the isolated code will be run.'''
        assert self._client is None
        logger.debug('IsolationContext[%d] starting', id(self))
        # Create the queues
        request_queue = multiprocessing.Queue()
        response_queue = multiprocessing.Queue()
        # Launch the server process
        server = Server(request_queue, response_queue)  # Do not keep a reference to this object!
        server_process = multiprocessing.Process(target=server.loop)
        server_process.start()
        # Create a client to talk to the server
        self._client = Client(server_process, request_queue, response_queue)
  - source: csn_syntethic
  - hard_negatives: ['def start(cls, _init_logging=True): ...'] (a similarly named but unrelated fork-based subprocess bootstrap)
  - negatives: the same def start(cls, _init_logging=True) bootstrap code
- Loss: pylate.losses.contrastive.Contrastive
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 256
- per_device_eval_batch_size: 256
- learning_rate: 3e-06
- num_train_epochs: 1
- fp16: True
- hub_model_id: colbert-code-17m
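For reference, reproducing a run with these settings follows the usual PyLate + Sentence Transformers training loop; a condensed sketch (the dataset identifier and output directory below are assumptions to adapt to your setup):

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from pylate import losses, models

model = models.ColBERT(model_name_or_path="jhu-clsp/ettin-encoder-17m")

# Assumed dataset id; replace with your copy of the training data.
train_dataset = load_dataset("code-retrieval-combined-v2-llm-negatives", split="train")

args = SentenceTransformerTrainingArguments(
    output_dir="colbert-code-17m",
    num_train_epochs=1,
    per_device_train_batch_size=256,
    learning_rate=3e-6,
    fp16=True,
    eval_strategy="steps",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=losses.Contrastive(model=model),
)
trainer.train()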
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 256
- per_device_eval_batch_size: 256
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 3e-06
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- parallelism_config: None
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: colbert-code-17m
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- hub_revision: None
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- liger_kernel_config: None
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
- router_mapping: {}
- learning_rate_mapping: {}
Framework Versions
- Python: 3.12.3
- Sentence Transformers: 5.1.1
- PyLate: 1.4.0
- Transformers: 4.56.2
- PyTorch: 2.9.0+cu128
- Accelerate: 1.13.0
- Datasets: 4.8.4
- Tokenizers: 0.22.2
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084"
}
PyLate
@inproceedings{DBLP:conf/cikm/ChaffinS25,
author = {Antoine Chaffin and
              Rapha{\"{e}}l Sourty},
editor = {Meeyoung Cha and
Chanyoung Park and
Noseong Park and
Carl Yang and
Senjuti Basu Roy and
Jessie Li and
Jaap Kamps and
Kijung Shin and
Bryan Hooi and
Lifang He},
title = {PyLate: Flexible Training and Retrieval for Late Interaction Models},
booktitle = {Proceedings of the 34th {ACM} International Conference on Information
and Knowledge Management, {CIKM} 2025, Seoul, Republic of Korea, November
10-14, 2025},
pages = {6334--6339},
publisher = {{ACM}},
year = {2025},
url = {https://github.com/lightonai/pylate},
doi = {10.1145/3746252.3761608},
}