diff --git "a/README.md" "b/README.md" --- "a/README.md" +++ "b/README.md" @@ -5,13 +5,490 @@ tags: - sentence-transformers - sentence-similarity - feature-extraction +- generated_from_trainer +- dataset_size:2117771 +- loss:Contrastive +- code +- embeddings +- retrieval +- code search +datasets: +- lightonai/nv-embed-supervised-distill-dedup-code pipeline_tag: sentence-similarity library_name: PyLate +license: apache-2.0 +language: +- en +- code +metrics: +- MaxSim_accuracy@1 +- MaxSim_accuracy@3 +- MaxSim_accuracy@5 +- MaxSim_accuracy@10 +- MaxSim_precision@1 +- MaxSim_precision@3 +- MaxSim_precision@5 +- MaxSim_precision@10 +- MaxSim_recall@1 +- MaxSim_recall@3 +- MaxSim_recall@5 +- MaxSim_recall@10 +- MaxSim_ndcg@10 +- MaxSim_mrr@10 +- MaxSim_map@100 +model-index: +- name: PyLate + results: + - task: + type: py-late-information-retrieval + name: Py Late Information Retrieval + dataset: + name: CodeSearchNetPython + type: CodeSearchNetPython + metrics: + - type: MaxSim_accuracy@1 + value: 0.855 + name: Maxsim Accuracy@1 + - type: MaxSim_accuracy@3 + value: 0.958 + name: Maxsim Accuracy@3 + - type: MaxSim_accuracy@5 + value: 0.972 + name: Maxsim Accuracy@5 + - type: MaxSim_accuracy@10 + value: 0.98 + name: Maxsim Accuracy@10 + - type: MaxSim_precision@1 + value: 0.855 + name: Maxsim Precision@1 + - type: MaxSim_precision@3 + value: 0.31933333333333325 + name: Maxsim Precision@3 + - type: MaxSim_precision@5 + value: 0.19440000000000004 + name: Maxsim Precision@5 + - type: MaxSim_precision@10 + value: 0.09800000000000002 + name: Maxsim Precision@10 + - type: MaxSim_recall@1 + value: 0.855 + name: Maxsim Recall@1 + - type: MaxSim_recall@3 + value: 0.958 + name: Maxsim Recall@3 + - type: MaxSim_recall@5 + value: 0.972 + name: Maxsim Recall@5 + - type: MaxSim_recall@10 + value: 0.98 + name: Maxsim Recall@10 + - type: MaxSim_ndcg@10 + value: 0.9243945806879859 + name: Maxsim Ndcg@10 + - type: MaxSim_mrr@10 + value: 0.9057539682539687 + name: Maxsim Mrr@10 + - type: 
MaxSim_map@100 + value: 0.9064418634729382 + name: Maxsim Map@100 + - task: + type: py-late-information-retrieval + name: Py Late Information Retrieval + dataset: + name: CodeSearchNetJavascript + type: CodeSearchNetJavascript + metrics: + - type: MaxSim_accuracy@1 + value: 0.707 + name: Maxsim Accuracy@1 + - type: MaxSim_accuracy@3 + value: 0.815 + name: Maxsim Accuracy@3 + - type: MaxSim_accuracy@5 + value: 0.845 + name: Maxsim Accuracy@5 + - type: MaxSim_accuracy@10 + value: 0.877 + name: Maxsim Accuracy@10 + - type: MaxSim_precision@1 + value: 0.707 + name: Maxsim Precision@1 + - type: MaxSim_precision@3 + value: 0.2716666666666666 + name: Maxsim Precision@3 + - type: MaxSim_precision@5 + value: 0.169 + name: Maxsim Precision@5 + - type: MaxSim_precision@10 + value: 0.08770000000000001 + name: Maxsim Precision@10 + - type: MaxSim_recall@1 + value: 0.707 + name: Maxsim Recall@1 + - type: MaxSim_recall@3 + value: 0.815 + name: Maxsim Recall@3 + - type: MaxSim_recall@5 + value: 0.845 + name: Maxsim Recall@5 + - type: MaxSim_recall@10 + value: 0.877 + name: Maxsim Recall@10 + - type: MaxSim_ndcg@10 + value: 0.7937015046112885 + name: Maxsim Ndcg@10 + - type: MaxSim_mrr@10 + value: 0.7667960317460317 + name: Maxsim Mrr@10 + - type: MaxSim_map@100 + value: 0.7695522566859624 + name: Maxsim Map@100 + - task: + type: py-late-information-retrieval + name: Py Late Information Retrieval + dataset: + name: CodeSearchNetGo + type: CodeSearchNetGo + metrics: + - type: MaxSim_accuracy@1 + value: 0.92 + name: Maxsim Accuracy@1 + - type: MaxSim_accuracy@3 + value: 0.978 + name: Maxsim Accuracy@3 + - type: MaxSim_accuracy@5 + value: 0.987 + name: Maxsim Accuracy@5 + - type: MaxSim_accuracy@10 + value: 0.991 + name: Maxsim Accuracy@10 + - type: MaxSim_precision@1 + value: 0.92 + name: Maxsim Precision@1 + - type: MaxSim_precision@3 + value: 0.32599999999999996 + name: Maxsim Precision@3 + - type: MaxSim_precision@5 + value: 0.19740000000000005 + name: Maxsim Precision@5 + - type: 
MaxSim_precision@10 + value: 0.09910000000000002 + name: Maxsim Precision@10 + - type: MaxSim_recall@1 + value: 0.92 + name: Maxsim Recall@1 + - type: MaxSim_recall@3 + value: 0.978 + name: Maxsim Recall@3 + - type: MaxSim_recall@5 + value: 0.987 + name: Maxsim Recall@5 + - type: MaxSim_recall@10 + value: 0.991 + name: Maxsim Recall@10 + - type: MaxSim_ndcg@10 + value: 0.9607370553228975 + name: Maxsim Ndcg@10 + - type: MaxSim_mrr@10 + value: 0.9504940476190477 + name: Maxsim Mrr@10 + - type: MaxSim_map@100 + value: 0.9507803176498298 + name: Maxsim Map@100 + - task: + type: py-late-information-retrieval + name: Py Late Information Retrieval + dataset: + name: CodeSearchNetRuby + type: CodeSearchNetRuby + metrics: + - type: MaxSim_accuracy@1 + value: 0.737 + name: Maxsim Accuracy@1 + - type: MaxSim_accuracy@3 + value: 0.87 + name: Maxsim Accuracy@3 + - type: MaxSim_accuracy@5 + value: 0.899 + name: Maxsim Accuracy@5 + - type: MaxSim_accuracy@10 + value: 0.921 + name: Maxsim Accuracy@10 + - type: MaxSim_precision@1 + value: 0.737 + name: Maxsim Precision@1 + - type: MaxSim_precision@3 + value: 0.2899999999999999 + name: Maxsim Precision@3 + - type: MaxSim_precision@5 + value: 0.17980000000000002 + name: Maxsim Precision@5 + - type: MaxSim_precision@10 + value: 0.09210000000000003 + name: Maxsim Precision@10 + - type: MaxSim_recall@1 + value: 0.737 + name: Maxsim Recall@1 + - type: MaxSim_recall@3 + value: 0.87 + name: Maxsim Recall@3 + - type: MaxSim_recall@5 + value: 0.899 + name: Maxsim Recall@5 + - type: MaxSim_recall@10 + value: 0.921 + name: Maxsim Recall@10 + - type: MaxSim_ndcg@10 + value: 0.8356874462458972 + name: Maxsim Ndcg@10 + - type: MaxSim_mrr@10 + value: 0.8076091269841275 + name: Maxsim Mrr@10 + - type: MaxSim_map@100 + value: 0.8095189889370982 + name: Maxsim Map@100 + - task: + type: py-late-information-retrieval + name: Py Late Information Retrieval + dataset: + name: CodeSearchNetJava + type: CodeSearchNetJava + metrics: + - type: 
MaxSim_accuracy@1 + value: 0.755 + name: Maxsim Accuracy@1 + - type: MaxSim_accuracy@3 + value: 0.914 + name: Maxsim Accuracy@3 + - type: MaxSim_accuracy@5 + value: 0.937 + name: Maxsim Accuracy@5 + - type: MaxSim_accuracy@10 + value: 0.951 + name: Maxsim Accuracy@10 + - type: MaxSim_precision@1 + value: 0.755 + name: Maxsim Precision@1 + - type: MaxSim_precision@3 + value: 0.30466666666666664 + name: Maxsim Precision@3 + - type: MaxSim_precision@5 + value: 0.18740000000000004 + name: Maxsim Precision@5 + - type: MaxSim_precision@10 + value: 0.09510000000000002 + name: Maxsim Precision@10 + - type: MaxSim_recall@1 + value: 0.755 + name: Maxsim Recall@1 + - type: MaxSim_recall@3 + value: 0.914 + name: Maxsim Recall@3 + - type: MaxSim_recall@5 + value: 0.937 + name: Maxsim Recall@5 + - type: MaxSim_recall@10 + value: 0.951 + name: Maxsim Recall@10 + - type: MaxSim_ndcg@10 + value: 0.8654697550394161 + name: Maxsim Ndcg@10 + - type: MaxSim_mrr@10 + value: 0.836704761904762 + name: Maxsim Mrr@10 + - type: MaxSim_map@100 + value: 0.8379490131977781 + name: Maxsim Map@100 + - task: + type: py-late-information-retrieval + name: Py Late Information Retrieval + dataset: + name: CodeSearchNetPhp + type: CodeSearchNetPhp + metrics: + - type: MaxSim_accuracy@1 + value: 0.802 + name: Maxsim Accuracy@1 + - type: MaxSim_accuracy@3 + value: 0.91 + name: Maxsim Accuracy@3 + - type: MaxSim_accuracy@5 + value: 0.932 + name: Maxsim Accuracy@5 + - type: MaxSim_accuracy@10 + value: 0.953 + name: Maxsim Accuracy@10 + - type: MaxSim_precision@1 + value: 0.802 + name: Maxsim Precision@1 + - type: MaxSim_precision@3 + value: 0.30333333333333323 + name: Maxsim Precision@3 + - type: MaxSim_precision@5 + value: 0.18640000000000004 + name: Maxsim Precision@5 + - type: MaxSim_precision@10 + value: 0.09530000000000001 + name: Maxsim Precision@10 + - type: MaxSim_recall@1 + value: 0.802 + name: Maxsim Recall@1 + - type: MaxSim_recall@3 + value: 0.91 + name: Maxsim Recall@3 + - type: 
MaxSim_recall@5 + value: 0.932 + name: Maxsim Recall@5 + - type: MaxSim_recall@10 + value: 0.953 + name: Maxsim Recall@10 + - type: MaxSim_ndcg@10 + value: 0.8823849310511876 + name: Maxsim Ndcg@10 + - type: MaxSim_mrr@10 + value: 0.8592019841269843 + name: Maxsim Mrr@10 + - type: MaxSim_map@100 + value: 0.8600229577124362 + name: Maxsim Map@100 + - task: + type: code-search-network + name: Code Search Network + dataset: + name: CodeSearchNet mean + type: CodeSearchNet_mean + metrics: + - type: MaxSim_accuracy@1 + value: 0.7959999999999999 + name: Maxsim Accuracy@1 + - type: MaxSim_accuracy@3 + value: 0.9075000000000001 + name: Maxsim Accuracy@3 + - type: MaxSim_accuracy@5 + value: 0.9286666666666666 + name: Maxsim Accuracy@5 + - type: MaxSim_accuracy@10 + value: 0.9455 + name: Maxsim Accuracy@10 + - type: MaxSim_precision@1 + value: 0.7959999999999999 + name: Maxsim Precision@1 + - type: MaxSim_precision@3 + value: 0.30249999999999994 + name: Maxsim Precision@3 + - type: MaxSim_precision@5 + value: 0.1857333333333334 + name: Maxsim Precision@5 + - type: MaxSim_precision@10 + value: 0.09455000000000002 + name: Maxsim Precision@10 + - type: MaxSim_recall@1 + value: 0.7959999999999999 + name: Maxsim Recall@1 + - type: MaxSim_recall@3 + value: 0.9075000000000001 + name: Maxsim Recall@3 + - type: MaxSim_recall@5 + value: 0.9286666666666666 + name: Maxsim Recall@5 + - type: MaxSim_recall@10 + value: 0.9455 + name: Maxsim Recall@10 + - type: MaxSim_ndcg@10 + value: 0.877062545493112 + name: Maxsim Ndcg@10 + - type: MaxSim_mrr@10 + value: 0.8544266534391536 + name: Maxsim Mrr@10 + - type: MaxSim_map@100 + value: 0.8557108996093405 + name: Maxsim Map@100 --- + + +# LateOn-Code + +The [LateOn-Code collection](https://huggingface.co/collections/lightonai/lateon-code) is composed of [PyLate](https://github.com/lightonai/pylate) models optimized for code retrieval. 
These late interaction models are first pre-trained following the methodology of [CoRNStack](https://arxiv.org/pdf/2412.01007). The pre-trained models are then further fine-tuned on the training sets of CoIR, using the [nv-retriever](https://arxiv.org/abs/2407.15831) methodology to mine hard negatives while filtering out false negatives. + +We started from the two best ColBERT models on the BEIR benchmark for their respective sizes. The first one, [LateOn-Code](https://huggingface.co/lightonai/LateOn-Code), is based on the in-house LateOn model, a new version of [GTE-ModernColBERT-v1](https://huggingface.co/lightonai/GTE-ModernColBERT-v1) built on ModernBERT-base (also developed at LightOn). This version underwent significantly deeper training and crosses the 57 mark on BEIR, an improvement of almost 2.5 points that makes it SOTA by a large margin. We'll release this base model along with training data and boilerplates in the near future, so stay tuned! The second, [LateOn-Code-edge](https://huggingface.co/lightonai/LateOn-Code-edge), is a smaller model based on the [edge-colbert model family from mixedbread](https://www.mixedbread.com/blog/edge-v0), using the [smallest variant (Ettin-17M)](https://huggingface.co/mixedbread-ai/mxbai-edge-colbert-v0-17m) for maximum efficiency. For more details on the training setup, please refer to our [blogpost](https://huggingface.co/blog/lightonai/colgrep-lateon-code). + +The original [CoRNStack data](https://huggingface.co/collections/nomic-ai/cornstack) in a format compatible with PyLate can be found [here](https://huggingface.co/datasets/lightonai/cornstack), while the fine-tuning data can be found [here](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code). 
Training boilerplates can be found [here in the PyLate repository](https://github.com/lightonai/pylate/tree/main/examples/train/lateon_code). + +## MTEB (Code, v1) benchmark results + + +The pre-trained models achieve very competitive results: the 17M model outperforms the very strong granite-embedding-small-english-r2 by an average of 1.7 points. This is truly impressive, as the granite model is not only almost three times bigger (47M vs 17M) but also a beast in its own right in the <100M-parameter range; the 17M model even outperforms the larger granite variant (149M). The larger pre-trained version scales nicely, improving over its smaller sibling by 6.5 points on average. + +Although the pre-training results are already very impressive given that they are mostly out-of-domain, proper fine-tuning on the training data of CoIR significantly boosts the performance of the models. Notably, the 17M model improves from 57.50 to 66.64 (+9.14), getting close to EmbeddingGemma-300M while being 17 times smaller. The larger model improves from 63.77 to 74.12 (+10.35), strongly outperforming EmbeddingGemma-300M and closing in on strong LLM-based models such as Qwen3-Embedding-0.6B and C2LLM-0.5B while being much smaller. 
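These fine-tuning gains come from contrastive training on mined hard negatives, with positive-aware filtering to avoid false negatives. As a rough illustration of the idea only (the threshold, function name, and signature below are assumptions for the sketch, not the exact nv-retriever implementation):

```python
import numpy as np

def mine_hard_negatives(query_emb, positive_emb, candidate_embs,
                        perc_pos=0.95, top_k=4):
    """Positive-aware hard-negative mining (illustrative sketch).

    Candidates that score above `perc_pos` times the positive's score are
    treated as likely false negatives and discarded; the remaining
    candidates are returned hardest-first.
    """
    pos_score = float(query_emb @ positive_emb)
    scores = candidate_embs @ query_emb          # similarity to the query
    keep = np.flatnonzero(scores < perc_pos * pos_score)
    hardest_first = keep[np.argsort(scores[keep])[::-1]]
    return hardest_first[:top_k].tolist()
```

With `perc_pos=0.95`, a candidate scoring 0.99 against a positive scoring 1.0 would be dropped as a probable false negative, while one scoring 0.9 would be kept as a hard negative.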
+ +| Model | Params | Type | **Avg** | Apps | COIR CSNet | CodeEdit | CodeFB MT | CodeFB ST | CSNet CC | CSNet | CodeTrans Contest | CodeTrans DL | CosQA | StackOF QA | Synth T2SQL | +|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| +| **Baseline** | | | | | | | | | | | | | | | | +| BM25 | - | Lexical | 44.41 | 4.76 | 40.86 | 49.85 | 59.19 | 68.15 | 53.97 | 60.01 | 47.78 | 34.42 | 18.75 | 70.26 | 24.94 | +| **Small (≤50M)** | | | | | | | | | | | | | | | | +| granite-embedding-small-english-r2 | 47M | Single vector | 55.84 | 13.54 | 60.46 | 57.16 | 52.19 | 76.85 | 48.42 | 78.28 | **77.63** | 33.63 | 35.58 | **90.04** | 46.33 | +| [LateOn-Code-edge-pretrain](https://huggingface.co/lightonai/LateOn-Code-edge-pretrain) | 17M | Multi vector | 57.50 | 10.81 | 73.78 | 62.07 | 51.92 | 76.65 | 63.22 | **88.03** | 71.31 | 33.16 | 30.53 | 74.63 | 53.83 | +| [LateOn-Code-edge](https://huggingface.co/lightonai/LateOn-Code-edge) | 17M | Multi vector | **66.64** | **26.22** | **81.60** | **62.21** | **74.25** | **87.12** | **79.26** | 87.85 | 75.36 | **37.08** | **40.54** | 85.63 | **62.57** | +| *Δ (fine-tune - pretrain)* | | | *+9.14* | *+15.41* | *+7.82* | *+0.14* | *+22.33* | *+10.47* | *+16.04* | *-0.18* | *+4.05* | *+3.92* | *+10.01* | *+11.00* | *+8.74* | +| **Medium (100M–300M)** | | | | | | | | | | | | | | | | +| granite-embedding-english-r2 | 149M | Single vector | 57.22 | 13.96 | 64.65 | 59.35 | 52.54 | 77.18 | 47.67 | 80.79 | 77.07 | 35.03 | 37.01 | 91.80 | 49.55 | +| CodeRankEmbed | 137M | Single vector | 60.47 | 23.45 | 83.20 | 59.98 | 42.61 | 78.10 | 68.89 | 89.50 | 66.43 | 34.49 | 35.17 | 80.53 | 63.27 | +| GTE-ModernBERT | 149M | Single vector | 71.66 | 57.72 | 83.10 | 55.83 | **86.15** | 86.00 | **93.61** | 88.76 | 72.35 | 37.27 | 43.36 | 91.14 | **64.61** | +| embeddinggemma-300m | 300M | Single vector | 68.76 | **84.39** | 75.54 | 62.10 | 51.42 | 80.26 | 73.71 | 90.15 | 85.51 | 33.52 | 43.60 | 86.47 | 58.42 | +| 
[LateOn-Code-pretrain](https://huggingface.co/lightonai/LateOn-Code-pretrain) | 149M | Multi vector | 63.77 | 23.09 | 80.27 | **68.74** | 50.21 | 82.66 | 71.47 | **91.05** | 82.20 | 34.46 | 34.15 | 85.61 | 61.34 | +| [LateOn-Code](https://huggingface.co/lightonai/LateOn-Code) | 149M | Multi vector | **74.12** | 54.76 | **86.57** | 64.99 | 82.22 | **90.40** | 89.32 | 90.40 | **87.44** | **41.00** | **45.23** | **93.43** | 63.67 | +| *Δ (fine-tune - pretrain)* | | | *+10.35* | *+31.67* | *+6.30* | *-3.75* | *+32.01* | *+7.74* | *+17.85* | *-0.65* | *+5.24* | *+6.54* | *+11.08* | *+7.82* | *+2.33* | +| **Large (≥500M)** | | | | | | | | | | | | | | | | +| C2LLM-0.5B | 500M | Single vector | **75.46** | 61.02 | **86.71** | **71.39** | **92.29** | 88.63 | **96.29** | 89.20 | 84.27 | **33.99** | **38.30** | 89.40 | 74.08 | +| Qwen3-Embedding-0.6B | 600M | Single vector | 75.42 | **75.34** | 84.69 | 64.42 | 90.82 | **86.39** | 91.72 | **91.01** | **86.05** | 31.36 | 36.48 | **89.99** | **76.74** | + +The best result within each size category is **bolded**. + +# Colgrep + +The LateOn-Code model family can easily be used within ColGrep, an easy-to-use search tool that brings its powerful search capabilities to coding agents. ColGrep is designed to extend grep's capabilities and get the best of both worlds: it is very effective at improving answer quality while reducing answer time and token consumption. Given the performance of the very lightweight 17M model, it runs quickly on any computer. 
+ +## Install ColGrep +```bash +# macOS / Linux +curl --proto '=https' --tlsv1.2 -LsSf https://github.com/lightonai/next-plaid/releases/latest/download/colgrep-installer.sh | sh + +# Windows (PowerShell) +powershell -c "irm https://github.com/lightonai/next-plaid/releases/latest/download/colgrep-installer.ps1 | iex" +``` + +## Search + +```bash +# Semantic search — find code by meaning +colgrep "function that retries HTTP requests" + +# Regex search +colgrep -e "async fn\s+\w+" + +# Hybrid — regex narrows candidates, semantics ranks them +colgrep -e "Result<" "error handling" --include="*.rs" +``` + +## Install for Claude Code + +```bash +colgrep --install-claude-code +``` + +## Choose a Model + +```bash +# Set the model
colgrep set-model lightonai/LateOn-Code # default: lightonai/LateOn-Code-edge +``` +For more information about ColGrep, please refer to the [official documentation](https://github.com/lightonai/next-plaid/tree/main/colgrep). + + # PyLate -This is a [PyLate](https://github.com/lightonai/pylate) model trained. It maps sentences & paragraphs to sequences of 48-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator. 
+This is a [PyLate](https://github.com/lightonai/pylate) model finetuned from [lightonai/LateOn-Code-edge-pretrain](https://huggingface.co/lightonai/LateOn-Code-edge-pretrain) on the [apps](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code), [synthetictext2sql](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code), [cosqa](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code), [codefeedbackst](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code), [codefeedbackmt](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code), [stackoverflowqa](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code), [codetranscontest](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code), [codetransdl](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code), [CodeSearchNet_go](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code), [CodeSearchNet_java](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code), [CodeSearchNet_javascript](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code), [CodeSearchNet_php](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code), [CodeSearchNet_python](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code), [CodeSearchNet_ruby](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code), [CodeSearchNet_ccr_go](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code), [CodeSearchNet_ccr_java](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code), [CodeSearchNet_ccr_javascript](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code), 
[CodeSearchNet_ccr_php](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code), [CodeSearchNet_ccr_python](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code) and [CodeSearchNet_ccr_ruby](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code) datasets. It maps sentences & paragraphs to sequences of 48-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator. ## Model Details @@ -22,9 +499,29 @@ This is a [PyLate](https://github.com/lightonai/pylate) model trained. It maps s - **Query Length:** 256 tokens - **Output Dimensionality:** 48 tokens - **Similarity Function:** MaxSim - - - +- **Training Datasets:** + - [apps](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code) + - [synthetictext2sql](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code) + - [cosqa](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code) + - [codefeedbackst](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code) + - [codefeedbackmt](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code) + - [stackoverflowqa](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code) + - [codetranscontest](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code) + - [codetransdl](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code) + - [CodeSearchNet_go](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code) + - [CodeSearchNet_java](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code) + - [CodeSearchNet_javascript](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code) + - [CodeSearchNet_php](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code) + - 
[CodeSearchNet_python](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code) + - [CodeSearchNet_ruby](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code) + - [CodeSearchNet_ccr_go](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code) + - [CodeSearchNet_ccr_java](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code) + - [CodeSearchNet_ccr_javascript](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code) + - [CodeSearchNet_ccr_php](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code) + - [CodeSearchNet_ccr_python](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code) + - [CodeSearchNet_ccr_ruby](https://huggingface.co/datasets/lightonai/nv-embed-supervised-distill-dedup-code) +- **Language:** English, code +- **License:** Apache 2.0 ### Model Sources @@ -62,7 +559,7 @@ from pylate import indexes, models, retrieve # Step 1: Load the ColBERT model model = models.ColBERT( - model_name_or_path="lightonai/LateOn-Code-v0-edge-merge-code-nocode", + model_name_or_path="pylate_model_id", ) # Step 2: Initialize the PLAID index @@ -146,7 +643,7 @@ documents_ids = [ ] model = models.ColBERT( - model_name_or_path="lightonai/LateOn-Code-v0-edge-merge-code-nocode", + model_name_or_path="pylate_model_id", ) queries_embeddings = model.encode( @@ -190,6 +687,54 @@ You can finetune this model on your own dataset. 
*List how the model may foreseeably be misused and address what users ought not to do with the model.* --> +## Evaluation + +### Metrics + +#### Py Late Information Retrieval +* Dataset: `['CodeSearchNetPython', 'CodeSearchNetJavascript', 'CodeSearchNetGo', 'CodeSearchNetRuby', 'CodeSearchNetJava', 'CodeSearchNetPhp']` +* Evaluated with `pylate.evaluation.pylate_information_retrieval_evaluator.PyLateInformationRetrievalEvaluator` + +| Metric | CodeSearchNetPython | CodeSearchNetJavascript | CodeSearchNetGo | CodeSearchNetRuby | CodeSearchNetJava | CodeSearchNetPhp | +|:--------------------|:--------------------|:------------------------|:----------------|:------------------|:------------------|:-----------------| +| MaxSim_accuracy@1 | 0.855 | 0.707 | 0.92 | 0.737 | 0.755 | 0.802 | +| MaxSim_accuracy@3 | 0.958 | 0.815 | 0.978 | 0.87 | 0.914 | 0.91 | +| MaxSim_accuracy@5 | 0.972 | 0.845 | 0.987 | 0.899 | 0.937 | 0.932 | +| MaxSim_accuracy@10 | 0.98 | 0.877 | 0.991 | 0.921 | 0.951 | 0.953 | +| MaxSim_precision@1 | 0.855 | 0.707 | 0.92 | 0.737 | 0.755 | 0.802 | +| MaxSim_precision@3 | 0.3193 | 0.2717 | 0.326 | 0.29 | 0.3047 | 0.3033 | +| MaxSim_precision@5 | 0.1944 | 0.169 | 0.1974 | 0.1798 | 0.1874 | 0.1864 | +| MaxSim_precision@10 | 0.098 | 0.0877 | 0.0991 | 0.0921 | 0.0951 | 0.0953 | +| MaxSim_recall@1 | 0.855 | 0.707 | 0.92 | 0.737 | 0.755 | 0.802 | +| MaxSim_recall@3 | 0.958 | 0.815 | 0.978 | 0.87 | 0.914 | 0.91 | +| MaxSim_recall@5 | 0.972 | 0.845 | 0.987 | 0.899 | 0.937 | 0.932 | +| MaxSim_recall@10 | 0.98 | 0.877 | 0.991 | 0.921 | 0.951 | 0.953 | +| **MaxSim_ndcg@10** | **0.9244** | **0.7937** | **0.9607** | **0.8357** | **0.8655** | **0.8824** | +| MaxSim_mrr@10 | 0.9058 | 0.7668 | 0.9505 | 0.8076 | 0.8367 | 0.8592 | +| MaxSim_map@100 | 0.9064 | 0.7696 | 0.9508 | 0.8095 | 0.8379 | 0.86 | + +#### Code Search Network +* Dataset: `CodeSearchNet_mean` +* Evaluated with `pylate.evaluation.code_search_network_evaluator.CodeSearchNetworkEvaluator` + +| Metric | 
Value | +|:--------------------|:-----------| +| MaxSim_accuracy@1 | 0.796 | +| MaxSim_accuracy@3 | 0.9075 | +| MaxSim_accuracy@5 | 0.9287 | +| MaxSim_accuracy@10 | 0.9455 | +| MaxSim_precision@1 | 0.796 | +| MaxSim_precision@3 | 0.3025 | +| MaxSim_precision@5 | 0.1857 | +| MaxSim_precision@10 | 0.0946 | +| MaxSim_recall@1 | 0.796 | +| MaxSim_recall@3 | 0.9075 | +| MaxSim_recall@5 | 0.9287 | +| MaxSim_recall@10 | 0.9455 | +| **MaxSim_ndcg@10** | **0.8771** | +| MaxSim_mrr@10 | 0.8544 | +| MaxSim_map@100 | 0.8557 | +
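All of the `MaxSim_*` metrics above are computed with the late-interaction MaxSim operator: each query token embedding is matched against its most similar document token embedding, and the per-token maxima are summed. A minimal NumPy sketch of the scoring rule (assuming L2-normalized token embeddings, as the model produces; function and argument names are illustrative):

```python
import numpy as np

def maxsim(query_tokens, doc_tokens):
    """Late-interaction MaxSim score between one query and one document.

    query_tokens: (n_query_tokens, dim) array of token embeddings
    doc_tokens:   (n_doc_tokens, dim) array of token embeddings
    """
    sim = query_tokens @ doc_tokens.T    # token-to-token similarity matrix
    return float(sim.max(axis=1).sum())  # best doc match per query token, summed
```

Documents are then ranked by this score for each query; the 48-dimensional token vectors keep the index compact while preserving token-level matching.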