---
license: apache-2.0
language:
- code
library_name: transformers
pipeline_tag: feature-extraction
base_model: codesage/codesage-large-v2
tags:
- onnx
- teradata
- byom
- embeddings
- feature-extraction
---

> Read the disclaimer below before using this model.

----

# codesage-large-v2 -- ONNX for Teradata BYOM

This repository hosts an **ONNX-converted** version of the upstream
model [`codesage/codesage-large-v2`](https://huggingface.co/codesage/codesage-large-v2),
packaged for the Teradata Vantage `mldb.ONNXEmbeddings` BYOM
function. It is **not** the original PyTorch model -- only the
inference graph and tokenizer needed for in-database embedding
generation.

What's different from upstream:

- **Format**: ONNX (opset 14, IR version 8 -- BYOM 6+ compatible),
  produced from the upstream weights with architecture-aware
  post-processing baked in.
- **Precision**: dynamic int8 quantization. See the variants table
  below for what is shipped for this model.
- **Pooling and post-processing**: the graph emits the final
  `sentence_embedding` tensor directly; the pooling rule is
  **mean** (see the sketch after this list).
- **Verification**: every variant's cosine fidelity vs. the
  upstream PyTorch reference is recorded on a fixed
  CodeSearchNet sample. Numbers may not generalize
  to your data.
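
For reference, the **mean** rule is a masked average over the token
embeddings. A minimal numpy sketch of what the graph bakes in
(illustrative only; the exported ONNX subgraph may differ in exact ops):

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Masked mean over the sequence axis.

    token_embeddings: (batch, seq_len, hidden) last hidden state
    attention_mask:   (batch, seq_len), 1 for real tokens, 0 for padding
    """
    mask = attention_mask[..., None].astype(token_embeddings.dtype)
    summed = (token_embeddings * mask).sum(axis=1)   # (batch, hidden)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)   # avoid divide-by-zero
    return summed / counts
```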

## Model details

| Property | Value |
|---|---|
| Upstream repo | [`codesage/codesage-large-v2`](https://huggingface.co/codesage/codesage-large-v2) |
| Architecture | `CodeSage` (encoder) |
| Parameters | 1,313,464,320 |
| Output dimensions | 2048 |
| Pooling | `mean` |
| Instruction prefix | no |
| Max input tokens (advertised) | 2048 |
| Languages | 9 |
| License | apache-2.0 |
| ONNX opset | 14 |
| ONNX IR version | 8 (BYOM 6+ compatible) |

<details>
<summary>Full language list (9)</summary>

- `c`
- `c-sharp`
- `go`
- `java`
- `javascript`
- `typescript`
- `php`
- `python`
- `ruby`

</details>

## Quantization variants

This repository ships the following variants. Quality numbers are
measured against the upstream PyTorch reference on a fixed
CodeSearchNet sample. The **Size** column is the
on-disk size of the ONNX weight file in megabytes (MB, 10^6 bytes).

| Variant | Size (MB) | p50 cosine | R@1 |
|---|---|---|---|
| `ffn_skip` | 1318.9 | 0.819499 | 0.919 |

How to read the quality columns (a sketch of both computations
follows the list):

- **p50 cosine** is the median cosine similarity between this
  variant's embeddings and the fp32 ONNX reference, computed over
  a fixed evaluation set. Higher means closer to the unquantized
  model; **1.0** is identical.
- **R@1** is top-1 retrieval consistency: if you use this variant
  as a search index, R@1 is the fraction of queries that get the
  same nearest neighbor as the fp32 reference would. Higher is
  better.
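
Both columns can be reproduced from two row-aligned embedding
matrices. A minimal numpy sketch (the evaluation harness itself is
not part of this repo; all names below are illustrative):

```python
import numpy as np

def p50_cosine(variant: np.ndarray, reference: np.ndarray) -> float:
    """Median cosine similarity between row-aligned embedding matrices."""
    a = variant / np.linalg.norm(variant, axis=1, keepdims=True)
    b = reference / np.linalg.norm(reference, axis=1, keepdims=True)
    return float(np.median((a * b).sum(axis=1)))

def r_at_1(q_var, c_var, q_ref, c_ref) -> float:
    """Fraction of queries whose top-1 neighbor matches the fp32 reference's."""
    def top1(q: np.ndarray, c: np.ndarray) -> np.ndarray:
        q = q / np.linalg.norm(q, axis=1, keepdims=True)
        c = c / np.linalg.norm(c, axis=1, keepdims=True)
        return (q @ c.T).argmax(axis=1)
    return float((top1(q_var, c_var) == top1(q_ref, c_ref)).mean())
```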

Notes:

- **ffn_skip**: dynamic int8 with the feed-forward (FFN) MatMul
  layers kept in **fp32**, while attention and projection MatMuls
  stay quantized. The FFN layers are where most of the quantization
  error in transformer blocks concentrates; leaving them in fp32
  recovers most of the quality loss for a modest size increase.
  The artifact is roughly **3x smaller than fp32** (larger than the
  per_channel int8 sibling).

## Quickstart: using this model with Teradata BYOM

Requires Teradata Vantage with **BYOM 6+** (`mldb.ONNXEmbeddings`).

```python
import getpass
import teradataml as tdml
from huggingface_hub import hf_hub_download

repo_id = "Teradata/codesage-large-v2"
model_id = "codesage-large-v2"  # arbitrary, used as the BYOM model_id
onnx_file = "onnx/model-ffn_skip.onnx"

# 1. Download the ONNX + tokenizer for the chosen variant.
hf_hub_download(repo_id=repo_id, filename=onnx_file, local_dir="./")
hf_hub_download(repo_id=repo_id, filename="tokenizer.json", local_dir="./")

# 2. Connect to Vantage.
tdml.create_context(
    host=input("host: "),
    username=input("user: "),
    password=getpass.getpass("password: "),
)

# 3. Load model + tokenizer into BYOM tables (one-time per model_id).
tdml.save_byom(model_id=model_id, model_file=onnx_file,
               table_name="embeddings_models")
tdml.save_byom(model_id=model_id, model_file="tokenizer.json",
               table_name="embeddings_tokenizers")
```

Then call `mldb.ONNXEmbeddings` against an input table whose
`txt` column carries the strings to embed.

This model emits a **2048-dimensional** embedding.
Teradata's wide-table output projection is capped at **2048 columns**,
which blocks the `FLOAT32(2048) + Accumulate('id')`
projection that smaller models use. Pick the SQL form that matches
your Vantage version:

**Option A -- `VARBYTE` (works on TD 17.20 and TD 20.0+)**

The vector lands as raw, headerless float32 bytes
(`2048 * 4 = 8192` bytes),
which fits in a single `VARBYTE` column and dodges the 2048-column cap.
Decode on the client side (a sketch follows the query below) or wrap
the call in a UDF that returns `FLOAT[]`.

```sql
SELECT *
FROM mldb.ONNXEmbeddings(
    ON (SELECT id, txt FROM your_input_table) AS InputTable
    ON (SELECT model_id, model FROM embeddings_models
        WHERE model_id = 'codesage-large-v2') AS ModelTable DIMENSION
    ON (SELECT model_id, tokenizer FROM embeddings_tokenizers
        WHERE model_id = 'codesage-large-v2') AS TokenizerTable DIMENSION
    USING
        Accumulate('id')
        ModelOutputTensor('sentence_embedding')
        OutputFormat('VARBYTE(8192)')
        OverwriteCachedModel('*')
) AS t
ORDER BY id;
```
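
Decoding the `VARBYTE` payload on the client is a one-liner per row:
each cell is 8192 bytes of little-endian float32. A sketch using
`teradataml` and numpy; the table name `my_embeddings_out` and the
output column name are placeholders, so check what your query
actually returns:

```python
import numpy as np
import teradataml as tdml

# Pull the stored ONNXEmbeddings output to the client.
# "my_embeddings_out" is a placeholder for wherever you landed the result.
df = tdml.DataFrame("my_embeddings_out").to_pandas()

# Each VARBYTE cell holds 2048 little-endian float32 values (8192 bytes).
vectors = np.stack(
    [np.frombuffer(buf, dtype="<f4") for buf in df["sentence_embedding"]]
)
print(vectors.shape)  # (num_rows, 2048)
```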

**Option B -- `VECTOR` (TD 20.0+ only)**

Vantage 20.0 introduced a native `VECTOR` datatype that holds the
full embedding as a single typed column, with native vector-similarity
operators available on it.

```sql
SELECT *
FROM mldb.ONNXEmbeddings(
    ON (SELECT id, txt FROM your_input_table) AS InputTable
    ON (SELECT model_id, model FROM embeddings_models
        WHERE model_id = 'codesage-large-v2') AS ModelTable DIMENSION
    ON (SELECT model_id, tokenizer FROM embeddings_tokenizers
        WHERE model_id = 'codesage-large-v2') AS TokenizerTable DIMENSION
    USING
        Accumulate('id')
        ModelOutputTensor('sentence_embedding')
        OutputFormat('VECTOR')
        OverwriteCachedModel('*')
) AS t
ORDER BY id;
```

Use `VECTOR` if your Vantage version supports it; otherwise fall back
to `VARBYTE`. Both forms emit the same underlying float32 values.

Pooling rule **`mean`** is applied **inside** the converted
ONNX graph -- the output tensor named above already contains the
pooled, post-processed embedding vector.
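
To sanity-check the artifact outside the database, the same graph can
be run locally with `onnxruntime` and the bundled tokenizer. A sketch,
assuming the usual `input_ids`/`attention_mask` graph inputs; confirm
the real names with `session.get_inputs()`, since export-time names
can differ:

```python
import numpy as np
import onnxruntime as ort
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_file("tokenizer.json")
session = ort.InferenceSession("onnx/model-ffn_skip.onnx")

enc = tokenizer.encode("def hello():\n    return 'world'")
feeds = {
    # Input names are assumptions; check session.get_inputs() on a mismatch.
    "input_ids": np.array([enc.ids], dtype=np.int64),
    "attention_mask": np.array([enc.attention_mask], dtype=np.int64),
}
(embedding,) = session.run(["sentence_embedding"], feeds)
print(embedding.shape)  # expected: (1, 2048)
```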

## Original model attribution

The original weights and training methodology belong to
**the CodeSage authors**. Please cite their work, not this
repository, in academic contexts. The canonical upstream model card
is at
[`codesage/codesage-large-v2`](https://huggingface.co/codesage/codesage-large-v2);
refer to it for benchmarks, training details, intended use, and
citation information.

## Reporting issues

For ONNX-conversion or BYOM-compatibility issues specific to this
Teradata-converted artifact, please open a **Discussion** on this
model's Hugging Face page. Questions about the underlying model's
quality, training, or intended use should go to the upstream
maintainers' model card.

----

DISCLAIMER: The content herein ("Content") is provided "AS IS" and is not covered by any Teradata Operations, Inc. and its affiliates ("Teradata") agreements. Its listing here does not constitute certification or endorsement by Teradata.

To the extent any of the Content contains or is related to any artificial intelligence ("AI") or other language learning models ("Models") that interoperate with the products and services of Teradata, by accessing, bringing, deploying or using such Models, you acknowledge and agree that you are solely responsible for ensuring compliance with all applicable laws, regulations, and restrictions governing the use, deployment, and distribution of AI technologies. This includes, but is not limited to, AI Diffusion Rules, European Union AI Act, AI-related laws and regulations, privacy laws, export controls, and financial or sector-specific regulations.

While Teradata may provide support, guidance, or assistance in the deployment or implementation of Models to interoperate with Teradata's products and/or services, you remain fully responsible for ensuring that your Models, data, and applications comply with all relevant legal and regulatory obligations. Our assistance does not constitute legal or regulatory approval, and Teradata disclaims any liability arising from non-compliance with applicable laws.

You must determine the suitability of the Models for any purpose. Given the probabilistic nature of machine learning and modeling, the use of the Models may in some situations result in incorrect output that does not accurately reflect the action generated. You should evaluate the accuracy of any output as appropriate for your use case, including by using human review of the output.