sasha-smirnov committed on
Commit 3e9ea69 · verified · 1 Parent(s): 6929eeb

Initial publish via td-embeddings
README.md ADDED
@@ -0,0 +1,194 @@
---
license: apache-2.0
language:
- code
library_name: transformers
pipeline_tag: feature-extraction
base_model: codesage/codesage-base
tags:
- onnx
- teradata
- byom
- embeddings
- feature-extraction
---

> Read the disclaimer below before using this model.

----

# codesage-base -- ONNX for Teradata BYOM

This repository hosts an **ONNX-converted** version of the upstream
model [`codesage/codesage-base`](https://huggingface.co/codesage/codesage-base),
packaged for the Teradata Vantage `mldb.ONNXEmbeddings` BYOM
function. It is **not** the original PyTorch model -- only the
inference graph and tokenizer needed for in-database embedding
generation.

What's different from upstream:

- **Format**: ONNX (opset 14, IR version 8 -- BYOM 6+ compatible),
  produced from the upstream weights with architecture-aware
  post-processing baked in.
- **Precision**: dynamic int8 quantization. See the variants table
  below for what is shipped for this model.
- **Pooling and post-processing**: this graph emits the pooled
  `sentence_embedding` tensor; the pooling rule is **mean**.
- **Verification**: every variant's cosine fidelity vs. the
  upstream PyTorch reference is recorded on a fixed CodeSearchNet
  sample. Numbers may not generalize to your data.

## Model details

| | |
|---|---|
| Upstream repo | [`codesage/codesage-base`](https://huggingface.co/codesage/codesage-base) |
| Architecture | `CodeSage` (encoder) |
| Parameters | 354,742,272 |
| Output dimensions | 1024 |
| Pooling | `mean` |
| Instruction prefix | no |
| Max input tokens (advertised) | 2048 |
| Languages | 9 |
| License | apache-2.0 |
| ONNX opset | 14 |
| ONNX IR version | 8 (BYOM 6+ compatible) |

<details>
<summary>Full language list (9)</summary>

- `c`
- `c-sharp`
- `go`
- `java`
- `javascript`
- `typescript`
- `php`
- `python`
- `ruby`

</details>

## Quantization variants

This repository ships the following variants. Quality numbers are
measured against the fp32 ONNX reference on a fixed CodeSearchNet
sample. The **Size** column is the on-disk size of the ONNX weight
file in megabytes (MB, 10^6 bytes).

| Variant | Size (MB) | p50 cosine | R@1 |
|---|---|---|---|
| `fp32` | 1419.7 | 1.000000 | — |
| `ffn_skip` | 358.0 | 0.914119 | 0.934 |

How to read the quality columns:

- **p50 cosine** is the median cosine similarity between this
  variant's embeddings and the fp32 ONNX reference, computed over
  a fixed evaluation set. Higher means closer to the unquantized
  model; **1.0** is identical.
- **R@1** is top-1 retrieval consistency: if you use this variant
  as a search index, R@1 is the fraction of queries that get the
  same nearest neighbor as the fp32 reference would. Higher is
  better.

Notes:
- **fp32**: full-precision reference. Useful as an accuracy ceiling,
  but BYOM users almost always want the int8 variant for
  in-database scoring -- it is roughly 4x smaller and loads much
  faster.
- **ffn_skip**: dynamic int8 with the feed-forward (FFN) MatMul
  layers kept in **fp32**, while attention and projection MatMuls
  stay quantized. The FFN layers are where most of the quantization
  error in transformer blocks concentrates; leaving them in fp32
  recovers most of the quality loss for a modest size increase.
  The artifact is roughly **4x smaller than fp32** (though larger
  than a fully-quantized per-channel int8 sibling would be).

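Both quality columns are straightforward to compute. A minimal NumPy
sketch of the two metrics, operating on two row-aligned embedding
matrices (variant vs. fp32 reference) -- this is an illustration of
the definitions above, not this repository's actual evaluation
harness:

```python
import numpy as np

def p50_cosine(variant: np.ndarray, reference: np.ndarray) -> float:
    """Median cosine similarity between row-aligned embedding matrices."""
    v = variant / np.linalg.norm(variant, axis=1, keepdims=True)
    r = reference / np.linalg.norm(reference, axis=1, keepdims=True)
    return float(np.median(np.sum(v * r, axis=1)))

def r_at_1(q_var, c_var, q_ref, c_ref) -> float:
    """Fraction of queries whose top-1 cosine neighbor under the variant
    embeddings matches the top-1 neighbor under the reference embeddings."""
    def top1(q, c):
        q = q / np.linalg.norm(q, axis=1, keepdims=True)
        c = c / np.linalg.norm(c, axis=1, keepdims=True)
        return np.argmax(q @ c.T, axis=1)
    return float(np.mean(top1(q_var, c_var) == top1(q_ref, c_ref)))
```

By construction, comparing the reference against itself yields a p50
cosine of 1.0, which is why the `fp32` row reads 1.000000.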
## Quickstart: using this model with Teradata BYOM

Requires Teradata Vantage with **BYOM 6+** (`mldb.ONNXEmbeddings`).

```python
import getpass
import teradataml as tdml
from huggingface_hub import hf_hub_download

repo_id = "Teradata/codesage-base"
model_id = "codesage-base"  # arbitrary, used as the BYOM model_id
onnx_file = "onnx/model-ffn_skip.onnx"

# 1. Download the ONNX + tokenizer for the chosen variant.
hf_hub_download(repo_id=repo_id, filename=onnx_file, local_dir="./")
hf_hub_download(repo_id=repo_id, filename="tokenizer.json", local_dir="./")

# 2. Connect to Vantage.
tdml.create_context(
    host=input("host: "),
    username=input("user: "),
    password=getpass.getpass("password: "),
)

# 3. Load model + tokenizer into BYOM tables (one-time per model_id).
tdml.save_byom(model_id=model_id, model_file=onnx_file,
               table_name="embeddings_models")
tdml.save_byom(model_id=model_id, model_file="tokenizer.json",
               table_name="embeddings_tokenizers")
```

Then call `mldb.ONNXEmbeddings` against an input table whose
`txt` column carries the strings to embed:

```sql
SELECT *
FROM mldb.ONNXEmbeddings(
    ON (SELECT id, txt FROM your_input_table) AS InputTable
    ON (SELECT model_id, model FROM embeddings_models
        WHERE model_id = 'codesage-base') AS ModelTable DIMENSION
    ON (SELECT model_id, tokenizer FROM embeddings_tokenizers
        WHERE model_id = 'codesage-base') AS TokenizerTable DIMENSION
    USING
        Accumulate('id')
        ModelOutputTensor('sentence_embedding')
        OutputFormat('FLOAT32(1024)')
        OverwriteCachedModel('*')
) AS t
ORDER BY id;
```

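On the client, the returned embedding columns can be gathered back
into a NumPy matrix for ad-hoc similarity checks. A minimal sketch,
assuming the result set was pulled into a pandas DataFrame whose
embedding columns follow an `emb_0` … `emb_1023` naming scheme --
the actual column names depend on your BYOM version, so treat the
names here as hypothetical:

```python
import numpy as np
import pandas as pd

def to_matrix(df: pd.DataFrame, dim: int = 1024) -> np.ndarray:
    """Stack FLOAT32 embedding columns emb_0..emb_{dim-1} into (n, dim)."""
    cols = [f"emb_{i}" for i in range(dim)]
    return df[cols].to_numpy(dtype=np.float32)

def cosine_top1(queries: np.ndarray, corpus: np.ndarray) -> np.ndarray:
    """Index of the nearest corpus row for each query row (cosine)."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    return np.argmax(q @ c.T, axis=1)

# Hypothetical usage with a tiny 4-dim frame instead of 1024 dims:
df = pd.DataFrame({"id": [1, 2],
                   "emb_0": [1.0, 0.0], "emb_1": [0.0, 1.0],
                   "emb_2": [0.0, 0.0], "emb_3": [0.0, 0.0]})
mat = to_matrix(df, dim=4)
```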
Pooling rule **`mean`** is applied **inside** the converted
ONNX graph -- the output tensor named above already contains the
pooled, post-processed embedding vector.

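For reference, masked mean pooling averages the per-token hidden
states over non-padding positions only. A minimal NumPy sketch of
the rule that is baked into this graph -- the shapes and variable
names are illustrative assumptions, not tied to the graph's
internals:

```python
import numpy as np

def mean_pool(token_states: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token hidden states over non-padding positions.

    token_states: (batch, seq_len, dim); attention_mask: (batch, seq_len) of 0/1.
    """
    mask = attention_mask[..., None].astype(token_states.dtype)
    summed = (token_states * mask).sum(axis=1)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)  # guard empty sequences
    return summed / counts
```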
## Original model attribution

The original weights and training methodology belong to
**the CodeSage authors**. Please cite their work, not this
repository, in academic contexts. The canonical upstream model card
is at
[`codesage/codesage-base`](https://huggingface.co/codesage/codesage-base);
refer to it for benchmarks, training details, intended use, and
citation information.

## Reporting issues

For ONNX-conversion or BYOM-compatibility issues specific to this
Teradata-converted artifact, please open a **Discussion** on this
model's Hugging Face page. Questions about the underlying model
quality, training, or intended use should go to the upstream
maintainer's model card.

----

DISCLAIMER: The content herein ("Content") is provided "AS IS" and is not covered by any Teradata Operations, Inc. and its affiliates ("Teradata") agreements. Its listing here does not constitute certification or endorsement by Teradata.

To the extent any of the Content contains or is related to any artificial intelligence ("AI") or other language learning models ("Models") that interoperate with the products and services of Teradata, by accessing, bringing, deploying or using such Models, you acknowledge and agree that you are solely responsible for ensuring compliance with all applicable laws, regulations, and restrictions governing the use, deployment, and distribution of AI technologies. This includes, but is not limited to, AI Diffusion Rules, European Union AI Act, AI-related laws and regulations, privacy laws, export controls, and financial or sector-specific regulations.

While Teradata may provide support, guidance, or assistance in the deployment or implementation of Models to interoperate with Teradata's products and/or services, you remain fully responsible for ensuring that your Models, data, and applications comply with all relevant legal and regulatory obligations. Our assistance does not constitute legal or regulatory approval, and Teradata disclaims any liability arising from non-compliance with applicable laws.

You must determine the suitability of the Models for any purpose. Given the probabilistic nature of machine learning and modeling, the use of the Models may in some situations result in incorrect output that does not accurately reflect the action generated. You should evaluate the accuracy of any output as appropriate for your use case, including by using human review of the output.
config.json ADDED
@@ -0,0 +1,25 @@
{
  "_name_or_path": "codesage/codesage-base",
  "architectures": [
    "CodeSage"
  ],
  "auto_map": {
    "AutoConfig": "config_codesage.CodeSageConfig",
    "AutoTokenizer": "tokenization_codesage.CodeSageTokenizer",
    "AutoModel": "modeling_codesage.CodeSageModel",
    "AutoModelForMaskedLM": "modeling_codesage.CodeSageForMaskedLM",
    "AutoModelForSequenceClassification": "modeling_codesage.CodeSageForSequenceClassification"
  },
  "activation_function": "gelu_new",
  "attention_dropout_prob": 0.1,
  "embedding_dropout_prob": 0.1,
  "initializer_range": 0.02,
  "layer_norm_epsilon": 1e-05,
  "hidden_size": 1024,
  "num_attention_heads": 8,
  "num_hidden_layers": 24,
  "intermediate_size": 4096,
  "max_position_embeddings": 2048,
  "residual_dropout_prob": 0.1,
  "vocab_size": 49154
}
generation_config.json ADDED
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 50256,
  "eos_token_id": 50256,
  "transformers_version": "4.28.1"
}
onnx/model-ffn_skip.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b6bc05d273d77de0f4b0e0fe5e67676dcf35d5b989113dfcd86f956276d21b68
size 358040219
onnx/model-fp32.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5036be2a0c15dd40ac58a63c1f93da9e0129a6514f175779a5e13fa478ddb9ce
size 1419735005
special_tokens_map.json ADDED
@@ -0,0 +1,28 @@
{
  "additional_special_tokens": [
    "<|endoftext|>",
    "<fim_prefix>",
    "<fim_middle>",
    "<fim_suffix>",
    "<fim_pad>",
    "<filename>",
    "<gh_stars>",
    "<issue_start>",
    "<issue_comment>",
    "<issue_closed>",
    "<jupyter_start>",
    "<jupyter_text>",
    "<jupyter_code>",
    "<jupyter_output>",
    "<empty_output>",
    "<commit_before>",
    "<commit_msg>",
    "<commit_after>",
    "<reponame>"
  ],
  "bos_token": "<|endoftext|>",
  "eos_token": "<|endoftext|>",
  "mask_token": "<mask>",
  "pad_token": "<pad>",
  "unk_token": "<|endoftext|>"
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,35 @@
{
  "add_prefix_space": false,
  "additional_special_tokens": [
    "<|endoftext|>",
    "<fim_prefix>",
    "<fim_middle>",
    "<fim_suffix>",
    "<fim_pad>",
    "<filename>",
    "<gh_stars>",
    "<issue_start>",
    "<issue_comment>",
    "<issue_closed>",
    "<jupyter_start>",
    "<jupyter_text>",
    "<jupyter_code>",
    "<jupyter_output>",
    "<empty_output>",
    "<commit_before>",
    "<commit_msg>",
    "<commit_after>",
    "<reponame>"
  ],
  "bos_token": "<|endoftext|>",
  "clean_up_tokenization_spaces": true,
  "eos_token": "<|endoftext|>",
  "add_eos_token": true,
  "model_max_length": 1000000000000000019884624838656,
  "unk_token": "<|endoftext|>",
  "vocab_size": 49152,
  "tokenizer_class": "CodeSageTokenizer",
  "auto_map": {
    "AutoTokenizer": ["tokenization_codesage.CodeSageTokenizer", null]
  }
}