thebajajra committed · verified
Commit d1fae38 · Parent: 89a1c24

Update README.md

Files changed (1): README.md (+204 −0)
- foundation-model
---

# RexBERT-large

> **TL;DR**: An encoder-only transformer (ModernBERT-style) for **e-commerce** applications, trained in three phases—**Pre-training**, **Context Extension**, and **Decay**—to power product search, attribute extraction, classification, and embedding use cases. The model was trained on 2.3T+ tokens overall, including a final decay phase over 350B+ e-commerce-specific tokens.

---

## Table of Contents
- [Quick Start](#quick-start)
- [Intended Uses & Limitations](#intended-uses--limitations)
- [Model Description](#model-description)
- [Training Recipe](#training-recipe)
- [Data Overview](#data-overview)
- [Evaluation](#evaluation)
- [Usage Examples](#usage-examples)
  - [Masked language modeling](#1-masked-language-modeling)
  - [Embeddings / feature extraction](#2-embeddings--feature-extraction)
  - [Text classification fine-tune](#3-text-classification-fine-tune)
- [Model Architecture & Compatibility](#model-architecture--compatibility)
- [Responsible & Safe Use](#responsible--safe-use)
- [License](#license)
- [Maintainers & Contact](#maintainers--contact)

---

## Quick Start

```python
import torch
from transformers import AutoTokenizer, AutoModel, pipeline

MODEL_ID = "thebajajra/RexBERT-large"

# Tokenizer
tok = AutoTokenizer.from_pretrained(MODEL_ID, use_fast=True)

# 1) Fill-Mask (if MLM head is present)
mlm = pipeline("fill-mask", model=MODEL_ID, tokenizer=tok)
print(mlm("These running shoes are great for [MASK] training."))

# 2) Feature extraction (CLS or mean-pooled embeddings)
enc = AutoModel.from_pretrained(MODEL_ID)
inputs = tok(["wireless mouse", "ergonomic mouse pad"], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    out = enc(**inputs)

# Mean-pool the last hidden state for sentence embeddings
emb = (out.last_hidden_state * inputs.attention_mask.unsqueeze(-1)).sum(dim=1) / inputs.attention_mask.sum(dim=1, keepdim=True)
```

---

## Intended Uses & Limitations

**Use cases**
- Product & query **retrieval/semantic search** (titles, descriptions, attributes)
- **Attribute extraction** / slot filling (brand, color, size, material)
- **Classification** (category assignment, unsafe/regulated item filtering, review sentiment)
- **Reranking** and **query understanding** (spelling/ASR normalization, acronym expansion); a reranking sketch follows this list

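To make the reranking item concrete, here is a minimal sketch assuming a RexBERT checkpoint fine-tuned as a cross-encoder with a single relevance logit; the `your-org/rexbert-large-reranker` name below is a placeholder, not a released model:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder: a RexBERT-large fine-tuned as a cross-encoder (1 relevance logit)
RERANKER = "your-org/rexbert-large-reranker"

tok = AutoTokenizer.from_pretrained(RERANKER)
model = AutoModelForSequenceClassification.from_pretrained(RERANKER, num_labels=1)
model.eval()

query = "wireless ergonomic mouse"
candidates = ["Logitech MX Master 3S wireless mouse", "USB-C charging cable, 2 m"]

# Score each (query, product) pair jointly; higher logit = more relevant
batch = tok([query] * len(candidates), candidates, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**batch).logits.squeeze(-1)
ranked = [c for _, c in sorted(zip(scores.tolist(), candidates), reverse=True)]
```
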
**Out of scope**
- Long-form **generation** (use a decoder or seq-to-seq LM instead)
- High-stakes decisions without human review (pricing, compliance, safety flags)

**Target users**
- Search/recs engineers, e-commerce data teams, and ML researchers working on domain-specific encoders

---

## Model Description

RexBERT-large is an **encoder-only**, 400M-parameter transformer trained with a masked-language-modeling objective and optimized for **e-commerce-related text**. The three-phase training curriculum builds general language understanding, extends context handling, and then **specializes** on a very large corpus of commerce data to capture domain-specific terminology and entity distributions.

---

## Training Recipe

RexBERT-large was trained in **three phases**:

1) **Pre-training**
   General-purpose MLM pre-training on diverse English text for robust linguistic representations.

2) **Context Extension**
   Continued training with an **increased max sequence length** to better handle long product pages, concatenated attribute blocks, multi-turn queries, and facet strings. This preserves prior capabilities while expanding context handling.

3) **Decay on 350B+ e-commerce tokens**
   Final specialization stage on **350B+ domain-specific tokens** (product catalogs, queries, reviews, taxonomy/attributes). Learning rate and sampling weights are annealed (decayed) to consolidate domain knowledge and stabilize performance on commerce tasks; the sketch below illustrates the general idea.

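A minimal, purely illustrative sketch of such annealing; the real schedules and values are TODO in the details list below, and nothing here reflects the actual run:

```python
def linear_anneal(step: int, total_steps: int, start: float, end: float) -> float:
    """Linearly interpolate a value from `start` to `end` over `total_steps`."""
    frac = min(step / total_steps, 1.0)
    return start + (end - start) * frac

# Hypothetical decay-phase settings, for illustration only
TOTAL_STEPS = 10_000
lr = linear_anneal(5_000, TOTAL_STEPS, 3e-4, 0.0)          # learning rate decays toward 0
ecom_weight = linear_anneal(5_000, TOTAL_STEPS, 0.5, 0.9)  # e-commerce sampling weight ramps up
```
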
**Training details (fill in):**
- Optimizer / LR schedule: TODO
- Effective batch size / steps per phase: TODO
- Context lengths per phase (e.g., 512 → 1k/2k): TODO
- Tokenizer/vocab: TODO
- Hardware & wall-clock: TODO
- Checkpoint tags: TODO (e.g., `pretrain`, `ext`, `decay`)

---

## Data Overview

- **Domain mix:** TODO
- **Data quality:** TODO

---

## Evaluation

### Performance Highlights

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6893dd21467f7d2f5f358a95/BAafq5QMJI_-CQSgr5PzF.png)

---

## Usage Examples

### 1) Masked language modeling
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

m = AutoModelForMaskedLM.from_pretrained("thebajajra/RexBERT-large")
t = AutoTokenizer.from_pretrained("thebajajra/RexBERT-large")
fill = pipeline("fill-mask", model=m, tokenizer=t)

print(fill("Best [MASK] headphones under $100."))
```

### 2) Embeddings / feature extraction
```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("thebajajra/RexBERT-large")
enc = AutoModel.from_pretrained("thebajajra/RexBERT-large")

texts = ["nike air zoom pegasus 40", "running shoes pegasus zoom nike"]
batch = tok(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    out = enc(**batch)

# Mean-pool last hidden state
attn = batch["attention_mask"].unsqueeze(-1)
emb = (out.last_hidden_state * attn).sum(1) / attn.sum(1)
# Normalize for cosine similarity (recommended for retrieval)
emb = torch.nn.functional.normalize(emb, p=2, dim=1)
```
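Since the embeddings above are L2-normalized, cosine similarity reduces to a dot product; a quick check on the two example texts:

```python
# Cosine similarity between the two example texts (dot product of unit vectors)
score = (emb[0] @ emb[1]).item()
print(f"cosine similarity: {score:.3f}")
```
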

### 3) Text classification fine-tune
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TrainingArguments, Trainer

NUM_LABELS = 3  # set to the number of classes in your task

tok = AutoTokenizer.from_pretrained("thebajajra/RexBERT-large")
model = AutoModelForSequenceClassification.from_pretrained("thebajajra/RexBERT-large", num_labels=NUM_LABELS)

# Prepare your Dataset objects: train_ds, val_ds (text → label)
args = TrainingArguments(
    output_dir="rexbert-large-cls",  # checkpoint/log directory
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=3e-5,
    num_train_epochs=3,
    evaluation_strategy="steps",  # renamed to `eval_strategy` in newer transformers
    fp16=True,
    report_to="none",
    load_best_model_at_end=True,
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=val_ds, tokenizer=tok)
trainer.train()
```
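The attribute-extraction use case follows the same pattern with a token-classification head; a minimal sketch, where the BIO tag set is illustrative and the head is randomly initialized until you fine-tune it:

```python
from transformers import AutoModelForTokenClassification

# Illustrative BIO tag set for product attributes; replace with your own schema
labels = ["O", "B-BRAND", "I-BRAND", "B-COLOR", "I-COLOR"]
ner = AutoModelForTokenClassification.from_pretrained(
    "thebajajra/RexBERT-large",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={l: i for i, l in enumerate(labels)},
)
# Fine-tune with Trainer as in example 3, using token-level labels
```
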

---

## Model Architecture & Compatibility

- **Architecture:** Encoder-only, ModernBERT-style **large** model.
- **Libraries:** Works with **🤗 Transformers**; supports the **fill-mask** and **feature-extraction** pipelines.
- **Context length:** Increased during the **Context Extension** phase; ensure `max_position_embeddings` in `config.json` matches your desired max length (see the check below).
- **Files:** `config.json`, tokenizer files, and (optionally) heads for MLM or classification.
- **Export:** Standard PyTorch weights; you can export ONNX / TorchScript for production if needed.
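A quick way to confirm the shipped context length before sending long inputs:

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("thebajajra/RexBERT-large")
print(cfg.max_position_embeddings)  # maximum sequence length the checkpoint supports
```
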

---

## Responsible & Safe Use

- **Biases:** Commerce data can encode brand, price, and region biases; audit downstream classifiers/retrievers for disparate error rates across categories and regions.
- **Sensitive content:** Add filters for adult/regulated items; document moderation thresholds if you release classifiers.
- **Privacy:** Do not expose PII; ensure training data complies with terms and applicable laws.
- **Misuse:** This model is **not** a substitute for legal or compliance review of listings.

---

## License

- **License:** `apache-2.0`

---

## Maintainers & Contact

- **Author/maintainer:** [Rahul Bajaj](https://huggingface.co/thebajajra)

---