thebajajra committed (verified)
Commit 62bbcdb · Parent(s): dfa4244

Update README.md

Files changed (1): README.md (+64 −146)
README.md CHANGED
@@ -9,185 +9,103 @@ pipeline_tag: fill-mask
  ---
  # [RexBERT-mini](https://huggingface.co/owlgebra-ai/RexBERT-mini)

- <!-- Provide a quick summary of what the model is/does. -->
-
- The model is part of RexBERT series of models.
-
- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
- - **Developed by:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** English
- - **License:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]

  ## Evaluation

- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors
-
- [Rahul Bajaj](https://huggingface.co/thebajajra)
  ---
  # [RexBERT-mini](https://huggingface.co/owlgebra-ai/RexBERT-mini)

+ > An efficient, English encoder-only model (masked-language model) with an ~8k-token context window, targeted at e-commerce and retail NLP.
+
+ ---
+
+ ## Model summary
+
+ - **Model type:** `ModernBertForMaskedLM` (encoder-only, masked-language-modeling head)
+ - **Domain/language:** English; repository tagged for **e-commerce**/**retail** tasks
+ - **Context length:** ~8k tokens (the published config sets `max_position_embeddings: 7999`; the ModernBERT architecture supports up to 8,192)
+ - **License:** Apache-2.0
+
+ ---
+
+ ## Intended uses & limitations
+
+ ### Direct use
+
+ - **Fill-mask** and cloze completion (e.g., product titles, attributes, query reformulation); see the pipeline sketch below.
+ - **Embeddings / feature extraction** for classification, clustering, retrieval re-ranking, and semantic search over retail catalogs and queries, via pooled encoder states (ModernBERT is a drop-in BERT-style encoder).
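+
+ A quick way to exercise the fill-mask use is the `transformers` pipeline API (a minimal sketch; the example sentence is illustrative, not from the training data):
+
+ ```python
+ from transformers import pipeline
+
+ # Fill-mask pipeline over the published checkpoint
+ fill = pipeline("fill-mask", model="owlgebra-ai/RexBERT-mini")
+
+ # Top-3 completions for the masked slot
+ print(fill("Add the [MASK] to your cart before checkout.", top_k=3))
+ ```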
+
+ ### Downstream use
+
+ - Fine-tune for product categorization, attribute extraction, NER, intent classification, and retrieval-augmented ranking tasks in commerce search & browse, using a task head or pooled embeddings (see the sketch below).
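+
+ As a sketch of the task-head route, the standard `transformers` auto class attaches a fresh classification head to the encoder (the label count here is a hypothetical example):
+
+ ```python
+ from transformers import AutoModelForSequenceClassification
+
+ # Encoder backbone + randomly initialized classification head;
+ # num_labels=3 is an assumed, illustrative category count
+ clf = AutoModelForSequenceClassification.from_pretrained(
+     "owlgebra-ai/RexBERT-mini",
+     num_labels=3,
+ )
+ # Fine-tune with your usual Trainer / training loop on labeled commerce data.
+ ```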
+
+ ### Out-of-scope / not recommended
+
+ - **Autoregressive text generation** or chat; this is not a decoder LLM. Use decoder-only or seq2seq models for long-form generation.
+
+ ---
+
+ ## How to get started
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForMaskedLM
+
+ model_id = "owlgebra-ai/RexBERT-mini"
+ tok = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForMaskedLM.from_pretrained(model_id)
+
+ text = "The customer purchased a [MASK] with free shipping."
+ inputs = tok(text, return_tensors="pt")
+ with torch.no_grad():
+     logits = model(**inputs).logits
+
+ # Top-5 vocabulary entries at the [MASK] position
+ mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero(as_tuple=True)[1]
+ top5 = logits[0, mask_pos].topk(5).indices[0]
+ print(tok.convert_ids_to_tokens(top5.tolist()))
+ ```
+
+ ---
+
+ ## Model details
+
+ ### Architecture (from config)
+
+ - **Backbone:** ModernBERT (`model_type: "modernbert"`, `architectures: ["ModernBertForMaskedLM"]`)
+ - **Layers / heads / width:** 19 encoder layers, 8 attention heads, hidden size 512; intermediate (MLP) size 768; GELU activations.
+ - **Attention:** local window of 128 tokens with **global attention every 3 layers**; RoPE θ = 160k (local & global).
+ - **Positional strategy:** `position_embedding_type: "sans_pos"`.
+ - **Dropout:** attention/embedding/MLP dropouts set to 0.0 in the published config.
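+
+ These values can be checked against the published config directly (a minimal sketch; attribute names follow the `ModernBertConfig` schema in `transformers`):
+
+ ```python
+ from transformers import AutoConfig
+
+ cfg = AutoConfig.from_pretrained("owlgebra-ai/RexBERT-mini")
+ print(cfg.model_type)               # "modernbert"
+ print(cfg.num_hidden_layers,        # encoder layers
+       cfg.num_attention_heads,      # attention heads
+       cfg.hidden_size,              # model width
+       cfg.intermediate_size)        # MLP width
+ print(cfg.max_position_embeddings)  # maximum context length
+ ```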
+
+ ## Training data & procedure
+
+ - Training data and procedure details are not documented in this card yet.
+
+ ---
+
  ## Evaluation

+ - No formal benchmark results are published at this time. If you benchmark the model, consider reporting fill-mask accuracy and downstream scores on domain tasks (e.g., product classification, query understanding).
+
+ ---
+
+ ## Technical notes for practitioners
+
+ - **Pooling:** use mean pooling over the last hidden states (the config's classifier pooling is `"mean"`), or task-specific pooling; see the sketch after this list.
+ - **Long sequences:** leverage the extended context for product pages, multi-turn queries, or concatenated fields; ModernBERT uses efficient local/global attention and RoPE for long inputs.
+ - **Libraries:** tested with `transformers>=4.48.0`.
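+
+ A minimal mean-pooling sketch for sentence/product embeddings (mask-aware averaging; the example strings are illustrative):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+ from transformers import AutoTokenizer, AutoModel
+
+ model_id = "owlgebra-ai/RexBERT-mini"
+ tok = AutoTokenizer.from_pretrained(model_id)
+ encoder = AutoModel.from_pretrained(model_id)  # encoder backbone, no MLM head
+
+ texts = ["wireless noise-cancelling headphones", "bluetooth over-ear headset"]
+ batch = tok(texts, padding=True, return_tensors="pt")
+ with torch.no_grad():
+     hidden = encoder(**batch).last_hidden_state  # (batch, seq, hidden)
+
+ # Mean-pool over non-padding tokens only, then L2-normalize
+ mask = batch.attention_mask.unsqueeze(-1).float()
+ emb = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
+ emb = F.normalize(emb, dim=-1)
+ print(emb @ emb.T)  # cosine similarity matrix
+ ```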
+
+ ---
+
+ ## Model sources
+
+ - **Hugging Face:** `owlgebra-ai/RexBERT-mini` — https://huggingface.co/owlgebra-ai/RexBERT-mini
+ - **Background on ModernBERT:** https://huggingface.co/docs/transformers/en/model_doc/modernbert
+
+ ---
+
+ ## Citation
+
+ If you use this model, please cite the repository:
+
+ ```bibtex
+ @software{rexbert_mini_2025,
+   title  = {RexBERT-mini},
+   author = {Owlgebra AI},
+   year   = {2025},
+   url    = {https://huggingface.co/owlgebra-ai/RexBERT-mini}
+ }
+ ```
+
+ ---
+
+ ## Contact & maintenance
+
+ - **Author(s):** [Rahul Bajaj](https://huggingface.co/thebajajra)
+ - **Issues / questions:** open an issue or discussion on the HF model page.
+
+ ---