---
license: apache-2.0
language:
- en
pipeline_tag: sentence-similarity
inference: false
---

# Monarch Mixer-BERT

The 80M checkpoint for M2-BERT-base from the paper [Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture](https://arxiv.org/abs/2310.12109).

This model was pretrained with a sequence length of 2048 and fine-tuned for long-context retrieval.

This model was trained by Jon Saad-Falcon, Dan Fu, and Simran Arora.

Check out our [GitHub](https://github.com/HazyResearch/m2/tree/main) for instructions on how to download and fine-tune it!

## How to use

You can load this model using the Hugging Face `AutoModelForMaskedLM` class:
```python
from transformers import AutoModelForMaskedLM

# trust_remote_code is required because the M2 architecture is defined in the model repository
model = AutoModelForMaskedLM.from_pretrained("togethercomputer/m2-bert-80M-2k-retrieval", trust_remote_code=True)
```

This model generates embeddings for retrieval. The embeddings have a dimensionality of 768:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

max_seq_length = 2048
testing_string = "Every morning, I make a cup of coffee to start my day."
model = AutoModelForMaskedLM.from_pretrained("togethercomputer/m2-bert-80M-2k-retrieval", trust_remote_code=True)

# M2-BERT uses the standard BERT tokenizer; inputs are padded to the full sequence length
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", model_max_length=max_seq_length)
input_ids = tokenizer([testing_string], return_tensors="pt", padding="max_length", return_token_type_ids=False, truncation=True, max_length=max_seq_length)

outputs = model(**input_ids)
embeddings = outputs['sentence_embedding']  # shape: (batch_size, 768)
```
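
To use these embeddings for retrieval, you can score a query against candidate passages with cosine similarity. The snippet below is a minimal sketch, not part of the official repo: the `embed` helper is hypothetical and simply wraps the tokenize-and-forward steps above, reusing the `sentence_embedding` output shown there.

```python
import torch.nn.functional as F

def embed(texts):
    # Hypothetical helper: tokenize and run the model exactly as in the example above
    inputs = tokenizer(texts, return_tensors="pt", padding="max_length",
                       return_token_type_ids=False, truncation=True,
                       max_length=max_seq_length)
    return model(**inputs)["sentence_embedding"]

query = embed(["How do I start my morning?"])
passages = embed(["Every morning, I make a cup of coffee to start my day.",
                  "The capital of France is Paris."])

# Broadcasting (1, 768) against (2, 768) yields one similarity score per passage
scores = F.cosine_similarity(query, passages)
print(scores)  # the coffee passage should score highest
```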