Upload model card

README.md (changed)
Removed:

---
license: mit
tags:
- static-embeddings
---

#

This [Model2Vec](https://github.com/MinishLab/model2vec) model is a distilled version of a Sentence Transformer. It uses static embeddings, allowing text embeddings to be computed orders of magnitude faster on both GPU and CPU. It is designed for applications where computational resources are limited or where real-time performance is critical. Model2Vec models are the smallest, fastest, and most performant static embedders available. The distilled models are up to 50 times smaller and 500 times faster than traditional Sentence Transformers.
Install model2vec using pip:
```
pip install model2vec
```

## Usage
### Using Model2Vec

The [Model2Vec library](https://github.com/MinishLab/model2vec) is the fastest and most lightweight way to run Model2Vec models.

Load this model using the `from_pretrained` method:
```python
from model2vec import StaticModel

# Load a pretrained Model2Vec model (the original repository id was lost;
# the id below is a placeholder)
model = StaticModel.from_pretrained("minishlab/potion-base-32M")

# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```
## Using Sentence Transformers

```python
from sentence_transformers import SentenceTransformer

# Load a pretrained Sentence Transformer model
model = SentenceTransformer("static_model")
```
## Distillation

```python
from model2vec.distill import distill

# Distill a Sentence Transformer model, in this case the BAAI/bge-base-en-v1.5 model
m2v_model = distill(model_name="BAAI/bge-base-en-v1.5", pca_dims=256)

# Save the model
m2v_model.save_pretrained("m2v_model")
```
## Additional Resources

- [Model2Vec Repo](https://github.com/MinishLab/model2vec)
- [Model2Vec Base Models](https://huggingface.co/collections/minishlab/model2vec-base-models-66fd9dd9b7c3b3c0f25ca90e)
- [Model2Vec Results](https://github.com/MinishLab/model2vec/tree/main/results)
- [Model2Vec Tutorials](https://github.com/MinishLab/model2vec/tree/main/tutorials)
- [Website](https://minishlab.github.io/)

## Library Authors

Model2Vec was developed by the [Minish Lab](https://github.com/MinishLab) team consisting of [Stephan Tulkens](https://github.com/stephantul) and [Thomas van Dongen](https://github.com/Pringled).

## Citation

Please cite the [Model2Vec repository](https://github.com/MinishLab/model2vec) if you use this model in your work.
```
@article{minishlab2024model2vec,
  author = {Tulkens, Stephan and {van Dongen}, Thomas},
  title = {Model2Vec: Fast State-of-the-Art Static Embeddings},
  year = {2024},
  url = {https://github.com/MinishLab/model2vec}
}
```
Added:

---
language: en
license: mit
library_name: model2vec
tags:
- model2vec
- static-embeddings
- topic-classification
- openalex
- scientific-papers
pipeline_tag: text-classification
datasets:
- jimnoneill/paper-to-field-training
---

# Paper-to-Field Classifier
Lightweight CPU-based topic classifier for scientific paper abstracts using the [OpenAlex taxonomy](https://docs.openalex.org/api-entities/topics) (4,516 topics → 245 subfields → 26 fields → 4 domains).
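Because the taxonomy is strictly hierarchical, a predicted topic determines its subfield, field, and domain. A minimal sketch of that roll-up, using the IDs from the usage example below and hypothetical stub lookup tables (not the real OpenAlex data):

```python
# Illustrative stub tables: topic -> subfield -> field -> domain.
TOPIC_TO_SUBFIELD = {10209: 1702}
SUBFIELD_TO_FIELD = {1702: 17}
FIELD_TO_DOMAIN = {17: 3}

def roll_up(topic_id: int) -> dict:
    """Resolve the subfield, field, and domain implied by a topic id."""
    subfield = TOPIC_TO_SUBFIELD[topic_id]
    field = SUBFIELD_TO_FIELD[subfield]
    domain = FIELD_TO_DOMAIN[field]
    return {"topic": topic_id, "subfield": subfield, "field": field, "domain": domain}

print(roll_up(10209))
# {'topic': 10209, 'subfield': 1702, 'field': 17, 'domain': 3}
```

In other words, only the topic needs to be predicted; the three coarser levels come for free from the taxonomy.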
## Usage

```python
from paper_classifier import PaperClassifier

classifier = PaperClassifier()
classifier.initialize()

result = classifier.classify(
    title="Attention Is All You Need",
    abstract="The dominant sequence transduction models are based on complex recurrent or convolutional neural networks..."
)

print(result)
# {
#   'topic': {'id': 10209, 'name': 'Neural Machine Translation and Sequence Models', 'score': 0.87},
#   'subfield': {'id': 1702, 'name': 'Artificial Intelligence'},
#   'field': {'id': 17, 'name': 'Computer Science'},
#   'domain': {'id': 3, 'name': 'Physical Sciences'}
# }
```
## Model Details

- **Base model**: [minishlab/potion-base-32M](https://huggingface.co/minishlab/potion-base-32M) (Model2Vec)
- **Fine-tuned on**: ~50K domain-balanced paper abstracts from OpenAlex
- **Taxonomy**: OpenAlex (4,516 topics, 245 subfields, 26 fields, 4 domains)
- **Input**: Paper title + abstract (truncated to 500 chars)
- **Inference**: CPU-only, ~3,000 papers/second
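The 500-character input budget above could be applied like this. This is a hypothetical helper for illustration; the actual field joining and truncation inside `paper_classifier` may differ:

```python
def build_input(title: str, abstract: str, max_chars: int = 500) -> str:
    """Combine title and abstract, truncating to the model's input budget.

    Hypothetical sketch: the real package may join or truncate differently.
    """
    text = f"{title}. {abstract}"
    return text[:max_chars]

inp = build_input("Attention Is All You Need", "The dominant sequence transduction models..." * 20)
assert len(inp) <= 500
```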
## Training

Trained on OpenAlex bulk data with papers filtered for:
- English language
- Has abstract
- Primary topic confidence score > 0.8
- Domain-balanced sampling (~12.5K per domain)
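The filtering and balancing steps above can be sketched as follows. The record layout and field names here are illustrative stand-ins for OpenAlex work records, not the actual pipeline code:

```python
import random

# Hypothetical OpenAlex-style records (field names are illustrative).
papers = [
    {"id": "W1", "language": "en", "abstract": "text", "domain": "Physical Sciences", "topic_score": 0.92},
    {"id": "W2", "language": "fr", "abstract": "text", "domain": "Life Sciences", "topic_score": 0.95},
    {"id": "W3", "language": "en", "abstract": None, "domain": "Health Sciences", "topic_score": 0.90},
    {"id": "W4", "language": "en", "abstract": "text", "domain": "Physical Sciences", "topic_score": 0.70},
]

def keep(p: dict) -> bool:
    """Apply the three filters: English, has abstract, topic score > 0.8."""
    return p["language"] == "en" and p["abstract"] is not None and p["topic_score"] > 0.8

kept = [p for p in papers if keep(p)]  # only W1 survives here

def balance(papers: list[dict], per_domain: int) -> list[dict]:
    """Domain-balanced sampling: cap each domain at the same budget
    (~12.5K per domain in the real training run)."""
    by_domain: dict[str, list[dict]] = {}
    for p in papers:
        by_domain.setdefault(p["domain"], []).append(p)
    rng = random.Random(0)
    sample = []
    for group in by_domain.values():
        rng.shuffle(group)
        sample.extend(group[:per_domain])
    return sample
```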
## Install

```bash
pip install paper-classifier
```

Or from source:

```bash
git clone https://github.com/jimnoneill/paper-to-field.git
cd paper-to-field
pip install -e .
```