This model provides the best embedding for the Entity Recognition task in English.

This model is based on our [paper](https://arxiv.org/abs/2402.15343).

**Check out other models by NuMind:**
* SOTA Multilingual Entity Recognition Foundation Model: [link](https://huggingface.co/numind/entity-recognition-multilingual-general-sota-v1)
* SOTA Sentiment Analysis Foundation Model: [English](https://huggingface.co/numind/generic-sentiment-v1), [Multilingual](https://huggingface.co/numind/generic-sentiment-multi-v1)
**Metrics:**
Here is the aggregated performance of the models over several datasets.
k=X means that, as training data for this evaluation, we took only X examples per class, trained the model, and evaluated it on the full test set.
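The per-class sampling step of this protocol can be sketched as follows (a minimal illustration; the function name, the `dataset` structure, and its `label` field are hypothetical, not part of our released code):

```python
import random
from collections import defaultdict

def sample_k_per_class(dataset, k, seed=0):
    """Pick k training examples per class, as in the k=X protocol."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for example in dataset:
        by_class[example["label"]].append(example)
    train = []
    for label in sorted(by_class):
        # If a class has fewer than k examples, take all of them.
        train.extend(rng.sample(by_class[label], min(k, len(by_class[label]))))
    return train
```

The fixed seed makes the few-shot subset reproducible across runs, which matters when reporting mean ± std over repeated samplings as in the table below.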

NuNER v1.0 has similar performance to 7B LLMs (70 times bigger than NuNER v1.0).

| UniversalNER (7B) | 57.89 ± 4.34 | 71.02 ± 1.53 |
| NuNER v1.0 (100M) | 58.75 ± 0.93 | 70.30 ± 0.35 |
Read more about evaluation protocol & datasets in our [paper](https://arxiv.org/abs/2402.15343) and [blog post](https://www.numind.ai/blog/a-foundation-model-for-entity-recognition).
## Usage
Embeddings can be used out of the box or fine-tuned on specific datasets.
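A minimal sketch of extracting token embeddings with Hugging Face `transformers` (the checkpoint id `numind/NuNER-v1.0` and the choice to concatenate the last two hidden layers are assumptions for illustration, not fixed by this card):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed checkpoint id; replace with this model's actual repo id if different.
checkpoint = "numind/NuNER-v1.0"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint, output_hidden_states=True)

texts = ["NuMind is based in Paris, France."]
encoded = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)

with torch.no_grad():
    output = model(**encoded)

# One possible choice: concatenate the last two hidden layers
# to get one embedding per token.
token_emb = torch.cat(
    (output.hidden_states[-1], output.hidden_states[-2]), dim=-1
)
print(token_emb.shape)  # (batch, seq_len, 2 * hidden_size)
```

These per-token embeddings can be fed directly to a lightweight classification head for entity recognition, or the whole model can be fine-tuned end to end.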