Our B2NER models, trained on B2NERD, outperform GPT-4 by 6.8-12.0 F1 points and surpass previous methods in 3 out-of-domain benchmarks across 15 datasets and 6 languages.

- 📖 Paper: [Beyond Boundaries: Learning a Universal Entity Taxonomy across Datasets and Languages for Open Named Entity Recognition](http://arxiv.org/abs/2406.11192)
- 🎮 GitHub Repo: https://github.com/UmeanNever/B2NER
- 📀 Data: You can download the dataset from this repository (see `B2NERD_data.zip` in the "Files and versions" tab). See the Data section below for more information.
- 💾 Model (LoRA Adapters): See the [7B model](https://huggingface.co/Umean/B2NER-Internlm2.5-7B-LoRA) and [20B model](https://huggingface.co/Umean/B2NER-Internlm2-20B-LoRA). You may refer to the GitHub repo for quick demo usage.

**Feature Highlights:**

- Curated dataset (B2NERD) refined from the largest bilingual NER dataset collection to date for training Open NER models.
- Achieves SoTA OOD NER performance across multiple benchmarks with lightweight LoRA adapters (≤50 MB).
- Uses a simple natural-language prompt format, achieving 4× faster inference than previous SoTA methods that use complex prompts.
- Easy integration with other IE tasks by adopting UIE-style instructions.
- Provides a universal entity taxonomy that guides the definition and naming of new entity labels.
- We have open-sourced our data, code, and models, with easy-to-follow usage instructions.

| Model | Avg. F1 on OOD English datasets | Avg. F1 on OOD Chinese datasets | Avg. F1 on OOD multilingual dataset |
|-------|---------------------------------|---------------------------------|-------------------------------------|
| Previous SoTA | 69.1 | 42.7 | 36.6 |
| GPT | 60.1 | 54.7 | 31.8 |
| B2NER | **72.1** | **61.3** | **43.3** |
See our [GitHub Repo](https://github.com/UmeanNever/B2NER) for more information on data usage and this work.
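
The "simple natural-language prompt format" highlighted above can be sketched in a few lines. Note that the exact instruction template used by B2NER is defined in the GitHub repo; the function name, wording, and output convention below are illustrative assumptions only, not the actual template.

```python
# Hypothetical sketch of a natural-language NER instruction prompt.
# The real B2NER template lives in the GitHub repo; this wording is
# an illustrative assumption, not the released format.

def build_ner_prompt(text: str, entity_types: list[str]) -> str:
    """Compose a plain natural-language instruction for open NER."""
    types = ", ".join(entity_types)
    return (
        f"Given the entity types [{types}], "
        f"identify all entities of these types in the text below.\n"
        f"Text: {text}"
    )

prompt = build_ner_prompt(
    "Barack Obama visited Paris in 2015.",
    ["person", "location", "time"],
)
print(prompt)
```

Because the prompt is plain text rather than a structured schema, the model's input stays short, which is one reason such formats can decode faster than complex structured prompts.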
# Data

One of the paper's core contributions is the construction of the B2NERD dataset. It is a cohesive and efficient collection refined from 54 English and Chinese datasets, designed for Open NER model training. **The preprocessed test datasets (7 for Chinese NER and 7 for English NER) used for Open NER OOD evaluation in our paper are also included in the released dataset** to facilitate convenient evaluation in future research. See the tables below for our train/test splits and dataset statistics.

We provide 3 versions of our dataset:

- `B2NERD` (**Recommended**): Contains ~52k samples from 54 Chinese and English datasets. This is the final version of our dataset, suitable for out-of-domain / zero-shot NER model training. It features standardized entity definitions and pruned, diverse training data, while also including separate unpruned test data.
- `B2NERD_all`: Contains ~1.4M samples from 54 datasets. The full-data version of our dataset, suitable for in-domain supervised evaluation. It has standardized entity definitions but does not undergo any data selection or pruning.
- `B2NERD_raw`: The raw collected datasets with raw entity labels. It has undergone basic format preprocessing but no further standardization.
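
Once `B2NERD_data.zip` is extracted, the samples can be read with standard tooling. The sketch below assumes a JSON-lines layout with `instruction`/`output` fields; the actual file layout and field names are documented in the GitHub repo and may differ, so treat this purely as a loading pattern.

```python
# Hedged sketch: reading instruction-style samples from a JSON-lines
# file. The field names ("instruction", "output") and the file path in
# the comment are assumptions for illustration; check the released data
# for the actual schema.
import json
from io import StringIO

def load_samples(fp) -> list[dict]:
    """Parse one JSON object per line, skipping blank lines."""
    return [json.loads(line) for line in fp if line.strip()]

# Stand-in for an extracted file such as a hypothetical B2NERD/train.json.
fake_file = StringIO(
    '{"instruction": "Identify all person entities in ...", "output": "Obama"}\n'
    '{"instruction": "Identify all location entities in ...", "output": "Paris"}\n'
)
samples = load_samples(fake_file)
print(len(samples))  # 2
```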