techysanoj committed (verified) · Commit d00f07d · Parent: baedfcb

Upload 7 files
README.md CHANGED
@@ -1,3 +1,86 @@
- ---
- license: mit
- ---
+ ---
+ language:
+ - as
+ - bn
+ - gu
+ - hi
+ - kn
+ - ml
+ - mr
+ - or
+ - pa
+ - ta
+ - te
+ license: mit
+ datasets:
+ - Samanantar
+ tags:
+ - ner
+ - Pytorch
+ - transformer
+ - multilingual
+ - nlp
+ - indicnlp
+ ---
+
+ # IndicNER
+ IndicNER is a model trained to identify named entities in sentences in Indian languages. It is fine-tuned on millions of sentences across the 11 Indian languages listed above, and benchmarked on a human-annotated test set as well as several other publicly available Indian NER datasets.
+ The 11 languages covered by IndicNER are: Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil, and Telugu.
+
+ ## Training Corpus
+ Our model was trained on a [dataset](https://huggingface.co/datasets/ai4bharat/naamapadam) mined from the existing [Samanantar Corpus](https://huggingface.co/datasets/ai4bharat/samanantar). We used bert-base-multilingual-uncased as the starting point and fine-tuned it on this NER dataset; a minimal sketch of such a fine-tuning setup follows.
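+
+ The sketch below is illustrative, not our exact training script. It assumes the Naamapadam dataset exposes CoNLL-style `tokens` and `ner_tags` columns, that a `hi` (Hindi) configuration exists, and that the dataset's tag ids match the label order in this repo's `config.json`; check `dataset.features` before training.
+
+ ```python
+ # Minimal fine-tuning sketch (assumptions noted above).
+ from datasets import load_dataset
+ from transformers import (
+     AutoTokenizer,
+     AutoModelForTokenClassification,
+     DataCollatorForTokenClassification,
+     Trainer,
+     TrainingArguments,
+ )
+
+ dataset = load_dataset("ai4bharat/naamapadam", "hi")  # "hi" config is an assumption
+ labels = ["B-LOC", "B-ORG", "B-PER", "I-LOC", "I-ORG", "I-PER", "O"]  # from config.json
+
+ tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-uncased")
+ model = AutoModelForTokenClassification.from_pretrained(
+     "bert-base-multilingual-uncased",
+     num_labels=len(labels),
+     id2label=dict(enumerate(labels)),
+     label2id={l: i for i, l in enumerate(labels)},
+ )
+
+ def tokenize_and_align(batch):
+     # Tokenize pre-split words; align word-level tags to sub-word tokens,
+     # masking special tokens and word continuations with -100.
+     enc = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
+     all_labels = []
+     for i, tags in enumerate(batch["ner_tags"]):
+         prev, lab = None, []
+         for w in enc.word_ids(batch_index=i):
+             lab.append(-100 if w is None or w == prev else tags[w])
+             prev = w
+         all_labels.append(lab)
+     enc["labels"] = all_labels
+     return enc
+
+ tokenized = dataset.map(
+     tokenize_and_align, batched=True,
+     remove_columns=dataset["train"].column_names,
+ )
+
+ trainer = Trainer(
+     model=model,
+     args=TrainingArguments(output_dir="indicner-finetune",
+                            per_device_train_batch_size=16,
+                            num_train_epochs=1),
+     train_dataset=tokenized["train"],
+     data_collator=DataCollatorForTokenClassification(tokenizer),
+ )
+ trainer.train()
+ ```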
+
+ ## Downloads
+ Download the model files directly from this Hugging Face repo; a download sketch follows.
+
+ Update 20 Dec 2022: We have released a new paper documenting IndicNER and Naamapadam. The paper reports a different model from the one hosted here; we will update this repo with that model soon.
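+
+ The files can also be fetched programmatically. Here is a minimal sketch using `huggingface_hub`; the repo id `ai4bharat/IndicNER` is an assumption, so substitute this repo's id if it differs.
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Downloads config.json, pytorch_model.bin, and the tokenizer files to a local cache.
+ local_dir = snapshot_download("ai4bharat/IndicNER")  # assumed repo id
+ print(local_dir)
+ ```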
+
+ ## Usage
+
+ You can use [this Colab notebook](https://colab.research.google.com/drive/1sYa-PDdZQ_c9SzUgnhyb3Fl7j96QBCS8?usp=sharing) for examples of using IndicNER, or for fine-tuning a pre-trained model on the Naamapadam dataset to build your own NER models. A minimal inference sketch is shown below.
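+
+ For quick inspection without the notebook, here is a minimal inference sketch using the `transformers` pipeline. The repo id `ai4bharat/IndicNER` is an assumption; substitute the id of this repository if it differs.
+
+ ```python
+ from transformers import pipeline
+
+ # "ai4bharat/IndicNER" is an assumed repo id; replace with this repo's id if needed.
+ ner = pipeline(
+     "ner",
+     model="ai4bharat/IndicNER",
+     aggregation_strategy="simple",  # merge B-/I- word pieces into whole entity spans
+ )
+
+ # Hindi input: "Mahatma Gandhi was born in Porbandar, Gujarat."
+ print(ner("महात्मा गांधी का जन्म गुजरात के पोरबंदर में हुआ था।"))
+ ```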
+
+ <!-- citing information -->
+ ## Citing
+
+ If you are using IndicNER, please cite the following article:
+ ```
+ @misc{mhaske2022naamapadam,
+   doi = {10.48550/ARXIV.2212.10168},
+   url = {https://arxiv.org/abs/2212.10168},
+   author = {Mhaske, Arnav and Kedia, Harshit and Doddapaneni, Sumanth and Khapra, Mitesh M. and Kumar, Pratyush and Murthy, Rudra and Kunchukuttan, Anoop},
+   title = {Naamapadam: A Large-Scale Named Entity Annotated Data for Indic Languages},
+   publisher = {arXiv},
+   year = {2022},
+   copyright = {arXiv.org perpetual, non-exclusive license}
+ }
+ ```
+ We would like to hear from you if:
+
+ - You are using our resources. Please let us know how you are putting these resources to use.
+ - You have any feedback on these resources.
+
+ <!-- License -->
+ ## License
+
+ The IndicNER code and models are released under the MIT License.
+
+ <!-- Contributors -->
+ ## Contributors
+ - Arnav Mhaske <sub>([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in))</sub>
+ - Harshit Kedia <sub>([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in))</sub>
+ - Sumanth Doddapaneni <sub>([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in))</sub>
+ - Mitesh M. Khapra <sub>([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in))</sub>
+ - Pratyush Kumar <sub>([AI4Bharat](https://ai4bharat.org), [Microsoft](https://www.microsoft.com/en-in/), [IITM](https://www.iitm.ac.in))</sub>
+ - Rudra Murthy <sub>([AI4Bharat](https://ai4bharat.org), [IBM](https://www.ibm.com))</sub>
+ - Anoop Kunchukuttan <sub>([AI4Bharat](https://ai4bharat.org), [Microsoft](https://www.microsoft.com/en-in/), [IITM](https://www.iitm.ac.in))</sub>
+
+ This work is the outcome of a volunteer effort as part of the [AI4Bharat initiative](https://ai4bharat.iitm.ac.in).
+
+ <!-- Contact -->
+ ## Contact
+ - Anoop Kunchukuttan ([anoop.kunchukuttan@gmail.com](mailto:anoop.kunchukuttan@gmail.com))
+ - Rudra Murthy V ([rmurthyv@in.ibm.com](mailto:rmurthyv@in.ibm.com))
config.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "_name_or_path": "../base_models/mbert_uncased/checkpoint-1/",
+   "architectures": [
+     "BertForTokenClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "directionality": "bidi",
+   "finetuning_task": "ner",
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "B-LOC",
+     "1": "B-ORG",
+     "2": "B-PER",
+     "3": "I-LOC",
+     "4": "I-ORG",
+     "5": "I-PER",
+     "6": "O"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "B-LOC": 0,
+     "B-ORG": 1,
+     "B-PER": 2,
+     "I-LOC": 3,
+     "I-ORG": 4,
+     "I-PER": 5,
+     "O": 6
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "pooler_fc_size": 768,
+   "pooler_num_attention_heads": 12,
+   "pooler_num_fc_layers": 3,
+   "pooler_size_per_head": 128,
+   "pooler_type": "first_token_transform",
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.15.0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 105879
+ }
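
The config above defines the label inventory (BIO tags over LOC, ORG, and PER, plus O). A minimal sketch for inspecting it programmatically, assuming the repo id ai4bharat/IndicNER:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("ai4bharat/IndicNER")  # assumed repo id
print(config.id2label)    # {0: 'B-LOC', 1: 'B-ORG', ..., 6: 'O'}
print(config.num_labels)  # 7
```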
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:79e328e6ea15b7058047be65ae8237007ceb8d179bade7c5502390d217c047e1
+ size 667173367
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "../base_models/mbert_uncased/checkpoint-1/", "tokenizer_class": "BertTokenizer"}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff