lukasweber committed
Commit ce8e82b · 1 Parent(s): 43cf377

Update README.md

Files changed (1): README.md +4 -4
README.md CHANGED
@@ -5,11 +5,11 @@ tags:
 - automotive
 ---
 
-WG-BERT is a pre-trained model to analyze automotive entities in automotive-related texts. WG-BERT is build by continual
-pretraining the BERT language model in the automotive domain by using a corpus of automotive (workshop feedback) texts via the masked language modelling (MLM) approach.
-WG-BERT is further fine-tuned for automotive entity recognition (subtask of Named Entity Recognition (NER)) to extract components and its complaints out of automotive texts.
+WG-BERT is a pre-trained model to analyze automotive entities in automotive-related texts. WG-BERT is trained by continually
+pretraining the BERT language model on the automotive domain, using a corpus of automotive (workshop feedback) texts via the masked language modeling (MLM) approach.
+WG-BERT is further fine-tuned for automotive entity recognition (a subtask of Named Entity Recognition (NER)) to extract components and their complaints from automotive texts.
 The dataset for continual pretraining consists of ~4 million sentences.
 The dataset for fine-tuning consists of ~5,500 sentences gold-annotated by automotive domain experts.
 We chose the BERT-base-uncased version as the training architecture.
 
-Please contact Lukas Weber lukas-weber[at]hotmail[dot]de / lukas.l.weber[at]mercedes-benz[dot]com about any WG_BERT related issues and questions.
+Please contact Lukas Weber lukas-weber[at]hotmail[dot]de / lukas.l.weber[at]mercedes-benz[dot]com with any WG-BERT-related issues and questions.
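The continual-pretraining step the README describes can be illustrated with a short sketch using the Hugging Face transformers and datasets libraries. This is a minimal, hypothetical example, not the repository's actual training script: the corpus path automotive_corpus.txt and all hyperparameters below are illustrative assumptions.

```python
# Minimal sketch: continual MLM pretraining of bert-base-uncased on a
# domain corpus. The corpus path and hyperparameters are hypothetical.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Assumption: one workshop-feedback sentence per line in a plain-text file.
dataset = load_dataset("text", data_files={"train": "automotive_corpus.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

# The collator randomly masks 15% of input tokens, as in standard BERT MLM.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="wg-bert-mlm", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```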
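Likewise, a minimal sketch of running the fine-tuned model for automotive entity recognition. The hub id "lukasweber/WG-BERT" is an assumption (substitute this repository's actual model id), and the example sentence and expected tags are illustrative.

```python
# Minimal sketch: token classification (NER) inference with transformers.
# NOTE: "lukasweber/WG-BERT" is a hypothetical hub id, not confirmed here.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_id = "lukasweber/WG-BERT"  # hypothetical hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Aggregate sub-word tokens back into whole-word entity spans.
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")

# Example workshop-feedback sentence; a model like this would be expected
# to tag components (e.g. "brake pads") and complaints (e.g. "squeaking").
print(ner("The brake pads are squeaking when braking at low speed."))
```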