nielsr (HF Staff) committed
Commit 1c397d8 · Parent(s): 29cd5b6
Files changed (1):
1. README.md +3 -3
README.md CHANGED
@@ -9,7 +9,7 @@ license: apache-2.0
 # TAPAS base model
 
 This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_inter_masklm_base_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
-This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [TabFact](https://github.com/wenhuchen/Table-Fact-Checking). It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).
+This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training. It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).
 
 The other (non-default) version which can be used is the one with absolute position embeddings:
 - `revision="v1"`, which corresponds to `tapas_inter_masklm_base`
@@ -35,8 +35,8 @@ was pretrained with two objectives:
 
 This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
 to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
-or refuted by the contents of a table. Fine-tuning is done by adding a classification head on top of the pre-trained model, and then
-jointly train this randomly initialized classification head with the base model on TabFact.
+or refuted by the contents of a table. Fine-tuning is done by adding one or more classification heads on top of the pre-trained model, and then
+jointly training these randomly initialized classification heads with the base model on a downstream task.
 
 
 ## Intended uses & limitations
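The relative position embeddings mentioned in the diff reset the token position index at every cell of the table. A minimal sketch of that indexing scheme (a toy illustration only, not the actual TAPAS tokenizer code):

```python
def cell_relative_positions(cell_lengths):
    """Position ids that restart at 0 inside every table cell.

    cell_lengths: number of tokens in each cell, in serialization order.
    """
    position_ids = []
    for length in cell_lengths:
        position_ids.extend(range(length))  # index resets for each cell
    return position_ids

# Three cells containing 3, 2 and 4 tokens respectively.
print(cell_relative_positions([3, 2, 4]))  # [0, 1, 2, 0, 1, 0, 1, 2, 3]
```

With the non-default `revision="v1"` checkpoint, positions are absolute instead, i.e. a single `0..n-1` index over the whole serialized sequence rather than restarting per cell.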
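The updated fine-tuning description (one or more randomly initialized classification heads trained jointly with the pre-trained base model) can be sketched as follows. This is a framework-agnostic toy: the pooled encoder output is faked with random numbers, and the head is a single linear layer with softmax.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the pooled output of the pre-trained encoder
# (batch of 4 examples, hidden size 8) -- not a real TAPAS forward pass.
pooled_output = rng.normal(size=(4, 8))

# Randomly initialized classification head: a linear layer mapping
# the hidden state to 2 classes (e.g. entailed vs. refuted).
W = rng.normal(scale=0.02, size=(8, 2))
b = np.zeros(2)

logits = pooled_output @ W + b
# Softmax over the class dimension to get per-class probabilities.
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(probs.shape)  # (4, 2)
```

During fine-tuning, gradients flow into both the head parameters (`W`, `b` here) and the encoder weights, which is what "jointly training" the head with the base model means.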