mercelisw committed on
Commit 6c517ce · verified · 0 Parent(s)

Duplicate from mercelisw/electra-grc
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
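Each of these `.gitattributes` lines routes files matching a glob pattern through Git LFS instead of storing them in the repository directly. As a rough illustration only (not part of the repo, and a simplification of Git's real pattern-matching rules), the matching can be sketched in Python with `fnmatch`:

```python
import fnmatch

# A small subset of the LFS-tracked patterns from the .gitattributes above.
LFS_PATTERNS = ["*.bin", "*.h5", "*.onnx", "*.zip", "*tfevents*"]

def tracked_by_lfs(filename: str) -> bool:
    """Return True if the filename matches any LFS-tracked glob pattern."""
    return any(fnmatch.fnmatch(filename, pat) for pat in LFS_PATTERNS)
```

For example, `tracked_by_lfs("pytorch_model.bin")` is true, while `tracked_by_lfs("README.md")` is false, which is why the model weights below appear as LFS pointers rather than raw blobs.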
README.md ADDED
@@ -0,0 +1,30 @@
+ ---
+ language:
+ - grc
+ tags:
+ - ELECTRA
+ - TensorFlow
+ ---
+
+
+
+ An ELECTRA-small model for Ancient Greek, trained on texts from Homer up to the 4th century AD, drawn from the literary [GLAUx](https://github.com/alekkeersmaekers/glaux) corpus and the [DukeNLP](https://github.com/alekkeersmaekers/duke-nlp) papyrus corpus.
+
+ The model incorporates several design choices intended to combat data sparsity:
+ * Its input should always be in Unicode NFD (i.e. separate Unicode code points for diacritics).
+ * All grave accents should be replaced with acute accents (καί, not καὶ).
+ * When a word contains two accents, the second one should be removed (εἶπε μοι, not εἶπέ μοι).
+
+ If you use the model in conjunction with [glaux-nlp](https://github.com/alekkeersmaekers/glaux-nlp), you can pass the tokenized sentence to `normalize_tokens` from `tokenization.Tokenization` with `normalization_rule=greek_glaux`, which applies all of these normalizations for you.
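The normalization rules above can also be sketched in plain Python with the standard-library `unicodedata` module. This is an illustrative sketch, not the glaux-nlp implementation (which remains the supported route); it assumes combining characters U+0300 (grave), U+0301 (acute), and U+0342 (perispomeni/circumflex) cover the accents involved:

```python
import unicodedata

# Combining accent marks in NFD: acute, circumflex (perispomeni), grave.
ACCENTS = {"\u0301", "\u0342", "\u0300"}

def normalize_greek(word: str) -> str:
    # 1. Decompose to NFD so each diacritic becomes a separate code point.
    word = unicodedata.normalize("NFD", word)
    # 2. Replace every grave accent with an acute accent.
    word = word.replace("\u0300", "\u0301")
    # 3. Keep only the first accent in the word; drop any later ones.
    #    Breathings (e.g. U+0313) are not accents and are kept.
    out, seen_accent = [], False
    for ch in word:
        if ch in ACCENTS:
            if seen_accent:
                continue
            seen_accent = True
        out.append(ch)
    return "".join(out)
```

With this sketch, `normalize_greek("καὶ")` yields the NFD form of καί, and `normalize_greek("εἶπέ")` drops the second accent, yielding the NFD form of εἶπε, matching the examples above.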
+
+ ## Citation
+
+ ```bibtex
+ @misc{mercelis_electra-grc_2022,
+   title = {electra-grc},
+   url = {https://huggingface.co/mercelisw/electra-grc},
+   abstract = {An ELECTRA-small model for Ancient Greek, trained on texts from Homer up until the 4th century AD.},
+   author = {Mercelis, Wouter and Keersmaekers, Alek},
+   year = {2022},
+ }
+ ```
config.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "vocab_size": 32000,
+   "embedding_size": 128,
+   "hidden_size": 256,
+   "num_hidden_layers": 12,
+   "num_attention_heads": 4,
+   "intermediate_size": 1024,
+   "generator_size": "0.25",
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "attention_probs_dropout_prob": 0.1,
+   "max_position_embeddings": 512,
+   "type_vocab_size": 2,
+   "initializer_range": 0.02
+ }
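These are the standard ELECTRA-small hyperparameters. One detail worth noting is the factorized embedding: `embedding_size` (128) is smaller than `hidden_size` (256), so the large vocabulary lookup table is followed by a small projection. Quick arithmetic on the values above (a sketch, for illustration) shows the saving:

```python
# Hyperparameters copied from the config.json above.
vocab_size = 32000
embedding_size = 128
hidden_size = 256
num_attention_heads = 4

# Factorized embedding: a vocab_size x embedding_size lookup table,
# plus a projection from embedding_size up to hidden_size.
embedding_table = vocab_size * embedding_size  # 4,096,000 weights
projection = embedding_size * hidden_size      # 32,768 weights

# An unfactorized vocab_size x hidden_size table would be nearly twice as large.
unfactorized = vocab_size * hidden_size        # 8,192,000 weights

# Each of the 4 attention heads operates in a 64-dimensional subspace.
head_dim = hidden_size // num_attention_heads  # 64
```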
model.ckpt-500000.index ADDED
Binary file (43.6 kB).
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:606da6bfd2f2c55deb9112e74d43afdb842e8d00bd2e57e3aadaae75dac6a26b
+ size 57983495
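What the commit actually stores here is not the ~58 MB weight file but a three-line Git LFS pointer: the real blob is fetched by its SHA-256 OID at checkout. Such a pointer is simple `key value` text and trivial to parse (an illustrative sketch, not part of the repo):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file (one "key value" pair per line) into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer stored in this commit for pytorch_model.bin.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:606da6bfd2f2c55deb9112e74d43afdb842e8d00bd2e57e3aadaae75dac6a26b\n"
    "size 57983495\n"
)
info = parse_lfs_pointer(pointer)
# info["size"] is "57983495", i.e. about 58 MB of weights stored out-of-band.
```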
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer.json ADDED
The diff for this file is too large to render.
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": false, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": false, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "texts/models/electra-grc-2", "do_basic_tokenize": true, "never_split": null, "tokenizer_class": "ElectraTokenizer"}
vocab.txt ADDED
The diff for this file is too large to render.