Marijke committed · verified
Commit 5108a21 · 1 parent: b91375f

Update README.md

Files changed (1): README.md (+9 −5)
README.md CHANGED
@@ -26,8 +26,12 @@ This model is part of a series of models trained for the ML4AL paper “Gotta ca
 
 ### Training Data
 
-**Repository:** [https://github.com/NER-AncientLanguages/NERAncientGreekML4AL] (for data and training scripts). We thank the following projects for helping to provide the training data:
-
+**Repository:** [https://github.com/NER-AncientLanguages/NERAncientGreekML4AL] (for data and training scripts). \
+We thank the following projects for providing the training data: \
+Digital Periegesis: [https://www.periegesis.org/en] \
+Josh Kemp, annotated Odyssey: [https://medium.com/pelagios/beyond-translation-building-better-greek-scholars-561ab331a1bc] \
+the STEPBible project: [https://github.com/STEPBible/STEPBible-Data] \
+Perseus Digital Library, Deipnosophistae: [urn:cts:greekLit:tlg0008.tlg001.perseus-grc4] \
 
 ### Training Hyperparameters
 
@@ -35,9 +39,9 @@ We use Weights & Biases for hyperparameter optimization with a random search str
 
 The search space includes:
 
-Learning Rate: Sampled uniformly between 1e-6 and 1e-4
-Weight Decay: One of [0.1, 0.01, 0.001]
-Number of Training Epochs: One of [3, 4, 5, 6]
+Learning Rate: Sampled uniformly between 1e-6 and 1e-4 \
+Weight Decay: One of [0.1, 0.01, 0.001] \
+Number of Training Epochs: One of [3, 4, 5, 6] \
 
 For the final training of this model, the hyperparameters were:
 Learning Rate: 9.889410158465026e-05
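The search space described above can be sketched as a Weights & Biases sweep configuration. This is a minimal illustration, not the authors' actual setup: the parameter names (`learning_rate`, `weight_decay`, `num_train_epochs`), the metric, and the project name are assumptions; the real training scripts live in the linked repository.

```python
# Hypothetical W&B random-search sweep matching the README's search space.
sweep_config = {
    "method": "random",  # random search strategy, as stated in the README
    "metric": {"name": "eval/f1", "goal": "maximize"},  # assumed target metric
    "parameters": {
        # Learning rate: sampled uniformly between 1e-6 and 1e-4
        "learning_rate": {"distribution": "uniform", "min": 1e-6, "max": 1e-4},
        # Weight decay: one of [0.1, 0.01, 0.001]
        "weight_decay": {"values": [0.1, 0.01, 0.001]},
        # Number of training epochs: one of [3, 4, 5, 6]
        "num_train_epochs": {"values": [3, 4, 5, 6]},
    },
}

# Launching such a sweep would look like this (requires a logged-in wandb
# client and a training function; names here are placeholders):
#   import wandb
#   sweep_id = wandb.sweep(sweep_config, project="ner-ancient-greek")
#   wandb.agent(sweep_id, function=train_fn, count=20)
```

The final learning rate reported above (9.889410158465026e-05) sits near the upper end of the uniform range, which is consistent with it having been drawn by such a random search.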