# LatinCy w2v models
Details coming soon.
## Evaluation
The models included here have been evaluated on an analogy task and an odd-one-out task. See the following repo for more details about the evaluation dataset: https://github.com/diyclassics/latincy-w2v-eval.
### LatinCy w2v Analogies Evaluation
- Total analogies: 1383
- Analogies solved by at least one model: 800
- Analogies not solved by any model: 583
| Model | Rank 1 Accuracy | Rank 2 Accuracy | Rank 3 Accuracy | Rank 5 Accuracy | Rank 10 Accuracy | Rank 1 Correct | Rank 2 Correct | Rank 3 Correct | Rank 5 Correct | Rank 10 Correct |
|---|---|---|---|---|---|---|---|---|---|---|
| latincy_w2v-CBOW-50-10-0_0_2 | 0.542 | 0.675 | 0.749 | 0.830 | 0.953 | 434 | 540 | 599 | 664 | 762 |
| latincy_w2v-CBOW-50-5-0_0_2 | 0.545 | 0.690 | 0.771 | 0.864 | 0.979 | 436 | 552 | 617 | 691 | 783 |
| latincy_w2v-SG-50-5-0_0_2 | 0.525 | 0.664 | 0.757 | 0.831 | 0.964 | 420 | 531 | 606 | 665 | 771 |
| latincy_w2v-SG-50-10-0_0_2 | 0.506 | 0.667 | 0.764 | 0.839 | 0.946 | 405 | 534 | 611 | 671 | 757 |
| latincy_w2v-CBOW-100-10-0_0_2 | 0.716 | 0.855 | 0.949 | 1.000 | 1.000 | 573 | 684 | 759 | 800 | 800 |
| latincy_w2v-CBOW-100-5-0_0_2 | 0.705 | 0.853 | 0.940 | 1.000 | 1.000 | 564 | 682 | 752 | 800 | 800 |
| latincy_w2v-SG-100-5-0_0_2 | 0.701 | 0.865 | 0.955 | 1.000 | 1.000 | 561 | 692 | 764 | 800 | 800 |
| latincy_w2v-SG-100-10-0_0_2 | 0.675 | 0.819 | 0.884 | 0.978 | 1.000 | 540 | 655 | 707 | 782 | 800 |
| latincy_w2v-CBOW-300-10-0_0_2 | 0.766 | 0.938 | 1.000 | 1.000 | 1.000 | 613 | 750 | 800 | 800 | 800 |
| latincy_w2v-CBOW-300-5-0_0_2 | 0.752 | 0.920 | 0.999 | 1.000 | 1.000 | 602 | 736 | 799 | 800 | 800 |
| latincy_w2v-SG-300-10-0_0_2 | 0.584 | 0.754 | 0.854 | 0.988 | 1.000 | 467 | 603 | 683 | 790 | 800 |
| latincy_w2v-SG-300-5-0_0_2 | 0.565 | 0.775 | 0.881 | 1.000 | 1.000 | 452 | 620 | 705 | 800 | 800 |
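The rank-*k* columns above presumably count an analogy *a* : *b* :: *c* : *d* as solved when the expected word *d* appears among the top *k* words ranked by cosine similarity to the offset vector *b* − *a* + *c*, the standard word2vec analogy method. A minimal sketch of that scoring, using hypothetical toy vectors rather than the actual LatinCy models:

```python
import numpy as np

def rank_of_answer(vectors, a, b, c, answer):
    """For the analogy a : b :: c : ?, rank all candidate words by cosine
    similarity to (b - a + c) and return the 1-based rank of the expected
    answer. The three query words are excluded from the candidates."""
    target = vectors[b] - vectors[a] + vectors[c]
    target = target / np.linalg.norm(target)
    scores = {}
    for word, vec in vectors.items():
        if word in (a, b, c):
            continue
        scores[word] = vec @ target / np.linalg.norm(vec)
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked.index(answer) + 1

def rank_k_accuracy(vectors, analogies, k):
    """Fraction of (a, b, c, answer) analogies solved within the top k ranks."""
    hits = sum(rank_of_answer(vectors, *analogy) <= k for analogy in analogies)
    return hits / len(analogies)

# Toy 2-D vectors (one axis ~ gender, one axis ~ royalty) for illustration only.
vecs = {w: np.array(v) for w, v in {
    "rex": (1.0, 1.0), "regina": (-1.0, 1.0),
    "vir": (1.0, 0.0), "femina": (-1.0, 0.0), "puer": (1.0, -1.0)}.items()}
```

With these toy vectors, `rank_of_answer(vecs, "rex", "regina", "vir", "femina")` returns 1, since regina − rex + vir lands exactly on femina.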
### LatinCy w2v Odd-One-Out Evaluation
- Total tasks: 2728
- Tasks solved by at least one model: 2179
- Tasks not solved by any model: 549
| Model | Accuracy | Correct |
|---|---|---|
| latincy_w2v-CBOW-50-10-0_0_2 | 0.620 | 1691 |
| latincy_w2v-CBOW-50-5-0_0_2 | 0.613 | 1671 |
| latincy_w2v-SG-50-10-0_0_2 | 0.549 | 1499 |
| latincy_w2v-SG-50-5-0_0_2 | 0.560 | 1528 |
| latincy_w2v-CBOW-100-10-0_0_2 | 0.626 | 1707 |
| latincy_w2v-CBOW-100-5-0_0_2 | 0.642 | 1751 |
| latincy_w2v-SG-100-10-0_0_2 | 0.543 | 1482 |
| latincy_w2v-SG-100-5-0_0_2 | 0.556 | 1518 |
| latincy_w2v-CBOW-300-10-0_0_2 | 0.666 | 1817 |
| latincy_w2v-CBOW-300-5-0_0_2 | 0.667 | 1820 |
| latincy_w2v-SG-300-10-0_0_2 | 0.562 | 1534 |
| latincy_w2v-SG-300-5-0_0_2 | 0.577 | 1575 |
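An odd-one-out task gives the model a small group of words and asks which one does not belong. A common heuristic (the one behind gensim's `KeyedVectors.doesnt_match`, which these evaluations may or may not use) picks the word least similar to the group's mean vector. A sketch with hypothetical toy vectors:

```python
import numpy as np

def odd_one_out(vectors, words):
    """Return the word whose unit vector has the lowest cosine similarity
    to the normalized mean of the group's unit vectors."""
    unit = {w: vectors[w] / np.linalg.norm(vectors[w]) for w in words}
    mean = np.mean(list(unit.values()), axis=0)
    mean = mean / np.linalg.norm(mean)
    return min(words, key=lambda w: unit[w] @ mean)

# Toy 2-D vectors for illustration: three animals cluster together,
# gladius points in a different direction.
vecs = {w: np.array(v) for w, v in {
    "equus": (1.0, 0.1), "canis": (0.9, 0.2),
    "felis": (1.0, 0.0), "gladius": (0.1, 1.0)}.items()}
```

Here `odd_one_out(vecs, ["equus", "canis", "felis", "gladius"])` returns `"gladius"`, and the accuracy column above is simply the fraction of the 2728 tasks a model answers correctly.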