---
license: cc0-1.0
---

# Affiliation Triplet Curriculum Dataset

This dataset is designed for training an embedding model using triplet loss. It contains triplets of affiliation strings (`anchor`, `positive`, `negative`) structured to teach a model to recognize when two strings refer to the same institution.

The dataset is sorted from easiest to hardest to facilitate curriculum learning, allowing the model to learn from simple examples before progressing to more challenging ones.
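The triplet-loss objective mentioned above can be sketched in plain Python. This is a minimal illustration, not the dataset's training code; the Euclidean distance and the `margin` value are illustrative assumptions.

```python
from math import dist  # Euclidean distance between two points

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss: the positive embedding should sit closer to the
    anchor than the negative does, by at least `margin`."""
    d_pos = dist(anchor, positive)   # anchor-positive distance
    d_neg = dist(anchor, negative)   # anchor-negative distance
    return max(0.0, d_pos - d_neg + margin)

# A triplet that already satisfies the margin contributes zero loss;
# a violating triplet contributes a positive penalty.
satisfied = triplet_loss((1.0, 0.0), (1.0, 0.1), (-1.0, 0.0))
violating = triplet_loss((0.0, 0.0), (1.0, 0.0), (0.5, 0.0))
```

Training minimizes this quantity over many triplets, pulling matching affiliation strings together in embedding space while pushing non-matching ones apart.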

---

## Dataset Details

### Data Fields

Each row in the dataset is a complete triplet with the following fields:

- `triplet_id`: (integer) A unique identifier for each triplet in the sequence.
- `anchor`: (string) The primary affiliation string.
- `positive`: (string) An affiliation string known to be a match for the `anchor`.
- `negative`: (string) An affiliation string known not to be a match for the `anchor`.
- `difficulty`: (float) A calculated score representing the triplet's difficulty (`positive_dist_ratio - negative_dist_ratio`). Lower (and negative) scores indicate harder triplets.
- `positive_dist_ratio`: (float) The fuzzy string similarity score (0-100) between the normalized `anchor` and `positive`.
- `negative_dist_ratio`: (float) The fuzzy string similarity score (0-100) between the normalized `anchor` and `negative`.
- `negative_type`: (string) The type of negative example, either `hard` or `easy`.
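The relationship between the ratio fields and `difficulty` can be sketched as follows. The README does not name the fuzzy-matching library, so this sketch uses the standard library's `difflib` as a stand-in; the dataset's actual scores may differ in detail, but the formula is the one given above.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Fuzzy similarity on a 0-100 scale between normalized strings.
    difflib stands in here for the dataset's (unspecified) matcher."""
    a, b = a.lower().strip(), b.lower().strip()
    return 100.0 * SequenceMatcher(None, a, b).ratio()

def difficulty(anchor: str, positive: str, negative: str) -> float:
    """difficulty = positive_dist_ratio - negative_dist_ratio.
    Lower (and negative) values mean harder triplets."""
    return similarity(anchor, positive) - similarity(anchor, negative)

# An easy triplet scores high; a tricky one can go negative.
easy = difficulty("University of Oxford", "Univ. of Oxford", "Kyoto University")
hard = difficulty("University of Michigan", "UMich", "Western Michigan University")
```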

---

## Dataset Structure and Curriculum Learning

The dataset is delivered as a single, unsplit file.

It is **not randomly shuffled**. Instead, it is intentionally sorted by the `difficulty` field in descending order (from highest score to lowest). This creates a curriculum:

1. **Easiest triplets (top of the file):** These have a high positive similarity and a low negative similarity.
2. **Hardest triplets (bottom of the file):** These have a low positive similarity and a high negative similarity, often resulting in a negative difficulty score. These examples are crucial for teaching the model to handle nuanced and tricky cases.

---

## Triplet Generation

### Positive Examples

The `positive` affiliation is a known variant of the `anchor`; both share the same ROR ID. To ensure a meaningful learning signal, trivial pairs (e.g., "Google" vs. "google") with near-perfect similarity (>= 99%) are filtered out. This forces the model to learn from meaningful variations such as abbreviations, acronyms, or different subunit names.
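A filter like the one described could look like this sketch. The 99% threshold comes from the text above; the normalization steps and `difflib` matcher are stand-in assumptions, since the actual pipeline is not shown.

```python
from difflib import SequenceMatcher

def is_trivial_pair(anchor: str, positive: str, threshold: float = 99.0) -> bool:
    """Flag near-identical pairs (e.g. case-only differences) for removal.
    difflib stands in for the pipeline's unspecified fuzzy matcher."""
    a, b = anchor.lower().strip(), positive.lower().strip()
    score = 100.0 * SequenceMatcher(None, a, b).ratio()
    return score >= threshold

# "Google" vs. "google" collapses to identical strings after normalization,
# so it is filtered; a genuine abbreviation pair is kept.
trivial = is_trivial_pair("Google", "google")
kept = not is_trivial_pair("Massachusetts Institute of Technology", "MIT")
```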

### Negative Examples

The dataset includes two types of negative examples to provide a robust training signal. The final dataset is generated with a target ratio of **80% hard negatives** to **20% easy negatives**.

1. **Hard Negatives:** These are intentionally tricky pairs, generated by pairing an `anchor` with an affiliation that is semantically similar but incorrect (e.g., "University of Michigan" vs. "Western Michigan University"). They force the model to learn subtle but important distinctions.
2. **Easy Negatives:** These pairs are generated by selecting an affiliation from a completely different, unrelated institution. They are generally easier for the model to distinguish and help the model learn the broad feature space of what makes affiliations different.