# WordNet 3.0 Research Guide
This guide explains the structure of the WordNet files used in this dataset project,
how this adapter parses them, and how to interpret the resulting records for
research workflows.
## Scope
This project packages WordNet 3.0 lexical database files for four parts of speech:
- noun
- verb
- adjective
- adverb
The adapter supports three variants:
- `data` - synset records (`data.noun`, `data.verb`, `data.adj`, `data.adv`)
- `index` - lemma index records (`index.noun`, `index.verb`, `index.adj`, `index.adv`)
- `exceptions` - morphology exceptions (`noun.exc`, `verb.exc`, `adj.exc`, `adv.exc`)
## Source Files In This Project
Under `datasets/wordnet/data`:
- `data.noun`, `data.verb`, `data.adj`, `data.adv`
- `index.noun`, `index.verb`, `index.adj`, `index.adv`
- `noun.exc`, `verb.exc`, `adj.exc`, `adv.exc`
Important: each WordNet file begins with license/header text before its structured records.
Those rows are valid file content, but they are not lexical records.
## Canonical Format References
Primary references for WordNet 3.0 file formats:
- Princeton `wndb(5WN)`: data/index/exception file grammar
- Princeton `wninput(5WN)`: pointer symbols and lexical conventions
See:
- [WordNet `wndb(5WN)`](https://wordnet.princeton.edu/documentation/wndb5wn)
- [WordNet `wninput(5WN)`](https://wordnet.princeton.edu/documentation/wninput5wn)
## Core Concepts
- Synset: a set of synonymous word senses.
- `synset_offset`: byte offset used as an address into `data.*` files.
- `ss_type`: part-of-speech/synset type marker (`n`, `v`, `a`, `s`, `r`).
- Pointer: typed relation from one synset (or specific word sense) to another.
- `source/target`: a four-hex-digit field that distinguishes semantic and lexical pointers.
### Synset Identity
`synset_offset` alone is not globally unique across all POS files.
Use `(synset_offset, pos)` as the unique synset key.
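A minimal sketch of this keying rule (all values illustrative, not taken from the files):

```python
# Offsets are not unique across POS files, so key synsets by (offset, pos).
synsets = {}

def add_synset(offset: str, pos: str, gloss: str) -> None:
    synsets[(offset, pos)] = {"gloss": gloss}

# The same 8-digit offset could appear in both data.noun and data.verb:
add_synset("00001740", "n", "a noun gloss")
add_synset("00001740", "v", "a verb gloss")
```

Keying by offset alone would silently merge the two entries; the tuple key keeps them distinct.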
## Raw File Grammar
### `data.*` Files
Per `wndb(5WN)`, data lines are:
```text
synset_offset lex_filenum ss_type w_cnt word lex_id [word lex_id ...] p_cnt [ptr ...] [frames ...] | gloss
```
Where:
- `synset_offset`: 8-digit decimal byte offset.
- `lex_filenum`: 2-digit decimal lexical file id.
- `ss_type`: one of `n`, `v`, `a`, `s`, `r`.
- `w_cnt`: 2-digit hexadecimal count of words in the synset.
- `word`: token, underscores for spaces (adjectives may include markers like `(p)`).
- `lex_id`: 1-digit hexadecimal sense discriminator within lexicographer files.
- `p_cnt`: 3-digit decimal pointer count.
- `ptr`:
```text
pointer_symbol target_synset_offset pos source/target
```
- `source/target`: four hexadecimal digits, split as `ss tt`:
- `0000` means semantic pointer (synset-to-synset)
- non-zero means lexical pointer from source word number `ss` to target word number `tt`
- `frames` (verbs only):
```text
f_cnt + f_num w_num [ + f_num w_num ... ]
```
- `gloss`: free text after `|`.
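The grammar above can be sketched as a small parser. This is illustrative only (the adapter's actual implementation may differ, and verb frames are omitted); the sample line is constructed to follow the grammar, not copied from a distribution file:

```python
def parse_data_line(line: str) -> dict:
    """Parse one wndb(5WN) data line into a dict (verb frames omitted)."""
    body, _, gloss = line.partition("|")
    fields = body.split()
    offset, lex_filenum, ss_type = fields[0], fields[1], fields[2]
    w_cnt = int(fields[3], 16)          # word count is 2-digit hexadecimal
    pos = 4
    words = []
    for _ in range(w_cnt):
        word, lex_id = fields[pos], fields[pos + 1]
        words.append({"word": word, "lex_id_int": int(lex_id, 16)})
        pos += 2
    p_cnt = int(fields[pos])            # pointer count is 3-digit decimal
    pos += 1
    pointers = []
    for _ in range(p_cnt):
        symbol, target, tpos, st = fields[pos:pos + 4]
        pointers.append({
            "symbol": symbol,
            "target_offset": target,
            "pos": tpos,
            "is_semantic": st == "0000",          # 0000 = synset-to-synset
            "source_word_number": int(st[:2], 16),
            "target_word_number": int(st[2:], 16),
        })
        pos += 4
    return {
        "synset_offset": offset,
        "lex_filenum": lex_filenum,
        "ss_type": ss_type,
        "words": words,
        "pointers": pointers,
        "gloss": gloss.strip(),
    }

sample = ("00001740 03 n 01 entity 0 001 "
          "~ 00001930 n 0000 | that which is perceived to exist")
record = parse_data_line(sample)
```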
### `index.*` Files
Per `wndb(5WN)`, index lines are:
```text
lemma pos synset_cnt p_cnt [ptr_symbol ...] sense_cnt tagsense_cnt synset_offset [synset_offset ...]
```
Where:
- `lemma`: lower-case lemma/collocation (`_` for spaces).
- `pos`: `n`, `v`, `a`, `r`.
- `synset_cnt`: number of senses/synsets for this lemma in POS.
- `p_cnt`: number of pointer symbols listed.
- `ptr_symbol`: pointer symbol types seen for lemma senses.
- `sense_cnt`: same value as `synset_cnt`; kept for historical compatibility.
- `tagsense_cnt`: number of tagged (frequency-ranked) senses.
- trailing `synset_offset` list length should equal `synset_cnt`.
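A corresponding index-line parser sketch (sample values are illustrative, not copied from `index.noun`):

```python
def parse_index_line(line: str) -> dict:
    """Parse one wndb(5WN) index line (illustrative sketch)."""
    fields = line.split()
    lemma, pos = fields[0], fields[1]
    synset_cnt = int(fields[2])
    p_cnt = int(fields[3])
    ptr_symbols = fields[4:4 + p_cnt]           # p_cnt pointer symbols
    sense_cnt = int(fields[4 + p_cnt])
    tagsense_cnt = int(fields[5 + p_cnt])
    synset_offsets = fields[6 + p_cnt:]
    # Per the grammar, the trailing offset list length equals synset_cnt.
    assert len(synset_offsets) == synset_cnt
    return {
        "lemma": lemma, "pos": pos,
        "synset_cnt": synset_cnt, "ptr_symbols": ptr_symbols,
        "sense_cnt": sense_cnt, "tagsense_cnt": tagsense_cnt,
        "synset_offsets": synset_offsets,
    }

sample = "dog n 2 2 @ ~ 2 1 02084071 10114209"
rec = parse_index_line(sample)
```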
### `*.exc` Exception Files
Per `wndb(5WN)`, exception lines are:
```text
inflected_form base_form [base_form ...]
```
These map irregular inflections to one or more base forms.
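Loading an exception file into a lookup table is a one-pass job (sample lines here are illustrative):

```python
def load_exceptions(lines):
    """Map inflected_form -> [base_form, ...] per wndb(5WN)."""
    table = {}
    for line in lines:
        parts = line.split()
        if len(parts) >= 2:          # skip blank/short lines defensively
            table[parts[0]] = parts[1:]
    return table

exc = load_exceptions(["geese goose", "axes ax axis"])
```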
## Pointer Symbols You Will See
This project normalizes common pointer symbols to labels when parsing `data.*`.
| Symbol | Label |
| --- | --- |
| `!` | antonym |
| `@` | hypernym |
| `@i` | instance_hypernym |
| `~` | hyponym |
| `~i` | instance_hyponym |
| `#m` | member_holonym |
| `#s` | substance_holonym |
| `#p` | part_holonym |
| `%m` | member_meronym |
| `%s` | substance_meronym |
| `%p` | part_meronym |
| `=` | attribute |
| `+` | derivationally_related_form |
| `;c` | domain_of_synset_topic |
| `-c` | member_of_domain_topic |
| `;r` | domain_of_synset_region |
| `-r` | member_of_domain_region |
| `;u` | domain_of_synset_usage |
| `-u` | member_of_domain_usage |
| `*` | entailment |
| `>` | cause |
| `^` | also_see |
| `$` | verb_group |
| `&` | similar_to |
| `<` | participle_of_verb |
| `\` | pertainym_or_derived_from_adjective |
Meaning and allowed POS combinations are defined by WordNet documentation.
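For programmatic use, the table translates directly into a lookup. A sketch with a subset of the symbols (extend with the full table as needed):

```python
# Subset of the pointer-symbol table above; extend as needed.
POINTER_LABELS = {
    "!": "antonym",
    "@": "hypernym",
    "@i": "instance_hypernym",
    "~": "hyponym",
    "#p": "part_holonym",
    "%p": "part_meronym",
    "&": "similar_to",
}

def label_for(symbol: str) -> str:
    # Fall back to the raw symbol for relations not in the subset above.
    return POINTER_LABELS.get(symbol, symbol)
```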
## Adapter Parsing Behavior
### Variant `data` with `parse_records=true`
Adds structured fields:
- `is_record`
- `offset`, `synset_offset`
- `lex_filenum`
- `ss_type`
- `w_cnt`, `word_count`
- `words` (list of objects)
- `lemmas` (normalized with spaces)
- `p_cnt`, `pointer_count`
- `pointers` (list of objects)
- `frames` (list of verb frame objects; empty for non-verbs, since only `data.verb` carries frames)
- `gloss`
- `parse_error` (flag when line partially matches but is malformed)
`words` entries include:
- `word`
- `lemma`
- `marker` (for adjective markers like `a`, `p`, `ip`)
- `lex_id`
- `lex_id_int`
- `word_number`
`pointers` entries include:
- `symbol`
- `label`
- `target_offset`
- `pos`
- `source_target`
- `source_word_number`
- `target_word_number`
- `is_semantic`
`frames` entries include:
- `frame_number`
- `word_number`
- `applies_to_all_words`
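A sketch of how these frame fields decode from the raw `f_num w_num` pair (per `wninput(5WN)`, a `w_num` of `00` means the frame applies to all words in the synset):

```python
def decode_frame(f_num: str, w_num: str) -> dict:
    """f_num is 2-digit decimal; w_num is 2-digit hexadecimal."""
    n = int(w_num, 16)
    return {
        "frame_number": int(f_num),
        "word_number": n,
        "applies_to_all_words": n == 0,   # 00 = all words in the synset
    }
```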
### Variant `index` with `parse_records=true`
Adds:
- `is_record`
- `lemma`, `lemma_text`
- `pos`
- `synset_cnt`
- `p_cnt`
- `ptr_symbols`
- `sense_cnt`
- `tagsense_cnt`
- `synset_offsets`
- `parse_error`
### Variant `exceptions` with `parse_records=true`
Adds:
- `is_record`
- `inflected_form`, `inflected_form_text`
- `base_forms`, `base_forms_text`
## Header/License Row Handling
Non-record lines are expected at the top of each file.
- Parsed output marks these rows as `is_record=false`.
- The WordNet dataset config filters these rows automatically when parsing is enabled.
Project default in `datasets/wordnet/dataset.json`:
```json
{
"adapter_defaults": {
"records_only": true
}
}
```
Practical effect:
- If you run with `--parse-records`, non-record rows are removed by default.
- To keep raw non-record rows, set `records_only=false`.
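If you work with parsed rows directly, the equivalent manual filter is a one-liner on the `is_record` flag (row shapes here are illustrative):

```python
rows = [
    {"is_record": False, "line": "  1 This software and database..."},  # license header
    {"is_record": True, "synset_offset": "00001740", "ss_type": "n"},
]

# Same effect as records_only=true: keep only structured lexical records.
records = [r for r in rows if r.get("is_record")]
```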
## Research Workflows
### Quick CLI Usage
Parse data records and keep only true records (default behavior):
```bash
python -m tether.datasets run-dataset wordnet \
--base-dir datasets/wordnet \
--variant data \
--parse-records \
--json
```
Disable filtering for diagnostics/audit:
```bash
python -m tether.datasets run-dataset wordnet \
--base-dir datasets/wordnet \
--variant data \
--parse-records \
--option records_only=false \
--json
```
Build parquet artifact bundle for analysis:
```bash
python -m tether.datasets prepare-dataset wordnet \
--base-dir datasets/wordnet \
--variant data \
--parse-records \
--output-dir artifacts/wordnet-research \
--format parquet \
--json
```
### Example Analysis Ideas
- Build graph edges from `pointers` using `(synset_offset, ss_type)` keys.
- Compare lexical vs semantic relation rates via `is_semantic`.
- Analyze gloss lengths, lemma counts (`word_count`), and pointer density (`pointer_count`).
- Filter by specific pointer symbols (for example `@` hypernym, `~` hyponym).
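For example, hypernym edges can be derived from records shaped like the parsed `data` fields above (the record values here are illustrative):

```python
def hypernym_edges(records):
    """Yield ((offset, pos), (offset, pos)) edges for '@' pointers."""
    for rec in records:
        src = (rec["synset_offset"], rec["ss_type"])
        for ptr in rec["pointers"]:
            if ptr["symbol"] == "@":
                yield src, (ptr["target_offset"], ptr["pos"])

records = [{
    "synset_offset": "02084071", "ss_type": "n",
    "pointers": [
        {"symbol": "@", "target_offset": "02083346", "pos": "n"},
        {"symbol": "~", "target_offset": "01322604", "pos": "n"},
    ],
}]
edges = list(hypernym_edges(records))
```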
## Recommended Quality Checks
- Assert no remaining `is_record=false` rows in final analysis datasets when using defaults.
- Track `parse_error=true` counts as ingestion quality telemetry.
- Validate that index `len(synset_offsets) == synset_cnt` for parsed records.
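The checks above can be sketched as simple assertions over parsed rows (field names per this guide; the rows are illustrative):

```python
def run_quality_checks(rows):
    """Return a list of problem descriptions; empty means all checks pass."""
    problems = []
    if any(not r.get("is_record", False) for r in rows):
        problems.append("non-record rows present")
    n_errors = sum(1 for r in rows if r.get("parse_error"))
    if n_errors:
        problems.append(f"{n_errors} rows with parse_error")
    for r in rows:
        offs = r.get("synset_offsets")
        if offs is not None and len(offs) != r.get("synset_cnt"):
            problems.append(f"offset count mismatch for {r.get('lemma')}")
    return problems

clean = [{"is_record": True, "parse_error": False,
          "lemma": "dog", "synset_cnt": 1, "synset_offsets": ["02084071"]}]
```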
## Known Caveats
- Some WordNet tokens contain punctuation and abbreviations (`A.D.`, `B.C.`).
- Not all pointer symbols apply to all parts of speech.
- Exception files may include forms not present in your selected downstream subset.
- Sense ranking in index files depends on WordNet tag counts and ordering conventions.
## Citation And License
- Citation: Miller (1995), WordNet: A Lexical Database for English.
- License: WordNet 3.0 license (Princeton University), included in source files.
Use source-level attribution and license text in downstream publications and releases as required.