# English OpenList

The largest open-source, validated English word list for NLP and games.

## Dataset Description

English OpenList is a comprehensive, continuously updated dictionary of valid English words. It provides:

- 378,666+ validated English words following Scrabble-compatible rules
- Rich metadata including part of speech, definitions, and pronunciation
- Daily updates from authoritative dictionary sources
- Version history with changelogs for every update
## Why Use English OpenList?
| Use Case | Benefit |
|---|---|
| Spell Checking | High-precision word validation |
| Word Games | Scrabble/Wordle compatible list |
| NLP Training | Clean, validated vocabulary |
| Research | Transparent methodology, full provenance |
## Dataset Structure

### Full Word Lists (`data/`)

These are the complete, up-to-date word lists that most users will want to download:

```
data/
├── merged_valid_words.txt    # FULL valid word list (378,666+ words, one per line)
├── merged_valid_dict.json    # FULL dictionary with metadata for all valid words
├── merged_invalid_words.txt  # FULL invalid/rejected entries list
└── merged_invalid_dict.json  # FULL invalid dictionary with rejection reasons
```
### Daily Releases (`releases/`)

Daily updates with changelog and statistics:

```
releases/
└── {YYYY-MM-DD}/
    ├── promoted_words.txt  # Words promoted from invalid to valid that day
    ├── update_stats.json   # Statistics for the update
    └── CHANGELOG.md        # Changelog for the update
```
### Latest Update Reference (`latest/`)

Copy of the most recent release for convenience:

```
latest/
├── promoted_words.txt
├── update_stats.json
└── CHANGELOG.md
```
## Data Fields

**Valid Dictionary Entry:**

```json
{
  "word": "example",
  "source": "merriam-webster",
  "part_of_speech": "noun",
  "definition": "one that serves as a pattern...",
  "pronunciation": "ig-ˈzam-pəl",
  "validation_status": "valid",
  "added_date": "2026-01-12T00:00:00"
}
```
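The per-word entries above can be filtered by any of their fields once the JSON dictionary is loaded. A minimal sketch, using a small in-memory sample that mirrors the entry schema (the real `merged_valid_dict.json` maps each word to an entry like this; `words_by_pos` is an illustrative helper, not part of the dataset):

```python
# Small in-memory sample mirroring the entry schema shown above.
sample = {
    "example": {
        "word": "example",
        "part_of_speech": "noun",
        "definition": "one that serves as a pattern...",
    },
    "run": {
        "word": "run",
        "part_of_speech": "verb",
        "definition": "to go faster than a walk...",
    },
}

def words_by_pos(dictionary, pos):
    """Return all words whose part_of_speech matches pos, sorted."""
    return sorted(
        word for word, entry in dictionary.items()
        if entry.get("part_of_speech") == pos
    )

print(words_by_pos(sample, "noun"))  # ['example']
```

The same helper works unchanged on the full dictionary after `json.load`.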
## Validation Rules (Scrabble-Compatible)

Words are included if they:

- ✅ Contain only lowercase letters (a-z)
- ✅ Are recognized by Merriam-Webster Collegiate Dictionary
- ✅ Are 2-45 characters in length
- ✅ Are NOT proper nouns (unless commonly used as verbs)
- ✅ Are NOT abbreviations or acronyms
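The structural rules above (character set and length) can be checked with a single regular expression. A minimal sketch; the dictionary-membership, proper-noun, and abbreviation checks need external data and are omitted here:

```python
import re

# Lowercase a-z only, 2-45 characters, per the structural rules above.
WORD_RE = re.compile(r"^[a-z]{2,45}$")

def passes_structural_rules(word: str) -> bool:
    """True if the word satisfies the character-set and length rules."""
    return bool(WORD_RE.fullmatch(word))

print(passes_structural_rules("hello"))  # True
print(passes_structural_rules("Hello"))  # False (uppercase letter)
print(passes_structural_rules("a"))      # False (too short)
```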
## Dataset Statistics
| Metric | Value |
|---|---|
| Total Valid Words | 378,666+ |
| Total Invalid Entries | 9,275,000+ |
| Update Frequency | Daily (00:00 UTC) |
| Primary Source | Merriam-Webster Collegiate Dictionary |
## Usage

### Python (Hugging Face Datasets)

```python
from datasets import load_dataset

# Load the valid word list
dataset = load_dataset("english-openlist/english-openlist", split="train")

# Access words
for entry in dataset:
    print(entry["word"])
```
### Direct Download

Download the complete word lists:

```shell
# Download FULL valid words list (378,666+ words)
wget https://huggingface.co/datasets/ryanjosephkamp/english-openlist/resolve/main/data/merged_valid_words.txt

# Download FULL valid dictionary with metadata
wget https://huggingface.co/datasets/ryanjosephkamp/english-openlist/resolve/main/data/merged_valid_dict.json

# Download FULL invalid words list (for reference)
wget https://huggingface.co/datasets/ryanjosephkamp/english-openlist/resolve/main/data/merged_invalid_words.txt

# Download FULL invalid dictionary
wget https://huggingface.co/datasets/ryanjosephkamp/english-openlist/resolve/main/data/merged_invalid_dict.json
```

Download daily release files:

```shell
# Download a specific day's update
wget https://huggingface.co/datasets/ryanjosephkamp/english-openlist/resolve/main/releases/2026-01-19/CHANGELOG.md
```
### Python (Raw Files)

```python
import json

# Load word list
with open("merged_valid_words.txt", "r") as f:
    words = set(line.strip() for line in f)

# Check if a word is valid
print("hello" in words)  # True
print("asdf" in words)   # False

# Load dictionary for metadata
with open("merged_valid_dict.json", "r") as f:
    dictionary = json.load(f)

print(dictionary["example"]["definition"])
```
## Methodology

### Phase 1: Corpus Acquisition (December 2025)

Aggregated 9.8 million candidate words from 15+ open sources:

- Wiktionary (6.5M words)
- WordNet 3.1 (150K words)
- SCOWL 2020 (500K words)
- Google Books Ngrams (1M+ words)
- Collins Complete Dictionary (800K words)
### Phase 2: Validation Pipeline (December 2025 - January 2026)

Multi-stage AI validation using Gemini 2.0/2.5 Flash:

- Pattern-based screening
- LLM classification with iterative convergence
- Statistical sampling for quality assurance
- Synthetic word generation and validation
### Phase 3: Continuous Updates (January 2026 - Ongoing)

Daily automated pipeline:

- Discover new words from Merriam-Webster RSS feed and manual additions
- Validate ~1,000 words from invalid list against dictionary APIs
- Promote validated words to the valid list
- Update full word lists and dictionaries on Hugging Face
- Generate changelog and statistics
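The promotion step of the daily pipeline can be sketched as a pure set operation. This is an illustrative sketch, not the project's actual code: `is_in_dictionary` is a hypothetical stub standing in for the real dictionary-API lookup, and the batch size mirrors the ~1,000-word daily check described above:

```python
# Placeholder lookup table; in the real pipeline this would be a
# dictionary API call (e.g. Merriam-Webster).
KNOWN_WORDS = {"hello", "world"}

def is_in_dictionary(word: str) -> bool:
    """Hypothetical stub for a dictionary-API lookup."""
    return word in KNOWN_WORDS

def promote(valid: set, invalid: set, batch_size: int = 1000):
    """Check up to batch_size invalid entries; promote any that validate.

    Returns the updated valid set, updated invalid set, and the words
    promoted in this run.
    """
    promoted = {
        word for word in sorted(invalid)[:batch_size]
        if is_in_dictionary(word)
    }
    return valid | promoted, invalid - promoted, promoted

valid, invalid, promoted = promote({"cat"}, {"hello", "asdf"})
print(sorted(promoted))  # ['hello']
```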
## Citation

```bibtex
@dataset{english_openlist_2026,
  title     = {English OpenList: A Comprehensive Validated English Word List},
  author    = {English OpenList Project Team},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/english-openlist/english-openlist}
}
```
## License
This dataset is released under the MIT License.
The underlying word data is derived from open sources with compatible licenses.
## Contact

- Issues: GitHub Issues
- Updates: Check the `releases/` folder for version history
Last Updated: January 2026