---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- domain-specific
- filtered-corpus
- ontology-guided
size_categories:
- unknown
---
# quantum-physics-corpus
## Dataset Description
This dataset is a domain-specific quantum-physics corpus, built by ontology-guided filtering of FineWeb-Edu.
### Dataset Creation
- **Source:** HuggingFaceFW/fineweb-edu
- **Filtering Method:** Semantic similarity to subdomain centroids (embedding-based)
- **Pipeline:** Ontology-Guided Domain Corpus Builder
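The centroid-based assignment step can be sketched as follows. This is a minimal illustration, not the pipeline's actual code: the embedding model, vector dimensionality, and similarity threshold used to build this corpus are not specified here, so the `assign_subdomain` helper, the `0.5` threshold, and the toy vectors are all hypothetical.

```python
import numpy as np

def assign_subdomain(chunk_vec, centroids, threshold=0.5):
    """Assign a chunk embedding to its nearest subdomain centroid
    by cosine similarity; return (None, score) if below threshold.
    Illustrative only -- the real pipeline's threshold is unknown."""
    # Normalize so dot products equal cosine similarities
    chunk = chunk_vec / np.linalg.norm(chunk_vec)
    cents = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    sims = cents @ chunk
    best = int(np.argmax(sims))
    score = float(sims[best])
    return (best, score) if score >= threshold else (None, score)

# Toy example: 3 subdomain centroids in a 4-dimensional embedding space
centroids = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0],
                      [0, 0, 1, 1]], dtype=float)
chunk_vec = np.array([0.9, 0.1, 0.0, 0.0])
subdomain_id, score = assign_subdomain(chunk_vec, centroids)
```

Chunks whose best score falls below the threshold are discarded; the rest carry their winning `subdomain_id` and `similarity_score` into the released dataset.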
### Dataset Structure
Each chunk contains:
- `text`: The text content (256-512 tokens)
- `subdomain_id`: Assigned subdomain
- `similarity_score`: Cosine similarity to subdomain centroid
- `token_count`: Number of tokens
- `source_dataset`: Original dataset name
- `source_id`: Original document ID
- `chunk_index`: Position within source document
### Usage
```python
from datasets import load_dataset
dataset = load_dataset("konsman/quantum-physics-corpus")
# Access filtered chunks
for chunk in dataset['train']:
    print(chunk['text'])
    print(chunk['subdomain_id'])
    print(chunk['similarity_score'])
```
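Because every chunk carries its `similarity_score`, you can tighten the corpus further after loading. A minimal sketch over plain dicts (the sample records and the `0.6` cutoff are illustrative, not part of the released data or pipeline):

```python
# Hypothetical chunk records following the schema above (values illustrative)
chunks = [
    {"text": "Entanglement swapping in photonic systems...",
     "subdomain_id": 2, "similarity_score": 0.81},
    {"text": "Loosely related introductory material...",
     "subdomain_id": 0, "similarity_score": 0.42},
]

# Keep only chunks strongly matched to their subdomain centroid
threshold = 0.6
high_quality = [c for c in chunks if c["similarity_score"] >= threshold]
```

The same predicate works on the loaded dataset via `dataset['train'].filter(...)` if you prefer to stay within the `datasets` API.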
### License
MIT License
### Citation
Generated using the Ontology-Guided Domain Corpus Builder pipeline.