---
license: mit
language:
- he
size_categories:
- 10K<n<100K
---
# About Dataset
This dataset is a collection of `JSON` records covering most of the Hebrew Wikipedia.
## Data collection methodology
The data was collected as follows:
- **Crawl Hebrew Wikipedia:** Begin by crawling through Hebrew Wikipedia to collect all redirect links on each page.
- **Breadth-First Search (BFS):** For each page, apply a BFS-style strategy so that every linked page is eventually visited (see the sketch after this list).
- **Link Collection:** Collect the links as tuples in a link file, where each tuple contains a page name and its corresponding link (<page_name>, <page_link>).
- **Data Scraping and Cleaning:** After crawling and link collection, iterate over the links, scrape every paragraph, and clean the text. Additionally, collect the page summary for each page, if available.
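The snippet below is a minimal sketch of the BFS-style link collection step. It is not the exact crawler used for this dataset: the base URL, seed page, link filter, and page limit are illustrative assumptions.

```python
# Sketch of BFS-style link collection (illustrative, not the original crawler).
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

BASE_URL = "https://he.wikipedia.org"   # assumed starting point
START_PAGE = "/wiki/עמוד_ראשי"          # assumed seed page (Hebrew Wikipedia main page)


def collect_links(max_pages: int = 1000) -> list[tuple[str, str]]:
    """Breadth-first crawl that records (page_name, page_link) tuples."""
    queue = deque([START_PAGE])
    seen = {START_PAGE}
    links: list[tuple[str, str]] = []

    while queue and len(links) < max_pages:
        path = queue.popleft()
        url = urljoin(BASE_URL, path)
        resp = requests.get(url, timeout=10)
        soup = BeautifulSoup(resp.text, "html.parser")

        title_tag = soup.find("h1")
        page_name = title_tag.get_text(strip=True) if title_tag else path
        links.append((page_name, url))

        # Enqueue every internal article link found on the page.
        for a in soup.find_all("a", href=True):
            href = a["href"]
            if href.startswith("/wiki/") and href not in seen:
                seen.add(href)
                queue.append(href)
    return links
```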
### Description of the data
The dataset is a single JSONL file in which each line is a JSON object with the following structure:
#### JSONL File Structure
```jsonl
{
  "id": <id_num>,
  "page": <page_name>,
  "url": <page_url>,
  "summary": <page_summary>,
  "paragraphs": [<paragraph1>, <paragraph2>, ...]
}
```
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `id` | `string` | Unique identifier for the JSON element. |
| `page` | `string` | Name of the Wikipedia page. |
| `url` | `string` | URL of the associated Wikipedia page. |
| `summary` | `string` | Uncleaned page summary; may contain `LaTeX` for mathematical formulas or text in languages other than Hebrew and English. If no summary is available, the value is `None`. |
| `paragraphs` | `list[string]` | List of strings containing cleaned paragraphs from the page. All paragraphs contain only Hebrew and English characters. |
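As a hedged example, the records can be read line by line with the standard `json` module; the file name `hebrew_wikipedia.jsonl` below is an assumption for illustration.

```python
# Minimal sketch of iterating over the dataset (file name is an assumption).
import json

with open("hebrew_wikipedia.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        print(record["id"], record["page"], record["url"])
        if record["summary"] is not None:   # summary may be None
            print(record["summary"][:200])
        for paragraph in record["paragraphs"]:
            pass  # cleaned Hebrew/English paragraphs
```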
## Authors
- [@Fren Yan](https://github.com/YanFrenklakh)
- [@Swisa Doron](https://github.com/fkzx8000)
## License
[MIT](https://choosealicense.com/licenses/mit/)