---
license: mit
language:
- he
size_categories:
- 10K<n<100K
---

## How the dataset was built

- **Data Scraping and Cleaning:** After crawling and link collection, iterate over the links and scrape all paragraphs. Apply cleaning methodologies to ensure data cleanliness. Additionally, collect the page summary for each page, if available.

### Description of the data

The dataset file is a JSONL file structured as follows:

#### JSONL File Structure

```jsonl
{
  "id": <string>,
  "page": <string>,
  "url": <string>,
  "summary": <string>,
  "paragraphs": [<string>, <string>, ...]
}
```

| Parameter    | Type           | Description                             |
| :----------- | :------------- | :-------------------------------------- |
| `id`         | `string`       | Unique identifier for the JSON element. |
| `page`       | `string`       | Name of the Wikipedia page.             |
| `url`        | `string`       | URL of the associated Wikipedia page.   |
| `summary`    | `string`       | Uncleaned page summary, which may contain `LaTeX` for mathematical formulas or text in languages other than Hebrew or English. If no summary is available, the value is `None`. |
| `paragraphs` | `list[string]` | List of cleaned paragraphs from the page. All paragraphs contain only Hebrew and English characters. |

## Authors

- [@Fren Yan](https://github.com/YanFrenklakh)
- [@Swisa Doron](https://github.com/fkzx8000)

## License

[MIT](https://choosealicense.com/licenses/mit/)
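
As a minimal sketch of how the JSONL records described above can be consumed, the file can be streamed line by line with only the Python standard library. The filename `hewiki.jsonl` and the `iter_records` helper are assumptions for illustration, not part of the dataset:

```python
import json


def iter_records(path):
    """Yield one parsed record per non-empty line of a JSONL file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)


# Example usage (hypothetical filename):
# for record in iter_records("hewiki.jsonl"):
#     print(record["page"], len(record["paragraphs"]))
```

Streaming one record at a time keeps memory use flat, which matters for a file in the 10K–100K record range.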