---
license: mit
---

# About Dataset

This dataset is a collection of `JSON` records covering most of the Hebrew Wikipedia.

## Data Collection Methodology

The data was collected using the following strategy:

- **Crawl Hebrew Wikipedia:** Begin by crawling Hebrew Wikipedia to collect all redirect links on each page.
- **Breadth-First Search (BFS):** For each page, apply a BFS-like strategy to ensure that every link is scraped.
- **Link Collection:** Collect the links as tuples in a link file, where each tuple contains a page name and its corresponding link: (<page_name>, <page_link>).
- **Data Scraping and Cleaning:** After crawling and link collection, iterate over the links and scrape all paragraphs, applying cleaning steps to ensure data cleanliness. Additionally, collect the page summary for each page, if available.
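
The BFS-style crawl described above can be sketched as follows. This is a minimal sketch, not the authors' actual crawler: `get_links` is a hypothetical helper standing in for the real page scraper, stubbed here with a small in-memory graph.

```python
from collections import deque

def collect_links(start_page, get_links):
    """BFS over pages: visit each page once and record (page_name, page_link) tuples."""
    visited = {start_page}
    queue = deque([start_page])
    link_tuples = []
    while queue:
        page = queue.popleft()
        for link in get_links(page):
            # Record every (page, link) pair, but enqueue each page only once.
            link_tuples.append((page, link))
            if link not in visited:
                visited.add(link)
                queue.append(link)
    return link_tuples

# Hypothetical stub standing in for a real Wikipedia scraper.
graph = {
    "Israel": ["Jerusalem", "Hebrew"],
    "Jerusalem": ["Israel"],
    "Hebrew": [],
}
tuples = collect_links("Israel", lambda p: graph.get(p, []))
```

In the real pipeline the collected tuples would be written to the link file and later iterated over during the scraping-and-cleaning step.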

### Description of the Data

The dataset file is a JSONL file structured as follows:

#### JSONL File Structure

```jsonl
{
  "id": <id_num>,
  "page": <page_name>,
  "url": <page_url>,
  "summary": <page_summary>,
  "paragraphs": [<paragraph1>, <paragraph2>, ...]
}
```

| Parameter | Type | Description |
| :----------- | :------------- | :------------------------- |
| `id` | `string` | Unique identifier for the JSON element. |
| `page` | `string` | Name of the Wikipedia page. |
| `url` | `string` | URL of the associated Wikipedia page. |
| `summary` | `string` | Uncleaned page summary; may contain `LaTeX` for mathematical formulas, or text in languages other than Hebrew or English. `None` if no summary is available. |
| `paragraphs` | `list[string]` | List of cleaned paragraphs from the page. All paragraphs contain only Hebrew and English characters. |
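
Records with this structure can be read line by line with the standard library. A minimal sketch (the filename below is a placeholder; use the actual dataset file):

```python
import json

def load_records(path):
    """Yield one dict per non-empty line of a JSONL dataset file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Hypothetical usage; substitute the real file name:
# for record in load_records("hebrew_wikipedia.jsonl"):
#     print(record["page"], len(record["paragraphs"]))
```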

## Authors

- [@Fren Yan](https://github.com/YanFrenklakh)
- [@Swisa Doron](https://github.com/fkzx8000)

## License

[MIT](https://choosealicense.com/licenses/mit/)