Chiquitin committed on
Commit d12f2e3 · 1 Parent(s): 46433bc

Upload src + bin with data visualizer (visualizer.py)

README.md CHANGED
@@ -64,3 +64,96 @@ git clone https://huggingface.co/datasets/Alverciito/wikipedia_articles_es
64
  from datasets import load_from_disk
65
 
66
  ds = load_from_disk("wikipedia-es-A000") # or A001 / A002
67
+ ```
68
+
69
+ ## 🔧 Dataset Construction Pipeline
70
+
71
+ This dataset was generated in two main stages:
72
+
73
+ ### 1️⃣ Wikipedia Article Extraction (ZIM-based)
74
+
75
+ Raw articles are extracted directly from an offline **Wikipedia ZIM file** using a custom extractor.
76
+ Download the Spanish Wikipedia ZIM file [here](https://download.kiwix.org/zim/wikipedia_es_all_maxi.zim).
77
+
78
+
79
+ The extraction process (sketched in code at the end of this section):
80
+ - Randomly samples Wikipedia articles by internal index
81
+ - Parses HTML content using **BeautifulSoup**
82
+ - Extracts clean paragraph-level text
83
+ - Optionally follows internal Wikipedia links (relation recursion)
84
+ - Assigns unique document IDs
85
+
86
+ Each extracted article contains:
87
+ - `title`
88
+ - `text` (list of paragraphs)
89
+ - internal references (used only during extraction)
90
+
91
+ This approach ensures:
92
+ - No dependency on the live Wikipedia API
93
+ - Full reproducibility from a fixed ZIM snapshot
94
+ - High-throughput extraction suitable for large-scale datasets
95
+
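+ A minimal sketch of this stage, mirroring what `bin/extractor.py` does (it assumes the `src` package from this repository is importable and that the ZIM file above has been downloaded locally):
+
+ ```python
+ from datasets import Dataset
+ from src import WikipediaExtractor
+
+ # Path to the offline Wikipedia ZIM snapshot (see the download link above).
+ zim_path = "wikipedia_es_all_maxi.zim"
+
+ extractor = WikipediaExtractor(zim_path, encoding="utf-8")
+
+ # Randomly sample articles by internal index; each trial may yield one article
+ # (plus related articles when relation_recursion > 0).
+ articles, n_extracted = extractor.get_database(
+     relation_recursion=0,  # how many internal links to follow per sample
+     n_trials=1_000,        # extraction budget (bin/extractor.py uses 30_000)
+     from_cnt=0,            # starting value for the document-ID counter
+ )
+
+ raw_ds = Dataset.from_list(articles)  # columns: title, text, refs, id
+ ```
+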
96
+ ---
97
+
98
+ ### 2️⃣ Segmentation into Multi-Article Samples
99
+
100
+ Extracted articles are converted into **segmented documents** using a controlled aggregation strategy.
101
+
102
+ Segmentation rules (a usage sketch follows at the end of this section):
103
+ - Each sample contains **1 to 10 articles**
104
+ - Articles are concatenated until a maximum paragraph limit is reached
105
+ - Paragraphs are preserved as coherent textual units
106
+ - Metadata is accumulated per segment
107
+
108
+ Resulting metadata per sample:
109
+ - `paragraphs`: total number of paragraphs
110
+ - `words`: total word count
111
+ - `articles`: number of Wikipedia articles combined
112
+ - `title`: list of article titles
113
+ - `text`: list of paragraph blocks (order preserved)
114
+
115
+ This structure is designed for:
116
+ - document-level classification
117
+ - segmentation boundary detection
118
+ - long-context language modeling
119
+ - sentence and paragraph similarity tasks
120
+
121
+ ---
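+ A minimal sketch of this stage (it mirrors the call in `bin/extractor.py`, which uses a cap of 50 paragraphs per sample):
+
+ ```python
+ from src import wiki_to_seg
+
+ # raw_ds is the article-level dataset produced by the extraction stage above.
+ # Each output sample aggregates 1-10 articles, capped at max_paragraphs in total.
+ segmented_ds = wiki_to_seg(raw_ds, max_paragraphs=50)
+
+ segmented_ds.save_to_disk("./wikipedia-es-A000")  # or any other identifier
+ ```
+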
122
+
123
+ ## 📊 Dataset Statistics
124
+
125
+ Each sample provides both **local** and **global** structural information:
126
+
127
+ - Document length in paragraphs and words
128
+ - Number of source articles per segment
129
+ - Explicit title grouping for multi-article samples
130
+
131
+ This enables models to reason about **structure and scale**, not just raw text.
132
+
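+ For a quick look at these fields without the visualizer, something along these lines works (a sketch; it only relies on the metadata columns listed above):
+
+ ```python
+ import numpy as np
+ from datasets import load_from_disk
+
+ ds = load_from_disk("wikipedia-es-A000")  # or A001 / A002
+
+ for field in ("paragraphs", "words", "articles"):
+     values = np.array(ds[field])
+     print(f"{field}: mean={values.mean():.1f}, min={values.min()}, max={values.max()}")
+ ```
+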
133
+ ---
134
+
135
+ ## 🧪 Visualization & Exploration
136
+
137
+ A **Gradio-based dataset explorer** is included for interactive inspection.
138
+
139
+ Features:
140
+ - Browse samples by index
141
+ - View full segmented text with paragraph numbering
142
+ - Inspect per-sample statistics
143
+ - Visualize global distributions:
144
+ - Paragraph counts
145
+ - Word counts
146
+ - Articles per segment
147
+
148
+ Typical use cases:
149
+ - Dataset sanity checking
150
+ - Length distribution analysis
151
+ - Manual validation of segmentation quality
152
+
153
+ ---
154
+
155
+ ## ▶️ Running the Data Visualizer
156
+
157
+ ```bash
158
+ python bin/visualizer.py
159
+ ```
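+
+ The script asks for the dataset path on the command line (for example `./wikipedia-es-A000`) and then launches a local Gradio interface where samples can be browsed by index and the global distributions inspected.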
bin/extractor.py ADDED
@@ -0,0 +1,139 @@
1
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
2
+ # #
3
+ # This file was created by: Alberto Palomo Alonso #
4
+ # Universidad de Alcalá - Escuela Politécnica Superior #
5
+ # #
6
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
7
+ """
8
+ Wikipedia ZIM extraction and segmentation script.
9
+
10
+ Main workflow:
11
+ 1) Ask the user for a ZIM path and an identifier.
12
+ 2) Extract articles using `WikipediaExtractor`.
13
+ 3) Convert the extracted list to a Hugging Face `datasets.Dataset`.
14
+ 4) Post-process the dataset with `wiki_to_seg` (segmentation).
15
+ 5) Save the resulting dataset to disk and reload it.
16
+
17
+ Notes:
18
+ - This script assumes `src.WikipediaExtractor` and `src.wiki_to_seg` are available.
19
+ - Output is saved under `./wikipedia-es-<identifier>`.
20
+ """
21
+
22
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
23
+ # IMPORT STATEMENTS #
24
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
25
+ import logging
26
+ import datasets
27
+ from src import WikipediaExtractor, wiki_to_seg
28
+
29
+
30
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
31
+ # FUNCTION DEF #
32
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
33
+ def setup_logger() -> logging.Logger:
34
+ """
35
+ Set up the logger for debugging.
36
+
37
+ Creates a module-level logger configured at DEBUG level with a StreamHandler.
38
+
39
+ Returns:
40
+ logging.Logger: Configured logger instance.
41
+
42
+ Notes:
43
+ If this function is called multiple times in the same process, it may attach
44
+ multiple handlers to the same logger. If that is undesirable in your runtime,
45
+ consider checking `logger.handlers` before adding a new handler.
46
+ """
47
+ logger = logging.getLogger(__name__)
48
+ logger.setLevel(logging.DEBUG)
49
+
50
+ handler = logging.StreamHandler()
51
+ handler.setLevel(logging.DEBUG)
52
+
53
+ formatter = logging.Formatter(
54
+ '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
55
+ )
56
+ handler.setFormatter(formatter)
57
+
58
+ logger.addHandler(handler)
59
+ logger.debug('Debugging WikipediaExtractor')
60
+ return logger
61
+
62
+
63
+ def extract(
64
+ zim_path: str,
65
+ relation_recursion: int = 0,
66
+ n_trials: int = 30_000
67
+ ) -> datasets.Dataset:
68
+ """
69
+ Extract Wikipedia articles from a ZIM file and return a Hugging Face Dataset.
70
+
71
+ Args:
72
+ zim_path (str):
73
+ Path to the Wikipedia ZIM file.
74
+ relation_recursion (int, optional):
75
+ Recursion depth for relation/link exploration (as implemented by
76
+ `WikipediaExtractor`). Defaults to 0.
77
+ n_trials (int, optional):
78
+ Trial/iteration budget for extraction (as implemented by the extractor).
79
+ Defaults to 30_000.
80
+
81
+ Returns:
82
+ datasets.Dataset:
83
+ A Hugging Face Dataset built from the extracted articles list.
84
+
85
+ Raises:
86
+ Any exception raised by `WikipediaExtractor` or `datasets.Dataset.from_list`.
87
+ """
88
+ extractor = WikipediaExtractor(
89
+ zim_path,
90
+ encoding='utf-8',
91
+ logger=setup_logger()
92
+ )
93
+
94
+ articles, _ = extractor.get_database(
95
+ relation_recursion=relation_recursion,
96
+ n_trials=n_trials,
97
+ from_cnt=0
98
+ )
99
+
100
+ hf_ds = datasets.Dataset.from_list(articles)
101
+ return hf_ds
102
+
103
+
104
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
105
+ # MAIN #
106
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
107
+ if __name__ == '__main__':
108
+ """
109
+ Script entry point.
110
+
111
+ Prompts for user inputs, runs extraction + segmentation, saves the dataset to disk,
112
+ and reloads it at the end.
113
+
114
+ Inputs:
115
+ - Wikipedia (zim file) path
116
+ - Wikipedia identifier (e.g., B000)
117
+
118
+ Side effects:
119
+ - Creates `./wikipedia-es-<identifier>` containing the saved dataset.
120
+ - Reloads the dataset from disk into the `dataset` variable.
121
+ """
122
+ # Ask user for input data:
123
+ z_path = input("Wikipedia (zim file) path: ")
124
+ identifier = input("Wikipedia (Wikipedia identifier, e.g. B000): ")
125
+
126
+ # Pathing:
127
+ path_to_disk = rf'./wikipedia-es-{identifier}'
128
+ # Extract:
129
+ hf_pre_dataset = extract(z_path)
130
+ # Post-processing:
131
+ segmentation_dataset = wiki_to_seg(hf_pre_dataset, 50)
132
+ # Save the dataset:
133
+ segmentation_dataset.save_to_disk(path_to_disk)
134
+ # Load the dataset:
135
+ dataset = datasets.load_from_disk(path_to_disk)
136
+
137
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
138
+ # END OF FILE #
139
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
bin/visualizer.py ADDED
@@ -0,0 +1,207 @@
1
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
2
+ # #
3
+ # This file was created by: Alberto Palomo Alonso #
4
+ # Universidad de Alcalá - Escuela Politécnica Superior #
5
+ # #
6
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
7
+ """
8
+ Gradio-based explorer for inspecting a segmented Wikipedia dataset.
9
+
10
+ Main features:
11
+ - Load a Hugging Face dataset from disk.
12
+ - Compute global statistics for paragraphs, words, and articles.
13
+ - Precompute histograms for dataset-level distributions.
14
+ - Provide an interactive Gradio UI to browse individual samples and
15
+ visualize global statistics.
16
+
17
+ Expected dataset fields:
18
+ - id
19
+ - text (list of paragraphs/segments)
20
+ - paragraphs
21
+ - words
22
+ - articles
23
+ - title
24
+ """
25
+
26
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
27
+ # IMPORT STATEMENTS #
28
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
29
+ import gradio as gr
30
+ import matplotlib.pyplot as plt
31
+ import numpy as np
32
+ from datasets import load_from_disk
33
+
34
+
35
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
36
+ # STATISTICS UTILITIES #
37
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
38
+ def compute_stats(arr: np.ndarray) -> dict:
39
+ """
40
+ Compute basic descriptive statistics for a numeric array.
41
+
42
+ Args:
43
+ arr (np.ndarray):
44
+ Input array of numeric values.
45
+
46
+ Returns:
47
+ dict:
48
+ Dictionary containing mean, median, standard deviation (sample),
49
+ minimum, and maximum values.
50
+ """
51
+ return {
52
+ 'mean': float(np.mean(arr)),
53
+ 'median': float(np.median(arr)),
54
+ 'std': float(np.std(arr, ddof=1)),
55
+ 'min': int(np.min(arr)),
56
+ 'max': int(np.max(arr))
57
+ }
58
+
59
+
60
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
61
+ # PLOTTING UTILITIES #
62
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
63
+ def make_histogram(arr: np.ndarray, title: str):
64
+ """
65
+ Create a histogram plot for a numeric array.
66
+
67
+ Args:
68
+ arr (np.ndarray):
69
+ Input array of numeric values.
70
+ title (str):
71
+ Title label for the histogram (used in title and x-axis).
72
+
73
+ Returns:
74
+ matplotlib.figure.Figure:
75
+ Matplotlib figure object containing the histogram.
76
+ """
77
+ fig, ax = plt.subplots()
78
+ ax.hist(arr, bins=30)
79
+ ax.set_title(f"Distribution of {title}")
80
+ ax.set_xlabel(title)
81
+ ax.set_ylabel("Count")
82
+ fig.tight_layout()
83
+ return fig
84
+
85
+
86
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
87
+ # MAIN #
88
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
89
+ if __name__ == '__main__':
90
+ """
91
+ Script entry point.
92
+
93
+ Loads a dataset from disk, computes global statistics and histograms,
94
+ and launches a Gradio UI to interactively explore dataset samples.
95
+ """
96
+ # Load dataset
97
+ dataset_path = input('Enter dataset path: ')
98
+ ds = load_from_disk(dataset_path)
99
+
100
+ # Extract numeric arrays
101
+ paragraphs_arr = np.array(ds['paragraphs'], dtype=int)
102
+ words_arr = np.array(ds['words'], dtype=int)
103
+ articles_arr = np.array(ds['articles'], dtype=int)
104
+
105
+ # Compute global statistics
106
+ stats = {
107
+ 'paragraphs': compute_stats(paragraphs_arr),
108
+ 'words': compute_stats(words_arr),
109
+ 'articles': compute_stats(articles_arr)
110
+ }
111
+
112
+ # Precompute histogram figures
113
+ par_plot_obj = make_histogram(paragraphs_arr, 'Paragraphs')
114
+ words_plot_obj = make_histogram(words_arr, 'Words')
115
+ articles_plot_obj = make_histogram(articles_arr, 'Articles')
116
+
117
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
118
+ # GRADIO CALLBACK #
119
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
120
+ def show(idx: int):
121
+ """
122
+ Retrieve and format a single dataset sample for display.
123
+
124
+ Args:
125
+ idx (int):
126
+ Index of the document in the dataset.
127
+
128
+ Returns:
129
+ tuple[str, str]:
130
+ - Formatted sample text and metadata.
131
+ - Formatted global statistics and current sample information.
132
+ """
133
+ sample = ds[int(idx)]
134
+ texto = "\n\n".join(
135
+ [f"{i}: {p}" for i, p in enumerate(sample["text"])]
136
+ )
137
+
138
+ sample_info = (
139
+ f"Doc ID: {sample['id']}"
140
+ f"\n\n{texto}"
141
+ )
142
+
143
+ stats_text = (
144
+ "Global Dataset Statistics:\n"
145
+ f"Paragraphs \t- mean: {stats['paragraphs']['mean']:.2f}, "
146
+ f"std: {stats['paragraphs']['std']:.2f}, "
147
+ f"min: {stats['paragraphs']['min']}, "
148
+ f"max: {stats['paragraphs']['max']}\n"
149
+ f"Words \t- mean: {stats['words']['mean']:.2f}, "
150
+ f"std: {stats['words']['std']:.2f}, "
151
+ f"min: {stats['words']['min']}, "
152
+ f"max: {stats['words']['max']}\n"
153
+ f"Articles \t- mean: {stats['articles']['mean']:.2f}, "
154
+ f"std: {stats['articles']['std']:.2f}, "
155
+ f"min: {stats['articles']['min']}, "
156
+ f"max: {stats['articles']['max']}\n"
157
+ f"\nCurrent Sample Information:\n"
158
+ f"\t- Doc ID: {sample['id']}\n"
159
+ f"\t- Paragraphs: {sample['paragraphs']}\n"
160
+ f"\t- Words: {sample['words']}\n"
161
+ f"\t- Articles: {sample['articles']}\n"
162
+ f"\t- Titles: {sample['title']}"
163
+ )
164
+ return sample_info, stats_text
165
+
166
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
167
+ # GRADIO UI #
168
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
169
+ with gr.Blocks(title="Wikipedia Extractor Explorer") as demo:
170
+ gr.Markdown("## Wikipedia Segmentation Explorer")
171
+
172
+ idx_slider = gr.Slider(
173
+ 0, len(ds) - 1, step=1, label="Document Index"
174
+ )
175
+
176
+ with gr.Row():
177
+ with gr.Column(scale=1):
178
+ sample_output = gr.Textbox(
179
+ label="Sample Info", lines=20
180
+ )
181
+ stats_output = gr.Textbox(
182
+ label="Global Statistics", lines=6
183
+ )
184
+ with gr.Column(scale=1):
185
+ gr.Plot(
186
+ label="Paragraphs Histogram",
187
+ value=par_plot_obj
188
+ )
189
+ gr.Plot(
190
+ label="Words Histogram",
191
+ value=words_plot_obj
192
+ )
193
+ gr.Plot(
194
+ label="Articles Histogram",
195
+ value=articles_plot_obj
196
+ )
197
+
198
+ idx_slider.change(
199
+ fn=show,
200
+ inputs=idx_slider,
201
+ outputs=[sample_output, stats_output]
202
+ )
203
+
204
+ demo.launch()
205
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
206
+ # END OF FILE #
207
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
src/__init__.py ADDED
@@ -0,0 +1,11 @@
1
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
2
+ # #
3
+ # This file was created by: Alberto Palomo Alonso #
4
+ # Universidad de Alcalá - Escuela Politécnica Superior #
5
+ # #
6
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
7
+ from .zimclass import WikipediaExtractor
8
+ from .wikipedia_to_segmentation import wiki_to_seg
9
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
10
+ # END OF FILE #
11
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
src/wikipedia_to_segmentation.py ADDED
@@ -0,0 +1,67 @@
1
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
2
+ # #
3
+ # This file was created by: Alberto Palomo Alonso #
4
+ # Universidad de Alcalá - Escuela Politécnica Superior #
5
+ # #
6
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
7
+ # Import statements:
8
+ import random
9
+ import datasets
10
+
11
+
12
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
13
+ # FUNCTION DEF #
14
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
15
+ def wiki_to_seg(dataset: datasets.Dataset, max_paragraphs: int) -> datasets.Dataset:
16
+ """
17
+ Converts a Wikipedia article dataset into a segmentation dataset by aggregating articles into multi-article samples.
18
+ :param dataset: A dataset with the structure {'title': str, 'text': list of str, 'id': str, 'paragraphs': int}.
19
+ :param max_paragraphs: Maximum number of paragraphs per segmented sample.
20
+ :return: A dataset of segmented samples with fields 'title' (list of titles), 'text' (list of paragraph blocks), 'id', 'paragraphs', 'words' and 'articles'.
21
+ """
22
+ # Initialize the dataset:
23
+ lines = list()
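+ # Each output sample aggregates a random number of articles (1 to 10),
+ # subject to the overall max_paragraphs cap enforced below.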
24
+ n_articles = random.randint(1, 10)
25
+ new_element = {'title': list(), 'text': list(), 'id': 'Unknown', 'paragraphs': 0, 'articles': 0, 'words': 0}
26
+
27
+ # Typing:
28
+ element: dict
29
+ idx: int
30
+ count: int = 0
31
+
32
+ # Iterate over the original dataset:
33
+ for idx, element in enumerate(dataset):
34
+ # Get the text:
35
+ paragraphs = element['text']
36
+ element_paragraphs = len(paragraphs)
37
+ if element_paragraphs + new_element['paragraphs'] > max_paragraphs:
38
+ # In case of exceeding max_paragraphs, we need to split the paragraphs:
39
+ paragraphs = paragraphs[:max_paragraphs - new_element['paragraphs']]
40
+
41
+ if paragraphs:
42
+ # Join the paragraphs
43
+ article_text = '\n'.join(paragraphs) + '\n'
44
+
45
+ # Add the paragraphs to the new element:
46
+ new_element['text'].append(article_text)
47
+ new_element['paragraphs'] += len(paragraphs)
48
+ new_element['title'].append(element['title'])
49
+ new_element['id'] = f'S0-{count:06}' if new_element['id'] == 'Unknown' else new_element['id']
50
+ new_element['words'] += len(article_text.split())
51
+
52
+ # If we reach the end of the generation:
53
+ new_element['articles'] += 1
54
+
55
+ if (new_element['articles'] == n_articles or idx == len(dataset) - 1
56
+ or max_paragraphs <= new_element['paragraphs']):
57
+ n_articles = random.randint(1, 10)
58
+ lines.append(new_element)
59
+ new_element = {'title': list(), 'text': list(), 'id': 'Unknown', 'paragraphs': 0, 'articles': 0, 'words': 0}
60
+ count += 1
61
+
62
+ # Convert to the dataset:
63
+ new_dataset = datasets.Dataset.from_list(lines)
64
+ return new_dataset
65
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
66
+ # END OF FILE #
67
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
src/zimclass.py ADDED
@@ -0,0 +1,190 @@
1
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
2
+ # #
3
+ # This file was created by: Alberto Palomo Alonso #
4
+ # Universidad de Alcalá - Escuela Politécnica Superior #
5
+ # #
6
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
7
+ # Import statements:
8
+ import logging
9
+ import zimply
10
+ import os
11
+ import bs4
12
+ import random
13
+ import re
14
+ import tqdm
15
+
16
+
17
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
18
+ # MAIN CLASS #
19
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
20
+ class WikipediaExtractor:
21
+ def __init__(self,
22
+ wikipedia_path: str,
23
+ encoding: str = 'utf-8',
24
+ find: tuple = ('p',),
25
+ logger: logging.Logger = None,
26
+ seed: int = None,
27
+ ):
28
+ """
29
+ :param wikipedia_path: Path to the Wikipedia ZIM file.
30
+ :param encoding: Encoding of the ZIM file. Default is 'utf-8'.
31
+ :param find: The elements of the article to find, refer to BS4.
32
+ :param logger: Logger object for logging. Default is None.
33
+ :param seed: Seed for random number generator. Default is None.
34
+ """
35
+ # Error handlers:
36
+ if not os.path.exists(wikipedia_path):
37
+ raise FileNotFoundError(f"File {wikipedia_path} does not exist.")
38
+
39
+ self.zim = zimply.zimply.ZIMFile(wikipedia_path, encoding=encoding)
40
+ self.logger = logger or logging.getLogger(__name__)
41
+ self.find = find
42
+ self.magic_min = 78
43
+ self.magic_max = 4_113_686
44
+
45
+ # Random seed:
46
+ random.seed(seed)
47
+
48
+ # Avoid repetition:
49
+ self.stacked_refs = {'Wikidata', 'Wikimedia_Commons', 'ISSN'}
50
+ self.logger.info(f'WikipediaExtractor initialized.')
51
+
52
+ def get_database(self, relation_recursion: int = 0, n_trials: int = 100_000, from_cnt: int = 0):
53
+ """
54
+ Gets the database of articles.
55
+ :param relation_recursion: Relation recursion. Default is 0.
56
+ :param n_trials: Number of trials to get articles. Default is 100_000.
57
+ :param from_cnt: Count of articles. Default is 0.
58
+ :return: A list of related (or not) articles and the successful count.
59
+ """
60
+ # Recursion level 0:
61
+ articles = list()
62
+ cnt = from_cnt
63
+
64
+ # Loop through the number of trials:
65
+ for _ in tqdm.tqdm(range(n_trials), desc='Article extraction', unit='article'):
66
+ article = self.get(relation_recursion=relation_recursion)
67
+ # Check if the article is valid:
68
+ if article is not None:
69
+ for entry in article:
70
+ if entry is not None:
71
+ cnt += 1
72
+ entry['id'] = f'L0-{cnt:06}'
73
+ articles.append(entry)
74
+
75
+ return articles, cnt
76
+
77
+ def get(self, relation_recursion: int = 0, generation_policy: str = 'kill'):
78
+ """
79
+ Gets a random article from wikipedia. Gets a random related article per relation_recursion given.
80
+ :param relation_recursion: Relation recursion. Default is 0.
81
+ :param generation_policy: Tells continuing if there is no relationship recursion. Default is 'kill':
82
+ 'kill': Stops generation and returns None
83
+ 'warn': Logs a warning and returns the current generation.
84
+ 'ignore': Ignores the article and returns the current generation.
85
+ :return: A list of Articles.
86
+ """
87
+ articles = list()
88
+ # Random number between min and max:
89
+ random_index = random.randint(self.magic_min, self.magic_max)
90
+ articles.append(self.__get_article_by_index(random_index))
91
+ # Get recursion:
92
+ for recursion in range(relation_recursion):
93
+ # Gather last refs:
94
+ last_refs = articles[-1]['refs']
95
+ # Check if there are valid references:
96
+ if last_refs:
97
+ # Get the random related article:
98
+ random_choice = random.choice(last_refs)
99
+ articles.append(self.__get_article_by_url(random_choice))
100
+ elif generation_policy == 'kill':
101
+ self.logger.error(f'Generation at iteration {recursion + 1} stopped due to lack of references.')
102
+ return None
103
+ elif generation_policy == 'warn':
104
+ self.logger.warning(f'Generation at iteration {recursion + 1} stopped due to lack of references.')
105
+ return articles
106
+ elif generation_policy == 'ignore':
107
+ return articles
108
+ # Return the articles:
109
+ return articles
110
+
111
+ def __get_article_by_index(self, index: int, astype: type = dict):
112
+ """
113
+ Gets an article by its index.
114
+ :param index: Index of the article.
115
+ :param astype: Type of the return article. Dictionary or article.
116
+ :return:
117
+ """
118
+ if index < self.magic_min or index > self.magic_max:
119
+ raise IndexError(f"Index {index} is out of range [{self.magic_min}, {self.magic_max}].")
120
+ # Read the entry:
121
+ dict_entry = self.zim.read_directory_entry_by_index(index)
122
+ # Get the article:
123
+ return self.__get_article_by_url(dict_entry['url'], astype=astype)
124
+
125
+ def __get_article_by_url(self, url: str, astype: type = dict):
126
+ """
127
+ Get article by url
128
+ :param url: The url of the article.
129
+ :param astype: Type of the return article. Dictionary or article.
130
+ :return:
131
+ """
132
+ # Gather article:
133
+ article = self.zim.get_article_by_url('A', url)
134
+ if article is None:
135
+ logging.error(f'Article {url} not found, skipping...')
136
+ return None
137
+ # Avoid loops and using the same article twice from references:
138
+ self.stacked_refs.add(url)
139
+ # Convert to format:
140
+ return self.__article_to_dict(article, self.stacked_refs, self.find) if astype == dict else article
141
+
142
+ @staticmethod
143
+ def __article_to_dict(article: zimply.zimply.Article,
144
+ stacked_refs: set,
145
+ find: tuple = ('p',)) -> dict:
146
+ """
147
+ Converts an article into a dictionary.
148
+ :param article: Article to convert.
149
+ :param stacked_refs: Stacked references of the article to avoid.
150
+ :param find: Elements of the article to find, refer to BS4.
151
+ :return: A dictionary.
152
+ """
153
+ # Extract HTML:
154
+ html = article.data.decode('utf-8')
155
+ soup = bs4.BeautifulSoup(html, 'html.parser')
156
+
157
+ # Title extraction:
158
+ page_title = soup.find('title').text.strip()
159
+
160
+ # Paragraphs extraction:
161
+ paragraphs = soup.find_all(find)
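+ # Strip citation markers such as "[1]" and collapse runs of whitespace in each paragraph.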
162
+ text = [re.sub(r'\s+', ' ', re.sub(r'\[\d+]', '', p.get_text())).strip()
163
+ for p in paragraphs if p.get_text(strip=True)]
164
+
165
+
166
+ # Extract internal references:
167
+ internal_refs = list()
168
+ for a in soup.find_all('a', href=True):
169
+ href = a['href']
170
+ title = a.get('title')
171
+ if (
172
+ href.startswith('/') is False and  # Skip absolute-path links
173
+ '://' not in href and  # Skip external links (full URLs)
174
+ title and len(title) > 1 and  # Has a readable title
175
+ '%' not in href and  # Skip percent-encoded links
176
+ '#' not in href and  # Skip fragment/anchor links
177
+ '.svg' not in href and  # Skip SVG/media links
178
+ href not in stacked_refs # Avoid loops
179
+ ):
180
+ internal_refs.append(href)
181
+
182
+ # Return as dictionary:
183
+ return {
184
+ 'title': page_title,
185
+ 'text': text,
186
+ 'refs': internal_refs
187
+ }
188
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #
189
+ # END OF FILE #
190
+ # - x - x - x - x - x - x - x - x - x - x - x - x - x - x - #