"""
Create HuggingFace dataset from scraped news articles.
"""

import json
from pathlib import Path
from datasets import Dataset, DatasetDict

DATA_DIR = Path(__file__).parent.parent / "data"
OUTPUT_DIR = Path(__file__).parent.parent / "dataset"


def load_articles(data_dir: Path) -> list[dict]:
    """Load articles from JSON files in data directory."""
    articles = []
    for json_file in data_dir.glob("**/*.json"):
        with open(json_file, "r", encoding="utf-8") as f:
            data = json.load(f)
            if isinstance(data, list):
                articles.extend(data)
            else:
                articles.append(data)
    return articles


def main():
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)

    # Load all articles
    all_articles = load_articles(DATA_DIR)
    print(f"Loaded {len(all_articles)} articles total")

    # Create dataset with required fields
    dataset_records = []
    for article in all_articles:
        record = {
            "source": article.get("source", ""),
            "url": article.get("url", ""),
            "category": article.get("category", ""),
            "content": article.get("content", ""),
            "title": article.get("title", ""),
            "description": article.get("description", ""),
            "publish_date": article.get("publish_date", ""),
        }
        dataset_records.append(record)

    # Create HuggingFace dataset
    dataset = Dataset.from_list(dataset_records)

    # Split into train/test (90/10)
    split_dataset = dataset.train_test_split(test_size=0.1, seed=42)

    dataset_dict = DatasetDict({
        "train": split_dataset["train"],
        "test": split_dataset["test"]
    })

    # Save dataset
    dataset_dict.save_to_disk(OUTPUT_DIR / "UVN-1")

    # Print statistics
    print("\n=== Dataset Statistics ===")
    print(f"Train samples: {len(dataset_dict['train'])}")
    print(f"Test samples: {len(dataset_dict['test'])}")

    # Category distribution
    print("\n=== Category Distribution ===")
    categories = {}
    for record in dataset_records:
        cat = record["category"]
        categories[cat] = categories.get(cat, 0) + 1

    for cat, count in sorted(categories.items(), key=lambda x: -x[1]):
        print(f"  {cat}: {count}")

    # Source distribution
    print("\n=== Source Distribution ===")
    sources = {}
    for record in dataset_records:
        src = record["source"]
        sources[src] = sources.get(src, 0) + 1

    for src, count in sorted(sources.items(), key=lambda x: -x[1]):
        print(f"  {src}: {count}")

    print(f"\nDataset saved to: {OUTPUT_DIR / 'UVN-1'}")


if __name__ == "__main__":
    main()