---
license: mit
---

# ⭐ README — ZERONEX SCIENTIFIC CORPUS (1M CLEAN JSON)

*(Made by Zeronex — 2025 Edition)*

## 🚀 Overview

This release contains one of the cleanest scientific corpora available. No noise. No XML leftovers. No broken paragraphs. Every file is fully normalized and ready for tokenization, embedding, and AI training.

All files are professionally structured JSON, signature-stamped, and extracted from scientific metadata with gold-standard cleaning rules.

This drop includes:

## 1️⃣ The MASSIVE 1,000,000-Sample Corpus

A fully cleaned scientific dataset containing:

- 1,000,000 normalized JSON files
- Full abstracts
- Metadata
- Authors
- Categories
- Clean paragraphs
- Token estimates
- Structured fields
- Consistent formatting
- Unified naming
- Signature: `"Made by Zeronex"`

Every file follows the same perfect schema, with no exceptions.
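The field list above implies a per-file schema along these lines. A minimal validation sketch in Python; the exact key names (`title`, `abstract`, `token_estimate`, etc.) are assumptions inferred from the list, not taken from the dataset itself:

```python
# Hypothetical per-file schema inferred from the field list above;
# the actual key names in the corpus may differ.
REQUIRED_KEYS = {
    "title", "abstract", "authors", "categories",
    "paragraphs", "token_estimate", "signature",
}

def validate_record(record: dict) -> bool:
    """Check that a record carries every expected field and the signature stamp."""
    return (
        REQUIRED_KEYS <= record.keys()
        and record["signature"] == "Made by Zeronex"
    )

# Example record shaped like the assumed schema.
sample = {
    "title": "An Example Paper",
    "abstract": "We study an illustrative problem.",
    "authors": ["A. Author", "B. Author"],
    "categories": ["astro-ph.GA"],
    "paragraphs": ["First clean paragraph."],
    "token_estimate": 128,
    "signature": "Made by Zeronex",
}

print(validate_record(sample))  # True for a well-formed record
```

Running a check like this over a sample of files is a quick way to confirm the "same schema, no exceptions" claim on your own machine.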

## 2️⃣ Sorted Category Files

A second dataset containing category-specific filtered corpora, automatically split by top-level scientific fields:

Examples:

- `physics.json`
- `quant-ph.json`
- `hep-th.json`
- `astro-ph.json`
- `q-fin.json`
- etc.

Each file contains only clean, validated entries matching the scientific domain.

This makes it extremely easy to build:

- specialized LLMs
- domain-specific embeddings
- fine-tuned models
- retrieval systems
- scientific RAG pipelines
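The category split can be reproduced (or extended) from the main corpus by grouping records on the top-level arXiv field, i.e. the part of a category tag before the dot. A sketch over in-memory records; the `categories` key and taking its first entry as the primary category are assumptions:

```python
from collections import defaultdict

def top_level(category: str) -> str:
    """Map an arXiv category tag to its top-level field, e.g. 'astro-ph.GA' -> 'astro-ph'."""
    return category.split(".", 1)[0]

def split_by_field(records):
    """Group records by the top-level field of their (assumed) primary category."""
    buckets = defaultdict(list)
    for rec in records:
        buckets[top_level(rec["categories"][0])].append(rec)
    return dict(buckets)

records = [
    {"title": "Galaxy survey", "categories": ["astro-ph.GA"]},
    {"title": "Qubit gates", "categories": ["quant-ph"]},
    {"title": "Dark matter", "categories": ["astro-ph.CO"]},
]
buckets = split_by_field(records)
print(sorted(buckets))           # ['astro-ph', 'quant-ph']
print(len(buckets["astro-ph"]))  # 2
```

Note that tags without a dot (`quant-ph`, `hep-th`) pass through unchanged, which matches the example filenames above.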

## 3️⃣ Train-Ready Splits

Included:

- `train.json`
- `valid.json`
- `test.json`

Balanced, deduplicated, and free of cross-split contamination. Ready for:

- supervised fine-tuning
- continued pretraining
- embedding model training
- scientific QA systems
- autoregressive language modeling

These splits are engineered to be plug-and-play for any ML framework (PyTorch, HF Transformers, JAX, etc.).
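A framework-agnostic loader is only a few lines. The sketch below assumes each split file holds a single JSON array of record objects (rather than JSON Lines), which is an assumption about the file layout; it builds tiny stand-in files so it runs anywhere. Hugging Face `datasets` users can instead point `load_dataset("json", data_files=...)` at the real files:

```python
import json
import tempfile
from pathlib import Path

def load_split(path):
    """Load one split file, assumed to be a JSON array of record objects."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

# Self-contained demo: write tiny stand-in split files, then load them.
root = Path(tempfile.mkdtemp())
for name, n in [("train.json", 3), ("valid.json", 1), ("test.json", 1)]:
    (root / name).write_text(
        json.dumps([{"abstract": f"{name} sample {i}"} for i in range(n)]),
        encoding="utf-8",
    )

splits = {name: load_split(root / name)
          for name in ("train.json", "valid.json", "test.json")}
print({k: len(v) for k, v in splits.items()})
# {'train.json': 3, 'valid.json': 1, 'test.json': 1}
```

If the real files turn out to be JSON Lines, swap `json.load(f)` for a per-line `json.loads` loop; everything else stays the same.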

## 🔥 Data Quality

This corpus is designed with an industrial-grade cleaning standard, featuring:

- ✔ No broken text
- ✔ No incomplete sentences
- ✔ No XML/HTML noise
- ✔ No parsing artifacts
- ✔ Consistent metadata
- ✔ Robust JSON schema
- ✔ Perfect normalization
- ✔ Clean paragraphs
- ✔ Math extraction support
- ✔ Subject classification
- ✔ Zero mixing across domains
- ✔ Seamless loading at scale
- ✔ Embedded signature: `"Made by Zeronex"`

This is not a raw dump. This is not a scraped mess. This is a professional-grade scientific dataset, built to train real models.
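Claims like "no XML/HTML noise" and "no incomplete sentences" are easy to spot-check yourself. A rough sketch of two such checks over abstract strings; these heuristics are illustrative, not the dataset's own cleaning pipeline:

```python
import re

# Matches leftover markup such as <p> or </math>.
XML_TAG = re.compile(r"</?\w[^>]*>")

def has_markup_noise(text: str) -> bool:
    """Flag leftover XML/HTML tags or raw entities in a cleaned string."""
    return bool(XML_TAG.search(text)) or "&amp;" in text or "&lt;" in text

def ends_mid_sentence(text: str) -> bool:
    """Crude check for truncated text: no terminal punctuation at the end."""
    return not text.rstrip().endswith((".", "!", "?"))

clean = "We present a cleaned corpus of scientific abstracts."
dirty = "We present <p>a corpus of &amp; abstracts"

print(has_markup_noise(clean), ends_mid_sentence(clean))  # False False
print(has_markup_noise(dirty), ends_mid_sentence(dirty))  # True True
```

Sampling a few thousand records through checks like these is cheap and gives an independent read on the quality claims above.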

## ⚡ Why This Drop Matters

This dataset can be used immediately for:

- LLM pretraining
- Domain fine-tuning
- RAG systems
- Scientific summarization
- Research chatbots
- Knowledge extraction
- Embedding model training
- Topic modeling
- Graph building
- Multi-domain AI assistants

This is the type of corpus used in high-level research labs.

## 🧬 Signature

Every file includes:

```json
"signature": "Made by Zeronex"
```

This certifies provenance and protects against dataset plagiarism.

## 📦 Contents

```
📁 SampleArXiv_1M/
│── 1M_json_papers/   # main dataset (1,000,000 files)
│── categories/       # domain-filtered datasets
│── train.json        # train split (ready to use)
│── valid.json        # validation split
│── test.json         # test split
│── README.md         # this file
```

## 🏁 Final Notes

This is a teaser drop. The full, extended corpus (many millions more) will be released only if this work is shared, supported, and credited properly.

If you use this dataset, please cite the creator:

> “Dataset cleaned and structured by Zeronex (2025).”