
Datasheet for MacroLens

This datasheet follows the Datasheets for Datasets framework (Gebru et al., Communications of the ACM, 2021).


1. Motivation

For what purpose was the dataset created? MacroLens evaluates forecasting and valuation models that must reason over numerical history and contextual information — macroeconomic state, scenarios, and firm text — in a financial setting. It addresses gaps in three existing benchmark families: generic time-series forecasting benchmarks drop text and valuation tasks; financial language benchmarks drop forecasting and event reasoning; recent context-rich forecasting datasets are non-financial or omit valuation.

Who created the dataset and on behalf of which entity? Anonymous (NeurIPS 2026 Datasets & Benchmarks Track double-blind submission). Authors and affiliations to be disclosed after author notification.

Who funded the creation of the dataset? Anonymous (will be disclosed after notification).

Any other comments? The benchmark targets the intersection of contextual time-series forecasting, valuation, and scenario-conditioned event prediction, which prior public benchmarks have not covered jointly.


2. Composition

What do the instances that comprise the dataset represent? A MacroLens instance is the tuple ⟨ticker $i$, timestamp $t$, granularity $g$, lookback panel $x_{i,t-L:t,g}$, static covariates $z_i$, optional scenario $s_t$, optional text $u_{i,\le t}$⟩, paired with a task-specific target $y_{i,t}$.
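
A minimal sketch of that tuple as a Python structure (field names and shapes here are illustrative assumptions, not the release's actual schema):

```python
from dataclasses import dataclass
from typing import Optional

import numpy as np

@dataclass
class MacroLensInstance:
    # Keys
    ticker: str                   # i
    timestamp: str                # t, e.g. "2024-09-03"
    granularity: str              # g in {"daily", "weekly", "monthly"}
    # Inputs
    lookback: np.ndarray          # x_{i,t-L:t,g}: (L, 131) numeric panel window
    static: dict                  # z_i: sector, industry, exchange, security_type, ...
    scenario: Optional[dict] = None   # s_t (T4 only): event_type, description, scenario_id
    text: Optional[list] = None       # u_{i,<=t}: filings / news published on or before t
    target: Optional[np.ndarray] = None  # y_{i,t}: task-specific label
```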

How many instances are there in total?

  • 4,841,094 daily panel rows (3,219,018 train / 1,622,076 test) over 4,416 tickers and 1,313 trading days.
  • 1,009,314 weekly panel rows.
  • 232,483 monthly panel rows.
  • 23,147 T2 valuation ground truths.
  • 23,147 T5 private-valuation ground truths (same 1,324 holdout tickers, price-stripped).
  • ~14,500 T3 (ticker, fiscal year, field) ground truth tuples (11-field curated dense panel).
  • 4,072,843 T4 scenario-forecast ground-truth rows, drawn from the grid of 1,622,076 test panel rows × 1,130 events.
  • 11,065 T6 generator-evaluation ground truths.
  • 23,367 T7 real-estate ground truth rows over 23,190 unique addresses.
  • 1,130 macroeconomic scenario events across 49 types.

Does the dataset contain all possible instances or is it a sample (e.g., a sample of a larger set)? The 4,416-ticker universe is the union of: full Russell 2000 (1,923 IWM holdings), full S&P SmallCap 600 (72 IJR-only additions), iShares Micro-Cap (225 IWC additions), and the 2,196 small-cap NASDAQ/NYSE tickers outside all three indices, filtered to company market cap ≤ $7.4B. This is not a sample — it is the complete enumeration of U.S. small/micro-cap equities meeting the universe spec on the trade dates 2021-01-04 through 2026-03-31.
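
A sketch of that enumeration under the stated spec (file and column names are hypothetical; the release's collect scripts are authoritative):

```python
import pandas as pd

CAP_LIMIT = 7.4e9  # $7.4B market-cap ceiling from the universe spec

# Hypothetical inputs: one ticker list per source.
iwm = set(pd.read_csv("holdings_iwm.csv")["ticker"])   # Russell 2000
ijr = set(pd.read_csv("holdings_ijr.csv")["ticker"])   # S&P SmallCap 600
iwc = set(pd.read_csv("holdings_iwc.csv")["ticker"])   # iShares Micro-Cap
listed = pd.read_csv("nasdaq_trader_symbols.csv")      # NASDAQ/NYSE directory

caps = listed.set_index("ticker")["market_cap"]
union = iwm | ijr | iwc | set(listed["ticker"])
universe = sorted(t for t in union if caps.get(t, 0) <= CAP_LIMIT)
# Spec: 1,923 IWM + 72 IJR-only + 225 IWC-only + 2,196 outside-index = 4,416
```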

What data does each instance consist of?

  • Numeric panel (131 features per (ticker, date) coordinate): 6 OHLCV + 19 derived valuation ratios + 45 XBRL statement fields with TTM rolling-sum variants + 46 FRED macro + 7 EIA commodity + 1 days-since-filing + 7 index/membership flags.
  • Static covariates: ticker metadata (sector, industry, exchange), security_type (operating / fund / SPAC), index memberships.
  • Scenario object (optional, T4): event_type (49 categories), structured natural-language description, scenario_id.
  • Text (optional): SEC filings (markdown + PDF), financial news articles.
  • Target: task-specific; see the label/target and split questions below.

Is there a label or target associated with each instance? Yes, per task (T1: horizon-length close trajectory; T2/T5: realized market cap; T3/T6: 11 canonical XBRL field values; T4: 63-day post-event return percentage; T7: rent + price).

Is any information missing from individual instances? Yes, by point-in-time design. Quarterly XBRL facts carry a post-acceptance lag, so they appear in $x_{i,t,g}$ only after their publication timestamp. News articles enter only after publication. The 14 tickers without XBRL or yfinance fundamentals (5 FDIC-only banks plus 9 SEC-empty stubs for which yfinance also returns nothing) are applicability-masked on T2/T3/T5/T6 and kept for T1 and T4 with prices and filings only.
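
As an illustration, a minimal point-in-time visibility rule might look like the following (column names are assumptions; the release's loader may differ):

```python
import pandas as pd

def point_in_time_facts(facts: pd.DataFrame, lag_days: int = 1) -> pd.DataFrame:
    """Make XBRL facts visible only after EDGAR acceptance plus a safety lag.

    facts: columns [ticker, fiscal_period_end, accepted_at, field, value].
    Adds visible_from, the first date a fact may enter x_{i,t,g}.
    """
    out = facts.copy()
    out["visible_from"] = (
        pd.to_datetime(out["accepted_at"]).dt.normalize()
        + pd.Timedelta(days=lag_days)
    )
    return out

# When building the panel row for (ticker, t), merge only facts with
# visible_from <= t, never facts keyed by fiscal_period_end alone.
```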

Are relationships between individual instances made explicit? Yes. Tickers are linked to scenarios via dates and event_id. Real-estate addresses link to metros. Filings link to tickers via CIK. All keys are stored as identifier columns, not derived joins.

Are there recommended data splits? Yes (a minimal reproduction sketch follows the list):

  • T1, T4 (forecasting): chronological 70/30 split at 2024-09-03 (1,622,076 daily test rows).
  • T2, T3, T5, T6 (valuation + generation): 30% company-level holdout = 1,324 tickers, seed = 42. Each ticker contributes its latest valid snapshot. T3, T6 add a per-ticker temporal split (latest fiscal year for test, prior years for train).
  • T7 (real-estate): 30% address-level holdout (random, seeded).
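
A minimal reproduction sketch of the chronological and per-ticker temporal splits (dataframe and column names such as date and fiscal_year are assumptions; the stratified company-level holdout is sketched in Section 3):

```python
import pandas as pd

SPLIT_DATE = pd.Timestamp("2024-09-03")

def chronological_split(panel: pd.DataFrame):
    """T1/T4: chronological 70/30 split; no shuffling, hence no look-ahead."""
    is_test = panel["date"] >= SPLIT_DATE
    return panel[~is_test], panel[is_test]

def latest_fy_split(fy_panel: pd.DataFrame):
    """T3/T6 temporal split: per ticker, the latest fiscal year is test."""
    latest = fy_panel.groupby("ticker")["fiscal_year"].transform("max")
    train = fy_panel[fy_panel["fiscal_year"] < latest]
    test = fy_panel[fy_panel["fiscal_year"] == latest]
    return train, test
```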

Are there any errors, sources of noise, or redundancies in the dataset? Yes (the two build-time filters are sketched after the list):

  • yfinance occasionally yields stale or misaligned quarter-close fundamentals; the loader applies a one-day lag for safety.
  • The 14 fundamentals-empty tickers are applicability-masked, not excluded.
  • T4 events with pre-event price below the SEC penny-stock threshold ($0.50) are dropped at build time (~140 rows), because percentage-return arithmetic is numerically unstable at the noise floor.
  • T7 has 854 duplicate-address rows in the train pool (53,804 unique vs 54,658 raw); deduplicated at canonical-index time.
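
A sketch of the two build-time filters named above (column names such as pre_event_close and canonical_address are hypothetical):

```python
import pandas as pd

PENNY_THRESHOLD = 0.50  # SEC penny-stock cutoff used at T4 build time

def drop_penny_events(t4_rows: pd.DataFrame) -> pd.DataFrame:
    """Drop rows whose pre-event close sits at the noise floor, where
    63-day percentage returns are numerically unstable."""
    return t4_rows[t4_rows["pre_event_close"] >= PENNY_THRESHOLD]

def dedup_addresses(t7_rows: pd.DataFrame) -> pd.DataFrame:
    """T7: keep one row per canonical address at canonical-index time."""
    return t7_rows.drop_duplicates(subset="canonical_address", keep="first")
```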

Is the dataset self-contained, or does it link to or otherwise rely on external resources? The Hugging Face release is self-contained for SEC EDGAR (filings + XBRL facts), FRED + EIA macro series, yfinance-derived prices + fundamentals, and the curated benchmark parquets; no user credentials are needed for these. Two sources are gated by external licensing and ship as derived features plus reconstruction scripts only:

| Source | Bundled in HF release? | User credentials required for raw re-fetch? |
| --- | --- | --- |
| SEC EDGAR (filings, XBRL) | Yes (public domain) | No (free) |
| FRED, EIA (macro) | Yes (public domain) | No (free; FRED API key recommended for higher rate limits) |
| yfinance (prices, fundamentals) | Yes (derived features) | No (free) |
| Macroeconomic event scenarios | Yes (curated by us, CC-BY-4.0) | No |
| RentCast (real estate raw) | No; derived features only | Yes; user's own RentCast subscription for collect_real_estate.py |
| Financial news (~215k articles) | No; derived counts only | Yes; user's own news-API key for collect_news.py |

Does the dataset contain data that might be considered confidential? No. All sources are public regulatory filings (SEC EDGAR), public market data (yfinance), public macroeconomic series (FRED, EIA), and licensed real-estate listings (RentCast, used under their terms).

Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? No, beyond standard financial-news content (corporate disputes, lawsuits, layoffs), which is part of public regulatory disclosure.

Does the dataset relate to people? Indirectly — SEC filings name corporate officers and directors as part of public regulatory disclosure (the same information that appears on EDGAR). No private individuals; no PII beyond what is in public regulatory filings.

Does the dataset identify any subpopulations? The dataset records security_type (operating, fund, SPAC) and Global Industry Classification Standard (GICS) sector for every ticker. No protected demographic categories.


3. Collection Process

How was the data associated with each instance acquired?

  • Universe: iShares IWM/IJR/IWC ETF holdings + NASDAQ Trader symbol directory (filtered to market cap ≤ $7.4B).
  • Prices: Yahoo Finance.
  • Fundamentals: yfinance (3.22M rows) + SEC EDGAR XBRL company-facts API (46.79M facts, 92.6% coverage).
  • Macro: FRED + EIA via the publicly documented APIs.
  • Filings: SEC EDGAR (10-K, 10-Q, 8-K, 20-F, 6-K, N-CSR, N-CSRS).
  • News: provider feed + entity linking.
  • Real estate: RentCast API (100 U.S. metros, 139,855 properties × 544 RentCast variants).

What mechanisms or procedures were used to collect the data? Custom Python scripts (collect_*.py) using each source's official documented API. Rate limits were honored. All scripts are included in the release.

If the dataset is a sample from a larger set, what was the sampling strategy? Not a sample — full enumeration of the universe spec over 2021-01-04 → 2026-03-31. Within that, the 30% company-level valuation holdout uses stratified sampling on (sector, market-cap quartile) at fixed seed = 42.
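
A sketch of that stratified holdout under the stated spec (pandas; ticker, sector, and market_cap are illustrative column names):

```python
import pandas as pd

def stratified_company_holdout(meta: pd.DataFrame, frac: float = 0.30,
                               seed: int = 42) -> set:
    """Sample ~30% of tickers within each (sector, market-cap quartile) stratum.

    meta: one row per ticker with columns [ticker, sector, market_cap].
    """
    meta = meta.copy()
    meta["cap_q"] = pd.qcut(meta["market_cap"], 4, labels=False)
    held_out = (
        meta.groupby(["sector", "cap_q"], group_keys=False)
            .apply(lambda g: g.sample(frac=frac, random_state=seed))
    )
    return set(held_out["ticker"])
```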

Who was involved in the data collection process? Anonymous authors. No human annotators (the dataset uses programmatic API queries).

Over what timeframe was the data collected? Source data spans 2021-01-04 through 2026-03-31. Collection scripts were run in 2025-2026 to assemble the panel.

Were any ethical review processes conducted? N/A — public-records data only.


4. Preprocessing / Cleaning / Labeling

Was any preprocessing/cleaning/labeling of the data done? Yes (the leakage test and APE clip are sketched after the list):

  • Point-in-time alignment: every observation aligns to publication timestamp (filings post-acceptance lag, quarterly XBRL post-acceptance, news post-publication).
  • Algebraic-leakage scrubbing for T2/T5: every input column is auto-tested against $\log y$; any column with $|\text{Pearson}| > 0.99$ to the target is excluded. Largest residual T2 correlation post-scrub is shares-outstanding at $\rho = 0.30$, a legitimate size proxy.
  • APE clipping at 10× (1,000%) on all valuation tasks to prevent a single mispredicted outlier from dominating MAPE-style metrics.
  • Outlier cleanup at source for T4: rows with pre-event price below SEC penny-stock threshold ($0.50) dropped at build time.
  • Address deduplication for T7 at canonical-index time (854 duplicates in train pool, 177 in eval pool removed).
  • TTM rolling-sum variants computed for flow-style XBRL fields (revenue, net income, etc.).
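
A sketch of the leakage test and the APE clip described above (numpy/pandas; function names are ours, not the release's):

```python
import numpy as np
import pandas as pd

def scrub_algebraic_leakage(X: pd.DataFrame, y: pd.Series,
                            thresh: float = 0.99) -> list:
    """Return input columns whose |Pearson r| with log(y) exceeds thresh.

    Flagged columns are near-algebraic functions of the target (e.g.
    price x shares for market cap) and are excluded from T2/T5 inputs.
    """
    log_y = np.log(y)
    flagged = []
    for col in X.columns:
        r = X[col].corr(log_y)  # pairwise-complete Pearson correlation
        if pd.notna(r) and abs(r) > thresh:
            flagged.append(col)
    return flagged

def clipped_ape(y_true: np.ndarray, y_pred: np.ndarray,
                cap: float = 10.0) -> np.ndarray:
    """Absolute percentage error, clipped at 10x (1,000%) per the protocol."""
    ape = np.abs(y_pred - y_true) / np.abs(y_true)
    return np.minimum(ape, cap)
```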

Was the "raw" data saved in addition to the preprocessed/cleaned/labeled data? Yes. The release bundles raw XBRL facts (xbrl/), raw prices (prices/), raw fundamentals (fundamentals/) alongside the curated benchmark/ parquets so downstream researchers can re-derive features.

Is the software that was used to preprocess/clean/label the data available? Yes — preprocess.py, assemble_benchmark.py, build_ontology.py, enrich_benchmark.py, generate_scenarios.py, build_valuation_tasks.py, validate_all.py. All under MIT.


5. Uses

Has the dataset been used for any tasks already? Yes — the accompanying paper reports a 17-method baseline panel across 7 families on T1–T7.

Is there a repository that links to any or all papers or systems that use the dataset? The HF dataset card will track citations. Currently: the accompanying NeurIPS 2026 D&B paper.

What (other) tasks could the dataset be used for?

  • Multi-modal time-series forecasting research.
  • Macroeconomic-event impact studies.
  • LLM evaluation on domain-specific (financial) tasks.
  • Private-market valuation modeling.
  • Cross-domain transfer (real-estate vs equity valuation).
  • Scenario reasoning + counterfactual forecasting.

Is there anything about the composition of the dataset or the way it was collected that might impact future uses?

  • U.S.-only and English-only — international generalizability not supported.
  • Survivorship bias is partially mitigated by including delisted tickers, but pre-2021 history is not covered.
  • The 30% company-level holdout for T2/T3/T5/T6 evaluates OOD-ticker generalization but not OOD-sector or OOD-industry by construction.

Are there tasks for which the dataset should not be used?

  • Trading decisions: the dataset is a research benchmark; metrics do not include transaction costs, slippage, or execution modeling. Direct trading use is not recommended.
  • International generalizability claims: U.S. equities only.
  • Deployment safety: no adversarial-robustness testing.


6. Distribution

Will the dataset be distributed to third parties outside of the entity on behalf of which the dataset was created? Yes — Hugging Face Datasets, public.

How will the dataset be distributed?

  • Primary: huggingface.co/datasets/macrolens/MacroLens (Croissant-validated, NeurIPS-D&B compliant).
  • Code: same repo.
  • Reconstruction scripts: same repo (raw filings + news re-fetched from official sources).

When will the dataset be distributed? Public at time of NeurIPS 2026 D&B-track submission.

Will the dataset be distributed under a copyright or other intellectual property license, and/or under applicable terms of use (ToU)?

  • Derived features + curated panel: CC-BY-4.0.
  • Code: MIT.
  • Vendored libraries (under methods/_vendored/): TSLib (MIT), ModernTCN (Apache 2.0).
  • Reconstruction scripts: MIT.

Have any third parties imposed IP-based or other restrictions on the data associated with the instances?

  • yfinance, FRED, EIA, RentCast: each has its own ToU; the release ships derived features and reconstruction scripts.
  • SEC EDGAR: public domain.

Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? No.


7. Maintenance

Who is supporting/hosting/maintaining the dataset? Anonymous (NeurIPS 2026 D&B submission). Maintainer-of-record will be disclosed after author notification.

How can the owner / curator / manager of the dataset be contacted? Through the Hugging Face dataset discussions tab (huggingface.co/datasets/macrolens/MacroLens/discussions) or via the corresponding-author email (post-notification).

Is there an erratum? None at submission. Errata will be tracked in the dataset card's Changelog section.

Will the dataset be updated? Yes — minor versioned updates planned to extend the time window and refresh upstream sources. Versioning follows semver; each release tags a Git-style snapshot in the HF repo.

If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances? N/A — public regulatory filings only.

Will older versions of the dataset continue to be supported/hosted/maintained? Yes — older revisions remain accessible via HF dataset revision tags (commit SHAs).

If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? Yes — pull requests via the HF dataset repo or the GitHub mirror. Contributions are reviewed for license compatibility (CC-BY-4.0 compatible only) and benchmark protocol consistency.


This datasheet was prepared at the time of NeurIPS 2026 D&B-track submission. The Croissant metadata file (auto-generated by Hugging Face) at https://huggingface.co/api/datasets/macrolens/MacroLens/croissant is the machine-readable counterpart to this human-readable datasheet.