---
license: cc-by-4.0
task_categories:
- time-series-forecasting
- tabular-regression
- text-generation
- question-answering
language:
- en
size_categories:
- 1M<n<10M
tags:
- finance
- macroeconomic
- multimodal
- benchmark
- sec-edgar
- xbrl
- rentcast
- small-cap
- russell-2000
- private-valuation
- scenario-conditioned-forecasting
pretty_name: MacroLens
configs:
- config_name: panel_daily
data_files:
- split: train
path: data/daily/panel_train.parquet
- split: test
path: data/daily/panel_test.parquet
- config_name: panel_weekly
data_files:
- split: train
path: data/weekly/panel_train.parquet
- split: test
path: data/weekly/panel_test.parquet
- config_name: panel_monthly
data_files:
- split: train
path: data/monthly/panel_train.parquet
- split: test
path: data/monthly/panel_test.parquet
- config_name: scenarios_daily
data_files: data/daily/scenarios.parquet
- config_name: valuation_inputs_daily
data_files: data/daily/valuation_inputs.parquet
- config_name: private_valuation_inputs_daily
data_files: data/daily/private_valuation_inputs.parquet
- config_name: generation_inputs_daily
data_files: data/daily/generation_inputs.parquet
- config_name: generation_ground_truth_daily
data_files: data/daily/generation_ground_truth.parquet
- config_name: generator_eval_inputs_daily
data_files: data/daily/generator_eval_inputs.parquet
- config_name: generator_eval_ground_truth_daily
data_files: data/daily/generator_eval_ground_truth.parquet
- config_name: scenario_forecast_ground_truth_daily
data_files: data/daily/scenario_forecast_ground_truth.parquet
- config_name: real_estate_train
data_files: data/real_estate/re_train_properties.parquet
- config_name: real_estate_eval
data_files: data/real_estate/re_eval_inputs.parquet
---
# MacroLens
A benchmarking corpus for **contextual financial reasoning under macroeconomic scenarios** across **4,416 U.S. small- and micro-cap equities (2021-01-04 to 2026-03-31)**. MacroLens unifies seven tasks over a single point-in-time panel: contextual time-series forecasting, public valuation, financial-statement generation, scenario-conditioned return forecasting, private-company valuation, generator evaluation from natural-language descriptions, and real-estate valuation.

| Task | Type | Output |
|---|---|---|
| **T1** Contextual Forecasting | Time-series | Horizon-length close trajectory |
| **T2** Public Valuation | Tabular regression | Equity market cap |
| **T3** Financial Statement Generation | Structured generation | 11 canonical XBRL fields per (ticker, fiscal year) |
| **T4** Scenario-Conditioned Return | Event forecasting | 63-day post-event return percentage |
| **T5** Private-Company Valuation | Tabular regression (price-stripped) | Equity value w/o market data |
| **T6** Generator Evaluation | NL→ structured | Same 11 fields from a natural-language company description |
| **T7** Real-Estate Valuation | Cross-domain regression | Rent + price per RentCast address |

Every instance carries a 141-column point-in-time panel (131 numeric) spanning prices, 46.8M XBRL accounting facts, 53 macroeconomic series, filing recency, and derived ratios; an optional macroeconomic scenario object (1,130 events across 49 types); and optional SEC filings + financial-news context. Temporal alignment is strictly point-in-time: every observation visible at prediction timestamp $t$ was publicly available by $t$.
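The point-in-time rule can be illustrated with a minimal sketch in plain Python (field names below are illustrative, not the actual panel schema): a fact may only enter features for prediction timestamps at or after the date it became public.

```python
from datetime import date

# Illustrative only: these field names are NOT the MacroLens schema.
facts = [
    {"ticker": "ABC", "available": date(2024, 1, 10), "revenue": 100.0},
    {"ticker": "ABC", "available": date(2024, 4, 12), "revenue": 110.0},
    {"ticker": "XYZ", "available": date(2024, 2, 1),  "revenue": 50.0},
]

def visible_facts(facts, t):
    """Keep only facts that were publicly available by prediction timestamp t."""
    return [f for f in facts if f["available"] <= t]

# At t = 2024-03-01 the April filing must be invisible.
snapshot = visible_facts(facts, date(2024, 3, 1))
```

Any feature computed from `snapshot` is leakage-free by construction, because the filter is applied before feature engineering rather than after.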
## Quickstart
```python
import macrolens as ml
# 1. Load (X, y, meta) — identical schema across train/test
X_train, y_train, meta_train = ml.load("T1", "train", granularity="daily")
X_test, y_test, meta_test = ml.load("T1", "test")
# 2. Fit + Predict
model = ml.methods.LightGBMRegressor(task="T1")
model.fit(X_train, y_train, seed=42)
y_pred = model.predict(X_test)
# 3. Score (cluster-bootstrap CIs by ticker for T1; adaptive n_boot)
metrics = ml.score("T1", y_test, y_pred)
print(metrics["mse"]["value"], metrics["mse"]["ci_lo"], metrics["mse"]["ci_hi"])
```
10 lines, every task. To change methods, swap the `ml.methods.*` line. See `notebooks/quickstart.ipynb` for a complete walkthrough of the 17-method panel.
## Dataset structure
```
data/
├── daily/                                      # primary granularity (4.84M panel rows)
│   ├── panel_train.parquet                     # T1, T4 train side
│   ├── panel_test.parquet                      # T1, T4 eval side
│   ├── scenarios.parquet                       # 1,130 macroeconomic events
│   ├── valuation_inputs.parquet                # T2 features + ground truth
│   ├── private_valuation_inputs.parquet        # T5 (price-stripped)
│   ├── generation_inputs.parquet               # T3 fundamentals snapshot
│   ├── generation_ground_truth.parquet         # T3 long-form (ticker, FY, field, value)
│   ├── generator_eval_inputs.parquet           # T6 NL company descriptions
│   ├── generator_eval_ground_truth.parquet
│   └── scenario_forecast_ground_truth.parquet  # T4 ground truth
├── weekly/                                     # Friday-close resampled (1.01M rows)
├── monthly/                                    # last-trading-day resampled (232k rows)
├── real_estate/
│   ├── re_train_properties.parquet             # T7 train (53,804 unique addresses)
│   └── re_eval_inputs.parquet                  # T7 eval (23,190 unique addresses)
├── xbrl/                                       # 46.8M standardized XBRL facts, 92.6% ticker coverage
├── filings/                                    # 295,860 SEC filings (10-K, 10-Q, 8-K, 20-F, 6-K, N-CSR, N-CSRS) — markdown + PDF
├── prices/                                     # OHLCV + adjusted close (yfinance)
├── fundamentals/                               # quarterly statements (yfinance, ~3.2M rows)
├── macro/                                      # 46 FRED + 7 EIA series
└── manifest.json                               # SHA-256 over every parquet (provenance)
```
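Integrity can be checked against `manifest.json` with a short stdlib sketch, assuming the manifest maps relative parquet paths to hex SHA-256 digests (the exact layout may differ):

```python
import hashlib
import json
import pathlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in chunks (parquet shards can be large)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            h.update(block)
    return h.hexdigest()

def verify_manifest(manifest_path="data/manifest.json"):
    """Return the paths whose on-disk hash differs from the manifest entry."""
    manifest_path = pathlib.Path(manifest_path)
    manifest = json.loads(manifest_path.read_text())
    root = manifest_path.parent
    return [p for p, want in manifest.items() if sha256_of(root / p) != want]
```

An empty return value from `verify_manifest()` means every listed parquet matches its recorded digest.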
## Data sources & access requirements
**What's bundled in this HF release** (no user credentials required):

| Source | Bundled artifact | License |
|---|---|---|
| SEC EDGAR | `filings/` (295k docs), `xbrl/` (46.8M facts) | Public domain (US gov) |
| FRED | 46 macroeconomic series | Public domain |
| EIA | 7 commodity series | Public domain |
| yfinance | `prices/` (OHLCV), `fundamentals/` (quarterly) — derived features | Non-commercial (yfinance ToU) |
| RentCast | `real_estate/` (address-level derived features only — rent + price targets, property attributes) | RentCast ToU — derived only |
| Macroeconomic events | `scenarios.parquet` (1,130 events × 49 types) | Curated by us, CC-BY-4.0 |
**What's NOT bundled** (gated — user credentials required for raw re-fetch via `collect_*.py`):

| Source | Status | User-side requirement |
|---|---|---|
| **Financial-news provider** | **Excluded** — provider ToU prohibits redistribution. The release ships derived counts (`filing_8k_count_30d`, `news_count_7d`, `has_press_release_7d`) only. | **User's own news-API key required** for `collect_news.py` |
| **RentCast raw listings** | **Excluded raw** — proprietary. Derived features bundled. | **User's own RentCast subscription** required for `collect_real_estate.py` raw mode |
## Universe
The 4,416-ticker universe combines: full Russell 2000 (1,923 IWM holdings), full S&P SmallCap 600 (72 IJR-only additions), iShares Micro-Cap (225 IWC additions), and 2,196 small-cap NASDAQ/NYSE tickers outside all three indices. The split is **3,857 operating companies + 333 funds + 226 SPACs**, with `security_type` recorded for applicability-aware stratification.
## Splits
- **Forecasting (T1, T4)**: chronological 70/30 split at **2024-09-03**.
- **Valuation + generation (T2, T3, T5, T6)**: **30% company-level holdout = 1,324 tickers** (seed = 42), each contributing its latest valid snapshot.
- **Real-estate (T7)**: 30% address-level holdout (random, seeded), with per-property time-axis features.

Cluster-bootstrap 95% CIs are computed per task: by `ticker` (T1/T2/T3/T5/T6), `scenario_id` (T4), or `address` (T7). The number of bootstrap resamples is adaptive in [1k, 10k] until `(ci_hi - ci_lo) / |mean| < 0.05`.
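The adaptive cluster bootstrap can be sketched in plain Python. This is an illustration of the resampling rule described above, not the shipped `ml.score` implementation; the `errors_by_cluster` input shape is an assumption.

```python
import random
from statistics import mean

def cluster_bootstrap_ci(errors_by_cluster, step=1_000, n_max=10_000,
                         alpha=0.05, tol=0.05, seed=42):
    """95% cluster-bootstrap CI for a mean error.

    `errors_by_cluster` maps a cluster id (ticker / scenario_id / address)
    to that cluster's per-instance errors. Whole clusters are resampled
    with replacement so within-cluster correlation is preserved; the
    number of resamples grows in steps of `step` until the relative CI
    width falls below `tol` or `n_max` is reached.
    """
    rng = random.Random(seed)
    clusters = list(errors_by_cluster.values())
    stats = []
    while True:
        for _ in range(step):
            sample = rng.choices(clusters, k=len(clusters))
            stats.append(mean(e for c in sample for e in c))
        stats.sort()
        ci_lo = stats[int(alpha / 2 * len(stats))]
        ci_hi = stats[int((1 - alpha / 2) * len(stats)) - 1]
        m = mean(stats)
        if len(stats) >= n_max or (m != 0 and (ci_hi - ci_lo) / abs(m) < tol):
            return ci_lo, ci_hi
```

Resampling clusters rather than rows is what keeps the interval honest when errors within a ticker (or scenario, or address) are correlated.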
## Methods (panel)
The release ships a 17-method baseline panel across 7 families: 4 naive, 2 classical, 3 deep sequence, 3 zero-shot TSFM, 2 LLM-adapted multi-task systems, and 3 zero-shot frontier LLMs (gpt-oss-120b, gpt-5.1, gemini-3-flash, qwen35). Every method registers via `@register(name=…, family=…, tasks=…)` and exposes the sklearn-style `(fit, predict, save, load)` contract.
```python
ml.list_methods() # all registered methods
ml.list_methods(task="T1") # methods that support T1
ml.list_methods(family="naive") # naive baselines
```
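A minimal sketch of the decorator-based registry this implies (illustrative, not the actual macrolens internals; the `naive_last` baseline below is hypothetical):

```python
# Registry maps a method name to its class, family, and supported tasks.
_REGISTRY = {}

def register(name, family, tasks):
    """Class decorator: record the method so list_methods() can find it."""
    def wrap(cls):
        _REGISTRY[name] = {"cls": cls, "family": family, "tasks": set(tasks)}
        return cls
    return wrap

def list_methods(task=None, family=None):
    """Filter registered methods by supported task and/or family."""
    return sorted(
        n for n, m in _REGISTRY.items()
        if (task is None or task in m["tasks"])
        and (family is None or family == m["family"])
    )

@register(name="naive_last", family="naive", tasks=["T1", "T4"])
class NaiveLast:
    """Hypothetical baseline: predict the last observed value."""
    def fit(self, X, y, seed=0):
        return self
    def predict(self, X):
        return [row[-1] for row in X]
```

The sklearn-style contract means any registered class can be dropped into the quickstart loop unchanged.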
## License
- **Data**: CC-BY-4.0 (derived features + curated panel)
- **Code**: MIT (`macrolens/`, `dataloader/`, `methods/`, `eval.py`, `experiments/`)
- **Vendored libraries** (under `methods/_vendored/`):
- `tslib/` — MIT (DLinear, iTransformer source)
- `moderntcn/` — Apache 2.0 (ModernTCN source)
- **Reconstruction scripts** (`collect_*.py`) provided for sources with redistribution restrictions: SEC filings (re-fetch from EDGAR), financial news (re-fetch from provider), real-estate (re-fetch from RentCast).
## Citation
```bibtex
@inproceedings{macrolens2026,
title = {{MacroLens}: A Multi-Task Benchmark for Contextual Financial Reasoning under Macroeconomic Scenarios},
author = {<authors>},
booktitle = {Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track},
year = {2026}
}
```
## Reproducibility
Every `RunRecord` JSON in `experiments/results/` records: `git_sha`, `lib_versions`, `hardware`, `artifact_sha256` (SHA-256 of every parquet read), `timestamp`, and `deterministic_mode`. Predictions are persisted at `experiments/predictions/<method>_<task>_seed<seed>.pkl` so eval logic can be re-applied via `experiments/re_evaluate.py` without re-running models.
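Assembling such a record might look like the following sketch; the field names follow the list above, but the exact JSON layout and the helper name `make_run_record` are assumptions.

```python
import hashlib
import platform
import subprocess
import sys
from datetime import datetime, timezone

def make_run_record(method, task, seed, parquets_read=()):
    """Sketch of building a RunRecord with the fields named above."""
    try:
        git_sha = subprocess.run(
            ["git", "rev-parse", "HEAD"],
            capture_output=True, text=True,
        ).stdout.strip() or None
    except OSError:  # git not installed
        git_sha = None
    return {
        "method": method,
        "task": task,
        "seed": seed,
        "git_sha": git_sha,
        "lib_versions": {"python": sys.version.split()[0]},
        "hardware": platform.machine(),
        "artifact_sha256": {  # SHA-256 of every parquet the run read
            p: hashlib.sha256(open(p, "rb").read()).hexdigest()
            for p in parquets_read
        },
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "deterministic_mode": True,
    }
```

Hashing the inputs at run time is what lets `experiments/re_evaluate.py` later confirm that the eval data has not drifted since the predictions were produced.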
## Reconstruction (raw filings + news)
The release ships derived features and reconstruction scripts; raw artifacts subject to redistribution restrictions remain re-fetchable:
```bash
python collect_universe.py # iShares ETF holdings + NASDAQ Trader directory
python collect_filings.py # SEC EDGAR (10-K, 10-Q, 8-K, 20-F, 6-K, N-CSR, N-CSRS)
python collect_fundamentals.py # XBRL company facts via SEC EDGAR
python collect_prices.py # yfinance OHLCV + adjusted close
python collect_news.py # provider-specific (~215k articles)
python collect_real_estate.py # RentCast (100 metros, 139,855 properties)
python collect_macro.py # FRED + EIA series
python preprocess.py
python assemble_benchmark.py
python generate_scenarios.py
python enrich_benchmark.py
python build_valuation_tasks.py
python validate_all.py
```
## Authors / Contact
Anonymous (NeurIPS 2026 Datasets & Benchmarks Track submission). Contact at `<email>` after author notification.