---
license: cc-by-nc-4.0
language: en
size_categories:
  - 1K<n<100K
task_categories:
  - tabular-classification
  - time-series-forecasting
tags:
  - finance
  - llm-trading
  - benchmark
  - evaluation
  - mandate-based
  - fund-style-policy
pretty_name: QuantArena Artifact Bundle
configs:
  - config_name: metrics
    default: true
    data_files:
      - split: train
        path: derived/all_metrics.csv
  - config_name: trades
    data_files:
      - split: train
        path: derived/all_trades.csv
---

# QuantArena Artifact Bundle

Reproducibility artifacts for the paper *QuantArena: Beat the Market or Be the Market? A Live-Market Evaluation of Investment Paradigms* (NeurIPS 2026 Evaluations & Datasets Track submission).

## Summary

QuantArena is a controlled live-market evaluation protocol that holds the LLM backend, market data stream, analyst workflow, capital, and execution harness fixed across runs and varies only the investment doctrine (the policy module). This bundle releases the run-level data, comparison aggregates, universe definitions, and provenance manifests required to inspect every figure, table, and quantitative claim in the paper.

The dataset is a structured collection of backtest runs rather than a single tabular file. Each run contains daily portfolio state, the trade log, and pre-computed performance metrics. Two flat tables in derived/ provide a queryable view of the full bundle for users who want to load it as a single DataFrame.

## What's in this bundle

```
release_data/
├── README.md                       # This file
├── LICENSE.md                      # Multi-source license + redistribution notes
├── CHANGELOG.md                    # Version history
├── croissant.json                  # Croissant 1.0 metadata (core + RAI)
├── manifest.json                   # Top-level run inventory (machine-readable)
│
├── runs/                           # 28 individual backtest runs
│   ├── exp1_caseStudy_us_6m/       # US 6M main case study (5 mandates)
│   ├── exp1_caseStudy_cn_6m/       # CN 6M main case study (5 mandates)
│   ├── exp2_reproducibility_us_6m_run2/  # Independent re-run (5 mandates)
│   ├── exp3_mechanism_ablation_us_3m/    # US 3M ablation (8 variants)
│   └── exp4_backend_robustness_us_3m_gpt54/  # GPT-5.4 robustness (5 mandates)
│
├── exp5_efficiency_ablation_cn_10t_6m/   # Documented only (no run artifacts)
├── comparisons/                    # Cross-mandate aggregates per market
├── universe/                       # 5x4 sector/style ticker grid
├── derived/                        # Pre-flattened tables for easy querying
│   ├── all_trades.csv              # Concatenated trade log across all 28 runs
│   ├── all_metrics.csv             # Long-format performance metrics table
│   └── gpt54_robustness/           # Backend-comparison CSVs
│
├── audit/                          # Reproducibility manifest (mirror of paper's latex/audit/)
└── tools/                          # Scripts to rebuild the bundle from raw artifacts
```

Each `runs/<experiment>/<mandate>/` directory contains:

- `metrics.json` — summary metrics (return, drawdown, Sharpe, turnover, cash ratio, exposure, …)
- `trades.csv` — per-trade log (date, ticker, action, shares, price, value, justification)
- `equity_curve.csv` — daily portfolio state (date, total_value, daily_return, cash_balance, benchmark_value, benchmark_return)
- `backtest_report.md` — human-readable run summary
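To compare mandates within one experiment, the per-mandate `metrics.json` files can be collected into a single table. A minimal sketch, assuming the directory layout above (the function name is illustrative, not part of the bundle's tooling):

```python
import json
from pathlib import Path

import pandas as pd


def load_experiment_metrics(exp_dir: str) -> pd.DataFrame:
    """Collect metrics.json from every mandate directory under one
    experiment and return a DataFrame indexed by mandate name."""
    rows = []
    for run_dir in sorted(p for p in Path(exp_dir).iterdir() if p.is_dir()):
        metrics_path = run_dir / "metrics.json"
        if not metrics_path.exists():
            continue  # skip directories without run artifacts
        with open(metrics_path) as f:
            m = json.load(f)
        m["mandate"] = run_dir.name  # mandate inferred from directory name
        rows.append(m)
    return pd.DataFrame(rows).set_index("mandate")


# e.g. load_experiment_metrics("runs/exp1_caseStudy_us_6m")
```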

## Loading

### As a DataFrame (recommended for most users)

```python
import pandas as pd

# Concatenated trade log across all 28 runs
trades = pd.read_csv("derived/all_trades.csv")

# Long-format metrics table
metrics = pd.read_csv("derived/all_metrics.csv")

# Filter to one experiment
us_6m = metrics.query("experiment == 'exp1_caseStudy_us_6m'")
print(us_6m[["display_name", "total_return", "max_drawdown", "total_trades"]])
```
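Assuming the schema shown in the filter above (`experiment`, `display_name`, and `total_return` columns), the long-format table can also be pivoted into a mandate-by-experiment grid for side-by-side comparison:

```python
import pandas as pd


def pivot_returns(metrics: pd.DataFrame) -> pd.DataFrame:
    """Pivot the long-format metrics table into a mandate-by-experiment
    grid of total returns (column names follow derived/all_metrics.csv)."""
    return metrics.pivot_table(index="display_name",
                               columns="experiment",
                               values="total_return")
```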

### Hugging Face Datasets library

```python
from datasets import load_dataset

trades = load_dataset("NIPS26Repo/quantarena-artifacts", "trades", split="train")
metrics = load_dataset("NIPS26Repo/quantarena-artifacts", "metrics", split="train")
```

### Per-run artifacts (when you need the full equity curve or a single trade log)

```python
import json
import pandas as pd

run = "runs/exp1_caseStudy_us_6m/fundamental_value"
with open(f"{run}/metrics.json") as f:
    metrics = json.load(f)
trades = pd.read_csv(f"{run}/trades.csv")
equity = pd.read_csv(f"{run}/equity_curve.csv")
```

## Experiment overview

| Experiment | Window | Universe | Backend | Mandates / variants | Purpose in the paper |
|---|---|---|---|---|---|
| Exp 1 — Main case study (US) | 2025-09-01 to 2026-02-28 (124 trading days) | 20 US tickers (5×4 sector/style) | DeepSeek-V3.2 | 5 mandates | Q1 returns, Q2 cross-market shift, Q3 fidelity, sector matrix |
| Exp 1 — Main case study (CN) | 2025-09-01 to 2026-02-28 (102 trading days) | 20 CN A-share tickers | DeepSeek-V3.2 | 5 mandates | Same as above |
| Exp 2 — Reproducibility R2 (US) | 2025-09-01 to 2026-02-28 | 20 US tickers | DeepSeek-V3.2 | 5 mandates | `tab:reproducibility` (Q4) |
| Exp 3 — US 3M mechanism ablation | 2025-12-01 to 2026-02-28 | 20 US tickers | DeepSeek-V3.2 | 8 variants (Full + ablated for FV/BM/MT, Reference for LV/EqW) | `tab:us_3m_ablation_main` (Q4) |
| Exp 4 — Backend robustness (GPT-5.4) | 2025-12-01 to 2026-02-28 | 20 US tickers | GPT-5.4 (Macaron Responses API gateway, run dates Apr 23–24, 2026) | 5 mandates | `tab:us_3m_backend_robustness` (Q4) |
| Exp 5 — Execution efficiency (CN 10t) | 2025-09-01 to 2026-02-28 | 10 CN tickers | DeepSeek-V3.2 | E0 / E1 / E2 execution paths | `tab:efficiency_ablation` (appendix) |

Mandates: Fundamental Value, Macro Tactical, Behavioral Momentum, Low-Volatility (Smart Beta), Equal-Weight (Baseline).

Initial capital is $100,000 per run; decision cadence is daily.
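Since the cadence is daily, annualized statistics can be recomputed from the `daily_return` column of `equity_curve.csv`. A sanity-check sketch, assuming 252 trading periods per year and a zero risk-free rate (the paper's exact Sharpe convention may differ, so small deviations from `metrics.json` are expected):

```python
import math


def annualized_sharpe(daily_returns, periods_per_year=252, risk_free=0.0):
    """Annualized Sharpe ratio from a sequence of daily simple returns,
    using the sample standard deviation of excess returns."""
    excess = [r - risk_free / periods_per_year for r in daily_returns]
    n = len(excess)
    mean = sum(excess) / n
    var = sum((r - mean) ** 2 for r in excess) / (n - 1)
    return mean / math.sqrt(var) * math.sqrt(periods_per_year)
```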

## How to verify a number from the paper

1. Locate the figure / table / claim in `audit/figures.md`, `audit/tables.md`, or `audit/claims.md`.
2. Each entry names the source run ID(s) under `runs/<experiment>/<mandate>/`.
3. Open `metrics.json` (summary metrics), `trades.csv` (per-trade log), or `equity_curve.csv` (daily portfolio state) in that run directory.
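Step 3 can be automated for headline returns. A minimal sketch that recomputes total return from the equity curve and compares it against `metrics.json`, assuming `total_return` is stored as a fraction (the helper name and tolerance are illustrative):

```python
import json

import pandas as pd


def check_total_return(run_dir: str, tol: float = 1e-6) -> bool:
    """Recompute total return from equity_curve.csv (first vs. last
    total_value) and compare it to the value in metrics.json."""
    with open(f"{run_dir}/metrics.json") as f:
        reported = json.load(f)["total_return"]
    equity = pd.read_csv(f"{run_dir}/equity_curve.csv")
    recomputed = equity["total_value"].iloc[-1] / equity["total_value"].iloc[0] - 1
    return abs(recomputed - reported) < tol
```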

## Data sources used by the underlying runs

| Source | What it provided | Status in this bundle |
|---|---|---|
| Yahoo Finance / yfinance | US ticker prices and corporate actions | Trade-level prices appear in `trades.csv`; not redistributed in bulk |
| Tushare | China A-share prices, fundamentals, news | Same — trade-level only, not bulk |
| Financial Modeling Prep (FMP) | US fundamentals & news | Aggregated into per-run summaries only |
| AKShare | CN macro indicators & policy news | Same |
| Tavily | Search-API-based news retrieval | Sentiment scores aggregated into ISQ signals only |
| DeepSeek-V3.2 | Default reasoning backend | LLM outputs in the `trades.csv` justification column are short bookkeeping templates only |
| GPT-5.4 (Macaron API gateway) | Backend-robustness reasoning backend | Same |

See `LICENSE.md` for redistribution terms.

## Citation

```bibtex
@inproceedings{quantarena2026,
  title     = {QuantArena: Beat the Market or Be the Market? A Live-Market Evaluation of Investment Paradigms},
  author    = {Anonymous Author(s)},
  booktitle = {Advances in Neural Information Processing Systems Datasets and Benchmarks Track},
  year      = {2026}
}
```

## Limitations

- The 6M case studies are single-seed runs; Exp 2 quantifies seed sensitivity for the US window, but CN does not have a paired re-run in this bundle.
- Trading frictions (transaction costs, slippage, market impact) are not modeled.
- The universe is restricted to 20 liquid tickers per market; results may not transfer to micro-cap or illiquid names.
- The GPT-5.4 backend identifier follows the API gateway label exposed at run time; it is not a vendor build hash.
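Because frictions are not modeled, readers who want cost-adjusted figures can apply a simple haircut to the trade log themselves. A minimal sketch with a hypothetical flat per-trade fee in basis points (the fee level is an assumption, not a number from the paper):

```python
def total_transaction_cost(trade_values, cost_bps=5.0):
    """Total cost implied by a flat per-trade fee in basis points,
    applied to absolute trade notionals (the 'value' column of
    trades.csv); buys and sells are both charged."""
    return sum(abs(v) * cost_bps / 1e4 for v in trade_values)
```

Subtracting this total from a run's final portfolio value gives a rough, friction-adjusted return; it ignores slippage and market impact entirely.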

## Contact

Submitted via OpenReview to NeurIPS 2026 Evaluations & Datasets Track. Authors anonymized for double-blind review.