#!/usr/bin/env python3
"""Write README.md, LICENSE.md, CHANGELOG.md, croissant.json into the release bundle.
Done in Python so we don't fight display-layer path substitution in the
shell tools.
"""
from __future__ import annotations
import json
import hashlib
import os
import shutil
from pathlib import Path
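# Default layout assumption: this script lives at <repo>/release_data/tools/,
# so two parents above the script directory is the repo root; both paths can
# be overridden via the QUANTARENA_REPO / QUANTARENA_RELEASE_DIR env vars.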
SCRIPT_DIR = Path(__file__).resolve().parent
REPO = Path(os.environ.get("QUANTARENA_REPO", SCRIPT_DIR.parents[1])).resolve()
RELEASE = Path(os.environ.get("QUANTARENA_RELEASE_DIR", REPO / "release_data")).resolve()
HF_RAW_BASE_URL = "https://huggingface.co/datasets/NIPS26Repo/quantarena-artifacts/resolve/main"
README = """\
---
license: cc-by-nc-4.0
language: en
size_categories:
- 1K<n<100K
task_categories:
- tabular-classification
- time-series-forecasting
tags:
- finance
- llm-trading
- benchmark
- evaluation
- mandate-based
- fund-style-policy
pretty_name: QuantArena Artifact Bundle
configs:
- config_name: metrics
default: true
data_files:
- split: train
path: derived/all_metrics.csv
- config_name: trades
data_files:
- split: train
path: derived/all_trades.csv
---
# QuantArena Artifact Bundle
Reproducibility artifacts for the paper *QuantArena: Beat the Market or Be the
Market? A Live-Market Evaluation of Investment Paradigms* (NeurIPS 2026
Evaluations & Datasets Track submission).
## Summary
QuantArena is a controlled live-market evaluation protocol that holds the LLM
backend, market data stream, analyst workflow, capital, and execution harness
fixed across runs and varies only the **investment doctrine** (the policy
module). This bundle releases the run-level data, comparison aggregates,
universe definitions, and provenance manifests required to inspect every
figure, table, and quantitative claim in the paper.
The dataset is a structured **collection of backtest runs** rather than a
single tabular file. Each run contains daily portfolio state, the trade log,
and pre-computed performance metrics. Two flat tables in `derived/` provide a
queryable view of the full bundle for users who want to load it as a single
DataFrame.
## What's in this bundle
```
release_data/
├── README.md # This file
├── LICENSE.md # Multi-source license + redistribution notes
├── CHANGELOG.md # Version history
├── croissant.json # Croissant 1.1 metadata (core + RAI)
├── manifest.json # Top-level run inventory (machine-readable)
├── runs/ # 28 individual backtest runs
│ ├── exp1_caseStudy_us_6m/ # US 6M main case study (5 mandates)
│ ├── exp1_caseStudy_cn_6m/ # CN 6M main case study (5 mandates)
│ ├── exp2_reproducibility_us_6m_run2/ # Independent re-run (5 mandates)
│ ├── exp3_mechanism_ablation_us_3m/ # US 3M ablation (8 variants)
│ └── exp4_backend_robustness_us_3m_gpt54/ # GPT-5.4 robustness (5 mandates)
├── exp5_efficiency_ablation_cn_10t_6m/ # Documented only (no run artifacts)
├── comparisons/ # Cross-mandate aggregates per market
├── universe/ # 5x4 sector/style ticker grid
├── derived/ # Pre-flattened tables for easy querying
│ ├── all_trades.csv # Concatenated trade log across all 28 runs
│ ├── all_metrics.csv # Long-format performance metrics table
│ └── gpt54_robustness/ # Backend-comparison CSVs
├── audit/ # Reproducibility manifest (mirror of paper's latex/audit/)
└── tools/ # Scripts to rebuild the bundle from raw artifacts
```
Each `runs/<experiment>/<mandate>/` directory contains:
- `metrics.json` — summary metrics (return, drawdown, Sharpe, turnover, cash ratio, exposure, …)
- `trades.csv` — per-trade log (date, ticker, action, shares, price, value, justification)
- `equity_curve.csv` — daily portfolio state (date, total_value, daily_return, cashflow [cash balance], benchmark_value, benchmark_return)
- `backtest_report.md` — human-readable run summary
## Loading
### As a DataFrame (recommended for most users)
```python
import pandas as pd
# Concatenated trade log across all 28 runs
trades = pd.read_csv("derived/all_trades.csv")
# Long-format metrics table
metrics = pd.read_csv("derived/all_metrics.csv")
# Filter to one experiment
us_6m = metrics.query("experiment == 'exp1_caseStudy_us_6m'")
print(us_6m[["display_name", "total_return", "max_drawdown", "total_trades"]])
```
### Hugging Face Datasets library
```python
from datasets import load_dataset
trades = load_dataset("NIPS26Repo/quantarena-artifacts", "trades", split="train")
metrics = load_dataset("NIPS26Repo/quantarena-artifacts", "metrics", split="train")
```
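### Croissant metadata
A minimal sketch of record-level loading through `croissant.json`, assuming the
`mlcroissant` package is installed (`pip install mlcroissant`); the record-set
IDs (`metrics-records`, `trades-records`) are defined in this bundle's
Croissant file.
```python
import itertools
import mlcroissant as mlc

ds = mlc.Dataset(jsonld="https://huggingface.co/datasets/NIPS26Repo/quantarena-artifacts/resolve/main/croissant.json")
for record in itertools.islice(ds.records(record_set="metrics-records"), 5):
    print(record)
```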
### Per-run artifacts (when you need the full equity curve or single trade log)
```python
import json, pandas as pd
run = "runs/exp1_caseStudy_us_6m/fundamental_value"
metrics = json.load(open(f"{run}/metrics.json"))
trades = pd.read_csv(f"{run}/trades.csv")
equity = pd.read_csv(f"{run}/equity_curve.csv")
```
## Experiment overview
| Experiment | Window | Universe | Backend | Mandates / variants | Purpose in the paper |
|---|---|---|---|---|---|
| **Exp 1 — Main case study (US)** | 2025-09-01 to 2026-02-28 (124 trading days) | 20 US tickers (5×4 sector/style) | DeepSeek-V3.2 | 5 mandates | Q1 returns, Q2 cross-market shift, Q3 fidelity, sector matrix |
| **Exp 1 — Main case study (CN)** | 2025-09-01 to 2026-02-28 (102 trading days) | 20 CN A-share tickers | DeepSeek-V3.2 | 5 mandates | Same as above |
| **Exp 2 — Reproducibility R2 (US)** | 2025-09-01 to 2026-02-28 | 20 US tickers | DeepSeek-V3.2 | 5 mandates | `tab:reproducibility` (Q4) |
| **Exp 3 — US 3M mechanism ablation** | 2025-12-01 to 2026-02-28 | 20 US tickers | DeepSeek-V3.2 | 8 variants (Full + ablated for FV/BM/MT, Reference for LV/EqW) | `tab:us_3m_ablation_main` (Q4) |
| **Exp 4 — Backend robustness (GPT-5.4)** | 2025-12-01 to 2026-02-28 | 20 US tickers | GPT-5.4 (Macaron Responses API gateway, run dates Apr 23–24, 2026) | 5 mandates | `tab:us_3m_backend_robustness` (Q4) |
| **Exp 5 — Execution efficiency (CN 10t)** | 2025-09-01 to 2026-02-28 | 10 CN tickers | DeepSeek-V3.2 | E0 / E1 / E2 execution paths | `tab:efficiency_ablation` (appendix) |
Mandates: **Fundamental Value**, **Macro Tactical**, **Behavioral Momentum**,
**Low-Volatility (Smart Beta)**, **Equal-Weight (Baseline)**.
Initial capital is $100,000 per run; decision cadence is daily.
## How to verify a number from the paper
1. Locate the figure / table / claim in `audit/figures.md`, `audit/tables.md`,
or `audit/claims.md`.
2. Each entry names the source run ID(s) under
`runs/<experiment>/<mandate>/`.
3. Open `metrics.json` (summary metrics), `trades.csv` (per-trade log), or
`equity_curve.csv` (daily portfolio state) in that run directory.
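As a concrete sketch (assuming the per-run `metrics.json` reports `total_return`
in percent, matching the `total_return` column of `derived/all_metrics.csv`),
the headline return can be re-derived from the equity curve:
```python
import json
import pandas as pd

run = "runs/exp1_caseStudy_us_6m/fundamental_value"
reported = json.load(open(f"{run}/metrics.json"))["total_return"]
equity = pd.read_csv(f"{run}/equity_curve.csv")

# Recompute total return (in percent) from first/last daily portfolio values.
recomputed = (equity["total_value"].iloc[-1] / equity["total_value"].iloc[0] - 1) * 100
print(f"reported {reported:.2f}% vs recomputed {recomputed:.2f}%")
```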
## Data sources used by the underlying runs
| Source | What it provided | Status in this bundle |
|---|---|---|
| Yahoo Finance / yfinance | US ticker prices and corporate actions | Trade-level prices appear in `trades.csv`; not redistributed in bulk |
| Tushare | China A-share prices, fundamentals, news | Same — trade-level only, not bulk |
| Financial Modeling Prep (FMP) | US fundamentals & news | Aggregated into per-run summaries only |
| AKShare | CN macro indicators & policy news | Same |
| Tavily | Search-API-based news retrieval | Sentiment scores aggregated into ISQ signals only |
| DeepSeek-V3.2 | Default reasoning backend | LLM outputs in trades.csv `justification` are short bookkeeping templates only |
| GPT-5.4 (Macaron API gateway) | Backend-robustness reasoning backend | Same |
See `LICENSE.md` for redistribution terms.
## Citation
```bibtex
@inproceedings{quantarena2026,
title = {QuantArena: Beat the Market or Be the Market? A Live-Market Evaluation of Investment Paradigms},
author = {Anonymous Author(s)},
booktitle = {Advances in Neural Information Processing Systems Datasets and Benchmarks Track},
year = {2026}
}
```
## Limitations
- The 6M case studies are single-seed runs; Exp 2 quantifies seed sensitivity for the US window, but CN does not have a paired re-run in this bundle.
- Trading frictions (transaction costs, slippage, market impact) are not modeled.
- Universe is restricted to 20 liquid tickers per market; results may not transfer to micro-cap or illiquid names.
- The GPT-5.4 backend identifier follows the API gateway label exposed at run time; it is not a vendor build hash.
## Contact
Submitted via OpenReview to NeurIPS 2026 Evaluations & Datasets Track. Authors anonymized for double-blind review.
"""
LICENSE = """\
# License and Redistribution Notes
## This bundle's contributed content
The QuantArena artifact bundle, as a curated collection of run-level records
(metrics, trade logs, equity curves), provenance documentation, and the
build scripts under `tools/`, is released under
**Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)**.
You may use, share, and adapt the contributed content for **academic
research, education, and non-commercial purposes**, provided you cite the
paper and preserve attribution to the authors.
## Underlying market data (see `README.md` "Data sources" table)
The bundle's `trades.csv` and the per-mandate selection files contain
trade-level prices and ticker actions derived from third-party data providers.
We redistribute only the **derived, sparse, decision-level
records** required to verify the paper's quantitative claims.
We do not redistribute bulk market data from any provider. Specifically:
- **Yahoo Finance / yfinance**: bulk redistribution is forbidden by Yahoo
TOS. We include only the per-trade prices observed on each decision date.
Users who need full price history should obtain it directly from Yahoo or
a licensed redistributor.
- **Tushare**: redistribution of raw data is restricted by Tushare's
user agreement. Users wishing to reproduce or extend the runs must
register and obtain their own Tushare API token. The trade-level prices
included here are derived facts about portfolio decisions rather than
bulk feed redistribution.
- **Financial Modeling Prep (FMP)**, **AKShare**, **Tavily**: aggregated
signals and per-run summaries only; no raw feeds redistributed.
If you redistribute or build derivative products on top of this bundle,
**you remain responsible for complying with each upstream provider's terms**.
## LLM outputs
Short bookkeeping strings in `trades.csv` `justification` columns
(e.g., "Target allocation: 12.0% (current: 0 shares)") are template
strings emitted by the trading workflow rather than free-form LLM
generation. We treat them as part of the bundle's contributed content
under the CC BY-NC 4.0 license.
The DeepSeek-V3.2 and GPT-5.4 backends used during runs are accessed
under the providers' commercial API terms; the bundle does not
redistribute model weights or proprietary system prompts.
## Disclaimers
- **Not financial advice.** This bundle is released for research and
evaluation purposes only. It is not a recommendation to trade any
security. Past simulated performance does not guarantee future results.
- **Reproduction depends on third-party APIs.** Re-running the bundled
experiments requires market data access (Yahoo, Tushare, FMP, AKShare,
Tavily) and an LLM backend (DeepSeek-V3.2 or GPT-5.4); these are not
shipped with the bundle.
- **Anonymized release.** Per NeurIPS double-blind review policy,
authorship and institutional affiliation are not disclosed in this
release.
"""
CHANGELOG = """\
# Changelog
## v1.0 — 2026-04-28
Initial public release prepared for NeurIPS 2026 Evaluations & Datasets
Track double-blind submission.
- 28 backtest runs across 4 paper-cited experiments (Exp 1 case study US/CN,
Exp 2 reproducibility R2, Exp 3 mechanism ablation, Exp 4 GPT-5.4 backend
robustness). Exp 5 efficiency ablation is documented but its run
artifacts are not redistributed.
- Two flattened derived tables (`derived/all_metrics.csv`,
`derived/all_trades.csv`) for easy querying.
- Reproducibility manifest mirrored from the paper's `latex/audit/`.
- Croissant 1.1 metadata with core and RAI fields.
"""
def write_text(name: str, content: str) -> None:
    (RELEASE / name).write_text(content, encoding="utf-8")
print(f" wrote {name} ({len(content)} chars)")
def copy_build_script() -> None:
tools = RELEASE / "tools"
tools.mkdir(exist_ok=True)
for name in ["build_release_bundle.py", "write_release_metadata.py"]:
src = SCRIPT_DIR / name
dst = tools / name
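        # Skip the self-copy case: when this script already runs from
        # RELEASE/tools, src and dst resolve to the same file.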
if src.exists() and src.resolve() != dst.resolve():
shutil.copy2(src, dst)
print(" verified build scripts in tools/")
def write_croissant() -> None:
"""Croissant 1.1 metadata with core + RAI fields."""
def content_url(path: str) -> str:
return f"{HF_RAW_BASE_URL}/{path}"
    def _file_digest(path: str, algo) -> str:
        # Stream the file in 1 MiB chunks so large artifacts never sit in memory.
        digest = algo()
        with (RELEASE / path).open("rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def md5(path: str) -> str:
        return _file_digest(path, hashlib.md5)

    def sha256(path: str) -> str:
        return _file_digest(path, hashlib.sha256)
croissant = {
"@context": {
"@language": "en",
"@vocab": "https://schema.org/",
"citeAs": "cr:citeAs",
"column": "cr:column",
"conformsTo": "dct:conformsTo",
"cr": "http://mlcommons.org/croissant/",
"rai": "http://mlcommons.org/croissant/RAI/",
"data": "cr:data",
"dataType": {"@id": "cr:dataType", "@type": "@vocab"},
"dct": "http://purl.org/dc/terms/",
"equivalentProperty": "cr:equivalentProperty",
"examples": {"@id": "cr:examples", "@type": "@json"},
"extract": "cr:extract",
"field": "cr:field",
"fileProperty": "cr:fileProperty",
"fileObject": "cr:fileObject",
"fileSet": "cr:fileSet",
"format": "cr:format",
"includes": "cr:includes",
"isLiveDataset": "cr:isLiveDataset",
"jsonPath": "cr:jsonPath",
"key": "cr:key",
"md5": "cr:md5",
"parentField": "cr:parentField",
"path": "cr:path",
"prov": "http://www.w3.org/ns/prov#",
"recordSet": "cr:recordSet",
"references": "cr:references",
"regex": "cr:regex",
"repeated": "cr:repeated",
"replace": "cr:replace",
"sc": "https://schema.org/",
"samplingRate": "cr:samplingRate",
"separator": "cr:separator",
"sha256": "sc:sha256",
"source": "cr:source",
"subField": "cr:subField",
"transform": "cr:transform"
},
"@type": "sc:Dataset",
"conformsTo": "http://mlcommons.org/croissant/1.1",
"name": "QuantArena Artifact Bundle",
"description": (
"Reproducibility artifacts for QuantArena, a controlled live-market "
"evaluation protocol that fixes the LLM backend, market data stream, "
"analyst workflow, and execution harness while varying only the "
"investment doctrine (policy module) across five operational paradigms: "
"Fundamental Value, Macro Tactical, Behavioral Momentum, "
"Low-Volatility (Smart Beta), and a rule-based Equal-Weight baseline. "
"The bundle contains 28 individual backtest runs across four paper-cited "
"experiments spanning US and CN A-share equities, plus comparison "
"aggregates, universe definitions, and provenance documentation."
),
"license": "https://creativecommons.org/licenses/by-nc/4.0/",
"url": "https://huggingface.co/datasets/NIPS26Repo/quantarena-artifacts",
"version": "1.0",
"datePublished": "2026-04-28",
"citeAs": (
"@inproceedings{quantarena2026, title={QuantArena: Beat the Market or "
"Be the Market? A Live-Market Evaluation of Investment Paradigms}, "
"author={Anonymous Author(s)}, booktitle={NeurIPS 2026 Evaluations and "
"Datasets Track}, year={2026}}"
),
"keywords": [
"finance", "LLM trading", "evaluation benchmark", "investment doctrine",
"controlled intervention", "live-market", "fund-style policy",
"double-blind", "NeurIPS Datasets and Benchmarks"
],
"creator": {"@type": "Organization", "name": "Anonymous (under double-blind review)"},
"publisher": {"@type": "Organization", "name": "Anonymous (under double-blind review)"},
"isLiveDataset": False,
# ----- Core: file objects + record sets -----
"distribution": [
{
"@type": "cr:FileObject",
"@id": "manifest-json",
"name": "manifest.json",
"description": "Top-level run inventory (machine-readable)",
"encodingFormat": "application/json",
"contentUrl": content_url("manifest.json"),
"md5": md5("manifest.json"),
"sha256": sha256("manifest.json")
},
{
"@type": "cr:FileObject",
"@id": "all-metrics-csv",
"name": "all_metrics.csv",
"description": "Long-format flat table of summary metrics across all 28 runs",
"encodingFormat": "text/csv",
"contentUrl": content_url("derived/all_metrics.csv"),
"md5": md5("derived/all_metrics.csv"),
"sha256": sha256("derived/all_metrics.csv")
},
{
"@type": "cr:FileObject",
"@id": "all-trades-csv",
"name": "all_trades.csv",
"description": "Concatenated per-trade log across all 28 runs",
"encodingFormat": "text/csv",
"contentUrl": content_url("derived/all_trades.csv"),
"md5": md5("derived/all_trades.csv"),
"sha256": sha256("derived/all_trades.csv")
},
{
"@type": "cr:FileObject",
"@id": "universe-csv",
"name": "sector_style_universe.csv",
"description": "5x4 sector/style universe (20 US + 20 CN tickers)",
"encodingFormat": "text/csv",
"contentUrl": content_url("universe/sector_style_universe.csv"),
"md5": md5("universe/sector_style_universe.csv"),
"sha256": sha256("universe/sector_style_universe.csv")
},
{
"@type": "cr:FileSet",
"@id": "run-metrics-files",
"name": "Per-run metrics files",
"description": "Summary metrics JSON per individual backtest run",
"encodingFormat": "application/json",
"includes": "runs/*/*/metrics.json"
},
{
"@type": "cr:FileSet",
"@id": "run-trades-files",
"name": "Per-run trade logs",
"description": "Per-trade CSV per individual backtest run",
"encodingFormat": "text/csv",
"includes": "runs/*/*/trades.csv"
},
{
"@type": "cr:FileSet",
"@id": "run-equity-files",
"name": "Per-run equity curves",
"description": "Daily portfolio state CSV per individual backtest run",
"encodingFormat": "text/csv",
"includes": "runs/*/*/equity_curve.csv"
}
],
"recordSet": [
{
"@type": "cr:RecordSet",
"@id": "metrics-records",
"name": "metrics_records",
"description": "Long-format performance metrics across all 28 runs",
"field": [
{"@type": "cr:Field", "@id": "metrics-records/experiment",
"name": "experiment", "description": "Experiment family (exp1_caseStudy_us_6m, exp2_reproducibility_us_6m_run2, etc.)",
"dataType": "sc:Text",
"source": {"fileObject": {"@id": "all-metrics-csv"}, "extract": {"column": "experiment"}}},
{"@type": "cr:Field", "@id": "metrics-records/market",
"name": "market", "description": "Market identifier (us or cn)",
"dataType": "sc:Text",
"source": {"fileObject": {"@id": "all-metrics-csv"}, "extract": {"column": "market"}}},
{"@type": "cr:Field", "@id": "metrics-records/mandate-dir",
"name": "mandate_dir", "description": "Mandate identifier in the bundle directory layout",
"dataType": "sc:Text",
"source": {"fileObject": {"@id": "all-metrics-csv"}, "extract": {"column": "mandate_dir"}}},
{"@type": "cr:Field", "@id": "metrics-records/total-return",
"name": "total_return", "description": "Total return over the run window in percent",
"dataType": "sc:Float",
"source": {"fileObject": {"@id": "all-metrics-csv"}, "extract": {"column": "total_return"}}},
{"@type": "cr:Field", "@id": "metrics-records/max-drawdown",
"name": "max_drawdown", "description": "Maximum drawdown in percent",
"dataType": "sc:Float",
"source": {"fileObject": {"@id": "all-metrics-csv"}, "extract": {"column": "max_drawdown"}}},
{"@type": "cr:Field", "@id": "metrics-records/sharpe",
"name": "sharpe_ratio", "description": "Annualized Sharpe ratio",
"dataType": "sc:Float",
"source": {"fileObject": {"@id": "all-metrics-csv"}, "extract": {"column": "sharpe_ratio"}}},
{"@type": "cr:Field", "@id": "metrics-records/total-trades",
"name": "total_trades", "description": "Number of executed trades over the run window",
"dataType": "sc:Integer",
"source": {"fileObject": {"@id": "all-metrics-csv"}, "extract": {"column": "total_trades"}}}
]
},
{
"@type": "cr:RecordSet",
"@id": "trades-records",
"name": "trades_records",
"description": "Per-trade records concatenated across all 28 runs",
"field": [
{"@type": "cr:Field", "@id": "trades-records/experiment",
"name": "experiment", "dataType": "sc:Text",
"source": {"fileObject": {"@id": "all-trades-csv"}, "extract": {"column": "experiment"}}},
{"@type": "cr:Field", "@id": "trades-records/mandate-dir",
"name": "mandate_dir", "dataType": "sc:Text",
"source": {"fileObject": {"@id": "all-trades-csv"}, "extract": {"column": "mandate_dir"}}},
{"@type": "cr:Field", "@id": "trades-records/date",
"name": "date", "description": "Trading date (YYYY-MM-DD)",
"dataType": "sc:Date",
"source": {"fileObject": {"@id": "all-trades-csv"}, "extract": {"column": "date"}}},
{"@type": "cr:Field", "@id": "trades-records/ticker",
"name": "ticker", "dataType": "sc:Text",
"source": {"fileObject": {"@id": "all-trades-csv"}, "extract": {"column": "ticker"}}},
{"@type": "cr:Field", "@id": "trades-records/action",
"name": "action", "description": "BUY or SELL",
"dataType": "sc:Text",
"source": {"fileObject": {"@id": "all-trades-csv"}, "extract": {"column": "action"}}},
{"@type": "cr:Field", "@id": "trades-records/shares",
"name": "shares", "dataType": "sc:Integer",
"source": {"fileObject": {"@id": "all-trades-csv"}, "extract": {"column": "shares"}}},
{"@type": "cr:Field", "@id": "trades-records/price",
"name": "price", "description": "Trade price on the decision date (USD for US runs, CNY for CN runs)",
"dataType": "sc:Float",
"source": {"fileObject": {"@id": "all-trades-csv"}, "extract": {"column": "price"}}},
{"@type": "cr:Field", "@id": "trades-records/value",
"name": "value", "description": "Trade notional (shares * price)",
"dataType": "sc:Float",
"source": {"fileObject": {"@id": "all-trades-csv"}, "extract": {"column": "value"}}}
]
}
],
# ----- Responsible AI fields -----
"rai:dataCollection": (
"Backtest runs were executed against a closed evaluation harness using "
"live-style market data from licensed third-party APIs (Yahoo Finance / "
"yfinance, Tushare, FMP, AKShare, Tavily). Each run records the LLM's "
"decisions and the resulting portfolio state without modification. No "
"human subjects are involved at any stage."
),
"rai:dataCollectionType": "Programmatic",
"rai:dataCollectionRawData": (
"Raw market data from third-party providers; redistributed only as the "
"derived per-decision price and decision records embedded in trades.csv. "
"See LICENSE.md for per-source redistribution terms."
),
"rai:dataCollectionTimeframe": "2025-09-01 / 2026-02-28 (run windows); 2026-04-09 to 2026-04-25 (run execution dates)",
"rai:dataImputationProtocol": (
"No data imputation. Trading days are defined by the upstream price "
"provider's calendar; non-trading days are skipped naturally."
),
"rai:dataPreprocessingProtocol": (
"Raw API responses are normalized into a fixed schema (OHLCV daily bars, "
"fundamentals snapshots, news lists) inside the trading harness; only "
"the trades emitted and per-day portfolio state are persisted to disk. "
"See `tools/build_release_bundle.py` for how the public artifact is "
"assembled from raw run directories."
),
"rai:dataAnnotationProtocol": "Not applicable — no human annotation.",
"rai:dataUseCases": [
"Reproducing or extending the QuantArena evaluation protocol.",
"Auditing how investment doctrine alone (with LLM backend held fixed) shapes portfolio behavior.",
"Studying behavioral fidelity of LLM trading agents under matched execution.",
"Methodological research on controlled-intervention evaluation in finance."
],
"rai:dataBiases": (
"Universe is restricted to 20 liquid US tickers and 20 liquid CN A-share "
"tickers (5 sectors x 4 size-style cells per market); micro-cap, "
"illiquid, or non-equity assets are excluded by design. Performance "
"patterns observed here may not transfer to broader universes. "
"Single-seed case-study runs (with one paired re-run for the US window) "
"imply LLM-sampling stochasticity is bounded but not fully characterized; "
"see the Reproducibility table in the paper."
),
"rai:dataLimitations": (
"Trading frictions (transaction costs, market impact, short-selling, "
"borrowing constraints) are not modeled in the primary runs; the paper "
"reports a deterministic post-hoc transaction-cost sensitivity sweep "
"over the released trade logs. The 6M live-market window is much "
"shorter than typical institutional evaluation horizons. The GPT-5.4 "
"backend identifier reflects the API gateway label exposed at run time "
"and not a vendor build hash; exact backend reproducibility depends on "
"the gateway's routing decisions at run time. The bundle redistributes "
"only the derived decision-level data, not the raw third-party feeds."
),
"rai:dataReleaseMaintenancePlan": (
"v1.0 corresponds to the NeurIPS 2026 E&D Track submission. Future "
"versions will add longer windows, additional asset classes, and "
"trading-friction modeling. Bundle versioning is tracked in CHANGELOG.md."
),
"rai:personalSensitiveInformation": "Not applicable — bundle contains no personally identifiable or sensitive information.",
"rai:dataSocialImpact": (
"The bundle supports academic study of LLM-driven trading agents under "
"controlled conditions. Deploying such agents to real markets without "
"additional safeguards (frictions, risk controls, auditing) carries "
"substantial risk, including financial loss and potential amplification "
"of herd behavior. The bundle is explicitly scoped for research and "
"evaluation; it is not financial advice and should not be interpreted "
"as evidence of safe or profitable real-world deployment."
),
"rai:hasSyntheticData": False,
"prov:wasDerivedFrom": [
"Yahoo Finance / yfinance US equity price and corporate-action data",
"Tushare China A-share price, fundamental, and news data",
"Financial Modeling Prep US fundamentals and news data",
"AKShare China macro and policy indicators",
"Tavily search-API news retrieval metadata",
"LLM backend decision outputs from the controlled QuantArena execution harness"
],
"prov:wasGeneratedBy": [
{
"@type": "prov:Activity",
"name": "Controlled backtest execution",
"description": (
"The QuantArena harness executed daily decisions from 2025-09-01 "
"to 2026-02-28 for matched US and CN 20-ticker universes, fixing "
"backend, analyst workflow, initial capital, portfolio accounting, "
"and execution rules while varying only the mandate module."
)
},
{
"@type": "prov:Activity",
"name": "Artifact derivation and anonymized release packaging",
"description": (
"Raw run directories were filtered into decision-level artifacts: "
"per-run metrics.json, trades.csv, equity_curve.csv, comparison "
"aggregates, universe definitions, audit manifests, and flattened "
"all_metrics/all_trades tables. Raw third-party OHLCV/news bodies, "
"private credentials, author-identifying repository metadata, and "
"live LLM response logs are excluded."
)
}
]
}
    out = RELEASE / "croissant.json"
    payload = json.dumps(croissant, indent=2, ensure_ascii=False)
    out.write_text(payload, encoding="utf-8")
    print(f" wrote croissant.json ({len(payload)} chars)")
def main() -> None:
print(f"Writing metadata into {RELEASE}")
write_text("README.md", README)
write_text("LICENSE.md", LICENSE)
write_text("CHANGELOG.md", CHANGELOG)
write_croissant()
copy_build_script()
print()
# Final size summary
total = sum(os.path.getsize(os.path.join(r, f)) for r, _, fs in os.walk(RELEASE) for f in fs)
nfiles = sum(len(fs) for _, _, fs in os.walk(RELEASE))
print(f"Final bundle: {nfiles} files, {total/1024/1024:.2f} MB")
if __name__ == "__main__":
main()