# AlphaHack Trained Models

Three trained scikit-learn artifacts for the AlphaHack quantitative hackathon-strategy engine. Each lives in its own subdirectory with a dedicated model card.

| Subdirectory | Artifact | Purpose | Size |
|---|---|---|---|
| `model1-regime-classifier/` | `regime_classifier.pkl` | Event → winner-archetype classifier (Model 1) | 2.2 MB |
| `model2-winner-predictor/` | `winner_predictor_final.pkl` | Project → ex-ante prize-probability classifier (Model 2) | 493 KB |
| `text-embedder/` | `text_embedder.pkl` | TF-IDF + TruncatedSVD text embedder (auxiliary) | 4.2 MB |

Companion artifacts:

## Honest framing

These models were validated retrospectively (Model 2 sponsor-prize AUC 0.908, 95% CI [0.859, 0.947]) and tested in one prospective trial in April 2026, which did not produce a prize. Treat them as research artifacts, not a guaranteed winning strategy. Each model card documents the known failure modes specific to that artifact.

## Quickstart

```bash
pip install hackalpha
python -c "
import joblib
m2 = joblib.load('model2-winner-predictor/winner_predictor_final.pkl')
print(m2['features'])  # the 23 features it expects
"
```
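The quickstart suggests the pickle is a dict carrying the feature list alongside the fitted estimator. As a minimal sketch of that artifact convention, here is a self-contained toy stand-in (only the `features` key is confirmed above; the `model` key, the estimator type, and all data are illustrative assumptions):

```python
import os
import tempfile

import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-in for winner_predictor_final.pkl: a dict bundling a fitted
# estimator with the 23 feature names it expects. 'model' is an assumed
# key; the real artifact may be structured differently.
feature_names = [f"f{i}" for i in range(23)]
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 23))
y = rng.integers(0, 2, size=100)
clf = LogisticRegression(max_iter=1000).fit(X, y)

path = os.path.join(tempfile.mkdtemp(), "toy_predictor.pkl")
joblib.dump({"model": clf, "features": feature_names}, path)

# Loading follows the same pattern as the quickstart above.
art = joblib.load(path)
proba = art["model"].predict_proba(X[:1])[:, 1]  # prize probability for one project
print(len(art["features"]), proba.shape)
```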

Loading `text-embedder/text_embedder.pkl` requires the `hackalpha` package to be importable at load time: the pickle stores a `hackalpha.research.text_embeddings.TextEmbeddingFeatures` instance, and unpickling resolves that class by its module path. The other two models are pure scikit-learn and load without any project-specific dependencies.
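For intuition about what the embedder computes, here is a minimal pure-scikit-learn sketch of a TF-IDF + TruncatedSVD pipeline (the corpus, vocabulary, and output dimensionality are illustrative; the real artifact's fitted parameters differ):

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Toy corpus standing in for hackathon project descriptions.
corpus = [
    "defi yield aggregator with on-chain analytics",
    "zero-knowledge proof identity wallet",
    "cross-chain bridge liquidity dashboard",
]

# TF-IDF turns text into a sparse term matrix; TruncatedSVD projects
# it into a small dense embedding (2 dims here, purely for illustration).
embedder = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2, random_state=0))
X = embedder.fit_transform(corpus)
print(X.shape)  # (3, 2): one 2-dim embedding per document
```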

## License

CC BY 4.0; see LICENSE.
