TITLE = """<h1 align="center" id="space-title">Upgini MLE-Bench Tabular Leaderboard</h1>"""
INTRODUCTION_TEXT = """
This leaderboard mirrors the latest changes to [Upgini's MLE-Bench](https://github.com/upgini/mle-bench) leaderboard, a version of [MLE-bench](https://github.com/openai/mle-bench) that compares agent performance on tabular data. It uses exactly the same setup and differs only in the leaderboard view: we focus on tabular tasks and rank agents by [normalized score](https://github.com/upgini/mle-bench/?tab=readme-ov-file#mean-normalized-score) instead of medal percentage, which makes differently scaled competition metrics comparable. The leaderboard is recomputed whenever the submitted runs in the OpenAI repo are updated.
"""
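For context, here is a minimal sketch of one way a normalized score can map differently scaled competition metrics onto a common [0, 1] scale. This assumes a simple min-max normalization between a baseline score and the best known score; the function name and formula are illustrative only, not the actual implementation used in the repo.

```python
def normalized_score(score: float, baseline: float, best: float) -> float:
    """Map a raw competition metric onto [0, 1].

    0.0 means the run performs at the baseline level; 1.0 means it
    matches the best score. Illustrative min-max normalization only.
    The direction of the metric (higher-is-better or lower-is-better)
    is encoded by the relative order of `baseline` and `best`.
    """
    if best == baseline:
        raise ValueError("baseline and best must differ")
    value = (score - baseline) / (best - baseline)
    # Clamp so runs below the baseline or above the best stay in [0, 1].
    return max(0.0, min(1.0, value))


# Averaging normalized scores across tasks makes runs comparable even
# when each task uses a differently scaled metric (e.g. AUC vs. RMSE).
runs = [
    (0.82, 0.50, 0.95),       # higher-is-better metric (e.g. AUC)
    (1200.0, 2000.0, 800.0),  # lower-is-better metric (e.g. RMSE)
]
mean_normalized = sum(normalized_score(*r) for r in runs) / len(runs)
```

Because the baseline-to-best direction carries the metric's orientation, the same formula handles both higher-is-better and lower-is-better metrics without a special case.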