---
title: ModelLens
emoji: 🔭
colorFrom: indigo
colorTo: pink
sdk: gradio
sdk_version: 4.44.0
python_version: '3.11'
app_file: app.py
pinned: false
license: mit
short_description: Finding the Best Model for Your Task from Myriads of Models
---
# ModelLens — Finding the Best Model for Your Task from Myriads of Models
Describe your dataset → pick a task and metric → get a ranked list of HuggingFace
models likely to perform well on it. Backed by the `MLPMetricFull` checkpoint
trained on the cleaned + expanded `unified_augmented_v2` corpus, with a candidate
pool of ~47k HuggingFace models. The full model uses learned model-id /
model-description / dataset-id embeddings on top of the dataset-description and
task/metric signals.
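In code, that architecture plausibly looks like the sketch below. This is a
minimal PyTorch sketch with illustrative layer names and dimensions, not the
actual `inference_lib.py` implementation; the learned dataset-id embedding
(which only applies to training-set datasets) is omitted.

```python
import torch
import torch.nn as nn

class MLPMetricSketch(nn.Module):
    """Illustrative only: concatenate learned id embeddings with projected
    text embeddings and score every candidate with a small MLP."""

    def __init__(self, n_models, n_tasks, n_metrics, text_dim=1536, d=256):
        super().__init__()
        self.model_emb = nn.Embedding(n_models, d)    # learned model-id embedding
        self.task_emb = nn.Embedding(n_tasks, d)      # learned task embedding
        self.metric_emb = nn.Embedding(n_metrics, d)  # learned metric embedding
        # Projections for the 1536-dim text-embedding-3-small vectors.
        self.model_desc_proj = nn.Linear(text_dim, d)
        self.dataset_desc_proj = nn.Linear(text_dim, d)
        self.mlp = nn.Sequential(nn.Linear(5 * d, d), nn.ReLU(), nn.Linear(d, 1))

    def forward(self, model_ids, model_desc_vecs, dataset_vec, task_id, metric_id):
        # model_ids: (n,), model_desc_vecs: (n, 1536), dataset_vec: (1536,),
        # task_id / metric_id: 0-dim long tensors shared by all candidates.
        n = model_ids.shape[0]
        x = torch.cat([
            self.model_emb(model_ids),                         # (n, d)
            self.model_desc_proj(model_desc_vecs),             # (n, d)
            self.dataset_desc_proj(dataset_vec).expand(n, -1), # (n, d)
            self.task_emb(task_id).expand(n, -1),              # (n, d)
            self.metric_emb(metric_id).expand(n, -1),          # (n, d)
        ], dim=-1)
        return self.mlp(x).squeeze(-1)  # (n,) one score per candidate
```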
## How it works
1. Your dataset description is embedded with OpenAI `text-embedding-3-small`
(1536-dim, the same encoder used during training).
2. The MLPMetric scores every candidate model conditioned on the embedding +
chosen task + chosen metric.
3. We return the top-k, optionally filtered by parameter count, "official
pretrained only", or "HuggingFace-hosted only".
## Bring your own OpenAI key
This Space does **not** ship with a baked-in OpenAI key. Paste your own
`sk-...` key into the "OpenAI API key" field — it is sent directly to OpenAI
for that single request and is **not stored, logged, or reused** by this Space.
A query costs roughly **$0.000001** on your account: at `text-embedding-3-small`'s
list price of about $0.02 per 1M tokens, a ~50-token dataset description works
out to 50 × $0.02 / 1,000,000 ≈ $0.000001, i.e. about a millionth of a dollar.
If you don't have a key yet: https://platform.openai.com/api-keys
## Files in this Space
```
app.py Gradio entry point
recommend.py Recommender (loads checkpoint + model pool, embeds dataset desc)
inference_lib.py Self-contained MLPMetric implementation (no module/ tree needed)
build_model_pool.py Offline helper to (re)build assets/model_pool.npz
requirements.txt Pinned deps
assets/
model_pool.npz Pre-computed candidate pool (47k models, size+family ids, popularity, HF urls)
checkpoint/
MLPMetricFull.pt ~709 MB trained weights (slim: parent-class dead weights + train-set dataset_desc_matrix stripped)
args.json Training-time hyperparameters (model dims, num_*)
data/
task2id.json Task vocab
metric2id.json Metric vocab
```
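For orientation, loading these files presumably looks something like the sketch
below. The `MLPMetric` symbol name and constructor signature are assumptions;
`strict=False` accounts for the keys stripped from the slim checkpoint.

```python
import json
import torch
from inference_lib import MLPMetric  # exact symbol name assumed

args = json.load(open("checkpoint/args.json"))  # training-time dims / num_*
model = MLPMetric(**args)                       # signature assumed for illustration
state = torch.load("checkpoint/MLPMetricFull.pt", map_location="cpu")
model.load_state_dict(state, strict=False)      # slim checkpoint omits some keys
model.eval()
```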
The Space looks for the checkpoint at `checkpoint/MLPMetricFull.pt` (or the
legacy `checkpoint/MLPMetric.pt`) and the data JSONs at `data/`. Override with
env vars `MODEL_CKPT`, `MODEL_ARGS`, `DATA_DIR`, `POOL_PATH` if you lay things
out differently.
## Running locally
```bash
cd web
pip install -r requirements.txt
# either set OPENAI_API_KEY in env, or paste it into the UI at runtime
python app.py
# open http://localhost:7860
```
## Rebuilding the model pool
When you bump the candidate set (e.g. add new HF models to `model2id.json` /
`model_profile.json`):
```bash
python web/build_model_pool.py \
--data-dir data/unified_augmented_v2 \
--profile-dir data/unified_augmented \
--args checkpoint/mlp/unified_augmented_v2/FinalModel_v2_full_data_deployment/args.json \
--out web/assets/model_pool.npz \
--min-popularity 0
```
(`--profile-dir` falls back to v1's `model_profile.json` / `model_popularity.json`
for the ~21k model names in v2 that don't yet ship with a v2 profile.)