sairaj2 committed
Commit 9696c2d · 1 Parent(s): 1482893

Fix README.md frontmatter for HF Spaces

Files changed (1)
  1. README.md +5 -6
README.md CHANGED
@@ -4,7 +4,6 @@ emoji: 🧼
  colorFrom: blue
  colorTo: green
  sdk: docker
- app_port: 7860
  pinned: false
  tags:
  - fastapi
@@ -12,6 +11,7 @@ tags:
  - openenv
  - data-cleaning
  - data-validation
+ ---

  ## What this is

@@ -25,11 +25,11 @@ It is suitable for **Hugging Face Spaces (Docker)**. Inference Endpoints are not

  ## Web UI (optional)

- Open `\/web` for a lightweight dashboard to reset/step and view the table preview.
+ Open `/web` for a lightweight dashboard to reset/step and view the table preview.

  ## Real-world task

- Simulates a common data engineering workflow: **cleaning a dirty table** so downstream analytics/ML wont break.
+ Simulates a common data engineering workflow: **cleaning a dirty table** so downstream analytics/ML won't break.
  Agents must iteratively apply safe transformations (imputation, deduplication, normalization, format standardization, range/outlier handling) and then **submit**.

  ## Tasks (3 levels, deterministic grading)
@@ -121,7 +121,7 @@ The baseline script is `inference.py` (repo root). It uses an **OpenAI-compatibl
  Required environment variables (per submission rules):

  - `API_BASE_URL`: OpenAI-compatible endpoint base URL (optional if using OpenAI default)
- - `MODEL_NAME`: model id (e.g. `gpt-4.1-mini`, or your providers model name)
+ - `MODEL_NAME`: model id (e.g. `gpt-4.1-mini`, or your provider's model name)
  - `OPENAI_API_KEY`: API key (preferred)
  - `HF_TOKEN`: API key fallback (used if `OPENAI_API_KEY` is not set)

@@ -154,5 +154,4 @@ docker exec -it $(docker ps -q --filter ancestor=datacleanser | head -n 1) \
  ## Notes

  - The server generates datasets on startup (see `app.py` startup event).
- - For baseline agent runs (outside Spaces), set `OPENAI_API_KEY` and use `inference.py`.
-
+ - For baseline agent runs (outside Spaces), set `OPENAI_API_KEY` and use `inference.py`.
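
The environment-variable contract listed in the diff (preferred key, then fallback) could be sketched in Python roughly as follows; the function name `resolve_credentials` and the returned dict shape are illustrative assumptions, not taken from the actual `inference.py`:

```python
import os


def resolve_credentials():
    """Illustrative sketch of the env-var rules from the README diff.

    `OPENAI_API_KEY` is preferred; `HF_TOKEN` is the fallback key.
    `API_BASE_URL` is optional (a missing value means the provider's
    default endpoint); `MODEL_NAME` selects the model id.
    """
    # Preferred key first, fallback second, per the README's rules.
    api_key = os.environ.get("OPENAI_API_KEY") or os.environ.get("HF_TOKEN")
    return {
        "base_url": os.environ.get("API_BASE_URL"),  # None -> provider default
        "model": os.environ.get("MODEL_NAME"),
        "api_key": api_key,
    }
```
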