shreyas231219 committed on
Commit 6ea2f5b · verified · Parent(s): 1f008d6

Upload folder using huggingface_hub

Files changed (5)
  1. README.md +32 -22
  2. inference.py +2 -2
  3. inference_groq.py +4 -4
  4. openenv.yaml +1 -1
  5. server/app.py +1 -1
README.md CHANGED
@@ -51,17 +51,28 @@ using SQL queries and Python code.
 
 ### Local Development
 
+1. **Clone and Install**
 ```bash
+# Clone the repository
+git clone https://github.com/shreyas231219/Meta-Pytorch-Openenv.git
+cd Meta-Pytorch-Openenv
+
 # Install dependencies
-pip install openenv-core
+pip install -e .
+```
 
-# Run the server (defaults to the 'easy' task)
-cd sql_sandbox
+2. **Run the Server**
+The server will default to port **7860**.
+
+**Bash (Linux/macOS):**
+```bash
 TASK_ID=easy python -m server.app
+```
 
-# Switch tasks via env var
-TASK_ID=medium python -m server.app
-TASK_ID=hard python -m server.app
+**PowerShell (Windows):**
+```powershell
+$env:TASK_ID='easy'
+python -m server.app
 ```
 
 ### Docker (Hugging Face Spaces Ready)
@@ -76,29 +87,28 @@ docker run -p 7860:7860 sql-sandbox:latest
 
 ## Baseline Inference
 
-Runs GPT-4o on all three tasks and prints reproducible scores:
+Runs GPT-4o on all three tasks and prints reproducible scores.
 
-```bash
-export HF_TOKEN=sk-...
-export MODEL_NAME=gpt-4o
+```powershell
+# For local testing in PowerShell (Windows)
+$env:HF_TOKEN='sk-...'
+$env:MODEL_NAME='gpt-4o'
 python inference.py --url http://localhost:7860
 ```
 
 ## Project Structure
 
 ```
-sql_sandbox/
-├── __init__.py         # Package exports
-├── models.py           # Action & Observation Pydantic models
-├── client.py           # EnvClient subclass
-├── openenv.yaml        # OpenEnv manifest
-├── pyproject.toml      # Dependencies
-├── inference.py        # GPT-4o baseline script
+.
+├── Dockerfile          # Root Dockerfile for HF Spaces
+├── openenv.yaml        # OpenEnv manifest (port 7860)
+├── pyproject.toml      # Package dependencies
+├── inference.py        # Baseline inference script
+├── inference_groq.py   # Groq inference script
 ├── README.md           # This file
+├── client.py           # EnvClient helper
+├── models.py           # Action & Observation models
 └── server/
-    ├── __init__.py
-    ├── app.py          # FastAPI application
-    ├── environment.py  # Core environment logic + graders
-    ├── requirements.txt
-    └── Dockerfile
+    ├── app.py          # FastAPI server entry point
+    └── environment.py  # Core environment logic + graders
 ```
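The README changes above switch task selection to the `TASK_ID` environment variable with `easy` as the default. A minimal sketch of that pattern — the task table and function name here are hypothetical, not the repository's actual code:

```python
import os

# Hypothetical task table -- the real server defines its own tasks and graders.
TASKS = {
    "easy": "single-table SELECT queries",
    "medium": "joins and aggregation",
    "hard": "multi-step data cleaning",
}

def select_task(env=None):
    """Mirror `TASK_ID=easy python -m server.app`: fall back to 'easy'."""
    env = os.environ if env is None else env
    task_id = env.get("TASK_ID", "easy")
    if task_id not in TASKS:
        raise ValueError(f"unknown TASK_ID {task_id!r}; expected one of {sorted(TASKS)}")
    return task_id

print(select_task({}))                   # -> easy
print(select_task({"TASK_ID": "hard"}))  # -> hard
```

Validating the variable up front keeps a typo like `TASK_ID=med` from silently running the wrong grader.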
inference.py CHANGED
@@ -167,8 +167,8 @@ def main():
     )
     parser.add_argument(
         "--url",
-        default="http://localhost:8000",
-        help="Base URL of the running environment server (default: http://localhost:8000)",
+        default="http://localhost:7860",
+        help="Base URL of the running environment server (default: http://localhost:7860)",
     )
     parser.add_argument(
         "--max-turns",
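The change above only moves the `--url` default from port 8000 to 7860; an explicit `--url` still wins. A self-contained sketch of how that argparse default behaves (parser description and example URL are illustrative):

```python
import argparse

# Sketch of the changed flag: --url now defaults to port 7860.
parser = argparse.ArgumentParser(description="baseline inference (sketch)")
parser.add_argument(
    "--url",
    default="http://localhost:7860",
    help="Base URL of the running environment server (default: http://localhost:7860)",
)

args = parser.parse_args([])  # no flags: the new default applies
print(args.url)               # -> http://localhost:7860

args = parser.parse_args(["--url", "https://example-space.hf.space"])
print(args.url)               # -> https://example-space.hf.space
```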
inference_groq.py CHANGED
@@ -7,8 +7,8 @@ reproducible scores via the OpenEnv WebSocket client.
 Usage:
     set GROQ_API_KEY=gsk-...      # Windows
     export GROQ_API_KEY=gsk-...   # Linux/macOS
-    python baseline_inference_groq.py                   # local server
-    python baseline_inference_groq.py --url https://... # remote server
+    python inference_groq.py                   # local server
+    python inference_groq.py --url https://... # remote server
 """
 
 import argparse
@@ -163,8 +163,8 @@ def main():
     )
     parser.add_argument(
         "--url",
-        default="http://localhost:8000",
-        help="Base URL of the running environment server (default: http://localhost:8000)",
+        default="http://localhost:7860",
+        help="Base URL of the running environment server (default: http://localhost:7860)",
     )
     parser.add_argument(
         "--max-turns",
openenv.yaml CHANGED
@@ -3,7 +3,7 @@ name: sql_sandbox
 type: space
 runtime: fastapi
 app: server.app:app
-port: 8000
+port: 7860
 
 description: >
   SQL/Data Cleaning Sandbox - a real-world OpenEnv environment where AI agents
server/app.py CHANGED
@@ -69,7 +69,7 @@ def main():
     """
     import uvicorn
 
-    uvicorn.run(app, host="0.0.0.0", port=8000)
+    uvicorn.run(app, host="0.0.0.0", port=7860)
 
 
 if __name__ == "__main__":
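Every 8000-to-7860 change in this commit serves the same goal: Hugging Face Spaces routes traffic to port 7860, so the manifest, the server, and both inference defaults must agree. A stdlib-only sketch of keeping that in one place — the `PORT` override is hypothetical, this commit hard-codes 7860:

```python
import os

def resolve_port(env=None, default=7860):
    """Return the port to serve on; Hugging Face Spaces expects 7860.

    The PORT environment override is a hypothetical extension, not part
    of this commit, which hard-codes 7860 in openenv.yaml and app.py.
    """
    env = os.environ if env is None else env
    return int(env.get("PORT", default))

print(resolve_port({}))                # -> 7860
print(resolve_port({"PORT": "8000"}))  # -> 8000
```

Centralizing the port this way would have turned this five-file change into a one-line edit.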