Reja1 and Claude Opus 4.6 committed
Commit b5a5090 · 1 parent: aee3e08

Migrate from pip to uv for dependency management


Replace requirements.txt with pyproject.toml and uv.lock. Update all
commands in README.md and CLAUDE.md to use uv sync / uv run.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Files changed (5):
  1. CLAUDE.md +7 -8
  2. README.md +7 -10
  3. pyproject.toml +17 -0
  4. requirements.txt +0 -26
  5. uv.lock +0 -0
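For a quick smoke test of the migrated setup, the new commands chain together like this (a sketch assembled from the diffs below, assuming uv is already installed and the repository root is the working directory):

```shell
# Create/refresh the project environment from pyproject.toml + uv.lock
uv sync

# Store the API key the runner expects
echo "OPENROUTER_API_KEY=your_key" > .env

# Run a small slice of the benchmark inside the uv-managed environment
uv run python src/benchmark_runner.py --model "openai/o3" --question_ids "N24T3001,N24T3002"

# Run the test suite
uv run pytest tests/ -v
```

Unlike the old `pip install -r requirements.txt` flow, `uv sync` both creates the virtual environment and installs the locked dependency set in one step, and `uv run` executes commands inside that environment without a manual `source .../activate`.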
CLAUDE.md CHANGED
@@ -10,15 +10,14 @@ A benchmark for evaluating vision-capable LLMs on Indian competitive exam questi
 
 ```bash
 # Setup
-python3 -m venv hf-env && source hf-env/bin/activate
-pip install -r requirements.txt
+uv sync
 echo "OPENROUTER_API_KEY=your_key" > .env
 
 # Must run from project root (paths are resolved relative to cwd)
-python src/benchmark_runner.py --model "google/gemini-2.5-pro-preview-03-25" --exam_name JEE_ADVANCED --exam_year 2025
+uv run python src/benchmark_runner.py --model "google/gemini-2.5-pro-preview-03-25" --exam_name JEE_ADVANCED --exam_year 2025
 
 # Filter by question IDs
-python src/benchmark_runner.py --model "openai/o3" --question_ids "N24T3001,N24T3002"
+uv run python src/benchmark_runner.py --model "openai/o3" --question_ids "N24T3001,N24T3002"
 ```
 
 CLI args: `--model` (required), `--exam_name` (all/NEET/JEE_ADVANCED/JEE_MAIN), `--exam_year` (all/2024/2025), `--question_ids`, `--output_dir`, `--config`, `--resume`.
@@ -27,12 +26,12 @@ CLI args: `--model` (required), `--exam_name` (all/NEET/JEE_ADVANCED/JEE_MAIN),
 
 ```bash
 # Run the full pytest suite (68 tests)
-python -m pytest tests/ -v
+uv run pytest tests/ -v
 
 # Run individual module self-tests
-python src/utils.py # answer parsing logic
-python src/evaluation.py # scoring logic
-python src/llm_interface.py # API calls (requires .env and network)
+uv run python src/utils.py # answer parsing logic
+uv run python src/evaluation.py # scoring logic
+uv run python src/llm_interface.py # API calls (requires .env and network)
 ```
 
 ## Architecture
README.md CHANGED
@@ -112,10 +112,7 @@ This repository contains scripts to run the benchmark evaluation directly:
 
 2. **Install dependencies:**
 ```bash
-# It's recommended to use a virtual environment
-python -m venv venv
-source venv/bin/activate # On Windows: venv\Scripts\activate
-pip install -r requirements.txt
+uv sync
 ```
 
 3. **Configure API Key:**
@@ -142,33 +139,33 @@ This repository contains scripts to run the benchmark evaluation directly:
 
 **Basic usage (run all available models on all questions):**
 ```bash
-python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "google/gemini-2.5-pro-preview-03-25"
+uv run python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "google/gemini-2.5-pro-preview-03-25"
 ```
 
 **Filter by exam and year:**
 ```bash
 # Run only NEET 2024 questions
-python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "openai/o3" --exam_name NEET --exam_year 2024
+uv run python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "openai/o3" --exam_name NEET --exam_year 2024
 
 # Run only JEE Advanced 2025 questions
-python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "anthropic/claude-sonnet-4" --exam_name JEE_ADVANCED --exam_year 2025
+uv run python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "anthropic/claude-sonnet-4" --exam_name JEE_ADVANCED --exam_year 2025
 ```
 
 **Run specific questions:**
 ```bash
 # Run specific question IDs
-python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "google/gemini-2.5-pro-preview-03-25" --question_ids "N24T3001,N24T3002,JA24P1M01"
+uv run python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "google/gemini-2.5-pro-preview-03-25" --question_ids "N24T3001,N24T3002,JA24P1M01"
 ```
 
 **Resume an interrupted run:**
 ```bash
 # Resume from an existing results directory (skips already-completed questions)
-python src/benchmark_runner.py --model "google/gemini-2.5-pro-preview-03-25" --resume results/google_gemini-2.5-pro-preview-03-25_NEET_2024_20250524_141230
+uv run python src/benchmark_runner.py --model "google/gemini-2.5-pro-preview-03-25" --resume results/google_gemini-2.5-pro-preview-03-25_NEET_2024_20250524_141230
 ```
 
 **Custom output directory:**
 ```bash
-python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "openai/gpt-4o" --output_dir my_custom_results
+uv run python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "openai/gpt-4o" --output_dir my_custom_results
 ```
 
 **Available options:**
pyproject.toml ADDED
@@ -0,0 +1,17 @@
+[project]
+name = "jee-neet-benchmark"
+version = "0.1.0"
+description = "Add your description here"
+readme = "README.md"
+requires-python = ">=3.12"
+dependencies = [
+    "datasets==3.5.1",
+    "huggingface-hub==0.30.2",
+    "pillow==11.2.1",
+    "pytest==9.0.2",
+    "python-dotenv==1.1.0",
+    "pyyaml==6.0.2",
+    "requests==2.32.3",
+    "tenacity==9.1.2",
+    "tqdm==4.67.1",
+]
requirements.txt DELETED
@@ -1,26 +0,0 @@
-# Core Hugging Face library for dataset loading
-datasets==3.5.1
-
-# For interacting with the Hugging Face Hub
-huggingface_hub==0.30.2
-
-# Image processing library (required by datasets.Image)
-Pillow==11.2.1
-
-# For making API calls (e.g., to OpenRouter)
-requests==2.32.3
-
-# For handling YAML configuration files
-PyYAML==6.0.2
-
-# For managing environment variables (e.g., API keys)
-python-dotenv==1.1.0
-
-# For handling retries during API calls
-tenacity==9.1.2
-
-# For progress bars in benchmark execution
-tqdm==4.67.1
-
-# For running tests
-pytest==9.0.2
uv.lock ADDED
The diff for this file is too large to render. See raw diff