---
license: apache-2.0
task_categories:
- question-answering
- feature-extraction
language:
- en
tags:
- eda
- analog
- vlm
pretty_name: Analog Layouts Dataset for Vision Language Models (VLMs)
---

# A VLM Framework to Optimize the Analysis of Analog Circuit Layouts

***ICML 2026 Submission - Under Review***

This repository contains the dataset presented in the paper *"A VLM Framework to Optimize the Analysis of Analog Circuit Layouts"*, along with the code for training and evaluating Vision Language Models (VLMs) on analog circuit layout analysis tasks.

The project addresses the challenge of interpreting technical diagrams by benchmarking VLMs on tasks ranging from single device identification to component counting in complex mixed circuits.

## Dataset Overview

The dataset comprises over **30,000 circuits** and **77,000+ question-answer pairs**, organized into a comprehensive benchmark suite.

### Circuit Categories
- **Single Devices** (19,997 images): PMOS, NMOS, Capacitors, Resistors.
- **Base Circuits** (5,894 images): Ahuja OTA, Gate Driver, HPF, LDO, LPF, Miller OTA.
- **Mixed Circuits** (4,140 images): Complex combinations of base circuits.

### Benchmark Tasks
The dataset defines 5 core tasks for evaluation:

| Task | Description | Size |
|------|-------------|------|
| **Task A** | Single device identification | 19,997 samples |
| **Task B** | Base circuit identification | 5,894 samples |
| **Task C** | Component counting (base circuits) | 27,475 samples |
| **Task D** | Component counting (mixed circuits) | 19,848 samples |
| **Task E** | Base circuit identification in mixed circuits | 4,140 samples |

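The headline figures follow directly from the per-category and per-task counts above; a quick arithmetic sanity check (the dictionary names are ours, not part of the dataset):

```python
# Per-category image counts and per-task QA-pair counts, copied from the
# tables above.
circuit_counts = {
    "single_devices": 19_997,
    "base_circuits": 5_894,
    "mixed_circuits": 4_140,
}
task_sizes = {
    "A": 19_997,  # single device identification
    "B": 5_894,   # base circuit identification
    "C": 27_475,  # component counting (base circuits)
    "D": 19_848,  # component counting (mixed circuits)
    "E": 4_140,   # base circuit identification in mixed circuits
}

total_circuits = sum(circuit_counts.values())
total_qa_pairs = sum(task_sizes.values())
print(total_circuits)  # 30031 -> "over 30,000 circuits"
print(total_qa_pairs)  # 77354 -> "77,000+ QA pairs"
```
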
*For detailed statistics, please refer to [DATASET_STATISTICS.md](DATASET_STATISTICS.md).*

## Repository Structure
Once `code.zip` and `dataset.zip` have been unzipped, the structure is as follows:
```
.
├── code/                # Source code for fine-tuning and inference
├── base_circuits/       # Base circuit datasets and templates
├── mixed_circuits/      # Mixed circuit datasets
├── single_devices/      # Single device datasets
├── tasks/               # Task definitions and data splits
└── DATASET_STATISTICS.md
```
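
A minimal extraction helper, sketched in Python's standard library. The archive names come from this README; the destination path and the post-extraction check are our assumptions:

```python
import zipfile
from pathlib import Path

def extract_archives(dest=".", archives=("code.zip", "dataset.zip")):
    """Unzip the release archives into dest, skipping any that are absent.

    Returns the top-level entries from the tree above that are still
    missing, so an incomplete extraction is easy to spot.
    """
    for archive in archives:
        if Path(archive).exists():
            with zipfile.ZipFile(archive) as zf:
                zf.extractall(dest)
    expected = {"code", "base_circuits", "mixed_circuits",
                "single_devices", "tasks", "DATASET_STATISTICS.md"}
    present = {p.name for p in Path(dest).iterdir()}
    return sorted(expected - present)

if __name__ == "__main__":
    print("missing entries:", extract_archives())
```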

## Getting Started

### Prerequisites
All execution scripts are located in the `code/` directory.

```bash
cd code
pip install -r requirements.txt
```

### Fine-Tuning
The repository provides a sequential fine-tuning launcher to handle dataset ablations and multiple tasks.

**Basic Usage:**
```bash
# Dry-run to view planned training jobs
python VLM_finetune/run_ablation_sequential_ft.py --dry_run

# Train Task A (single device identification) with 100% of the dataset
python VLM_finetune/run_ablation_sequential_ft.py --task a1 --perc 100
```

**Advanced Usage:**
Train multiple tasks with specific data percentages:
```bash
python VLM_finetune/run_ablation_sequential_ft.py --tasks a1,b1,c1 --percs 25,50,75,100
```
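
The `--tasks`/`--percs` flags take comma-separated lists. A plausible reading (a sketch of the idea, not the launcher's actual code) is that the launcher expands them into one sequential job per (task, percentage) pair:

```python
from itertools import product

def plan_jobs(tasks: str, percs: str):
    """Expand comma-separated --tasks/--percs values into a job list.

    Illustrative only: the real run_ablation_sequential_ft.py may order
    or name its jobs differently.
    """
    task_list = tasks.split(",")
    perc_list = [int(p) for p in percs.split(",")]
    return list(product(task_list, perc_list))

jobs = plan_jobs("a1,b1,c1", "25,50,75,100")
print(len(jobs))  # 3 tasks x 4 percentages = 12 sequential jobs
print(jobs[0])    # ('a1', 25)
```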

### Evaluation
The inference pipeline supports evaluating both base models and fine-tuned LoRA adapters.

**Batch Evaluation (Ablation Study):**
Evaluate many adapters across different tasks and splits:
```bash
python VLM_inference/run_ft_eval_ablation.py \
    --splits-root /path/to/dataset/ablation_splits \
    --adapter-root /path/to/outputs/finetune_lora \
    --cache-dir /path/to/cache
```

**Result Reorganization:**
Map raw evaluation results from the training tasks (A1/B1/C1) to the final benchmark tasks (A-E) and compute aggregated metrics:
```bash
python reorganize_results.py \
    --input-root /path/to/raw_results \
    --output-root /path/to/final_results
```
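
The aggregation step amounts to grouping predictions by benchmark task and computing accuracy. The record format below is hypothetical (the real raw-result layout is defined by the repository's scripts), but it shows the shape of the computation:

```python
from collections import defaultdict

def aggregate_accuracy(records):
    """Compute per-benchmark-task accuracy from flat result records.

    Each record is assumed (illustratively) to carry a benchmark task
    letter, the model's prediction, and the reference answer.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["task"]] += 1
        correct[r["task"]] += int(r["prediction"] == r["answer"])
    return {t: correct[t] / total[t] for t in total}

demo = [
    {"task": "A", "prediction": "NMOS", "answer": "NMOS"},
    {"task": "A", "prediction": "PMOS", "answer": "NMOS"},
    {"task": "C", "prediction": "4", "answer": "4"},
]
print(aggregate_accuracy(demo))  # {'A': 0.5, 'C': 1.0}
```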

**Single Task Evaluation:**
Run inference on a single task/circuit:
```bash
# Evaluate Task A (training task A1)
python VLM_inference/test_base_models/run_ft_eval_update.py --task a1 --num-samples 200

# Evaluate with a specific adapter
python VLM_inference/test_base_models/run_ft_eval_update.py \
    --task a1 \
    --num-samples 200 \
    --adapter /path/to/adapter/checkpoint
```