Luis Kalckstein committed
Commit 32e8dbc · unverified · 1 Parent(s): 3b26c91

V1 including mock results

Makefile DELETED
@@ -1,13 +0,0 @@
-.PHONY: style format
-
-
-style:
-	python -m black --line-length 119 .
-	python -m isort .
-	ruff check --fix .
-
-
-quality:
-	python -m black --check --line-length 119 .
-	python -m isort --check-only .
-	ruff check .
 
README.md CHANGED
@@ -11,36 +11,103 @@ short_description: Duplicate this leaderboard to initialize your own!
 sdk_version: 5.19.0
 ---
 
-# Start the configuration
-
-Most of the variables to change for a default leaderboard are in `src/env.py` (replace the path for your leaderboard) and `src/about.py` (for tasks).
-
-Results files should have the following format and be stored as json files:
-```json
-{
-    "config": {
-        "model_dtype": "torch.float16", # or torch.bfloat16 or 8bit or 4bit
-        "model_name": "path of the model on the hub: org/model",
-        "model_sha": "revision on the hub",
-    },
-    "results": {
-        "task_name": {
-            "metric_name": score,
-        },
-        "task_name2": {
-            "metric_name": score,
-        }
-    }
-}
-```
-
-Request files are created automatically by this tool.
-
-If you encounter problem on the space, don't hesitate to restart it to remove the create eval-queue, eval-queue-bk, eval-results and eval-results-bk created folder.
-
-# Code logic for more complex edits
-
-You'll find
-- the main table' columns names and properties in `src/display/utils.py`
-- the logic to read all results and request files, then convert them in dataframe lines, in `src/leaderboard/read_evals.py`, and `src/populate.py`
-- the logic to allow or filter submissions in `src/submission/submit.py` and `src/submission/check_validity.py`
+# 🔒 LLM PII Detection Leaderboard
+
+A comprehensive benchmark for evaluating language models' performance in detecting and handling personally identifiable information (PII) across various document types and scenarios.
+
+## Features
+
+- **Beautiful Modern UI**: Elegant dark theme with gradient styling and smooth animations
+- **Comprehensive Metrics**: Precision, Recall, F1 Score, Over-detection Rate, Processing Time, and Cost
+- **Domain-Specific Analysis**: Specialized evaluation across Healthcare, Financial, Government, Legal, and Personal documents
+- **Performance Cards**: Professional model performance cards perfect for presentations and reports
+- **Interactive Filtering**: Filter by model type, document type, and sort by any metric
+- **Real-time Updates**: Dynamic table updates and score visualizations
+
+## 🚀 Quick Start
+
+### Installation
+
+```bash
+git clone https://github.com/your-username/LLM-PII-Detection-Leaderboard.git
+cd LLM-PII-Detection-Leaderboard
+pip install -r requirements.txt
+```
+
+### Run the Application
+
+```bash
+python app.py
+```
+
+The leaderboard will be available at `http://localhost:7860`
+
+## 📊 Key Metrics
+
+- **Overall Accuracy**: Percentage of correctly identified and classified PII entities
+- **Precision**: Of all flagged items, how many were actually PII (avoiding false positives)
+- **Recall**: Of all PII present, how many were successfully detected (avoiding false negatives)
+- **F1 Score**: Harmonic mean balancing precision and recall
+- **Over-detection Rate**: Percentage of non-PII incorrectly flagged (lower is better)
+
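For reference, these quality metrics reduce to entity-level counts. A minimal sketch of the arithmetic, assuming per-run totals of true positives, false positives, false negatives, and non-PII spans (the function name and signature are illustrative, not part of this repo):

```python
def pii_metrics(tp: int, fp: int, fn: int, non_pii_total: int) -> dict:
    """Illustrative entity-level PII metrics (names are hypothetical)."""
    precision = tp / (tp + fp) if tp + fp else 0.0   # flagged items that were PII
    recall = tp / (tp + fn) if tp + fn else 0.0      # PII that was found
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    over_detection = fp / non_pii_total if non_pii_total else 0.0  # lower is better
    return {"Precision": precision, "Recall": recall,
            "F1 Score": f1, "Over-detection Rate": over_detection}

# Example: 90 of 100 PII spans found, 5 spurious flags among 500 non-PII spans.
print(pii_metrics(tp=90, fp=5, fn=10, non_pii_total=500))
# {'Precision': 0.947..., 'Recall': 0.9, 'F1 Score': 0.923..., 'Over-detection Rate': 0.01}
```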
+## 🏗️ Project Structure
+
+```
+LLM-PII-Detection-Leaderboard/
+├── app.py                 # Main application entry point
+├── pii_leaderboard.py     # Core leaderboard functionality
+├── data_loader.py         # Data loading and styling configuration
+├── requirements.txt       # Python dependencies
+└── README.md              # This file
+```
+
+## 🎨 Design Philosophy
+
+This leaderboard combines the slim architecture of agent-leaderboard with the beautiful design elements from DocumentProcessing Leaderboard Nutrient, featuring:
+
+- **Minimal Dependencies**: Only essential packages (Gradio, Pandas, NumPy)
+- **Clean Architecture**: Simple, maintainable code structure
+- **Professional Styling**: Modern dark theme with custom color palette
+- **Interactive Elements**: Score bars, rank badges, and performance cards
+- **Responsive Design**: Works beautifully on all screen sizes
+
+## 🔧 Customization
+
+### Adding New Models
+
+Update the `sample_data` dictionary in `data_loader.py` with your model's performance metrics.
+
+### Changing Colors
+
+Modify the `COLORS` dictionary in `data_loader.py` to customize the color scheme.
+
+### Adding New Metrics
+
+1. Add the metric to your data structure
+2. Update the table generation in `pii_leaderboard.py`
+3. Add appropriate styling and score bars
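
Concretely, steps 1–3 amount to a new data column plus a header cell and a rendered cell in the table generation. A hedged sketch of the cell-rendering half, using an invented "Redaction Quality" metric (the column name and helper function are hypothetical; `get_score_bar` is real and lives in `pii_leaderboard.py`):

```python
# Illustrative only: render a hypothetical "Redaction Quality" metric cell
# the same way generate_html_table renders the built-in metrics.
from pii_leaderboard import get_score_bar

def redaction_quality_cell(row: dict) -> str:
    value = row.get("Redaction Quality", "")  # step 1: column added to the data
    if value == "":
        return '<td class="numeric-cell">-</td>'
    return f'<td class="score-cell">{get_score_bar(value)}</td>'  # step 3: score bar

print(redaction_quality_cell({"Redaction Quality": 0.88})[:60])
```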
+
+## 📈 Performance
+
+The leaderboard currently evaluates 8 leading language models across:
+- **5 Document Types**: Healthcare, Financial, Government, Legal, Personal
+- **6 Key Metrics**: Accuracy, Precision, Recall, F1, Over-detection Rate, Cost & Time
+- **Real-world Scenarios**: Synthetic industry documents with embedded PII
+
+## 🤝 Contributing
+
+1. Fork the repository
+2. Create a feature branch
+3. Make your changes
+4. Test thoroughly
+5. Submit a pull request
+
+## 📄 License
+
+This project is licensed under the MIT License - see the LICENSE file for details.
+
+## 🙏 Acknowledgments
+
+- Inspired by the elegant design of DocumentProcessing Leaderboard Nutrient
+- Built with the slim architecture approach of agent-leaderboard
+- Powered by Gradio for the beautiful web interface
 
app.py CHANGED
@@ -1,204 +1,14 @@
-import gradio as gr
-from gradio_leaderboard import Leaderboard, ColumnFilter, SelectColumns
-import pandas as pd
-from apscheduler.schedulers.background import BackgroundScheduler
-from huggingface_hub import snapshot_download
-
-from src.about import (
-    CITATION_BUTTON_LABEL,
-    CITATION_BUTTON_TEXT,
-    EVALUATION_QUEUE_TEXT,
-    INTRODUCTION_TEXT,
-    LLM_BENCHMARKS_TEXT,
-    TITLE,
-)
-from src.display.css_html_js import custom_css
-from src.display.utils import (
-    BENCHMARK_COLS,
-    COLS,
-    EVAL_COLS,
-    EVAL_TYPES,
-    AutoEvalColumn,
-    ModelType,
-    fields,
-    WeightType,
-    Precision
-)
-from src.envs import API, EVAL_REQUESTS_PATH, EVAL_RESULTS_PATH, QUEUE_REPO, REPO_ID, RESULTS_REPO, TOKEN
-from src.populate import get_evaluation_queue_df, get_leaderboard_df
-from src.submission.submit import add_new_eval
-
-
-def restart_space():
-    API.restart_space(repo_id=REPO_ID)
-
-### Space initialisation
-try:
-    print(EVAL_REQUESTS_PATH)
-    snapshot_download(
-        repo_id=QUEUE_REPO, local_dir=EVAL_REQUESTS_PATH, repo_type="dataset", tqdm_class=None, etag_timeout=30, token=TOKEN
-    )
-except Exception:
-    restart_space()
-try:
-    print(EVAL_RESULTS_PATH)
-    snapshot_download(
-        repo_id=RESULTS_REPO, local_dir=EVAL_RESULTS_PATH, repo_type="dataset", tqdm_class=None, etag_timeout=30, token=TOKEN
-    )
-except Exception:
-    restart_space()
-
-
-LEADERBOARD_DF = get_leaderboard_df(EVAL_RESULTS_PATH, EVAL_REQUESTS_PATH, COLS, BENCHMARK_COLS)
-
-(
-    finished_eval_queue_df,
-    running_eval_queue_df,
-    pending_eval_queue_df,
-) = get_evaluation_queue_df(EVAL_REQUESTS_PATH, EVAL_COLS)
-
-def init_leaderboard(dataframe):
-    if dataframe is None or dataframe.empty:
-        raise ValueError("Leaderboard DataFrame is empty or None.")
-    return Leaderboard(
-        value=dataframe,
-        datatype=[c.type for c in fields(AutoEvalColumn)],
-        select_columns=SelectColumns(
-            default_selection=[c.name for c in fields(AutoEvalColumn) if c.displayed_by_default],
-            cant_deselect=[c.name for c in fields(AutoEvalColumn) if c.never_hidden],
-            label="Select Columns to Display:",
-        ),
-        search_columns=[AutoEvalColumn.model.name, AutoEvalColumn.license.name],
-        hide_columns=[c.name for c in fields(AutoEvalColumn) if c.hidden],
-        filter_columns=[
-            ColumnFilter(AutoEvalColumn.model_type.name, type="checkboxgroup", label="Model types"),
-            ColumnFilter(AutoEvalColumn.precision.name, type="checkboxgroup", label="Precision"),
-            ColumnFilter(
-                AutoEvalColumn.params.name,
-                type="slider",
-                min=0.01,
-                max=150,
-                label="Select the number of parameters (B)",
-            ),
-            ColumnFilter(
-                AutoEvalColumn.still_on_hub.name, type="boolean", label="Deleted/incomplete", default=True
-            ),
-        ],
-        bool_checkboxgroup_label="Hide models",
-        interactive=False,
-    )
-
-
-demo = gr.Blocks(css=custom_css)
-with demo:
-    gr.HTML(TITLE)
-    gr.Markdown(INTRODUCTION_TEXT, elem_classes="markdown-text")
-
-    with gr.Tabs(elem_classes="tab-buttons") as tabs:
-        with gr.TabItem("🏅 LLM Benchmark", elem_id="llm-benchmark-tab-table", id=0):
-            leaderboard = init_leaderboard(LEADERBOARD_DF)
-
-        with gr.TabItem("📝 About", elem_id="llm-benchmark-tab-table", id=2):
-            gr.Markdown(LLM_BENCHMARKS_TEXT, elem_classes="markdown-text")
-
-        with gr.TabItem("🚀 Submit here! ", elem_id="llm-benchmark-tab-table", id=3):
-            with gr.Column():
-                with gr.Row():
-                    gr.Markdown(EVALUATION_QUEUE_TEXT, elem_classes="markdown-text")
-
-                with gr.Column():
-                    with gr.Accordion(
-                        f"✅ Finished Evaluations ({len(finished_eval_queue_df)})",
-                        open=False,
-                    ):
-                        with gr.Row():
-                            finished_eval_table = gr.components.Dataframe(
-                                value=finished_eval_queue_df,
-                                headers=EVAL_COLS,
-                                datatype=EVAL_TYPES,
-                                row_count=5,
-                            )
-                    with gr.Accordion(
-                        f"🔄 Running Evaluation Queue ({len(running_eval_queue_df)})",
-                        open=False,
-                    ):
-                        with gr.Row():
-                            running_eval_table = gr.components.Dataframe(
-                                value=running_eval_queue_df,
-                                headers=EVAL_COLS,
-                                datatype=EVAL_TYPES,
-                                row_count=5,
-                            )
-
-                    with gr.Accordion(
-                        f"⏳ Pending Evaluation Queue ({len(pending_eval_queue_df)})",
-                        open=False,
-                    ):
-                        with gr.Row():
-                            pending_eval_table = gr.components.Dataframe(
-                                value=pending_eval_queue_df,
-                                headers=EVAL_COLS,
-                                datatype=EVAL_TYPES,
-                                row_count=5,
-                            )
-            with gr.Row():
-                gr.Markdown("# ✉️✨ Submit your model here!", elem_classes="markdown-text")
-
-            with gr.Row():
-                with gr.Column():
-                    model_name_textbox = gr.Textbox(label="Model name")
-                    revision_name_textbox = gr.Textbox(label="Revision commit", placeholder="main")
-                    model_type = gr.Dropdown(
-                        choices=[t.to_str(" : ") for t in ModelType if t != ModelType.Unknown],
-                        label="Model type",
-                        multiselect=False,
-                        value=None,
-                        interactive=True,
-                    )
-
-                with gr.Column():
-                    precision = gr.Dropdown(
-                        choices=[i.value.name for i in Precision if i != Precision.Unknown],
-                        label="Precision",
-                        multiselect=False,
-                        value="float16",
-                        interactive=True,
-                    )
-                    weight_type = gr.Dropdown(
-                        choices=[i.value.name for i in WeightType],
-                        label="Weights type",
-                        multiselect=False,
-                        value="Original",
-                        interactive=True,
-                    )
-                    base_model_name_textbox = gr.Textbox(label="Base model (for delta or adapter weights)")
-
-            submit_button = gr.Button("Submit Eval")
-            submission_result = gr.Markdown()
-            submit_button.click(
-                add_new_eval,
-                [
-                    model_name_textbox,
-                    base_model_name_textbox,
-                    revision_name_textbox,
-                    precision,
-                    weight_type,
-                    model_type,
-                ],
-                submission_result,
-            )
-
-    with gr.Row():
-        with gr.Accordion("📙 Citation", open=False):
-            citation_button = gr.Textbox(
-                value=CITATION_BUTTON_TEXT,
-                label=CITATION_BUTTON_LABEL,
-                lines=20,
-                elem_id="citation-button",
-                show_copy_button=True,
-            )
-
-scheduler = BackgroundScheduler()
-scheduler.add_job(restart_space, "interval", seconds=1800)
-scheduler.start()
-demo.queue(default_concurrency_limit=40).launch()
+import warnings
+
+warnings.filterwarnings("ignore")
+
+import gradio as gr
+from pii_leaderboard import create_app
+
+if __name__ == "__main__":
+    demo = create_app()
+    demo.launch(
+        server_name="0.0.0.0",
+        server_port=7860,
+        share=False
+    )
 
data_loader.py ADDED
@@ -0,0 +1,428 @@
+import pandas as pd
+import os
+
+# PII Detection Categories
+PII_CATEGORIES = {
+    "Overall": ["Overall Accuracy"],
+    "Entity Types": ["Names", "Addresses", "Phone Numbers", "SSN", "Medical IDs", "Financial Info"],
+    "Document Types": ["Healthcare", "Financial", "Government", "Legal", "Personal"],
+    "Performance": ["Precision", "Recall", "F1 Score"],
+    "Efficiency": ["Processing Time", "Cost per Document"]
+}
+
+# Model type definitions
+MODEL_TYPES = {
+    "Proprietary": "🔒",
+    "Open Source": "🔓"
+}
+
+def load_data():
+    """Load and preprocess the PII detection evaluation data from CSV file."""
+
+    # Load from CSV file
+    csv_path = "results/pii_detection_results.csv"
+
+    if not os.path.exists(csv_path):
+        raise FileNotFoundError(f"Results file not found: {csv_path}. Please ensure the CSV file exists in the results folder.")
+
+    df = pd.read_csv(csv_path)
+
+    # Clean and prepare data
+    df = df.fillna('')
+
+    # Round numeric columns for better display
+    numeric_cols = [
+        'Overall Accuracy', 'Precision', 'Recall', 'F1 Score', 'Over-redaction Rate',
+        'Processing Time (s)', 'Cost per Document ($)',
+        'Healthcare Accuracy', 'Financial Accuracy', 'Government Accuracy',
+        'Legal Accuracy', 'Personal Accuracy'
+    ]
+
+    for col in numeric_cols:
+        if col in df.columns:
+            df[col] = pd.to_numeric(df[col], errors='coerce').round(3)
+
+    return df
+
47
+ # Color palette matching DocumentProcessing style
48
+ COLORS = {
49
+ # Light mode colors
50
+ "white": "#FFFFFF",
51
+ "disc_pink": "#DE9DCC",
52
+ "code_coral": "#F25E45",
53
+ "data_green": "#6EB579",
54
+ "digital_pollen": "#F0C968",
55
+ "warm_black": "#1A1414",
56
+ "off_white": "#EFEBE7",
57
+ "pixel_mist": "#E2DBD9",
58
+ "soft_grey": "#C2B8AE",
59
+ "warm_grey": "#67594B",
60
+
61
+ # Dark mode colors
62
+ "disc_pink_dm": "#4F2B45",
63
+ "code_coral_dm": "#672D23",
64
+ "data_green_dm": "#2B412F",
65
+ "digital_pollen_dm": "#5B481A",
66
+ }
67
+
68
+ # Header content with PII detection focus
69
+ HEADER_CONTENT = f"""
70
+ <style>
71
+ /* Import fonts */
72
+ @import url('https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700;800&display=swap');
73
+
74
+ /* Root variables with custom color palette */
75
+ :root {{
76
+ --bg-primary: #1A1414;
77
+ --bg-secondary: rgba(239, 235, 231, 0.03);
78
+ --bg-card: rgba(239, 235, 231, 0.02);
79
+ --border-subtle: rgba(239, 235, 231, 0.08);
80
+ --border-default: rgba(239, 235, 231, 0.12);
81
+ --border-strong: rgba(239, 235, 231, 0.2);
82
+ --text-primary: #EFEBE7;
83
+ --text-secondary: #C2B8AE;
84
+ --text-muted: #67594B;
85
+ --accent-primary: #DE9DCC;
86
+ --accent-secondary: #F25E45;
87
+ --accent-tertiary: #6EB579;
88
+ --accent-quaternary: #F0C968;
89
+ --glow-primary: rgba(222, 157, 204, 0.4);
90
+ --glow-secondary: rgba(242, 94, 69, 0.4);
91
+ --glow-tertiary: rgba(110, 181, 121, 0.4);
92
+ }}
93
+
94
+ /* Global font and background */
95
+ .gradio-container {{
96
+ font-family: 'Inter', -apple-system, BlinkMacSystemFont, sans-serif !important;
97
+ background: var(--bg-primary) !important;
98
+ color: var(--text-primary) !important;
99
+ }}
100
+
101
+ /* Headers and text */
102
+ h1, h2, h3, h4 {{
103
+ color: var(--text-primary) !important;
104
+ font-weight: 700 !important;
105
+ font-family: 'Inter', sans-serif !important;
106
+ }}
107
+
108
+ p, span, div {{
109
+ color: var(--text-primary) !important;
110
+ font-family: 'Inter', sans-serif !important;
111
+ }}
112
+
113
+ /* Dark containers */
114
+ .dark-container {{
115
+ background: var(--bg-card);
116
+ border: 1px solid var(--border-subtle);
117
+ border-radius: 20px;
118
+ padding: 28px;
119
+ box-shadow: 0 8px 32px rgba(0, 0, 0, 0.4);
120
+ backdrop-filter: blur(10px);
121
+ position: relative;
122
+ overflow: hidden;
123
+ }}
124
+
125
+ /* Section headers */
126
+ .section-header {{
127
+ display: flex;
128
+ align-items: center;
129
+ gap: 12px;
130
+ margin-bottom: 24px;
131
+ }}
132
+
133
+ .section-icon {{
134
+ filter: drop-shadow(0 0 12px currentColor);
135
+ transition: all 0.3s ease;
136
+ }}
137
+
138
+ /* Enhanced table styling */
139
+ .v2-table-container {{
140
+ background: var(--bg-card);
141
+ border-radius: 16px;
142
+ overflow: hidden;
143
+ border: 1px solid var(--border-subtle);
144
+ box-shadow: 0 8px 32px rgba(0, 0, 0, 0.3);
145
+ backdrop-filter: blur(10px);
146
+ }}
147
+
148
+ .v2-styled-table {{
149
+ width: 100%;
150
+ border-collapse: collapse;
151
+ font-family: 'Inter', sans-serif;
152
+ font-size: 14px;
153
+ }}
154
+
155
+ .v2-styled-table thead {{
156
+ background: linear-gradient(135deg, var(--accent-primary), var(--accent-secondary));
157
+ }}
158
+
159
+ .v2-styled-table th {{
160
+ padding: 16px 12px;
161
+ text-align: left;
162
+ color: white;
163
+ font-weight: 600;
164
+ font-size: 13px;
165
+ text-transform: uppercase;
166
+ letter-spacing: 0.05em;
167
+ border: none;
168
+ position: relative;
169
+ }}
170
+
171
+ .v2-styled-table td {{
172
+ padding: 14px 12px;
173
+ border-bottom: 1px solid var(--border-subtle);
174
+ color: var(--text-primary);
175
+ transition: all 0.2s ease;
176
+ vertical-align: middle;
177
+ }}
178
+
179
+ .v2-styled-table tbody tr {{
180
+ transition: all 0.3s ease;
181
+ background: var(--bg-secondary);
182
+ }}
183
+
184
+ .v2-styled-table tbody tr:nth-child(even) {{
185
+ background: var(--bg-card);
186
+ }}
187
+
188
+ .v2-styled-table tbody tr:hover {{
189
+ background: rgba(222, 157, 204, 0.1);
190
+ box-shadow: 0 0 20px var(--glow-primary);
191
+ transform: scale(1.01);
192
+ }}
193
+
194
+ .model-name {{
195
+ font-weight: 600;
196
+ color: var(--accent-primary);
197
+ transition: all 0.2s ease;
198
+ }}
199
+
200
+ .numeric-cell {{
201
+ text-align: center;
202
+ font-family: 'SF Mono', monospace;
203
+ font-weight: 500;
204
+ }}
205
+
206
+ .score-cell {{
207
+ padding: 8px 12px;
208
+ }}
209
+
210
+ /* Scrollbar styling */
211
+ ::-webkit-scrollbar {{
212
+ width: 8px;
213
+ height: 8px;
214
+ }}
215
+
216
+ ::-webkit-scrollbar-track {{
217
+ background: var(--bg-secondary);
218
+ border-radius: 4px;
219
+ }}
220
+
221
+ ::-webkit-scrollbar-thumb {{
222
+ background: var(--accent-secondary);
223
+ border-radius: 4px;
224
+ }}
225
+
226
+ ::-webkit-scrollbar-thumb:hover {{
227
+ background: var(--accent-primary);
228
+ }}
229
+ </style>
230
+
231
+ <div style="
232
+ background: var(--bg-primary);
233
+ padding: 4rem 2rem;
234
+ border-radius: 16px;
235
+ margin-bottom: 0;
236
+ transition: all 0.3s ease;
237
+ position: relative;
238
+ ">
239
+ <div style="max-width: 72rem; margin: 0 auto;">
240
+ <div style="text-align: center; margin-bottom: 4rem;">
241
+ <h1 style="
242
+ font-size: 4rem;
243
+ font-weight: 800;
244
+ line-height: 1.1;
245
+ background: linear-gradient(45deg, var(--accent-primary), var(--accent-secondary));
246
+ -webkit-background-clip: text;
247
+ -webkit-text-fill-color: transparent;
248
+ margin-bottom: 0.5rem;
249
+ ">
250
+ 🔒 LLM PII Detection Leaderboard
251
+ </h1>
252
+
253
+ <p style="
254
+ color: var(--text-secondary);
255
+ font-size: 1.25rem;
256
+ line-height: 1.75;
257
+ max-width: 800px;
258
+ margin: 0 auto;
259
+ text-align: center;
260
+ ">
261
+ Comprehensive benchmark for language models' performance in detecting and redacting
262
+ personally identifiable information (PII) across various document types and scenarios.
263
+ <span style="
264
+ background: linear-gradient(to right, var(--accent-tertiary), var(--accent-quaternary));
265
+ -webkit-background-clip: text;
266
+ -webkit-text-fill-color: transparent;
267
+ display: block;
268
+ margin-top: 1rem;
269
+ font-size: 1.5rem;
270
+ font-weight: 500;
271
+ ">
272
+ "How well do LLMs protect sensitive information?"
273
+ </span>
274
+ </p>
275
+ </div>
276
+
277
+ <div style="
278
+ display: grid;
279
+ grid-template-columns: repeat(3, 1fr);
280
+ gap: 1.5rem;
281
+ margin-top: 4rem;
282
+ ">
283
+ <div style="
284
+ background: var(--bg-secondary);
285
+ border: 1px solid var(--border-subtle);
286
+ border-radius: 1rem;
287
+ padding: 2rem;
288
+ transition: all 0.3s ease;
289
+ text-align: center;
290
+ ">
291
+ <div style="
292
+ font-size: 4rem;
293
+ font-weight: 800;
294
+ margin-bottom: 1rem;
295
+ background: linear-gradient(45deg, var(--accent-primary), var(--accent-secondary));
296
+ -webkit-background-clip: text;
297
+ -webkit-text-fill-color: transparent;
298
+ ">8</div>
299
+ <div style="color: var(--text-secondary); font-size: 1.5rem; margin-bottom: 1.5rem;">
300
+ Language Models
301
+ </div>
302
+ <div style="font-size: 1.125rem; line-height: 1.75; color: var(--text-primary);">
303
+ Leading proprietary & open source
304
+ </div>
305
+ <div style="color: var(--text-secondary); margin-top: 0.5rem;">
306
+ GPT-4o, Claude, Gemini, LLaMA, Mistral
307
+ </div>
308
+ </div>
309
+
310
+ <div style="
311
+ background: var(--bg-secondary);
312
+ border: 1px solid var(--border-subtle);
313
+ border-radius: 1rem;
314
+ padding: 2rem;
315
+ transition: all 0.3s ease;
316
+ text-align: center;
317
+ ">
318
+ <div style="
319
+ font-size: 4rem;
320
+ font-weight: 800;
321
+ margin-bottom: 1rem;
322
+ background: linear-gradient(45deg, var(--accent-tertiary), var(--accent-quaternary));
323
+ -webkit-background-clip: text;
324
+ -webkit-text-fill-color: transparent;
325
+ ">5</div>
326
+ <div style="color: var(--text-secondary); font-size: 1.5rem; margin-bottom: 1.5rem;">
327
+ Document Types
328
+ </div>
329
+ <div style="font-size: 1.125rem; line-height: 1.75; color: var(--text-primary);">
330
+ Real-world scenarios
331
+ </div>
332
+ <div style="color: var(--text-secondary); margin-top: 0.5rem;">
333
+ Healthcare, Financial, Government, Legal, Personal
334
+ </div>
335
+ </div>
336
+
337
+ <div style="
338
+ background: var(--bg-secondary);
339
+ border: 1px solid var(--border-subtle);
340
+ border-radius: 1rem;
341
+ padding: 2rem;
342
+ transition: all 0.3s ease;
343
+ text-align: center;
344
+ ">
345
+ <div style="
346
+ font-size: 4rem;
347
+ font-weight: 800;
348
+ margin-bottom: 1rem;
349
+ background: linear-gradient(45deg, var(--accent-secondary), var(--accent-primary));
350
+ -webkit-background-clip: text;
351
+ -webkit-text-fill-color: transparent;
352
+ ">94.1%</div>
353
+ <div style="color: var(--text-secondary); font-size: 1.5rem; margin-bottom: 1.5rem;">
354
+ Best Accuracy
355
+ </div>
356
+ <div style="font-size: 1.125rem; line-height: 1.75; color: var(--text-primary);">
357
+ State-of-the-art performance
358
+ </div>
359
+ <div style="color: var(--text-secondary); margin-top: 0.5rem;">
360
+ GPT-4o leading precision & recall
361
+ </div>
362
+ </div>
363
+ </div>
364
+ </div>
365
+ </div>
366
+ """
367
+
368
+ # Methodology section adapted for PII detection
369
+ METHODOLOGY = """
370
+ <div style="max-width: 1200px; margin: 0 auto; padding: 2rem; color: var(--text-secondary); line-height: 1.7; font-size: 1rem;">
371
+ <h1 style="font-size: 2.5rem; font-weight: 700; margin: 3rem 0 1.5rem; color: var(--text-primary);
372
+ background: linear-gradient(to right, var(--accent-primary), var(--accent-secondary));
373
+ -webkit-background-clip: text; -webkit-text-fill-color: transparent;">
374
+ Methodology
375
+ </h1>
376
+
377
+ <p>Our evaluation methodology assesses language models' capabilities in detecting and handling personally identifiable information (PII) across realistic document scenarios. Each model is tested on synthetic documents containing embedded PII entities across 5 document categories.</p>
378
+
379
+ <h2 style="font-size: 1.75rem; font-weight: 600; margin: 2rem 0 1rem; color: var(--text-primary);">
380
+ Evaluation Process
381
+ </h2>
382
+
383
+ <ul style="list-style: none; padding: 0; margin: 1rem 0;">
384
+ <li style="padding-left: 2rem; position: relative; margin: 1rem 0; display: flex; align-items: flex-start;">
385
+ <span style="content: ''; position: absolute; left: 0; top: 0.75rem; width: 8px; height: 8px;
386
+ background: var(--accent-primary); border-radius: 50%;
387
+ box-shadow: 0 0 0 2px rgba(222, 157, 204, 0.2);"></span>
388
+ <span style="color: var(--accent-primary); font-weight: 600;">Model Selection:</span>
389
+ We evaluate leading language models across proprietary and open-source categories
390
+ </li>
391
+ <li style="padding-left: 2rem; position: relative; margin: 1rem 0; display: flex; align-items: flex-start;">
392
+ <span style="content: ''; position: absolute; left: 0; top: 0.75rem; width: 8px; height: 8px;
393
+ background: var(--accent-primary); border-radius: 50%;
394
+ box-shadow: 0 0 0 2px rgba(222, 157, 204, 0.2);"></span>
395
+ <span style="color: var(--accent-primary); font-weight: 600;">PII Detection:</span>
396
+ Each model processes documents with instructions to identify and classify PII entities
397
+ </li>
398
+ <li style="padding-left: 2rem; position: relative; margin: 1rem 0; display: flex; align-items: flex-start;">
399
+ <span style="content: ''; position: absolute; left: 0; top: 0.75rem; width: 8px; height: 8px;
400
+ background: var(--accent-primary); border-radius: 50%;
401
+ box-shadow: 0 0 0 2px rgba(222, 157, 204, 0.2);"></span>
402
+ <span style="color: var(--accent-primary); font-weight: 600;">Performance Metrics:</span>
403
+ Precision, Recall, F1 Score, Over-detection Rate, Processing Time, and Cost
404
+ </li>
405
+ <li style="padding-left: 2rem; position: relative; margin: 1rem 0; display: flex; align-items: flex-start;">
406
+ <span style="content: ''; position: absolute; left: 0; top: 0.75rem; width: 8px; height: 8px;
407
+ background: var(--accent-primary); border-radius: 50%;
408
+ box-shadow: 0 0 0 2px rgba(222, 157, 204, 0.2);"></span>
409
+ <span style="color: var(--accent-primary); font-weight: 600;">Domain Analysis:</span>
410
+ Specialized evaluation across Healthcare, Financial, Government, Legal, and Personal documents
411
+ </li>
412
+ </ul>
413
+
414
+ <h2 style="font-size: 1.75rem; font-weight: 600; margin: 2rem 0 1rem; color: var(--text-primary);">
415
+ Key Metrics Explained
416
+ </h2>
417
+
418
+ <div style="background: var(--bg-secondary); border: 1px solid var(--border-subtle); border-radius: 12px; padding: 1.5rem; margin: 1.5rem 0;">
419
+ <ul style="list-style: none; padding: 0; margin: 0;">
420
+ <li style="margin: 1rem 0;"><span style="color: var(--accent-tertiary); font-weight: 600;">Overall Accuracy:</span> Percentage of correctly identified and classified PII entities</li>
421
+ <li style="margin: 1rem 0;"><span style="color: var(--accent-tertiary); font-weight: 600;">Precision:</span> Of all flagged items, how many were actually PII (avoiding false positives)</li>
422
+ <li style="margin: 1rem 0;"><span style="color: var(--accent-tertiary); font-weight: 600;">Recall:</span> Of all PII present, how many were successfully detected (avoiding false negatives)</li>
423
+ <li style="margin: 1rem 0;"><span style="color: var(--accent-tertiary); font-weight: 600;">F1 Score:</span> Harmonic mean balancing precision and recall</li>
424
+ <li style="margin: 1rem 0;"><span style="color: var(--accent-secondary); font-weight: 600;">Over-detection Rate:</span> Percentage of non-PII incorrectly flagged (lower is better)</li>
425
+ </ul>
426
+ </div>
427
+ </div>
428
+ """
pii_leaderboard.py ADDED
@@ -0,0 +1,976 @@
+import gradio as gr
+import pandas as pd
+import tempfile
+import os
+from data_loader import (
+    load_data,
+    PII_CATEGORIES,
+    HEADER_CONTENT,
+    METHODOLOGY,
+    COLORS,
+    MODEL_TYPES
+)
+
+def get_rank_badge(rank):
+    """Generate HTML for rank badge with appropriate styling"""
+    badge_styles = {
+        1: ("1st", f"linear-gradient(145deg, {COLORS['digital_pollen']}, {COLORS['digital_pollen']})", COLORS['warm_black']),
+        2: ("2nd", f"linear-gradient(145deg, {COLORS['soft_grey']}, {COLORS['warm_grey']})", COLORS['white']),
+        3: ("3rd", f"linear-gradient(145deg, {COLORS['code_coral']}, {COLORS['code_coral_dm']})", COLORS['white']),
+    }
+
+    if rank in badge_styles:
+        label, gradient, text_color = badge_styles[rank]
+        return f"""
+        <div style="
+            display: inline-flex;
+            align-items: center;
+            justify-content: center;
+            min-width: 48px;
+            padding: 4px 12px;
+            background: {gradient};
+            color: {text_color};
+            border-radius: 6px;
+            font-weight: 600;
+            font-size: 0.9em;
+            box-shadow: 0 2px 4px rgba(0, 0, 0, 0.2);
+        ">
+            {label}
+        </div>
+        """
+    return f"""
+    <div style="
+        display: inline-flex;
+        align-items: center;
+        justify-content: center;
+        min-width: 28px;
+        color: var(--text-secondary);
+        font-weight: 500;
+    ">
+        {rank}
+    </div>
+    """
+
+def get_type_badge(model_type):
+    """Generate HTML for model type badge"""
+    bg_color = COLORS['disc_pink'] if model_type == 'Proprietary' else COLORS['data_green']
+    return f"""
+    <div style="
+        display: inline-flex;
+        align-items: center;
+        padding: 4px 8px;
+        background: {bg_color};
+        color: white;
+        border-radius: 4px;
+        font-size: 0.85em;
+        font-weight: 500;
+    ">
+        {model_type}
+    </div>
+    """
+
+def get_score_bar(score, is_inverse=False):
+    """Generate HTML for score bar with gradient styling"""
+    if pd.isna(score) or score == '':
+        score = 0
+    else:
+        score = float(score)
+
+    width = score * 100
+
+    # For over-detection rate, use inverse coloring (lower is better)
+    if is_inverse:
+        gradient = f"linear-gradient(90deg, {COLORS['data_green']}, {COLORS['code_coral']})"
+    else:
+        gradient = f"linear-gradient(90deg, {COLORS['code_coral']}, {COLORS['data_green']})"
+
+    return f"""
+    <div style="display: flex; align-items: center; gap: 12px; width: 100%;">
+        <div style="
+            flex-grow: 1;
+            height: 8px;
+            background: rgba(239, 235, 231, 0.1);
+            border-radius: 4px;
+            overflow: hidden;
+            max-width: 200px;
+        ">
+            <div style="
+                width: {width}%;
+                height: 100%;
+                background: {gradient};
+                border-radius: 4px;
+                transition: width 0.3s ease;
+            "></div>
+        </div>
+        <span style="
+            font-family: 'SF Mono', monospace;
+            font-weight: 600;
+            color: var(--text-primary);
+            min-width: 60px;
+        ">{score:.3f}</span>
+    </div>
+    """
+
+def create_pii_leaderboard():
+    """Create the main PII detection leaderboard interface"""
+
+    def load_leaderboard_data():
+        """Load and prepare the leaderboard data"""
+        return load_data()
+
+    def generate_html_table(filtered_df, document_type, sort_by):
+        """Generate styled HTML table with rank badges and score bars"""
+        table_html = """
+        <div class="v2-table-container">
+            <table class="v2-styled-table">
+                <thead>
+                    <tr>
+                        <th style="width: 80px;">Rank</th>
+                        <th>Model</th>
+                        <th style="width: 120px;">Type</th>
+                        <th>Vendor</th>
+                        <th style="width: 200px;">Overall Accuracy</th>
+                        <th style="width: 150px;">Precision</th>
+                        <th style="width: 150px;">Recall</th>
+                        <th style="width: 150px;">F1 Score</th>
+                        <th style="width: 160px;">Over-detection Rate</th>
+                        <th>Cost/Doc ($)</th>
+                        <th>Time (s)</th>
+                    </tr>
+                </thead>
+                <tbody>
+        """
+
+        # Generate table rows
+        for idx, (_, row) in enumerate(filtered_df.iterrows()):
+            rank = idx + 1
+            table_html += f"""
+                    <tr>
+                        <td>{get_rank_badge(rank)}</td>
+                        <td class="model-name">{row['Model']}</td>
+                        <td>{get_type_badge(row['Model Type'])}</td>
+                        <td>{row['Vendor']}</td>
+            """
+
+            # Get appropriate values based on document type filter
+            if document_type != "All":
+                # For specific document type, show domain-specific scores
+                accuracy_col = f'{document_type} Accuracy'
+                accuracy = row.get(accuracy_col, row.get('Overall Accuracy', ''))
+            else:
+                # For "All", show overall accuracy
+                accuracy = row.get('Overall Accuracy', '')
+
+            precision = row.get('Precision', '')
+            recall = row.get('Recall', '')
+            f1 = row.get('F1 Score', '')
+            over_detection = row.get('Over-redaction Rate', '')
+            cost = row.get('Cost per Document ($)', '')
+            time = row.get('Processing Time (s)', '')
+
+            # Add score bars
+            if accuracy != '':
+                table_html += f'<td class="score-cell">{get_score_bar(accuracy)}</td>'
+            else:
+                table_html += '<td class="numeric-cell">-</td>'
+
+            if precision != '':
+                table_html += f'<td class="score-cell">{get_score_bar(precision)}</td>'
+            else:
+                table_html += '<td class="numeric-cell">-</td>'
+
+            if recall != '':
+                table_html += f'<td class="score-cell">{get_score_bar(recall)}</td>'
+            else:
+                table_html += '<td class="numeric-cell">-</td>'
+
+            if f1 != '':
+                table_html += f'<td class="score-cell">{get_score_bar(f1)}</td>'
+            else:
+                table_html += '<td class="numeric-cell">-</td>'
+
+            if over_detection != '':
+                table_html += f'<td class="score-cell">{get_score_bar(over_detection, is_inverse=True)}</td>'
+            else:
+                table_html += '<td class="numeric-cell">-</td>'
+
+            # Format cost and time
+            if cost != '':
+                cost_display = f'${float(cost):.3f}'
+            else:
+                cost_display = '-'
+
+            if time != '':
+                time_display = f'{float(time):.1f}'
+            else:
+                time_display = '-'
+
+            table_html += f"""
+                        <td class="numeric-cell">{cost_display}</td>
+                        <td class="numeric-cell">{time_display}</td>
+                    </tr>
+            """
+
+        table_html += """
+                </tbody>
+            </table>
+        </div>
+        """
+
+        return table_html
+
+    def filter_and_sort_data(document_type, model_type_filter, sort_by, sort_order):
+        """Filter and sort the leaderboard data"""
+        df = load_leaderboard_data()
+        filtered_df = df.copy()
+
+        # Document type filtering
+        if document_type != "All":
+            # Only show models that have data for this document type
+            doc_col = f'{document_type} Accuracy'
+            if doc_col in filtered_df.columns:
+                filtered_df = filtered_df[filtered_df[doc_col] != '']
+
+        # Model type filtering
+        if model_type_filter != "All":
+            if model_type_filter == "Open Source":
+                filtered_df = filtered_df[filtered_df['Model Type'] == 'Open Source']
+            elif model_type_filter == "Proprietary":
+                filtered_df = filtered_df[filtered_df['Model Type'] == 'Proprietary']
+
+        # Sorting
+        sort_column = sort_by
+        if document_type != "All" and sort_by == 'Overall Accuracy':
+            sort_column = f'{document_type} Accuracy'
+
+        if sort_column in filtered_df.columns:
+            ascending = (sort_order == "Ascending")
+            # For over-detection rate, flip the logic (lower is better)
+            if sort_by == "Over-redaction Rate":
+                ascending = not ascending
+            filtered_df = filtered_df.sort_values(by=sort_column, ascending=ascending, na_position='last')
+
+        return generate_html_table(filtered_df, document_type, sort_by)
+
+    def generate_performance_card(model_name):
+        """Generate HTML for the model performance card"""
+        if not model_name:
+            return """<div style="text-align: center; color: var(--text-secondary); padding: 40px;">
+                Please select a model to generate its performance card
+            </div>"""
+
+        df = load_leaderboard_data()
+        model_data = df[df['Model'] == model_name]
+
+        if model_data.empty:
+            return """<div style="text-align: center; color: var(--text-secondary); padding: 40px;">
+                Model not found in the database
+            </div>"""
+
+        row = model_data.iloc[0]
+
+        # Get overall rank
+        df_with_accuracy = df[df['Overall Accuracy'] != ''].copy()
+        df_with_accuracy['Overall Accuracy'] = pd.to_numeric(df_with_accuracy['Overall Accuracy'], errors='coerce')
+        df_sorted = df_with_accuracy.sort_values('Overall Accuracy', ascending=False).reset_index(drop=True)
+        try:
+            rank = df_sorted[df_sorted['Model'] == model_name].index[0] + 1
+        except:
+            rank = 'N/A'
+
+        # Format values
+        def format_value(val, decimals=3, prefix='', suffix=''):
+            if pd.isna(val) or val == '':
+                return 'N/A'
+            return f"{prefix}{float(val):.{decimals}f}{suffix}"
+
+        # Determine model type icon
+        type_icon = "🔓" if row['Model Type'] == 'Open Source' else "🔒"
+
+        # Calculate performance stars
+        def get_performance_stars(value, max_val=1.0):
+            if pd.isna(value) or value == '':
+                return '⭐' * 0
+            score = float(value) / max_val
+            if score >= 0.9:
+                return '⭐' * 5
+            elif score >= 0.8:
+                return '⭐' * 4
+            elif score >= 0.7:
+                return '⭐' * 3
+            elif score >= 0.6:
+                return '⭐' * 2
+            else:
+                return '⭐' * 1
+
+        # Create HTML
+        card_html = f"""
+        <div class="performance-card">
+            <div class="card-header">
+                <h1 class="card-model-name">{model_name}</h1>
+                <div class="card-stars">
+                    {get_performance_stars(row['Overall Accuracy'])}
+                </div>
+            </div>
+
+            <div class="metrics-grid" style="margin-bottom: 24px;">
+                <div class="metric-item">
+                    <div class="metric-icon" style="color: var(--accent-primary);">🏆</div>
+                    <div class="metric-label">Overall Rank</div>
+                    <div class="metric-value">#{rank}</div>
+                </div>
+
+                <div class="metric-item">
+                    <div class="metric-icon" style="color: var(--accent-primary);">🎯</div>
+                    <div class="metric-label">Overall Accuracy</div>
+                    <div class="metric-value">{format_value(row['Overall Accuracy'])}</div>
+                </div>
+
+                <div class="metric-item">
+                    <div class="metric-icon" style="color: var(--accent-secondary);">📊</div>
+                    <div class="metric-label">Precision</div>
+                    <div class="metric-value">{format_value(row['Precision'])}</div>
+                </div>
+
+                <div class="metric-item">
+                    <div class="metric-icon" style="color: var(--accent-tertiary);">🔍</div>
+                    <div class="metric-label">Recall</div>
+                    <div class="metric-value">{format_value(row['Recall'])}</div>
+                </div>
+
+                <div class="metric-item">
+                    <div class="metric-icon" style="color: var(--accent-quaternary);">💰</div>
+                    <div class="metric-label">Cost/Doc</div>
+                    <div class="metric-value">{format_value(row['Cost per Document ($)'], 3, '$')}</div>
+                </div>
+
+                <div class="metric-item">
+                    <div class="metric-icon" style="color: var(--text-primary);">⚡</div>
+                    <div class="metric-label">Processing Time</div>
+                    <div class="metric-value">{format_value(row['Processing Time (s)'], 1, '', 's')}</div>
+                </div>
+            </div>
+
+            <div class="domains-section" style="margin-top: 24px;">
+                <h3 class="domains-title">📄 Document Type Performance</h3>
+                <div class="domains-grid">
+        """
+
+        # Add document type scores
+        doc_types = [
+            ('🏥', 'Healthcare'),
+            ('💰', 'Financial'),
+            ('🏛️', 'Government'),
+            ('⚖️', 'Legal'),
+            ('👤', 'Personal')
+        ]
+
+        for doc_icon, doc_type in doc_types:
+            accuracy_col = f'{doc_type} Accuracy'
+            accuracy_value = row.get(accuracy_col, '')
+
+            if accuracy_value != '' and not pd.isna(accuracy_value):
+                score_display = f"{float(accuracy_value):.3f}"
+                score_color = "var(--accent-primary)"
+            else:
+                score_display = "N/A"
+                score_color = "var(--text-muted)"
+
+            card_html += f"""
+                    <div class="domain-item">
+                        <div class="domain-name">{doc_icon}</div>
+                        <div style="font-size: 0.7rem; color: var(--text-secondary); margin-bottom: 2px;">{doc_type}</div>
+                        <div class="domain-score" style="color: {score_color};">{score_display}</div>
+                    </div>
+            """
+
+        card_html += f"""
+                </div>
+            </div>
+
+            <div class="card-footer">
+                <div class="card-url">
+                    <strong>LLM PII Detection Leaderboard</strong>
+                </div>
+            </div>
+        </div>
+        """
+
+        return card_html
+
+    def download_performance_card(model_name):
+        """Generate and return downloadable HTML file for the model performance card"""
+        if not model_name:
+            return None
+
+        card_html = generate_performance_card(model_name)
+
+        # Create a complete HTML document
+        full_html = f"""
+        <!DOCTYPE html>
+        <html lang="en">
+        <head>
+            <meta charset="UTF-8">
+            <meta name="viewport" content="width=device-width, initial-scale=1.0">
+            <title>{model_name} - Performance Card</title>
+            <style>
+                @import url('https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700;800&display=swap');
+
+                :root {{
+                    --bg-primary: #1A1414;
+                    --bg-secondary: rgba(239, 235, 231, 0.03);
+                    --bg-card: rgba(239, 235, 231, 0.02);
+                    --border-subtle: rgba(239, 235, 231, 0.08);
+                    --text-primary: #EFEBE7;
+                    --text-secondary: #C2B8AE;
+                    --text-muted: #67594B;
+                    --accent-primary: #DE9DCC;
+                    --accent-secondary: #F25E45;
+                    --accent-tertiary: #6EB579;
+                    --accent-quaternary: #F0C968;
+                    --glow-primary: rgba(222, 157, 204, 0.4);
+                }}
+
+                body {{
+                    margin: 0;
+                    padding: 40px;
+                    background: var(--bg-primary);
+                    font-family: 'Inter', sans-serif;
+                    color: var(--text-primary);
+                }}
+
+                .performance-card {{
+                    background: linear-gradient(145deg, rgba(26, 20, 20, 0.98) 0%, rgba(222, 157, 204, 0.05) 100%);
+                    border: 2px solid var(--accent-primary);
+                    border-radius: 24px;
+                    padding: 32px;
+                    max-width: 700px;
+                    margin: 0 auto;
+                    box-shadow:
+                        0 20px 40px rgba(0, 0, 0, 0.5),
+                        0 0 80px rgba(222, 157, 204, 0.2);
+                }}
+
+                .card-header {{
+                    text-align: center;
+                    margin-bottom: 24px;
+                }}
+
+                .card-model-name {{
+                    font-size: 2rem;
+                    font-weight: 800;
+                    background: linear-gradient(135deg, var(--accent-primary) 0%, var(--accent-secondary) 100%);
+                    -webkit-background-clip: text;
+                    -webkit-text-fill-color: transparent;
+                    margin-bottom: 8px;
+                    line-height: 1.2;
+                }}
+
+                .card-stars {{
+                    font-size: 1.2rem;
+                    margin: 8px 0;
+                }}
+
+                .metrics-grid {{
+                    display: grid;
+                    grid-template-columns: repeat(2, 1fr);
+                    gap: 16px;
+                    margin: 24px 0;
+                }}
+
+                .metric-item {{
+                    display: flex;
+                    flex-direction: column;
+                    align-items: center;
+                    padding: 16px;
+                    background: rgba(239, 235, 231, 0.05);
+                    border-radius: 12px;
+                    border: 1px solid var(--border-subtle);
+                }}
+
+                .metric-icon {{
+                    font-size: 1.5rem;
+                    margin-bottom: 8px;
+                }}
+
+                .metric-label {{
+                    font-size: 0.85rem;
+                    color: var(--text-secondary);
+                    margin-bottom: 4px;
+                    text-align: center;
+                }}
+
+                .metric-value {{
+                    font-size: 1.1rem;
+                    font-weight: 700;
+                    color: var(--text-primary);
+                    text-align: center;
+                }}
+
+                .domains-section {{
+                    margin-top: 24px;
+                }}
+
+                .domains-title {{
+                    color: var(--text-primary);
+                    font-size: 1.2rem;
+                    margin-bottom: 16px;
+                    text-align: center;
+                }}
+
+                .domains-grid {{
+                    display: grid;
+                    grid-template-columns: repeat(5, 1fr);
+                    gap: 12px;
+                }}
+
+                .domain-item {{
+                    display: flex;
+                    flex-direction: column;
+                    align-items: center;
+                    padding: 12px;
+                    background: rgba(239, 235, 231, 0.03);
+                    border-radius: 8px;
+                    border: 1px solid var(--border-subtle);
+                }}
+
+                .domain-name {{
+                    font-size: 1.2rem;
+                    margin-bottom: 4px;
+                }}
+
+                .domain-score {{
+                    font-size: 0.9rem;
+                    font-weight: 600;
+                }}
+
+                .card-footer {{
+                    text-align: center;
+                    margin-top: 24px;
+                    padding-top: 16px;
+                    border-top: 1px solid var(--border-subtle);
+                }}
+
+                .card-url {{
+                    color: var(--text-secondary);
+                    font-size: 0.9rem;
+                }}
+            </style>
+        </head>
+        <body>
+            {card_html}
+        </body>
+        </html>
+        """
+
+        # Create a temporary file
+        with tempfile.NamedTemporaryFile(mode='w', delete=False, suffix=f'_{model_name.replace(" ", "_")}_performance_card.html', encoding='utf-8') as f:
+            f.write(full_html)
+            return f.name
+
+    # Load initial data
+    initial_df = load_leaderboard_data()
+    initial_table = filter_and_sort_data("All", "All", "Overall Accuracy", "Descending")
+
+    # Display header
+    gr.HTML(HEADER_CONTENT)
+
+    # Document type filter section
+    gr.HTML("""
+    <div class="dark-container" style="margin-bottom: 32px;">
+        <div class="section-header">
+            <span class="section-icon" style="color: var(--accent-primary);">📄</span>
+            <h3 style="margin: 0; color: var(--text-primary); font-size: 1.5rem; font-family: 'Inter', sans-serif; font-weight: 700;">
+                Document Type Analysis
+            </h3>
+        </div>
+        <p style="color: var(--text-secondary); margin-bottom: 20px; font-size: 1.1rem; font-family: 'Inter', sans-serif;">
+            Select a document type to see specialized PII detection performance
+        </p>
+    """)
+
+    with gr.Row():
+        document_type_filter = gr.Radio(
+            choices=["All", "Healthcare", "Financial", "Government", "Legal", "Personal"],
+            value="All",
+            label="",
+            interactive=True,
+            elem_classes=["document-type-radio"]
+        )
+
+    gr.HTML("</div>")
+
+    # Filter controls
+    gr.HTML("""
+    <div class="dark-container" style="margin-bottom: 24px;">
+        <div class="section-header">
+            <span class="section-icon" style="color: var(--accent-secondary);">🔍</span>
+            <h3 style="margin: 0; color: var(--text-primary); font-size: 1.5rem; font-family: 'Inter', sans-serif; font-weight: 700;">
+                Filters & Sorting
+            </h3>
+        </div>
+    """)
+
+    with gr.Row():
+        with gr.Column(scale=1):
+            model_type_filter = gr.Radio(
+                choices=["All", "Open Source", "Proprietary"],
+                value="All",
+                label="🔓 Model Access",
+                elem_classes=["compact-radio"]
+            )
+
+        with gr.Column(scale=1):
+            sort_by = gr.Dropdown(
+                choices=["Overall Accuracy", "Precision", "Recall", "F1 Score", "Over-redaction Rate", "Cost per Document ($)", "Processing Time (s)"],
+                value="Overall Accuracy",
+                label="📊 Sort By",
+                elem_classes=["dropdown"]
+            )
+
+        with gr.Column(scale=1):
+            sort_order = gr.Radio(
+                choices=["Descending", "Ascending"],
+                value="Descending",
+                label="🔄 Sort Order",
+                elem_classes=["compact-radio"]
+            )
+
+    gr.HTML("</div>")
+
+    # Main leaderboard table
+    gr.HTML("""
+    <div class="dark-container" style="margin-bottom: 24px;">
+        <div class="section-header">
+            <span class="section-icon" style="color: var(--accent-primary);">📈</span>
+            <h3 style="margin: 0; color: var(--text-primary); font-size: 1.5rem; font-family: 'Inter', sans-serif; font-weight: 700;">
+                PII Detection Performance Leaderboard
+            </h3>
+        </div>
+        <div class="dataframe-container">
+    """)
+
+    leaderboard_table = gr.HTML(initial_table)
+
+    gr.HTML("""
+        </div>
+    </div>""")
+
+    # Performance Card Section
+    gr.HTML("""
+    <div class="dark-container" style="margin-top: 32px;">
+        <div class="section-header">
+            <span class="section-icon" style="color: var(--accent-primary);">🎯</span>
+            <h3 style="margin: 0; color: var(--text-primary); font-size: 1.5rem; font-family: 'Inter', sans-serif; font-weight: 700;">
+                Model Performance Card
+            </h3>
+        </div>
+        <p style="color: var(--text-secondary); margin-bottom: 20px; font-size: 1.1rem; font-family: 'Inter', sans-serif;">
+            Comprehensive performance card for any model - perfect for presentations and reports
+        </p>
+
+        <div style="display: flex; gap: 24px; align-items: flex-start;">
+            <div style="flex: 0 0 280px;">
+                <div style="background: rgba(239, 235, 231, 0.03); border: 1px solid var(--border-subtle);
+                            border-radius: 16px; padding: 20px; position: sticky; top: 20px;">
+    """)
+
+    card_model_selector = gr.Dropdown(
+        choices=initial_df['Model'].tolist(),
+        value=initial_df['Model'].tolist()[0] if len(initial_df) > 0 else None,
+        label="🤖 Select Model",
+        info="Choose a model to view its performance card",
+        elem_classes=["dropdown"]
+    )
+
+    gr.HTML("""
+                </div>
+            </div>
+
+            <div style="flex: 1; min-width: 0;" id="card-display-container">
+    """)
+
+    # Card display area
+    initial_model = initial_df['Model'].tolist()[0] if len(initial_df) > 0 else None
+    initial_card_html = generate_performance_card(initial_model) if initial_model else ""
+    card_display = gr.HTML(value=initial_card_html, elem_id="performance-card-html")
+
+    # Download button below the card
+    gr.HTML("""
+    <div style="margin-top: 24px; text-align: center;">
+    """)
+
+    download_button = gr.DownloadButton(
+        label="📥 Download Performance Card",
+        value=None,
+        variant="primary",
+        elem_classes=["download-card-btn"]
+    )
+
+    gr.HTML("""
+            </div>
+            </div>
+        </div>
+    </div>""")
+
+    # Add performance card CSS
+    gr.HTML(f"""
+    <style>
+        .performance-card {{
+            background: linear-gradient(145deg, rgba(26, 20, 20, 0.98) 0%, rgba(222, 157, 204, 0.05) 100%);
+            border: 2px solid var(--accent-primary);
+            border-radius: 24px;
+            padding: 32px;
+            max-width: 700px;
+            margin: 0 auto;
+            position: relative;
+            overflow: hidden;
+            box-shadow:
+                0 20px 40px rgba(0, 0, 0, 0.5),
+                0 0 80px rgba(222, 157, 204, 0.2),
+                inset 0 0 120px rgba(222, 157, 204, 0.05);
+        }}
+
+        .card-header {{
+            text-align: center;
+            margin-bottom: 24px;
+            position: relative;
+            z-index: 1;
+        }}
+
+        .card-model-name {{
+            font-size: 2rem;
+            font-weight: 800;
+            background: linear-gradient(135deg, var(--accent-primary) 0%, var(--accent-secondary) 100%);
+            -webkit-background-clip: text;
+            -webkit-text-fill-color: transparent;
+            margin-bottom: 8px;
+            text-shadow: 0 0 40px var(--glow-primary);
+            line-height: 1.2;
+        }}
+
+        .card-stars {{
+            font-size: 1.2rem;
+            margin: 8px 0;
+        }}
+
+        .metrics-grid {{
+            display: grid;
+            grid-template-columns: repeat(2, 1fr);
+            gap: 16px;
+            margin: 24px 0;
+        }}
+
+        .metric-item {{
+            display: flex;
+            flex-direction: column;
+            align-items: center;
+            padding: 16px;
+            background: rgba(239, 235, 231, 0.05);
+            border-radius: 12px;
+            border: 1px solid var(--border-subtle);
+            transition: all 0.3s ease;
+        }}
+
+        .metric-item:hover {{
+            transform: translateY(-2px);
+            border-color: var(--accent-primary);
+            box-shadow: 0 8px 16px rgba(222, 157, 204, 0.3);
+        }}
+
+        .metric-icon {{
+            font-size: 1.5rem;
+            margin-bottom: 8px;
+        }}
+
+        .metric-label {{
+            font-size: 0.85rem;
+            color: var(--text-secondary);
+            margin-bottom: 4px;
+            text-align: center;
+        }}
+
+        .metric-value {{
+            font-size: 1.1rem;
+            font-weight: 700;
+            color: var(--text-primary);
+            text-align: center;
+        }}
+
+        .domains-section {{
+            margin-top: 24px;
+        }}
+
+        .domains-title {{
+            color: var(--text-primary);
+            font-size: 1.2rem;
+            margin-bottom: 16px;
+            text-align: center;
+        }}
+
+        .domains-grid {{
+            display: grid;
+            grid-template-columns: repeat(5, 1fr);
+            gap: 12px;
+        }}
+
+        .domain-item {{
+            display: flex;
+            flex-direction: column;
+            align-items: center;
+            padding: 12px;
+            background: rgba(239, 235, 231, 0.03);
+            border-radius: 8px;
+            border: 1px solid var(--border-subtle);
+            transition: all 0.3s ease;
+        }}
+
+        .domain-item:hover {{
+            border-color: var(--accent-primary);
+            transform: scale(1.02);
+        }}
+
+        .domain-name {{
+            font-size: 1.2rem;
+            margin-bottom: 4px;
+        }}
+
+        .domain-score {{
+            font-size: 0.9rem;
+            font-weight: 600;
+        }}
+
+        .card-footer {{
+            text-align: center;
+            margin-top: 24px;
+            padding-top: 16px;
+            border-top: 1px solid var(--border-subtle);
+        }}
+
+        .card-url {{
+            color: var(--text-secondary);
+            font-size: 0.9rem;
+        }}
+
+        /* Additional styling for radio buttons and specific components */
+        .document-type-radio .wrap {{
+            display: flex !important;
+            gap: 12px !important;
+            flex-wrap: wrap !important;
+            justify-content: center !important;
+        }}
+
+        .document-type-radio .wrap > label {{
+            flex: 1 !important;
+            min-width: 140px !important;
+            max-width: 180px !important;
+            padding: 12px 16px !important;
+            background: var(--bg-card) !important;
+            border: 2px solid var(--border-default) !important;
+            border-radius: 12px !important;
+            cursor: pointer !important;
+            transition: all 0.3s ease !important;
+            text-align: center !important;
+            font-weight: 500 !important;
+        }}
+
+        .document-type-radio .wrap > label:hover {{
+            border-color: var(--accent-primary) !important;
+            transform: translateY(-2px) !important;
+        }}
+
+        .document-type-radio .wrap > label:has(input[type="radio"]:checked) {{
+            background: transparent !important;
+            border-color: var(--accent-primary) !important;
+            color: var(--text-primary) !important;
+            font-weight: 600 !important;
+            box-shadow: 0 8px 16px var(--glow-primary) !important;
+        }}
+
+        .document-type-radio input[type="radio"] {{
+            display: none !important;
+        }}
+
+        .compact-radio .wrap > label {{
+            padding: 8px 12px !important;
+            font-size: 0.85rem !important;
+            min-width: auto !important;
+            max-width: 120px !important;
+        }}
+
+        .download-card-btn {{
+            background: linear-gradient(135deg, var(--accent-primary), var(--accent-secondary)) !important;
+            color: white !important;
+            border: none !important;
+            padding: 12px 24px !important;
+            border-radius: 12px !important;
+            font-weight: 600 !important;
+            font-size: 0.95rem !important;
+            transition: all 0.3s ease !important;
+            box-shadow: 0 4px 16px rgba(222, 157, 204, 0.4) !important;
+        }}
+
+        .download-card-btn:hover {{
+            transform: translateY(-2px) !important;
+            box-shadow: 0 6px 20px rgba(222, 157, 204, 0.6) !important;
+        }}
+    </style>
+    """)
+
+    # Update functions
+    def update_table(*args):
+        return filter_and_sort_data(*args)
+
+    def update_card(model_name):
+        return generate_performance_card(model_name)
+
+    # Connect update functions to components
+    filter_inputs = [document_type_filter, model_type_filter, sort_by, sort_order]
+
+    for input_component in filter_inputs:
+        input_component.change(
+            fn=update_table,
+            inputs=filter_inputs,
+            outputs=[leaderboard_table]
+        )
+
+    # Update card when model selection changes
+    card_model_selector.change(
+        fn=update_card,
+        inputs=[card_model_selector],
+        outputs=[card_display]
+    )
+
+    # Download card functionality
+    def update_download_button(model_name):
+        if model_name:
+            file_path = download_performance_card(model_name)
+            return file_path
+        return None
+
+    card_model_selector.change(
+        fn=update_download_button,
+        inputs=[card_model_selector],
+        outputs=[download_button]
+    )
+
+    # Methodology section
+    gr.HTML(f"""
+    <div class="dark-container" style="margin-top: 32px;">
+        {METHODOLOGY}
+    </div>
+    """)
+
+def create_app():
+    """Create the main Gradio application"""
+    with gr.Blocks(
+        theme=gr.themes.Default(),
968
+ title="🔒 LLM PII Detection Leaderboard"
969
+ ) as app:
970
+ create_pii_leaderboard()
971
+
972
+ return app
973
+
974
+ if __name__ == "__main__":
975
+ demo = create_app()
976
+ demo.launch()
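The handlers wired above call `filter_and_sort_data`, `generate_performance_card`, and `download_performance_card`, which are defined earlier in `app.py` and fall outside this hunk. A minimal sketch of the contract they need to satisfy — the column names come from the `results/pii_detection_results.csv` added below, but the filtering semantics, card markup, and temp-file handling are assumptions, not the commit's actual code:

```python
# Hypothetical sketch, not the code in this commit. Assumes the schema of
# results/pii_detection_results.csv and plain pandas filtering/sorting.
import tempfile
import pandas as pd

df = pd.read_csv("results/pii_detection_results.csv")

def filter_and_sort_data(document_type, model_type, sort_by, sort_order):
    """Filter the mock results table and sort it by the chosen metric."""
    out = df.copy()
    if model_type and model_type != "All":
        out = out[out["Model Type"] == model_type]
    if sort_by in out.columns:
        out = out.sort_values(sort_by, ascending=(sort_order == "Ascending"))
    return out

def generate_performance_card(model_name):
    """Return card markup using the classes styled by the CSS block above."""
    row = df.loc[df["Model"] == model_name].iloc[0]
    return f"""
    <div class="performance-card">
      <div class="card-header">
        <div class="card-model-name">{row['Model']}</div>
        <div class="card-stars">{'⭐' * round(row['F1 Score'] * 5)}</div>
      </div>
      <div class="metrics-grid">
        <div class="metric-item">
          <span class="metric-icon">🎯</span>
          <span class="metric-label">F1 Score</span>
          <span class="metric-value">{row['F1 Score']:.3f}</span>
        </div>
        <!-- one .metric-item per remaining metric -->
      </div>
    </div>"""

def download_performance_card(model_name):
    """Write the card to a standalone HTML file and return its path."""
    path = tempfile.NamedTemporaryFile(suffix=".html", delete=False).name
    with open(path, "w", encoding="utf-8") as f:
        f.write(generate_performance_card(model_name))
    return path
```

The class names used in the sketch (`performance-card`, `card-header`, `metrics-grid`, `metric-item`) are exactly the ones the `<style>` block above targets, which is what ties the CSS in this hunk to the card HTML.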
pyproject.toml DELETED
@@ -1,13 +0,0 @@
- [tool.ruff]
- # Enable pycodestyle (`E`) and Pyflakes (`F`) codes by default.
- select = ["E", "F"]
- ignore = ["E501"] # line too long (black is taking care of this)
- line-length = 119
- fixable = ["A", "B", "C", "D", "E", "F", "G", "I", "N", "Q", "S", "T", "W", "ANN", "ARG", "BLE", "COM", "DJ", "DTZ", "EM", "ERA", "EXE", "FBT", "ICN", "INP", "ISC", "NPY", "PD", "PGH", "PIE", "PL", "PT", "PTH", "PYI", "RET", "RSE", "RUF", "SIM", "SLF", "TCH", "TID", "TRY", "UP", "YTT"]
-
- [tool.isort]
- profile = "black"
- line_length = 119
-
- [tool.black]
- line-length = 119
requirements.txt CHANGED
@@ -1,16 +1,3 @@
- APScheduler
- black
- datasets
  gradio
- gradio[oauth]
- gradio_leaderboard==0.0.13
- gradio_client
- huggingface-hub>=0.18.0
- matplotlib
- numpy
  pandas
- python-dateutil
- tqdm
- transformers
- tokenizers>=0.15.0
- sentencepiece
+ numpy
results/pii_detection_results.csv ADDED
@@ -0,0 +1,9 @@
+ Model,Model Type,Vendor,Overall Accuracy,Precision,Recall,F1 Score,Over-redaction Rate,Processing Time (s),Cost per Document ($),Healthcare Accuracy,Financial Accuracy,Government Accuracy,Legal Accuracy,Personal Accuracy
+ GPT-4o,Proprietary,OpenAI,0.941,0.945,0.938,0.941,0.023,2.3,0.012,0.952,0.938,0.933,0.941,0.940
+ Claude-3.5-Sonnet,Proprietary,Anthropic,0.928,0.932,0.924,0.928,0.031,3.1,0.015,0.939,0.925,0.920,0.928,0.927
+ Gemini-1.5-Pro,Proprietary,Google,0.915,0.919,0.911,0.915,0.038,2.8,0.008,0.926,0.912,0.907,0.915,0.914
+ LLaMA-3.1-70B,Open Source,Meta,0.882,0.887,0.877,0.882,0.052,4.2,0.003,0.893,0.879,0.874,0.882,0.881
+ Mistral-Large,Proprietary,Mistral AI,0.871,0.875,0.867,0.871,0.048,3.7,0.011,0.882,0.868,0.863,0.871,0.870
+ GPT-4o-mini,Proprietary,OpenAI,0.856,0.860,0.852,0.856,0.061,1.8,0.002,0.867,0.853,0.848,0.856,0.855
+ Claude-3-Haiku,Proprietary,Anthropic,0.834,0.838,0.830,0.834,0.078,2.1,0.006,0.845,0.831,0.826,0.834,0.833
+ Gemini-1.5-Flash,Proprietary,Google,0.821,0.825,0.817,0.821,0.085,2.4,0.004,0.832,0.818,0.813,0.821,0.820
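Since the mock results ship as a plain CSV, they can be sanity-checked outside the app; a minimal usage sketch, assuming only `pandas` (which stays in `requirements.txt`):

```python
import pandas as pd

df = pd.read_csv("results/pii_detection_results.csv")

# Default leaderboard view: models ranked by F1, with cost alongside.
print(df.sort_values("F1 Score", ascending=False)[
    ["Model", "Vendor", "F1 Score", "Over-redaction Rate", "Cost per Document ($)"]
])

# The mock numbers are internally consistent: the F1 column matches the
# harmonic mean of Precision and Recall to roughly three decimals.
f1 = 2 * df["Precision"] * df["Recall"] / (df["Precision"] + df["Recall"])
assert ((f1 - df["F1 Score"]).abs() < 1e-3).all()
```

That consistency is worth preserving if further mock rows are added before real evaluation results land.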
src/about.py DELETED
@@ -1,72 +0,0 @@
- from dataclasses import dataclass
- from enum import Enum
-
- @dataclass
- class Task:
-     benchmark: str
-     metric: str
-     col_name: str
-
-
- # Select your tasks here
- # ---------------------------------------------------
- class Tasks(Enum):
-     # task_key in the json file, metric_key in the json file, name to display in the leaderboard
-     task0 = Task("anli_r1", "acc", "ANLI")
-     task1 = Task("logiqa", "acc_norm", "LogiQA")
-
- NUM_FEWSHOT = 0 # Change with your few shot
- # ---------------------------------------------------
-
-
-
- # Your leaderboard name
- TITLE = """<h1 align="center" id="space-title">Demo leaderboard</h1>"""
-
- # What does your leaderboard evaluate?
- INTRODUCTION_TEXT = """
- Intro text
- """
-
- # Which evaluations are you running? how can people reproduce what you have?
- LLM_BENCHMARKS_TEXT = f"""
- ## How it works
-
- ## Reproducibility
- To reproduce our results, here is the commands you can run:
-
- """
-
- EVALUATION_QUEUE_TEXT = """
- ## Some good practices before submitting a model
-
- ### 1) Make sure you can load your model and tokenizer using AutoClasses:
- ```python
- from transformers import AutoConfig, AutoModel, AutoTokenizer
- config = AutoConfig.from_pretrained("your model name", revision=revision)
- model = AutoModel.from_pretrained("your model name", revision=revision)
- tokenizer = AutoTokenizer.from_pretrained("your model name", revision=revision)
- ```
- If this step fails, follow the error messages to debug your model before submitting it. It's likely your model has been improperly uploaded.
-
- Note: make sure your model is public!
- Note: if your model needs `use_remote_code=True`, we do not support this option yet but we are working on adding it, stay posted!
-
- ### 2) Convert your model weights to [safetensors](https://huggingface.co/docs/safetensors/index)
- It's a new format for storing weights which is safer and faster to load and use. It will also allow us to add the number of parameters of your model to the `Extended Viewer`!
-
- ### 3) Make sure your model has an open license!
- This is a leaderboard for Open LLMs, and we'd love for as many people as possible to know they can use your model 🤗
-
- ### 4) Fill up your model card
- When we add extra information about models to the leaderboard, it will be automatically taken from the model card
-
- ## In case of model failure
- If your model is displayed in the `FAILED` category, its execution stopped.
- Make sure you have followed the above steps first.
- If everything is done, check you can launch the EleutherAIHarness on your model locally, using the above command without modifications (you can add `--limit` to limit the number of examples per task).
- """
-
- CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results"
- CITATION_BUTTON_TEXT = r"""
- """
src/display/css_html_js.py DELETED
@@ -1,105 +0,0 @@
- custom_css = """
-
- .markdown-text {
-     font-size: 16px !important;
- }
-
- #models-to-add-text {
-     font-size: 18px !important;
- }
-
- #citation-button span {
-     font-size: 16px !important;
- }
-
- #citation-button textarea {
-     font-size: 16px !important;
- }
-
- #citation-button > label > button {
-     margin: 6px;
-     transform: scale(1.3);
- }
-
- #leaderboard-table {
-     margin-top: 15px
- }
-
- #leaderboard-table-lite {
-     margin-top: 15px
- }
-
- #search-bar-table-box > div:first-child {
-     background: none;
-     border: none;
- }
-
- #search-bar {
-     padding: 0px;
- }
-
- /* Limit the width of the first AutoEvalColumn so that names don't expand too much */
- #leaderboard-table td:nth-child(2),
- #leaderboard-table th:nth-child(2) {
-     max-width: 400px;
-     overflow: auto;
-     white-space: nowrap;
- }
-
- .tab-buttons button {
-     font-size: 20px;
- }
-
- #scale-logo {
-     border-style: none !important;
-     box-shadow: none;
-     display: block;
-     margin-left: auto;
-     margin-right: auto;
-     max-width: 600px;
- }
-
- #scale-logo .download {
-     display: none;
- }
- #filter_type{
-     border: 0;
-     padding-left: 0;
-     padding-top: 0;
- }
- #filter_type label {
-     display: flex;
- }
- #filter_type label > span{
-     margin-top: var(--spacing-lg);
-     margin-right: 0.5em;
- }
- #filter_type label > .wrap{
-     width: 103px;
- }
- #filter_type label > .wrap .wrap-inner{
-     padding: 2px;
- }
- #filter_type label > .wrap .wrap-inner input{
-     width: 1px
- }
- #filter-columns-type{
-     border:0;
-     padding:0.5;
- }
- #filter-columns-size{
-     border:0;
-     padding:0.5;
- }
- #box-filter > .form{
-     border: 0
- }
- """
-
- get_window_url_params = """
- function(url_params) {
-     const params = new URLSearchParams(window.location.search);
-     url_params = Object.fromEntries(params);
-     return url_params;
- }
- """
src/display/formatting.py DELETED
@@ -1,27 +0,0 @@
- def model_hyperlink(link, model_name):
-     return f'<a target="_blank" href="{link}" style="color: var(--link-text-color); text-decoration: underline;text-decoration-style: dotted;">{model_name}</a>'
-
-
- def make_clickable_model(model_name):
-     link = f"https://huggingface.co/{model_name}"
-     return model_hyperlink(link, model_name)
-
-
- def styled_error(error):
-     return f"<p style='color: red; font-size: 20px; text-align: center;'>{error}</p>"
-
-
- def styled_warning(warn):
-     return f"<p style='color: orange; font-size: 20px; text-align: center;'>{warn}</p>"
-
-
- def styled_message(message):
-     return f"<p style='color: green; font-size: 20px; text-align: center;'>{message}</p>"
-
-
- def has_no_nan_values(df, columns):
-     return df[columns].notna().all(axis=1)
-
-
- def has_nan_values(df, columns):
-     return df[columns].isna().any(axis=1)
src/display/utils.py DELETED
@@ -1,110 +0,0 @@
- from dataclasses import dataclass, make_dataclass
- from enum import Enum
-
- import pandas as pd
-
- from src.about import Tasks
-
- def fields(raw_class):
-     return [v for k, v in raw_class.__dict__.items() if k[:2] != "__" and k[-2:] != "__"]
-
-
- # These classes are for user facing column names,
- # to avoid having to change them all around the code
- # when a modif is needed
- @dataclass
- class ColumnContent:
-     name: str
-     type: str
-     displayed_by_default: bool
-     hidden: bool = False
-     never_hidden: bool = False
-
- ## Leaderboard columns
- auto_eval_column_dict = []
- # Init
- auto_eval_column_dict.append(["model_type_symbol", ColumnContent, ColumnContent("T", "str", True, never_hidden=True)])
- auto_eval_column_dict.append(["model", ColumnContent, ColumnContent("Model", "markdown", True, never_hidden=True)])
- #Scores
- auto_eval_column_dict.append(["average", ColumnContent, ColumnContent("Average ⬆️", "number", True)])
- for task in Tasks:
-     auto_eval_column_dict.append([task.name, ColumnContent, ColumnContent(task.value.col_name, "number", True)])
- # Model information
- auto_eval_column_dict.append(["model_type", ColumnContent, ColumnContent("Type", "str", False)])
- auto_eval_column_dict.append(["architecture", ColumnContent, ColumnContent("Architecture", "str", False)])
- auto_eval_column_dict.append(["weight_type", ColumnContent, ColumnContent("Weight type", "str", False, True)])
- auto_eval_column_dict.append(["precision", ColumnContent, ColumnContent("Precision", "str", False)])
- auto_eval_column_dict.append(["license", ColumnContent, ColumnContent("Hub License", "str", False)])
- auto_eval_column_dict.append(["params", ColumnContent, ColumnContent("#Params (B)", "number", False)])
- auto_eval_column_dict.append(["likes", ColumnContent, ColumnContent("Hub ❤️", "number", False)])
- auto_eval_column_dict.append(["still_on_hub", ColumnContent, ColumnContent("Available on the hub", "bool", False)])
- auto_eval_column_dict.append(["revision", ColumnContent, ColumnContent("Model sha", "str", False, False)])
-
- # We use make dataclass to dynamically fill the scores from Tasks
- AutoEvalColumn = make_dataclass("AutoEvalColumn", auto_eval_column_dict, frozen=True)
-
- ## For the queue columns in the submission tab
- @dataclass(frozen=True)
- class EvalQueueColumn: # Queue column
-     model = ColumnContent("model", "markdown", True)
-     revision = ColumnContent("revision", "str", True)
-     private = ColumnContent("private", "bool", True)
-     precision = ColumnContent("precision", "str", True)
-     weight_type = ColumnContent("weight_type", "str", "Original")
-     status = ColumnContent("status", "str", True)
-
- ## All the model information that we might need
- @dataclass
- class ModelDetails:
-     name: str
-     display_name: str = ""
-     symbol: str = "" # emoji
-
-
- class ModelType(Enum):
-     PT = ModelDetails(name="pretrained", symbol="🟢")
-     FT = ModelDetails(name="fine-tuned", symbol="🔶")
-     IFT = ModelDetails(name="instruction-tuned", symbol="⭕")
-     RL = ModelDetails(name="RL-tuned", symbol="🟦")
-     Unknown = ModelDetails(name="", symbol="?")
-
-     def to_str(self, separator=" "):
-         return f"{self.value.symbol}{separator}{self.value.name}"
-
-     @staticmethod
-     def from_str(type):
-         if "fine-tuned" in type or "🔶" in type:
-             return ModelType.FT
-         if "pretrained" in type or "🟢" in type:
-             return ModelType.PT
-         if "RL-tuned" in type or "🟦" in type:
-             return ModelType.RL
-         if "instruction-tuned" in type or "⭕" in type:
-             return ModelType.IFT
-         return ModelType.Unknown
-
- class WeightType(Enum):
-     Adapter = ModelDetails("Adapter")
-     Original = ModelDetails("Original")
-     Delta = ModelDetails("Delta")
-
- class Precision(Enum):
-     float16 = ModelDetails("float16")
-     bfloat16 = ModelDetails("bfloat16")
-     Unknown = ModelDetails("?")
-
-     def from_str(precision):
-         if precision in ["torch.float16", "float16"]:
-             return Precision.float16
-         if precision in ["torch.bfloat16", "bfloat16"]:
-             return Precision.bfloat16
-         return Precision.Unknown
-
- # Column selection
- COLS = [c.name for c in fields(AutoEvalColumn) if not c.hidden]
-
- EVAL_COLS = [c.name for c in fields(EvalQueueColumn)]
- EVAL_TYPES = [c.type for c in fields(EvalQueueColumn)]
-
- BENCHMARK_COLS = [t.value.col_name for t in Tasks]
-
src/envs.py DELETED
@@ -1,25 +0,0 @@
- import os
-
- from huggingface_hub import HfApi
-
- # Info to change for your repository
- # ----------------------------------
- TOKEN = os.environ.get("HF_TOKEN") # A read/write token for your org
-
- OWNER = "demo-leaderboard-backend" # Change to your org - don't forget to create a results and request dataset, with the correct format!
- # ----------------------------------
-
- REPO_ID = f"{OWNER}/leaderboard"
- QUEUE_REPO = f"{OWNER}/requests"
- RESULTS_REPO = f"{OWNER}/results"
-
- # If you setup a cache later, just change HF_HOME
- CACHE_PATH=os.getenv("HF_HOME", ".")
-
- # Local caches
- EVAL_REQUESTS_PATH = os.path.join(CACHE_PATH, "eval-queue")
- EVAL_RESULTS_PATH = os.path.join(CACHE_PATH, "eval-results")
- EVAL_REQUESTS_PATH_BACKEND = os.path.join(CACHE_PATH, "eval-queue-bk")
- EVAL_RESULTS_PATH_BACKEND = os.path.join(CACHE_PATH, "eval-results-bk")
-
- API = HfApi(token=TOKEN)
src/leaderboard/read_evals.py DELETED
@@ -1,196 +0,0 @@
- import glob
- import json
- import math
- import os
- from dataclasses import dataclass
-
- import dateutil
- import numpy as np
-
- from src.display.formatting import make_clickable_model
- from src.display.utils import AutoEvalColumn, ModelType, Tasks, Precision, WeightType
- from src.submission.check_validity import is_model_on_hub
-
-
- @dataclass
- class EvalResult:
-     """Represents one full evaluation. Built from a combination of the result and request file for a given run.
-     """
-     eval_name: str # org_model_precision (uid)
-     full_model: str # org/model (path on hub)
-     org: str
-     model: str
-     revision: str # commit hash, "" if main
-     results: dict
-     precision: Precision = Precision.Unknown
-     model_type: ModelType = ModelType.Unknown # Pretrained, fine tuned, ...
-     weight_type: WeightType = WeightType.Original # Original or Adapter
-     architecture: str = "Unknown"
-     license: str = "?"
-     likes: int = 0
-     num_params: int = 0
-     date: str = "" # submission date of request file
-     still_on_hub: bool = False
-
-     @classmethod
-     def init_from_json_file(self, json_filepath):
-         """Inits the result from the specific model result file"""
-         with open(json_filepath) as fp:
-             data = json.load(fp)
-
-         config = data.get("config")
-
-         # Precision
-         precision = Precision.from_str(config.get("model_dtype"))
-
-         # Get model and org
-         org_and_model = config.get("model_name", config.get("model_args", None))
-         org_and_model = org_and_model.split("/", 1)
-
-         if len(org_and_model) == 1:
-             org = None
-             model = org_and_model[0]
-             result_key = f"{model}_{precision.value.name}"
-         else:
-             org = org_and_model[0]
-             model = org_and_model[1]
-             result_key = f"{org}_{model}_{precision.value.name}"
-         full_model = "/".join(org_and_model)
-
-         still_on_hub, _, model_config = is_model_on_hub(
-             full_model, config.get("model_sha", "main"), trust_remote_code=True, test_tokenizer=False
-         )
-         architecture = "?"
-         if model_config is not None:
-             architectures = getattr(model_config, "architectures", None)
-             if architectures:
-                 architecture = ";".join(architectures)
-
-         # Extract results available in this file (some results are split in several files)
-         results = {}
-         for task in Tasks:
-             task = task.value
-
-             # We average all scores of a given metric (not all metrics are present in all files)
-             accs = np.array([v.get(task.metric, None) for k, v in data["results"].items() if task.benchmark == k])
-             if accs.size == 0 or any([acc is None for acc in accs]):
-                 continue
-
-             mean_acc = np.mean(accs) * 100.0
-             results[task.benchmark] = mean_acc
-
-         return self(
-             eval_name=result_key,
-             full_model=full_model,
-             org=org,
-             model=model,
-             results=results,
-             precision=precision,
-             revision= config.get("model_sha", ""),
-             still_on_hub=still_on_hub,
-             architecture=architecture
-         )
-
-     def update_with_request_file(self, requests_path):
-         """Finds the relevant request file for the current model and updates info with it"""
-         request_file = get_request_file_for_model(requests_path, self.full_model, self.precision.value.name)
-
-         try:
-             with open(request_file, "r") as f:
-                 request = json.load(f)
-             self.model_type = ModelType.from_str(request.get("model_type", ""))
-             self.weight_type = WeightType[request.get("weight_type", "Original")]
-             self.license = request.get("license", "?")
-             self.likes = request.get("likes", 0)
-             self.num_params = request.get("params", 0)
-             self.date = request.get("submitted_time", "")
-         except Exception:
-             print(f"Could not find request file for {self.org}/{self.model} with precision {self.precision.value.name}")
-
-     def to_dict(self):
-         """Converts the Eval Result to a dict compatible with our dataframe display"""
-         average = sum([v for v in self.results.values() if v is not None]) / len(Tasks)
-         data_dict = {
-             "eval_name": self.eval_name, # not a column, just a save name,
-             AutoEvalColumn.precision.name: self.precision.value.name,
-             AutoEvalColumn.model_type.name: self.model_type.value.name,
-             AutoEvalColumn.model_type_symbol.name: self.model_type.value.symbol,
-             AutoEvalColumn.weight_type.name: self.weight_type.value.name,
-             AutoEvalColumn.architecture.name: self.architecture,
-             AutoEvalColumn.model.name: make_clickable_model(self.full_model),
-             AutoEvalColumn.revision.name: self.revision,
-             AutoEvalColumn.average.name: average,
-             AutoEvalColumn.license.name: self.license,
-             AutoEvalColumn.likes.name: self.likes,
-             AutoEvalColumn.params.name: self.num_params,
-             AutoEvalColumn.still_on_hub.name: self.still_on_hub,
-         }
-
-         for task in Tasks:
-             data_dict[task.value.col_name] = self.results[task.value.benchmark]
-
-         return data_dict
-
-
- def get_request_file_for_model(requests_path, model_name, precision):
-     """Selects the correct request file for a given model. Only keeps runs tagged as FINISHED"""
-     request_files = os.path.join(
-         requests_path,
-         f"{model_name}_eval_request_*.json",
-     )
-     request_files = glob.glob(request_files)
-
-     # Select correct request file (precision)
-     request_file = ""
-     request_files = sorted(request_files, reverse=True)
-     for tmp_request_file in request_files:
-         with open(tmp_request_file, "r") as f:
-             req_content = json.load(f)
-             if (
-                 req_content["status"] in ["FINISHED"]
-                 and req_content["precision"] == precision.split(".")[-1]
-             ):
-                 request_file = tmp_request_file
-     return request_file
-
-
- def get_raw_eval_results(results_path: str, requests_path: str) -> list[EvalResult]:
-     """From the path of the results folder root, extract all needed info for results"""
-     model_result_filepaths = []
-
-     for root, _, files in os.walk(results_path):
-         # We should only have json files in model results
-         if len(files) == 0 or any([not f.endswith(".json") for f in files]):
-             continue
-
-         # Sort the files by date
-         try:
-             files.sort(key=lambda x: x.removesuffix(".json").removeprefix("results_")[:-7])
-         except dateutil.parser._parser.ParserError:
-             files = [files[-1]]
-
-         for file in files:
-             model_result_filepaths.append(os.path.join(root, file))
-
-     eval_results = {}
-     for model_result_filepath in model_result_filepaths:
-         # Creation of result
-         eval_result = EvalResult.init_from_json_file(model_result_filepath)
-         eval_result.update_with_request_file(requests_path)
-
-         # Store results of same eval together
-         eval_name = eval_result.eval_name
-         if eval_name in eval_results.keys():
-             eval_results[eval_name].results.update({k: v for k, v in eval_result.results.items() if v is not None})
-         else:
-             eval_results[eval_name] = eval_result
-
-     results = []
-     for v in eval_results.values():
-         try:
-             v.to_dict() # we test if the dict version is complete
-             results.append(v)
-         except KeyError: # not all eval values present
-             continue
-
-     return results
src/populate.py DELETED
@@ -1,58 +0,0 @@
- import json
- import os
-
- import pandas as pd
-
- from src.display.formatting import has_no_nan_values, make_clickable_model
- from src.display.utils import AutoEvalColumn, EvalQueueColumn
- from src.leaderboard.read_evals import get_raw_eval_results
-
-
- def get_leaderboard_df(results_path: str, requests_path: str, cols: list, benchmark_cols: list) -> pd.DataFrame:
-     """Creates a dataframe from all the individual experiment results"""
-     raw_data = get_raw_eval_results(results_path, requests_path)
-     all_data_json = [v.to_dict() for v in raw_data]
-
-     df = pd.DataFrame.from_records(all_data_json)
-     df = df.sort_values(by=[AutoEvalColumn.average.name], ascending=False)
-     df = df[cols].round(decimals=2)
-
-     # filter out if any of the benchmarks have not been produced
-     df = df[has_no_nan_values(df, benchmark_cols)]
-     return df
-
-
- def get_evaluation_queue_df(save_path: str, cols: list) -> list[pd.DataFrame]:
-     """Creates the different dataframes for the evaluation queues requestes"""
-     entries = [entry for entry in os.listdir(save_path) if not entry.startswith(".")]
-     all_evals = []
-
-     for entry in entries:
-         if ".json" in entry:
-             file_path = os.path.join(save_path, entry)
-             with open(file_path) as fp:
-                 data = json.load(fp)
-
-             data[EvalQueueColumn.model.name] = make_clickable_model(data["model"])
-             data[EvalQueueColumn.revision.name] = data.get("revision", "main")
-
-             all_evals.append(data)
-         elif ".md" not in entry:
-             # this is a folder
-             sub_entries = [e for e in os.listdir(f"{save_path}/{entry}") if os.path.isfile(e) and not e.startswith(".")]
-             for sub_entry in sub_entries:
-                 file_path = os.path.join(save_path, entry, sub_entry)
-                 with open(file_path) as fp:
-                     data = json.load(fp)
-
-                 data[EvalQueueColumn.model.name] = make_clickable_model(data["model"])
-                 data[EvalQueueColumn.revision.name] = data.get("revision", "main")
-                 all_evals.append(data)
-
-     pending_list = [e for e in all_evals if e["status"] in ["PENDING", "RERUN"]]
-     running_list = [e for e in all_evals if e["status"] == "RUNNING"]
-     finished_list = [e for e in all_evals if e["status"].startswith("FINISHED") or e["status"] == "PENDING_NEW_EVAL"]
-     df_pending = pd.DataFrame.from_records(pending_list, columns=cols)
-     df_running = pd.DataFrame.from_records(running_list, columns=cols)
-     df_finished = pd.DataFrame.from_records(finished_list, columns=cols)
-     return df_finished[cols], df_running[cols], df_pending[cols]
src/submission/check_validity.py DELETED
@@ -1,99 +0,0 @@
- import json
- import os
- import re
- from collections import defaultdict
- from datetime import datetime, timedelta, timezone
-
- import huggingface_hub
- from huggingface_hub import ModelCard
- from huggingface_hub.hf_api import ModelInfo
- from transformers import AutoConfig
- from transformers.models.auto.tokenization_auto import AutoTokenizer
-
- def check_model_card(repo_id: str) -> tuple[bool, str]:
-     """Checks if the model card and license exist and have been filled"""
-     try:
-         card = ModelCard.load(repo_id)
-     except huggingface_hub.utils.EntryNotFoundError:
-         return False, "Please add a model card to your model to explain how you trained/fine-tuned it."
-
-     # Enforce license metadata
-     if card.data.license is None:
-         if not ("license_name" in card.data and "license_link" in card.data):
-             return False, (
-                 "License not found. Please add a license to your model card using the `license` metadata or a"
-                 " `license_name`/`license_link` pair."
-             )
-
-     # Enforce card content
-     if len(card.text) < 200:
-         return False, "Please add a description to your model card, it is too short."
-
-     return True, ""
-
- def is_model_on_hub(model_name: str, revision: str, token: str = None, trust_remote_code=False, test_tokenizer=False) -> tuple[bool, str]:
-     """Checks if the model model_name is on the hub, and whether it (and its tokenizer) can be loaded with AutoClasses."""
-     try:
-         config = AutoConfig.from_pretrained(model_name, revision=revision, trust_remote_code=trust_remote_code, token=token)
-         if test_tokenizer:
-             try:
-                 tk = AutoTokenizer.from_pretrained(model_name, revision=revision, trust_remote_code=trust_remote_code, token=token)
-             except ValueError as e:
-                 return (
-                     False,
-                     f"uses a tokenizer which is not in a transformers release: {e}",
-                     None
-                 )
-             except Exception as e:
-                 return (False, "'s tokenizer cannot be loaded. Is your tokenizer class in a stable transformers release, and correctly configured?", None)
-         return True, None, config
-
-     except ValueError:
-         return (
-             False,
-             "needs to be launched with `trust_remote_code=True`. For safety reason, we do not allow these models to be automatically submitted to the leaderboard.",
-             None
-         )
-
-     except Exception as e:
-         return False, "was not found on hub!", None
-
-
- def get_model_size(model_info: ModelInfo, precision: str):
-     """Gets the model size from the configuration, or the model name if the configuration does not contain the information."""
-     try:
-         model_size = round(model_info.safetensors["total"] / 1e9, 3)
-     except (AttributeError, TypeError):
-         return 0 # Unknown model sizes are indicated as 0, see NUMERIC_INTERVALS in app.py
-
-     size_factor = 8 if (precision == "GPTQ" or "gptq" in model_info.modelId.lower()) else 1
-     model_size = size_factor * model_size
-     return model_size
-
- def get_model_arch(model_info: ModelInfo):
-     """Gets the model architecture from the configuration"""
-     return model_info.config.get("architectures", "Unknown")
-
- def already_submitted_models(requested_models_dir: str) -> set[str]:
-     """Gather a list of already submitted models to avoid duplicates"""
-     depth = 1
-     file_names = []
-     users_to_submission_dates = defaultdict(list)
-
-     for root, _, files in os.walk(requested_models_dir):
-         current_depth = root.count(os.sep) - requested_models_dir.count(os.sep)
-         if current_depth == depth:
-             for file in files:
-                 if not file.endswith(".json"):
-                     continue
-                 with open(os.path.join(root, file), "r") as f:
-                     info = json.load(f)
-                     file_names.append(f"{info['model']}_{info['revision']}_{info['precision']}")
-
-                     # Select organisation
-                     if info["model"].count("/") == 0 or "submitted_time" not in info:
-                         continue
-                     organisation, _ = info["model"].split("/")
-                     users_to_submission_dates[organisation].append(info["submitted_time"])
-
-     return set(file_names), users_to_submission_dates
src/submission/submit.py DELETED
@@ -1,119 +0,0 @@
- import json
- import os
- from datetime import datetime, timezone
-
- from src.display.formatting import styled_error, styled_message, styled_warning
- from src.envs import API, EVAL_REQUESTS_PATH, TOKEN, QUEUE_REPO
- from src.submission.check_validity import (
-     already_submitted_models,
-     check_model_card,
-     get_model_size,
-     is_model_on_hub,
- )
-
- REQUESTED_MODELS = None
- USERS_TO_SUBMISSION_DATES = None
-
- def add_new_eval(
-     model: str,
-     base_model: str,
-     revision: str,
-     precision: str,
-     weight_type: str,
-     model_type: str,
- ):
-     global REQUESTED_MODELS
-     global USERS_TO_SUBMISSION_DATES
-     if not REQUESTED_MODELS:
-         REQUESTED_MODELS, USERS_TO_SUBMISSION_DATES = already_submitted_models(EVAL_REQUESTS_PATH)
-
-     user_name = ""
-     model_path = model
-     if "/" in model:
-         user_name = model.split("/")[0]
-         model_path = model.split("/")[1]
-
-     precision = precision.split(" ")[0]
-     current_time = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
-
-     if model_type is None or model_type == "":
-         return styled_error("Please select a model type.")
-
-     # Does the model actually exist?
-     if revision == "":
-         revision = "main"
-
-     # Is the model on the hub?
-     if weight_type in ["Delta", "Adapter"]:
-         base_model_on_hub, error, _ = is_model_on_hub(model_name=base_model, revision=revision, token=TOKEN, test_tokenizer=True)
-         if not base_model_on_hub:
-             return styled_error(f'Base model "{base_model}" {error}')
-
-     if not weight_type == "Adapter":
-         model_on_hub, error, _ = is_model_on_hub(model_name=model, revision=revision, token=TOKEN, test_tokenizer=True)
-         if not model_on_hub:
-             return styled_error(f'Model "{model}" {error}')
-
-     # Is the model info correctly filled?
-     try:
-         model_info = API.model_info(repo_id=model, revision=revision)
-     except Exception:
-         return styled_error("Could not get your model information. Please fill it up properly.")
-
-     model_size = get_model_size(model_info=model_info, precision=precision)
-
-     # Were the model card and license filled?
-     try:
-         license = model_info.cardData["license"]
-     except Exception:
-         return styled_error("Please select a license for your model")
-
-     modelcard_OK, error_msg = check_model_card(model)
-     if not modelcard_OK:
-         return styled_error(error_msg)
-
-     # Seems good, creating the eval
-     print("Adding new eval")
-
-     eval_entry = {
-         "model": model,
-         "base_model": base_model,
-         "revision": revision,
-         "precision": precision,
-         "weight_type": weight_type,
-         "status": "PENDING",
-         "submitted_time": current_time,
-         "model_type": model_type,
-         "likes": model_info.likes,
-         "params": model_size,
-         "license": license,
-         "private": False,
-     }
-
-     # Check for duplicate submission
-     if f"{model}_{revision}_{precision}" in REQUESTED_MODELS:
-         return styled_warning("This model has been already submitted.")
-
-     print("Creating eval file")
-     OUT_DIR = f"{EVAL_REQUESTS_PATH}/{user_name}"
-     os.makedirs(OUT_DIR, exist_ok=True)
-     out_path = f"{OUT_DIR}/{model_path}_eval_request_False_{precision}_{weight_type}.json"
-
-     with open(out_path, "w") as f:
-         f.write(json.dumps(eval_entry))
-
-     print("Uploading eval file")
-     API.upload_file(
-         path_or_fileobj=out_path,
-         path_in_repo=out_path.split("eval-queue/")[1],
-         repo_id=QUEUE_REPO,
-         repo_type="dataset",
-         commit_message=f"Add {model} to eval queue",
-     )
-
-     # Remove the local file
-     os.remove(out_path)
-
-     return styled_message(
-         "Your request has been submitted to the evaluation queue!\nPlease wait for up to an hour for the model to show in the PENDING list."
-     )