---
license: apache-2.0
tags:
- xgboost
- classification
- grant-matching
- win-probability
- nonprofit
datasets:
- ArkMaster123/grantpilot-training-data
language:
- en
---

# GrantPilot Win Probability Classifier

**XGBoost classifier for predicting grant funding success**

This model predicts the probability that a nonprofit organization will win a specific grant, based on embedding similarity and structured features.

## Performance Metrics

| Metric | Score | Target |
|--------|-------|--------|
| **AUC-ROC** | **0.837** | > 0.75 |
| Brier Score | 0.167 | < 0.15 |
| Accuracy | 72.1% | - |
| Precision | 47.4% | - |
| Recall | 79.9% | - |
| F1 Score | 0.595 | - |

### Key Highlights
- **AUC-ROC of 0.837** exceeds the 0.75 target by roughly 12% (relative)
- **High recall (80%)** ensures we catch most winning opportunities
- Calibrated with isotonic regression for reliable probability estimates; a reproduction sketch follows below
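
These numbers can be reproduced from held-out predictions with standard scikit-learn metrics. A minimal sketch, assuming hypothetical arrays `y_test` (binary outcomes) and `probs` (calibrated win probabilities from the Usage snippet below); the 0.5 decision threshold is also an assumption, since the card does not state one:

```python
from sklearn.metrics import brier_score_loss, precision_recall_fscore_support, roc_auc_score

# y_test: 0/1 outcomes; probs: calibrated win probabilities in [0, 1].
# Both names are hypothetical stand-ins for your own evaluation arrays.
auc = roc_auc_score(y_test, probs)        # reported: 0.837 (target > 0.75)
brier = brier_score_loss(y_test, probs)   # reported: 0.167 (target < 0.15)

preds = (probs >= 0.5).astype(int)        # assumed threshold; not stated in the card
precision, recall, f1, _ = precision_recall_fscore_support(
    y_test, preds, average="binary"
)
accuracy = (preds == y_test).mean()
```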

## Model Architecture

```
Input Features:
- cosine_similarity (from fine-tuned embedding model)
- funder_type (categorical)
- source (categorical: NIH, NSF)
- log_amount (grant amount)
- org_text_length
- grant_text_length

-> XGBoost Classifier
-> Isotonic Calibration
-> Win Probability (0-100%)
```
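
The card does not ship feature-engineering code, so the following is a rough sketch of how these six inputs might be assembled. It assumes the related embedding model loads as a SentenceTransformer, that `log_amount` means `log1p` of the dollar amount, and that the categorical encodings (`funder_type_id`, `source_id`) are integer placeholders; the real encodings live in the training pipeline:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Assumption: the related embedding model is sentence-transformers compatible.
embedder = SentenceTransformer("ArkMaster123/grantpilot-embedding")

def build_features(org_text, grant_text, funder_type_id, source_id, amount):
    """Assemble one feature row in the order listed under Model Architecture."""
    org_vec, grant_vec = embedder.encode([org_text, grant_text])
    cosine_similarity = np.dot(org_vec, grant_vec) / (
        np.linalg.norm(org_vec) * np.linalg.norm(grant_vec)
    )
    log_amount = np.log1p(amount)  # assumed transform; the card only says "log_amount"
    return np.array([[cosine_similarity, funder_type_id, source_id,
                      log_amount, len(org_text), len(grant_text)]])
```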

## Usage

```python
import xgboost as xgb
import pickle
from huggingface_hub import hf_hub_download

# Download model files
model_path = hf_hub_download("ArkMaster123/grantpilot-classifier", "xgboost_model.json")
scaler_path = hf_hub_download("ArkMaster123/grantpilot-classifier", "scaler.pkl")
calibrator_path = hf_hub_download("ArkMaster123/grantpilot-classifier", "isotonic_calibrator.pkl")

# Load model
model = xgb.Booster()
model.load_model(model_path)

with open(scaler_path, "rb") as f:
    scaler = pickle.load(f)

with open(calibrator_path, "rb") as f:
    calibrator = pickle.load(f)

# Predict. `features` is a 2-D array with one row per org-grant pair and
# columns in the order listed under Model Architecture (see build_features above).
features_scaled = scaler.transform(features)
dmatrix = xgb.DMatrix(features_scaled)
raw_pred = model.predict(dmatrix)
win_probability = calibrator.predict(raw_pred) * 100
```

## Training Details

- **Hardware**: NVIDIA H100 80GB
- **Training Data**: 59K training pairs, 7.4K validation, 6.6K test
- **XGBoost Parameters** (used in the sketch below):
  - max_depth: 6
  - learning_rate: 0.1
  - n_estimators: 200 (early stopped at 18)
  - subsample: 0.8
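
For reference, a minimal training sketch with these parameters. The split variables (`X_train`, `y_train`, `X_val`, `y_val`) are hypothetical pre-scaled arrays, and the early-stopping patience of 10 is an assumption; the card only notes that training stopped at round 18:

```python
import xgboost as xgb
from sklearn.isotonic import IsotonicRegression

# Hypothetical pre-scaled splits: X_train/y_train, X_val/y_val.
clf = xgb.XGBClassifier(
    max_depth=6,
    learning_rate=0.1,
    n_estimators=200,
    subsample=0.8,
    eval_metric="auc",
    early_stopping_rounds=10,  # assumed patience; card only says "early stopped at 18"
)
clf.fit(X_train, y_train, eval_set=[(X_val, y_val)], verbose=False)

# Calibrate raw scores on the validation split with isotonic regression.
calibrator = IsotonicRegression(out_of_bounds="clip")
calibrator.fit(clf.predict_proba(X_val)[:, 1], y_val)
```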

## Intended Use

This model is designed to:
- Predict win probability for grant-organization matches
- Help nonprofits prioritize grant applications
- Provide confidence scores for grant recommendations

## Limitations

- Trained on federal grants (NIH, NSF); accuracy may vary for other funders
- Requires the fine-tuned embedding model for the cosine_similarity feature
- Best used in conjunction with human judgment

## Related Models

- [ArkMaster123/grantpilot-embedding](https://huggingface.co/ArkMaster123/grantpilot-embedding) - Fine-tuned embedding model (required for similarity feature)

---

## V2.0 Update: Foundation Grants Support (February 2026)

### What Changed

V2 extends the model from **federal-only (NIH/NSF)** to also support **foundation grants** (990-PF data from 37,684 private foundations). The training data grew from ~42K federal pairs to **811K combined pairs** across three sources.

### Training Data (V2)

| Split | Foundation | NIH | NSF | Total |
|-------|-----------|-----|-----|-------|
| Train | 584,802 | 51,434 | 12,638 | 648,874 |
| Val | 73,240 | 6,445 | 1,599 | 81,284 |
| Test | 73,022 | 6,384 | 1,588 | 80,994 |

Data is stratified by source so each split has proportional representation.
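
A stratified split like this is straightforward with scikit-learn. A sketch, assuming a hypothetical DataFrame `pairs` with a `source` column; the 80/10/10 ratio matches the table above, and the seed is arbitrary:

```python
from sklearn.model_selection import train_test_split

# First carve out train (80%), then split the rest evenly into val/test (10%/10%).
# Stratifying on `source` keeps each split's source mix proportional.
train, rest = train_test_split(pairs, test_size=0.2, stratify=pairs["source"], random_state=42)
val, test = train_test_split(rest, test_size=0.5, stratify=rest["source"], random_state=42)
```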

### V2 Performance

| Metric | V1 (Federal Only) | V2 (Combined) | Change |
|--------|-------------------|---------------|--------|
| **Overall AUC-ROC** | 0.837 | **0.997** | +19.1% |
| **Federal AUC** | 0.837 | **0.913** | +9.1% |
| Brier Score | 0.167 | **0.014** | -91.6% |
| Accuracy | 72.1% | **98.3%** | +26.2 pp |
| Precision | 47.4% | **97.1%** | +49.7 pp |
| Recall | 79.9% | **99.6%** | +19.7 pp |
| F1 Score | 0.595 | **0.983** | +65.2% |

Changes for AUC-ROC, Brier Score, and F1 are relative; changes for Accuracy, Precision, and Recall are in percentage points (pp).

### Federal Regression Check: PASS

Federal-only AUC improved from 0.837 to **0.913**, well above the 0.817 minimum threshold. Adding foundation data did not degrade federal performance; it improved it.
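
The check itself is simple to express: score only the federal rows of the test set and assert the AUC stays above the floor. A sketch, with a hypothetical `test` DataFrame (with `source` and `label` columns) and a `probs` array aligned to it:

```python
from sklearn.metrics import roc_auc_score

# Keep only federal (NIH/NSF) test rows and their predictions.
federal_mask = test["source"].isin(["NIH", "NSF"]).to_numpy()
federal_auc = roc_auc_score(test.loc[federal_mask, "label"], probs[federal_mask])

assert federal_auc >= 0.817, f"Federal regression: AUC {federal_auc:.3f} below floor"
print(f"Federal AUC: {federal_auc:.3f}")  # V2 reports 0.913
```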

### Version Tags

- `v1.0-federal-only`: Original federal-only model (NIH + NSF)
- `v2.0-with-foundations`: Combined federal + foundation model (pinning example below)
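
Assuming these are git tags on this repo, you can pin a specific version by passing the tag as the `revision` argument to `hf_hub_download`:

```python
from huggingface_hub import hf_hub_download

# Pin the download to the V1 tag instead of the default branch head (V2).
model_path = hf_hub_download(
    "ArkMaster123/grantpilot-classifier",
    "xgboost_model.json",
    revision="v1.0-federal-only",
)
```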

### Foundation Data Source

Foundation grant data is sourced from IRS 990-PF e-filings via GivingTuesday's open dataset, covering 680,970 grants from 37,684 private foundations (2024 filing year). 88% of grants include purpose text descriptions.