language:
- en
---

**Technical Specifications Document is available at**: https://docs.google.com/document/d/1eLUFC-8FtJkaQT9dUhjwRRKn8bXrHaZsXdMlIvCoeT4/edit?usp=sharing

# Non-Profit Mapping Project Documentation: Religious Orgs Segmentation

Author: Zilun Lin

# 1. Approach

## Definition

We use the following definition for categorizing religious orgs, provided in the academic literature:

“Religious organizations are organizations whose identity and mission are derived from a religious or spiritual tradition and which operate as registered or unregistered, nonprofit, voluntary entities.” ([source](https://www.montclair.edu/profilepages/media/11259/user/religiousorganizationsglobalencyclope.pdf))

This definition is operationalized in how we prompt GPT-4 to classify the training and testing datasets: we give it information on an org’s name, mission statement, and key activities, and prompt it to find mentions/wording/terminology that reveal the org’s religious affiliation.
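For illustration, here is a minimal sketch of that classification step. It assumes the `openai` Python client; the prompt wording, label set, and `classify_org` helper are placeholders rather than the notebook’s exact code.

```python
# Minimal sketch of the GPT-4 labeling step; prompt and labels are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

LABELS = ["Christianity", "Islam", "Judaism", "Buddhism", "Hinduism",
          "Other religion", "Not religious"]

def classify_org(name: str, mission: str, activities: str) -> str:
    """Ask GPT-4 for a single religion label based on the org's text fields."""
    prompt = (
        f"Organization name: {name}\n"
        f"Mission statement: {mission}\n"
        f"Key activities: {activities}\n\n"
        "Look for mentions, wording, or terminology that reveal a religious "
        f"affiliation. Answer with exactly one label from: {', '.join(LABELS)}."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic labels for dataset construction
    )
    return response.choices[0].message.content.strip()
```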
## Religious Recipient Orgs

Using the 990 CN file and the BERT classifier:

(1) Categorize orgs by religion (if any) using name (`23F990-LINE-C`), mission statement (`23F990-PART-03-LINE-1`) and activities (`20F990-PART-03-LINE-4A`, `4B`, `4C`).

Using the 990 EZ file and the BERT classifier:

(1) Categorize orgs by religion (if any) using name (`23F990-LINE-C`), primary exempt purpose (`F9_03_PZ_MISSIODESCES`) and program accomplishment description (`F9_03_PZ_PRSRACACDEES`).
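To make the classifier input concrete, here is a hedged sketch of how these fields could be combined into a single text string; the toy rows and the `build_input_text` helper are illustrative, and the notebooks may join the fields differently.

```python
# Illustrative only: combining the 990 text fields above into one input string.
import pandas as pd

# Toy rows keyed by the 990 field IDs listed above
df = pd.DataFrame({
    "23F990-LINE-C": ["St. Mary's Food Pantry"],
    "23F990-PART-03-LINE-1": ["Feeding the hungry in our parish community."],
    "20F990-PART-03-LINE-4A": ["Weekly food distribution."],
})

def build_input_text(row: pd.Series) -> str:
    # Name, mission, and activities in one string for the classifier
    return (
        f"Name: {row['23F990-LINE-C']} | "
        f"Mission: {row['23F990-PART-03-LINE-1']} | "
        f"Activities: {row['20F990-PART-03-LINE-4A']}"
    )

df["text"] = df.apply(build_input_text, axis=1)
print(df["text"][0])
```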
# 2. Code documentation

The code has two parts. The first is a notebook, stored on Databricks, where the fine-tuning and testing datasets are constructed and uploaded to Hugging Face. The second is a Google Colab notebook where the actual fine-tuning of the models is done and their accuracy is tested. We use Google Colab because the fine-tuning libraries could not be run in our Databricks environment; Colab also has the upside of giving access (for a small fee) to the much more powerful A100 GPUs, which are roughly 10x faster for fine-tuning.

All of the notebooks should be reasonably documented. Please message Zilun Lin if there are any mistakes or missing documentation.
## Classifying and formatting datasets

This notebook randomly samples from the 990 datamart and classifies the sampled orgs using GPT-4. It also generates a curated dataset of artificial orgs associated with under-represented religions. These two datasets are combined, formatted into an instruct-prompt-output format suitable for fine-tuning, and uploaded to Hugging Face; a sketch of this step follows the notebook link. The final dataset has over 2k examples for training and validation, and 500 examples for testing.

[https://dbc-3a4d04f2-8cab.cloud.databricks.com/editor/notebooks/1182041857993717?o=4203893953353865](https://dbc-3a4d04f2-8cab.cloud.databricks.com/editor/notebooks/1182041857993717?o=4203893953353865)
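The sketch below shows roughly what that formatting and upload step looks like; the column names, example rows, and repo ID are placeholders, not the notebook’s exact code.

```python
# Sketch of the instruct-prompt-output formatting and Hugging Face upload.
from datasets import Dataset

# Toy examples in the instruct-prompt-output shape; the real dataset has over
# 2k train/validation examples plus 500 held-out test examples.
records = [
    {
        "instruction": "Classify the religious affiliation of this organization.",
        "prompt": "Name: St. Mary's Food Pantry | Mission: Feeding the hungry in our parish.",
        "output": "Christianity",
    },
    {
        "instruction": "Classify the religious affiliation of this organization.",
        "prompt": "Name: Riverside Science Club | Mission: After-school STEM tutoring.",
        "output": "Not religious",
    },
]

ds = Dataset.from_list(records).train_test_split(test_size=0.5, seed=42)
ds.push_to_hub("your-org/religious-orgs-instruct")  # placeholder repo ID
```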
## Fine-tuning the LLM and testing for accuracy

We download the fine-tuning dataset from Hugging Face and fine-tune a set of LLMs. The resulting models are uploaded to Hugging Face, and we test each model’s accuracy on an unseen testing dataset. (A condensed fine-tuning sketch follows the links below.)

(Llama models)

[https://colab.research.google.com/drive/1tZBVcQ_XQeb11HUBKxKjPTGBhMwJCiDF?usp=sharing](https://colab.research.google.com/drive/1tZBVcQ_XQeb11HUBKxKjPTGBhMwJCiDF?usp=sharing)

(BERT models)

[https://colab.research.google.com/drive/1OaV9wwqCzWqRXFmKzzDYW3Hwq_zwaUE5?usp=sharing](https://colab.research.google.com/drive/1OaV9wwqCzWqRXFmKzzDYW3Hwq_zwaUE5?usp=sharing)
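As a rough outline of what the BERT notebook does, here is a minimal fine-tuning sketch using the Transformers `Trainer`; the dataset repo ID, column names, and hyperparameters are assumptions, not the notebook’s exact settings.

```python
# Condensed BERT fine-tuning sketch; repo IDs, columns, and hyperparameters
# are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("your-org/religious-orgs-instruct")  # placeholder repo ID
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Map string religion labels to integer class IDs
labels = sorted(set(dataset["train"]["output"]))
label2id = {label: i for i, label in enumerate(labels)}

def tokenize(batch):
    enc = tokenizer(batch["prompt"], truncation=True, padding="max_length")
    enc["labels"] = [label2id[o] for o in batch["output"]]
    return enc

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels)
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-religious-orgs", num_train_epochs=3),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
)
trainer.train()
model.push_to_hub("your-org/bert-religious-orgs")  # placeholder model repo ID
```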
# 3. Outputs and Results

The two notebooks’ final outputs are two models:

- BERT base curated
- Llama 3.2 3B curated

The curated models are fine-tuned using the combined dataset of actual and artificial orgs, and should therefore have better accuracy on under-represented orgs.

The following are the accuracy measures (higher is better) for the fine-tuned/trained models (a sketch of how these scores are computed follows the list):

- BERT base curated:
  - Weighted F1 score: 0.93
  - Macro F1 score: 0.76
- Llama 3.2 3B curated:
  - Weighted F1 score: 0.85
  - Macro F1 score: 0.27
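The notebooks compute these scores as part of testing, but a standard computation looks like the following; `y_true` and `y_pred` are toy stand-ins for the 500-example test set. Macro F1 averages per-class scores equally, so it drops sharply when rare affiliations are misclassified, while weighted F1 is dominated by the frequent classes.

```python
from sklearn.metrics import f1_score

# Toy stand-ins for the held-out test labels and model predictions
y_true = ["Christianity", "Judaism", "Not religious", "Christianity", "Islam"]
y_pred = ["Christianity", "Not religious", "Not religious", "Christianity", "Islam"]

print(f1_score(y_true, y_pred, average="weighted"))  # weighted by class frequency
print(f1_score(y_true, y_pred, average="macro"))     # every class counts equally
```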
The following are the inference speed measures (lower is better) for the fine-tuned/trained models:

- BERT base curated, time taken for 500 inferences: 3 seconds
- Llama 3.2 3B curated, time taken for 500 inferences: ~10 minutes
When comparing models for text classification, BERT stands out for its fast inference speed and strong classification performance, as indicated by its F1 scores. The F1 score balances precision and recall, offering a robust metric for evaluating classification tasks. BERT outperforms Llama on both the weighted and the macro F1 metrics, suggesting that it has high accuracy in general and is also adept at categorizing the less frequent religious affiliations.

Aside from less accurate classification, the larger Llama 3.2 3B model has another major drawback: its inference is significantly slower, and hosting the model is resource-intensive. For example, a linear extrapolation (200,000 / 500 × 10 minutes) suggests that categorizing 200,000 organizations with Llama 3.2 3B could take approximately 4,000 minutes. This makes it less practical for scenarios where processing speed is a priority.

In comparison, BERT is much faster at inference thanks to its much smaller, streamlined architecture, which is well suited to classification tasks. This makes BERT the more practical choice for applications requiring both speed and strong classification performance.
# 4. Deployment

The chosen BERT model is now hosted on MLflow (Databricks) in the model registry under the name `religious_orgs_model`, and has been released to the public under the Apache 2.0 license on [Huggingface](https://huggingface.co/GivingTuesday/religious_org_v1). The processed data will be available for download in a data mart or API.
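The released model can be loaded straight from the Hub for inference. A minimal sketch is below; the pipeline task and input format are assumptions, so check the model card for the expected input.

```python
from transformers import pipeline

# Load the publicly released classifier from the Hugging Face Hub
classifier = pipeline("text-classification", model="GivingTuesday/religious_org_v1")

print(classifier("Name: St. Mary's Food Pantry | Mission: Feeding the hungry in our parish."))
```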
The API endpoint will output five fields: three from the BERT classification and two based on 1023 EZ data availability.

BERT Natural Language Outputs:

(1) Religious classification (and its classification probability)

(2) Classification probabilities for the designated religious affiliations

(3) Classification probability for whether the organization is religious or not
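To illustrate, a hypothetical response covering the three BERT fields might look like this; the key names and values are placeholders, not the endpoint’s actual schema.

```python
# Hypothetical API response shape for the three BERT outputs above
example_response = {
    "religious_classification": "Christianity",  # field (1)
    "classification_probability": 0.97,          # field (1) probability
    "affiliation_probabilities": {               # field (2)
        "Christianity": 0.97, "Judaism": 0.01, "Islam": 0.01, "Not religious": 0.01,
    },
    "is_religious_probability": 0.99,            # field (3)
}
```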