aryn25 committed · Commit a6d1985 · verified · Parent: 4ccff65

Update README.md

Files changed: README.md (+87 −0)
sdk_version: 5.23.1
app_file: app.py
pinned: false
---
 
# Bias Bin: Bias Detection and Mitigation in Language Models

Bias Bin is an interactive Gradio-based web application for detecting and mitigating gender bias in narrative text. It uses a fine-tuned BERT model and counterfactual data augmentation techniques to highlight and analyze bias in NLP outputs.

🚀 **Live Demo:** Try it on Hugging Face Spaces.
## 🧠 Project Overview

This tool allows users to:

- **Detect gender bias** in input text using a BERT-based classification model.
- **Explore counterfactual predictions** by swapping gendered terms.
- **Visualize bias scores** to understand model behavior.
- **Demonstrate bias mitigation** through gender-swapped text examples.

This project was developed as part of university coursework in Deep Learning & Generative AI, focusing on fairness, explainability, and responsible AI.
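The gender-term swapping behind the counterfactual predictions can be illustrated with a small, self-contained sketch. The term map and function below are hypothetical, not the app's actual code; a real implementation would also need part-of-speech handling for ambiguous words like "her"/"his".

```python
import re

# Illustrative subset of a bidirectional gendered-term map.
# Note: "her"/"his" are ambiguous in English (object vs. possessive),
# so a real implementation would disambiguate with POS tags.
GENDER_MAP = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "his": "her",
    "man": "woman", "woman": "man",
}

def swap_gendered_terms(text: str) -> str:
    """Return a counterfactual copy of `text` with gendered words swapped."""
    def _swap(match):
        word = match.group(0)
        swapped = GENDER_MAP.get(word.lower(), word)
        # Preserve the original token's capitalization.
        return swapped.capitalize() if word[0].isupper() else swapped

    pattern = r"\b(" + "|".join(GENDER_MAP) + r")\b"
    return re.sub(pattern, _swap, text, flags=re.IGNORECASE)

print(swap_gendered_terms("He told the woman a story."))
# → "She told the man a story."
```

Word boundaries (`\b`) keep the swap from firing inside longer words (e.g., the "man" inside "woman" is left alone).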
## 📁 Repository Contents

- `app.py` – Main Python file that launches the Gradio web app.
- `Evaluation&Results.ipynb` – Notebook with experiments, model evaluations, and visualizations.
- `fine_tuned_model.zip` – Zip archive containing the fine-tuned BERT model (must be extracted before running).
- `requirements.txt` – List of Python dependencies.
## ⚙️ Setup Instructions

1. **Clone the repository**

   ```bash
   git clone https://huggingface.co/spaces/aryn25/bias.bin
   cd bias.bin
   ```

2. **Install dependencies**

   ```bash
   pip install -r requirements.txt
   ```

3. **Extract the model**

   Unzip `fine_tuned_model.zip` and place the extracted folder in the project root.

4. **Run the app**

   ```bash
   python app.py
   ```

5. **Open in browser**

   Visit the Gradio URL printed in the terminal (e.g., http://127.0.0.1:7860/).
## 📊 Methodology

- **Model:** Fine-tuned BERT classifier trained on gender-labeled narrative datasets.
- **Bias Detection:** Uses counterfactual data augmentation by swapping gendered words (e.g., “he” → “she”).
- **Metrics:** Bias scores are computed from prediction discrepancies between original and counterfactual samples.
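As a sketch of the metric described above, a bias score can be computed as the mean absolute gap between a classifier's probabilities on original and gender-swapped inputs. The function name and the probability values below are hypothetical, shown only to make the idea concrete.

```python
def bias_score(original_probs, counterfactual_probs):
    """Mean absolute prediction discrepancy between each original text
    and its gender-swapped counterfactual: 0.0 means the classifier is
    insensitive to the swap; larger values indicate more measured bias."""
    if len(original_probs) != len(counterfactual_probs):
        raise ValueError("inputs must be matched pairs")
    gaps = [abs(o - c) for o, c in zip(original_probs, counterfactual_probs)]
    return sum(gaps) / len(gaps)

# Hypothetical positive-class probabilities for three sentence pairs.
original = [0.91, 0.40, 0.77]
swapped = [0.58, 0.35, 0.70]
print(round(bias_score(original, swapped), 2))  # → 0.15
```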
## 📚 References

This project builds on foundational and peer-reviewed research on:

- BERT and Transformer models
- Gender bias in NLP
- Counterfactual data augmentation
- Bias mitigation techniques

The full citation list is available in the project report.
## 📌 Authors

Created by Aryan N. Salge and team as part of the Deep Learning & Generative AI coursework at the National College of Ireland.
## 📄 License

This project is intended for educational and research purposes. Please cite appropriately if you use or adapt the work.
  Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference