mariamSoub and lmarty committed on
Commit
6ac33ef
·
1 Parent(s): 4afd940

Update app.py (#1)


- Update app.py (fadf5f41ce68b2cbcb63e26aae07bdbdd04f6bbc)


Co-authored-by: Leah Marty <lmarty@users.noreply.huggingface.co>

Files changed (1)
  1. app.py +3 -3
app.py CHANGED
@@ -441,7 +441,7 @@ def analyze_response(user_response):
 
     print("\nRegard Analysis:")
     regard_label = analysis_result["regard_label"]
-    regard_score = analysis_result["regard_score"]
+    regard_score = f'{analysis_result["regard_score"]:.2f}'
 
     # MNLI fairness signal
     mnli_rating = mnli_bias_score(user_response)
@@ -583,8 +583,8 @@ with gr.Blocks(title="Bias Detection & Mitigation Tool") as demo:
     gr.Markdown("""
     ### About
     This tool uses multiple models to detect bias in text:
-    - LLaMA for bias classification
-    - Regard classifier for social perceptions (is the text negative or positive?)
+    - LLaMA performs bias classification: the bias label indicates whether the response is biased, and the bias type returns the kind of social bias found and the demographic group affected, if biased.
+    - The Regard classifier indicates the social perception of the response (is the text negative or positive?)
     - MNLI for fairness scoring
     - Fairlearn for demographic metrics
     """)
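The change at line 444 rounds the regard score to two decimal places for display. Note that format-spec braces like `{value:.2f}` are only valid inside an f-string; a bare `{…:.2f}` expression is a syntax error in Python. A minimal sketch of the working form, assuming `analysis_result` is a dict holding a float under `"regard_score"` (hypothetical sample values, not the app's real output):

```python
# Hypothetical analysis_result, standing in for the app's real classifier output.
analysis_result = {"regard_label": "negative", "regard_score": 0.87654}

# Format specs such as :.2f must appear inside an f-string;
# this rounds the float to a two-decimal string for display.
regard_score = f'{analysis_result["regard_score"]:.2f}'
print(regard_score)  # → 0.88
```

Keeping the raw float and formatting only at print time (`print(f"Score: {score:.2f}")`) is an alternative if the numeric value is needed downstream.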