AcharO committed
Commit
3e6db92
·
1 Parent(s): 50a83a5

feat(sw): add missing lexicon entries flagged by tester + update metrics


- mwanaume hafai / mwanaume hafai kulia — male emotional suppression
- bwanake — possessive husband term caught after correction
- msichana / wasichana / wavulana / mvulana — girl/boy terms added (warn)
- Updated CLAUDE.md metrics: EN F1=0.885, FR F1=0.793, KI F1=0.368
- Added HF Space deployment lesson (factory reset procedure)

Eval: SW F1=0.819, EN F1=0.885, FR F1=0.793, KI F1=0.368 (6/6 tests)

Files changed (1)
  1. CLAUDE.md +135 -0
CLAUDE.md ADDED
@@ -0,0 +1,135 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

---

## Commands

### Local dev (no Docker)
```bash
make run       # API (port 8080) + Next.js web (port 3001); Ctrl+C stops both
make run-api   # FastAPI only at :8080
make run-web   # Next.js only at :3001 (requires API running separately)
make dev-ui    # Streamlit review UI at :8501 (uses venv/bin/streamlit)
make dev-test  # pytest locally (skips slow tests)
make dev-eval  # python3 run_evaluation.py (F1/Precision/Recall per language)
```

### Docker (recommended for CI parity)
```bash
make build   # Build Docker image
make test    # Run all tests in Docker
make eval    # Run evaluation in Docker
make up      # API (:8000) + Streamlit UI (:8501)
make up-web  # API (:8000) + Next.js (:3000)
make down    # Stop all services
```

### Individual test runs
```bash
python3 -m pytest tests/ -v -k "not slow"   # all fast tests
python3 -m pytest tests/test_system.py -v   # 5-test smoke suite (must stay green)
python3 run_evaluation.py                   # F1 eval (all 4 languages)
python3 run_evaluation.py --fairness        # + AIBRIDGE fairness metrics
```

### Code quality
```bash
make format  # black + isort
make lint    # flake8
```

---

## Architecture

This is a **multilingual gender bias detection and correction engine** targeting East African languages (Swahili, Kikuyu, English, French). The system has three tiers (detection, correction API, frontends), plus shared core modules and central configuration:

### 1. Detection pipeline (`eval/`)

`BiasDetector` (`eval/bias_detector.py`) orchestrates three stages:
1. **Rules-based matching** — loads lexicons from `rules/lexicon_{lang}_v3.csv`, matches biased terms using `DetectorPatterns` (`eval/detector_patterns.py`).
2. **Context gating** — `ContextChecker` (`core/context_checker.py`, re-exported via `eval/context_checker.py`) decides whether to suppress a match. The `ContextCondition` enum defines all valid gate conditions: `quote`, `historical`, `proper_noun`, `biographical`, `statistical`, `medical`, `counter_stereotype`, `legal`, `artistic`, `organization`. The `avoid_when` field in lexicon CSVs must use **pipe-separated** enum values — no prose.
3. **ML fallback** — when rules find nothing, `ml_classifier.py` runs `juakazike/sw-bias-classifier-v1` (afro-xlmr-base fine-tuned on 51K Swahili rows). ML edits have `severity=ml_fallback` and `needs_review=True`.

Swahili noun-class agreement is tracked by `NgeliTracker` (`eval/ngeli_tracker.py`).

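The precedence across these stages (rules first, context gate second, ML only when rules find nothing) can be sketched with toy data. The lexicon row and its replacement string below are invented for illustration; the real `BiasDetector`, lexicon schema, and classifier calls differ in detail:

```python
from dataclasses import dataclass


@dataclass
class Edit:
    span: str
    replacement: str
    severity: str = "warn"
    needs_review: bool = False


def detect(text, lexicon, active_conditions, ml_classify):
    """Toy version of the rules -> context gate -> ML fallback precedence."""
    edits = []
    for term, (replacement, avoid_when) in lexicon.items():
        if term in text:
            # Context gate: suppress the match when any avoid_when
            # condition (pipe-separated in the real CSVs) is active.
            if avoid_when & active_conditions:
                continue
            edits.append(Edit(term, replacement))
    if not edits and ml_classify(text):
        # ML fallback edits are low-trust: always flagged for human review.
        edits.append(Edit(text, text, severity="ml_fallback", needs_review=True))
    return edits


# Hypothetical lexicon row: term -> (suggested replacement, gate conditions).
lexicon = {"mwanaume hafai kulia": ("mtu anaweza kulia", {"quote", "artistic"})}
hits = detect("mwanaume hafai kulia mbele ya watu", lexicon, set(), lambda t: False)
```

A match inside a quote (gate condition `quote` active) would be suppressed, and a text with no rule hits falls through to the ML classifier.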
### 2. Correction API (`api/`)

```
api/main.py          # HTTP routing only (FastAPI); validates request, delegates
api/service.py       # Core rewrite logic: rules → semantic check → ML fallback
api/rules_engine.py  # apply_rules_on_spans(), build_reason() — closure-safe, module-level cache
api/schemas.py       # RewriteRequest, RewriteResponse (Pydantic)
api/audit.py         # Appends JSONL audit log after each request
```

**Rewrite decision flow** (`api/service.py`):
1. `apply_rules_on_spans()` → produces edits.
2. If the rewrite diverges semantically (composite score < `JUAKAZI_SEMANTIC_THRESHOLD`, default 0.70), revert to original (`source=preserved`).
3. If no rules matched, run ML rewriter (`api/ml_rewriter.py`); same semantic gate applies.
4. `build_reason()` produces the human-readable `reason` field.

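Steps 2 and 3 share one semantic gate. A minimal sketch, using a token-overlap stand-in for the composite score (the real scoring lives in `core/semantic_preservation.py` and the `source` labels here are illustrative):

```python
def gated_rewrite(original, rewrite, score_fn, threshold=0.70):
    """Revert to the original text when the rewrite drifts semantically.

    score_fn stands in for SemanticPreservationMetrics' composite score;
    any callable returning a 0..1 similarity works for the sketch.
    """
    if rewrite == original:
        return original, "preserved"
    if score_fn(original, rewrite) < threshold:
        # Semantic gate failed: keep the user's text untouched.
        return original, "preserved"
    return rewrite, "rules"


def jaccard(a, b):
    # Toy similarity: token-set overlap, just to exercise the gate.
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb)


text, source = gated_rewrite("watoto wote wanacheza", "watoto wanacheza", jaccard)
```

Here the rewrite drops a token, the overlap score (2/3 ≈ 0.67) falls below 0.70, and the gate reverts to the original with `source=preserved`.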
### 3. Frontends

| Frontend | Path | Port | Purpose |
|---|---|---|---|
| Next.js web app | `apps/web/` | 3001 (local) / 3000 (Docker) | Public demo; proxies `/api/*` to FastAPI in dev |
| Streamlit review UI | `ui/` | 8501 | Internal annotation review |

Next.js dev proxy: in dev mode `next.config.ts` rewrites `/api/*` → `http://127.0.0.1:8080/*`, so the web app hits the local FastAPI without any `.env` setup.

### 4. Shared core (`core/`)

```
core/context_checker.py        # ContextChecker, ContextCondition — shared by eval and api
core/rules_loader.py           # Lexicon CSV loading
core/semantic_preservation.py  # SemanticPreservationMetrics (composite score for rewrite quality)
```

### 5. Configuration (`config.py`)

Centralises:
- `DataVersions` — lexicon `v3`, ground truth `v5` (Kikuyu: `v8`).
- `RegionDialects` — valid `region_dialect` values for API requests and audit logs.
- `get_semantic_threshold()` — reads `JUAKAZI_SEMANTIC_THRESHOLD` env var.
- `REWRITE_CONFIDENCE_BY_SOURCE` — confidence scores per rewrite source.

Use `config.lexicon_filename(lang)` and `config.ground_truth_filename(lang)` to get the correct versioned paths.

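For illustration, a sketch of what the versioned-path helpers encapsulate. The ground-truth filename pattern and the `ki` language code are assumptions here; the real helpers in `config.py` are the source of truth. The point is that the Kikuyu exception is easy to get wrong by hand, so call sites must go through the helpers:

```python
# Assumed versions, mirroring DataVersions in config.py.
LEXICON_VERSION = "v3"
GROUND_TRUTH_VERSIONS = {"default": "v5", "ki": "v8"}  # Kikuyu ground truth is ahead


def lexicon_filename(lang: str) -> str:
    # Mirrors the naming documented for rules/: lexicon_{lang}_v3.csv
    return f"lexicon_{lang}_{LEXICON_VERSION}.csv"


def ground_truth_filename(lang: str) -> str:
    # Hypothetical pattern; only the version-per-language logic matters here.
    v = GROUND_TRUTH_VERSIONS.get(lang, GROUND_TRUTH_VERSIONS["default"])
    return f"ground_truth_{lang}_{v}.csv"
```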
---

## Hard rules — never break these

1. **Always keep `python3 tests/test_system.py` at 5/5 passing** before any merge.
2. **Always run `python3 run_evaluation.py` before and after any lexicon or detector change** to confirm no F1 regression.
3. **`severity=replace` rules require Precision ≥ 1.000 for EN/FR.** SW is currently 0.739 (an accepted, documented exception; see Current metrics below). Never add a replace rule without a before/after eval run.
4. **`avoid_when` must be pipe-separated `ContextCondition` enum values** (e.g. `biographical|historical`). Never use prose text.
5. **Work in branches; squash-merge to main.** Never commit directly to main. Start a new branch before any work.
6. **Never push unless explicitly asked.**
7. **No new files unless strictly required.** Edit existing files.
8. **HF Space deployment — use the `hf-deploy` branch and the `hfspace` remote.**
   - `hf-deploy` contains only `gradio_app.py`, `requirements.txt`, `rules/`, `eval/`, `core/`, `api/`, and `config.py`. It does NOT have `run_evaluation.py`, `tests/`, or `apps/`.
   - Always do lexicon/detector work on `main` first, then cherry-pick or merge into `hf-deploy`.
   - If the Space gets stuck in a "Restarting" loop, do a **Factory Reset** from the HF Space Settings, then `git push hfspace hf-deploy --force`. The stuck-restart loop is caused by broken HF-side state, not our code.
   - Token for the `hfspace` remote: set via `git remote set-url hfspace https://juakazike:<TOKEN>@huggingface.co/spaces/juakazike/gender-sensitization-engine`.

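Rule 4 is mechanically checkable. A small validator sketch, assuming the `ContextCondition` values listed under the detection pipeline (the real enum in `core/context_checker.py` is the source of truth):

```python
# Gate conditions as documented for the ContextCondition enum.
VALID_CONDITIONS = {
    "quote", "historical", "proper_noun", "biographical", "statistical",
    "medical", "counter_stereotype", "legal", "artistic", "organization",
}


def validate_avoid_when(cell: str) -> list[str]:
    """Parse an avoid_when CSV cell, rejecting anything that is not
    pipe-separated enum values. An empty cell means no gate."""
    if not cell:
        return []
    parts = cell.split("|")
    bad = [p for p in parts if p not in VALID_CONDITIONS]
    if bad:
        raise ValueError(f"avoid_when has non-enum values: {bad}")
    return parts
```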
---

## Current metrics (Mar 2026)

| Language | F1 | Precision | Recall | Samples |
|---|---|---|---|---|
| Swahili | 0.819 | 0.739 | 0.919 | 64,723 |
| English | 0.885 | 1.000 | 0.794 | 66 |
| French | 0.793 | 1.000 | 0.657 | 50 |
| Kikuyu | 0.368 | 0.916 | 0.231 | 11,848 |

The SW precision drop (0.958 → 0.739) is intentional: it reflects honest ground truth from ann_sw_v3. The main FP drivers, `Watoto wa Kike` (182 FPs) and `mtoto wa kike` (138 FPs), are genuinely ambiguous phrases accepted as a known precision hit.

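The F1 column is the harmonic mean of precision and recall, so the table can be spot-checked from the other two columns. (Kikuyu recomputes to 0.369 from the rounded precision/recall, within rounding of the reported 0.368.)

```python
def f1(p, r):
    # Harmonic mean of precision and recall.
    return 2 * p * r / (p + r)


# (precision, recall) per language, from the table above.
rows = {
    "Swahili": (0.739, 0.919),
    "English": (1.000, 0.794),
    "French": (1.000, 0.657),
    "Kikuyu": (0.916, 0.231),
}
scores = {lang: round(f1(p, r), 3) for lang, (p, r) in rows.items()}
```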
---

## Sprint status (Mar 2026)

- Sprint 0–1: ✅ merged to main
- Sprint 2: 🔴 IN PROGRESS — blocked on 2nd annotator recruitment (Cohen's Kappa unmeasured; required for AIBRIDGE Bronze)
- Sprint 3–4: 🟡 not started (Sprint 4 web app can run in parallel with Sprint 3)

AIBRIDGE blocker: the Project Lead must recruit a 2nd Swahili native-speaker annotator via the Masakhane Slack for the κ calculation.