Upload 3 files
- app.py +0 -0
- phase2_rkb.py +1124 -0
- requirements.txt +2 -0
app.py
ADDED
The diff for this file is too large to render.
See raw diff
phase2_rkb.py
ADDED
@@ -0,0 +1,1124 @@
# ═══════════════════════════════════════════════════════════════
# RegMap Phase 2 — Regulatory Knowledge Base
# Qualification questions, obligations, overlaps, gap analysis
# ═══════════════════════════════════════════════════════════════

# ─────────────────────────────────────────────────
# SECTION 1: QUALIFICATION QUESTIONS
# For each AI-specific regulation, questions to
# confirm applicability and determine obligation set
# ─────────────────────────────────────────────────

QUALIFICATION_QUESTIONS = {

    # ── EU AI ACT ──
    "EU AI Act (Regulation 2024/1689)": {
        "questions": [
            {
                "id": "euaia_exception",
                "text": "Is your AI system covered by any of the following exceptions to the EU AI Act?",
                "type": "multi_select",
                "options": [
                    "The AI system is used exclusively for military, defence, or national security purposes (Art. 2(3))",
                    "The AI system is used solely for scientific research and development and has not yet been placed on the market or put into service (Art. 2(6))",
                    "The AI system is used for purely personal, non-professional purposes by a natural person (Art. 2(10))",
                    "The AI system is released under a free and open-source licence with publicly available parameters, including weights (Art. 2(12))",
                    "The AI system is operated by a third-country public authority under international law enforcement or judicial cooperation agreements (Art. 2(4))",
                    "None of the above",
                ],
                "note": "The open-source exception does NOT apply if the system is classified as high-risk (Annex III), performs a prohibited practice (Art. 5), or is a GPAI model with systemic risk.",
            },
            {
                "id": "euaia_prohibited",
                "text": "Does your AI system perform any of the following practices prohibited under Art. 5 of the EU AI Act?",
                "type": "multi_select",
                "options": [
                    "Subliminal manipulation techniques that cause or are likely to cause harm",
                    "Exploitation of vulnerabilities of specific groups due to age, disability, or social/economic situation",
                    "Social scoring (by public or private actors) leading to detrimental or unfavourable treatment",
                    "Real-time remote biometric identification in publicly accessible spaces for law enforcement (except narrow exceptions)",
                    "Emotion recognition in the workplace or in education institutions (except for medical or safety reasons)",
                    "Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases",
                    "Biometric categorisation to infer race, political opinions, trade union membership, religious beliefs, sex life or sexual orientation",
                    "Individual predictive policing based solely on profiling or personality traits",
                    "None of the above",
                ],
                "note": "If your system falls under any prohibited practice, it cannot be placed on the EU market. Penalties: up to €35M or 7% of global annual turnover.",
            },
            {
                "id": "euaia_annex3",
                "text": "Is your AI system used in any of the following high-risk domains listed in Annex III?",
                "type": "multi_select",
                "options": [
                    "Biometric identification and categorisation of natural persons",
                    "Management and operation of critical infrastructure (energy, transport, water, digital)",
                    "Education and vocational training (access, admission, assessment)",
                    "Employment, worker management, and access to self-employment (recruitment, screening, evaluation, monitoring)",
                    "Access to essential private/public services (credit scoring, insurance, social benefits, emergency services)",
                    "Law enforcement (risk assessment, polygraph, evidence evaluation, profiling)",
                    "Migration, asylum, and border control (risk assessment, document verification)",
                    "Administration of justice and democratic processes",
                    "None of the above",
                ],
            },
            {
                "id": "euaia_art6_3",
                "text": "Even if your system falls within an Annex III category, it may NOT be high-risk if ALL of the following conditions are met (Art. 6(3)). Do all apply?",
                "type": "single_select",
                "options": [
                    "Yes — my system performs a narrow procedural task, improves a previous human activity, detects decision patterns without replacing human assessment, or is purely preparatory",
                    "No — my system directly influences consequential decisions about individuals",
                ],
                "condition": "Show only if any Annex III category selected (not 'None of the above')",
                "note": "Art. 6(3) exception: even within Annex III, a system is NOT high-risk if it does not pose significant risk of harm and meets specific conditions.",
            },
            {
                "id": "euaia_transparency",
                "text": "Does your AI system involve any of the following (Art. 50 — Transparency)?",
                "type": "multi_select",
                "options": [
                    "Direct interaction with natural persons (chatbot, virtual assistant)",
                    "Generation of synthetic audio, image, video or text content (deepfakes, GenAI output)",
                    "Emotion recognition or biometric categorisation",
                    "None of the above",
                ],
            },
        ],
    },

    # ── EU AI ACT — GPAI ──
    "EU AI Act — GPAI Framework (Chapter V)": {
        "questions": [
            {
                "id": "gpai_systemic",
                "text": "Does your general-purpose AI model meet any of the following criteria for systemic risk (Art. 51)?",
                "type": "single_select",
                "options": [
                    "Yes — training compute exceeds 10^25 FLOPs",
                    "Yes — designated as systemic risk by the European Commission (Art. 51(1)(b))",
                    "No — standard GPAI model",
                    "I don't know",
                ],
            },
            {
                "id": "gpai_open_source",
                "text": "Is your GPAI model released under a free and open-source licence?",
                "type": "single_select",
                "options": [
                    "Yes — open-source with publicly available model weights",
                    "No — proprietary or restricted access",
                ],
                "note": "Open-source GPAI models have reduced obligations (copyright policy + training data summary; the Art. 53(1)(a)-(b) documentation duties are waived). Exception does NOT apply if systemic risk.",
            },
        ],
    },

    # ── COLORADO AI ACT ──
    "Colorado AI Act (SB 24-205)": {
        "questions": [
            {
                "id": "co_consequential",
                "text": "Does your AI system make or substantially influence consequential decisions in any of the following areas?",
                "type": "multi_select",
                "options": [
                    "Education enrollment or opportunity",
                    "Employment or employment opportunity",
                    "Financial or lending services",
                    "Government services or benefits",
                    "Healthcare services",
                    "Housing",
                    "Insurance",
                    "Legal services",
                    "None of the above — system does not make consequential decisions",
                ],
            },
            {
                "id": "co_exception",
                "text": "Does any of the following exceptions apply?",
                "type": "multi_select",
                "options": [
                    "System is approved/regulated by a federal agency (FDA, FAA, etc.) with equivalent or stricter standards",
                    "System performs only narrow procedural tasks without influencing consequential decisions",
                    "System is used solely to detect, prevent, or mitigate discrimination or increase diversity",
                    "System is used solely to detect decision-making patterns without replacing human judgment",
                    "AI-powered chatbot (disclosure-only obligation)",
                    "None of the above",
                ],
            },
        ],
    },

    # ── TEXAS TRAIGA ──
    "Texas TRAIGA (HB 1709)": {
        "questions": [
            {
                "id": "tx_deployer",
                "text": "Do you deploy a generative AI system that creates synthetic content (text, images, audio, video)?",
                "type": "single_select",
                "options": [
                    "Yes — we deploy generative AI producing synthetic content",
                    "No — our system does not generate synthetic content",
                ],
            },
        ],
    },

    # ── UTAH AI POLICY ACT ──
    "Utah AI Policy Act (SB 149)": {
        "questions": [
            {
                "id": "ut_regulated",
                "text": "Is your AI system used in a regulated occupation or industry in Utah (e.g. licensed professionals, insurance, financial services)?",
                "type": "single_select",
                "options": [
                    "Yes — used in a regulated occupation/industry",
                    "No — not used in regulated contexts",
                ],
            },
            {
                "id": "ut_interaction",
                "text": "Does your AI system interact directly with consumers?",
                "type": "single_select",
                "options": [
                    "Yes — direct consumer interaction",
                    "No — no direct consumer interaction",
                ],
            },
        ],
    },

    # ── CALIFORNIA ADMT ──
    "California CCPA / ADMT Regulations": {
        "questions": [
            {
                "id": "ca_threshold",
                "text": "Does your organization meet the CCPA applicability thresholds?",
                "type": "multi_select",
                "options": [
                    "Annual gross revenue exceeding $25 million",
                    "Buys, sells, or shares personal information of 100,000+ California consumers/households/devices",
                    "Derives 50%+ of annual revenue from selling/sharing personal information",
                    "None of the above",
                ],
            },
            {
                "id": "ca_admt",
                "text": "Does your AI system qualify as automated decision-making technology (ADMT) under the CPPA regulations?",
                "type": "single_select",
                "options": [
                    "Yes — makes or assists decisions on employment, housing, education, health, insurance, financial, or legal matters",
                    "Yes — processes personal information to profile consumers",
                    "No — does not qualify as ADMT",
                ],
            },
        ],
    },

    # ── ILLINOIS AI VIDEO INTERVIEW ──
    "Illinois AI Video Interview Act (HB 3773)": {
        "questions": [
            {
                "id": "il_video",
                "text": "Does your AI system analyze video interviews of job applicants?",
                "type": "single_select",
                "options": [
                    "Yes — AI analyzes applicant video interviews",
                    "No",
                ],
            },
        ],
    },

    # ── DIFC REGULATION 10 ──
    "DIFC Regulation 10 (AI Processing)": {
        "questions": [
            {
                "id": "difc_autonomous",
                "text": "Does your AI system process personal data in an autonomous or semi-autonomous manner within the DIFC?",
                "type": "single_select",
                "options": [
                    "Yes — autonomous/semi-autonomous processing of personal data in DIFC",
                    "No",
                ],
            },
            {
                "id": "difc_commercial_high_risk",
                "text": "Is the AI system used for commercial purposes and does it involve high-risk processing activities?",
                "type": "single_select",
                "options": [
                    "Yes — commercial use with high-risk processing (e.g. profiling, automated decisions with legal effects, special category data, systematic monitoring)",
                    "No — either non-commercial or no high-risk processing",
                ],
                "note": "High-risk commercial use triggers mandatory AI system certification and appointment of an Autonomous Systems Officer (ASO). Full enforcement from January 2026.",
            },
            {
                "id": "difc_profiling",
                "text": "Does the system involve any of the following?",
                "type": "multi_select",
                "options": [
                    "Profiling of individuals",
                    "Automated decision-making with legal or significant effects",
                    "Processing of special categories of personal data (health, biometric, etc.)",
                    "Systematic monitoring of individuals",
                    "None of the above",
                ],
            },
        ],
    },
}
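Since `app.py` presumably renders this dictionary as a questionnaire, a small structural check can fail fast on a malformed entry. The helpers below are an illustrative sketch, not part of the uploaded files; `iter_questions` and `validate_question` are hypothetical names chosen for this example.

```python
# Sketch (hypothetical, not part of the upload): sanity-check the
# QUALIFICATION_QUESTIONS schema defined above.

REQUIRED_KEYS = {"id", "text", "type", "options"}
VALID_TYPES = {"single_select", "multi_select"}

def iter_questions(qmap):
    """Yield (regulation, question) pairs in declaration order."""
    for regulation, block in qmap.items():
        for question in block.get("questions", []):
            yield regulation, question

def validate_question(question):
    """Return a list of schema problems; an empty list means well-formed."""
    problems = sorted(f"missing key: {key}" for key in REQUIRED_KEYS - question.keys())
    if question.get("type") not in VALID_TYPES:
        problems.append(f"unknown type: {question.get('type')!r}")
    if len(question.get("options", [])) < 2:
        problems.append("fewer than two options")
    return problems
```

Running `validate_question` over `iter_questions(QUALIFICATION_QUESTIONS)` at import time would catch a missing `options` list or a typoed `type` before the UI ever renders it.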
|
| 269 |
+
|
| 270 |
+
|
| 271 |
+
# ─────────────────────────────────────────────────
|
| 272 |
+
# SECTION 2: OBLIGATIONS
|
| 273 |
+
# Per regulation, per role, per risk level
|
| 274 |
+
# ─────────────────────────────────────────────────
|
| 275 |
+
|
| 276 |
+
OBLIGATIONS = {
|
| 277 |
+
|
| 278 |
+
# ════════════════════════════════════════════
|
| 279 |
+
# AI-SPECIFIC — FULL DEEP DIVE
|
| 280 |
+
# ═════════════════════════════════════════���══
|
| 281 |
+
|
| 282 |
+
"EU AI Act (Regulation 2024/1689)": {
|
| 283 |
+
"prohibited": {
|
| 284 |
+
"label": "Prohibited AI Practice (Art. 5)",
|
| 285 |
+
"obligations": [
|
| 286 |
+
"Immediately cease and remove the AI system from the EU market",
|
| 287 |
+
"Penalties: up to €35M or 7% of global annual turnover",
|
| 288 |
+
],
|
| 289 |
+
},
|
| 290 |
+
"high_risk_provider": {
|
| 291 |
+
"label": "High-Risk AI System — Provider Obligations",
|
| 292 |
+
"obligations": [
|
| 293 |
+
"Art. 9 — Establish and maintain a risk management system throughout the AI system's lifecycle",
|
| 294 |
+
"Art. 10 — Implement data governance: training, validation, and testing datasets must be relevant, representative, and free of errors",
|
| 295 |
+
"Art. 11 — Prepare and maintain technical documentation (before placing on market and keep up to date)",
|
| 296 |
+
"Art. 12 — Enable automatic recording of events (logging) for traceability",
|
| 297 |
+
"Art. 13 — Design system for sufficient transparency to allow deployers to interpret output and use appropriately",
|
| 298 |
+
"Art. 14 — Design for effective human oversight, enabling human-machine interface tools",
|
| 299 |
+
"Art. 15 — Achieve appropriate levels of accuracy, robustness, and cybersecurity",
|
| 300 |
+
"Art. 17 — Establish and document a quality management system (QMS)",
|
| 301 |
+
"Art. 43 — Complete conformity assessment (self-assessment or third-party depending on Annex III category)",
|
| 302 |
+
"Art. 47 — Affix CE marking upon successful conformity assessment",
|
| 303 |
+
"Art. 49 — Register in EU database before placing on market",
|
| 304 |
+
"Art. 72 — Establish post-market monitoring system",
|
| 305 |
+
"Art. 73 — Report serious incidents to market surveillance authorities",
|
| 306 |
+
],
|
| 307 |
+
"deadline": "August 2, 2026 (most obligations); August 2, 2027 (systems already on market)",
|
| 308 |
+
"penalty": "Up to €15M or 3% of global annual turnover",
|
| 309 |
+
},
|
| 310 |
+
"high_risk_deployer": {
|
| 311 |
+
"label": "High-Risk AI System — Deployer Obligations",
|
| 312 |
+
"obligations": [
|
| 313 |
+
"Art. 26(1) — Implement appropriate technical and organisational measures to use system in accordance with instructions",
|
| 314 |
+
"Art. 26(2) — Assign human oversight to competent, trained, authorised individuals",
|
| 315 |
+
"Art. 26(4) — Monitor operation and inform provider/distributor of risks or incidents",
|
| 316 |
+
"Art. 26(5) — Conduct a Fundamental Rights Impact Assessment (FRIA) before deploying high-risk systems in: law enforcement, migration, public services, education, employment, essential services",
|
| 317 |
+
"Art. 26(6) — Use system only for intended purpose as described in instructions of use",
|
| 318 |
+
"Art. 26(7) — Inform affected individuals that they are subject to high-risk AI system use",
|
| 319 |
+
"Art. 26(8) — Ensure human oversight is exercised effectively",
|
| 320 |
+
"Art. 26(11) — Keep logs automatically generated by the high-risk AI system for at least 6 months",
|
| 321 |
+
],
|
| 322 |
+
"deadline": "August 2, 2026",
|
| 323 |
+
"penalty": "Up to €15M or 3% of global annual turnover",
|
| 324 |
+
},
|
| 325 |
+
"limited_risk": {
|
| 326 |
+
"label": "Limited Risk — Transparency Obligations (Art. 50)",
|
| 327 |
+
"obligations": [
|
| 328 |
+
"Art. 50(1) — Inform individuals that they are interacting with an AI system (unless obvious from context)",
|
| 329 |
+
"Art. 50(2) — Label AI-generated synthetic content (audio, image, video, text) in machine-readable format",
|
| 330 |
+
"Art. 50(3) — Deployers of emotion recognition or biometric categorisation must inform exposed individuals",
|
| 331 |
+
"Art. 50(4) — Deployers of deepfake systems must disclose content is AI-generated (exception: artistic/satirical freedom with editorial safeguards)",
|
| 332 |
+
],
|
| 333 |
+
"deadline": "August 2, 2025",
|
| 334 |
+
"penalty": "Up to €15M or 3% of global annual turnover",
|
| 335 |
+
},
|
| 336 |
+
"minimal_risk": {
|
| 337 |
+
"label": "Minimal Risk",
|
| 338 |
+
"obligations": [
|
| 339 |
+
"Art. 4 — Ensure AI literacy: staff and operators must have sufficient understanding of AI systems they develop or use",
|
| 340 |
+
],
|
| 341 |
+
"deadline": "February 2, 2025 (AI literacy)",
|
| 342 |
+
},
|
| 343 |
+
},
|
| 344 |
+
|
| 345 |
+
"EU AI Act — GPAI Framework (Chapter V)": {
|
| 346 |
+
"gpai_standard": {
|
| 347 |
+
"label": "GPAI Model — Standard Obligations (Art. 53)",
|
| 348 |
+
"obligations": [
|
| 349 |
+
"Art. 53(1)(a) — Prepare and maintain technical documentation of the model including training and testing process",
|
| 350 |
+
"Art. 53(1)(b) — Prepare information and documentation for downstream providers integrating the model",
|
| 351 |
+
"Art. 53(1)(c) — Establish a policy to comply with EU Copyright Directive (including text and data mining opt-out)",
|
| 352 |
+
"Art. 53(1)(d) — Publish a sufficiently detailed summary of training data content",
|
| 353 |
+
],
|
| 354 |
+
},
|
| 355 |
+
"gpai_systemic": {
|
| 356 |
+
"label": "GPAI Model with Systemic Risk — Additional Obligations (Art. 55)",
|
| 357 |
+
"obligations": [
|
| 358 |
+
"Art. 55(1)(a) — Perform model evaluation including adversarial testing",
|
| 359 |
+
"Art. 55(1)(b) — Assess and mitigate systemic risks",
|
| 360 |
+
"Art. 55(1)(c) — Track, document and report serious incidents to AI Office and national authorities",
|
| 361 |
+
"Art. 55(1)(d) — Ensure adequate level of cybersecurity protection",
|
| 362 |
+
"All standard GPAI obligations (Art. 53) also apply",
|
| 363 |
+
],
|
| 364 |
+
},
|
| 365 |
+
"gpai_open_source": {
|
| 366 |
+
"label": "Open-Source GPAI Model — Reduced Obligations",
|
| 367 |
+
"obligations": [
|
| 368 |
+
"Art. 53(2) — Only technical documentation and copyright compliance policy required",
|
| 369 |
+
"Training data summary still required",
|
| 370 |
+
"If model has systemic risk: full Art. 53 + Art. 55 obligations apply regardless of open-source status",
|
| 371 |
+
],
|
| 372 |
+
},
|
| 373 |
+
},
|
| 374 |
+
|
| 375 |
+
"Colorado AI Act (SB 24-205)": {
|
| 376 |
+
"developer": {
|
| 377 |
+
"label": "Developer Obligations",
|
| 378 |
+
"obligations": [
|
| 379 |
+
"Make available to deployers a general statement of reasonably foreseeable uses and known harmful uses",
|
| 380 |
+
"Provide high-level summary of training data used",
|
| 381 |
+
"Provide documentation on known limitations and how the system was evaluated for performance and bias",
|
| 382 |
+
"Publish on website a statement describing types of high-risk AI systems developed and risk management practices",
|
| 383 |
+
"Report known or reasonably foreseeable risks of algorithmic discrimination to the Colorado Attorney General and deployers",
|
| 384 |
+
],
|
| 385 |
+
},
|
| 386 |
+
"deployer": {
|
| 387 |
+
"label": "Deployer Obligations",
|
| 388 |
+
"obligations": [
|
| 389 |
+
"Implement a risk management policy and program governing deployment of high-risk AI",
|
| 390 |
+
"Complete an annual impact assessment for each high-risk AI system",
|
| 391 |
+
"Disclose to consumers: when they are interacting with an AI system, when AI is a substantial factor in a consequential decision, and that they can request human review",
|
| 392 |
+
"Provide consumers with an explanation of the decision, right to correct data, and right to appeal",
|
| 393 |
+
"Notify the Attorney General within 90 days of discovering algorithmic discrimination",
|
| 394 |
+
"Review AI system outputs for algorithmic discrimination",
|
| 395 |
+
],
|
| 396 |
+
},
|
| 397 |
+
"affirmative_defense": "Compliance with NIST AI RMF or equivalent recognised framework may serve as an affirmative defense",
|
| 398 |
+
"deadline": "June 30, 2026 (delayed from Feb 1, 2026)",
|
| 399 |
+
"penalty": "Violations treated as unfair trade practices under Colorado Consumer Protection Act",
|
| 400 |
+
},
|
| 401 |
+
|
| 402 |
+
"Texas TRAIGA (HB 1709)": {
|
| 403 |
+
"deployer": {
|
| 404 |
+
"label": "Deployer Obligations",
|
| 405 |
+
"obligations": [
|
| 406 |
+
"Disclose to each person that content is generated by AI when deploying generative AI producing synthetic content",
|
| 407 |
+
"Mark or label AI-generated content with clear disclosure",
|
| 408 |
+
"Prohibition on using generative AI to create materially deceptive content intended to influence elections",
|
| 409 |
+
],
|
| 410 |
+
},
|
| 411 |
+
"deadline": "September 1, 2025",
|
| 412 |
+
},
|
| 413 |
+
|
| 414 |
+
"Utah AI Policy Act (SB 149)": {
|
| 415 |
+
"all_operators": {
|
| 416 |
+
"label": "All AI Operators — Disclosure Obligations",
|
| 417 |
+
"obligations": [
|
| 418 |
+
"Disclose to individuals that they are interacting with generative AI (if direct interaction)",
|
| 419 |
+
"Regulated occupations: clearly disclose use of AI when providing services in a regulated industry",
|
| 420 |
+
"Prohibition on using AI to represent that a person is a licensed professional if they are not",
|
| 421 |
+
],
|
| 422 |
+
},
|
| 423 |
+
"deadline": "May 1, 2024 (already in effect)",
|
| 424 |
+
},
|
| 425 |
+
|
| 426 |
+
"California CCPA / ADMT Regulations": {
|
| 427 |
+
"deployer": {
|
| 428 |
+
"label": "ADMT Deployer Obligations",
|
| 429 |
+
"obligations": [
|
| 430 |
+
"Pre-use notice: inform consumers about ADMT use before processing begins",
|
| 431 |
+
                "Right to opt out: provide a mechanism for consumers to opt out of ADMT for significant decisions",
                "Access request: respond to consumer requests about ADMT logic and outputs",
                "Impact assessment: conduct and document a risk assessment for ADMT use cases with significant effects",
                "Non-discrimination: ensure ADMT does not result in differential treatment based on protected characteristics",
            ],
        },
        "threshold_note": "Applies only to businesses meeting CCPA thresholds: $25M+ annual revenue, personal data of 100K+ consumers/households, or 50%+ of revenue from selling/sharing personal data",
    },

    "Illinois AI Video Interview Act (HB 3773)": {
        "employer": {
            "label": "Employer Obligations",
            "obligations": [
                "Notify each applicant that AI will be used to analyze the video interview",
                "Provide information on how the AI works and what characteristics it evaluates",
                "Obtain consent from the applicant before using AI analysis",
                "Limit sharing of the video: only persons whose expertise is necessary to evaluate the applicant may view it",
                "Destroy the video within 30 days of the applicant's request",
                "Report demographic data on applicants to the Department of Commerce annually (if >500 employees)",
            ],
        },
    },

    "DIFC Regulation 10 (AI Processing)": {
        "deployer_operator": {
            "label": "Deployer / Operator — General Obligations",
            "obligations": [
                "Provide clear and explicit notice to users at initial use: explain the AI technology, whether it operates autonomously, and its impact on privacy rights",
                "Design and operate the AI system in accordance with principles of ethics, fairness, transparency, security, and accountability",
                "Conduct a mandatory Data Protection Impact Assessment (DPIA) specifically addressing AI-related risks and mitigation strategies",
                "Implement human oversight mechanisms for automated decision-making",
                "Ensure data subjects can challenge AI system outcomes and request human review",
                "Be able to explain AI processing in non-technical terms with supporting evidence",
                "Maintain records of AI processing activities",
                "Implement data protection by design and by default for AI systems",
                "Monitor AI system outputs for accuracy, fairness, and bias",
            ],
        },
        "high_risk": {
            "label": "High-Risk AI Processing — Additional Obligations",
            "obligations": [
                "Obtain AI system certification under the DIFC Commissioner's certification scheme (certification is system-specific, not entity-specific)",
                "Appoint an Autonomous Systems Officer (ASO) with substantially similar status, competencies, and tasks as a Data Protection Officer (DPO) — the same person may serve as both ASO and DPO if competencies align",
                "The ASO must monitor AI compliance, conduct DPIAs, review risks with senior management, and make recommendations for accountability",
                "Ensure the AI system processes personal data solely for human-defined or human-approved purposes",
                "Report significant findings from impact assessments to the DIFC Commissioner of Data Protection",
            ],
            "note": "Full enforcement of high-risk processing requirements from January 2026.",
        },
    },


    # ════════════════════════════════════════════
    # PRIVACY — KEY OBLIGATIONS (AI-relevant)
    # ════════════════════════════════════════════

    "GDPR (Regulation 2016/679)": {
        "key_obligations": [
            "Art. 6 — Establish a lawful basis for processing personal data (consent, legitimate interest, contract, etc.)",
            "Art. 13-14 — Transparency: inform data subjects about processing, purpose, recipients, and rights",
            "Art. 22 — Right not to be subject to solely automated decisions with legal/significant effects; right to human intervention",
            "Art. 25 — Data protection by design and by default",
            "Art. 30 — Maintain records of processing activities",
            "Art. 35 — Conduct a Data Protection Impact Assessment (DPIA) when processing is likely to result in high risk",
            "Art. 37 — Appoint a Data Protection Officer (DPO) if required (public authority, large-scale monitoring, special categories)",
            "Art. 5(1)(c) — Data minimisation: process only what is necessary for the specified purpose",
        ],
        "ai_relevant_note": "DPIA under Art. 35 GDPR can be combined with FRIA under EU AI Act Art. 27 into a single assessment document.",
    },

    "ePrivacy Directive (2002/58/EC)": {
        "key_obligations": [
            "Obtain consent for use of cookies and tracking technologies on AI interfaces",
            "Ensure confidentiality of electronic communications processed by AI systems",
            "Restrictions on processing traffic and location data for AI purposes without consent",
        ],
    },

    "UAE Federal PDPL (Decree-Law 45/2021)": {
        "key_obligations": [
            "Establish a lawful basis for processing personal data in the UAE",
            "Obtain explicit consent for processing sensitive personal data",
            "Inform data subjects about processing purposes, categories, and rights",
            "Implement appropriate security measures for personal data",
            "Restrict cross-border transfer of personal data (adequacy or safeguards required)",
            "Data subjects have the right to object to automated decision-making, including profiling",
        ],
    },

    "DIFC Data Protection Law (Law No. 5 of 2020)": {
        "key_obligations": [
            "Register with the DIFC Commissioner of Data Protection",
            "Establish a lawful basis for processing (consent, contract, legitimate interests, etc.)",
            "Conduct a DPIA for high-risk processing activities",
            "Appoint a DPO if required",
            "Implement data protection by design and by default",
            "Data subjects have the right not to be subject to solely automated decisions with legal effects",
        ],
        "ai_relevant_note": "DPIA under the DIFC DPL should be coordinated with the Algorithmic Impact Assessment under Regulation 10.",
    },

    "ADGM Data Protection Regulations 2021": {
        "key_obligations": [
            "Register with the ADGM Office of Data Protection",
            "Establish a lawful basis for processing",
            "Conduct a risk assessment for high-risk processing",
            "Implement appropriate safeguards for cross-border transfers",
            "Data subjects have the right to object to automated decision-making",
        ],
    },

    "State Data Protection Laws (CCPA, CTDPA, etc.)": {
        "key_obligations": [
            "Right to know: inform consumers what personal information is collected and how it is used",
            "Right to delete: honor consumer requests to delete personal data",
            "Right to opt out of sale/sharing of personal information",
            "Right to opt out of automated decision-making (CCPA ADMT)",
            "Conduct data protection risk assessments for high-risk processing",
        ],
    },

    "HIPAA (Health Insurance Portability and Accountability Act)": {
        "key_obligations": [
            "Ensure AI systems handling Protected Health Information (PHI) comply with the Privacy Rule",
            "Implement Security Rule administrative, physical, and technical safeguards for ePHI",
            "Execute Business Associate Agreements (BAAs) with AI vendors processing PHI",
            "Minimum necessary: limit AI access to only the PHI needed for the specific function",
        ],
    },

    "COPPA (Children's Online Privacy Protection Act)": {
        "key_obligations": [
            "Obtain verifiable parental consent before collecting personal information from children under 13",
            "Provide parents with notice of data practices and the right to review/delete the child's data",
            "Limit data collection to what is reasonably necessary for the AI system's activity",
            "Implement reasonable security measures for children's personal data",
        ],
    },

    "FERPA (Family Educational Rights and Privacy Act)": {
        "key_obligations": [
            "Obtain consent before disclosing student education records to AI system providers (unless an exception applies)",
            "Ensure AI vendors qualify as 'school officials' with a legitimate educational interest if processing student records",
            "Maintain the student's right to inspect and request correction of education records used by AI",
        ],
    },

    "Illinois BIPA (Biometric Information Privacy Act)": {
        "key_obligations": [
            "Develop and publish a written biometric data retention/destruction policy",
            "Obtain informed written consent before collecting biometric identifiers",
            "Provide specific disclosures about collection purpose and retention period",
            "Prohibition on selling, leasing, or profiting from biometric data",
            "Private right of action: individuals can sue for statutory damages ($1,000-$5,000 per violation)",
        ],
    },


    # ════════════════════════════════════════════
    # OTHERS — AWARENESS FLAGS
    # ════════════════════════════════════════════

    "Copyright Directive (2019/790)": {
        "flags": [
            "Art. 3 — Text and data mining (TDM) exception for research organisations and cultural heritage institutions",
            "Art. 4 — Commercial TDM allowed unless the rightsholder has expressly reserved rights (opt-out)",
            "Training AI on copyrighted content requires compliance with the TDM provisions",
            "Consult legal counsel on licensing requirements for training data",
        ],
    },

    "NIS2 Directive (2022/2555)": {
        "flags": [
            "Entities operating AI in essential/important sectors must implement cybersecurity risk management measures",
            "Incident reporting obligations to the national CSIRT within 24 hours (early warning) and 72 hours (full notification)",
            "Supply chain security requirements apply to AI component providers",
            "Consult legal counsel for sector-specific NIS2 implementation in your Member State",
        ],
    },

    "Product Liability Directive (2024/2853)": {
        "flags": [
            "AI software is classified as a 'product' — standalone liability for defective AI",
            "Defectiveness can be presumed if the provider fails to disclose information or comply with safety requirements",
            "Strict liability for providers — no need to prove fault",
            "2-year transposition period — Member States must implement by December 2026",
        ],
    },

    "Equal Treatment Directives": {
        "flags": [
            "AI systems making or supporting employment, education, or service decisions must not discriminate on protected grounds",
            "Applies to gender (2006/54/EC), racial/ethnic origin (2000/43/EC), and religion/age/disability/sexual orientation (2000/78/EC)",
            "Indirect discrimination through AI proxy variables (e.g. postal code correlating with ethnicity) can be unlawful",
        ],
    },

    "Consumer Rights Directive / GPSR": {
        "flags": [
            "AI systems interacting directly with consumers must provide clear pre-contractual information",
            "The General Product Safety Regulation (2023/988) applies to consumer-facing AI products",
            "Safety obligations throughout the product lifecycle, including post-market monitoring",
        ],
    },

    "Medical Device Regulation (MDR 2017/745)": {
        "flags": [
            "AI-based clinical decision support, diagnostic, or therapeutic systems may qualify as medical devices",
            "Requires CE marking via conformity assessment (self-assessment or Notified Body, depending on class)",
            "Clinical evaluation and post-market clinical follow-up required",
            "Dual regulation: MDR and the AI Act apply simultaneously to AI-based medical devices",
        ],
    },

    "Machinery Regulation (2023/1230)": {
        "flags": [
            "AI-integrated robots, drones, autonomous vehicles, and industrial machinery must comply",
            "Safety components with AI that learn or evolve are classified as high-risk machinery",
            "New cybersecurity requirements for connected machinery throughout the lifecycle",
            "Conformity assessment required — self-assessment or third-party, depending on Annex I category",
            "Applies from January 20, 2027 (replaces Machinery Directive 2006/42/EC)",
        ],
    },

    "Digital Services Act (DSA 2022/2065)": {
        "flags": [
            "Very large online platforms using recommender systems must offer at least one option not based on profiling",
            "Transparency obligations: provide information on recommendation system parameters",
            "Systemic risk assessment required for very large online platforms (>45M EU users)",
            "Content moderation decisions using AI must be explained to affected users",
        ],
    },

    "Radio Equipment Directive (RED 2014/53)": {
        "flags": [
            "Connected devices with AI must meet cybersecurity, data protection, and anti-fraud requirements",
            "Delegated acts require compliance with privacy-by-design for radio equipment processing personal data",
            "Applies to IoT devices, wearables, and smart home devices with embedded AI",
        ],
    },

    "FTC Act Section 5 (Unfair/Deceptive Practices)": {
        "flags": [
            "The FTC has actively pursued enforcement against deceptive or unfair AI practices",
            "AI systems must not deceive consumers about capabilities, data use, or human involvement",
            "Unfair practices include AI systems causing substantial consumer injury",
            "FTC guidance emphasises transparency, fairness, and accountability in AI",
        ],
    },

    "Title VII (Civil Rights Act)": {
        "flags": [
            "AI-driven employment decisions must not result in disparate treatment or disparate impact based on race, color, religion, sex, or national origin",
            "The EEOC has issued guidance on AI and algorithmic hiring tools",
            "Employers are liable for discriminatory AI even if it is provided by a third-party vendor",
        ],
    },

    "ADA (Americans with Disabilities Act)": {
        "flags": [
            "AI systems in employment and public accommodation must not discriminate against individuals with disabilities",
            "Reasonable accommodation obligations extend to AI-driven processes",
            "AI accessibility requirements apply to public-facing systems",
        ],
    },

    "ECOA (Equal Credit Opportunity Act)": {
        "flags": [
            "AI credit scoring and lending decisions must not discriminate on prohibited bases",
            "Adverse action notices are required when AI contributes to a credit denial",
            "Model explainability requirements apply to credit decisions",
        ],
    },

    "FCRA (Fair Credit Reporting Act)": {
        "flags": [
            "AI systems generating consumer reports must ensure accuracy",
            "Consumers have the right to dispute inaccurate AI-generated information",
            "Permissible purpose requirements apply to AI-based use of consumer reports",
        ],
    },

    "Fair Housing Act": {
        "flags": [
            "AI in housing advertising, tenant screening, and lending must not discriminate",
            "Algorithmic redlining and proxy discrimination are enforceable violations",
            "HUD has investigated algorithmic discrimination in housing platforms",
        ],
    },

    "Copyright Law (Decree-Law 38/2021) — No TDM exception": {
        "flags": [
            "CRITICAL: the UAE has no text and data mining exception — all training data must be licensed",
            "Using copyrighted content to train AI models without a licence is potentially infringing",
            "Consult legal counsel on licensing requirements for all training data used in a UAE context",
        ],
    },

    "Cybercrime Law (Decree-Law 34/2021)": {
        "flags": [
            "Art. 42 — Creating or disseminating deepfakes or manipulated content using AI is a criminal offence",
            "Unauthorised access to AI systems or data is punishable",
            "Interception of electronic communications by AI systems without authorisation is prohibited",
        ],
    },

    "Civil Transactions Law (Federal Law 5/1985)": {
        "flags": [
            "Art. 292 — 'guardian of things' doctrine may apply to AI system operators for harm caused",
            "The general tort liability framework applies to AI-caused damage",
            "No AI-specific liability framework yet — general civil law applies",
        ],
    },

    "Consumer Protection (Federal Law 15/2020)": {
        "flags": [
            "AI interacting with consumers must not engage in deceptive or misleading practices",
            "Product safety requirements apply to AI-powered consumer products",
        ],
    },

    "Anti-Discrimination (Decree-Law 34/2023)": {
        "flags": [
            "The prohibition of discrimination based on race, colour, ethnic origin, religion, or disability applies to AI decisions",
            "Potential liability if an AI system produces discriminatory outcomes in the UAE",
        ],
    },

    "Labour Law (Decree-Law 33/2021)": {
        "flags": [
            "AI in employment (hiring, monitoring, evaluation, termination) must respect worker rights",
            "Workplace surveillance using AI must be proportionate and disclosed to employees",
            "Worker data processed by AI is subject to data protection obligations",
        ],
    },
}


# ─────────────────────────────────────────────────
# SECTION 3: OVERLAP ANALYSIS
# Obligations that can be combined across regulations
# ─────────────────────────────────────────────────

OVERLAP_ANALYSIS = [
    {
        "id": "dpia_fria",
        "title": "Impact Assessment — GDPR DPIA + EU AI Act FRIA",
        "regulations": ["GDPR (Regulation 2016/679)", "EU AI Act (Regulation 2024/1689)"],
        "recommendation": "Combine the GDPR Data Protection Impact Assessment (Art. 35) with the EU AI Act Fundamental Rights Impact Assessment (Art. 27) into a single comprehensive assessment document. The AI Act FRIA builds on DPIA requirements — a unified document avoids duplication and ensures both legal bases are covered.",
        "shared_elements": [
            "Description of processing/AI system and purpose",
            "Assessment of necessity and proportionality",
            "Risk assessment to rights and freedoms",
            "Mitigation measures",
            "Monitoring and review mechanisms",
        ],
    },
    {
        "id": "dpia_difc_aia",
        "title": "Impact Assessment — DIFC DPL DPIA + DIFC Regulation 10 AIA",
        "regulations": ["DIFC Data Protection Law (Law No. 5 of 2020)", "DIFC Regulation 10 (AI Processing)"],
        "recommendation": "Combine the DIFC Data Protection Law DPIA with the Regulation 10 Algorithmic Impact Assessment into a single document. Both require risk assessment of personal data processing with overlapping content requirements.",
        "shared_elements": [
            "Description of AI processing activities",
            "Assessment of risks to data subjects",
            "Fairness, accuracy, and bias evaluation",
            "Mitigation and monitoring measures",
        ],
    },
    {
        "id": "transparency_multi",
        "title": "Transparency Obligations — EU AI Act + GDPR + Consumer Rights",
        "regulations": ["EU AI Act (Regulation 2024/1689)", "GDPR (Regulation 2016/679)", "Consumer Rights Directive / GPSR"],
        "recommendation": "Consolidate transparency disclosures: AI Act Art. 50 (AI interaction/content disclosure), GDPR Art. 13-14 (data processing information), and Consumer Rights Directive (pre-contractual information) can be delivered through a unified transparency notice.",
        "shared_elements": [
            "Disclosure that the user is interacting with AI",
            "Purpose and logic of processing",
            "Data categories used",
            "Rights available to individuals",
        ],
    },
    {
        "id": "human_oversight_multi",
        "title": "Human Oversight — EU AI Act + GDPR Art. 22 + Colorado",
        "regulations": ["EU AI Act (Regulation 2024/1689)", "GDPR (Regulation 2016/679)", "Colorado AI Act (SB 24-205)"],
        "recommendation": "Implement a unified human oversight mechanism that satisfies: EU AI Act Art. 14 (human oversight for high-risk systems), GDPR Art. 22 (right to human intervention in automated decisions), and Colorado's requirement for human review of consequential decisions. A single human-in-the-loop process can address all three.",
        "shared_elements": [
            "Right to request human review",
            "Competent, trained human reviewers",
            "Ability to override AI decisions",
            "Documentation of the human oversight process",
        ],
    },
    {
        "id": "risk_management_multi",
        "title": "Risk Management — EU AI Act + Colorado + NIST AI RMF",
        "regulations": ["EU AI Act (Regulation 2024/1689)", "Colorado AI Act (SB 24-205)"],
        "recommendation": "Build your risk management system on the NIST AI Risk Management Framework, which serves as an affirmative defense under Colorado law and aligns closely with EU AI Act Art. 9 requirements. One framework can satisfy both jurisdictions.",
        "shared_elements": [
            "Risk identification and assessment",
            "Bias and discrimination testing",
            "Documentation and record-keeping",
            "Monitoring and continuous improvement",
        ],
    },
    {
        "id": "bias_testing_multi",
        "title": "Bias & Discrimination Testing — Colorado + Title VII + EU Equal Treatment",
        "regulations": ["Colorado AI Act (SB 24-205)", "Title VII (Civil Rights Act)", "Equal Treatment Directives"],
        "recommendation": "Implement a unified bias testing programme covering protected characteristics across both US and EU frameworks. Test for disparate impact (US) and indirect discrimination (EU) simultaneously using the same test suite.",
        "shared_elements": [
            "Testing across protected characteristics",
            "Disparate impact / indirect discrimination analysis",
            "Documentation of results and remediation",
            "Ongoing monitoring post-deployment",
        ],
    },
    {
        "id": "incident_reporting",
        "title": "Incident Reporting — EU AI Act + NIS2 + GDPR",
        "regulations": ["EU AI Act (Regulation 2024/1689)", "NIS2 Directive (2022/2555)", "GDPR (Regulation 2016/679)"],
        "recommendation": "Create a unified incident response plan covering: AI serious incidents (AI Act Art. 73), cybersecurity incidents (NIS2 — 24h/72h), and personal data breaches (GDPR Art. 33 — 72h). Parallel notification channels, but a shared internal triage process.",
        "shared_elements": [
            "Internal incident detection and triage",
            "Root cause analysis",
            "Notification to authorities within required timeframes",
            "Remediation and documentation",
        ],
    },
    {
        "id": "technical_documentation",
        "title": "Technical Documentation — EU AI Act + GPAI + Machinery Regulation",
        "regulations": ["EU AI Act (Regulation 2024/1689)", "EU AI Act — GPAI Framework (Chapter V)", "Machinery Regulation (2023/1230)"],
        "recommendation": "Structure technical documentation to satisfy EU AI Act Annex IV (high-risk) or Annex XI (GPAI) and, if applicable, Machinery Regulation Annex III requirements in a single document set. There is significant overlap in system description, risk analysis, and testing documentation.",
        "shared_elements": [
            "System description and intended purpose",
            "Risk analysis and mitigation",
            "Design and development process",
            "Testing and validation results",
        ],
    },
]


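# Illustrative helper (a sketch, not referenced elsewhere in this module):
# collect the OVERLAP_ANALYSIS entries that mention a given regulation name,
# e.g. to list which assessments can be consolidated for that regulation.
def overlaps_for(regulation, entries=None):
    """Return overlap entries whose 'regulations' list names `regulation`."""
    if entries is None:
        entries = OVERLAP_ANALYSIS
    return [entry for entry in entries if regulation in entry["regulations"]]
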
# ─────────────────────────────────────────────────
# SECTION 4: GAP ANALYSIS (AI-specific only)
# Coverage %: when compliant with regulation A,
# what % of regulation B is already met
# ─────────────────────────────────────────────────

# Matrix format: GAP_ANALYSIS[source_reg][target_reg] = coverage %
# Read as: "If compliant with source_reg, you cover X% of target_reg"

GAP_ANALYSIS = {
    "EU AI Act (Regulation 2024/1689)": {
        "Colorado AI Act (SB 24-205)": {
            "coverage": 72,
            "covered": [
                "Risk management system maps to Colorado's risk management policy requirement",
                "Technical documentation satisfies Colorado's documentation obligations",
                "Human oversight provisions cover Colorado's human review requirements",
                "Post-market monitoring aligns with ongoing discrimination testing",
            ],
            "gaps": [
                "Colorado-specific annual impact assessment format",
                "Colorado-specific consumer notification requirements (right to explanation, right to appeal)",
                "Attorney General notification within 90 days of discovering discrimination",
                "Colorado's affirmative defense documentation (NIST AI RMF compliance)",
            ],
        },
        "Texas TRAIGA (HB 1709)": {
            "coverage": 85,
            "covered": [
                "AI Act Art. 50 transparency obligations largely cover Texas disclosure requirements",
                "AI-generated content labelling satisfies Texas synthetic content marking",
            ],
            "gaps": [
                "Texas-specific election-related prohibition on deceptive AI content",
            ],
        },
        "Utah AI Policy Act (SB 149)": {
            "coverage": 80,
            "covered": [
                "AI Act transparency obligations cover Utah disclosure requirements",
                "AI system documentation supports Utah regulated-industry disclosures",
            ],
            "gaps": [
                "Utah-specific regulated occupation disclosure format",
            ],
        },
        "California CCPA / ADMT Regulations": {
            "coverage": 60,
            "covered": [
                "Risk management and documentation align with California impact assessment requirements",
                "Transparency provisions overlap with pre-use notice requirements",
            ],
            "gaps": [
                "CCPA-specific consumer opt-out mechanism for ADMT",
                "California-specific access request response procedures",
                "CCPA threshold applicability analysis ($25M / 100K consumers)",
            ],
        },
        "Illinois AI Video Interview Act (HB 3773)": {
            "coverage": 50,
            "covered": [
                "Transparency obligations partially cover Illinois notification requirements",
            ],
            "gaps": [
                "Illinois-specific informed written consent for video analysis",
                "30-day video destruction requirement on applicant request",
                "Annual demographic reporting to the Department of Commerce",
            ],
        },
        "DIFC Regulation 10 (AI Processing)": {
            "coverage": 74,
            "covered": [
                "Risk management system maps to the DIFC Algorithmic Impact Assessment",
                "Human oversight requirements align",
                "Transparency and explainability provisions align",
                "Post-market monitoring covers the ongoing monitoring requirement",
            ],
            "gaps": [
                "DIFC-specific registration with the Commissioner of Data Protection",
                "DIFC-specific AIA format and reporting requirements",
                "DIFC DPL compliance (separate from Regulation 10)",
            ],
        },
    },

    "Colorado AI Act (SB 24-205)": {
        "EU AI Act (Regulation 2024/1689)": {
            "coverage": 55,
            "covered": [
                "Risk management policy partially satisfies EU AI Act Art. 9",
                "Impact assessment partially covers FRIA requirements",
                "Algorithmic discrimination testing aligns with bias requirements",
            ],
            "gaps": [
                "EU AI Act conformity assessment and CE marking",
                "Technical documentation to the EU Annex IV standard",
                "Quality Management System (QMS)",
                "EU database registration",
                "Post-market monitoring system to the EU standard",
                "Serious incident reporting to EU authorities",
                "Data governance requirements (Art. 10)",
            ],
        },
        "DIFC Regulation 10 (AI Processing)": {
            "coverage": 65,
            "covered": [
                "Impact assessment maps to the Algorithmic Impact Assessment",
                "Risk management policy aligns",
                "Bias testing covers fairness monitoring",
            ],
            "gaps": [
                "DIFC-specific data protection requirements",
                "DIFC Commissioner registration and reporting",
                "DIFC-specific consent and transparency requirements",
            ],
        },
    },

    "DIFC Regulation 10 (AI Processing)": {
        "EU AI Act (Regulation 2024/1689)": {
            "coverage": 45,
            "covered": [
                "Algorithmic Impact Assessment partially covers the FRIA",
                "Human oversight requirements align",
                "Transparency requirements partially align",
            ],
            "gaps": [
                "Full conformity assessment and CE marking",
                "Technical documentation to the EU standard",
                "Quality Management System",
                "EU database registration",
                "Post-market monitoring to the EU standard",
                "Data governance (Art. 10) — specific training data requirements",
                "Logging and record-keeping to the EU specification",
            ],
        },
        "Colorado AI Act (SB 24-205)": {
            "coverage": 55,
            "covered": [
                "Algorithmic Impact Assessment maps to the annual impact assessment",
                "Fairness monitoring covers discrimination testing",
                "Transparency requirements partially align",
            ],
            "gaps": [
                "Colorado-specific consumer notification format",
                "Attorney General notification requirement",
                "Colorado-specific affirmative defense documentation",
            ],
        },
    },
}


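# Illustrative accessor (a sketch, not referenced elsewhere in this module):
# reads the matrix as "if compliant with source_reg, you already cover X% of
# target_reg"; returns None when no pairwise analysis is recorded for the pair.
def gap_coverage(source_reg, target_reg, matrix=None):
    """Look up the coverage % for a (source, target) regulation pair."""
    if matrix is None:
        matrix = GAP_ANALYSIS
    entry = matrix.get(source_reg, {}).get(target_reg)
    return entry["coverage"] if entry else None
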
# ─────────────────────────────────────────────────
# SECTION 5: REGULATION URLS
# ─────────────────────────────────────────────────

REGULATION_URLS = {
    "EU AI Act (Regulation 2024/1689)": "https://eur-lex.europa.eu/eli/reg/2024/1689/oj",
    "EU AI Act — GPAI Framework (Chapter V)": "https://eur-lex.europa.eu/eli/reg/2024/1689/oj",
    "Colorado AI Act (SB 24-205)": "https://leg.colorado.gov/bills/sb24-205",
    "Texas TRAIGA (HB 1709)": "https://capitol.texas.gov/BillLookup/History.aspx?LegSess=89R&Bill=HB1709",
    "Utah AI Policy Act (SB 149)": "https://le.utah.gov/~2024/bills/static/SB0149.html",
    "California CCPA / ADMT Regulations": "https://cppa.ca.gov/regulations/",
    "Illinois AI Video Interview Act (HB 3773)": "https://www.ilga.gov/legislation/publicacts/fulltext.asp?Name=101-0260",
    "DIFC Regulation 10 (AI Processing)": "https://www.difc.ae/business/laws-regulations/data-protection/",
    "GDPR (Regulation 2016/679)": "https://eur-lex.europa.eu/eli/reg/2016/679/oj",
    "ePrivacy Directive (2002/58/EC)": "https://eur-lex.europa.eu/eli/dir/2002/58/oj",
    "UAE Federal PDPL (Decree-Law 45/2021)": "https://u.ae/en/about-the-uae/digital-uae/data/data-protection-laws",
    "DIFC Data Protection Law (Law No. 5 of 2020)": "https://www.difc.ae/business/laws-regulations/data-protection/",
    "ADGM Data Protection Regulations 2021": "https://en.adgm.thomsonreuters.com/rulebook/data-protection-regulations-2021",
    "State Data Protection Laws (CCPA, CTDPA, etc.)": "https://cppa.ca.gov/regulations/",
    "HIPAA (Health Insurance Portability and Accountability Act)": "https://www.hhs.gov/hipaa/index.html",
    "COPPA (Children's Online Privacy Protection Act)": "https://www.ftc.gov/legal-library/browse/rules/childrens-online-privacy-protection-rule-coppa",
    "FERPA (Family Educational Rights and Privacy Act)": "https://www2.ed.gov/policy/gen/guid/fpco/ferpa/index.html",
    "Illinois BIPA (Biometric Information Privacy Act)": "https://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=3004",
    "Copyright Directive (2019/790)": "https://eur-lex.europa.eu/eli/dir/2019/790/oj",
    "NIS2 Directive (2022/2555)": "https://eur-lex.europa.eu/eli/dir/2022/2555/oj",
    "Product Liability Directive (2024/2853)": "https://eur-lex.europa.eu/eli/dir/2024/2853/oj",
    "Equal Treatment Directives": "https://eur-lex.europa.eu/eli/dir/2000/78/oj",
    "Consumer Rights Directive / GPSR": "https://eur-lex.europa.eu/eli/reg/2023/988/oj",
    "Medical Device Regulation (MDR 2017/745)": "https://eur-lex.europa.eu/eli/reg/2017/745/oj",
    "Machinery Regulation (2023/1230)": "https://eur-lex.europa.eu/eli/reg/2023/1230/oj",
    "Digital Services Act (DSA 2022/2065)": "https://eur-lex.europa.eu/eli/reg/2022/2065/oj",
| 1059 |
+
"Radio Equipment Directive (RED 2014/53)": "https://eur-lex.europa.eu/eli/dir/2014/53/oj",
|
| 1060 |
+
"FTC Act Section 5 (Unfair/Deceptive Practices)": "https://www.ftc.gov/legal-library/browse/statutes/federal-trade-commission-act",
|
| 1061 |
+
"Title VII (Civil Rights Act)": "https://www.eeoc.gov/statutes/title-vii-civil-rights-act-1964",
|
| 1062 |
+
"ADA (Americans with Disabilities Act)": "https://www.ada.gov/law-and-regs/ada/",
|
| 1063 |
+
"ECOA (Equal Credit Opportunity Act)": "https://www.consumerfinance.gov/rules-policy/regulations/1002/",
|
| 1064 |
+
"FCRA (Fair Credit Reporting Act)": "https://www.ftc.gov/legal-library/browse/statutes/fair-credit-reporting-act",
|
| 1065 |
+
"Fair Housing Act": "https://www.justice.gov/crt/fair-housing-act-1",
|
| 1066 |
+
"Copyright Law (Decree-Law 38/2021) — No TDM exception": "https://u.ae/en/about-the-uae/digital-uae/data/data-protection-laws",
|
| 1067 |
+
"Cybercrime Law (Decree-Law 34/2021)": "https://u.ae/en/information-and-services/justice-safety-and-the-law/cyber-safety-and-digital-security",
|
| 1068 |
+
"Civil Transactions Law (Federal Law 5/1985)": "https://elaws.moj.gov.ae/",
|
| 1069 |
+
"Consumer Protection (Federal Law 15/2020)": "https://u.ae/en/information-and-services/business/consumer-protection",
|
| 1070 |
+
"Anti-Discrimination (Decree-Law 34/2023)": "https://u.ae/en/information-and-services/social-affairs/combating-discrimination-and-hatred",
|
| 1071 |
+
"Labour Law (Decree-Law 33/2021)": "https://u.ae/en/information-and-services/jobs/labour-law-and-general-information",
|
| 1072 |
+
}
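# Illustrative sketch (assumption: the Streamlit UI renders these entries as
# Markdown links; _example_regulation_link is hypothetical, not defined in app.py).
# Falling back to the bare name keeps the UI robust if a key is ever missing.
def _example_regulation_link(name, urls):
    """Return a Markdown link for a known regulation, or the bare name otherwise."""
    url = urls.get(name)
    return f"[{name}]({url})" if url else name

# e.g. _example_regulation_link("GDPR (Regulation 2016/679)", REGULATION_URLS)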


# ─────────────────────────────────────────────────
# SECTION 6: OTHER REGULATIONS — ONE-LINERS
# ─────────────────────────────────────────────────

OTHER_REG_ONE_LINERS = {
    "Copyright Directive (2019/790)": "Text and data mining for AI training requires compliance with opt-out provisions — commercial use needs rightsholder permission unless research exception applies.",
    "NIS2 Directive (2022/2555)": "AI systems in essential/important sectors must meet cybersecurity risk management and incident reporting obligations.",
    "Product Liability Directive (2024/2853)": "AI software is a 'product' under EU law — providers face strict liability for defective AI without needing to prove fault.",
    "Equal Treatment Directives": "AI-driven decisions in employment, education, or services must not discriminate on protected grounds (gender, race, age, disability, religion).",
    "Consumer Rights Directive / GPSR": "Consumer-facing AI products must provide clear pre-contractual information and meet general product safety requirements.",
    "Medical Device Regulation (MDR 2017/745)": "AI-based diagnostic, clinical decision support, or therapeutic systems may require CE marking as medical devices.",
    "Machinery Regulation (2023/1230)": "AI in robots, drones, and autonomous machinery must meet safety and cybersecurity requirements — applies from January 2027.",
    "Digital Services Act (DSA 2022/2065)": "Online platforms using AI recommendation systems must offer non-profiling alternatives and explain algorithmic decisions.",
    "Radio Equipment Directive (RED 2014/53)": "Connected devices with embedded AI (IoT, wearables) must comply with cybersecurity and privacy-by-design requirements.",
    "FTC Act Section 5 (Unfair/Deceptive Practices)": "AI systems must not deceive consumers about capabilities, data use, or human involvement — FTC actively enforces.",
    "Title VII (Civil Rights Act)": "AI employment tools must not cause disparate impact based on race, colour, religion, sex, or national origin.",
    "ADA (Americans with Disabilities Act)": "AI in employment and public accommodation must not discriminate against individuals with disabilities and must allow reasonable accommodation.",
    "ECOA (Equal Credit Opportunity Act)": "AI credit scoring and lending decisions must be non-discriminatory, with adverse action notices when AI contributes to denial.",
    "FCRA (Fair Credit Reporting Act)": "AI systems generating consumer reports must ensure accuracy and allow consumers to dispute inaccurate information.",
    "Fair Housing Act": "AI in housing advertising, tenant screening, and lending must not discriminate — algorithmic redlining is an enforceable violation.",
    "Copyright Law (Decree-Law 38/2021) — No TDM exception": "CRITICAL: UAE has no text and data mining exception — all copyrighted training data must be licensed.",
    "Cybercrime Law (Decree-Law 34/2021)": "Creating or disseminating deepfakes and accessing AI systems without authorisation are criminal offences in the UAE.",
    "Civil Transactions Law (Federal Law 5/1985)": "General tort liability applies to AI-caused damage under the 'guardian of things' doctrine — operators may be liable.",
    "Consumer Protection (Federal Law 15/2020)": "AI interacting with UAE consumers must not engage in deceptive or misleading practices.",
    "Anti-Discrimination (Decree-Law 34/2023)": "AI decisions in the UAE must not discriminate based on race, colour, ethnic origin, religion, or disability.",
    "Labour Law (Decree-Law 33/2021)": "AI in employment (hiring, monitoring, evaluation) must respect worker rights and disclose workplace surveillance.",
}


# ─────────────────────────────────────────────────
# SECTION 7: CONTEXTUAL NOTES
# ─────────────────────────────────────────────────

DIFC_CONTROLLER_NOTE = (
    "Under DIFC Regulation 10, a 'deployer' is the entity that directs, authorises, or benefits from the operation of the AI system and its output — "
    "comparable to a data controller. An 'operator' is the entity that runs the system on behalf of the deployer — comparable to a data processor. "
    "Most compliance obligations fall on the deployer."
)


# ─────────────────────────────────────────────────
# SECTION 8: DISCLAIMER
# ─────────────────────────────────────────────────

DISCLAIMER = (
    "This analysis is provided for informational and orientation purposes only. "
    "It does not constitute legal advice. The regulatory landscape is evolving rapidly — "
    "obligations, deadlines, and enforcement approaches may change. "
    "Always consult qualified legal counsel for definitive compliance guidance tailored to your specific situation."
)
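# Illustrative sketch: how a caller might combine the Section 5 and Section 6
# tables into a single summary bullet. _example_summary_line is a hypothetical
# helper, not part of app.py; pass OTHER_REG_ONE_LINERS and REGULATION_URLS.
def _example_summary_line(name, one_liners, urls):
    """Build a Markdown bullet: bold name, one-line summary, optional source link."""
    blurb = one_liners.get(name, "No summary available.")
    line = f"- **{name}**: {blurb}"
    url = urls.get(name)
    return f"{line} ([source]({url}))" if url else line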
requirements.txt
ADDED

@@ -0,0 +1,2 @@
streamlit==1.41.1
reportlab