GodsDevProject committed 6923029 (verified · parent 579db03)

Create MODEL_CARDS.md

Files changed: MODEL_CARDS.md (added, +94 −0)
# Model Cards — AI Components

This document describes all AI-related components used in **Federal FOIA Intelligence Search**.

---

## Model Name

**General-Purpose Large Language Model (LLM)**
(Provided via the Hugging Face / OpenAI runtime)

> ⚠️ The application does **not** ship or host its own trained model.

---

## Intended Use

**Allowed Uses**
- Summarization of publicly released FOIA records
- Contextual explanation of document metadata
- Research assistance for journalists, academics, and legal professionals

**Explicitly Disallowed Uses**
- Legal advice
- Evidence generation
- Intelligence analysis
- Surveillance, profiling, or targeting
- Automated decision-making

---

## Training Data Summary

- Model training data is **external** to this application
- This application does **not train, fine-tune, or adapt** models
- User inputs are **not retained** for training

---

## Input Data Constraints

- Public FOIA metadata
- Optional, user-approved PDF text extraction
- User-supplied questions only

**No ingestion of:**
- Private data
- Classified information
- Authentication-protected materials

---

## Output Constraints

- Outputs are explicitly labeled as AI-generated
- Outputs are citation-anchored
- Outputs include an integrity hash
- Outputs are not persisted

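The labeling, citation-anchoring, and integrity-hash constraints above can be sketched roughly as follows. This is an illustrative Python sketch only; the `package_output` helper and its payload fields are assumptions for this document, not the application's actual code.

```python
import hashlib
import json

def package_output(answer: str, citations: list[str]) -> dict:
    """Wrap a model answer with an explicit AI-generated label, its
    citation anchors, and a SHA-256 integrity hash of that content.
    Hypothetical helper; field names are illustrative assumptions."""
    payload = {
        "label": "AI-generated",  # explicit disclosure to the user
        "answer": answer,
        "citations": citations,   # anchors back to the source records
    }
    # Hash a canonical serialization so identical content always yields
    # the same integrity hash; nothing here is written to storage.
    canonical = json.dumps(payload, sort_keys=True).encode("utf-8")
    payload["integrity_hash"] = hashlib.sha256(canonical).hexdigest()
    return payload
```

Because outputs are not persisted, a hash of this kind lets a recipient verify after the fact that a quoted output has not been altered.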
---

## Risk Mitigation

| Risk | Mitigation |
|------|------------|
| Hallucination | Citation anchoring + disclosure |
| Over-reliance | Warnings + opt-in |
| Data leakage | No persistence |
| Misuse | Feature gating |

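The opt-in and feature-gating mitigations in the table can be illustrated with a minimal sketch. The feature names and the `gate_request` helper are hypothetical assumptions for this document, not the application's real interface.

```python
# Hypothetical allow-list mirroring the "Allowed Uses" section; the
# names are illustrative assumptions, not real feature identifiers.
ALLOWED_FEATURES = {"summarize", "explain_metadata", "research_assist"}

def gate_request(feature: str, user_opted_in: bool) -> bool:
    """Permit an AI request only for an allow-listed feature and only
    after the user has explicitly opted in."""
    if not user_opted_in:
        return False  # over-reliance mitigation: AI assistance is opt-in
    return feature in ALLOWED_FEATURES  # misuse mitigation: feature gating
```

Disallowed categories such as surveillance or profiling are simply never on the allow-list, so they cannot be reached through the gate.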
---

## Ethical Considerations

This AI component is intentionally:
- Non-autonomous
- Non-persistent
- User-controlled
- Auditable

---

## Limitations

- May misinterpret scanned PDFs
- Does not validate document authenticity
- Cannot access non-public records

---

## Contact

For AI safety inquiries:
**Project Maintainer: Ezra Godschild**