---
license: cc-by-nc-4.0
doi: 10.5281/zenodo.18157450
extra_gated_prompt: >-
  By requesting access to this dataset, you agree to: 1. Use this dataset only
  for research, journalism, policy, or education purposes. 2. NOT attempt to
  re-identify hashed usernames. 3. NOT use this data to train AI systems to
  generate non-consensual imagery. 4. Cite this dataset in any publications.
  5. Report any ethical concerns to nana.nwachukwu@gmail.com.
extra_gated_fields:
  Name: text
  Affiliation: text
  Intended use: text
  I agree to the terms above: checkbox
task_categories:
- text-classification
language:
- en
tags:
- TfGBV
- AI safety
- content moderation
- NCII
- gender-based violence
- Grok
- deepfakes
pretty_name: TfGBV-Grok Dataset
size_categories:
- n<1K
---
# TfGBV-Grok Dataset: AI-Facilitated Non-Consensual Intimate Image Generation

> ⚠️ **WORK IN PROGRESS:** This dataset is actively being developed. There may be classification errors or inconsistencies. It will be updated as I process the rest of the raw data. Feedback and corrections are welcome; please email nana.nwachukwu@gmail.com.
## Dataset Description
This dataset documents 565 records surrounding users publicly requesting Grok AI (integrated into X/Twitter) to generate non-consensual intimate imagery (digitally "undressing" women without their knowledge or consent), along with victim testimonies, reports of abuse, and commentary on the phenomenon.
The dataset was curated to support research on technology-facilitated gender-based violence (TfGBV) and to inform policy responses to emerging AI harms.
## Dataset Summary
| Statistic | Value |
|---|---|
| Total records | 565 |
| Perpetrator requests | 481 |
| Prompt sharing (attack methods) | 48 |
| Victim testimonies | 7 |
| Reports of abuse | 10 |
| Commentary | 13 |
| Benign/unrelated Grok use | 6 |
## Actor Type Classification

Each record is classified by who is speaking:

| Actor_Type | Count | Description |
|---|---|---|
| PERPETRATOR_REQUEST | 481 | Users directly requesting Grok to generate NCII |
| PROMPT_SHARING | 48 | Users sharing attack methods/prompts (JSON exploits, etc.) |
| COMMENTARY | 13 | Users discussing or criticizing the phenomenon |
| REPORTING_ABUSE | 10 | Users calling out or questioning harmful behavior |
| VICTIM_TESTIMONY | 7 | Victims reporting that their images were targeted |
| BENIGN_GROK_USE | 6 | Unrelated Grok use (e.g., Halloween costumes) |
## Harm Type Classification

Each record is also classified by the type of harm:

| Harm_Type | Count | Description |
|---|---|---|
| NCSII_REQUEST | 418 | Direct requests to remove/replace clothing |
| SHARES_NCSII_METHOD | 42 | Sharing prompts/methods for NCII generation |
| DISCUSSES_NCSII | 28 | Discussion of the NCII phenomenon |
| PROMPT_INJECTION | 8 | Structured JSON attacks to bypass safeguards |
| NOT_NCSII_RELATED | 6 | Benign content, not related to NCII |
| SHARES_BODY_MOD_METHOD | 6 | Sharing methods for body modification requests |
| BODY_MODIFICATION | 5 | Requests to alter body features |
| DISCUSSES_MINOR_TARGETING | 2 | Discussion of minors being targeted |
## Data Fields

| Column | Type | Description |
|---|---|---|
| Record_ID | int | Unique identifier (1-565) |
| Tweet_ID | int | Original tweet ID (retained for verification) |
| Username_Hash | string | SHA-256 hash of username (truncated to 12 characters) |
| Date | string | Standardized date (YYYY-MM-DD) |
| Content_Snippet | string | Text content of the tweet |
| Actor_Type | string | Who is speaking (perpetrator, victim, commentator, etc.) |
| Harm_Type | string | Type of harm documented |
## Important Note on Classification

This dataset distinguishes between:

- **Perpetrators**: users actively requesting or sharing methods for NCII generation
- **Victims**: users reporting that their own or someone else's images were targeted
- **Commentators**: users discussing, criticizing, or reporting the phenomenon
This distinction is crucial for accurate research. Not every record represents a perpetrator request — some document the broader ecosystem of harm, including victim experiences and community responses.
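Because not every record is an attack, analyses should filter on `Actor_Type` before drawing conclusions. A minimal sketch of that filtering with pandas, using synthetic rows that only mimic the documented schema (the values below are invented for illustration, not taken from the dataset):

```python
import pandas as pd

# Synthetic rows following the documented columns; real values differ.
records = pd.DataFrame(
    {
        "Record_ID": [1, 2, 3],
        "Tweet_ID": [111, 222, 333],
        "Username_Hash": ["a1b2c3d4e5f6", "0f9e8d7c6b5a", "deadbeef0123"],
        "Date": ["2026-01-02", "2026-01-03", "2026-01-04"],
        "Content_Snippet": ["...", "...", "..."],
        "Actor_Type": ["PERPETRATOR_REQUEST", "VICTIM_TESTIMONY", "COMMENTARY"],
        "Harm_Type": ["NCSII_REQUEST", "DISCUSSES_NCSII", "DISCUSSES_NCSII"],
    }
)

# Keep only perpetrator requests; victim testimonies and commentary
# document the surrounding ecosystem, not attacks themselves.
perpetrators = records[records["Actor_Type"] == "PERPETRATOR_REQUEST"]
print(len(perpetrators))  # 1 of the 3 synthetic rows
```

The same pattern applies to `Harm_Type`, e.g. separating `PROMPT_INJECTION` records when studying safeguard bypasses.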
## Intended Uses
✅ Permitted uses:
- Academic research on TfGBV and AI harms
- Policy analysis and advocacy
- Journalism and public interest reporting
- AI safety tool development (detection, moderation)
- Educational purposes
❌ Prohibited uses:
- Re-identification of hashed usernames
- Harassment or doxxing of any individuals
- Training AI systems to generate NCII
- Commercial use without authorization
- Redistribution without permission
## Access
This dataset is gated to ensure responsible use. By requesting access, you agree to:
- Use the data only for permitted purposes listed above
- NOT attempt to re-identify individuals
- NOT use the data to develop tools that facilitate image-based abuse
- Cite this dataset in any publications or outputs
- Report any ethical concerns or incidents
## Ethical Considerations
### Privacy
- Usernames have been pseudonymized via SHA-256 hashing
- Tweet IDs are retained for verification purposes only
- No victim names or identifiers were found in the dataset
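The card states that usernames were pseudonymized via SHA-256 truncated to 12 characters, but does not publish the exact procedure (e.g., whether a salt was used or how usernames were normalized). A minimal sketch under the assumptions of UTF-8 encoding, no salt, and a lowercase hex digest:

```python
import hashlib

def pseudonymize(username: str) -> str:
    """Hash a username with SHA-256 and keep the first 12 hex characters.

    Assumptions not specified in the card: UTF-8 encoding, no salt,
    lowercase hex digest. Note that an unsalted hash of a public
    username can be reversed by re-hashing candidate names, which is
    one reason the license forbids re-identification attempts.
    """
    return hashlib.sha256(username.encode("utf-8")).hexdigest()[:12]

print(pseudonymize("example_user"))  # 12-character hex string
```

The truncation to 12 hex characters (48 bits) keeps hashes stable for deduplication within the dataset while shortening the column.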
### Sensitivity
- This dataset documents harmful behavior and contains references to sexual content
- Two records reference minors — handle with appropriate care
- Content may be distressing to some researchers
### Limitations
- Dataset captures only publicly visible posts
- May not represent the full scope of the phenomenon
- Content snippets are partial, not full tweets
## Citation

```bibtex
@dataset{tfgbv_grok_2026,
  author    = {Nwachukwu, Nana Mgbechikwere},
  title     = {TfGBV-Grok Dataset: AI-Facilitated Non-Consensual Intimate Image Generation},
  year      = {2026},
  doi       = {10.5281/zenodo.18157450},
  publisher = {Zenodo},
  url       = {https://huggingface.co/datasets/Mtechlaw/TfGBV-Grok-NCII-Dataset}
}
```
## Contact

For questions, access requests, or to report concerns:

- Nana Mgbechikwere Nwachukwu
- Email: nana.nwachukwu@gmail.com
## Changelog

- **v2.0 (January 2026):** Added `Actor_Type` classification to distinguish perpetrators from victims, commentators, and reporters. Improved `Harm_Type` taxonomy.
- **v1.0 (January 2026):** Initial release.