---
title: Cogsec Analyzer
emoji: ๐
colorFrom: green
colorTo: gray
sdk: gradio
sdk_version: 6.5.1
app_file: app.py
pinned: false
license: mit
short_description: How manipulative is your chatbot?
---
# COGSEC Analyzer
**Cognitive Security Nutrition Facts for AI Interactions**
*How manipulative is your chatbot?*
## What It Does
Analyzes AI-generated text for cognitive manipulation patterns, focusing on the **interactional meta-structure** rather than semantic content. Outputs a forensic breakdown including:
- **Theatricality Index** (0-10): Measures enthusiasm, hyper-validation, affective convergence
- **Validation Density**: Is the AI a critical partner or a mirror?
- **Engagement Mechanics**: Reward salience, forced teaming, artificial urgency
- **Intent Defense Detection**: Does the AI cite "good intent" to justify manipulative patterns?
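The app's actual scoring logic isn't documented here, but the validation-density idea can be illustrated with a minimal keyword-based sketch. All phrase lists, the sentence-splitting heuristic, and the 0.0–1.0 scale below are hypothetical, for illustration only:

```python
# Hypothetical sketch of a validation-density metric. The real analyzer's
# phrase lists and scoring are not specified in this README.
VALIDATION_PHRASES = [
    "great question", "you're absolutely right", "brilliant",
    "excellent point", "what a fantastic", "love that",
]

def validation_density(text: str) -> float:
    """Fraction of sentences containing a validation phrase (0.0 to 1.0)."""
    # Crude sentence split: treat '!' like '.' and drop empty fragments.
    sentences = [s for s in text.replace("!", ".").split(".") if s.strip()]
    if not sentences:
        return 0.0
    hits = sum(
        any(p in s.lower() for p in VALIDATION_PHRASES) for s in sentences
    )
    return hits / len(sentences)

print(validation_density("Great question! That idea needs more work."))  # 0.5
```

A score near 1.0 would suggest the AI is mirroring rather than critiquing; a real metric would need semantic analysis rather than literal phrase matching.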
## Live Demo
[HuggingFace Space](https://huggingface.co/spaces/reflectiveattention/cogsec-analyzer)
## Usage
1. Get an HF token from [huggingface.co/settings/tokens](https://huggingface.co/settings/tokens)
2. Paste AI-generated text to analyze
3. Click "Analyze COGSEC"
## Background
Based on forensic analysis of human-AI interaction dynamics, specifically detecting:
- **Recursive Validation Loops**: AI praising user for insights *about* manipulation
- **High-Drift Amplification**: Escalating emotional intensity over conversation turns
- **Affective Convergence**: Mirroring user emotional state to simulate rapport
## Related Projects
- [prompt-prix](https://github.com/shanevcantwell/prompt-prix) – Audit local LLM tool-calling capability
- [whetstone-method](https://github.com/shanevcantwell/whetstone-method) – Framework for using AI as a thinking partner
## License
MIT
---
Part of the [Reflective Attention](https://reflectiveattention.ai) project on AI cognitive safety.