---
language: en
tags:
  - ai-security
  - llm-security
  - agentic-ai
  - red-teaming
  - adversarial-ml
  - governance
  - risk-management
  - safety
  - cybersecurity
license: apache-2.0
---

# Cogensec

Cogensec builds security and governance for agentic AI systems.

We treat AI as decision-making infrastructure, not just software.
That means securing identity, intent, memory, autonomy, and trust across models, tools, and multi-agent workflows.

## What you’ll find here

We publish practical artifacts for builders, researchers, and security teams:

- **Security evaluation suites** for LLMs and agentic workflows  
- **Adversarial datasets** for testing misuse, jailbreaks, prompt injection, and tool abuse  
- **Reference agents** and **defensive patterns** (guardrails, policies, enforcement primitives)  
- **Research notes and reproducible experiments** focused on real-world deployment risks  
- **Governance templates** and guidance aligned to modern AI risk frameworks

## Our focus areas

- **Agent security**: tool misuse, agentic escalation, multi-agent coordination risks  
- **Non-human identity**: authentication, authorization, and lifecycle for agents and tools  
- **Memory governance**: retention, leakage, poisoning, and policy enforcement  
- **Intent and control**: goal integrity, autonomy boundaries, and safe orchestration  
- **Trust and provenance**: context integrity, auditability, attestation, and monitoring

## How to use our work

- Start with pinned repositories for the most current releases.
- Each repo includes:
  - installation and quickstart
  - evaluation methodology
  - dataset/model cards where applicable
  - reproducibility notes and limitations

## Responsible use

Cogensec publishes security research to improve safety in AI systems.
Some materials may describe adversarial behavior to support testing and defense.

- Use responsibly and ethically.
- Don’t deploy findings to harm others or evade safeguards.
- Report issues or concerns through the contact channels below.

## Contributing

We welcome:
- issue reports with reproduction steps
- benchmark proposals and test cases
- dataset improvements and labeling fixes
- PRs that improve documentation and reproducibility

If you want to collaborate on research or run joint evaluations, reach out.

## Contact

- Website: cogensec.com  
- GitHub: github.com/cogensec
- X (Twitter): x.com/cogen_sec
- LinkedIn: linkedin.com/company/cogensec  

## Citation

If you use Cogensec artifacts in research, please cite the relevant repository.
Where provided, use the `CITATION.cff` file.
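Where a repository ships a `CITATION.cff`, it will follow the Citation File Format. As a hedged illustration only, a minimal entry might look like the following sketch (the title, version, date, and repository URL are placeholders, not real release metadata):

```yaml
# Hypothetical example of a minimal CITATION.cff file.
# All values below are placeholders; use the file shipped in the repo.
cff-version: 1.2.0
message: "If you use this artifact, please cite it as below."
title: "Example Cogensec evaluation suite"   # placeholder title
authors:
  - name: "Cogensec"                         # entity author (organization)
version: "0.1.0"                             # placeholder version
repository-code: "https://github.com/cogensec/example-repo"  # hypothetical URL
```

Most reference managers and GitHub's "Cite this repository" button can read this format directly.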

## License

Unless stated otherwise, repositories here are released under the **Apache-2.0** license.
Datasets may carry their own terms; always check the dataset card.