🛡️ Astroware Inc.

AI Security Research & Agentic Safety

🌐 Website  |  💻 GitHub  |  🐦 Twitter


Who We Are

Astroware is an AI security startup focused on safety, alignment, and agentic security research. We build tools and models that make AI systems safer to deploy at scale, with a particular focus on guard models and constitutional AI classifiers that act as runtime security layers for AI agents.

We are a Delaware C-Corp with a globally distributed team spanning Dubai and Bengaluru.

What We're Working On

🔒 Guard Models & Constitutional Classifiers

Our core research area: we develop guard models that serve as runtime security layers for AI agents, blocking jailbreaks, prompt injection, and other unsafe behavior. Our classifiers are built on a constitutional AI framework with structured severity tiers spanning harmful and benign behavioral categories (a minimal usage sketch follows below).
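
As a rough illustration, a guard model in this family might sit in front of an agent as a pre-call filter. This is a minimal sketch, assuming `astroware/Halo0.8B-guard-v1` loads as a standard `transformers` text-classification pipeline; the label names and the threshold are hypothetical.

```python
# Minimal sketch: a guard model as a runtime security layer in front
# of an agent. Assumes the model exposes a standard text-classification
# head via transformers; the label names and severity threshold below
# are hypothetical illustrations, not the model's documented interface.
from transformers import pipeline

guard = pipeline("text-classification", model="astroware/Halo0.8B-guard-v1")

def guarded_call(agent, user_input: str, threshold: float = 0.5) -> str:
    """Screen the input with the guard before the agent ever sees it."""
    verdict = guard(user_input)[0]  # e.g. {"label": "unsafe", "score": 0.97}
    if verdict["label"] != "safe" and verdict["score"] >= threshold:
        return "Request blocked by runtime guard."
    return agent(user_input)
```

The same check can be applied symmetrically to the agent's output before it reaches the user, which is the usual pattern for a runtime defense layer.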

🔱 Trishool — Agentic Security Network

Trishool is our adversarial evaluation and agentic security platform, live on Bittensor Subnet 23. It stress-tests AI agents and guard models through real-world adversarial challenges, including jailbreak attacks against protected AI systems. Trishool's Phase 2 focuses on positioning guard models as the critical runtime defense layer for autonomous AI agents.

⚖️ Alignment Research

We conduct alignment training research for large language models, including constitutional frameworks, severity-tiered taxonomies, and structured datasets for supervised fine-tuning and reinforcement learning.
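
For a sense of what "severity-tiered" means in practice, here is a hypothetical example of a single taxonomy record as it might appear in a structured dataset for supervised fine-tuning or reinforcement learning; every field name, category, and tier label is invented for illustration.

```python
# Hypothetical severity-tiered taxonomy record for alignment training.
# All field names, categories, and tier labels are illustrative only,
# not Astroware's actual schema.
example_record = {
    "prompt": "How do I bypass a login form?",
    "category": "cyber_intrusion",        # behavioral category
    "severity_tier": 2,                   # e.g. 0 = benign .. 3 = critical
    "constitution_clause": "no_assistance_with_unauthorized_access",
    "target_label": "refuse_with_explanation",  # supervision signal for SFT/RL
}
```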

Our Focus Areas

  • Guard model development for runtime AI agent protection
  • Adversarial red-teaming and jailbreak evaluation
  • Constitutional AI classifiers and alignment frameworks
  • Agentic security for multi-agent and autonomous systems
  • Open-source contributions to AI safety tooling

Open Source

We believe AI security benefits from open collaboration. We actively contribute to open-source AI safety projects and publish our guard model research, adversarial evaluation tools, and security architectures for the community to build on.


Building the security layer for the agentic AI era. 🚀