AI safety research focused on adversarial testing and robustness evaluation. Building the Crucible Evaluation Platform, an open-source framework for systematic model vulnerability assessment. Exploring model interpretability, alignment techniques, and ecological paradigms for human/AI coexistence that serve all forms of life on Earth. Interested in bridging theoretical safety research with practical evaluation tools that scale across model architectures while accounting for broader impacts on Earth's living systems.