Glemad

    Research

    Our research teams investigate the safety, inner workings, and real-world impacts of autonomous defense intelligence so that advanced systems strengthen security as they grow more capable.

    Research areas

    Autonomous Defense

    We investigate how AI systems can interpret threat signals, reason about infrastructure state, and take defensive actions within strict, auditable boundaries. Our work spans active defense, anomaly detection, and the formal modeling of agent behavior in adversarial environments.

    Threat Intelligence & Adversarial Robustness

    We study how attackers adapt to automated defenses and how defensive systems can anticipate, simulate, and withstand evolving tactics. This includes red-teaming methodologies, attack surface modeling, and the evaluation of agent resilience under sustained pressure.

    System Interpretability & Alignment

    We develop methods to make autonomous decisions inspectable and debuggable. Our research focuses on tracing how agents arrive at conclusions, surfacing uncertainty, and ensuring that system objectives remain aligned with operator intent and organizational risk frameworks.

    Safety Evaluation & Red Teaming

    We build rigorous testing frameworks to evaluate the safety and reliability of defensive AI systems before and after deployment. This includes benchmark design, adversarial evaluation, and continuous monitoring protocols that catch misalignment and failure modes early.

    Work with us

    We collaborate with academic institutions, industry partners, and public agencies on research that advances the state of the art in autonomous defense.