* Building and scaling global AI security teams, while providing mentorship to senior managers and fostering an innovation-driven security culture.
* Overseeing advanced adversarial evaluations for GenAI models, agentic AI, and multi-agent (A2A) systems, and delivering executive-level threat intelligence and risk assessments to inform corporate AI strategy.
* Defining and implementing AI red-teaming frameworks aligned with OWASP AI Security guidelines, MITRE ATLAS, and NIST AI RMF, and operationalizing automated red-team engines to continuously stress-test models at scale.
* Partnering with product and engineering to design and deploy enterprise-ready AI guardrails, including policy enforcement layers, monitoring pipelines, and anomaly detection systems, and championing secure deployment practices for GenAI (including agent orchestration via MCP and A2A workflows).
About Alice:
Alice is the leading provider of security and safety solutions for online experiences, safeguarding more than 3 billion users, top foundation models, and the world’s largest enterprises and tech platforms every day. As a trusted ally to major technology firms and Fortune 500 brands that build user-generated and GenAI products, Alice empowers security, AI, and policy teams with low-latency Real-Time Guardrails and a continuous Red Teaming program that pressure-tests systems with adversarial prompts and emerging threat techniques. Powered by deep threat intelligence, unmatched harmful-content detection, and coverage of 117+ languages, Alice enables organizations to deliver engaging and trustworthy experiences at global scale while operating safely and responsibly across all threat landscapes.
Hybrid:
Yes
Must-Have
* 12+ years of relevant industry experience in cybersecurity, machine learning security, or related fields (with a focus on enterprise-scale products and AI systems).
* Extensive leadership experience managing and scaling security or R&D organizations, with a strong track record of building high-performance teams and driving complex projects to completion.
* Deep expertise in cybersecurity and AI: proven understanding of AI threats, adversarial machine learning, LLM vulnerabilities, and AI safety frameworks (OWASP Top 10 for LLMs, NIST AI Risk Management Framework, etc.).
* Strategic mindset and execution skills, with the ability to set vision and direction for AI security initiatives while also diving into technical details when needed.
Nice-to-Have
* Demonstrated thought leadership in AI security by publishing research, speaking at industry events (Black Hat, DEF CON, OWASP Global AppSec), or contributing to AI security standards and open-source projects.
* Experience building or deploying AI security products and tools, such as red teaming automation platforms, guardrail frameworks, or AI monitoring and anomaly detection systems.
* Hands-on familiarity with agentic AI frameworks and protocols (e.g., LangChain, AutoGen, MCP, A2A) and cloud-based AI environments, demonstrating an understanding of how to secure complex AI orchestration workflows.
* A background in AI trust and safety or adversarial ML research, with insight into emerging threats and mitigation techniques for GenAI applications.