As a Principal Security Engineer, youll be responsible for helping secure leading edge generative AI product capabilities. You will work with internal AI/ML development teams to perform dynamic and static application security reviews of AI Systems throughout the MLOps lifecycle. In this role youll be responsible for identifying vulnerabilities, assisting with remediation planning and providing development security support. A key part of this position is understanding, discovering and documenting vulnerabilities in proprietary AI/ML systems which use technologies such as large language models (LLMs).
What you get to do in this role:
Conduct security testing and vulnerability assessments for AI systems, particularly those utilizing large language models (LLMs).
Develop and implement security benchmarks and evaluation protocols for LLMs.
Identify and mitigate potential security threats and vulnerabilities in AI systems.
Collaborate with AI developers to integrate security measures into the development lifecycle.
Stay updated on the latest AI security trends and technologies.
Provide detailed reports and recommendations based on security evaluations
Requirements:
An analytical mind for problem solving, abstract thought, and offensive security tactics.
Strong interpersonal skills (written and oral communication) and the ability to work collaboratively in a team environment.
Bachelors degree in computer science/engineering or equivalent experience. Post graduate degree and/or related certifications in Machine Learning or Artificial Intelligence recommended.
8+ years in a role performing AI/ML Security reviews and/or assisting in security evaluations during model training
High level of language reading comprehension for Python code and experiencing performing source code reviews for security issues
Strong understanding of machine learning frameworks (e.g., TensorFlow, PyTorch).
Strong understanding of Natural Language Processing (NLP) and related frameworks (e.g. nltk, spacy)
Experience training machine learning models including transformer based LLMs.
In-depth experience with exploiting OWASP LLM Top 10 application vulnerabilities, such as prompt injection and data poisoning
In-depth experience with exploiting OWASP Top 10 application vulnerabilities, such as deserialization and injection attacks.
Experience performing Threat Modeling, Penetration Testing and/or Red Team recommended.
Knowledge of regulatory and compliance standards related to AI and data security.
Ability to articulate complex issues to executives and customers.
An analytical mind for problem solving, abstract thought, and offensive security tactics.
Strong interpersonal skills (written and oral communication) and the ability to work collaboratively in a team environment.
Bachelors degree in computer science/engineering or equivalent experience. Post graduate degree and/or related certifications in Machine Learning or Artificial Intelligence recommended.
8+ years in a role performing AI/ML Security reviews and/or assisting in security evaluations during model training
High level of language reading comprehension for Python code and experiencing performing source code reviews for security issues
Strong understanding of machine learning frameworks (e.g., TensorFlow, PyTorch).
Strong understanding of Natural Language Processing (NLP) and related frameworks (e.g. nltk, spacy)
Experience training machine learning models including transformer based LLMs.
In-depth experience with exploiting OWASP LLM Top 10 application vulnerabilities, such as prompt injection and data poisoning
In-depth experience with exploiting OWASP Top 10 application vulnerabilities, such as deserialization and injection attacks.
Experience performing Threat Modeling, Penetration Testing and/or Red Team recommended.
Knowledge of regulatory and compliance standards related to AI and data security.
Ability to articulate complex issues to executives and customers.
This position is open to all candidates.