Join as the first engineer and researcher behind GenAI Protect, building LLM-powered discovery, classification, and policy-enforcement services that secure enterprise generative-AI usage.
Key Responsibilities
Design and implement scalable inference services (Python/FastAPI, Triton, ONNX) with GPU orchestration.
Develop and fine-tune transformer models for prompt inspection, PII/red-flag detection and intent analysis.
Collect datasets, build evaluation pipelines and drive continuous model improvement.
Collaborate with Threat-Intel and Product to respond to emerging GenAI attack vectors.
Publish internal/external tech blogs; help hire and mentor future AI-security engineers.
Requirements
8+ years of software/ML engineering experience; deep hands-on experience with PyTorch/TensorFlow and NLP.
Experience deploying ML in low-latency SaaS environments under compliance constraints.
Strong background in data-privacy, DLP or AI-security research; publications/patents a plus.
Self-starter comfortable defining architecture, tooling and roadmap from scratch.
This position is open to all candidates.