– Drive technical decisions around AI evaluation frameworks, monitoring, and observability tools
– Design, build, and deploy AI-powered solutions across Yad2's products and platforms
– Work hands-on with Large Language Models (LLMs), retrieval-augmented generation (RAG), embeddings, vector databases, and prompt engineering
– Develop scalable backend services that integrate AI capabilities into production
– Collaborate closely with product, data, and engineering teams to identify high-impact AI opportunities
– Lead by example: write clean, high-quality code, set best practices, and help shape the foundation of the AI team
Requirements:
– Strong software engineering background – solid experience with Python or Node.js, microservices, APIs, and cloud environments (AWS preferred)
– Hands-on AI experience – building and deploying applications with LLMs (OpenAI, Anthropic, HuggingFace, etc.) and generative AI technologies
– Knowledge of Machine Learning fundamentals (data preprocessing, model fine-tuning, evaluation)
– Experience with vector databases (e.g., Pinecone, Weaviate, FAISS) and search/recommendation systems is a big plus
– Ability to thrive in ambiguity, take ownership, and move fast in a startup-like environment
– Prior experience in building or scaling AI/ML infrastructure in production – an advantage
– Understanding of AI evaluation methodologies and metrics
This position is open to all candidates.