In one sentence
The company's Data and AI platform is looking for a Software/ML Engineer to join us in Raanana, Israel. In this role, you will join the Data Science & MLOps team, which develops product features on the Databricks platform, with Spark at the heart of our implementation.
You will work end-to-end: from data preparation and feature engineering, through ML/LLM modeling, to production-grade deployment, CI/CD, and monitoring.
You'll need excellent technical skills along with strong communication and ownership.
We are a team with open discussions, where every voice counts, and we are open-minded about adopting new technologies when they make sense.
What will your job look like?
Develop production-grade ML/LLM services and pipelines on Databricks, using Spark.
Design, implement, and maintain reusable preprocessing and feature engineering components.
Industrialize the ML lifecycle (experiment tracking, model packaging, deployment, monitoring) and improve reliability, performance, and cost.
Build ML pipelines and model deployments using Jenkins and Databricks Asset Bundles.
Work closely with data scientists and engineers to translate research into scalable, maintainable production code.
Own features from design through production, including writing tests, documenting, and providing operational support.
Come to the office 3 times a week.
Requirements:
Mandatory – At least 5 years of experience as a Python development specialist.
Mandatory – At least 3 years of experience as a Spark development specialist.
Mandatory – Solid software engineering practices: clean code, testing, packaging, debugging, and performance profiling.
Mandatory – Experience implementing and operating ML pipelines in production (CI/CD, automation, monitoring, rollbacks).
Mandatory – Experience with LLM solutions: prompt engineering, RAG pipelines, fine-tuning, evaluation, and/or agent workflows.
Mandatory – At least 2 years of experience working with Linux.
Considered a plus:
Hands-on experience with Databricks (jobs/workflows), Spark optimization, and operating data/ML pipelines at scale.
Experience with Databricks Asset Bundles and production deployment patterns.
Experience with Jenkins pipelines and infrastructure-as-code mindset for repeatable deployments.
Experience with MLflow (tracking/model registry) and/or model serving patterns (batch, real-time).
This position is open to all candidates.