We're looking for a Data Engineer to build and scale the data infrastructure behind our Sales Streaming platform. You'll own pipelines that process massive volumes of data and power real-time, AI-driven features used by millions of sales professionals.
You'll work end to end, from data lakes to real-time streaming, collaborating closely with Data Science, ML, and Product teams to turn complex data into high-impact product capabilities.
This role is based in Tel Aviv. We work in a hybrid model, with 3 days a week in the office.
This might be for you if:
You enjoy building data systems that run at scale and serve real users
You like owning your work end to end, from design to production
You're a problem solver who enjoys turning messy data into reliable systems
You value autonomy, impact, and fast decision-making
You're comfortable working in dynamic, AI-forward environments.
Requirements:
3+ years of experience building scalable data systems
Strong Python and SQL skills
Experience using GenAI for software development and for improving work processes
Hands-on experience with modern data stacks (Spark, Airflow, AWS, Kubernetes)
Experience with batch and streaming data pipelines
A strong builder mindset, curiosity, and willingness to learn.
This position is open to all candidates.