We are looking for a talented and experienced Senior Data Engineer to work on our massive data-processing pipelines and help us deliver more insights from tens of billions of events.
Job Description:
We are building cutting-edge data lakes and data solutions, primarily on Databricks, for large-scale customers.
Our solutions introduce next-level automation and aim for low-code/no-code platforms that allow our customers to deliver quickly while we hide the engineering complexity behind the scenes.
If you are not afraid of any engineering task, like to build cool stuff, and enjoy seeing customers use your craft, you belong with us.
Full-time position.
Location: Tel Aviv, Hybrid.
Requirements:
Strong Python/Scala programming skills (Java a plus).
Expertise in building cloud-scalable, real-time Data Lake solutions.
Databricks operational knowledge, specializing in job cluster optimization.
7+ years of large-scale data engineering experience.
3+ years developing within Cloud Services (Azure, AWS, GCP).
Proficiency with Spark, Delta Lake, Change Data Feed (CDF), ACID transactions, and concurrency models.
Solid database and SQL knowledge.
Familiarity with data stream processing (Kafka, Spark Streaming).
Experience with ETL tools and pipelines (e.g., Rivery, Fivetran, DBT, Airflow).
Knowledge or experience in AI/ML/MLOps/LLM is a plus.
Certifications in Cloud, Databricks, or others are highly desirable.
This position is open to all candidates.