Responsibilities:
Deploy and maintain critical data pipelines in production.
Drive strategic technology initiatives and long-term plans from initial exploration and POC through to going live in a fast-paced production environment.
Design infrastructural data services, coordinating with the Architecture team, R&D teams, Data Scientists, and product managers to build scalable data solutions.
Work in an Agile process with Product Managers and other tech teams.
Take end-to-end responsibility for developing data-crunching and manipulation processes within the Optimove product.
Design and implement data pipelines and data marts.
Create data tools for various teams (e.g., onboarding teams) that assist them in building, testing, and optimizing the delivery of the Optimove product.
Explore and implement new data technologies to support Optimove's data infrastructure.
Work closely with the core data science team to implement and maintain ML features and tools.
Requirements:
B.Sc. in Computer Science or equivalent.
7+ years of extensive SQL experience (preferably in a production environment).
Experience with programming languages (preferably Python) is a must.
Experience with “Big Data” environments, tools, and data modeling (preferably in a production environment).
Strong capability in schema design and data modeling.
Understanding of micro-services architecture.
Quick self-learner with strong problem-solving capabilities.
Strong communication skills and a collaborative approach.
Process- and detail-oriented.
Passion for solving complex data problems.
Desired:
Familiarity with Airflow, ETL tools, Snowflake, and MSSQL.
Experience with GCP services.
Experience with Docker and Kubernetes.
Experience with PubSub/Kafka.