As a Data Engineer, you will be a key member of the data team at the core of a data-driven company, developing scalable, robust data platforms and data models and providing business intelligence. You will work in an evolving, challenging environment with a variety of data sources, technologies, and stakeholders to deliver the best solutions that support the business and ensure operational excellence.
If you are passionate about data, a team player, and proactive, we want to hear from you.
Responsibilities:
Design, develop, and deploy data pipelines and data models across the various Data Lake / DWH layers.
Ingest data from and export data to multiple third-party systems and platforms (e.g., Salesforce, Braze, SurveyMonkey).
Architect and implement data-related microservices and products.
Ensure the implementation of best practices in data management, including data lineage, observability, and data contracts.
Maintain, support, and refactor legacy models and layers within the DWH.
Requirements:
Minimum of 3 years of experience in software development, data engineering, or business intelligence.
Proficiency in Python – A must.
Advanced SQL skills – A must.
Strong background in data modeling, ETL development, and data warehousing – A must.
Experience with big data technologies and orchestration tools, particularly Airflow – A must.
Familiarity with tools such as Spark, Hive, Airbyte, Kafka, ClickHouse, Postgres, Great Expectations, DataHub, or Iceberg is advantageous.
General understanding of cloud environments such as AWS, GCP, or Azure – A must.
Experience with Terraform, Kubernetes (K8S), or ArgoCD is advantageous.
A bachelor's degree in Computer Science, Engineering, or a related field is advantageous but not mandatory.