We are looking for a Senior Data Engineer to join our community!
As a Senior Data Engineer, you will play a leading role in developing our data platform, creating and extending our data infrastructure to enable research, development, and BI across the company. You will work with multiple stakeholders (architects, analysts, data scientists, and more) to help enrich our data with additional insights.
Responsibilities:
Implement robust cloud-based data infrastructure and pipelines as part of our Data Lakehouse infrastructure.
Collaborate with analysts, data scientists and other stakeholders to develop new products and features.
Ensure data integrity by extending our monitoring infrastructure.
Design and develop tools to enable data-driven decision making.
Contribute to a culture of learning and knowledge-sharing within the team.
Requirements:
5+ years of hands-on experience in building scalable data infrastructures, in particular data lake or data warehouse architectures.
Extensive experience with cloud-based data platforms such as AWS, Azure, or GCP, and tools such as Spark, Kafka, Athena, Airflow, dbt, and AWS Glue.
Strong programming skills in Python, and significant SQL experience.
Independent learner with a “can do” attitude and a strong sense of ownership.
Experience with BI and analytics software such as Tableau or Metabase – an advantage.
Experience with containerization technologies – an advantage.
This position is open to all candidates.