We're looking for a Big Data Engineer to develop and integrate systems that retrieve, process, and analyze data from around the digital world to generate customer-facing data. This role reports to our Team Manager, R&D.
So, what will you be doing all day?
Your role as part of the R&D team means your daily responsibilities may include:
Design and implement complex, high-scale systems using a wide variety of technologies.
You will work in a data research team alongside other data engineers, data scientists and data analysts. Together you will tackle complex data challenges and bring new solutions and algorithms to production.
Contribute to and improve the existing infrastructure of code and data pipelines, constantly exploring new technologies and eliminating bottlenecks.
You will experiment with various technologies in the domain of Machine Learning and big data processing.
You will work on a monitoring infrastructure for our data pipelines to ensure smooth and reliable data ingestion and calculation.
Requirements:
Is passionate about data.
Holds a BSc in Computer Science, Engineering, or a related technical field of study.
Has at least 4 years of software or data engineering experience in one or more of the following programming languages: Python, Java, or Scala.
Has strong programming skills and knowledge of data structures, design patterns, and object-oriented programming.
Has a good understanding of and experience with CI/CD practices and Git.
Has excellent communication skills and can maintain an ongoing dialogue between and within data teams.
Can easily prioritize tasks and work independently and with others.
Conveys a strong sense of ownership over the products of the team.
Is comfortable working in a fast-paced dynamic environment.
Advantage:
Has experience with containerization technologies like Docker and Kubernetes.
Has experience in designing and productizing complex big data pipelines.
Is familiar with a cloud provider (AWS / Azure / GCP).
Has experience with big data technologies and common frameworks such as Spark, Airflow, Kafka, Parquet, Databricks, EMR, and Kubernetes.
This position is open to all candidates.