We are seeking an experienced Data Engineer to join our dynamic team of analytics experts. In this role, you will play a pivotal part in both data ops and data engineering functions, contributing to the enhancement and maintenance of our data integration and pipeline processes, as well as ensuring the robustness and scalability of our data platform.
The ideal candidate is someone who thrives in a fast-paced environment, adept at both data engineering practices and DevOps principles.
You should be passionate about optimizing data systems and possess a strong desire to continuously improve our company's data architecture to support evolving products and data initiatives.
In this role, you will:
Develop and maintain ETL/ELT/Streaming processes and SQL queries for efficient data movement.
Design and implement scalable, automated processes for large-scale data analyses.
Engage with stakeholders to understand data requirements and build datasets accordingly.
Collaborate with teams to enhance data models and promote data-driven decision-making.
Contribute to the long-term strategy and architecture of the data platform.
Maintain data lake pipelines and ensure adherence to schema standards.
Apply data security standards and seek ways to optimize data flow.
Serve as the point of contact for the DevOps team and handle MLOps operations.
Requirements:
4+ years of experience as a Data Engineer or in a similar role.
3+ years of Python development experience.
Bachelor's or Master's degree in Computer Science or a related field.
Proficiency in SQL, data modeling, and building ELT/ETL pipelines.
Experience with AWS cloud environment and big data technologies.
Proficiency in Kubernetes and containerization technologies.
Familiarity with Kafka, Airflow, and dbt is desirable.
Experience with at least one big data environment such as Snowflake, Vertica, or Redshift.
Excellent analytical skills and hands-on experience building data pipelines.
Familiarity with AWS SageMaker or other ML platforms is advantageous.
This position is open to all candidates.