We’re looking for a highly talented, experienced, passionate, and data-driven DataOps/Analytics Engineer.
This role bridges the gap between data engineering and analytics teams, focusing on the orchestration, maintenance, and optimization of our analytics infrastructure.
Orchestration Pipelines: Design and manage data orchestration pipelines using Dagster.
Data Modeling: Develop and enforce guidelines and standards for data modeling in dbt, ensuring comprehensive documentation and adherence to best practices.
Data Warehousing: Maintain and optimize our BigQuery data warehouse, focusing on performance monitoring, cost management, and data quality.
Data Catalog Management: Enhance and maintain our DataHub data catalog.
CI/CD Management: Implement CI/CD processes, orchestration pipelines, and automated data quality monitoring to streamline workflows and reduce manual interventions.
Infrastructure Management: Utilize Kubernetes for hosting, and manage associated tools such as Elasticsearch and Grafana.
Data Quality: Implement and monitor data quality processes, including alerting and reporting mechanisms.
Development Workflow Optimization: Improve and maintain development workflows to ensure efficient and seamless data product delivery.
Collaborative Design: Work alongside business users and data analysts to design and build data models that enhance data discovery and analyses.
Standards and Best Practices: Contribute to internal standards for style, maintainability, and best practices for a high-scale data infrastructure.
Documentation and Scalability: Maintain documentation, manage technical debt, and ensure scalability of infrastructure.
Requirements:
3-5 years of experience as a DataOps/Analytics Engineer – Must
Proficiency in Python, Bash, SQL, and Jinja.
Deep understanding of data workflows, database architecture, and data modeling (dbt).
Deep understanding of data warehousing, including performance monitoring, cost management, data partitioning, and query optimization (BigQuery).
Familiarity with data cataloging (DataHub).
Hands-on experience with CI/CD processes.
Basic knowledge of Kubernetes, Elasticsearch, and Grafana.
This position is open to all candidates.