We are looking for a Data Engineer.
Main responsibilities:
Set the direction of our data architecture and determine the right tools for the job. We collaborate on the requirements, and then you call the shots on what gets built.
Manage end-to-end execution of high-performance, large-scale data-driven projects, including design, implementation, and ongoing maintenance.
Monitor and optimize our team's cloud costs.
Design and construct monitoring tools to ensure the efficiency and reliability of data processes.
Requirements:
3+ years of experience in data engineering and big data – Must
Experience working with different databases (SQL, Snowflake, Impala, PostgreSQL) – Must
Experience with programming languages (Python, other OOP languages) – Must
Experience with data modeling, ETL development, and data warehousing – Must
Experience building both batch and streaming data pipelines using PySpark – Big Advantage
Experience with messaging systems (Kafka, RabbitMQ, etc.) – Big Advantage
Experience working with any of the major cloud providers (Azure, Google Cloud, AWS) – Big Advantage
Experience creating and maintaining microservices data processes – Big Advantage
Basic knowledge of DevOps concepts (Docker, Kubernetes, Terraform) – Advantage
Familiarity with design patterns – Advantage
This position is open to all candidates.