As a DevOps Engineer on our data & engineering team, you will design and manage scalable, cloud-based infrastructure focused on big data and machine learning. You will keep our high-scale production environments reliable, secure, and efficient, supporting seamless ML/AI deployments.
What will you do?
Design, build, and maintain scalable and reliable infrastructure for our big data applications
Automate the deployment and scaling of our applications using Kubernetes and related technologies
Implement and maintain security and compliance protocols
Monitor and troubleshoot infrastructure and applications
Collaborate with product teams to define and implement infrastructure requirements
Stay up-to-date with the latest industry trends and technologies
Manage and optimize large GPU clusters
Requirements:
Bachelor's degree in Computer Science or a related field – an advantage
3+ years of experience in DevOps, with a focus on big data technologies such as BigQuery
Experience with ML/AI deployment platforms such as Vertex AI, SageMaker, or Azure AI, with expertise in GPU optimization and efficient utilization
Experience with monitoring and logging tools such as Prometheus, Elasticsearch, and Grafana
Strong experience with cloud platforms (preferably GCP)
Strong experience with container orchestration technologies such as Kubernetes (EKS, AKS, GKE, Helm)
Experience with CI/CD (preferably using GitHub Actions)
Experience managing a high-scale production environment
Preferred qualifications:
Experience in cloud financial management and cost optimization
Strong understanding of security and compliance protocols
Experience with infrastructure-as-code tools such as Terraform or Pulumi
Experience with Agile methodologies and Scrum
Passion for staying up-to-date with the latest industry trends and technologies
Desired personal traits:
You want to make an impact on humankind
You prioritize We over I
You enjoy getting things done and striving for excellence
You collaborate effectively with people of diverse backgrounds and cultures
You have a growth mindset
You are candid, authentic, and transparent
This position is open to all candidates.