We are seeking a DevOps Engineer with strong experience in managing large-scale data systems and infrastructure. The ideal candidate is passionate about building efficient, scalable, and reliable systems for data-intensive applications.
Responsibilities:
Take an active role across all DevOps areas, including:
Developing and maintaining monitoring and alerting infrastructure to ensure system reliability and performance.
Overseeing the development and upkeep of tools and procedures for monitoring, deployment, and alerting across our multi-tenant SaaS product family.
Designing, deploying, and optimizing scalable data infrastructure, including data lakes and high-throughput distributed systems.
Ensuring reliability and performance in environments processing terabytes of data daily.
Managing, monitoring, and maintaining self-hosted open-source databases in a Linux environment.
Automating operational tasks and deployment pipelines using Bash and modern CI/CD tools.
Collaborating with Data Engineering and Infrastructure teams to streamline and harden the data lifecycle, from ingestion to analysis.
Requirements:
3+ years of experience as a DevOps Engineer in a production environment.
Hands-on experience with any public cloud provider (e.g., GCP, AWS, Azure).
Solid experience working with Kubernetes and Docker containers in production.
Practical experience with CI/CD automation tools such as GitHub Actions and ArgoCD.
Proficiency with Infrastructure as Code (IaC) tools such as Terraform and Helm.
Strong Linux system administration skills and Bash scripting proficiency.
Demonstrated ability to manage and optimize open-source databases and large-scale data pipelines.
Proven experience with systems processing ≥ 1TB of data per day.
Preferred/Bonus Skills:
Familiarity with Trino, Apache Iceberg, or Apache Spark.
Experience in multi-tenant SaaS environments.
Exposure to data governance and storage optimization practices.
This position is open to all candidates.