Our workloads are fully Kubernetes-based, running on spot instances in a multi-cloud environment, with AWS as our primary cloud provider. Our architecture is stateless by design, leveraging S3 for elastic object storage and as a data lake. This enables advanced data pipelines, with Airflow for data transformation and ClickHouse for high-speed queries.
This simple yet innovative architecture allows for rapid development iteration with minimal disruption to day-to-day operations, enabling continuous optimization and high agility.
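To give a flavor of the kind of pipeline described above, here is a minimal, hypothetical sketch of an Airflow DAG (assuming Airflow 2.x) that transforms raw data from an S3 data lake and loads it into ClickHouse. The DAG name, bucket paths, table names, and helper functions are illustrative placeholders, not our production code.

```python
# Hypothetical sketch only: an hourly Airflow DAG that transforms raw S3 data
# and loads the result into ClickHouse. All names below are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def transform_raw_events(**context):
    # Placeholder: read raw objects from an S3 prefix (e.g. a data-lake bucket),
    # apply transformations, and write curated output back to S3.
    ...


def load_into_clickhouse(**context):
    # Placeholder: insert the curated data into a ClickHouse table
    # for high-speed analytical queries.
    ...


with DAG(
    dag_id="example_s3_to_clickhouse",
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",
    catchup=False,
) as dag:
    transform = PythonOperator(
        task_id="transform_raw_events",
        python_callable=transform_raw_events,
    )
    load = PythonOperator(
        task_id="load_into_clickhouse",
        python_callable=load_into_clickhouse,
    )

    # Transform first, then load.
    transform >> load
```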
What You'll Do:
Proactively optimize and enhance delivery and live production environments
Partner with R&D teams by standardizing processes and providing self-serve tools for safe and fast execution
Establish and maintain efficient observability and visibility standards to support better decision-making across all teams
Enforce security and compliance best practices in day-to-day operations
Challenge and improve our technology stack by proactively introducing and developing new tools and approaches
What You'll Bring:
6+ years of experience as a DevOps Engineer
Proven track record of metrics-driven decision-making and production optimization
Hands-on experience managing and maintaining cloud-native Kubernetes workloads using ArgoCD or other GitOps approaches
Expertise in developing and maintaining Infrastructure as Code with Terraform, Crossplane, SDKs, or similar tools
Strong understanding of security, networking principles, certificate management, and load balancing
Familiarity with data flows, messaging systems, and data handling methodologies
Solid coding and scripting skills in Python, Bash, Node.js, or equivalent
Comfortable in a fast-paced, startup environment
Excellent communication and collaboration skills
An academic degree in Computer Science, Engineering, or a related field is a plus