Responsibilities:
Own and continuously improve the core platform
Build and maintain systems for collecting and processing metrics from client environments
Focus on system efficiency and resilient infrastructure
Integrate well-known SaaS platforms (e.g., major cloud providers, Datadog, Snowflake) into our big data repository
Optimize Spark jobs and Airflow workflows
Contribute to data design and platform architecture while working closely with other business units and engineering teams
Tackle the challenges of testing and monitoring large-scale data pipelines

Requirements:
7+ years of experience developing and operating large-scale, high-availability systems
7+ years of experience with Python (experience with Go is a plus)
Experience working with cloud environments (AWS preferred) and big data technologies (Spark, Airflow, S3, Snowflake, EMR)
Familiarity with metrics systems (e.g., Prometheus, cloud monitoring APIs) or time-series data – a strong plus
Self-motivated, autodidactic team player with strong communication skills and a passion for solving challenges at scale