The Role
Join the team acting as the backbone of mavens (Zynga & T2). You will build and scale the mission-critical data infrastructure that powers our products, moving data at high volume and low latency. This is a hands-on, fast-paced role focused on ingestion, processing, and backend engineering.
What You'll Do
Build & Scale: Design robust, observable external data integration pipelines and backend services.
Manage Data: Maintain data lake/warehouse layers using modern table formats and platforms (Apache Iceberg, BigQuery).
Model: Design schemas for flexibility, performance, and cost-efficiency.
Collaborate: Work with engineers across global teams to integrate external APIs and support product needs.
Requirements:
Experience: 3+ years in Data or Backend Engineering working on distributed systems.
Core Stack: Strong Python coding and proficient SQL (optimization/modeling).
Architecture: Deep knowledge of batch processing and event-driven architectures (e.g., Kafka, Pub/Sub).
Tools: Experience with cloud platforms (GCP/AWS), orchestration (Airflow), and relational databases/warehouses (PostgreSQL, BigQuery, Redshift).
Mindset: You are pragmatic, curious, excited by real-time challenges, and a clear communicator.
This position is open to all candidates.