We're looking for a Big Data Engineer who loves turning massive, messy datasets into scalable, reliable systems. You'll design and maintain data pipelines, ensure data quality, and collaborate with data scientists and product managers to bring new insights to life.
What you'll do:
Design, build, and optimize scalable data pipelines and ETL processes.
Own data products from design to production.
Ensure data accuracy, integrity, and performance across large datasets.
Collaborate with data scientists and engineers to deliver production-grade experiments.
What you have:
B.Sc. in Computer Science or equivalent.
3+ years in Big Data engineering.
Strong programming skills (Scala, Python, Java).
Hands-on experience with Spark or similar distributed frameworks.
Skilled in SQL and data modeling.
Experience with cloud platforms (AWS / GCP).
A pragmatic, ownership-driven mindset.
Bonus points:
Experience with data warehousing (BigQuery, Redshift, Snowflake).
Exposure to A/B testing or marketing measurement.
Referral from one of our employees.
This position is open to all candidates.