We are looking for a Data Platform Team Lead.
As the Manager of the Data Platform team, you will take a central role orchestrating, overseeing, and leading the many aspects of our challenging journey toward a new, modernized big data Lakehouse platform (PBs of data), built using Databricks on Google Cloud (GCP). Among the challenges: streaming ingestion at massive scale, providing a platform for processing structured and unstructured data, security and compliance at enterprise scale, data governance, optimizing performance, storage, and cost, and many more.
You will lead, mentor, guide, recruit, and manage a team of experienced Data Engineers, and be responsible for the enablement of our Big Data platform serving developers, data engineers, data analysts, product managers, data scientists, and ML engineers.
You will work closely with our Data PM on leading our Data Strategy. As such, you will learn how data serves our goals; find ways to improve our TBs of daily data processes while maintaining high data quality; guide other R&D teams and provide best practices; and conduct POCs with the latest data tools. In doing so, you will help our clients make smarter decisions that continuously improve their ad-impression quality.
Find your way to influence and impact a team that utilizes a wide array of languages and technologies, among them: GCP, Databricks, Spark, Python, Scala, SQL, BigQuery, Vertica, Kafka, Docker, Kubernetes, Terraform, Prometheus, GitLab, and more.
Requirements:
4+ years of both people and technical management experience, leading a platform/infra backend/data engineering team in high-scale companies
A versatile, go-to tech geek, passionate about learning and sharing the latest Big Data technologies and using them to deliver state-of-the-art, cost-effective solutions
A team player with great interpersonal and communication skills
A leader by example
Actively seeks ways to improve development velocity and processes, remove bottlenecks, and help those around you grow
4+ years of experience with one of the following languages: Python, Scala or Java
Able to make hard decisions with a can-do attitude
Hands-on, in-depth experience with at least one streaming/batching technology (e.g., Kafka, Kinesis, Pulsar) and at least one stream-processing technology (e.g., Kafka Streams, Spark, Flink)
Familiarity with SQL and NoSQL databases and the main data architecture patterns
Experience working with a public cloud provider such as GCP, AWS, or Azure
This position is open to all candidates.