Our ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. You must be self-directed, comfortable supporting the data needs of multiple teams, systems, and products, and comfortable working in a fast-paced, often pivoting environment.
Responsibilities
* Build and maintain our data repositories with timely and quality data
* Build and maintain data pipelines from internal databases and SaaS applications
* Create and maintain architecture and systems documentation
* Write maintainable, performant code
* Apply the DevOps, DataOps, and FinOps philosophies in everything you do
* Collaborate with Data Analysts and Data Scientists to drive efficiencies for their work
* Collaborate with other functions to ensure data needs are addressed
* Constantly search for automation opportunities
* Constantly improve product quality, security, and performance
* Continually keep up with advancements in data engineering practices
Requirements
* At least 3 years of professional experience building and maintaining production data systems in cloud environments such as GCP
* Professional experience with JavaScript and/or another modern programming language
* Demonstrated, deep understanding of SQL and analytical data warehouses
* Experience with NoSQL databases, e.g., Elasticsearch, MongoDB, Firestore, Bigtable
* Hands-on experience with data pipeline tools (e.g., Dataflow, Airflow, dbt)
* Strong data modeling skills
* Experience with MLOps is an advantage
* Familiarity with agile software development methodologies
* Ability to work 3 days a week in-office (Jerusalem or Bnei Brak)