Responsibilities
Join a team of highly motivated developers working together to meet growing software and scaling needs
Write well-tested, organized, and robust code to support a rapidly growing organization
Work in a modern software environment with continuous integration and deployment on our cloud-based infrastructure
Communicate and work openly with other developers to solve problems and produce high-quality code
Create, maintain, and own the core company data pipeline, scaling up the data processing flow to meet rapid data growth
Continuously evolve the data model and schema based on business and engineering needs
Implement systems tracking data quality and consistency
Develop tools supporting self-service data pipeline management (ETL)
Work with stakeholders including the Product, Data Analytics, and R&D teams to assist with data-related technical issues and support their data infrastructure needs
Create data tools that help analytics and data science team members build and optimize our product into an innovative industry leader
Requirements
4+ years of hands-on experience with Python as a backend software engineer
Strong software development capabilities
At least 2 years of experience with Docker and Kubernetes (k8s)
Experience developing solutions in an AWS environment and working with services such as EKS, EC2, EMR, Kinesis, MSK, RDS, and S3
BSc in Computer Science, Mathematics, or a related field
Experience with relational SQL databases
Experience with Airflow - an advantage
Infrastructure development experience - a significant advantage
Experience with non-relational (NoSQL) databases - an advantage
2+ years of experience with big data technologies such as Spark, Kafka, and Snowflake - an advantage
Experience building large-scale data systems
Autodidact and a problem solver
Excellent communication skills in Hebrew and English