Join our R&D team to build Spark/PySpark ETLs and data-flow processes for cloud security. You will work across multi-cloud environments, collaborate closely with investigators to tune PySpark performance, and use technologies such as Databricks to scale our projects efficiently. You will also research and implement new data processing techniques as a key member of the team.
Technical Impact:
Design and implement complex data processing architectures for cloud security analysis.
Optimize and scale critical PySpark workflows across multi-cloud environments.
Develop innovative solutions for processing and analyzing massive security datasets.
Drive technical excellence through sophisticated ETL implementations.
Contribute to architectural decisions and technical direction.
Core Responsibilities:
Build robust, scalable data pipelines for security event processing.
Optimize performance of large-scale PySpark operations (see the illustrative sketch after this list).
Implement advanced data solutions using Databricks and cloud-native technologies.
Research and prototype new data processing methodologies.
Provide technical guidance and best practices for data engineering initiatives.
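To give candidates a concrete feel for this work, here is a minimal, illustrative PySpark sketch of such a pipeline. It is not our production code: the bucket names, schema fields, and the Delta-enabled runtime (e.g., Databricks) are all assumptions made for the example.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("security-events-etl").getOrCreate()

# Hypothetical input: JSON security events landed in S3.
events = spark.read.json("s3://example-bucket/security-events/2024/*/")

# A common tuning lever: broadcast a small dimension table to avoid a shuffle.
accounts = spark.read.parquet("s3://example-bucket/dim/accounts/")
enriched = events.join(F.broadcast(accounts), on="account_id", how="left")

# Aggregate per day, account, and severity, then write partitioned output.
daily = (
    enriched
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "account_id", "severity")
    .agg(F.count("*").alias("event_count"))
)

# Delta output assumes a Delta-enabled runtime such as Databricks.
(
    daily.write.format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save("s3://example-bucket/gold/daily_event_counts/")
)
```

The broadcast join and date-based partitioning shown here are representative of the tuning decisions this role makes routinely.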
Location: Tel Aviv, IL.
Hybrid work environment.
Preferred Qualifications:
Experience with security-focused data solutions.
Deep expertise with Splunk and AWS services (S3, SQS, SNS, Stream).
Advanced understanding of distributed systems.
Strong Linux systems knowledge.
Experience with real-time data processing architectures (see the streaming sketch below).
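For context on the real-time qualification, a minimal Structured Streaming sketch follows. It assumes newline-delimited JSON events arriving under an S3 prefix; the path, schema, and console sink are hypothetical placeholders, not our actual architecture.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("security-events-stream").getOrCreate()

# Streaming file sources require an explicit schema.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("account_id", StringType()),
    StructField("severity", StringType()),
    StructField("event_ts", TimestampType()),
])

# Hypothetical landing zone; new files are picked up incrementally.
stream = spark.readStream.schema(schema).json(
    "s3://example-bucket/landing/security-events/"
)

# Count events per severity in 5-minute windows, tolerating 10 minutes of late data.
counts = (
    stream
    .withWatermark("event_ts", "10 minutes")
    .groupBy(F.window("event_ts", "5 minutes"), "severity")
    .count()
)

query = (
    counts.writeStream
    .outputMode("update")
    .format("console")  # console sink for illustration only
    .option("checkpointLocation", "/tmp/checkpoints/security-events")
    .start()
)
query.awaitTermination()
```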
Who You Are:
4+ years of hands-on data engineering experience in cloud-based SaaS environments.
Deep expertise in PySpark, Python, and SQL optimization.
Advanced knowledge of AWS, Azure, and GCP cloud architectures.
Proven track record implementing production-scale data systems.
Extensive experience with distributed computing and big data processing.
Strong collaboration skills and technical communication abilities.