Who are we? We are a global leader in the automotive cybersecurity industry. We protect drivers and manufacturers from cyber attacks on their vehicles, using top-notch technology in several products for both inside and outside the car.
The Data Team
We are the backbone of data operations, entrusted with managing every aspect of data flow within the organization – from data ingestion and processing to generating valuable insights. Currently a team of two highly skilled data engineers, we are eager to welcome a third member who shares our passion for transforming data into actionable intelligence.
Why us?
* You can be part of a leading company in the automotive industry
* You can help save lives
* You can work with cool, challenging technology
* You can make an impact & help change the world
Responsibilities:
* Lead development projects of critical, high-availability, cloud-scale services and APIs
* Support clients handling large amounts of data, with scalability in mind
* Take part in all development stages from design to deployment
* Develop and deploy real-time and batch data processing pipelines using the latest technologies
* Design and build high-availability, cloud-scale data pipelines (ETLs)
Requirements:
* 3+ years of experience in large-scale, distributed, server-side backend development
* Extensive experience in stream & batch Big Data pipeline processing using Apache Spark
* Experience with Linux, Docker, and Kubernetes
* Experience in working with cloud providers (e.g., AWS, GCP)
* Strong experience with event streaming platforms such as Apache Kafka (including managed offerings like Amazon MSK or Confluent Cloud) or alternatives such as Azure Event Hubs
* A team player, highly motivated and a fast learner
* Ability to assume ownership of goals and products
* Passion for designing scalable, distributed, and robust platforms and analytic tools
Advantages:
* 3+ years of experience developing in Scala
* Experience developing with Node.js (preferably TypeScript), Python, or Groovy
* Experience in stream & batch Big Data pipeline processing using Apache Flink
* Experience with Kafka, Airflow, Mongo, Elastic, HDFS or similar technologies
* Experience with system monitoring tools (Prometheus, InfluxDB, or similar)
* Experience in independently managing development projects from scratch to production
* Experience in microservices architecture and flexible system design
This position is open to all candidates.