We are looking for a Backend Engineer to join a growing team!
Our Stack:
Scala, Node.js, Rust
Kafka Streams / Akka Streams
Spark
Kafka
Elasticsearch, Redis
Kubernetes
AWS
What You'll Do:
End-to-end development and ownership of our products and features, from design to scalable and predictable production behavior.
Solve diverse, complex problems at high scale
Collaborate with other engineers and product managers to improve our products
Review code, architecture, and data to identify and troubleshoot problems
Requirements:
5+ years of development experience with Scala or another JVM language.
Extensive hands-on experience with scalable and distributed systems architecture and design.
4+ years of hands-on experience with data streaming technologies such as Apache Kafka, Spark Streaming, Kafka Streams, or Apache Flink.
Proficiency in data modeling and designing systems to handle large-scale, distributed datasets efficiently.
Experience with containerization and orchestration tools, including Docker and Kubernetes.
Strong knowledge of distributed computing paradigms and principles, such as consistency, partitioning, and resilience.
B.Sc. in Computer Science or an equivalent field.
Advantage:
Hands-on experience with RocksDB optimizations and tuning
Production experience in a SaaS environment – metrics, logging, and troubleshooting production systems
API development experience with gRPC
This position is open to all candidates.