We are at the forefront of fintech and AI innovation, backed by leading venture capital firms. Our mission is to build a fraud-free global commerce ecosystem by leveraging the newest technology, freeing online businesses to focus on their core ideas and growth. We are building the future, and we need you to help shape it.

Who We're Looking For – The Dream Maker

We're searching for an experienced, skilled Senior Data Engineer to join our growing data team. As part of the team, you'll be at the forefront of crafting a groundbreaking solution that leverages cutting-edge technology to combat fraud. The ideal candidate has a strong background in designing and implementing large-scale data solutions, with the potential to grow into a leadership role. The position requires a deep understanding of modern data architectures and cloud technologies, and the ability to drive technical initiatives that align with business objectives. Our ultimate goal is to equip our clients with resilient safeguards against chargebacks, empowering them to protect their revenue and optimize their profitability. Join us on this mission to redefine the battle against fraud.

Your Arena
* Design, develop, and maintain scalable, robust data pipelines and ETL processes (see the pipeline sketch after this list)
* Architect and implement complex data models across various storage solutions
* Collaborate with R&D teams, data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality solutions
* Ensure data quality, consistency, security, and compliance across all data systems
* Play a key role in defining and implementing data strategies that drive business value
* Contribute to the continuous improvement of our data architecture and processes
* Champion and implement data engineering best practices across the R&D organization, serving as a technical expert and go-to resource for data-related questions and challenges
* Participate in and sometimes lead code reviews to maintain high coding standards
* Troubleshoot and resolve complex data-related issues in production environments
* Evaluate and recommend new technologies and methodologies to improve our data infrastructure
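To make the pipeline work concrete, here is a minimal extract-transform-load sketch in Python with Pandas. The CSV source, column names, and output path are illustrative assumptions for this posting, not a description of our production systems.

```python
"""Minimal ETL sketch. The CSV source, columns, and output path are
illustrative assumptions, not a description of production systems."""
import pandas as pd


def extract(path: str) -> pd.DataFrame:
    # Extract: load raw transaction events from a (hypothetical) CSV export.
    return pd.read_csv(path, parse_dates=["event_time"])


def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Transform: drop malformed rows, keep positive amounts, derive a daily key.
    cleaned = df.dropna(subset=["transaction_id", "amount"])
    cleaned = cleaned[cleaned["amount"] > 0].copy()
    cleaned["event_date"] = cleaned["event_time"].dt.date
    return cleaned


def load(df: pd.DataFrame, destination: str) -> None:
    # Load: write cleaned data as Parquet, partitioned by day for pruning.
    df.to_parquet(destination, partition_cols=["event_date"], index=False)


if __name__ == "__main__":
    load(transform(extract("raw_transactions.csv")), "clean/transactions")
```

In practice each stage would read from and write to durable storage so steps can be retried independently.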
Requirements:
What It Takes – Must-Haves
* 5+ years of experience in data engineering, with strong proficiency in Python and software engineering principles – Must
* Extensive experience with cloud platforms (AWS, GCP, Azure) and cloud-native architectures – Must
* Deep knowledge of both relational (e.g., PostgreSQL) and NoSQL databases – Must
* Experience designing and implementing data warehouses and data lakes – Must
* Strong understanding of data modeling techniques – Must
* Expertise in data manipulation libraries (e.g., Pandas) and Big Data processing frameworks – Must
* Experience with data validation tools such as Pydantic and Great Expectations (see the validation sketch after this list) – Must
* Proficiency in writing and maintaining unit tests (e.g., Pytest; see the test sketch after this list) and integration tests – Must

Nice-to-Haves
* Apache Iceberg – Experience building, managing, and maintaining an Iceberg lakehouse architecture with S3 storage and the AWS Glue catalog – Strong Advantage
* Apache Spark – Proficiency in optimizing Spark jobs, understanding partitioning strategies, and leveraging core framework capabilities for large-scale data processing (see the Spark sketch after this list) – Strong Advantage
* Modern data stack tools – dbt, DuckDB, and a data orchestration tool such as Dagster, Apache Airflow, or Prefect (see the DAG sketch after this list) – Advantage
* Designing and developing backend systems, including RESTful API design and implementation, microservices architecture, and event-driven systems (RabbitMQ, Apache Kafka) – Advantage
* Containerization technologies (Docker, Kubernetes) and IaC (e.g., Terraform) – Advantage
* Stream processing technologies (e.g., Apache Kafka, Apache Flink) – Advantage
* Understanding of compliance requirements (e.g., GDPR, CCPA) – Advantage
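For the validation must-have, below is a minimal sketch of record-level validation in the Pydantic v2 style. The Transaction schema and its fields are hypothetical, chosen only to illustrate splitting a batch into valid rows and quarantined rejects.

```python
"""Record-level validation sketch (Pydantic v2 style). The Transaction
schema and its fields are hypothetical, for illustration only."""
from datetime import datetime
from decimal import Decimal

from pydantic import BaseModel, ValidationError, field_validator


class Transaction(BaseModel):
    transaction_id: str
    amount: Decimal
    currency: str
    event_time: datetime

    @field_validator("amount")
    @classmethod
    def amount_must_be_positive(cls, value: Decimal) -> Decimal:
        # Reject zero/negative amounts before they reach downstream tables.
        if value <= 0:
            raise ValueError("amount must be positive")
        return value


def validate_batch(records: list[dict]) -> tuple[list[Transaction], list[dict]]:
    # Split a raw batch into valid rows and rejects for quarantine and review.
    valid, rejected = [], []
    for record in records:
        try:
            valid.append(Transaction(**record))
        except ValidationError:
            rejected.append(record)
    return valid, rejected
```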
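And a minimal Pytest sketch against the transform function from the pipeline sketch above; the `pipeline` module name is an assumption for the example.

```python
"""Pytest sketch for the transform above; the `pipeline` module name is an
assumption. Pytest discovers any function named test_*."""
import pandas as pd

from pipeline import transform  # hypothetical module from the ETL sketch


def test_transform_drops_non_positive_amounts():
    raw = pd.DataFrame({
        "transaction_id": ["a", "b"],
        "amount": [10.0, -5.0],
        "event_time": pd.to_datetime(["2024-01-01", "2024-01-02"]),
    })
    assert list(transform(raw)["transaction_id"]) == ["a"]


def test_transform_adds_event_date():
    raw = pd.DataFrame({
        "transaction_id": ["a"],
        "amount": [10.0],
        "event_time": pd.to_datetime(["2024-01-01 12:30:00"]),
    })
    assert "event_date" in transform(raw).columns
```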
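For the Spark nice-to-have, a short sketch of partition-aware processing: repartitioning on the aggregation key before a groupBy, then writing date-partitioned output so downstream readers can prune. Paths and column names are illustrative.

```python
"""PySpark partitioning sketch; paths and column names are illustrative."""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("partitioning-sketch").getOrCreate()

# Read raw events (hypothetical location).
events = spark.read.parquet("/data/raw/events")

# Repartition on the aggregation key so the shuffle groups related rows
# together, reducing skew before the groupBy.
daily = (
    events
    .withColumn("event_date", F.to_date("event_time"))
    .repartition("event_date")
    .groupBy("event_date", "merchant_id")
    .agg(F.sum("amount").alias("total_amount"))
)

# Write date-partitioned output so downstream readers can prune partitions.
daily.write.mode("overwrite").partitionBy("event_date").parquet("/data/agg/daily")
```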
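Finally, for orchestration, a minimal Airflow (2.4+) DAG sketch wiring the pipeline steps on a daily schedule; the DAG id, schedule, and single-task layout are assumptions made for brevity.

```python
"""Airflow (2.4+) DAG sketch wiring the ETL steps; the DAG id, schedule,
and single-task layout are assumptions made for brevity."""
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

from pipeline import extract, load, transform  # hypothetical module


def run_pipeline():
    # One task for brevity; in practice each step would be its own task
    # with durable intermediate storage so steps can be retried independently.
    load(transform(extract("raw_transactions.csv")), "clean/transactions")


with DAG(
    dag_id="transactions_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="run_pipeline", python_callable=run_pipeline)
```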
This position is open to all candidates.