We are looking for a Senior Data Engineer.
What You'll Do:
Create new data solutions, maintain existing ones, and serve as the focal point for all technical aspects of our data activity. You will develop advanced data and analytics capabilities to supply our analysts and production systems with validated, reliable data. The ideal candidate is a hands-on professional with strong knowledge of data pipelines and the ability to translate business needs into robust data flows.
Create ELT/Streaming processes and SQL queries to bring data to/from the data warehouse and other data sources.
Own the data lake pipelines, including their maintenance, improvements, and schema.
Create new features from scratch, enhance existing features, and optimize existing functionality.
Collaborate with stakeholders across the company, such as data developers, analysts, and data scientists, to deliver team tasks. Work closely with all business units and engineering teams to develop a long-term data platform architecture strategy.
Implement new tools and development approaches.
Ensure adherence to coding best practices and the development of reusable code.
Continuously monitor the data platform and make recommendations to enhance the system architecture for both ETL/ELT and real-time pipelines.
Requirements:
4+ years of experience as a Data Engineer
4+ years of direct experience with SQL (e.g., Redshift/Postgres/MySQL, Snowflake), data modeling, data warehousing, and building ELT/ETL pipelines – MUST
2+ years of Python
3+ years of experience in scalable data architecture, fault-tolerant ETL, and monitoring of data quality in the cloud
Experience working with cloud environments (AWS preferred) and big data technologies (EMR, EC2, S3, Snowflake, Spark Streaming, dbt, Airflow)
Exceptional troubleshooting and problem-solving abilities, including debugging and root-causing defects in large-scale systems
Deep understanding of distributed data processing architecture and tools such as Kafka, Spark, and Airflow
Experience with design patterns and coding best practices, understanding of data modeling concepts, techniques, and best practices
Proficiency with modern source control systems, especially Git
Basic Linux/Unix system administration skills
Nice to have:
BS or MS degree in Computer Science or a related technical field
Experience with data warehouses
NoSQL and large-scale databases
Understanding of fintech business processes
DataOps on AWS
Microservices
Experience with dbt
This position is open to all candidates.