What you’ll do:
Present findings and recommendations to both technical and non-technical audiences.
Design, develop, and optimize scalable data pipelines and ETL processes.
Build and maintain robust data architectures and data warehouses.
Collaborate with data scientists, analysts, and other stakeholders to understand data needs and deliver high-quality data solutions.
Ensure data integrity, accuracy, and security across all data platforms.
Develop and implement data models and schemas to support analytics and reporting.
Monitor and troubleshoot data pipelines and infrastructure to ensure reliability and performance.
Automate repetitive tasks to improve efficiency and reduce manual intervention.
Stay current with industry best practices and emerging technologies in data engineering.
What you’ll bring:
Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field (Master's preferred).
5-8 years of proven experience as a Data Engineer or in a similar role.
Strong programming skills in Python, Java, or Scala.
Proficiency in SQL and experience with relational and non-relational databases.
Experience with big data technologies (e.g., Hadoop, Spark, Kafka).
Familiarity with cloud platforms (e.g., AWS, Google Cloud, Azure).
Knowledge of data warehousing solutions (e.g., Redshift, BigQuery, Snowflake).
Strong understanding of ETL/ELT processes and data integration techniques.
Excellent problem-solving skills and attention to detail.
Confident team player with outstanding verbal and written communication skills.
Nice to have:
Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).
Knowledge of data governance and data quality frameworks.
Familiarity with CI/CD pipelines and DevOps practices.
Experience with real-time data processing and streaming technologies.