Responsibilities:
Develop automated infrastructure for data processing pipelines to support algorithm engineers, data analysts, and domain experts.
Research and implement new technologies and open-source solutions for our data users.
Identify and address technology bottlenecks to enhance our AI generation process.
Ensure data accessibility and efficiency for the organization.
Qualifications & Skills:
Bachelor's degree in Information Systems, Computer Science, or a related field.
Minimum 5 years of industry experience in a relevant role.
Proficiency in Python programming.
Familiarity with data science tools and libraries (e.g., Jupyter, Pandas).
Strong SQL skills.
Experience in designing, creating, and managing databases, preferably SQL-based.
Understanding of Data Warehouse (DWH) concepts and ETL processes.
Experience with cloud-based data processing.
Strong interpersonal skills and the ability to work collaboratively.
Experience in a tech lead or team lead role is a plus.