You will play a key role in developing reliable data pipelines, ensuring data quality and consistency, and enabling data-driven decision-making across the organization.
Responsibilities:
Design, build, and maintain data pipelines and data models to support analytics and reporting needs.
Work with structured data across databases, data lakes, and data warehouses.
Write efficient and maintainable SQL and Python code for data transformation and validation.
Ensure data quality, consistency, and basic data lineage across systems.
Collaborate with Product, Engineering, and business stakeholders to understand data requirements.
Support dashboards and reporting tools by providing clean, well-modeled data.
Contribute to improving data standards, documentation, and best practices.
Requirements:
1-3 years of experience in data engineering, analytics engineering, or a related role.
Bachelor's degree in a quantitative, technical, or related field, or equivalent practical experience.
Strong SQL skills, with hands-on experience querying and transforming data.
Experience using Python (or similar) for data processing and automation.
Familiarity with modern data platforms, including data warehouses, databases, and data lakes (e.g., Snowflake, Postgres, BigQuery).
Experience with data transformation and modeling tools such as dbt or SQLMesh.
Familiarity with BI and visualization tools (Tableau, Superset, or similar).
Detail-oriented and collaborative, with strong communication skills.
Nice to have:
Experience with workflow orchestration tools such as Airflow.
Proficiency with Git/GitHub for version control and collaboration.
Experience working in an Agile environment (Scrum/Kanban) with cross-functional teams.