We are looking for an experienced Technical Product Manager with a proven track record in data architecture to contribute to our growing team.
What you will be doing:
– Analyze customer needs and translate them into requirements
– Plan and analyze data pipelines, algorithms, and automated systems to support existing and new business processes
– Write characterization documents, from source-to-target mapping through the visual characterization of how the data is queried
– Provide troubleshooting and respond to day-to-day requests
– Work with internal teams, including developers, engineers, architects, QA, and stakeholders
– Work with Jira, accompanying the development and testing process
Requirements:
– B.Sc. in Computer Science / Computer Engineering / Information Systems
– 2-3 years of experience in systems analysis or programming – must
– Ability to investigate processes and perform reverse engineering
– Effective communication and interpersonal skills
– Ability to be proactive and take full ownership
– Experience writing characterization documents (HLD, LLD) for DWH / Data Lakehouse and data modeling
– Experience in writing complex SQL queries
– Experience with Big Data architectures
– Advantage – Familiarity with different file types – JSON, Parquet, CSV
– Advantage – Familiarity with different data sources – RDS, NoSQL (MongoDB or equivalent), Kafka, APIs
– Advantage – Experience working with Databricks
This position is open to all candidates.