We are looking for a Student Data Process Engineer to join our growing team of data experts.
The hire will be responsible for designing and implementing business workflows within the system and supporting our data team with various data-related tasks and configurations.
The ideal candidate has experience in data transformations and is passionate about working with customers, understanding their needs and providing custom solutions.
They must be self-directed and comfortable supporting multiple production implementations for various use cases.
Key Responsibilities:
Design and implement solution-based business workflows for specific use cases, according to customers' design and regulatory requirements
Implement and maintain data pipelines in production within the system, with a focus on data validation, pipeline optimization and configuration
Monitor business processes, evaluate their efficiency, and identify potential risks and issues
Develop and deliver any training required on newly created or updated processes, both to clients and internal staff
Work closely with other departments (Product, R&D, Data, QA) as the main point of reference for business process implementation
Build rule-based data pipelines according to business specifications
Create analytics tools for the data team members to assist them in building and optimizing our product into an innovative industry leader
Requirements:
Third-year student in a quantitative academic discipline such as Computer Science, Information Systems, or Engineering
Hands-on experience with SQL
Hands-on experience with Python, specifically data analysis libraries such as Pandas and NumPy
Proven experience with data transformation and validation
Detail-oriented, with a strong capacity for independent learning
Business-oriented and able to work with external customers and cross-functional teams
Excellent communication skills and excellent English, both spoken and written
Nice to have:
Knowledge of Spark and its language APIs: PySpark, Scala, Java, or R
Experience working with and optimizing big data pipelines, architectures, and datasets
Experience with XML processing
Basic Linux experience
Experience with Zeppelin/Jupyter
Experience with workflow automation platforms such as Jenkins or Apache Airflow
This position is open to all candidates.