We are searching for an innovative and experienced Data Engineer to join our reference and alternative data team within our data group.
As a Data Engineer, you will:
Be a part of a cross functional team of data and backend engineers.
Be responsible for ingesting financial data and providing it over numerous APIs in close collaboration with algorithmic teams and other partners.
Lead the architecture, planning, design and development of mission-critical and diverse data pipelines over both public and on-prem cloud solutions.
Requirements:
At least 5 years of experience working as a Data Engineer.
At least 5 years of experience in Python development, with an emphasis on data analysis tools such as NumPy, pandas, Polars, and Jupyter notebooks.
Hands-on experience with AWS data processing tools and concepts.
Proven ability to design, develop, and optimize complex solutions.
Proven experience with the following technologies: Neo4j, MongoDB, Redis, Snowflake.
Experience with Docker, Linux, Kubernetes, and CI/CD tools and concepts.
Experience with data pipeline orchestration tools such as Airflow, Kubeflow, or similar.
BSc/MSc degree in Computer Science, Engineering, Mathematics, or Statistics.
Advantages:
Hands-on experience with the Databricks platform.
Experience working on large-scale, complex on-premises systems.
Hands-on experience with lower-level programming languages such as C++ or Rust.
Familiarity with capital markets and basic economics.
This position is open to all candidates.
















