We are seeking a skilled Data/Backend Engineer to design and implement complex, high-scale systems that retrieve, process, and analyze data from the digital world. This role involves developing and maintaining backend infrastructure, building robust data pipelines, and ensuring the seamless operation of data-driven products and services.
Key Responsibilities:
– Design and build high-scale systems and services to support data infrastructure and production systems.
– Develop and maintain data processing pipelines using technologies such as PySpark, Hadoop, and Databricks (an illustrative sketch of this kind of work follows this list).
– Implement high-performance, Dockerized microservices and manage their deployment.
– Monitor and debug backend systems and data pipelines to identify and resolve bottlenecks and failures.
– Work collaboratively with data scientists, analysts, and other engineers to develop and maintain data-driven solutions.
– Ensure data is ingested correctly from various sources and is processed efficiently.
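To give candidates a flavor of the pipeline work described above, here is a minimal, illustrative PySpark sketch. It is not production code from our stack; all paths, bucket names, and column names are hypothetical.

# Illustrative only: a minimal PySpark batch job of the kind this role builds and maintains.
# All paths, bucket names, and column names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def main():
    spark = SparkSession.builder.appName("daily-events-rollup").getOrCreate()

    # Ingest raw events from a hypothetical landing zone.
    events = spark.read.json("s3://example-landing-zone/events/")

    # Basic cleaning: drop malformed rows and deduplicate by event id.
    cleaned = (
        events
        .filter(F.col("event_id").isNotNull())
        .dropDuplicates(["event_id"])
    )

    # Aggregate event counts per source and day for downstream analytics.
    rollup = (
        cleaned
        .groupBy("source", F.to_date("event_ts").alias("event_date"))
        .agg(F.count("*").alias("event_count"))
    )

    # Write the result to a hypothetical curated zone, partitioned by date.
    rollup.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3://example-curated-zone/daily_event_rollup/"
    )

    spark.stop()

if __name__ == "__main__":
    main()

In practice, jobs like this typically run on Databricks or under an orchestrator such as Airflow, with monitoring in place to catch ingestion and processing failures.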
Requirements:
– BSc degree in Computer Science or equivalent practical experience.
– 4+ years of server-side software development experience in languages such as Python, Java, Scala, or Go.
– Experience with Big Data technologies like Hadoop, Spark, Databricks, and Airflow.
– Familiarity with cloud environments such as AWS or GCP and containerization technologies like Docker and Kubernetes.
– Strong problem-solving skills and ability to learn new technologies quickly.
– Excellent communication skills and ability to work in a team-oriented environment.
Nice to Have:
– Experience with web scraping technologies.
– Familiarity with microservices architecture and API development.
– Knowledge of databases like Redis, PostgreSQL, and Firebolt.
This position is open to all candidates.