Our company is one of the largest providers of cloud workload automation. We aim to improve performance, reduce complexity, and lower compute infrastructure costs. As a significant contributor in the field of cloud optimization, we confront an array of challenges and devise solutions to complex core problems, such as workload distribution, usage prediction, and the efficient allocation of spare resources, all while handling large amounts of data.
Your Role:
Utilize ML models and algorithms in production to improve product metrics and drive business outcomes.
Evaluate and select appropriate machine learning techniques and algorithms for specific tasks.
Design, develop, and maintain data pipelines and ETL processes to ingest, transform, and load data from various sources into our data systems.
Conduct experiments and perform statistical analysis to evaluate algorithm performance and identify areas for improvement.
Enable the machine learning infrastructure to scale smoothly to billions of real-time decisions.
Translate business requirements into databases, data warehouses, and data streams.
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
A minimum of 5 years of software development experience (Python/Java/Go) on a SaaS platform, including scalable data architecture, resilient ETL processes, and maintaining data integrity in cloud environments.
A minimum of 2 years of experience in machine learning, working with ML principles, techniques, and best practices (e.g., algorithm selection and cross-validation).
Solid understanding of machine learning algorithms, statistical modeling, and data analysis.
Proficiency with large-scale data processing engines (e.g., Spark, Kafka) and related languages such as Scala.
Proven experience working in public cloud environments (e.g., AWS, GCP, or Azure).
Advantages:
M.Sc. in Computer Science or a related field (a major advantage).
DevOps experience.
A strong portfolio or previous work demonstrating machine learning projects or applications.
This position is open to all candidates.