We are looking for an MLOps Engineer to join our team to design, build, test, document, and debug machine learning infrastructure, following industry and company standards.
Requirements:
Prior experience in designing, building, testing, and maintaining machine learning (ML) infrastructure to empower data scientists to rapidly iterate on model development
2+ years of relevant experience developing CI/CD pipelines (e.g., Jenkins, GitHub Actions) and integrating ML models into those pipelines
Familiarity with data, feature, and pipeline versioning of ML assets using tools such as CML, DVC, or similar
Proficient knowledge of Git, Docker, containers, and Kubernetes
Fluency in Infrastructure as Code tools (e.g., Terraform, Ansible, or Chef)
Fluency in common system maintenance and scripting languages, such as Python and Bash
Good knowledge of Linux system administration
End-to-end production experience with Azure ML (including Azure ML pipelines), AWS SageMaker, and GCP AI Platform
Familiarity with setting up hyperparameter tuning/optimization tools and using them to manage experiment versioning, model deployment, and monitoring (e.g., Optuna, Kubeflow, AWS SageMaker, Hydrosphere, Seldon, or similar)
This position is open to all candidates.