The position
What will your job look like?
Analyze HPC users' batch job requirements and usage, and manage the HPC Job Scheduler.
Install the Linux infrastructure and handle all relevant management tasks: configuration, allocation, and monitoring.
Help users configure and optimize their batch flows.
Provide second-level support for all batch-job-related issues.
All you need is:
B.A. in information systems, industrial engineering, or a related field
Minimum of 5 years of experience working with Linux and HPC environments
Strong Linux, shell, and Python coding skills
Experience with at least one High Performance Computing job scheduler (PBS, Grid Engine, Accelerator, LSF, SLURM, etc.)
Experience with storage and network systems
Experience collaborating with a DevOps team and its associated tools (Jenkins, Git, Artifactory, Docker, Kubernetes, etc.)
Experience working in GPU machine environments
Experience working with cloud platforms (AWS, Google Cloud)
Motivation and ability to learn new technologies
Outstanding problem-solving skills, a customer-oriented mindset, and a team-player attitude