Job: [IMMEDIATE JOB OPENING] HPC Systems Administrator, Center for Research Informatics (CRI), The University of Chicago, Chicago, IL, USA
written 9 weeks ago by amyloid.beta.martin:

https://uchicago.wd5.myworkdayjobs.com/External/job/Hyde-Park-Campus/HPC-Systems-Administrator_JR03610

Department: 2010011 BSD - Center for Research Informatics

About the Unit: The Center for Research Informatics (CRI) is an organization within the Biological Sciences Division (BSD) that provides informatics resources and services to BSD faculty. Four main services comprise the CRI's operations: applications development, bioinformatics, IT infrastructure, and a clinical research data warehouse. Through these service lines, the CRI enables research of the highest scientific merit and advances the state of the art of clinical and translational informatics. The CRI recruits exceptional candidates looking to leverage state-of-the-art technologies to deliver innovative and exciting solutions to biomedical researchers.

Job Information

Job Summary:

HPC Systems Administrator: This position will work with the Lead HPC Systems Administrator to build and maintain the BSD High Performance Computing (HPC) environment, assist life-sciences researchers in using HPC resources, troubleshoot computational applications with stakeholders and research partners, handle customer requests, and respond to end-user suggestions for improvements and enhancements.

Responsibilities:

Provision, configure, and operate computational hardware and analysis software for use by research projects.
Participate in technical development to enable continuing innovation within the infrastructure.
Ensure that hardware, software, and related procedures adhere to organizational values and enable the research of faculty, staff, collaborators, and other end-users.

Competencies:

Ability to administer GNU/Linux servers.
Understanding of issues in research computing.
Ability to program and debug software in C/C++, Java, and/or Fortran.
Ability to build and install software from source.
Basic knowledge of computer and network security.
Desirable: Understanding of HPC scheduling software (e.g., PBS, Moab, or SLURM), parallel filesystems (e.g., GPFS, Lustre), HPC interconnects (e.g., InfiniBand, Omni-Path), and/or AI/deep learning/machine learning.
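
For readers who have not worked with the scheduling software named above, here is a minimal sketch of the kind of SLURM interaction the role implies: composing a batch script and handing it to sbatch, driven from Python here for illustration. The partition name, environment-module names, tools, and file paths are assumptions, not details of the CRI environment.

#!/usr/bin/env python3
"""Minimal sketch: generate and submit a SLURM batch script from Python.

The partition name, module names, tools, and file paths are illustrative
assumptions, not details of the CRI environment.
"""
import subprocess
from pathlib import Path

BATCH_SCRIPT = """\
#!/bin/bash
# Resource requests; the partition name is an assumption.
#SBATCH --job-name=example_align
#SBATCH --partition=standard
#SBATCH --cpus-per-task=8
#SBATCH --mem=32G
#SBATCH --time=12:00:00
#SBATCH --output=example_%j.log

# Environment-module and tool names below are assumptions.
module load bwa samtools

bwa mem -t "$SLURM_CPUS_PER_TASK" ref.fa reads_1.fq.gz reads_2.fq.gz \\
  | samtools sort -@ "$SLURM_CPUS_PER_TASK" -o aligned.sorted.bam -
"""

def submit(script_text: str) -> str:
    """Write the batch script to disk and submit it with sbatch."""
    path = Path("example_align.sbatch")
    path.write_text(script_text)
    result = subprocess.run(
        ["sbatch", str(path)], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()  # e.g. "Submitted batch job 123456"

if __name__ == "__main__":
    print(submit(BATCH_SCRIPT))

On a real cluster the same script could just as easily be submitted by hand with sbatch example_align.sbatch; the Python wrapper is only there to show the directives in context.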

Additional Requirements

Education, Experience or Certifications:

Education:

Bachelor's degree in computer science, engineering, biological sciences, or a related field required.

Experience:

Two or more years of work experience in a scientific computing environment required.

Required Documents:

Cover Letter
Resume
Link to a public, personal GitHub or similar repository, if available

NOTE: When applying, all required documents MUST be uploaded under the Resume/CV section of the application.

Benefit Eligibility: Yes

Pay Frequency: Monthly

Pay Range: Depends on Qualifications

Scheduled Weekly Hours: 37.5

Union: Non-Union

Job is Exempt? Yes

Drug Test Required? No

Does this position require the incumbent to operate a vehicle on the job? No

Health Screen Required? No

Posting Date: 2018-12-10 (UTC-08:00)

Remove from Posting On or Before: 2019-06-10 (UTC-07:00)

Posting Statement:

The University of Chicago is an Affirmative Action/Equal Opportunity/Disabled/Veterans Employer and does not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity, national or ethnic origin, age, status as an individual with a disability, protected veteran status, genetic information, or other protected classes under the law. For additional information please see the University's Notice of Nondiscrimination.

Job seekers in need of a reasonable accommodation to complete the application process should call 773-702-5800 or submit a request via the Applicant Inquiry Form.

The University of Chicago's Annual Security & Fire Safety Report (Report) provides information about University offices and programs that provide safety support, crime and fire statistics, emergency response and communications plans, and other policies and information. The Report can be accessed online at: http://securityreport.uchicago.edu. Paper copies of the Report are available, upon request, from the University of Chicago Police Department, 850 E. 61st Street, Chicago, IL 60637.

hpc genomics job bioinformatics
modified 9 weeks ago by amjadcsu • written 9 weeks ago by amyloid.beta.martin

Does this position provide sponsorship for candidates outside North America?

written 9 weeks ago by amjadcsu

For exceptional candidates

written 9 weeks ago by amyloid.beta.martin

I don't normally threadcrap on job posts but it seems kind of crazy to advertise an HPC job with no cloud computing component. Do they really think bioinformatics will stay on bare metal machines? Isn't this Robert Grossman's group?

modified 9 weeks ago • written 9 weeks ago by Jeremy Leipzig

Yea, I think so. Overall, it's substantially cheaper for compute and, especially, storage. Plus, there are considerable issues with data ownership that prohibit cloud. If I put a patient's data up on S3 and don't pay my bill, do I or the patient actually own that data?

modified 9 weeks ago • written 9 weeks ago by amyloid.beta.martin

Cloud is relevant to burst, not tera/petascale workflows. It's also relevant for virtualization of services, but you can do that internally as well with VM farms.

written 9 weeks ago by amyloid.beta.martin

That sounds like a propositional fallacy - people are already storing patient data in S3 in the present day. I don't think these are insurmountable issues.

written 9 weeks ago by Jeremy Leipzig

Sure, people do it. I think you definitely pay for it above and beyond what's really required to provide HPC services to a group of researchers.

written 9 weeks ago by mforde84

That's GDC / CDIS, and yea they work off AWS as far as I know. This is for CRI.

written 9 weeks ago by amyloid.beta.martin

It works for them. Not hating or anything. Suits their needs. :)

written 9 weeks ago by amyloid.beta.martin

The tension I am sensing between these groups is giving me a headache and I don't even work there.

written 9 weeks ago by Jeremy Leipzig

No tension really. I believe CRI is headed by Bob as well, if I'm not mistaken; I wasn't always too clear on how upper leadership actually works. They're just two different groups, services, and ways to orchestrate them. It would be nice to integrate more with GDC, as I think it's a valuable resource, and I know some of the guys over there have done some integration with SLURM, so maybe sometime in the future they may be able to offer them local compute / storage at a fraction of their operating cost for AWS. Who knows? I haven't really crunched the numbers, but it might be worthwhile to consider buy-in type arrangements. I mean, when you think about it, what is cloud beyond a buy-in to someone else's resources?

modified 9 weeks ago • written 9 weeks ago by mforde84

But really, and I'm not drinking the Kool-Aid here, UChicago is a great place to work: fantastic culture, a chill environment, and meaningful work. I would highly recommend it to anyone. I do think the management structure can be a little flat, but that's not always a bad thing. It's just a different type of environment than you'd expect at, say, pharma, financial trading firms, etc.

modified 9 weeks ago • written 9 weeks ago by mforde84

"on-demand rent of virtualized computing infrastructure". Even 5 years ago the cloud was not a particularly friendly or cheap place to do scientific computing, but things are so different now.

written 9 weeks ago by Jeremy Leipzig

I mean, we're talking about tens of millions of proc hours, frequently with large memory requirements, SQL services, research-grade GPUs, petabytes of storage, etc. I doubt AWS would even be competitive, and even if they were close, you'd end up spending enough renting the infrastructure that you could have bought the infrastructure in the first place.

modified 9 weeks ago • written 9 weeks ago by mforde84
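
For what it's worth, that rent-versus-buy intuition is easy to put rough numbers on. The sketch below is a back-of-envelope comparison only; the workload size, per-core-hour rate, storage price, hardware cost, and opex figures are all illustrative assumptions, loosely in the spirit of 2018 list prices, not figures from CRI or AWS, and the conclusion swings with every one of them.

"""Back-of-envelope rent-vs-buy comparison for a sustained HPC workload.

Every number below is an illustrative assumption, not a quote from any
provider or institution.
"""

# Assumed sustained workload per year ("tens of millions of proc hours").
CORE_HOURS_PER_YEAR = 20_000_000
STORAGE_PB = 2                          # petabytes kept online

# Assumed cloud list prices: on-demand compute and standard object storage.
CLOUD_PER_CORE_HOUR = 0.04              # USD per core-hour (assumed)
CLOUD_STORAGE_PER_PB_MONTH = 23_000     # USD per PB-month (assumed)

# Assumed on-prem costs, amortized over a five-year hardware lifetime.
CLUSTER_CAPEX = 1_500_000               # USD: nodes, storage, interconnect (assumed)
ANNUAL_OPEX = 400_000                   # USD/year: power, cooling, staff share (assumed)
LIFETIME_YEARS = 5

cloud_annual = (CORE_HOURS_PER_YEAR * CLOUD_PER_CORE_HOUR
                + STORAGE_PB * CLOUD_STORAGE_PER_PB_MONTH * 12)
onprem_annual = CLUSTER_CAPEX / LIFETIME_YEARS + ANNUAL_OPEX

print(f"cloud   ~ ${cloud_annual:,.0f} per year")
print(f"on-prem ~ ${onprem_annual:,.0f} per year")
print(f"ratio   ~ {cloud_annual / onprem_annual:.1f}x")

With these particular assumptions the rented option comes out roughly 2x more expensive per year; change the assumptions and the ratio moves accordingly, which is really the only point of the exercise.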

Once again I'm not sure we have to talk about this theoretically - most cores have at least some cloud component, including the University of Chicago. Some are 50/50 hybrids. Others use it by default. I was just surprised that the word "cloud" is not in a 2018 bfx sysadmin job ad, even as a wishlist term.

written 9 weeks ago by Jeremy Leipzig

Argonne did a report on this, for those interested: https://science.energy.gov/~/media/ascr/pdf/program-documents/docs/Magellan_Final_Report.pdf

It's a bit dated, admittedly. However, overall operating costs for cloud are still likely 10x+ more expensive than traditional HPC environments. Sure, CRI has some cloud components, as our infrastructure team supports a relatively large VM farm as well and is interested in possibly even OpenStack, but that's a separate working group. Scientific computing staff support the HPC environment. HPC labs don't really offload a significant percentage of their workloads to commercial cloud services unless they are running at super high capacity, where bursting to cloud then makes financial sense. Love or hate HPC, but that's what it is, and it's not being absorbed by the cloud any time soon, if ever.

modified 9 weeks ago • written 9 weeks ago by mforde84
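
The "bursting only makes sense at very high capacity" point can be framed the same way: how busy does an owned cluster have to be before its cost per delivered core-hour drops below the rented rate? A minimal sketch follows, again with entirely assumed cluster size, annual cost, and cloud pricing (independent of the earlier sketch's numbers and equally arbitrary).

"""Break-even utilization: owning a cluster vs. renting core-hours.

The cost per core-hour actually delivered by an owned cluster falls as
utilization rises; below the break-even utilization, renting on demand
is the cheaper way to get the same work done. All figures are assumed.
"""

CORES = 2_500                      # cores in the assumed cluster
ANNUAL_COST = 500_000              # USD/year: amortized capex + opex (assumed)
CLOUD_PER_CORE_HOUR = 0.04         # USD per rented core-hour (assumed)
HOURS_PER_YEAR = 8_760

def onprem_cost_per_core_hour(utilization: float) -> float:
    """Cost per core-hour delivered at a given average utilization (0-1)."""
    delivered = CORES * HOURS_PER_YEAR * utilization
    return ANNUAL_COST / delivered

# Utilization at which owning matches the rented price per core-hour.
break_even = ANNUAL_COST / (CORES * HOURS_PER_YEAR * CLOUD_PER_CORE_HOUR)
print(f"break-even utilization ~ {break_even:.0%}")

for u in (0.2, 0.5, 0.8):
    print(f"at {u:.0%} utilization, on-prem ~ ${onprem_cost_per_core_hour(u):.3f} per core-hour")

Under these made-up numbers, a cluster that stays busier than a bit under 60% of the time beats the rented rate, while a bursty, mostly idle workload is cheaper to rent, which is essentially the argument above.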