Hi all,
I'm trying to help make a case to the admin of my uni to "unlock" funds for a Big Ass Server purchase. Part of the challenge is explaining that existing supercomputer facilities (thousands of low-mem nodes) are inadequate.
Jeremy Leipzig makes some excellent points. However, peer-reviewed publications are stronger evidence than blog posts. Do you know of any publications that support this idea?
A first publication that indirectly supports the point is the CAGE paper, which states that for _de novo_ assembly most software "crashed, often after several days running on a 256-GB multi-core computer."
I would be happy to contribute to any manuscript that has the phrase "big ass" in it.
Hehe, here in the UK I've now repeatedly heard the variant "fuck-off big machines". For proper searchability, both descriptions would have to be used.