I'm trying to help make a case to the administration at my university to "unlock" funds for a Big Ass Server purchase. Part of the challenge is explaining why the existing supercomputer facilities (thousands of low-memory nodes) are inadequate for this kind of work.
Jeremy Leipzig makes some excellent points. However, peer-reviewed publications are stronger evidence than blog posts. Do you know of any publications that support this idea?
One publication that indirectly supports the point is the GAGE assembly evaluation, which notes that for _de novo_ assembly much of the software "crashed, often after several days running on a 256-GB multi-core computer."