This is possibly more complex than it looks at first glance, and I don't have practical experience with such implementations, so this answer is purely theoretical.
The problem breaks down into two steps:
- parallelizing the sequential clustering algorithm, or designing a novel parallel algorithm;
- running the parallel clustering algorithm on a grid, for example via the snow package in R or MPI.
The first step is essential. In particular, k-means should be easier to parallelize than hierarchical clustering. k-means can be parallelized by simple data parallelism: in each iteration, the data points are partitioned across the nodes, each node assigns its points to the nearest center, and the centers are then recomputed from the partial results. Agglomerative hierarchical clustering, on the other hand, needs access to the full distance matrix in each step, and that matrix has to be shared and updated across all compute nodes.
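To illustrate the data-parallel part, here is a minimal sketch of the k-means assignment step, written with the base parallel package (snow's parLapply has the same interface); the function name and the tiny example data are illustrative, not from any existing package:

```r
# Sketch only: the data-parallel assignment step of k-means.
# Uses the base 'parallel' package; snow's parLapply has the same
# interface. All names and data here are illustrative.
library(parallel)

assign_chunk <- function(chunk, centers) {
  # For each row of the chunk, return the index of the nearest center.
  apply(chunk, 1, function(p) {
    which.min(colSums((t(centers) - p)^2))
  })
}

set.seed(1)
x <- matrix(rnorm(200), ncol = 2)                   # 100 points in 2-D
centers <- x[sample(nrow(x), 3), ]                  # 3 initial centers

cl <- makeCluster(2)                                # two worker processes
chunks <- split.data.frame(x, rep(1:2, each = 50))  # one chunk per worker
labels <- unlist(parLapply(cl, chunks, assign_chunk, centers = centers))
stopCluster(cl)
```

The center-update step would then combine the per-chunk partial results on the master, which is exactly what makes k-means grid-friendly.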
The second step might be implementable using the snow package and MPI; Sun Grid Engine should support MPI. The pvclust package performs hierarchical clustering and uses the snow package. As far as I understand, the clustering itself is not carried out in parallel; rather, sequential hierarchical clustering is run 1000 times in parallel for bootstrapping.
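That bootstrapping pattern can be sketched in a few lines. This uses the base parallel package and plain hclust as stand-ins for snow and pvclust's internals (20 replicates instead of 1000, and the resampling scheme is simplified for illustration):

```r
# Sketch: sequential hclust run many times in parallel on bootstrap
# resamples -- the pattern pvclust uses. The base 'parallel' package
# stands in for snow here; details are simplified for illustration.
library(parallel)

boot_hclust <- function(i, data) {
  rows <- sample(nrow(data), replace = TRUE)      # bootstrap resample
  hclust(dist(data[rows, ]), method = "average")  # ordinary sequential hclust
}

cl <- makeCluster(2)
trees <- parLapply(cl, 1:20, boot_hclust, data = USArrests)
stopCluster(cl)
```

Each worker runs an ordinary sequential clustering; only the replicates are spread across the nodes.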
With "revolution R" (see the link in Istvan's comment) you could benefit a bit from a multi-threaded math library, but that does not mean that the clustering functions are necessarily implemented as parallel algorithms.
Edit: In conclusion (AFAIK):
- There is no out-of-the-box solution yet that combines parallel clustering, R, and Sun Grid Engine.
- The effort for programming and testing may not justify the expected gain in speed or memory efficiency, except for extremely large datasets.
- There is no guarantee that a parallel implementation will be more efficient in memory or computation.
- I wouldn't invest too much time into this without knowing the real use-cases.
Edit: The Rgpu package provides implementations of some statistical algorithms using CUDA on the GPU (I know, not exactly what you were looking for, but it could provide a significant speedup if you have an Nvidia graphics card). It provides functions like gpuHclust. I will give this package a try on a Mac. This option could be limited by the available graphics memory; I guess the distance matrix has to fit into it.
High-Performance computing with R: http://cran.r-project.org/web/views/HighPerformanceComputing.html
Here are some links to papers I found on "parallel clustering":
About parallel hierarchical clustering:
A parallel k-means implementation : http://www.eecs.northwestern.edu/~wkliao/Kmeans/index.html
I'd not restrict myself to R and would go for MAHOUT.
Your technology stack could be:
SGE + Hadoop (map-reduce) + MAHOUT (parallel machine learning)
You would not have to implement any parallel algorithms yourself, but rather stitch the components together and configure them. This would also give you the flexibility of trying different algorithms on your data.
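As a rough sketch of how the pieces could fit together, an SGE job script might look something like this. All paths and names are illustrative assumptions, and the flag names follow the classic mahout kmeans driver; treat it as a starting point, not a tested recipe:

```shell
#!/bin/sh
# Hypothetical SGE job script -- paths and names are made up.
# Assumes Hadoop and Mahout are available on the execution nodes.
#$ -N mahout-kmeans
#$ -cwd

# Run Mahout's parallel k-means as a Hadoop map-reduce job:
# -i input vectors, -c initial centers, -o output directory,
# -k number of clusters, -x max iterations, -cl also assign points.
mahout kmeans \
  -i /user/me/vectors \
  -c /user/me/initial-clusters \
  -o /user/me/kmeans-output \
  -k 10 -x 20 -cl
```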