keshavmot2 wrote:

I am trying to cluster amino acid sequences of a fixed length (13) into K clusters based on the Atchley factors (five numbers that represent each amino acid).

For example, I have an input vector of strings like the following:

```
# Atchley factor matrix: one row per amino acid, five factor columns
key <- HDMD::AAMetric.Atchley
# Generate 10,000 random length-13 sequences over the amino-acid alphabet
sequences <- sapply(1:10000, function(x)
  paste(sample(rownames(key), 13, replace = TRUE), collapse = ""))
```

However, my actual set of sequences numbers over 10^5, so computational efficiency matters.

I then convert these sequences into numeric vectors by the following:

```
# Split every sequence into single characters and look up each amino
# acid's five Atchley factors: m1 is a (13 * N) x 5 matrix
m1 <- key[strsplit(paste(sequences, collapse = ""), "")[[1]], ]
p <- 13
# Interleave the rows so each sequence becomes one row of 13 * 5 = 65 values
output <- do.call(cbind, lapply(1:p, function(i)
  m1[seq(i, nrow(m1), by = p), ]))
```

I want to cluster this output (now a matrix of 65-dimensional vectors, one row per sequence) in an efficient way.
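For what it's worth, the conversion step can be collapsed into a single indexing-plus-reshape pass, avoiding the per-position `lapply`. This is only a sketch: a small mock factor matrix stands in for `HDMD::AAMetric.Atchley` so it runs with base R alone.

```r
# Mock stand-in for HDMD::AAMetric.Atchley: 20 amino acids x 5 factors
set.seed(1)
aa  <- c("A","R","N","D","C","Q","E","G","H","I",
         "L","K","M","F","P","S","T","W","Y","V")
key <- matrix(rnorm(20 * 5), nrow = 20,
              dimnames = list(aa, paste0("F", 1:5)))

p <- 13
sequences <- replicate(1000,
  paste(sample(aa, p, replace = TRUE), collapse = ""))

# One strsplit over the whole vector, one indexing operation into `key`,
# one reshape: X is N x (p * 5) = N x 65, one row per sequence
chars <- strsplit(sequences, "")
flat  <- key[unlist(chars), ]                      # (N * p) x 5
X     <- matrix(t(flat), nrow = length(sequences), # row k = sequence k,
                byrow = TRUE)                      # cols = pos1 F1..F5, pos2 F1..F5, ...
dim(X)  # 1000 65
```

The row layout matches the `do.call(cbind, ...)` construction above, but the lookup and reshape are each done once instead of once per position.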

I was originally using mini-batch k-means, but the results were very inconsistent across repeated runs. I need a consistent clustering approach.
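Two ways to make the assignments repeatable, sketched on mock data (`X` here stands in for the encoded 65-column matrix): fix the RNG seed and use full-batch `kmeans` with many restarts, or switch to hierarchical clustering, which has no random initialisation at all.

```r
set.seed(42)
X <- matrix(rnorm(500 * 65), nrow = 500)  # mock stand-in for the encoded matrix
K <- 5

# (1) Full-batch k-means: a fixed seed makes it reproducible, and
#     nstart = 25 restarts reduce sensitivity to initialisation.
set.seed(42)
km1 <- kmeans(X, centers = K, nstart = 25, iter.max = 100)
set.seed(42)
km2 <- kmeans(X, centers = K, nstart = 25, iter.max = 100)
identical(km1$cluster, km2$cluster)  # TRUE

# (2) Hierarchical clustering is fully deterministic, but dist() needs
#     O(N^2) memory, so for 10^5 sequences run it on a subsample or on
#     the k-means centres rather than on every sequence.
hc <- hclust(dist(X), method = "ward.D2")
cl <- cutree(hc, k = K)
```

A common hybrid at this scale is k-means into a few hundred centres followed by deterministic hierarchical clustering of those centres.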

I am also concerned about the curse of dimensionality: at 65 dimensions, Euclidean distances become less and less discriminative.

Many high-dimensional clustering algorithms I saw assume that outliers and noise exist in the data, but since these are biological sequences converted to numeric values, there are no outliers or noise.

In addition, feature selection will not work, as every amino acid, and each of its five properties, is relevant in the biological context.

How would you recommend clustering these vectors?

written 2.9 years ago by keshavmot2


Can you not prioritise certain dimensions and then just focus on those in pairwise plots?

Alternatively, you could 'summarise' the dimensions via an eigen decomposition (e.g. PCA) and, through that, merge the 65 dimensions into a smaller data matrix (I have done this in the past). You may also take inspiration from t-SNE and other algorithms for processing high-dimensional mass cytometry data; see *Algorithmic Tools for Mining High-Dimensional Cytometry Data*.
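A minimal sketch of that eigen-decomposition idea, again on mock data (in practice `X` would be the N x 65 Atchley encoding): run PCA, keep the leading components, then cluster in the reduced space.

```r
set.seed(7)
X <- matrix(rnorm(1000 * 65), nrow = 1000)  # stand-in for the encoded matrix

# Principal components = eigenvectors of the (scaled) covariance matrix of X
pca <- prcomp(X, center = TRUE, scale. = TRUE)

# Keep the smallest number of components explaining >= 90% of the variance
expl <- cumsum(pca$sdev^2) / sum(pca$sdev^2)
d    <- which(expl >= 0.9)[1]
Xr   <- pca$x[, 1:d, drop = FALSE]

# Cluster in the reduced space (fixed seed for repeatability)
set.seed(7)
km <- kmeans(Xr, centers = 10, nstart = 25)
```

Note this reduces dimensions without selecting individual features, so every amino-acid property still contributes to every component.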
