Hi all,

I am wondering what **the basic idea** is behind modifying an original algorithm in order to develop a novel one.

For example, in genetic risk score calculation, the original equation is `GRS <- log(odds ratio) * (no. of risk alleles)`, summed over the risk variants.
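That weighted sum can be sketched in a few lines. This is only an illustration of the basic (unmodified) GRS formula; the odds ratios and allele counts below are made-up example values, not real GWAS results.

```python
import math

# Hypothetical per-variant odds ratios from a GWAS, and one individual's
# risk-allele counts (0, 1, or 2 copies per variant) -- illustrative only.
odds_ratios = [1.30, 1.15, 0.90, 1.45]
risk_allele_counts = [2, 1, 0, 2]

def genetic_risk_score(odds_ratios, allele_counts):
    """Basic GRS: sum of log(odds ratio) weighted by risk-allele count."""
    return sum(math.log(oratio) * n
               for oratio, n in zip(odds_ratios, allele_counts))

grs = genetic_risk_score(odds_ratios, risk_allele_counts)
print(round(grs, 4))  # → 1.4076
```

Methods like LDpred or lassosum keep this same weighted-sum skeleton but replace the raw `log(odds ratio)` weights with adjusted ones (e.g. shrunk via a Bayesian prior and LD information).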


But extended versions can be found in LDpred, metaGRS, and lassosum, for example, where various features were added to the original GRS equation in different ways, such as a Bayesian approach (LDpred).

*From my rough understanding (I don't know if it is correct), the modified algorithm can be developed from observations about the problem, by adding new variables to the original equation in certain ways; the new algorithm is then validated.*

What is the concept behind this process, or is there any detailed guide?

Thanks a lot.

Yean

I'd add that implementations of algorithms also make heuristic trade-offs for practical reasons, and some new algorithms are developed or derived from others because the originals are not practical under certain circumstances, or to account for cases not originally considered. Take, for example, the various sequence alignment algorithms: we first got Needleman-Wunsch for global alignment, then Smith-Waterman for local alignment, and then FASTA and BLAST because the others were too slow.

Thanks to both of you for your guidance :)

I just have some more questions ...

Whenever the originals are not practical under certain circumstances, or fail to account for cases not originally considered, the hardest part is to find an existing theory to tackle those problems and translate it into a mathematical formula, right?

If yes, I still wonder how they convert those existing theories into the quite complex math formulas that address the original algorithm's problems.

Are there any typical steps (or rationale) for this process, or is it based on trial and error?

Thanks