I just discovered the "denoising" approach in 16S amplicon analysis (DADA2, Deblur). If I understood correctly, these methods can infer the true read sequences by removing errors coming from the technical process (an error model is computed from the data).
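To make sure I understand the core idea, here is a toy sketch of abundance-based denoising (this is my own simplified illustration, not DADA2's actual algorithm, which also uses quality scores and a proper statistical test): a low-abundance sequence is absorbed into a more abundant "parent" when its count is plausible as sequencing errors of that parent under the error model. The `err_rate` and the 10x tolerance factor are arbitrary values I chose for the example.

```python
def hamming(a, b):
    # Number of mismatching positions between two equal-length sequences.
    return sum(x != y for x, y in zip(a, b))

def denoise(read_counts, err_rate=0.01):
    """Toy illustration of error-model denoising (not DADA2 itself).

    read_counts: dict mapping sequence -> observed read count.
    Sequences are processed from most to least abundant; each one is
    either explained as errors from an already-accepted parent, or
    accepted as a real variant.
    """
    accepted = {}
    for seq in sorted(read_counts, key=read_counts.get, reverse=True):
        count = read_counts[seq]
        parent = None
        for p, p_count in accepted.items():
            if len(p) != len(seq):
                continue
            d = hamming(seq, p)
            # Expected count of reads from parent p carrying these exact d
            # substitutions (per-base error rate split over 3 alternatives).
            expected = p_count * (err_rate / 3) ** d
            # If the observed count is plausible as noise, absorb it
            # (10x tolerance factor chosen arbitrarily for this toy).
            if d > 0 and count <= max(1, 10 * expected):
                parent = p
                break
        if parent is not None:
            accepted[parent] += count  # merge the error variant into its parent
        else:
            accepted[seq] = count      # accept as a real sequence variant
    return accepted

# A 2-count variant one mismatch away from a 1000-count parent is absorbed;
# a distant 500-count sequence is kept as a real variant.
result = denoise({"ACGTACGT": 1000, "ACGTACGA": 2, "TTTTTTTT": 500})
print(result)  # {'ACGTACGT': 1002, 'TTTTTTTT': 500}
```

The key point, as I understand it, is that the decision to merge depends on both the error model and the relative abundances, which is why a genuine low-frequency variant can still be hard to distinguish from noise.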
I wonder why we don't use this approach in every project that involves amplicon sequencing. For example, could denoising improve accuracy in cancer diagnostics, where we need to detect a low-frequency mutation that can be confused with a sequencing error?