I have a set of shotgun metagenomic fastq files of varying quality. Is it a good idea to have them all trimmed at the same length? Or is it okay to trim them at different lengths based on the quality of each individual file?
If I trim them all at the same length, that length would be dictated by the file with the lowest quality, i.e. the file that needs to be trimmed the most. Obviously, this is not ideal, as I would lose a lot of data from the other, higher-quality files.
Downstream processing would involve assembling the reads into scaffolds.