The main idea of this technology is introduced in this paper:
Comprehensive Genome-wide Protein-DNA Interactions Detected at Single-Nucleotide Resolution
It seems to give better results than ChIP-seq.
However, a tag can map to far fewer distinct positions than in ChIP-seq, which leads to many redundant reads.
I don't think only a single read should be kept when redundant ones exist, but capping duplicates at a fixed maximum (for example, 5) also seems arbitrary. (If there were a distribution model behind the cutoff, it would be more reasonable.) I have seen several threads about how to deal with redundant reads, but didn't find one that resolved it. Does anyone have any ideas about that?
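One way to make the cutoff less arbitrary is to derive it from a null model: under uniform random coverage, the number of reads starting at a given position is roughly Poisson, so you could keep duplicates up to the largest count that random sampling alone could plausibly produce. A minimal sketch of that idea (the function name, the p-value threshold, and the genome-wide uniform rate are my own choices, not taken from any cited tool):

```python
from math import exp

def max_duplicates(total_reads, genome_size, pvalue=1e-5):
    """Smallest duplicate count k at a single position such that seeing
    more than k reads there is unexpected (tail probability < pvalue)
    under a Poisson model with lambda = total_reads / genome_size.
    Duplicates up to k are plausible by chance; counts far beyond k
    start to look like PCR artifacts."""
    lam = total_reads / genome_size
    k = 0
    term = exp(-lam)      # P(X = 0)
    cdf = term
    while 1.0 - cdf >= pvalue:
        k += 1
        term *= lam / k   # P(X = k) computed from P(X = k - 1)
        cdf += term
    return k

# e.g. 20M reads on a ~3 Gb genome
print(max_duplicates(20_000_000, 3_000_000_000))
```

Note that a genome-wide uniform rate is probably too strict for exo-style data, where true signal concentrates reads at exact 5' ends; estimating the rate locally, or only over mappable positions, would be less aggressive.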
The only reason to remove duplicates in other experiments is that the likelihood of producing identical reads naturally is very low when compared to the rate of PCR artifacts. But this does not mean that one should always automatically remove duplicates.
The method that you cite allows for far more accurate identification of the binding sites; therefore, most of the duplicates will be natural ones that should not be removed, as they indicate occupancy levels.
You might want to take a look at this paper:
"We describe a general approach for utilizing reads that map to multiple locations on the reference genome (multi-reads). Our approach is based on allocating multi-reads as fractional counts using a weighted alignment scheme"
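The weighted-allocation idea in that quote can be illustrated with a toy EM-style scheme (a sketch of the general approach, not the authors' actual algorithm): each multi-read starts with uniform fractional weights over its candidate positions, and the weights are then iteratively re-estimated in proportion to the estimated coverage at each candidate.

```python
from collections import defaultdict

def allocate_multireads(unique_hits, multi_hits, iters=20):
    """Fractionally allocate multi-reads by iterative reweighting.
    unique_hits: positions of uniquely mapping reads.
    multi_hits:  one list of candidate positions per multi-read.
    Returns one list of fractional weights per multi-read."""
    # start each multi-read with uniform weights over its candidates
    weights = [[1.0 / len(hits)] * len(hits) for hits in multi_hits]
    for _ in range(iters):
        # estimated coverage = unique reads + current fractional counts
        total = defaultdict(float)
        for p in unique_hits:
            total[p] += 1.0
        for hits, ws in zip(multi_hits, weights):
            for p, w in zip(hits, ws):
                total[p] += w
        # shift each multi-read toward its better-supported candidates
        weights = [[total[p] / sum(total[q] for q in hits) for p in hits]
                   for hits in multi_hits]
    return weights

# a multi-read tied between position 100 (supported by unique reads)
# and position 500 (unsupported) ends up almost entirely at 100
print(allocate_multireads([100, 100, 100], [[100, 500]]))
```

In a real implementation the "coverage" at a candidate would be smoothed over a window rather than taken at a single coordinate, but the fixed point is the same: multi-reads accumulate where the independent evidence is.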