Most DE tools used for RNA-Seq (such as DESeq) assume that gene expression counts follow a negative binomial distribution (to account for both technical and biological variation), while DE tools that originated with microarrays (such as limma) assume a normal distribution. Is this difference due to some technical difference between RNA-Seq and microarrays?
Microarrays aren't Poisson processes: you aren't modeling discrete events. You can't frame it as "what is the probability of observing k reads for a given gene?", because microarrays produce continuous signal intensities rather than counts.
You can think of sequencing reads as success/fail trials (Bernoulli -> binomial -> Poisson); you can't think of continuous signal intensities that way.
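To make the Bernoulli -> binomial -> Poisson chain concrete, here is a minimal simulation sketch (all numbers are hypothetical, chosen only for illustration): each read is a Bernoulli trial that hits a given gene with small probability, so the gene's count is binomial, and for a large library the count behaves like a Poisson variable whose variance roughly equals its mean.

```python
import random
import statistics

random.seed(0)

# Toy model of read counting: each of N sequenced reads independently
# maps to a given gene with small probability p (a Bernoulli trial).
# The gene's count is then Binomial(N, p), which for large N and small p
# is approximately Poisson(N * p); the hallmark is variance ~= mean.
N = 10_000    # hypothetical library size
p = 0.002     # hypothetical per-read probability of hitting the gene

counts = [sum(random.random() < p for _ in range(N)) for _ in range(500)]

print(statistics.fmean(counts))                                # ~= N * p = 20
print(statistics.variance(counts) / statistics.fmean(counts))  # ~= 1
```

There is no analogous construction for a continuous fluorescence intensity: it is not a sum of success/fail trials, so there is no count to model this way.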
Just because something has variation (all real data has variation) doesn't mean it's Poisson or shot noise. Look at any plot of the Poisson distribution: the random variable takes discrete values. Can you get continuous signal intensities to fit such a distribution? No.
Also, you can use limma for RNA-seq (see limma-voom, which estimates the mean-variance trend and assigns precision weights so you aren't fitting raw count data directly). It works well for RNA-seq. The negative binomial is only one way to model RNA-seq data for DE analysis; several packages (e.g. sleuth, limma-voom) don't model it that way.
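As for why RNA-seq tools reach for the negative binomial rather than plain Poisson: biological variation makes the underlying expression rate itself vary between replicates. A standard way to see this is a Gamma-Poisson mixture (which is exactly a negative binomial) — a rough sketch with made-up parameters:

```python
import random
import statistics

random.seed(1)

def poisson(lam):
    # Simple Poisson sampler (Knuth's multiplication method),
    # adequate for moderate rates in a toy simulation.
    limit = 2.718281828459045 ** -lam
    k, prod = 0, 1.0
    while True:
        prod *= random.random()
        if prod <= limit:
            return k
        k += 1

# Pure technical (shot) noise: fixed rate -> Poisson, variance ~= mean.
tech = [poisson(50) for _ in range(3000)]

# Add biological variation: the expression rate itself differs between
# replicates (here Gamma-distributed), giving a Gamma-Poisson mixture,
# i.e. a negative binomial. Variance now greatly exceeds the mean.
shape = 5.0  # hypothetical dispersion parameter
bio = [poisson(random.gammavariate(shape, 50 / shape)) for _ in range(3000)]

print(statistics.variance(tech) / statistics.fmean(tech))  # ~= 1
print(statistics.variance(bio) / statistics.fmean(bio))    # >> 1
```

This overdispersion (variance above the mean) is what DESeq-style models capture with the negative binomial; limma-voom instead handles the mean-variance relationship through transformation and weighting.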