Question: Top MACS peaks from ChIP-seq look identical to input
Asked by YanO (United Kingdom), 4.6 years ago:

I have a ChIP-seq dataset that I've processed with MACS. However, when I load the bigwig files into a genome browser and look at the top peaks called by MACS, the experimental track looks identical to the control/input track. I'm unsure whether such peaks are trustworthy, and how exactly to proceed.

Any thoughts much appreciated.

Tags: chip-seq, macs
Answer by Ming Tang (Houston / MD Anderson Cancer Center), 4.6 years ago:

You may check whether those peaks fall within the ENCODE blacklist regions: https://sites.google.com/site/anshulkundaje/projects/blacklists


Thanks for the good suggestion. Some do indeed fall within these regions, but I get the impression that's not quite the full story...

Reply by YanO, 4.6 years ago
Answer by Gary (China Medical University Hospital, Taichung, Taiwan), 4.6 years ago:

Just a guess: is it possible that you have many more reads in the input (control) sample than in the treatment (IP) sample, and didn't correct for the total number of reads when converting the bam files to bigwig? MACS automatically corrects for sequencing depth during peak calling, but BEDTools requires the user to set the total read count for each sample manually.
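Gary's point can be sketched numerically. Below is a minimal reads-per-million scaling (the read totals and coverage values are hypothetical, echoing the depths discussed later in this thread), showing how an uncorrected deeper input looks artificially stronger in a raw pileup:

```python
# Illustrative sketch: scale raw coverage values to "per million mapped reads"
# so tracks from libraries of different depth are comparable.
# All numbers below are hypothetical.

def per_million_factor(total_reads):
    """Scaling factor that converts a raw pileup value to reads-per-million."""
    return 1_000_000 / total_reads

ip_reads = 28_000_000      # treatment (IP) library depth
input_reads = 90_000_000   # a much deeper input library

# A region covered by 450 raw reads in the IP and 900 in the input
# looks input-dominated in raw pileups...
raw_ip, raw_input = 450, 900

# ...but after depth normalisation the IP is actually more enriched.
norm_ip = raw_ip * per_million_factor(ip_reads)          # ~16.1 per million
norm_input = raw_input * per_million_factor(input_reads)  # ~10.0 per million

print(round(norm_ip, 1), round(norm_input, 1))
```

This is the same idea as MACS2's --SPMR option mentioned further down the thread, though the code here is only an illustration, not MACS code.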


Ah yes, actually I do have many more reads for input than for treatment. But, as you say, I thought MACS corrects for this automatically. Is it not reasonable to expect that, for my most significant peaks, I should see some difference between the two conditions in the bigwig files?

Is there a standard way to normalise the bigwig files or subtract the input? Should I just randomly downsample the input reads? Thanks!

Reply by YanO, 4.6 years ago

I made an example to explain it for you: http://68.181.92.180/Gary/Temporal/LargeInput.png (the link may be deleted the next time I clear my disk).

The green track is the IP (treatment sample) with 28 million reads; the red track is its input with 32 million reads; the blue track is its input with 63 million reads; and the orange track is its input with 90 million reads. No matter which input sample I used, all five peaks (MACS_peak_1 to MACS_peak_5) were called by MACS 1.4 (red peaks based on the 32-million-read input, blue peaks on 63 million, orange peaks on 90 million). Unfortunately, even after I corrected all the bigwig files by total read count, the total number of reads still strongly influences the apparent density of enrichment. I would guess your input sample is similar to the blue track, with a total read count more than twice that of your IP sample.

I don't know of any tool that can subtract the input from bigwig files. You could look for one on OMICtools (http://omictools.com/), or ask the question on Biostars. If you find such a tool, please share it with me. Thanks.
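For what it's worth, the subtraction itself is straightforward once both tracks are binned and depth-normalised. A hypothetical sketch (bin values and read totals are invented; extracting binned coverage from actual bigwig files would need a bigwig reader, which is not shown):

```python
# Naive input subtraction on binned coverage, assuming both tracks were
# already extracted into equal-width bins. All numbers are hypothetical.

def subtract_input(treat_cov, input_cov, treat_reads, input_reads):
    """Scale both tracks to per-million depth, subtract input, floor at 0."""
    t_scale = 1_000_000 / treat_reads
    i_scale = 1_000_000 / input_reads
    return [max(0.0, t * t_scale - c * i_scale)
            for t, c in zip(treat_cov, input_cov)]

treat = [10, 40, 120, 35, 8]    # hypothetical binned IP coverage
ctrl  = [20, 30, 60, 30, 25]    # hypothetical binned input coverage

diff = subtract_input(treat, ctrl, treat_reads=28_000_000, input_reads=63_000_000)
print([round(x, 2) for x in diff])
```

Flooring at zero mirrors what most signal tracks do: a bin where input exceeds the scaled treatment is simply reported as no enrichment rather than a negative value.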

Reply by Gary, 4.6 years ago

Thanks for this, Gary! I guess my question now is: do you trust the peaks called by MACS? Doesn't it look like the signal in the treatment sample is being driven by the input?

Reply by YanO, 4.6 years ago

It is very hard to say, and it usually depends on your biological question. In practice, I try my best to analyze every ChIP-seq dataset my boss has paid for. Of course, low-quality data leads to garbage in, garbage out. Based on my analysis results, it is then my boss's decision whether to (1) accept the negative or poor-quality result, or (2) spend more money and resources to improve the data quality. In my experience, most problems arise before sequencing, in (1) experimental design (e.g. confounding factors), (2) sample preparation (e.g. mixed cell types or contamination), (3) the ChIP itself (e.g. low enrichment), and (4) library preparation (e.g. PCR bias). I use NGS-QC Generator (Mendoza-Parra et al., 2013) and CHANCE (Diaz, Nellore & Song, 2012) to assess the quality of our ChIP-seq data; both provide a lot of useful information. Where possible, I also use GREAT (McLean et al., 2010) to test whether the peak-associated genes are significantly enriched for the biological functions we are interested in.

MACS compares very well with the other peak-calling tools I have tried. If the data quality is good enough, I use a 10x fold-enrichment cutoff to filter out weak peaks, as Ian suggested. If the data is not so good, I may use a 5x cutoff to remove likely false-positive peaks. Sometimes, for a poorly enriched ChIP-seq dataset, I don't apply any threshold at all. Unfortunately, there is still no gold standard; you have to do your best to extract biological meaning from the NGS data you have. Below is an example for reference.

Wapinski et al. (2013):

…we performed Chromatin Immunoprecipitation followed by high throughput sequencing (ChIP-seq) for all three BAM factors in MEFs 48 hours after induction of all three factors. We readily identified 5,902 significant Ascl1-occupied loci upon filtering high quality peaks based on at least 20 fold enrichment and less than 5% FDR (Figure 2A). In contrast, Brn2 and Myt1l ChIP-seq required substantial technical optimization (see materials and methods, Figures S2B and S2C). Under optimized conditions Brn2 ChIP-seq revealed 2,354 peaks with at least 5-fold enrichment and less than 5% FDR. Using the same criteria, Myt1l ChIP-seq showed only minimal occupancy, despite extensive cross-linking and sonication optimization and use of different epitope tagging strategies. After optimization, we identified 475 peaks with at least 2-fold enrichment and less than 5% FDR.

 

To return to your question: the gold standard for determining whether a peak is real (regardless of its enrichment level) is wet-bench validation, e.g. a reporter-gene assay. If you can't do wet-bench work, please run a replicate; the reproducibility between two ChIP-seq datasets is informative, as the ENCODE project suggests (Landt et al., 2012). Finally, I'll share a bigwig example with you: http://68.181.92.180/Gary/Temporal/Med1_Fuchs.png (the link may be deleted the next time I clear my disk). It was published in Nature this month (Adam et al., 2015). Their Med1 ChIP-seq (first red track) was poor, with weakly enriched peaks, but still good enough to support a paper accepted by Nature. So whether your ChIP-seq data is good enough really depends on your biological question and analysis strategy. I hope this helps.

References

Adam, R.C., Yang, H., Rockowitz, S., Larsen, S.B., Nikolova, M., Oristian, D.S., Polak, L., Kadaja, M., Asare, A., Zheng, D. & Fuchs, E. (2015). Pioneer factors govern super-enhancer dynamics in stem cell plasticity and lineage choice. Nature.

Diaz, A., Nellore, A. & Song, J.S. (2012). CHANCE: comprehensive software for quality control and validation of ChIP-seq data. Genome Biol. 13.

Landt, S.G., Marinov, G.K., Kundaje, A., Kheradpour, P., Pauli, F., Batzoglou, S., Bernstein, B.E., Bickel, P., Brown, J.B., Cayting, P., Chen, Y.W., DeSalvo, G., Epstein, C., Fisher-Aylor, K.I., Euskirchen, G., Gerstein, M., Gertz, J., Hartemink, A.J., Hoffman, M.M., Iyer, V.R., Jung, Y.L., Karmakar, S., Kellis, M., Kharchenko, P.V., Li, Q.H., Liu, T., Liu, X.S., Ma, L.J., Milosavljevic, A., Myers, R.M., Park, P.J., Pazin, M.J., Perry, M.D., Raha, D., Reddy, T.E., Rozowsky, J., Shoresh, N., Sidow, A., Slattery, M., Stamatoyannopoulos, J.A., Tolstorukov, M.Y., White, K.P., Xi, S., Farnham, P.J., Lieb, J.D., Wold, B.J. & Snyder, M. (2012). ChIP-seq guidelines and practices of the ENCODE and modENCODE consortia. Genome Res. 22, 1813-1831.

McLean, C.Y., Bristor, D., Hiller, M., Clarke, S.L., Schaar, B.T., Lowe, C.B., Wenger, A.M. & Bejerano, G. (2010). GREAT improves functional interpretation of cis-regulatory regions. Nat. Biotechnol. 28, 495-501.

Mendoza-Parra, M.A., Van Gool, W., Mohamed Saleem, M.A., Ceschin, D.G. & Gronemeyer, H. (2013). A quality control system for profiles obtained by ChIP sequencing. Nucleic Acids Res 41, e196.

Wapinski, O.L., Vierbuchen, T., Qu, K., Lee, Q.Y., Chanda, S., Fuentes, D.R., Giresi, P.G., Ng, Y.H., Marro, S., Neff, N.F., Drechsel, D., Martynoga, B., Castro, D.S., Webb, A.E., Sudhof, T.C., Brunet, A., Guillemot, F., Chang, H.Y. & Wernig, M. (2013). Hierarchical mechanisms for direct reprogramming of fibroblasts to neurons. Cell 155, 621-635.

 

Reply by Gary, 4.6 years ago

Gary, thank you so much for such a detailed answer! Much appreciated. I'll have to put in a bit more work to dig out something biologically meaningful from my data, then.

Reply by YanO, 4.5 years ago

See wigmath from the java-genomics-toolkit (http://palpant.us/java-genomics-toolkit/), and WiggleTools: https://github.com/Ensembl/WiggleTools

Reply by Ming Tang, 4.6 years ago

Hi tangming2005, thanks a lot.

Reply by Gary, 4.6 years ago

What version of MACS are you using? From Tao Liu: https://groups.google.com/forum/#!msg/macs-announcement/LZpliDkdN-8/VO9EYBHWUmIJ

"MACS14: No. They are raw pileup with tag extension. Scaling is only used while calling peaks. Note that the control track is generated with the same tag extension as the treatment, so it is not exactly the same as the local bias from MACS, which is a maximum of the average tag number from a small window (1 kbp) and a larger window (10 kbp). In brief, they are raw fragment pileup.

MACS2: Yes. They are scaled. Especially with the current --SPMR option, you will get values of signal per million reads. And the control track is consistent with the local bias calculated by MACS."


Reply by Ming Tang, 4.6 years ago

Thanks for this useful information, tangming2005. I use MACS 1.4, the most stable version. Maybe I should try MACS2 in the near future.

Reply by Gary, 4.6 years ago
Answer by Ian (University of Manchester, UK), 4.6 years ago:

I have seen this sort of result before. In my experience these regions tend to have low fold-enrichment (FE) values. I would recommend using MACS 2.1.0: regions are retained if they meet the q-value (false discovery rate) threshold, and you can then sort them by fold enrichment. I use an arbitrary threshold of FE >= 10 for well-enriched regions, and a lower threshold of FE >= 5 for peaks of "interest". You may well find that the regions you are seeing have FE scores below 5.

If you are using MACS 1.4, I would recommend sorting the regions by FDR and keeping only those with FDR < 10 (MACS 1.4 reports FDR as a percentage), then considering FE as described above.
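Ian's recipe can be sketched as a simple filter over a peak table. The tuple layout below (chrom, start, end, FE, FDR%) is hypothetical; adapt the indices to your actual MACS output columns:

```python
# Sketch of Ian's thresholds: keep FDR-passing peaks, require a minimum fold
# enrichment (FE), then rank by FE. Peak tuples are hypothetical:
# (chrom, start, end, fold_enrichment, fdr_percent).

peaks = [
    ("chr1", 100, 600, 25.0, 0.0),
    ("chr2", 500, 900, 8.2, 2.5),
    ("chr3", 50, 400, 4.1, 0.1),
    ("chr4", 700, 1200, 12.3, 12.0),
]

FDR_MAX = 10.0   # keep peaks with FDR below 10 (a percentage in MACS 1.4)
FE_WELL = 10.0   # "well enriched" threshold
FE_MIN  = 5.0    # lower bound for peaks of interest

kept = [p for p in peaks if p[4] < FDR_MAX and p[3] >= FE_MIN]
kept.sort(key=lambda p: p[3], reverse=True)   # strongest enrichment first

well_enriched = [p for p in kept if p[3] >= FE_WELL]
print(len(kept), len(well_enriched))
```

Note how the FE filter and the FDR filter disagree: chr3 passes on FDR alone but is dropped for weak enrichment, which is exactly the situation Ian describes.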


Thanks for the help, but this doesn't seem to be the issue: the top peaks I've selected all have an FDR of 0 and an FE of at least 8, with many much higher.

Reply by YanO, 4.6 years ago
Answer by spacemorrissey (United States), 4.5 years ago:

It is important to remember that when MACS calls a peak, it does not use the input file the way people often assume. People tend to assume MACS would ignore a region where the treatment signal is no higher than the input signal. However, when calling peaks, MACS compares the treatment signal only to the treatment signal in a window around that position; the input signal is used only to generate the FDR value that MACS assigns to a given peak.
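For context, the core significance test MACS applies is a Poisson upper-tail test of the treatment read count in a window against a locally estimated background rate. A self-contained sketch of such a test follows (illustrative only, not MACS's actual implementation; the counts are invented):

```python
import math

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam): one-sided upper-tail probability."""
    # 1 - CDF(k-1), summed directly; adequate for the small k used here.
    cdf = sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))
    return max(0.0, 1.0 - cdf)

# Hypothetical window: 50 treatment reads where the locally estimated
# background rate (the "local lambda" in MACS terms) is 12 reads per window.
p = poisson_sf(50, 12.0)
print(p < 1e-10)  # an extremely significant window
```

The key consequence matches the answer above: the test asks whether the treatment count is surprising given a background rate, which is not the same thing as requiring the treatment track to visibly exceed the input track at that locus.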

 

DO NOT pick your top peaks by the FDR from MACS! That will almost always give you peaks that should have been blacklisted in the first place. The p-value that MACS generates is equally useless for identifying which peaks are most trustworthy.


Could you expand on what should be used to pick 'trustworthy' peaks, then?

Reply by Saulius Lukauskas, 4.5 years ago