User: dr_bantz

Reputation: 50
Status: New User
Location:
Last seen: 3 hours ago
Joined: 1 year, 7 months ago
Email: s************@gmail.com

Posts by dr_bantz

16 results • page 1 of 2
1 vote • 1 answer • 299 views
Answer: A: Need help about mRNA-Seq result
... cummeRbund (part of the Tuxedo suite) accepts FPKM input. http://compbio.mit.edu/cummeRbund/ ...
written 9 weeks ago by dr_bantz50
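
A minimal sketch of the cummeRbund workflow that answer points at, assuming the cuffdiff output sits in a directory named "diff_out" (the directory and object names here are illustrative, not taken from the original thread):

    # load cuffdiff output into a cummeRbund database object
    library(cummeRbund)
    cuff <- readCufflinks(dir = "diff_out")

    # per-gene FPKM table for downstream inspection or plotting
    gene.fpkm <- fpkm(genes(cuff))
    head(gene.fpkm)
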
0 votes • 0 answers • 384 views
Comment: C: Error with samtools - indexing alignments: Writing to standard output failed: br
... Sounds like your bam files might be too big. See here: https://www.biostars.org/p/190503/ ...
written 4 months ago by dr_bantz50
1 vote • 2 answers • 357 views
Comment: C: Deseq2 pairwise comparison
... You've missed an apostrophe and a comma in there, and the variables in the data frame have different lengths (i.e., one of them has the wrong number of samples). ...
written 4 months ago by dr_bantz50
1 vote • 2 answers • 357 views
Answer: A: Deseq2 pairwise comparison
... The 'colData' argument specifies the sample information. This should be a one-column data frame containing the condition for each sample, with the sample names as the row names:
    colData <- data.frame(condition = conditions)
    row.names(colData) <- names
where "conditions" is a ...
written 4 months ago by dr_bantz50
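
To flesh that answer out, a minimal sketch of how such a colData table is typically passed to DESeq2; "counts" (a raw count matrix with samples in columns), "conditions" (a per-sample condition vector in the same order) and the level names "treated"/"control" are placeholders, not objects from the original thread:

    library(DESeq2)

    # one-column data frame of sample information, row names = sample names
    colData <- data.frame(condition = factor(conditions))
    rownames(colData) <- colnames(counts)

    # build the dataset and run the standard workflow
    dds <- DESeqDataSetFromMatrix(countData = counts,
                                  colData   = colData,
                                  design    = ~ condition)
    dds <- DESeq(dds)

    # pairwise comparison between two levels of the condition factor
    # ("treated" and "control" stand in for the actual level names)
    res <- results(dds, contrast = c("condition", "treated", "control"))
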
0 votes • 2 answers • 357 views
Comment: C: Deseq2 pairwise comparison
... "Paired end" refers to the sequencing technology itself (I imagine you used single-end; either way it's not relevant to your question). The link igor posted gives some guidelines on how to deal with samples encompassing multiple variables (conditions/cell lines). You say ...
written 4 months ago by dr_bantz50
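
As a rough illustration of the multi-variable setup that comment alludes to (with hypothetical "cell_lines" and "conditions" vectors), the usual DESeq2 convention is to put the nuisance variable first in the design formula and the variable of interest last:

    # sample table describing both factors for each sample
    colData <- data.frame(cell_line = factor(cell_lines),
                          condition = factor(conditions))
    rownames(colData) <- colnames(counts)

    # ~ cell_line + condition tests the condition effect while controlling for cell line
    dds <- DESeqDataSetFromMatrix(countData = counts,
                                  colData   = colData,
                                  design    = ~ cell_line + condition)
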
0 votes • 3 answers • 293 views
Answer: A: Huge matrix memory issue
... Thanks for all the suggestions! In the end I went for a row-by-row approach, as suggested by Jean-Karim Heriche and karl.stamm. Using this approach as it is would take 10 days to run on the cluster. However, the approach has some helpful advantages, one of which is that I am now able to embarrassing ...
written 6 months ago by dr_bantz50
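
A sketch of what a row-by-row version of that computation could look like, assuming "mut" is a genes x strains 0/1 mutation matrix and each row is written straight to disk (names and file paths are illustrative, not from the thread). Because every row depends only on the shared mutation matrix, the rows can be computed independently, which is what makes the job embarrassingly parallel:

    # write one row of the 30,000 x 30,000 co-occurrence matrix at a time,
    # so the full matrix never has to be held in memory
    out <- file("cooccurrence.tsv", open = "w")
    for (i in seq_len(nrow(mut))) {
      # number of strains in which gene i and each other gene are both mutated
      row_counts <- as.vector(mut %*% mut[i, ])
      writeLines(paste(row_counts, collapse = "\t"), out)
    }
    close(out)
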
0 votes • 3 answers • 293 views
Comment: C: Huge matrix memory issue
... Each (i,j)th cell would contain the number of times gene i and gene j are mutated in the same strain, so this would have to be computed. The plan is then to make a second matrix with the Poisson probability for the value (i,j), based on the occurrence of mutation in genes i and j. Your parallelization i ...
written 6 months ago by dr_bantz50
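
One possible reading of the Poisson step described there, again computed one row at a time to match the row-by-row approach (all object names are placeholders, and the null model sketched here, observed co-occurrence versus an expectation from per-gene mutation frequencies, is an assumption rather than the poster's exact model):

    n_strains <- ncol(mut)              # mut: genes x strains 0/1 matrix (assumed)
    p <- rowSums(mut) / n_strains       # per-gene mutation frequency

    i <- 1                                       # example row (gene i)
    observed <- as.vector(mut %*% mut[i, ])      # co-occurrence counts with gene i
    expected <- p[i] * p * n_strains             # expected counts under independence

    # upper-tail Poisson probability of seeing at least the observed count
    p_row <- ppois(observed - 1, lambda = expected, lower.tail = FALSE)
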
0 votes • 3 answers • 293 views • 6 followers
Huge matrix memory issue
... Hi everyone, I have a WGS dataset consisting of ~2000 samples. I want to look at the co-occurrence of mutations (i.e. how often a given pair of genes is mutated in the same strain). The way I'm doing this requires the creation of a 30,000 x 30,000 matrix to represent all pairwise comparisons of gen ...
genome written 6 months ago by dr_bantz50
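
The memory arithmetic behind the question is worth spelling out: a dense 30,000 x 30,000 matrix of doubles needs roughly 7 GB before R makes any copies, which is why the naive approach runs into trouble. A quick sanity check (numbers only, no data required):

    n_genes <- 30000
    bytes   <- n_genes * n_genes * 8    # 8 bytes per double in R
    bytes / 1024^3                      # ~6.7 GiB for a single dense copy
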
0 votes • 3 answers • 473 views
Comment: C: Python script to trim 3' A nucleotides running slow
... Awesome, it's turbo-fast now! I just had to add an extra '\n' at the end of the f.write statement (and of course I added fqFile = open(in_file, 'r') before the for loop). Thanks a lot! ...
written 17 months ago by dr_bantz50
5 votes • 3 answers • 473 views
Python script to trim 3' A nucleotides running slow
... I wrote a Python script to trim any 3' A nucleotides from all reads in a fastq file (this is necessary for particular samples due to the library prep method). The script works, but it's very, very slow. Any ideas as to how to speed it up? I suspect the step where the trimmed read is appended to the oup ...
python written 17 months ago by dr_bantz50 • updated 17 months ago by John11k

Latest awards to dr_bantz

No awards yet. Soon to come :-)
