I am using edgeR's GLM approach to find differentially expressed genes between two groups of mice, wild-type (WT) and transgenic (TR), each of which receives one of four treatments (A, B, A+B, or saline), as follows:
mouse treatment
1: TR A+B
2: TR A
3: TR B
4: TR Saline
5: WT A+B
6: WT A
7: WT B
8: WT Saline
I have a design matrix like this:
TR-A+B TR-A TR-B TR-Saline WT-A+B WT-A WT-B WT-Saline
1 1 0 0 0 0 0 0 0
2 1 0 0 0 0 0 0 0
3 0 1 0 0 0 0 0 0
4 0 1 0 0 0 0 0 0
5 0 0 1 0 0 0 0 0
6 0 0 1 0 0 0 0 0
7 0 0 0 1 0 0 0 0
8 0 0 0 1 0 0 0 0
9 0 0 0 0 1 0 0 0
10 0 0 0 0 1 0 0 0
11 0 0 0 0 0 1 0 0
12 0 0 0 0 0 1 0 0
13 0 0 0 0 0 0 1 0
14 0 0 0 0 0 0 1 0
15 0 0 0 0 0 0 0 1
16 0 0 0 0 0 0 0 1
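For reference, this is roughly how I am constructing that group-means design (a minimal sketch; the `counts` object and the relabelled group levels, e.g. `AB` instead of `A+B` to keep the names syntactically valid, are placeholders):

```r
library(edgeR)

# Two genotypes x four treatments, two replicates each, in the sample order above.
# Level labels are placeholders ("AB" rather than "A+B" to keep valid R names).
mouse     <- factor(rep(c("TR", "WT"), each = 8))
treatment <- factor(rep(rep(c("AB", "A", "B", "Saline"), each = 2), times = 2))
group     <- factor(paste(mouse, treatment, sep = "."))

# Group-means (no-intercept) design, as in the matrix shown above
design <- model.matrix(~ 0 + group)
colnames(design) <- levels(group)

# `counts` is a placeholder for my gene-by-sample count matrix
d <- DGEList(counts = counts, group = group)
d <- calcNormFactors(d)
d <- estimateDisp(d, design)
```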
To determine the differential expression between TR and WT that is truly due to the combined A+B treatment, should the following contrast be used:
(TR-A+B - TR-A - TR-B - TR-Saline) - (WT-A+B - WT-A - WT-B - WT-Saline)?
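If the group-means route is right, this is how I would write that contrast (sketch only, using the `design` and `d` objects and the placeholder group names from the sketch above; whether the signs are correct is exactly my question):

```r
# Proposed contrast, written with the placeholder group names (AB = A+B)
con <- makeContrasts(
  (TR.AB - TR.A - TR.B - TR.Saline) - (WT.AB - WT.A - WT.B - WT.Saline),
  levels = design
)

fit <- glmQLFit(d, design)
qlf <- glmQLFTest(fit, contrast = con)
topTags(qlf)
```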
Or should I instead create a new design matrix with an interaction term,

`model.matrix(~ treatment * mouse, data = d)`

whose coefficients are `(Intercept)`, `mouseWT`, `treatmentA+B`, `treatmentB`, `treatmentSaline`, `mouseWT:treatmentA+B`, `mouseWT:treatmentB`, and `mouseWT:treatmentSaline`, and then run `glmQLFTest()` on the interaction coefficient of interest?
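Something like this is what I have in mind for the interaction route (again a sketch; the reference levels and the `counts` placeholder are assumptions, and the coefficient names depend on factor order and reference levels):

```r
library(edgeR)

# Reuses the mouse/treatment factors and the `counts` placeholder from the first sketch
treatment <- relevel(treatment, ref = "A")   # reference levels chosen for illustration
mouse     <- relevel(mouse, ref = "TR")

design2 <- model.matrix(~ treatment * mouse)

d2   <- DGEList(counts = counts)
d2   <- calcNormFactors(d2)
d2   <- estimateDisp(d2, design2)
fit2 <- glmQLFit(d2, design2)

# The interaction coefficient asks whether the (A+B vs A) effect
# differs between WT and TR mice
qlf2 <- glmQLFTest(fit2, coef = "treatmentAB:mouseWT")
topTags(qlf2)
```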
Thanks.