Think about what "false discovery rate" actually means. Whatever your FDR cutoff, that's the expected proportion of genes you call differentially expressed that really aren't. So if you call 500 genes as DE with an FDR cutoff of 0.1, you expect about 50 of them to be false positives. It all comes down to your tolerance for Type I vs. Type II errors. In my current project, we're using a very strict FDR of 0.01, because we want to be really sure that any gene we call is the real thing. If we had more tolerance for false positives in the name of discovery, we'd use 0.05 or 0.1. I've seen some very good projects that went all the way up to 0.25! But you have to calibrate it to the goals of the project.
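To make the arithmetic concrete, here's a minimal sketch of Benjamini-Hochberg FDR adjustment in plain NumPy. The function name `bh_adjust` and the simulated p-value mix are mine, purely for illustration; in practice your DE tool will already report adjusted p-values, and you just threshold them at your chosen cutoff:

```python
import numpy as np

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values.

    For sorted p-values p_(1) <= ... <= p_(m), the raw BH value is
    p_(k) * m / k; we then enforce monotonicity from the largest
    p-value down so adjusted values never decrease with rank.
    """
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    raw = p[order] * m / np.arange(1, m + 1)
    # running minimum from the right makes the sequence monotone
    adj_sorted = np.minimum.accumulate(raw[::-1])[::-1]
    adj = np.empty(m)
    adj[order] = np.clip(adj_sorted, 0.0, 1.0)
    return adj

# Simulated experiment: 900 null genes (uniform p-values) plus
# 100 genes with a real effect (p-values concentrated near zero).
rng = np.random.default_rng(0)
pvals = np.concatenate([rng.uniform(size=900),
                        rng.beta(0.1, 10.0, size=100)])

q = bh_adjust(pvals)
called = q < 0.10  # FDR cutoff of 0.1
# Among the genes called DE, roughly 10% are expected to be nulls.
print("genes called DE at FDR 0.1:", called.sum())
```

The point of the simulation is the interpretation: the cutoff controls the expected fraction of false positives *among your calls*, not the chance that any individual gene is a false positive.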
One thing you should never do, IMO, is pick the FDR cutoff based on how many positives you're getting. Decide on a cutoff a priori, and then the number of positives you get is, well, what you get. If you don't get anything at 0.1, sorry, that probably means your experiment just isn't producing significant results. If you get a bunch more than you were expecting at 0.05, that means your experimental condition is producing more DE than you thought. Either way, adjusting the cutoff after the fact is closely related to "p-hacking," and it's a terrible practice.