Today Stack Overflow has temporarily banned GPT-based answers (see here). Users suspected of posting ChatGPT answers without vetting them will face sanctions. Perhaps Biostars should follow suit and place a notice along the same lines.
For example, take the most recent post at the time of writing. ChatGPT returned the answer attached in the screenshots. This may not be the best example, but it was quick to produce, and posting it would detract from the valuable help currently being provided in the replies.
Regardless, I'd be interested to hear what others think...
I'd almost want to go the other direction and have the ChatGPT answer come up on question posting, e.g.:
1) User enters question
2) User hits [Post] button
3) Window pops up containing the ChatGPT answer and "Does this answer your question?" [Yes] [No]
4) On YES, question is posted with ChatGPT answer pre-marked as the accepted answer; on NO the question is posted alone and without a GPT response
That would be great! However, I suspect lower-quality questions might get lower-quality answers. This would disproportionately impact newer bioinformaticians, who might struggle to vet the proposed answers in the unfortunate cases where the code produces a semi-plausible-looking result rather than a straight error or an empty file.
I can definitely see it going that way in the future for all sorts of support services, though.
How are beginners going to know if this is a good answer that actually solved their problem? They are the very last people who could verify that an answer is good and correct. ChatGPT seems to be very good at producing answers that sound and look right but are actually a load of garbage. The good thing about this community is that most people answering are experts, and there are a lot of intricacies in bioinformatics that can only be addressed by experts. For that reason I don't see this working at the current time.
Presumably by running the code and checking if the output is correct. Or is it established that beginners are so hopeless they cannot even do this?
That might work for simple coding questions like 'How do I merge one data frame with another?' (although the experience from Stack Overflow is that even simple questions can be answered incorrectly). But it would fail with questions like 'What are best practices for RNA-seq experiments with x replicates, an overdispersed phenotype, and low-quality RNA... etc.' or 'What's the best algorithm for this de novo assembly...?'. Bioinformatics can be very complex and context-specific, so beginners (and often experts as well!) have absolutely no clue whether answers to even moderately complex questions with no simple yes/no answer are correct, and marking them as accepted on that basis could seriously mislead future readers.
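To illustrate the simple case: the data-frame merge question has a one-liner answer that a beginner really can verify just by running it and eyeballing the output (a hedged sketch with pandas; the column names and values here are made up for illustration):

```python
import pandas as pd

# Two small example frames sharing a key column ("id" is a hypothetical name)
left = pd.DataFrame({"id": [1, 2, 3], "gene": ["BRCA1", "TP53", "EGFR"]})
right = pd.DataFrame({"id": [2, 3, 4], "expr": [5.1, 2.3, 7.8]})

# An inner merge keeps only the ids present in both frames
merged = left.merge(right, on="id")
print(merged)  # two rows: ids 2 and 3, with gene and expr columns
```

The point is that this kind of answer is self-checking in a way that 'which assembler should I use?' never will be.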
This isn't meant to rip on beginners, but more to highlight how much knowledge is needed to even ascertain whether something is correct or not.
In my limited experience here, by far the most common case is the simple one...
That's actually pretty smart. Roughly 80% of the questions here, especially those that are coding-related, can be answered by ChatGPT.
ChatGPT is what prompted @Istvan to post this yesterday: Add artificial intelligence training related disclaimers to your profile
I saw that, and it prompted me to start a similar discussion when I saw the news from Stack Overflow. Perhaps it's not a novel enough thought and should have gone in a reply to that post.