
What is ChatGPT, and how does it work?

If you’ve been paying any attention to the news lately, there’s no doubt you’ve caught wind of some of the endless chatter surrounding ChatGPT. ChatGPT is a generative artificial intelligence (AI) chatbot developed by OpenAI and launched in November 2022. Generative AI refers to a subset of AI algorithms that produce new outputs based on the data their models were trained on. The “GPT” in ChatGPT stands for “generative pre-trained transformer,” a type of large language model that can understand and reply to prompts or specific questions in natural language. Importantly, ChatGPT is not a search engine; it predicts the most likely next portion of text in a sequence, one step at a time, to produce an output based on the input it is given.
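To make that next-step prediction idea concrete, here is a minimal, purely illustrative Python sketch of a single prediction step. The candidate words and their scores are made up for demonstration; a real model like GPT computes such scores with a transformer network trained on enormous text corpora, over a vocabulary of tens of thousands of tokens.

```python
import math

# Toy illustration of next-token prediction (not the actual GPT model).
prompt = "The patient was treated with a course of"

# Hypothetical raw scores (logits) a model might assign to candidate next words.
logits = {"antibiotics": 8.2, "steroids": 5.1, "radiation": 3.0, "rest": 1.4}

# A softmax turns raw scores into a probability distribution over the candidates.
total = sum(math.exp(score) for score in logits.values())
probabilities = {word: math.exp(score) / total for word, score in logits.items()}

# The model emits the most likely word (or samples from the distribution),
# appends it to the sequence, and repeats the process word by word.
next_word = max(probabilities, key=probabilities.get)
print(f"{prompt} {next_word}  (p = {probabilities[next_word]:.2f})")
```

Repeating this loop many times is what produces the fluent paragraphs ChatGPT returns; there is no lookup of sources, only sequential prediction.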

Despite mixed reactions regarding its usefulness, ChatGPT has been touted as a powerful tool with the potential to change the landscape of scientific writing. It can explore scientific literature, summarize vast amounts of complex information, and suggest new research hypotheses. It can also streamline time-intensive tasks critical to the scientific writing process, such as editing, formatting, language translation, and proofreading.

In spite of its proposed benefits, the use of ChatGPT in scientific writing comes with a fair number of limitations, liabilities, and ethical issues. The most concerning of these is the potential to disseminate inaccurate scientific information. Its use also raises the likelihood of ethical problems, including plagiarism, unclear authorship attribution, and copyright infringement. ChatGPT has been known to produce text that appears professionally researched but is in fact nonsensical. In addition, because ChatGPT’s output depends on the input it receives, it is critical to ask the right questions to obtain the most relevant results. While tools that help detect AI-generated text exist, none of them are 100% accurate, and most have notable limitations, such as character or word count minimums and difficulty with languages other than English.
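To illustrate how much the input shapes the output, the hedged sketch below contrasts a vague prompt with a more specific one using the OpenAI Python SDK. The prompts and the model name are assumptions chosen for demonstration, not recommendations, and running the code requires an API key.

```python
# Illustrative only: shows how prompt specificity changes what the model returns.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the
# environment; the model name below is an assumption and may need updating.
from openai import OpenAI

client = OpenAI()

vague_prompt = "Tell me about statins."
specific_prompt = (
    "Summarize, in three bullet points for a clinician audience, "
    "the most commonly reported adverse effects of statins in adults."
)

for prompt in (vague_prompt, specific_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute an available model
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"PROMPT: {prompt}\nREPLY: {response.choices[0].message.content}\n")
```

The second prompt constrains the audience, format, and scope, which typically yields a more relevant answer; either way, the output still requires expert review before it informs any scientific text.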

The scientific community largely agrees that the use of ChatGPT and other similar technologies should be proactively governed by guidelines and policies. Generative AI is in no way a replacement for a human expert when it comes to critical thinking, statistical analysis, evaluation and synthesis of highly technical content, and the provision of tailored recommendations. Because ChatGPT has such far-reaching implications for scientific writing, several leading journals and publisher groups have proactively taken a stance on its use. Some prohibit AI-generated text in submitted work and bar nonhuman technologies from being listed as authors. Others require explicit transparency about whether AI-based natural language processing was used in manuscript preparation. We can expect the dialogue to continue in this area as the use of this potentially powerful technology becomes increasingly prevalent, and the scientific and medical communities will continue to stay abreast of the latest developments with ChatGPT and other emerging technologies.

Note: As with all of our work, this blog post was written and edited by human experts, without the use of ChatGPT.
