Saturday, January 28, 2023

ChatGPT is fooling scientists by writing accurate research paper summaries

The ChatGPT artificial intelligence model is so good that even trained scientists can’t reliably tell its writing apart from a human’s (Picture: Getty)

The ChatGPT artificial intelligence (AI) has become good enough to fool trained scientists into thinking they are reading text written by a human.

A team of researchers used the AI to generate fake research paper abstracts to test whether other scientists could spot them.

Abstracts are neat summaries added to the top of research papers to give an overall picture of what’s being studied. ChatGPT was tasked with writing 50 medical research abstracts after being ‘trained’ on a selection from the likes of The British Medical Journal (BMJ) and Nature Medicine.
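For illustration only, here is a minimal sketch of how abstracts might be generated programmatically with a large language model API. The prompt wording, model name and use of the `openai` Python client are assumptions for the example, not details taken from the study.

```python
# Illustrative sketch only: the study's exact prompts and model settings are not public here.
# Assumes the `openai` Python client (pip install openai) and an API key set in OPENAI_API_KEY.
import openai


def generate_fake_abstract(title: str, journal: str) -> str:
    """Ask the model for an abstract in the style of a given journal (hypothetical prompt)."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumption: any chat-capable model could be used
        messages=[
            {"role": "system",
             "content": f"You write medical research abstracts in the style of {journal}."},
            {"role": "user",
             "content": f"Write a structured abstract for a paper titled: {title}"},
        ],
        temperature=0.7,
    )
    return response["choices"][0]["message"]["content"]


# Example use (hypothetical title):
# print(generate_fake_abstract("Effect of Drug X on Outcome Y", "Nature Medicine"))
```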

The chatbot, which has taken the internet by storm since being released to the public in November, didn’t disappoint.

Not only did the computer’s text pass through an anti-plagiarism detector, but the scientists couldn’t reliably spot the fakes. The human reviewers correctly identified only 68 per cent of ChatGPT’s abstracts and 86 per cent of the authentic ones.

The group of medical researchers believed that 32 per cent of the AI-generated abstracts were real.

‘I am very worried,’ said Sandra Wachter, who studies technology and regulation at the University of Oxford.

Professor Wachter was not involved in the research but told nature.com: ‘If we’re now in a situation where the experts are not able to determine what’s true or not, we lose the middleman that we desperately need to guide us through complicated topics.’


A team of scientists couldn’t always tell which abstracts were real and which were generated by AI (Picture: Getty)

The researchers who conducted the test, led by Catherine Gao at Northwestern University in Chicago, Illinois, said the ethical boundaries of this new tool have yet to be determined.

‘ChatGPT writes believable scientific abstracts, though with completely generated data,’ they explained in the pre-print write-up of their study.

‘These are original without any plagiarism detected but are often identifiable using an AI output detector and skeptical human reviewers.

‘Abstract evaluation for journals and medical conferences must adapt policy and practice to maintain rigorous scientific standards; we suggest inclusion of AI output detectors in the editorial process and clear disclosure if these technologies are used.

‘The boundaries of ethical and acceptable use of large language models to help scientific writing remain to be determined.’
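As a rough illustration of the ‘AI output detector’ idea raised in the quote above, the sketch below screens an abstract with an off-the-shelf text classifier via Hugging Face `transformers`. The model name and the way its output is read are assumptions for the example; it is not confirmed as the detector the study actually used.

```python
# Sketch of screening an abstract with an AI output detector.
# Assumes `transformers` is installed; the model name below is an assumption,
# not necessarily the tool the researchers relied on.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")


def screen_abstract(abstract: str) -> dict:
    """Return the detector's top label and score for one abstract."""
    # truncation=True keeps long abstracts within the model's input limit
    return detector(abstract, truncation=True)[0]


# Example use: inspect the label/score pair before deciding how to act on it,
# since label naming ("Real"/"Fake" vs generic "LABEL_0"/"LABEL_1") varies by model.
# print(screen_abstract("We conducted a randomised controlled trial of ..."))
```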


MORE : ChatGPT is the chatbot phenomenon taking the internet by storm right now


MORE : A flood of unofficial ChatGPT apps are charging people to use the free AI tool
