

Artificial Intelligence Sneaks Into Scientific Publications

Can you trust that what you read in a scientific paper was authored by a human?

Roughly three and a half million scientific papers are published globally every year in an estimated 47,000 academic journals. That's an astonishing six papers every minute! Some are very good, some very bad, most are mediocre. The vast majority of science and medical journals are peer-reviewed, but peer review is not a guarantee that the results reported and conclusions arrived at are reliable. Studies have shown that the same paper sent to different reviewers may be highly praised or rejected, depending on the reviewer's expertise or bias. There are also "predatory journals" that claim to be peer-reviewed but will in fact publish any paper for a fee.

A reviewer of a paper submitted to a reputable journal has to assume that the results under review were properly arrived at and are free of experimental errors and computational mistakes. These can creep in and may not be detected until someone tries and fails to duplicate the results or takes the time to check a calculation carefully. Then there is the issue of authors selectively reporting data to favour a preconceived conclusion, or most disturbingly, there may be out-and-out fraud, as was the case with that dreadful 1998 paper by Andrew Wakefield in The Lancet that falsely linked autism to vaccines. It was eventually retracted, as are thousands of other papers every year.

Obviously, the peer-review process is not perfect, but it gives us the best shot at coming to an answer to a scientific question, especially when results from many papers point in the same direction. But it takes experience to separate the wheat from the chaff, and there is a lot of chaff due to the "publish or perish" environment that exists in academia. Too often performance is judged by the quantity, not the quality, of publications. It is not rare for a researcher to milk experimental data with a view towards grinding out more articles. And that now brings us to an issue that has emerged as a result of the proliferation of artificial intelligence (AI) programs.

Most journals have a "Letters to the Editor" section where correspondents may critique previously published papers or report a result that may be of immediate interest. Crick and Watson's famous determination of the structure of DNA was described in 1953 in such a Letter to the Editor in the journal "Nature," and Warren and Marshall similarly reported their finding that ulcers can be caused by Helicobacter pylori bacteria in The Lancet in 1983. The first warning about thalidomide causing severe birth defects appeared as a Letter to the Editor in The Lancet by obstetrician William McBride in 1961. These letters were clearly groundbreaking and deserve to be counted among the scientists' publications. However, most letters do not have a significant impact but are still included by their authors in their lists of publications to boost their academic reputations.

Dr. Carlos Chaccour, an internist at the University of Navarra in Spain who specializes in the treatment of malaria, uncovered a scheme whereby some scientists use artificial intelligence to compose letters to the editors of different journals on a host of subjects so that they can pad their resumes. Dr. Chaccour's uncovering of this ruse began when he was forwarded a letter that had been submitted to the New England Journal of Medicine (NEJM) criticizing an article he had published in the journal about the treatment of malaria with ivermectin. Forwarding such letters is standard procedure, giving the author a chance to respond.

The criticism was that Dr. Chaccour had neglected to refer to a previously published seminal paper in the field showing that mosquitoes become resistant to ivermectin. He was amused by this because the paper he had allegedly neglected was his own, and it did not show that mosquitoes become resistant. The letter included another reference, to a paper that supposedly showed that controlling malaria with ivermectin did not work. That too was a paper by Dr. Chaccour, and it said no such thing. Thus began a trip down a deep rabbit hole.

The letter had been written by a physician who had previously published no letters to the editor but who, in 2025, just as chatbots came on the scene, unleashed a flurry of 84 letters on 58 topics. These, Dr. Chaccour thought, had to be AI generated! Digging deeper, he discovered a sudden emergence of prolific letter writers, including one who published 234 letters in 2024 after none the previous year. He also found letters signed by the same Chinese authors appearing in cardiology, emergency medicine, endocrinology, gastroenterology, hepatology and immunology journals. These correspondents must be very, very talented doctors indeed, having expertise in all these fields!

One has to wonder if the chatbot letters are just the tip of the iceberg. Could totally fictional research papers, written by artificial intelligence programs, be submitted to journals? That is an ominous prospect. On the other hand, programs like ChatGPT or Perplexity can deliver useful information in a fraction of a second. For example, when I asked about letters to the editor that had a great impact, both came up with the ones I mentioned about thalidomide, Helicobacter and DNA. Artificial intelligence is clearly a double-edged sword.

Incredibly, the physician who wrote the original letter to the NEJM "to raise robust objections" about Dr. Chaccour's paper has the initials B.S. How fitting!
