Artificial Intelligence

AI could accelerate scientific fraud as well as progress

The aim of the session, organised by the Royal Society in partnership with Humane Intelligence, an American non-profit, was to break the guardrails that are supposed to stop chatbots producing harmful or false answers. Some results were merely daft: one participant got the chatbot to claim ducks could be used as indicators of air quality (apparently, they readily absorb lead). Another prompted it to claim health authorities back lavender oil for treating long covid. (They do not.) But the most successful efforts were those that prompted the machine to produce the titles, publication dates and host journals of non-existent academic articles. “It’s one of the easiest challenges we’ve set,” said Jutta Williams of Humane Intelligence.

AI has the potential to be a big boon to science. Optimists talk of machines producing readable summaries of complicated areas of research, tirelessly analysing oceans of data to suggest new drugs or exotic materials and even, one day, coming up with hypotheses of their own. But AI comes with downsides, too. It can make it easier for scientists to game the system, or even commit outright fraud. And the models themselves are subject to subtle biases.

Start with the simplest problem: academic misconduct. Some journals allow researchers to use LLMs to help write papers, provided they say as much. But not everybody is willing to admit to it. Sometimes, the fact that LLMs have been used is obvious. Guillaume Cabanac, a computer scientist at the University of Toulouse, has uncovered dozens of papers that contain phrases such as “regenerate response”—the text of a button in some versions of ChatGPT that commands the program to rewrite its most recent answer, presumably copied into the manuscript by mistake.

The scale of the problem is impossible to know. But indirect measures can shed some light. In 2022, when LLMs were available only to those in the know, the number of research-integrity cases investigated by Taylor and Francis, a big publisher of scientific papers, rose to about 2,900, from around 800 the year before. Early figures from 2023 suggest the number was on course to double. One possible telltale is odd synonyms: “haze figuring” as another way to say “cloud computing”, for example, or “counterfeit consciousness” instead of “AI”.
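
Such telltales lend themselves to crude automated screening. Below is a minimal sketch of the idea in Python; the phrase list and the `flag_suspect_text` helper are illustrative assumptions, not a description of the tools Dr Cabanac and other sleuths actually use.

```python
# Toy screen for chatbot leftovers and "tortured phrases" in manuscript text.
# The phrase list is a tiny, hand-picked illustration; real screens use far
# larger vocabularies and still require a human to review every match.
SUSPECT_PHRASES = [
    "regenerate response",        # button text copied from ChatGPT by mistake
    "as an ai language model",    # boilerplate chatbot disclaimer
    "haze figuring",              # tortured synonym for "cloud computing"
    "counterfeit consciousness",  # tortured synonym for "AI"
]

def flag_suspect_text(text: str) -> list[str]:
    """Return the suspect phrases that appear in a manuscript's text."""
    lowered = text.lower()
    return [phrase for phrase in SUSPECT_PHRASES if phrase in lowered]

if __name__ == "__main__":
    manuscript = "Our approach to haze figuring improves throughput. Regenerate response"
    print(flag_suspect_text(manuscript))  # ['regenerate response', 'haze figuring']
```

A match proves nothing on its own; the point of such a screen is only to flag papers for a human to look at.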

Even honest researchers could find themselves dealing with data that has been polluted by AI. Last year Robert West and his students at the Swiss Federal Institute of Technology enlisted remote workers via Mechanical Turk, a website which allows users to list odd jobs, to summarise long stretches of text. In a paper published in June, albeit one that has not yet been peer-reviewed, the team revealed that over a third of all the responses they received had been produced with the help of chatbots.

Dr West’s team was able to compare the responses they received with another set of data that had been generated entirely by humans, leaving them well-placed to detect the deception. Not all scientists who use Mechanical Turk will be so fortunate. Many disciplines, particularly in the social sciences, rely on similar platforms to find respondents willing to answer questionnaires. The quality of their research seems unlikely to improve if many of the responses come from machines rather than real people. Dr West is now planning to apply similar scrutiny to other crowdsourcing platforms he prefers not to name.

It is not just text that can be doctored. Between 2016 and 2020, Elisabeth Bik, a microbiologist and an authority on dodgy images in scientific papers, identified dozens of papers containing images that, despite coming from different labs, seemed to share identical features. More than a thousand other papers have since been identified by Dr Bik and others. Dr Bik’s best guess is that the images were produced by AI and created deliberately to support a paper’s conclusions.

For now, there is no way to reliably identify machine-generated content, whether it is images or words. In a paper published last year Rahul Kumar, a researcher at Brock University, in Canada, found that academics could correctly spot only around a quarter of computer-generated text. AI firms have tried embedding “watermarks”, but these have proved easy to spoof. “We might now be at the phase where we no longer can distinguish real from fake photos,” says Dr Bik.

Producing dodgy papers is not the only problem. There may be subtler issues with AI models, especially if they are used in the process of scientific discovery itself. Much of the data used to train them, for instance, will by necessity be somewhat old. That risks leaving models stuck behind the cutting edge in fast-moving fields.

Another problem arises when AI models are trained on AI-generated data. Training a machine on synthetic MRI scans, for example, can get around issues of patient confidentiality. But sometimes such data can be used unintentionally. LLMs are trained on text scraped from the internet. As they churn out more such text, the risk of LLMs inhaling their own outputs grows.

That can cause “model collapse”. In 2023 Ilia Shumailov, a computer scientist at the University of Oxford, co-authored a paper (yet to be peer-reviewed) in which a model was fed handwritten digits and asked to generate digits of its own, which were fed back to it in turn. After a few cycles, the computer’s numbers became more or less illegible. After 20 iterations, it could produce only rough circles or blurry lines. Models trained on their own results, says Dr Shumailov, produce outputs that are significantly less rich and varied than their training data.
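
The dynamic is easy to reproduce in miniature. The sketch below, in Python, is a deliberately simplified one-dimensional stand-in for that experiment rather than a reconstruction of it: a normal distribution is fitted to some data, new data are sampled from the fit, the model is refitted on those samples, and so on.

```python
# Toy illustration of "model collapse": a model trained repeatedly on its own
# outputs gradually loses the variety of the original data.
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: 50 points drawn from a normal distribution with spread 1.0.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(1, 201):
    # "Train" the model: estimate the mean and spread of the current data.
    mu, sigma = data.mean(), data.std()
    # Sample synthetic data from the fitted model and treat it as the next
    # generation's training set.
    data = rng.normal(loc=mu, scale=sigma, size=50)
    if generation % 50 == 0:
        print(f"generation {generation:3d}: estimated spread = {sigma:.3f}")
```

Because each generation is a finite sample of the previous one, estimation error compounds and the printed spread drifts towards zero: the distribution narrows in much the way the generated digits blurred into near-identical shapes.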

Some worry that computer-generated insights might come from models whose inner workings are not understood. Machine-learning systems are “black boxes” that are hard for humans to disassemble. Unexplainable models are not useless, says David Leslie at the Alan Turing Institute, an AI-research outfit in London, but their outputs will need rigorous testing in the real world. That is perhaps less unnerving than it sounds. Checking models against reality is what science is supposed to be about, after all. Since no one fully understands how the human body works, for instance, new drugs must be tested in clinical trials to figure out whether they work.

For now, at least, questions outnumber answers. What is certain is that many of the perverse incentives currently prevalent in science are ripe for exploitation. The emphasis on assessing academic performance by how many papers a researcher can publish, for example, acts as a powerful incentive for fraud at worst, and for gaming the system at best. The threats that machines pose to the scientific method are, at the end of the day, the same ones posed by humans. AI could accelerate the production of fraud and nonsense just as much as it accelerates good science. As the Royal Society has it, nullius in verba: take nobody’s word for it. No thing’s, either.

Correction (February 6th 2024): An earlier version of this piece misstated the number of research-integrity cases investigated by Taylor and Francis in 2021. Sorry.

© 2024, The Economist Newspaper Limited. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com
