As if the fight against fake information wasn’t enough to worry about, there are increasingly worried calls from scientists for better ways to deal with a veritable tidal wave of legitimate coronavirus research. The phenomenon, termed an “infodemic” by the World Health Organization, has made it difficult for researchers to fully digest rapidly evolving discoveries, rendering some ongoing research obsolete even before it’s through peer review.
The crush of research over the past months stems largely from researchers' urgency to publish results that might help clinicians, but the difficulty of collating and accessing a growing body of scientific literature is nothing new. Now there are calls for new techniques, from centralized databases to AI/ML technologies, to help scientists keep abreast of new research and incorporate its findings into ongoing work.
In an opinion article in the journal Patterns, Carnegie Mellon University’s Ganesh Mani, an investor, technology entrepreneur, and adjunct faculty member in the school’s Institute for Software Research, and Tom Hope, a post-doctoral researcher at the Allen Institute for AI, issued just such a call.
“Given the ever-increasing research volume, it will be hard for humans alone to keep pace,” they write in the article.
They point to the coronavirus research deluge in particular; the scientific response during the pandemic is an exemplar of the growing problem. By mid-August, more than 8,000 preprints of scientific papers related to the novel coronavirus had been posted in online medical, biology, and chemistry archives. Scores more papers dealt with related research, such as quarantine-induced depression. In the field of virology, the average time to peer review and publish a new article dropped from 117 to 60 days.
It now seems increasingly attractive, and perhaps necessary, to combine human expertise with AI to keep up with the explosion of research. The overabundance of information makes it impossible not only to digest everything but also to discriminate between helpful and suspect information and results. AI could help evaluate the research and sort it appropriately.
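As a rough illustration of the kind of machine-assisted sorting the authors have in mind (the abstracts and query below are invented, and this is only one simple approach, not the method proposed in the article), a ranking system might score incoming preprints against a clinician's query using TF-IDF weighting and cosine similarity:

```python
# Hypothetical sketch: rank preprint abstracts by relevance to a query
# using TF-IDF vectors and cosine similarity (standard library only).
import math
from collections import Counter

def tokenize(text):
    return [w.strip(".,").lower() for w in text.split()]

def tf_idf_vectors(docs):
    tokenized = [tokenize(d) for d in docs]
    n = len(tokenized)
    # Document frequency: how many docs contain each word.
    df = Counter(w for doc in tokenized for w in set(doc))
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        # Term frequency scaled by (smoothed) inverse document frequency.
        vectors.append({w: (c / len(doc)) * math.log((1 + n) / (1 + df[w]))
                        for w, c in tf.items()})
    return vectors

def cosine(u, v):
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Invented example abstracts and query for demonstration.
abstracts = [
    "randomized trial of antiviral treatment in hospitalized patients",
    "survey of quarantine induced depression among students",
    "antiviral compound screening against the novel coronavirus",
]
query = "antiviral treatment for coronavirus patients"

vecs = tf_idf_vectors(abstracts + [query])
scores = [cosine(vecs[-1], v) for v in vecs[:-1]]
ranked = sorted(range(len(abstracts)), key=lambda i: -scores[i])
```

Real systems would use richer representations (citation graphs, learned embeddings, metadata on retractions and reviewer quality), but even this toy version pushes the off-topic abstract to the bottom of the ranking.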
“We’re going to have that same conversation with vaccines,” Mani predicted. “We’re going to have a lot of debates.”
Of course, technology alone can’t present a full solution. Mani and Hope also propose new policies, such as highlighting negative results in addition to positive findings, which can be important for clinicians and discourages other scientists from going down the same blind alleys, potentially limiting redundant research. Other ideas presented in the article include identifying top-quality reviewers and linking papers to related papers, retraction sites, or legal rulings.
AI could be the linchpin, but it may also necessitate a crucial new step in the paper-writing process. AI still has trouble with human language, and the authors suggest it may be necessary for researchers to write two versions of research papers, one for people and one for machines.
“Putting such infrastructure in place will help society with the next strategic surprise or grand challenge, which is likely to be equally, if not more, knowledge intensive,” they concluded.