Risks of artificial intelligence in scientific research

  We have talked more than once about the dangers of artificial intelligence, dangers that Geoffrey Hinton, one of the godfathers of this technology, warned about when he left Google. However, many believe we are still not aware of what is coming; they think artificial intelligence will end up controlling our knowledge and all of science. With that in mind, do you have any idea of the risks of using AI in scientific research?


  In this article, we review some of the concerns raised by distinguished researchers from around the world after images of mice with strangely large genitals spread in recent weeks. What was the problem? The images were fake, generated with artificial intelligence.

  One of the controversial images of mice with strangely large reproductive organs



  This incident only deepened concerns about the use of artificial intelligence to fabricate scientific research, which could end up causing genuine studies and reports to be rejected. Although some risks are inherent to any new technology, others pose potential cognitive harms.


  Two researchers' approach to science and artificial intelligence


  A recent publication in the journal Nature addressed this issue through an exchange between two researchers who have watched the progress of artificial intelligence in the scientific field with awe.

Molly Crockett: a psychologist at Princeton University who studies how humans learn and how we make decisions in highly conflicting social situations.

Lisa Messeri: an anthropologist at Yale University. She focuses on science and technology, examining how the scientific community evolves as new areas of knowledge emerge.


  Crockett and Messeri's publication was originally conceived as a response to a paper published five years earlier in the Proceedings of the National Academy of Sciences, which claimed that "researchers can use machine learning to predict the reproducibility of studies based solely on an analysis of their texts." Five years later, the two question that claim.


  The two scientists decided to conduct an in-depth analysis of how artificial intelligence constrains scientists rather than liberating them. How will AI and other technologies affect the academic process?

  Risks of artificial intelligence in science


  For their analysis, they defined four categories for the roles AI can play in science.


 As an "oracle": helping researchers search, read, and summarize an almost endless scientific literature

 As a "surrogate": capable of standing in for humans, linking data and providing information

 As a "quant": leveraging big-data tools to go beyond the limits of human knowledge

 As an "arbiter" or referee: able to evaluate the merit of research and the validity of theoretical frameworks


  Each category has its advantages and disadvantages, and it is naturally the weaknesses that concern them. They argue that AI can produce "illusions of understanding": scientists who rely on it end up believing they understand a topic better than they actually do. If that support turns into dependence, things get even worse, as our cognitive range could shrink dramatically within a few generations.


  Within these illusions of understanding, they distinguish sub-types such as the interpretive and the exploratory illusions: scientists who are unable to explain their own conclusions, or who explore fewer hypotheses than they realize, are two situations Crockett and Messeri highlight. But what frightens them most is neither of these; it is the illusion of objectivity: the exaggerated confidence that AI is more objective than we are, when it is not.

  Will we produce more science with less understanding?


  Crockett and Messeri suspect that if scientists embrace AI without asking harder questions about it, we will end up producing more science, or at least more scientific content, while understanding less about the universe. More scientific articles are being published than ever before, yet only some deserve attention, and those that do risk being accused of being fake, like the mouse images.


  Although they do not oppose its use, specialists warn of the dangers of artificial intelligence in scientific research.
