
Experts Doubt AI’s Readiness to Serve as a ‘Co-Scientist’

by admin

Last month, Google unveiled its “AI co-scientist,” a tool aimed at assisting researchers in formulating hypotheses and structuring research plans. Google promoted it as a means to discover new knowledge; however, experts argue that it — along with similar tools — falls short of its impressive marketing claims.

“This initial tool, while intriguing, doesn’t appear to hold much promise for practical use,” Sarah Beery, a computer vision expert at MIT, shared with TechCrunch. “I’m uncertain whether there’s a genuine need for such a hypothesis-generating tool within the scientific community.”

Google joins a host of tech leaders advocating for the potential of AI to revolutionize scientific research, especially in data-intensive domains like biomedicine. Earlier this year, OpenAI’s CEO, Sam Altman, suggested that “superintelligent” AI tools could significantly accelerate scientific innovation and discovery. Likewise, Anthropic CEO Dario Amodei has asserted that AI could play a role in developing cures for various cancers.

Nevertheless, numerous researchers are skeptical, arguing that today’s AI is not useful enough to meaningfully guide scientific inquiry. They contend that tools like Google’s AI co-scientist appear to be more hype than genuine progress, pointing to the lack of empirical evidence backing the company’s claims.

In a blog post detailing the AI co-scientist, Google claimed the tool had exhibited promise in areas like drug repurposing for acute myeloid leukemia, a blood cancer impacting bone marrow. However, the findings were so ambiguous that “no credible scientist would regard them seriously,” remarked Favia Dubyk, a pathologist associated with Northwest Medical Center-Tucson in Arizona.

“This could serve as a decent starting point for researchers; however, the lack of specificity raises concerns and erodes my trust,” Dubyk explained to TechCrunch. “The absence of clear information makes it difficult to ascertain if this can genuinely be beneficial.”

This isn’t the first instance of Google facing backlash from the scientific community for touting an AI advancement without substantiating the results.

In 2020, Google claimed that its AI system designed to detect breast tumors outperformed human radiologists. In response, researchers from Harvard and Stanford published a critique in the journal Nature, arguing that the lack of detailed methodology and code in Google’s research “diminished its scientific worth.”

Scientists have further criticized Google for downplaying the limitations of its AI tools intended for scientific applications like materials engineering. In 2023, the company stated that around 40 “new materials” had been synthesized with its AI system, GNoME. Yet, an independent analysis revealed that none of these materials were genuinely innovative.

“We won’t fully grasp the strengths and weaknesses of tools like Google’s ‘co-scientist’ until they undergo thorough, independent assessments across various scientific fields,” stated Ashique KhudaBukhsh, an assistant professor of software engineering at the Rochester Institute of Technology. “AI often excels in controlled settings but may struggle when applied to real-world scenarios.”

Complex Processes

One challenge in creating AI tools for scientific discovery lies in anticipating the myriad confounding variables involved. While AI could be useful in areas that demand broad exploration, such as sifting through vast numbers of candidate options, its capacity for the kind of innovative problem-solving that leads to scientific breakthroughs remains uncertain.

“History shows that pivotal scientific advancements, such as the development of mRNA vaccines, have been propelled by human intuition and resilience, even amidst skepticism,” KhudaBukhsh commented. “AI, in its current form, may not be well-equipped to replicate that journey.”

Lana Sinapayen, an AI researcher at Sony Computer Science Laboratories in Japan, posits that tools like Google’s AI co-scientist misdirect their focus regarding valuable scientific efforts.

Sinapayen recognizes genuine potential in AI for automating complex or monotonous tasks, such as summarizing new research or formatting work to meet grant-application requirements. However, she contends that there is minimal appetite within the research community for an AI co-scientist that generates hypotheses — an endeavor from which many scientists derive intellectual satisfaction.

“For a lot of scientists, myself included, generating hypotheses is the most enjoyable aspect of our work,” Sinapayen remarked to TechCrunch. “Why would I relinquish that joy to a computer, only to be left with the tedious tasks? Generally, many innovations in generative AI misunderstand human motivations, leading to proposals for tools that automate the very aspects we find rewarding.”

Beery emphasized that a major hurdle in the scientific method is the design and execution of studies and analyses to validate or refute a hypothesis — a capability that may be out of reach for current AI systems. After all, AI lacks the ability to employ physical tools for experiments, and often struggles with problems characterized by extremely limited data availability.

“Most scientific inquiries cannot be conducted entirely in a virtual environment; there is typically a substantial physical component, like gathering data and performing lab experiments,” Beery noted. “A significant limitation of systems like Google’s AI co-scientist, which impacts their usability, is the absence of contextual information regarding the lab and the researcher, their specific research aims, previous work, skill sets, and available resources.”

AI Risks

Concerns over AI’s technical limitations and risks — including its tendency to generate inaccurate information — also contribute to scientists’ hesitation in endorsing it for serious applications.

KhudaBukhsh fears that AI tools might inadvertently introduce noise into scientific literature rather than facilitating progress.

This has become a pressing issue. A recent study revealed that AI-generated “junk science” is inundating Google Scholar, the tech giant’s free academic literature search engine.

“If AI-generated research is not closely monitored, it could flood the scientific community with subpar or misleading studies, overwhelming the peer-review process,” KhudaBukhsh cautioned. “The strain on peer review is already a notable challenge in fields like computer science, where leading conferences have seen an exponential rise in submissions.”

Even well-conceived studies could be compromised by erratic AI, Sinapayen cautioned. While she appreciates the concept of a tool to assist with literature reviews and synthesis, she remains skeptical about AI’s reliability to perform those tasks competently.

“Various current tools claim efficacy in those areas, but they are not tasks I would entrust to today’s AI,” Sinapayen explained, also expressing concern about the training of many AI systems and their energy consumption. “Even if all ethical concerns were addressed, the unreliability of present AI means I’m hesitant to base my work on their results.”

Compiled by Techarena.au.
