Pursuit of AI That Mirrors Human Intelligence May Be Headed in the Wrong Direction, Industry Experts Say

AI researchers voice uncertainty about current strategies toward artificial general intelligence
Artificial Intelligence (AI) researchers have expressed concerns about the current pursuit of Artificial General Intelligence (AGI) and the impact of hype on its development. In a recent panel of 24 AI experts, including leading researchers from Google DeepMind, OpenAI, and other organizations, the group discussed the potential risks and challenges associated with AGI.

The report indicates a cautious yet forward-moving approach by the researchers, prioritizing safety, ethical governance, benefit-sharing, and gradual innovation. One of the main concerns revolves around the opacity and safety of increasingly capable AI. Leading researchers warn that as AI systems become more advanced, their decision-making processes may become opaque or incomprehensible to humans, posing risks to humanity. They emphasize the importance of monitoring AI's chain-of-thought reasoning to maintain alignment with human interests, but current methods have significant limitations.
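To make the monitoring idea concrete, here is a minimal illustrative sketch in Python (an assumption of ours, not a method from the report): a hypothetical monitor_chain_of_thought function scans each step of a model's reasoning trace for red-flag patterns before the answer is released. The limitation the researchers note applies here too: a simple pattern scan can miss reasoning that is opaque or merely paraphrased.

```python
import re

# Illustrative red-flag patterns a monitor might scan for in a model's
# intermediate reasoning; real monitors use far richer signals than regexes.
RED_FLAGS = [
    r"hide .* from the user",
    r"ignore .* safety",
    r"mislead",
]

def monitor_chain_of_thought(steps):
    """Return (index, text) for any reasoning step matching a red flag."""
    flagged = []
    for i, step in enumerate(steps):
        if any(re.search(pattern, step, re.IGNORECASE) for pattern in RED_FLAGS):
            flagged.append((i, step))
    return flagged

trace = [
    "The user asked for the capital of Australia.",
    "I recall that the capital is Canberra, not Sydney.",
]
print(monitor_chain_of_thought(trace))  # [] -> nothing flagged in this trace
```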

Another concern is the distortion of AI research priorities by hype. Some computer scientists criticize the focus on AGI as a vague, ill-defined ultimate goal that creates an illusion of consensus and lets hype dictate research priorities. This fixation can lead to "generality debt", the postponement of critical foundational questions, and to the exclusion of marginalized communities and under-resourced researchers. These critics advocate more specific, measurable, and transparent goals instead of chasing AGI per se.

There is also wide concern about the existential risks if superintelligent AGI emerges without robust safeguards. AI arms races may spur companies to cut corners on safety to outcompete rivals. The "control problem"—how to ensure an AGI acts in humanity's interest after recursively improving itself—is seen as a major, unsolved challenge. However, some skeptics argue that AGI fears distract from more immediate AI issues and that public hype often inflates misunderstanding about AI capabilities.

Some experts view the coming of AGI as inevitable and caution against dwelling on worst-case scenarios, suggesting a need for an optimistic mindset to steer development constructively. Others are alarmed by the "non-zero" chance of catastrophic outcomes, reflecting the tension in the community about how to best manage AI progress and societal impacts.

Henry Kautz, a computer scientist at the University of Virginia, suggests the next stage in improving trustworthiness will be the replacement of individual AI agents with cooperating teams that continually fact-check each other. Kautz also notes that the public and scientific community underestimate the quality of the best AI systems today, with the perception of AI lagging about a year or two behind the technology.
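As a rough sketch of the cooperating-team setup Kautz describes (our illustration, assuming a hypothetical ask_model wrapper standing in for any chat-model API): one agent drafts an answer, a second agent fact-checks it, and the draft is revised until the checker approves or a round limit is reached.

```python
# Sketch of a two-agent "draft, fact-check, revise" loop. ask_model is a
# hypothetical wrapper standing in for whatever chat-model API is available.

def ask_model(role_prompt: str, message: str) -> str:
    raise NotImplementedError("wire this up to a chat-model API of your choice")

def answer_with_fact_check(question: str, max_rounds: int = 3) -> str:
    draft = ask_model("You answer questions concisely.", question)
    for _ in range(max_rounds):
        critique = ask_model(
            "You are a fact-checker. Reply APPROVED if the answer is "
            "factually sound; otherwise list the errors.",
            f"Question: {question}\nAnswer: {draft}",
        )
        if critique.strip().startswith("APPROVED"):
            return draft
        draft = ask_model(
            "Revise the answer to fix the listed errors.",
            f"Question: {question}\nAnswer: {draft}\nErrors: {critique}",
        )
    return draft  # best effort after max_rounds of cross-checking
```

The design mirrors the team idea: no single agent's output is trusted until a peer has checked it.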

The panel in question, the Association for the Advancement of Artificial Intelligence (AAAI) Presidential Panel on the Future of AI Research, was held in 2025 and produced a report that includes a main takeaway for each section along with a community opinion section. According to the report, Gartner estimated in November 2024 that hype for generative AI had passed its peak and was on the downswing.

AI has made significant leaps in the past few years, with chatbots like ChatGPT capturing public attention. AI factuality, however, is still a work in progress: in a 2024 benchmark test, even the best models answered only about half of a set of questions correctly. The report serves as a reminder that AI researchers are thinking critically about the state of their field; from the way AI systems are built to the ways they are deployed in the world, there is room for innovation and improvement.
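As an illustration of how such factuality benchmarks are typically scored (a simplified sketch; the actual 2024 benchmark's harness and answer-matching rules are not specified here): each model answer is compared against a reference, and accuracy is the fraction answered correctly.

```python
# Simplified scoring loop for a question-answering factuality benchmark;
# real benchmarks use more careful answer matching than exact comparison.

def score(benchmark, model_answer):
    """Return the fraction of benchmark questions answered correctly."""
    correct = sum(
        1 for item in benchmark
        if model_answer(item["question"]).strip().lower()
        == item["reference"].strip().lower()
    )
    return correct / len(benchmark)

demo = [
    {"question": "What is the capital of Australia?", "reference": "Canberra"},
    {"question": "Who wrote 'Dune'?", "reference": "Frank Herbert"},
]
# A model answering about half the questions correctly would score ~0.5.
print(score(demo, lambda q: "Canberra"))  # 0.5 on this toy set
```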

In the section on "AI Perception vs. Reality", Rodney Brooks, the section's chair, argues that large portions of public discourse about AI are too accepting of the hype. In the community opinion section, 79% of respondents stated that current public perceptions of AI's capabilities do not match the reality of AI research and development; 90% said that this mismatch is hindering AI research; and 74% of those who saw it as hindering said that "the directions of AI research are driven by the hype."
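Since the 74% figure applies only to the 90% of respondents who said the mismatch hinders research, the implied share of all respondents who see hype driving research directions works out to roughly two thirds, as this quick calculation shows:

```python
# The 74% figure is conditional on the 90% who said the mismatch hinders
# research, so the share of all respondents is the product of the two.
p_hindering = 0.90             # said the mismatch hinders AI research
p_hype_given_hindering = 0.74  # of those, said hype drives research directions

p_hype_overall = p_hindering * p_hype_given_hindering
print(f"{p_hype_overall:.0%} of all respondents")  # -> 67% of all respondents
```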

New training methods can improve the robustness of AI models, and new ways of organizing AI systems can further enhance their performance. AI is not going away, but different AI use cases carry different levels of hype. The researchers call for improved transparency, diversified and grounded research goals, and global cooperation to mitigate potential harms.


  1. The report highlights the discrepancy between public perceptions and the reality of AI research: 79% of respondents said the two do not match, and 90% said the mismatch is hampering AI research, indicating that hype is influencing research priorities.
  2. The future of AI is a topic of concern among researchers, as they advocate for a shift from the vague pursuit of Artificial General Intelligence (AGI) to more specific, measurable, and transparent research goals.
  3. In the realm of AI, technology giants like Google DeepMind and OpenAI are recognized for their contributions, yet researchers warn of potential risks associated with AGI, such as its decision-making processes becoming opaque and incomprehensible to humans. This underscores the importance of monitoring AI's chain-of-thought reasoning to maintain alignment with human interests.
