Yann LeCun, Meta’s chief AI scientist, has been honored with a TIME100 Impact Award for his contributions to the field of artificial intelligence. In an interview with TIME, LeCun discussed the barriers to achieving artificial general intelligence (AGI), the merits of Meta’s open-source approach, and his dismissal of claims that AI poses an existential risk to humanity.
LeCun believes that training large language models (LLMs) on more computing power and data alone will not lead to AGI. While he acknowledges the impressive capabilities of LLMs when trained at scale, he notes that they have limitations. These systems often hallucinate and lack a true understanding of the real world. They require vast amounts of data and are unable to reason or plan beyond what they have been trained on. LeCun asserts that LLMs are useful but not a path towards human-level intelligence or AGI.
LeCun expresses his dislike for the term AGI and highlights the misunderstanding surrounding it. He clarifies that the mission of Meta’s Fundamental AI Research team is to achieve human-level intelligence. LeCun argues that human intelligence is not actually general: truly intelligent beings possess characteristics such as understanding the physical world and planning sequences of actions, abilities that current AI systems lack. While Meta may make building AGI a central goal, LeCun emphasizes that the term itself is not an accurate representation of the team’s mission.
In summary, LeCun believes that training LLMs alone will not lead to AGI, since current systems lack a grounded understanding of the real world and the ability to reason and plan. He dislikes the term AGI itself, arguing that even human intelligence is not truly general. Whatever Meta’s stated goals, his focus remains on achieving human-level intelligence rather than a broadly defined AGI.