CES: AI Pioneer Yann LeCun on AI Agents, Human Intelligence
January 8, 2025
During CES 2025 in Las Vegas this week, Meta Vice President and Chief AI Scientist Yann LeCun had a compelling conversation with Wing Venture Capital Head of Research Rajeev Chand on the latest hot-button topics in the rapidly evolving field of artificial intelligence. Among the conclusions: AI agents will become ubiquitous, but not for 10 to 15 years; human intelligence means different things to different AI experts; and nuclear power remains the best and safest source for powering AI. And, for those looking for more of LeCun’s tweets, he said he no longer posts on X.
Chand began the conversation by asking for LeCun’s response to the statement by OpenAI’s Sam Altman that “we know how to do AGI,” referring to Artificial General Intelligence, the ability to create machines that match or surpass human intelligence. LeCun responded that he doesn’t like the term as a way to talk about human intelligence.
“Human intelligence is very specialized — we don’t have general intelligence,” he said. “If we talk about human intelligence, what do we mean by it?” It’s often measured by benchmarks, but he noted, “for every task we can benchmark, we can build a specialized system to beat the benchmark.”
This problem has existed for many years, he continued, noting that whenever a machine beats a human at chess, the conclusion is that we are ever closer to machines that can do everything. “It’s a terrible definition,” he concluded. “We’ll have systems that can do a lot of things, but it doesn’t mean they have human intelligence with the capacity to understand the physical world, solve problems and have some common sense.”
He noted that Altman did not specify what type of architecture he was referring to. “In the past, Sam said we’re thousands of days away from AGI,” LeCun explained. “I agree. The question is, how many thousands? If the plan we’re working on at Meta succeeds and we don’t hit any obstacle, I don’t see this happening for five to six years.”
LeCun pointed out that, “there are tasks that are purely intellectual” and, with sufficient training, data and architecture, these tasks might be automated “in a relatively short period of time.” But we’re not going to have an automated plumber any time soon, because “it’s incredibly complicated and requires a deep understanding of the physical world.”
He reported that “the basic paradigm of a lot of AI systems today is self-supervised learning.” “You only train the system to predict missing information from the input, like filling in a missing word,” he said. “But you just cannot predict all the details in the real world, so these systems have been a failure.”
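The fill-in-the-missing-word idea LeCun describes can be illustrated with a toy sketch: the training signal comes from the raw text itself, with no human-provided labels. The corpus, the context scheme, and the function names below are all invented for illustration; real systems learn neural representations rather than count tables.

```python
# Toy illustration of self-supervised learning as "fill in the missing word":
# build a (previous word, next word) -> middle word frequency table from raw
# text, then predict a masked word from its neighbors. The supervision comes
# entirely from the data itself.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# For each interior word, record it under its (previous, next) word context.
context_counts = defaultdict(Counter)
for i in range(1, len(corpus) - 1):
    context = (corpus[i - 1], corpus[i + 1])
    context_counts[context][corpus[i]] += 1

def predict_masked(prev_word, next_word):
    """Predict the most likely word between prev_word and next_word."""
    counts = context_counts.get((prev_word, next_word))
    return counts.most_common(1)[0][0] if counts else None

print(predict_masked("the", "sat"))  # predicts "cat" from this tiny corpus
```

The limitation LeCun points to is visible even here: text has few enough possibilities that a blank can be filled in, but video or sensor data has far too many unpredictable details for this kind of exact reconstruction.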
LeCun’s vision is JEPA (Joint Embedding Predictive Architecture), which aims to build models of the physical world and then plan a sequence of actions, an approach he calls objective-driven AI.
“The new key words are action-conditioned world models, which are the types of models that allow a system to plan and reason,” said LeCun. In this model, all the unpredictable details are eliminated, which makes the “prediction problem” easier and moves the system toward learning relevant information. There are no commercially available systems that use this model.
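The idea of planning with an action-conditioned world model can be sketched in miniature: an encoder abstracts away unpredictable detail, a model predicts the next abstract state given an action, and planning searches for an action sequence whose predicted rollout reaches a goal. Everything here, including the one-dimensional "world" and the function names, is an invented illustration of the concept, not the JEPA architecture itself.

```python
# Minimal sketch of planning with an action-conditioned world model.
# An observation is (position, noise); the encoder discards the noise,
# so prediction happens in a simplified latent space where it is tractable.
from itertools import product

def encode(observation):
    """Map a raw observation to a latent state, dropping unpredictable noise."""
    position, _noise = observation
    return position

def world_model(latent, action):
    """Predict the next latent state given an action in {-1, 0, +1}."""
    return latent + action

def plan(start_obs, goal_latent, horizon=4):
    """Search for an action sequence whose predicted rollout hits the goal."""
    start = encode(start_obs)
    for actions in product((-1, 0, 1), repeat=horizon):
        state = start
        for a in actions:
            state = world_model(state, a)  # imagined rollout, no real steps
        if state == goal_latent:
            return actions
    return None

print(plan((0, 0.73), goal_latent=3))  # an action sequence summing to 3
```

The point of the abstraction is visible in `encode`: because the noise is dropped before prediction, the world model never has to forecast details that cannot be predicted, which is exactly the simplification LeCun describes.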
LeCun revealed that the company is “about to submit a paper that the JEPA systems have acquired some common sense.”