Meta’s chief A.I. scientist calls A.I. doomers ‘preposterous’ and predicts LLMs are just a passing fad

Meta Chief A.I. Scientist Yann LeCun
Meta Chief A.I. Scientist Yann LeCun speaking at the Viva Tech conference in Paris, June 14. LeCun says that those worrying that A.I. poses an existential risk to humanity are being "preposterous."
Chesnot/Getty Images

Hello, and welcome to Eye on A.I. I’m writing this week from Paris, where yesterday I attended an event Meta held to showcase its latest A.I. research for members of the press, and where Viva Tech, one of Europe’s largest technology trade shows, kicked off today.

This past week was one in which a number of A.I. optimists made an effort to counter the increasingly vocal and influential A.I. doom narrative—the fear that A.I. poses a grave and existential risk to humanity and so must be heavily regulated and licensed through new national and international agencies.

One of the optimists firing a rhetorical artillery barrage was venture capitalist Marc Andreessen, who penned a 7,000-word essay accusing the doomers of being either “Baptists” (essentially messianic zealots informed by cult-like social movements, such as Effective Altruism, that have adopted the danger of extinction from runaway A.I. as a core tenet, couched in rationalist terms but essentially no more scientific than transubstantiation) or “Bootleggers” (cynical players whose commercial interests align in hyping A.I. doom in order to prompt a regulatory response that cements their market position and hobbles the competition). I don’t agree with portions of Andreessen’s analysis—he relies on strawmen, doesn’t engage deeply with counterarguments, and is far too sanguine about the serious shortcomings of today’s A.I. systems and the risk of real harm they pose to individuals and democracy. But his essay is worth reading, and the Baptist and Bootlegger analogy (which Andreessen borrowed from economic historian Bruce Yandle) is worth thinking about.

Yann LeCun, Meta’s chief A.I. scientist, is also a prominent A.I. optimist and it was bracing to hear his views at yesterday’s Meta event. LeCun thinks that the scenario that most worries the doomers—that we will somehow accidentally stumble into creating superintelligent A.I. that will escape our control before we even realize what’s happened—“is just preposterous.” LeCun says that is simply “not the way anything works in the world.” LeCun is confident that we humans, collectively, “are not stupid enough to roll out infinite power to systems that have not been thoroughly tested inside of sandboxes and virtual environments.”

Like Andreessen, LeCun thinks many recent proposals to regulate A.I. (such as a framework supported by companies like OpenAI and Microsoft to create new national and international A.I. agencies with licensing authority for the training of very large A.I. models) are a terrible idea. In LeCun’s view, the companies supporting this are motivated by a desire to quash competition from open-source A.I. models and less well-resourced startups.

It’s probably no coincidence that Meta, LeCun’s employer, has planted its flag firmly in the open-source camp. Unlike many of its Big Tech brethren, Meta has made many of its most advanced A.I. models and datasets freely available. (Two U.S. senators wrote to the company last week questioning whether it had been irresponsible in the way it released its powerful LLaMA large language model. Meta had tried to “gate” access to the model, releasing it only to select researchers for non-commercial purposes, but the entire model, including all of its weights, quickly leaked online and has since been used by many people beyond the original select research partners. The fear is that LLaMA will become a ready tool for those looking to pump out misinformation, run scams, or carry out cyberattacks.)

Others warning about A.I. risk, including LeCun’s friends and fellow Turing Award winners Geoff Hinton and Yoshua Bengio, who along with LeCun are often referred to as “the godfathers of A.I.,” are perhaps guilty of a failure of imagination, he suggested. LeCun, whose father was an aeronautical engineer and who remains fascinated by aircraft, says that asking people to talk about A.I. safety today is like asking people in 1930 to opine on the safety of the turbojet engine, a technology that had not yet even been invented. Like superpowerful A.I., turbojets sound scary at first, he says. And yet today, thanks to careful engineering and safety protocols, they are one of the most reliable technologies in existence. “Who could have imagined in 1930 that you could cross an ocean in complete safety, at near the speed of sound, with a couple of hundred other people?” he says.

Although LeCun has long championed the A.I. methods—in particular deep neural networks trained using self-supervised learning—that have brought about the generative A.I. boom, he’s not a huge fan of today’s large language models. Their intelligence, such as it is, he says, is too brittle. They can seem brilliant one second and utterly stupid the next. They tend to confabulate, they are not reliably steerable or controllable, and there is growing evidence that there may not be any way to put guardrails around their behavior that can’t be easily overcome. In fact, he thinks fears about A.I. posing a risk to humanity are partly the result of people mistakenly extrapolating from today’s large language models to future superintelligence. “A lot of people are imagining all kinds of catastrophe scenarios because of A.I. And it’s because they have in mind these auto-regressive LLMs that kind of spew nonsense sometimes. They say it’s not safe. They are right. It’s not. But it’s also not the future,” he says. (LLMs are auto-regressive because each word they output is fed back into the model as input to help predict the next word in the sequence.)
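To make that parenthetical concrete, here is a minimal sketch of the auto-regressive loop in illustrative Python. The `model` object and its `next_token_probs` method are hypothetical stand-ins for whatever LLM interface you might use; the only point is that each predicted word is appended to the input and fed back in for the next prediction.

```python
# Minimal sketch of auto-regressive text generation.
# `model` and its `next_token_probs` method are hypothetical stand-ins
# for any large language model; only the feedback loop matters here.

def generate(model, prompt_tokens, max_new_tokens=50, eos_token="<eos>"):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        # The model sees everything generated so far...
        probs = model.next_token_probs(tokens)
        # ...the most likely next token is picked (greedy decoding)...
        next_token = max(probs, key=probs.get)
        if next_token == eos_token:
            break
        # ...and that token is fed back in as input on the next step.
        tokens.append(next_token)
    return tokens
```

Because each step conditions only on what has already been produced, an early misstep can snowball through the rest of the sequence—one way of understanding the brittleness LeCun is describing.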

LeCun boldly predicted that within a few years LLMs, the engine of today’s generative A.I. revolution, will be almost completely abandoned in favor of better, more robust algorithms. At the event, LeCun also discussed his own thoughts about what is needed to get to more humanlike A.I. And he explained a new computer vision model called I-JEPA, which Meta CEO Mark Zuckerberg announced the company was open-sourcing yesterday. I-JEPA is the first step in LeCun’s roadmap towards safe superhuman intelligence—and it requires a very different algorithmic design from the Transformer-based systems responsible for today’s LLMs. (More on I-JEPA in the research section of this newsletter below.)

Zuckerberg’s I-JEPA announcement was also part of the Meta CEO’s efforts to parry criticism from investors and the press that the company is lagging in tech’s buzziest space, generative A.I. Unlike its Big Tech brethren, Meta has not yet rolled out a major consumer-facing generative A.I. product of its own. And when the Biden White House held a meeting with companies creating foundation A.I. models, many noted that Meta was conspicuously absent. The White House said it wanted to meet with the companies that were “at the forefront of A.I. innovation,” which many interpreted as a diss on Meta. (LeCun said that the company has been talking to the White House “through other channels” and noted that he had personally advised French President Emmanuel Macron on A.I. policy in recent days.)

But Zuckerberg is not about to miss out on Silicon Valley’s latest boom. At an “all-hands” meeting for company employees last week, the CEO said Meta plans to put generative A.I. “into every single one of our products.” He previewed a number of upcoming announcements around the technology, starting with A.I.-generated stickers that can be shared in the company’s messaging apps, WhatsApp and Messenger, and moving on to chatbot-like agents with a variety of different personas designed “to help and entertain,” which the company is currently testing internally and says it will debut in WhatsApp and Messenger before pushing them out to Meta’s other apps, and eventually to the metaverse. The company also plans to use A.I. models that can generate three-dimensional scenes and even entire virtual worlds to help augment and build out the metaverse.

With that, here’s the rest of this week’s news in A.I.


But, before you read on: Do you want to hear from some of the most important players shaping the generative A.I. revolution and learn how companies are using the technology to reinvent their businesses? Of course, you do! So come to Fortune’s Brainstorm Tech 2023 conference, July 10-12 in Park City, Utah. I’ll be interviewing Anthropic CEO Dario Amodei on building A.I. we can trust and Microsoft corporate vice president Jordi Ribas on how A.I. is transforming Bing and search. We’ll also hear from Antonio Neri, CEO of Hewlett Packard Enterprise, on how the company is unlocking A.I.’s promise; Arati Prabhakar, director of the White House’s Office of Science and Technology Policy, on the Biden Administration’s latest thinking about how the U.S. can realize A.I.’s potential while enacting the regulation needed to guard against its significant risks; Meredith Whittaker, president of the Signal Foundation, on safeguarding privacy in the age of A.I.; and many, many more, including some of the top venture capital investors backing the generative A.I. boom. All that, plus fly fishing, mountain biking, and hiking. I’d love to have Eye on A.I. readers join us! You can apply to attend here.


Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

A.I. IN THE NEWS

EU Parliament passes landmark A.I. Act. The European Parliament passed a draft of its landmark A.I. Act on Wednesday, moving the EU a step closer to becoming the first government to have a comprehensive law on the books dealing specifically with A.I. The law takes a risk-based approach, imposing stricter requirements on uses of A.I. that are considered high-risk and banning some uses, such as real-time biometric surveillance and social scoring, completely. It also imposes significant transparency requirements on companies creating generative A.I. foundation models. But the law is controversial, with some concerned that smaller companies and open-source A.I. firms may struggle to comply with some of its provisions. The Parliament’s vote is just a further step in a lengthy process: the EU member states’ national governments now get the chance to weigh in on and potentially alter the law through negotiations with both the Parliament and the EU Commission, the bloc’s executive wing. A final version is expected to be enacted later this year and come into force next year. You can read Fortune senior writer David Meyer’s coverage of the Act’s passage here.

Final Beatles song coming soon thanks to A.I., Paul McCartney says. The singer told BBC Radio 4 that a "final" Beatles song will be released later this year, made possible by A.I. technology. Tech publication Ars Technica reports that the A.I. techniques enabled the isolation of John Lennon's vocals from an old cassette tape demo. The song is thought to be "Now And Then," a track Lennon made a rough recording of but never completed, the publication said. Similar A.I. technology was developed for Peter Jackson's "Get Back" documentary to separate voices and instruments in the Beatles' recordings. That process allowed voices to be boosted in noisy situations and made possible surround-sound remixes of performances in the film. While some artists have embraced such technology, it has also created confusion and disruption in the music business. McCartney himself expressed some ambivalence toward A.I. in the interview, saying, "So all of that is kind of scary, but exciting because it's the future."

Google DeepMind says it used A.I. to find faster sorting algorithms for computer code compilers. In a paper published in Nature, DeepMind researchers said they had used reinforcement learning, in which an A.I. agent learns by trial and error how best to accomplish a given task, to find completely novel and more efficient algorithms for the basic sorting operations that underpin how much of the world’s computer code runs. For sorting short sequences of items, one algorithm discovered by DeepMind’s system (which it calls AlphaDev) was 70% faster than prior ones, while for much longer sequences it was about 1.7% faster. The company said it was making the new algorithms freely available in open-source libraries for C++, a computer coding language that sits underneath many higher-level programming languages.

Month-old generative A.I. startup founded by Meta and DeepMind veterans raises Europe’s largest-ever seed round, worth the equivalent of $113 million. Mistral AI, a Paris-based company founded by a former Google DeepMind researcher and two former Meta A.I. researchers, has only been in existence for about four weeks. But it managed to raise 105 million euros from a group of backers including Lightspeed Venture Partners, former Google chief Eric Schmidt, French telecoms billionaire Xavier Niel, and Bpifrance, the French state-backed investment bank. The company says it plans to build its own large language model to rival those from OpenAI, Anthropic, Cohere, and others, and to create products powered by that model. The funding round is Europe’s largest-ever seed round, the Financial Times reported.

EYE ON A.I. RESEARCH

Meta says new vision model is a step towards more humanlike A.I. As promised, I wanted to explain a bit more about I-JEPA, Meta’s new computer vision foundation model, which LeCun says is the first step on a roadmap he laid out a year ago for how we might get to more humanlike superintelligence. I-JEPA stands for Image Joint Embedding Predictive Architecture, and in many ways it is a radical departure from how previous computer vision models work. The basic idea is that during training, large blocks of an image are masked, and the software has to learn to predict the content of the masked areas. But unlike generative methods, where the model tries to predict each masked pixel, here the software needs to learn a way of encoding blocks of the image into higher-level abstractions—a more conceptual understanding, if you will—and what it has to predict is the correct high-level abstraction to complete the image. A decoder module can then take this high-level representation and translate it back into individual pixels. The model is smaller and less computationally intensive than many competing generative A.I. methods, yet it scored better on a wide range of computer vision tasks than many previous methods (those that use hand-crafted data augmentations in training still beat I-JEPA), and it proved to be an adept “few-shot learner,” mastering new tasks from just a handful of examples. You can read the research paper here.
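For readers who want a sense of how that differs from pixel-level generative pre-training, here is a heavily simplified training-step sketch in Python/PyTorch style. All of the names (context_encoder, target_encoder, predictor, sample_blocks) are illustrative assumptions rather than Meta’s released code; the point is simply that the prediction and the loss live in the abstract representation space, not in pixel space.

```python
import torch
import torch.nn.functional as F

def jepa_training_step(image, context_encoder, target_encoder, predictor,
                       sample_blocks, optimizer):
    """One illustrative joint-embedding predictive training step.

    All modules and the sample_blocks helper are hypothetical stand-ins,
    not Meta's actual I-JEPA implementation.
    """
    # Split the image into visible context blocks and masked target blocks.
    context_blocks, target_blocks = sample_blocks(image)

    # Encode the visible context into higher-level representations.
    context_repr = context_encoder(context_blocks)

    # Encode the masked target blocks with a separate target encoder;
    # no gradients flow through this branch.
    with torch.no_grad():
        target_repr = target_encoder(target_blocks)

    # Predict the targets' representations from the context. The prediction
    # is made in representation space -- the model never has to reconstruct
    # pixels in order to compute its loss.
    predicted_repr = predictor(context_repr)
    loss = F.mse_loss(predicted_repr, target_repr)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, the decoder module described above would sit outside the training loss, mapping predicted representations back into pixels only when an image output is wanted.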

FORTUNE ON A.I.

A.I. is changing business and society faster than anyone expected. These 13 A.I. innovators are deciding how the tech will change your life—by Andrea Guzman

Microsoft, the same company helping mainstream A.I. ‘hallucinations,’ is putting forward a media literacy ‘Trust Project’ to fight disinformation—by Rachel Shin

The 4-day workweek will finally arrive thanks to A.I., Jefferies says—You’ll just need a ‘human day’ to cope with digital overload—by Prarthana Prakash

Top tech analyst Dan Ives says the A.I. ‘gold rush’ is just like the dotcom boom but it’s a ‘1995 moment … not 1999’—by Will Daniel

BRAINFOOD

Are business leaders allowing a dangerous gap to develop between their enthusiasm for generative A.I. and their employees' own experience with the emerging technology? There’s a growing disconnect between business leaders and their rank-and-file employees when it comes to generative A.I., according to the results of a new survey released by Boston Consulting Group, and that could spell trouble for companies hoping to rapidly deploy the technology. The survey indicated a leadership class that is racing ahead to embrace the new technology, but possibly without putting in place the processes and systems that will enable their whole organizations to make the most of it. For instance, while 80% of business leaders said they were using A.I. regularly, only 20% of frontline employees said the same. Leaders were also much more optimistic about the technology’s potential: 62% said they were optimistic, compared with just 42% of frontline workers. The good news was that frontline workers who were using generative A.I. regularly tended to be more optimistic about its potential than those who weren’t. Most troubling of all, though, was the finding that while 68% of business leaders thought their company had adequate responsible A.I. policies in place, just 29% of frontline staff agreed. “Employees are prepared to accept AI in the workplace, but only if they are confident that their employer is committed to doing the right thing. Companies must move quickly to build their employees’ trust and to equip them with the necessary skills,” BCG wrote. Sounds about right.

This is the online version of Eye on A.I., a free newsletter delivered to inboxes on Tuesdays. Sign up here.