There may be less to Meta’s White House A.I. pledge than meets the eye

President Biden speaks at a podium in the White House while representatives from seven leading A.I. companies stand by watching.
U.S. President Joe Biden announced on July 21 that he had gotten seven leading A.I. companies to make a number of voluntary commitments on A.I. safety.
ANDREW CABALLERO-REYNOLDS—AFP via Getty Images

Hello and welcome to Eye on A.I. One of the biggest bits of A.I. news from the past week was the White House’s announcement that it had convinced seven of the leading A.I. companies to voluntarily commit to a number of steps intended to help improve A.I. safety.

The announcement got a lot of attention. But it is important to note that the commitments only apply to models that are more powerful than the current state-of-the-art systems that have been made public, such as Anthropic’s Claude 2, OpenAI’s GPT-4, and Google’s PaLM 2. So the pledges President Joe Biden secured do nothing to ensure that currently available models aren’t used in ways that might cause harm—such as crafting malware or drafting misinformation.

Some of the commitments—such as the companies’ pledge to publish their A.I. systems’ capabilities, limitations, and areas of appropriate and inappropriate use—are things they are, for the most part, already doing. Plus, since the commitments are voluntary, there’s little the administration can do to hold the companies accountable if they drift from their promises, other than to publicly shame them.

What’s more, some of the pledges lacked specifics on how they would be carried out. For instance, the seven participants—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—agreed to perform extensive security testing of their A.I. software before releasing it, including testing for biosecurity and cybersecurity threats. The companies do some of this now, but the commitment says the testing will be carried out “in part by independent experts.” What it doesn’t say is exactly who these independent experts will be, who will determine their independence and expertise, and what testing they will conduct.

One of the pledges in particular stood out to me. It concerns an A.I. model’s weights, the numerical parameters, learned during training, that determine how a neural network turns its inputs into outputs. The weights are what allow someone to replicate a trained A.I. model. The White House commitments say the companies will take steps to protect “proprietary and unreleased” model weights from being stolen and that they will “be released only when intended and when security risks are considered.”
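To make the idea concrete, here’s a toy numerical sketch (illustrative only, not any real production model): the weights are just arrays of numbers, and anyone who holds them, together with the network’s architecture, can reproduce the model’s behavior exactly.

```python
import numpy as np

# Toy two-layer network. Its "weights" (W1, b1, W2, b2) are just arrays of
# numbers learned during training; whoever holds them, plus the architecture,
# can reproduce the model's outputs exactly.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input layer -> hidden layer
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)   # hidden layer -> output layer

def forward(x):
    hidden = np.maximum(x @ W1 + b1, 0.0)       # ReLU activation
    return hidden @ W2 + b2                     # output scores

x = rng.normal(size=(1, 4))
print(forward(x))  # same weights in, same outputs out
```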

What’s interesting about this is that Meta has emerged as a leading proponent of open source A.I., the idea that the best way to push the field forward is to make A.I. models, including all their weights, widely available to the public with few restrictions on how they can be used. The company has already publicly released several powerful large language models, called LLaMA and LLaMA 2. In the case of the original LLaMA, the full weights were leaked online, allegedly from one of the research partners with which Meta had originally shared the A.I. software under an open source license. If that were to happen again, it would clearly violate the pledge. And with LLaMA 2, Meta has made the A.I. system, actually a family of A.I. models, available both as open-source software and under a commercial license. With the open-source version, it isn’t clear how easily Meta could prevent someone from using the model for some nefarious purpose, such as generating misinformation or malware.

So how does this commitment about model weight security sit with Meta’s advocacy for open source A.I.? I put this question to Meta. A spokesperson reiterated the company’s rationale for open sourcing its A.I. software, arguing that it was a way to ensure the benefits of A.I. were shared widely and that “democratizing access allows for continual identification and mitigation of vulnerabilities in a transparent manner by an open community.” The spokesperson also said Meta had undertaken safety testing of LLaMA 2 to give it guardrails against the prompts that the company’s own experts, as well as “select external partners,” believed would be most likely to be used in criminal activity or to produce harmful and hateful content or unqualified advice.

So Meta isn’t backing off its open-source advocacy. It sounds as though it plans to fall back on the argument that it will have “considered the security risks” when open sourcing future models. But if Meta makes the weights available, as it has with its LLaMA models to date, there is no way to prevent someone from removing whatever guardrails Meta puts in place by adjusting those weights.
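For a sense of why that is, here’s a rough sketch using standard open-source tooling (PyTorch and Hugging Face’s transformers library, not anything Meta ships; the model identifier and training data below are placeholders): once the weights are on disk, continued training takes a handful of routine lines, and any safety behavior baked into the original weights is just as adjustable as everything else.

```python
# Illustrative sketch only (generic PyTorch/Hugging Face code, not Meta tooling):
# openly released weights are ordinary tensors that a downstream user can load
# and keep training toward any objective, including one that undoes guardrails.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"   # placeholder: assumes access to the released weights
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
for text in user_chosen_texts:          # hypothetical data of the user's choosing
    batch = tokenizer(text, return_tensors="pt")
    # Standard causal-language-model fine-tuning step: every weight, including
    # those shaped by safety training, is updated toward the new objective.
    loss = model(input_ids=batch["input_ids"], labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```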

This is the fundamental problem with open-source A.I. There’s no way to guarantee its safety. Meta can’t escape that fact. And neither can the White House or other regulators. Unfortunately, last week’s commitments show that the Biden administration has not yet fully come to terms with this dilemma.

Speaking of dilemmas, one of the trickiest faces Google’s parent company, Alphabet, whose quarterly earnings announcement later today is being carefully watched for signs of the impact that the generative A.I. revolution may be having on its bottom line. The rise of generative A.I. chatbots and search tools potentially poses an existential threat to Google’s crown jewel—Search—which provides $160 billion of the company’s $280 billion in annual revenue. Google has shown over the past six months that it has plenty of A.I. muscle. But what it hasn’t yet shown is that it knows how that muscle can be used to generate a business that will equal the money-printing machine it has enjoyed with the current version of internet search. I take a deep dive into the company’s predicament in the cover story of Fortune’s August/September magazine issue, which you can read here. Check it out.

With that, here’s the rest of this week’s A.I. news.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

A.I. IN THE NEWS

OpenAI worries GPT-4 could be used for facial recognition. That’s according to a story in the New York Times. The image analysis capabilities of GPT-4 have only been made available to a select number of visually impaired users through a charity that partners with OpenAI. But concerns that the technology could be used to recognize faces have, according to the newspaper, delayed the rollout of these image-based capabilities to paid subscribers to the company’s ChatGPT chatbot, who already have access to the text components of GPT-4. Microsoft has also had to take steps to limit the vision capabilities of GPT-4 that underpin its Bing Chat feature, the paper said. In some countries, as well as in some U.S. states, laws prohibit companies such as OpenAI from processing biometric information, including facial imagery, from users without explicit consent.

Apple reportedly building its own language model and chatbot. That’s according to a story in Bloomberg. Apple is quietly working on a framework for creating large language models, code-named “Ajax” according to the news service, that would be capable of training systems to rival those that OpenAI and Google have developed. But the news service reported, citing anonymous company sources, that Apple hasn’t yet devised a clear strategy for a consumer release of its language models.

New York City subway system is using A.I. to spot fare dodgers. The A.I.-based video surveillance system, which has been in use in seven New York City subway stations since May, is expected to expand to approximately two dozen more stations by the end of the year, NBC News says. While the Metropolitan Transportation Authority (MTA) claims the A.I. system is used only for counting fare evaders and does not flag them to the police, privacy advocates are concerned about the growing surveillance apparatus in the city. Critics argue that such measures unfairly target the poor and prioritize enforcement over making public transportation more accessible.

Google reportedly testing news writing A.I. tool. The New York Times reported that the company is testing an A.I. product called “Genesis” that can generate news articles from information it is fed. The tool has been demonstrated to news organizations such as the New York Times, the Washington Post, and News Corp., the newspaper said, citing three unnamed sources who it said were familiar with the product. While Google believes it could assist journalists by automating certain tasks, some executives told the Times that they found the pitch unsettling, as it seemed to underestimate many of the steps required to create accurate and artful news stories. Google said the A.I. software is not meant to replace journalists but rather to provide overstretched journalists and news organizations with a productivity-enhancing tool.

OpenAI’s head of trust and safety steps down. Just as the company is wrestling with a host of thorny safety issues (such as the one outlined above) and facing a Federal Trade Commission probe and multiple lawsuits, its head of trust and safety, Dave Willner, has announced his resignation in order to spend more time with his family, Reuters reports. Willner, a veteran of Airbnb and Facebook, had been in the job since February 2022. OpenAI said it is actively seeking a qualified replacement and that Mira Murati, its chief technology officer, is temporarily overseeing Willner’s former trust and safety team.

EYE ON A.I. RESEARCH

A new method for improving the common sense reasoning of large language models. The challenge of getting large language models to perform better at complex, common-sense reasoning tasks is a hot topic these days. The largest and most capable of these models, such as GPT-4 and PaLM 2, show a lot of promise. But there are still questions about whether they can really understand visual images and reason about them as well as humans can. A group of Chinese researchers has just published a paper on a new multi-modal dataset and training method that they say produces much better results. The dataset is called COCO-MMRD and it includes various questions, explanations, and answers using both images and text. Unlike previous datasets that used multiple-choice questions, this one challenges A.I. models to come up with open-ended answers, making it a better test of their reasoning abilities. To achieve these improvements, the researchers used some innovative techniques, including cross-modal attention, where the model has to learn which parts of an image most inform a text-based answer and vice versa, as well as techniques that force the system to consider whole sentences at a time rather than simply predicting word by word. You can read the research paper here.
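The paper’s exact architecture isn’t reproduced here, but as a rough sketch of what cross-modal attention generally looks like in PyTorch (all dimensions and names below are illustrative), text tokens can act as queries that attend over image patch features, so the model learns which image regions support each part of a text answer.

```python
import torch
import torch.nn as nn

# Generic cross-modal attention sketch (illustrative, not the paper's exact model):
# text tokens query image patch features, so the model learns which image
# regions most inform each part of a text-based answer.
embed_dim, num_heads = 256, 8
text_tokens = torch.randn(1, 32, embed_dim)     # e.g., 32 question/answer tokens
image_patches = torch.randn(1, 49, embed_dim)   # e.g., a 7x7 grid of patch features

cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
fused, attn_weights = cross_attn(query=text_tokens, key=image_patches, value=image_patches)

print(fused.shape)         # (1, 32, 256): text features enriched with image context
print(attn_weights.shape)  # (1, 32, 49): how strongly each token attends to each patch
```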

FORTUNE ON A.I.

A.I. is ‘Amazon Web Services for human effort’ because it will ‘democratize’ large workforces and allow startups to scale faster, investment firm CEO says—by Paolo Confino

A.I. might have what it takes to replace the C-suite. But experts say the top jobs are safe for cultural reasons—by Geoff Colvin

Marc Andreessen says his A.I. policy conversations in D.C. ‘go very differently’ once China is brought up—by Steve Mollman

Banks have used A.I. for decades—but now it’s going to take off like never before—by Ben Weiss

Elon Musk says Tesla will spend $1 billion to build a ‘Dojo’ A.I. supercomputer—but it wouldn’t be necessary if Nvidia could just supply more chips—by Christiaan Hetzner

BRAINFOOD

Are there lessons in Oppenheimer for today’s A.I. researchers and policy makers? Christopher Nolan, the blockbuster film’s director, certainly thinks so. He sees clear parallels between Oppenheimer and today’s A.I. researchers, who, like the namesake American Prometheus of Nolan’s film, are trying to birth a powerful and risky new technology that many think just might pose an existential threat to humanity. Like many of the A.I. experts working away on today’s most powerful A.I. systems, J. Robert Oppenheimer was driven in part by ego and in part by the sense that if he didn’t create the bomb, then someone else (quite possibly the Nazis) would do it first, and that would be worse. For today’s A.I. researchers, the bogeymen range from the Chinese to Google to Elon Musk.

Almost immediately, Oppenheimer realized the devastating potential of the bomb (there’s the well-known but possibly apocryphal story of him quoting from the Bhagavad Gita upon witnessing the Trinity Test: “Now I am become Death, Destroyer of Worlds.”). And, while the effects of nuclear radiation were not well understood at the time, when Oppenheimer learned of the death toll and cataclysmic damage caused by the atomic bombings of Hiroshima and Nagasaki, he became wracked with guilt. U.S. President Harry Truman was famously contemptuous of Oppenheimer’s guilty conscience, calling him “that crybaby scientist.”

There is a parallel here too perhaps in the regrets being expressed by some pioneering deep learning researchers, such as Turing Award-winners Geoff Hinton and Yoshua Bengio, who are now concerned about the role their work may have played in creating a technology that could, in their estimation, destroy humanity or cause very grave harm to many people.

Oppenheimer went on to advocate for international control of nuclear weapons, but his plan was ultimately brushed aside by the U.S. government, which put forward a different proposal that the Soviet Union rejected. (The world eventually came around to a form of international governance that borrowed some ideas from Oppenheimer’s plan.) Later, Oppenheimer, who had been a Communist Party sympathizer in his youth and whose wife and brother had been Party members, became a victim of McCarthyism. Accused of being a Soviet spy, he was stripped of his security clearance. He died a broken man.

While there are some parallels between the dawn of the nuclear age and our own A.I. moment, there are some key differences, which Nolan has pointed out in various interviews. Nuclear weapons require vast capital investment—in physical facilities, precursor materials and equipment, and big human teams. A.I. is merely software. And while it requires some talented computer scientists and access to large datacenters stuffed with specialized computer chips to create, it isn’t clear that access to these resources can be as easily monitored and policed as was the case with nuclear technology. Also, once the software is trained, it can easily and quickly be distributed, making it much more difficult to regulate effectively than nukes.

But there’s also another important difference. While no one had built an atomic weapon before the Manhattan Project succeeded, the science of how to build one was relatively well known among the world’s small coterie of nuclear physicists. It was a grand engineering challenge. But it was not, by the time the Manhattan Project got started, a deep, fundamental scientific mystery. And while the exact yield and radioactive effects of the bomb weren’t known, that a war involving such weapons could easily destroy the entire world was quickly apparent to most people.

The same cannot be said of A.I. and the risk of artificial superintelligence. No one, including the world’s top experts at places such as OpenAI, Anthropic, and Google DeepMind, is really sure what it will take to create an A.I. system that will equal or exceed human intelligence. Most think we are still missing a few key pieces of the puzzle. So this is still in the realm of science, not just engineering. What’s more, it’s still very much an open question whether an A.I. system that is as capable or more capable than humans at most tasks would pose an existential threat to humanity. There are thought experimentalists, philosophers, and game theoreticians who are sure it would. And there are plenty of other experts who think it would not. (See this latest essay by some of them.)

In other words, we may very well be approaching an Oppenheimer moment with A.I. But we’re not quite there yet. That might be a reason for some hope. It is still possible to put in place regulations and governance systems that will allow us to avoid plunging over the edge of an abyss. And hopefully it won’t take the deaths of 110,000 people (some estimates put the combined toll from the bombings of Hiroshima and Nagasaki at almost twice that) and the destruction of two cities before we decide to take some action. We may need the time to figure it out, too, because A.I. is much harder to contain than nuclear bombs, for all the reasons Nolan identifies.

(Of course, on the other hand, the fact that superintelligence is not yet here also may make it much harder to get policymakers to take effective steps. And, as I have said before in the newsletter, I do not advocate work on this kind of A.I. control to the exclusion of taking immediate action to try to prevent harms from the A.I. systems that are here today.)

This is the online version of Eye on A.I., a free newsletter delivered to inboxes on Tuesdays. Sign up here.