Elon Musk says A.I. will bring ‘an age of abundance’—but others say a devastating ‘liar’s dividend’ could also be a byproduct of A.I.

Elon Musk speaks about the promise and dangers of A.I. at the Viva Tech conference in Paris last week.
Chesnot—Getty Images

Hello, and welcome to Eye on A.I.

I spent most of last week at Viva Tech, the big technology trade show in Paris. Unsurprisingly, the hottest topic in the exhibition hall was generative A.I. Elon Musk, whose appearance at Viva Tech was a major coup for the show’s organizers, said A.I. “is probably the most disruptive technology ever.” He mentioned Tesla’s work on two A.I.-enabled technologies, self-driving cars and humanoid robots, and said he thought it was these technologies (both of which are a subset of what he called “autonomy”), not anything Tesla produces today, that explained the company’s $816 billion market cap. Musk said that, in an optimistic scenario, A.I. will usher in “an age of abundance where any goods or services that you want, you can just have.” But he also said that even if there were “some sort of A.I. apocalypse, I think I would still want to be alive at this time to see it,” before adding, in a joking aside, “and hopefully not cause it.”

At the same time Musk was speaking on Friday, I was moderating a panel on “Media Literacy in the Age of A.I.” One of the panelists, Claire Leibowicz, who is head of A.I. and media integrity at the Partnership on A.I., argued that the “volume, accessibility and reach” that generative A.I. potentially gives to misinformation does represent a difference from previous forms of manipulated content. She said she was particularly worried that a flood of A.I.-generated content would lead to a phenomenon known as “the liar’s dividend”—the idea that in a media environment where concerns about deepfakes and other A.I.-manipulated content degrade public trust in all information, the real winners are politicians, governments, and corporations who gain the ability to escape accountability simply by disputing any true accusation.

Another panelist, Sonja Solomun, deputy director of the Centre for Media, Technology and Democracy at McGill University in Montreal, agreed, comparing exposure to disinformation to cocaine—snort it a few times, and there’s no lasting harm. But repeated exposure, she warned, fundamentally alters your brain chemistry.

Meanwhile, Charlie Beckett, professor of media and communications at the London School of Economics, said that while he supported ideas such as digital watermarking that would make it easier to tell when content was A.I.-generated or when photos had been manipulated, people should not be lulled into thinking these technologies are silver bullets for what is actually a complex, societal issue. He pointed out that even the intrinsic value of trust in media and institutions is context-dependent. The country where surveys consistently find the public most trusts both the media and politicians? China. “Personally, I don’t see China as a great model for an information ecosystem,” he said.

Elsewhere at the conference, I heard Nick Thompson, the CEO of The Atlantic, who is known for his daily “the most interesting thing in tech” videos on LinkedIn and other social media, opine on A.I. and regulation. He said it remained to be seen whether regulation would be “large and dumb,” a complex web of rules and requirements that would enable regulatory capture by the biggest and best-funded technology players; “small and dumb,” too light-touch to stop harms from proliferating; or “small and smart,” getting the balance just about right. After his talk, I asked him whether he thought calls for an international agency to monitor the development of advanced A.I. systems, perhaps along the lines of the International Atomic Energy Agency, fell into the “large and dumb” camp or the “small and smart” camp. He mulled it over for a long few seconds, then replied “small and smart,” before walking off.

Well, the A.I. regulation that is actually closest to being on the books is the European Union’s A.I. Act, which cleared a key milestone last week when the European Parliament passed its version of the legislation. Now the text will be further negotiated between the Parliament, the Council of the European Union, which represents the bloc’s national governments, and the European Commission, which is the bloc’s executive arm. A final version is likely to be enacted later this year and come into force in 2025. But many experts think the basic shape of the law is unlikely to change too radically.

Exactly how the current draft of the A.I. Act came about was the subject of a Time magazine scoop today. Using documents obtained under freedom of information requests and other reporting, Time’s Billy Perrigo showed that even though OpenAI CEO Sam Altman has been on a global tour, repeatedly calling for regulation, his company had quietly lobbied the European Commission and Council for less onerous requirements. Specifically, Time obtained a lobbying document in which OpenAI argued against a draft proposal that would have seen generative A.I. foundation models, such as OpenAI’s GPT large language models, automatically classified as “high risk” under the A.I. Act’s regulatory schema. The lobbying apparently succeeded: In the version passed by the European Parliament, foundation models are not subject to the same risk assessments as other forms of A.I. (Instead, companies developing foundation models must meet several transparency requirements and must ensure their models comply with existing European law. Even that requirement, Altman has implied, may be hard for OpenAI to meet because of issues around whether the company has a legitimate legal basis for handling European citizens’ data in training its A.I. models—and perhaps also when EU citizens are using OpenAI’s products, such as ChatGPT.)

Just how tough it may be for companies to comply with the EU A.I. Act is apparent in a flowchart that Brian Wong, a partner, and Tom Whittaker, a senior associate, at the law firm Burges Salmon created to help clients prepare for the new law. (You can have a look at it here.) It depicts a Rube Goldberg-esque pathway that companies will need to navigate in order to comply. Whittaker told me that for companies that are already in heavily regulated industries, such as financial services, health care, transportation, and the government sector, complying with the new A.I. Act might not be too much of a stretch. But for others, and for many smaller businesses, he acknowledged, “the economic burden will be greater.”

Whittaker told me he thinks the new law will probably slow the adoption of A.I. by companies in the EU and could delay the roll-out of A.I. systems to EU customers. (Like Europe’s strict data protection law, GDPR, the new EU A.I. Act applies to any company with customers or employees in Europe, not just those headquartered there, and there are already signs that companies are pushing back plans to make generative A.I. chatbots available to EU consumers because of concerns about complying with EU laws.) He said one of the biggest uncertainties will be which specific regulator within each EU country has responsibility for enforcing the A.I. Act, as different regulators are likely to place emphasis on different parts of the law. He also said there is a real need for global standards around A.I. fairness, transparency, accountability, and safety. But, in the end, Whittaker said it was better to go slow on A.I. adoption and “ensure that fundamental rights are protected.”
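
To give a flavor of the risk-based logic companies will have to work through, here is a deliberately oversimplified Python sketch of the draft Act’s tiering (prohibited practices, high-risk systems, transparency-only obligations, minimal risk). The categories and examples are condensed from public summaries of the draft text and are illustrative only; the real compliance pathway, as the Burges Salmon flowchart makes clear, involves many more steps, and none of this is legal advice.

```python
# A deliberately oversimplified sketch of the draft EU A.I. Act's risk tiers.
# Category names and examples are condensed from public summaries of the draft;
# the real assessment involves many more questions (see the flowchart above).

PROHIBITED = {"social scoring by public authorities", "subliminal manipulation"}
HIGH_RISK = {"credit scoring", "hiring and employment screening",
             "critical infrastructure management", "law enforcement"}
TRANSPARENCY_ONLY = {"chatbot", "deepfake generation"}

def draft_risk_tier(use_case: str) -> str:
    """Map a use case to its (simplified) tier under the draft A.I. Act."""
    if use_case in PROHIBITED:
        return "prohibited: cannot be placed on the EU market"
    if use_case in HIGH_RISK:
        return "high risk: conformity assessment, risk management, documentation"
    if use_case in TRANSPARENCY_ONLY:
        return "limited risk: users must be told they are interacting with A.I."
    return "minimal risk: no new obligations beyond existing law"

for case in ["credit scoring", "chatbot", "spam filtering"]:
    print(f"{case}: {draft_risk_tier(case)}")
```

Notably, foundation models such as OpenAI’s GPT series sit outside this tiering in the Parliament’s version, subject instead to the separate transparency requirements described above, which is exactly the carve-out Time’s reporting traces back to OpenAI’s lobbying.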

With that, here’s the rest of this week’s news in A.I.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

A.I. IN THE NEWS

Data privacy concerns delay Google’s rollout of Bard in Europe. Google has delayed the launch of its Bard chatbot in the European Union after the Irish Data Protection Commission raised questions about how Google intended to comply with Europe’s data privacy laws, tech publication the Register reported. The Irish regulator, which is the primary data privacy overseer for many big American tech companies in Europe, told the Register it had received no information or documentation from Google to date, despite Google’s intention to roll Bard out imminently. Similar data privacy concerns had led to a Bard rival, OpenAI’s ChatGPT, being temporarily banned in Italy.

Google has warned its own employees against using Bard for work. The company has told employees not to use confidential information to prompt the A.I.-powered chatbot, according to Reuters. The company has also warned engineers to be cautious using any computer code Bard generates—and never to simply cut and paste Bard’s code suggestions into projects without scrutiny—as it may be erroneous.

Evidence of tension in Microsoft’s partnership with OpenAI surfaces. The Wall Street Journal reported that not all is well with the partnership between OpenAI and its major funder and strategic partner, Microsoft. According to the newspaper, Microsoft has been unhappy that OpenAI has sold its technology to some of Microsoft’s biggest rivals, including Salesforce. In addition, OpenAI and Microsoft have been competing for some of the same enterprise customers. The newspaper also reported that some Microsoft executives were suspicious that OpenAI may have released ChatGPT to jump out ahead of Microsoft’s debut of Bing Chat, which is powered by OpenAI’s language models. For its part, the paper reported, OpenAI had warned Microsoft against rushing to incorporate OpenAI’s powerful GPT-4 large language model too quickly into a consumer-facing chatbot because its guardrails were not robust and, unlike the A.I. model that initially powered ChatGPT, it had not yet been refined using large amounts of human feedback. Microsoft ignored OpenAI’s objections, according to the newspaper’s reporting, using GPT-4 and its own in-house language models to create what it called Prometheus, the A.I. model that powers Bing Chat. When Bing Chat first debuted, people, including the New York Times reporter Kevin Roose, were able to coax the chatbot into outputting bizarre and unsettling dialogue.

Google opposes a new A.I. regulator in the U.S. That’s according to a document the company submitted in response to the National Telecommunications and Information Administration’s (NTIA) request for public comment on A.I. accountability, which was reported on by the Washington Post. Instead, responsibility for A.I. regulation should be divided among multiple agencies, Google and its advanced A.I. research lab, Google DeepMind, argued in the 33-page letter they submitted to the NTIA. This differs from the approach suggested by rivals like Microsoft and OpenAI, which have called for a new national regulator. Google recommends that the National Institute of Standards and Technology (NIST) issue technical guidance on addressing A.I. risks, and that regulators with experience in different industries and use cases be responsible for policing the use of A.I. within their jurisdictions.

U.K. wants to play a central role in global A.I. regulation. British Prime Minister Rishi Sunak aims to position Britain as the global leader in A.I. regulation, hosting a global summit on A.I. risks in September. Sunak emphasized the need to address the opportunities and challenges of A.I. during a speech at the London Tech Week conference, Reuters reported. The prime minister said he wanted the U.K. to become the "geographical home of global A.I. safety regulation."

Oracle partners with Cohere AI on generative A.I. models. Oracle became the latest company to begin offering generative A.I. services to big business customers through its cloud computing infrastructure, thanks in part to a partnership with Cohere, which develops large language models. The partnership, announced last week, is the latest move by a cloud computing and enterprise software giant to enable its own customers to use generative A.I. more easily. Oracle also participated in Cohere’s latest $270 million financing round.

Tax preparer H&R Block taps Microsoft’s cloud to access OpenAI’s tech. The tax preparation specialist will use OpenAI’s technology through Microsoft’s Azure Cloud to “build faster and more consultative tax experiences,” according to a company statement. The company shared few details on exactly how it will deploy OpenAI’s technology but stressed in its release that data privacy remained paramount in the company’s plans.

EYE ON A.I. RESEARCH

Meta has created a new foundation model for voice, but won’t release it because of safety concerns. The company’s A.I. researchers announced that they have created a generative A.I. model called Voicebox that can reliably imitate a voice after being fed an audio clip of someone speaking that is just a few seconds long. The system can also produce good-quality text-to-speech output in six languages and perform a host of other audio-related tasks, such as noise removal, content editing, style conversion, and diverse sample generation, all without any task-specific fine-tuning, according to a company blog post. The system differs from earlier ones, which had to be trained for a single task and mostly used auto-regressive methods to predict the next sound in a sequence. Instead, Voicebox uses a diffusion-based model more similar to those used in some text-to-image generative A.I. systems. The company said it recognizes “the potential for misuse and unintended harm” with the technology, which is why, while publishing a research paper on the work, Meta has decided not to make the model publicly available for now. The researchers also said they had built “a highly effective classifier that can distinguish between authentic speech and audio generated with Voicebox to mitigate these possible future risks.”

FORTUNE ON A.I.

Wharton professor says employees are hiding A.I. use—and potentially transformative productivity gains—from employers—by Steve Mollman

Cohere CEO calls A.I. debates on human extinction ‘absurd use of our time and the public’s mind space’—by Steve Mollman

Mercedes trials ChatGPT in its vehicles to help them sound more human: Here’s how you can take part—by Christiaan Hetzner

9 out of 10 developers admitted using A.I. coding tools in a GitHub survey that shows just how quickly A.I. is invading the workplace—by Andrea Guzman

BRAINFOOD

The threat of information poisoning from generative A.I. is getting real. It has become common for people building A.I. systems, whether in academic settings or within companies, to turn to contractors to label the data used to train those systems. Some of these contractors are low-paid and overworked. Now researchers from the Swiss Federal Institute of Technology Lausanne (EPFL) have found that at least some of these contractors, specifically those who work through Amazon’s Mechanical Turk service, have been using ChatGPT and other large language models to do their data-labeling tasks for them. In their paper, which has the wonderful title “Artificial Artificial Artificial Intelligence,” they estimate that between 33% and 46% of Mechanical Turkers are now using LLMs to help with their work. The problem is that as more of the data used to train future A.I. systems is itself created by A.I. systems—maybe even earlier versions of the very same A.I. software—it’s very likely that the quality of the data will get steadily worse over time. And this will ultimately mean that future A.I. systems will suffer from what’s known as “model collapse,” ceasing to perform as their creators expect, a point hammered home by another recent paper (this one with an only slightly less clever title, “The Curse of Recursion”) from researchers at the University of Cambridge, Imperial College, and the University of Toronto.
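
To see why this recursion is corrosive, here is a minimal, purely illustrative simulation of my own construction (not code from either paper): each “generation” fits a simple Gaussian model to a finite sample drawn from the previous generation’s fit. Because every fit carries some estimation error, the errors compound across generations, and the distribution’s tails, the rare but informative cases, tend to be the first thing to disappear.

```python
import numpy as np

# Toy model-collapse simulation: each generation "trains" (fits a Gaussian)
# on a finite sample produced by the previous generation's model.
# With finite data, fitting error compounds across generations: the estimated
# spread tends to shrink and the mean drifts, so tail behavior is gradually lost.

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0         # generation 0: the "real" data distribution
n = 25                       # small training set keeps the effect visible

for gen in range(1, 31):
    data = rng.normal(mu, sigma, n)      # data produced by the previous model
    mu, sigma = data.mean(), data.std()  # the next model is fit to that data
    if gen % 5 == 0:
        print(f"generation {gen:2d}: mean = {mu:+.2f}, std = {sigma:.2f}")
```

Real language-model training is vastly more complicated than fitting a Gaussian, but the underlying dynamic, each generation inheriting and amplifying the previous one’s estimation errors, is the mechanism the “Curse of Recursion” authors warn about.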

This issue is not only a big threat to systems being trained using contract data labelers. It may wind up affecting us all. That’s because generative A.I. is increasingly being used to create content to autopopulate websites, sometimes with the intention of making those websites rank higher in search engine results. In the future, more and more of the internet’s content will represent the output of LLMs, not humans. And that means that future LLMs trained from scrapes of the internet will be ingesting more and more of this A.I.-generated content. It’s also why search algorithms will increasingly need to be tweaked to surface the highest-quality content, and not merely the content that has followed old-school SEO tricks. Otherwise, we will all be drinking the information poison brewed by our LLM revolution.

This is the online version of Eye on A.I., a free newsletter delivered to inboxes on Tuesdays. Sign up here.