Meta’s large language model leak awakens debate over open or closed A.I. research

A person looks at their phone while standing at a booth.
Paul Yeung—Bloomberg—Getty Images

Hello, it’s tech fellow Andrea Guzman bringing today’s Data Sheet to you while David is off. 

A few days ago, I opened up TikTok to the sound of Taylor Swift mocking her fans and saying she doesn’t care that her tickets cost more than $1,000. It was such an out-of-character statement for the pop darling, riding a wave of good press over her upcoming Eras Tour, that it was obvious someone had used A.I. to produce it.

Whoever made the Swift deep fake did so for absurdity’s sake, even titling the TikTok sound “taylor speaking facts.” But the reality that someone could leverage generative A.I. to create new synthetic media for more nefarious purposes is worrying researchers and politicians.

The nonprofit group Partnership on AI spells out cases like these in its framework on responsible practices for synthetic media. It notes that techniques like “representing a specific individual having acted, behaved, or made statements in a manner in which the real individual did not” can be used to cause harm.

One way some A.I. researchers hope to mitigate malicious content is by opening generative A.I. systems to scrutiny, so that more than just the company that produces the A.I. and its partners can test them and discover shortcomings and biases. Critics of open-source systems, however, note that opening them to the public increases the likelihood that bad actors could misuse the tools, and instead champion closed A.I. research.

This debate heated up in recent days after Meta’s powerful large language model was leaked to 4chan. The Facebook parent previously made it available only to approved researchers and government organizations. But now anyone can download it, tamper with it, and deploy it however they wish. As Vice notes, this was “the first time a major tech firm’s proprietary AI model has leaked to the public.” (Despite the leak, Meta said it would not discontinue its open A.I. research practices.)

As The Verge explains, advocates for open research think the leak will pressure A.I. developers into establishing safeguards. But for now, most of the biggest A.I. players are continuing their closed-door approach, creating only portals and chatbots for the public to use and interact with.

Want to send thoughts or suggestions to Data Sheet? Drop a line here.

Andrea Guzman

NEWSWORTHY

25 cents. That's how much Twitter could make per hour of user attention on the platform, Elon Musk told an audience gathered for a Morgan Stanley conference in San Francisco Tuesday. Musk also said Twitter could break even on a cash flow basis in the second quarter.

White House backs bill targeting TikTok. A bipartisan bill that could give the president authority to ban or force a sale of TikTok was endorsed by the White House yesterday. It’s the first time the Biden administration has weighed in on legislation to deal with TikTok, Bloomberg reports. While the video-sharing app isn’t mentioned by name, the bill is aimed at technologies, applications, software, or e-commerce platforms that present a national security threat to U.S. users.

Tesla faces probe for detached steering wheels. The National Highway Traffic Safety Administration is investigating Tesla’s Model Y SUV following complaints that the steering wheels can come off while someone is driving. Tesla has faced probes in recent years over issues with its driver-assist system and its full self-driving software.

ON OUR FEED

“Facebook is off to a great start this year. Contrary to reports otherwise, Facebook is not dead nor dying, but in fact alive and thriving with 2 billion daily active users.” 

Head of Facebook Tom Alison, in a blog post laying out priorities for the social media app, including generative A.I., tools for creators, and access to Messenger within the Facebook app

IN CASE YOU MISSED IT

Salesforce CEO Marc Benioff says nobody saw the market downturn after the ‘best year tech ever had’— but now he’s bracing for a recession, by Tristan Bove

Google middle managers hoping for a big promotion better think again, by Christiaan Hetzner

Missing $96,000 is your problem, Coinbase allegedly told account holder who had life savings cleaned out, by Chloe Taylor

Elon Musk shares rare regrets for brutally mocking a disabled former Twitter employee: ‘I would like to apologize’, by Verne Kopytoff

YouTuber who won a seat in Japan’s senate may be expelled because he hasn’t shown up for a day of work in over 7 months, by Nicholas Gordon

Tech workers are among the increasing number of men seeking plastic surgery for a jawline that rivals Batman’s, by Erin Prater

BEFORE YOU GO

Some history on this International Women’s Day. The mathematician Ada Lovelace is considered the first computer programmer for publishing an algorithm for a proposed computer known as the Analytical Engine. She died in 1852, but her legacy lives on. Along with being featured on the U.K. passport and honored with a statue in London, she is the namesake of the Ada Developers Academy, a cost-free coding school that enrolls Black and brown women, trans, and nonbinary people to advance their careers in tech.

This is the web version of Data Sheet, a daily newsletter on the business of tech. Sign up to get it delivered free to your inbox.