The Google Brain-DeepMind merger is good for Google. It might not be for us

Demis Hassabis, DeepMind's co-founder and CEO, will lead the newly combined Google DeepMind as Alphabet seeks to show it is still at the cutting edge of A.I. in the face of increased competition from Microsoft, OpenAI, and a slew of startups.
Photo by Samuel de Roman—Getty Images

Hello and welcome to April’s special edition of Eye on A.I. Last week, Alphabet CEO Sundar Pichai announced that he was merging the company’s two advanced A.I. research labs, Google Brain, which operates out of the company’s headquarters in Mountain View, California, and the London-based DeepMind. The two groups are forming a new unit called Google DeepMind, which will be headed by DeepMind co-founder and CEO Demis Hassabis.

It’s a logical move for Google, at a time when it finds itself in an unexpected, and potentially existential, struggle with Microsoft to see who can infuse cutting-edge generative A.I. into its product offerings the fastest. At stake is Google’s main revenue and profit driver—its dominance of internet search—as well as a bunch of other Google product lines, including its Workspace office productivity software and its cloud computing services. But, while merging Brain and DeepMind might be a winning combination for Alphabet, we all might wind up losing.

To understand what’s at stake for Alphabet, and for the public, it’s worth taking a quick look at the history of Google’s dueling A.I. teams.

Google Brain was already up and running when the company acquired DeepMind in 2014 for a reported $650 million, beating out a rival bid from Facebook. Google co-founder Larry Page was the driving force behind the purchase. He had recognized the importance that deep learning—the kind of A.I. that is based on neural networks—would play in determining Big Tech’s future. Page had lured a host of A.I. luminaries to Google Brain, including deep learning pioneer and future Turing Award-winner Geoff Hinton, along with his two graduate students Alex Krizhevsky and Ilya Sutskever, offering seven-figure salaries to win them amid fierce competition from rivals such as Microsoft and Facebook (now Meta). (Sutskever would in turn go on to co-found OpenAI and lead the research team there.) According to an anecdote in Cade Metz’s 2021 book Genius Makers, Page was on a private jet with Elon Musk and Luke Nosek, who along with Musk is a member of Silicon Valley’s PayPal mafia, when he overheard Nosek and Musk discussing a DeepMind breakthrough. The London lab had created a deep learning A.I. that could, in just a few hours of training, learn to play various classic Atari video games at superhuman levels. Page, according to the story, decided there and then that he had to buy DeepMind.

DeepMind’s founders had little choice but to sell. Although they had received venture funding from Peter Thiel and others, including Musk, DeepMind had no revenue-generating products and was burning through millions each year in salaries and computing costs. It could not hope to compete against the deep pockets of Google, Facebook, Microsoft, and Chinese rivals like Baidu. It needed tens of millions more to retain top A.I. researchers and to buy time on the expensive, specialized computer chips required to run deep learning systems. (A similar calculation has driven OpenAI into Microsoft’s arms since 2019.)

It might have made sense to merge the two units immediately after the acquisition. But as part of the deal, Hassabis and his co-founders, Mustafa Suleyman and Shane Legg, had negotiated a substantial degree of independence for DeepMind, which would continue to operate as a separate company, headquartered in London, with its own leadership team and unique culture.

Until 2022, DeepMind’s expenses were lumped in with Alphabet’s “Other Bets” in its SEC filings, a catchall category for operating units considered “individually immaterial” to Alphabet’s finances, such as its self-driving unit Waymo and its life sciences unit Verily. (DeepMind’s costs have since been moved onto Alphabet’s main P&L “reflecting its increasing collaboration with Google Services, Google Cloud, and Other Bets.”) DeepMind also reportedly tried for years to win even more independence from its parent, attempting to negotiate an arrangement under which the company would be at least partially spun out of Alphabet as a non-profit entity. But those efforts failed, and DeepMind told staff it was abandoning the negotiations in 2021.

There was initially some logic to having Brain and DeepMind function separately, beyond the personal preferences of Hassabis and his co-founders. At the time of the DeepMind deal, Google Brain was working on computer vision, the use of A.I. to classify and process images, and natural language processing. In this work, it tended to use either supervised learning—where an A.I. system learns from labeled historical data—or unsupervised learning, where it learns from unlabeled historical data. But it did relatively little work on reinforcement learning—where an A.I. system takes actions, often in a game environment, and learns from that experience how best to maximize some reward. DeepMind, by contrast, specialized in reinforcement learning and employed some of the world’s top experts in the area.
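
For the technically curious, here is a minimal Python sketch of the three paradigms described above. Everything in it (the data, the toy "bandit" environment, the crude training rules) is invented purely for illustration; it is not Google's or DeepMind's code:

```python
# Toy, purely illustrative sketch of the three learning paradigms.
import random

# Supervised learning: fit a rule to labeled historical data.
labeled = [(0.1, "cat"), (0.2, "cat"), (0.8, "dog"), (0.9, "dog")]
threshold = sum(x for x, _ in labeled) / len(labeled)  # crude "training"

def predict(x: float) -> str:
    return "dog" if x > threshold else "cat"

# Unsupervised learning: find structure in unlabeled data.
# Here, split points into two groups around their mean.
unlabeled = [0.05, 0.15, 0.85, 0.95]
center = sum(unlabeled) / len(unlabeled)
clusters = {x: int(x > center) for x in unlabeled}

# Reinforcement learning: take actions, observe rewards, update value
# estimates. A two-armed bandit: learn which action pays off more.
def reward(action: int) -> float:
    return random.gauss(1.0 if action == 1 else 0.2, 0.1)  # hidden payoffs

values, counts = [0.0, 0.0], [0, 0]
for _ in range(200):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    a = random.randrange(2) if random.random() < 0.1 else values.index(max(values))
    counts[a] += 1
    values[a] += (reward(a) - values[a]) / counts[a]  # running average update

print(predict(0.7))               # -> "dog"
print(clusters)                   # -> {0.05: 0, 0.15: 0, 0.85: 1, 0.95: 1}
print(values.index(max(values)))  # learned best action -> 1 (with high probability)
```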

The two labs also had different organizational setups. Brain was modeled more on legendary corporate research labs such as Bell Labs or Xerox PARC. Researchers were able to pursue their own projects and research interests, with relatively little top-down direction. At DeepMind, however, Hassabis had explicitly sought to create a hybrid between the freedom of academia and the founder-led startup culture he had experienced at gaming companies. He once told me he modeled his approach in part on how Robert Oppenheimer ran the Manhattan Project. He assigned teams to go after grand challenges—beating humans at Go, solving protein folding—and would flood staff onto teams whose approaches seemed to show the most progress, while killing off projects that failed to hit milestones.

Still, having two labs created tensions. Brain and DeepMind competed for talent and, perhaps more critically, for datacenter time. I witnessed the occasional bitterness of this fraternal antagonism: at the prestigious NeurIPS A.I. conference in 2018, I looked on as two DeepMinders quizzed a couple of Brain researchers who were presenting a poster on their research. The DeepMinders wanted to know why the Brain guys hadn’t tried an obvious extension of the method they were trialing. “Yes, well, we would have done that,” one of the Brain dudes retorted. “If you guys hadn’t been hogging all of our goddamn compute!”

In recent years, however, the differentiation between the two labs has blurred. DeepMind increasingly began to use supervised and unsupervised learning in its work, not just reinforcement learning. It also increasingly worked on computer vision and natural language processing—the two areas that had been Brain’s forte. For instance, DeepMind and Brain both created large language models (PaLM for Brain; Gopher for DeepMind) and chatbot systems (LaMDA for Brain; Sparrow for DeepMind). Given the datacenter capacity it takes to create these huge natural language processing systems, and how important these tools are to Google’s future business, it didn’t make much sense to continue funding two separate efforts.

There may also be other factors at play. Margaret Mitchell, who is the A.I. ethics lead at Hugging Face and was formerly co-head of Google’s Ethical A.I. team, tweeted that she felt the merger was being done in part to help mask attrition from Brain. The lab has lost a number of prominent A.I. researchers since 2021, some of whom have left to found A.I. startups of their own and some of whom have gone to work for other Big Tech companies. Some were demoralized and left after Google fired Mitchell and her co-head Timnit Gebru. Others voiced frustration privately over Google’s corporate bureaucracy and its risk-averse culture, which made it hard to translate research breakthroughs into products. Mitchell was among those who criticized the leadership of Jeff Dean, the Google senior vice president who oversaw Brain, posting on Twitter that “he shouldn’t be a manager.” In the new structure, Dean, who was one of Google’s earliest hires and is a legendary coder, will become Google’s chief scientist, reporting directly to Google CEO Sundar Pichai.

OK, so all that explains why the merger may make sense from Google’s perspective. Why might it be bad for us? Well, two reasons. The first is that since 2018, a big portion of DeepMind’s work has been devoted to using A.I. to create tools that can lead to major scientific advances. The company has an entire division dedicated to this. It created a system that could help decipher ancient Greek inscriptions; another that learned to better control a nuclear fusion reaction. It found a better way to perform matrix multiplication, a mathematical technique that is essential to deep learning itself but also has lots of other applications in science and engineering. Most important of all is AlphaFold, the DeepMind software that predicts the structure of proteins. (I have written a number of stories about AlphaFold, including this large feature.) AlphaFold—which DeepMind has made freely available along with a database that contains predicted structures for almost every protein known to biology—has been transformative. AlphaFold’s predicted protein structures have quickly become a standard tool in biology and are helping to speed research into new medicines for diseases ranging from Covid to leishmaniasis to cancer. The software is also helping researchers design enzymes that could digest plastics, a potential boon for sustainability.
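
For a flavor of what "a better way to perform matrix multiplication" means: the classic result in this area is Strassen's 1969 algorithm, which multiplies 2x2 matrices with seven scalar multiplications instead of the naive eight, and DeepMind's AlphaTensor system used reinforcement learning to search for new schemes of this kind. Below is a minimal Python sketch of Strassen's trick, included purely as illustration (it is the classical textbook scheme, not DeepMind's method or code):

```python
# Strassen's scheme: multiply two 2x2 matrices with 7 multiplications
# instead of 8. Classical textbook version, shown for illustration only.
import numpy as np

def strassen_2x2(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
assert np.allclose(strassen_2x2(A, B), A @ B)  # matches the naive product
```

Applied recursively to large matrices split into blocks, that single saved multiplication compounds into an asymptotically faster algorithm, which is why discovering schemes like this matters for deep learning workloads.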

All of these tools for advancing science are essentially gifts DeepMind has given to the world. AlphaFold alone represented a multibillion-dollar business opportunity that DeepMind simply gave away for free. (Hassabis did also form a new for-profit company, Isomorphic Labs, another of Alphabet’s “Other Bets,” that is using similar methods to advance drug discovery.) If the creation of Google DeepMind means that Google reduces DeepMind’s emphasis on applying A.I. to big scientific questions that have no direct bearing on Google’s business, as certainly seems possible, then Alphabet will have rescued its bottom line, but we will all be poorer for it.

The other big risk is the speed of A.I. development. Hassabis is on the record saying “I would advocate not moving fast and breaking things” when it comes to advancing this powerful technology. But his boss, Pichai, is clearly under pressure to do exactly that, even though the Alphabet CEO has admitted that he sees a “mismatch” between “the pace at which we can think and adapt as societal institutions, compared to the pace at which the technology’s evolving.” The generative A.I. boom has the potential to upend industries, exacerbate economic inequality, supercharge misinformation, undermine trust, amplify societal biases, increase social isolation, and devalue human creativity. And those are all risks we face from today’s generative A.I.—to say nothing of the possibility that the current A.I. arms race ends in the creation of an artificial superintelligence that even those working at the cutting edge of the technology, such as Hassabis and OpenAI CEO Sam Altman, say could spell our doom as a species. Hopefully, Hassabis will be able to use his position at the helm of the new Google DeepMind to ensure Google deploys any A.I. it develops safely. But commercial logic argues against such caution. And DeepMind’s best minds are now subject to more commercial pressure than ever before.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

A.I. IN THE NEWS

European Parliament finalizes draft of landmark EU A.I. Act. The EU Parliament has agreed on a draft of its landmark EU A.I. Act. Final language for the proposed law had been held up by debate over how to deal with the kinds of general-purpose “foundation models” that underpin the recent generative A.I. boom. The draft bill says companies making generative A.I. tools will have to disclose whether any copyrighted material was used in training these systems. They will also have to meet stricter obligations for risk assessments and report the environmental footprint of their A.I. models. The law, which may set a global standard for A.I. in much the same way as the EU’s data privacy law GDPR, will now be subject to negotiations between the EU Parliament, the EU Commission, the bloc’s executive branch, and the European Council, which represents the national governments of member states. A final version is expected to be voted on in June and would come into force next year. You can read more about the draft law in this VentureBeat story.

U.S. government agencies issue joint warning on A.I. The Federal Trade Commission, the Department of Justice, the Consumer Financial Protection Bureau, and the Equal Employment Opportunity Commission issued a joint statement reminding companies that the agencies have substantial power under existing laws to regulate A.I., and pledging to use that power to promote responsible A.I. innovation. The agencies have previously expressed concerns about potentially harmful uses of automated systems. The joint statement is particularly concerned with fairness and bias in A.I. systems that could impact civil rights, fair competition, consumer protection, and equal opportunity.

A.I.-generated advertising goes viral. An advertisement for a fictitious pizza delivery business, created entirely with different kinds of generative A.I. software and with no human actors, film crews, or editors, went viral on social media. The ad used Runway’s Gen-2 text-to-video system, Eleven Labs’ voice generation software, and Soundraw’s A.I. music generator. The same day, the Republican National Committee released an ad attacking U.S. President Joe Biden’s re-election effort that the RNC said had been created with A.I.

Vector database startup Pinecone raises $100 million. Pinecone, an Israel-based startup whose vector database technology has become a key tool for those looking to build products using large language models (LLMs), secured a $100 million Series B round that values the company at a reported $750 million, TechCrunch reported. The round was led by Andreessen Horowitz, with participation from ICONIQ Growth and previous investors Menlo Ventures and Wing Venture Capital. People have turned to vector databases as a way to give LLMs a form of long-term memory and as a way to reduce the chance that generative A.I. models will make up information (a phenomenon A.I. folks call “hallucinations”). Vector databases also allow users to search unstructured data easily, using natural language, without having to rely on keywords or phrases contained in the data. But the databases can be more expensive to run and slower than some traditional structured databases.
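
For readers curious what a vector database actually does: the core idea is to store each document as a numerical embedding vector and answer queries by similarity search over those vectors. The sketch below illustrates that idea in plain Python with NumPy. The embed() function is a toy stand-in for a real embedding model, and nothing here reflects Pinecone's actual API or internals, which rely on approximate nearest-neighbor indexes at far larger scale:

```python
# Minimal illustration of the vector-database idea: store embeddings,
# retrieve by similarity. embed() is a made-up toy; real systems call
# a learned embedding model and use approximate nearest-neighbor search.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy embedding: hash characters into a fixed-size unit vector."""
    v = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        v[(ord(ch) + i) % 64] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

# "Upsert": store documents alongside their embedding vectors.
docs = [
    "Alphabet merges Google Brain and DeepMind",
    "Pinecone raises a $100 million Series B",
    "AlphaFold predicts protein structures",
]
index = np.stack([embed(d) for d in docs])

def query(text: str, top_k: int = 2):
    """Embed the query and rank stored documents by cosine similarity.
    An application might feed the top hits to an LLM as context, which
    is how vector stores act as the model's "long-term memory"."""
    scores = index @ embed(text)  # dot product of unit vectors = cosine
    best = np.argsort(scores)[::-1][:top_k]
    return [(docs[i], round(float(scores[i]), 3)) for i in best]

print(query("protein folding research"))
```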
