Bill Gates, along with Eric Schmidt and Nvidia, just minted another A.I. unicorn in a massive $1.3 billion funding round

CEO Mustafa Suleyman raised $1.3 billion for startup Inflection AI in a new funding round.
Courtesy of Inflection AI

San Francisco–based A.I. startup Inflection AI just raised $1.3 billion in a new round of funding from the likes of Microsoft, Nvidia, and former Google CEO Eric Schmidt—not to mention a certain Bill Gates. The funding will support further development of Inflection AI’s first product, a personal assistant and companion named Pi (short for Personal A.I.), which launched in May.

The new round brings the total Inflection has raised to more than $1.5 billion. The company had previously raised $225 million in early 2022 from some of the same investors, as well as former Meta chief technology officer Mike “Schrep” Schroepfer, Google DeepMind cofounder and CEO Demis Hassabis, and pop artist will.i.am.

Inflection’s goal is to build a kind of universal A.I.-powered digital assistant. Many technologists, including Gates, see this kind of assistant as the future of all human-computer interaction.

“Whoever wins the personal agent, that’s the big thing, because you will never go to a search site again, you will never go to a productivity site, you’ll never go to Amazon again,” Gates said at a Goldman Sachs– and SV Angel–sponsored talk on artificial intelligence in San Francisco in May. Just weeks ago, Gates was involved in another “unicorn” funding round, meaning one that values a startup at $1 billion or more, joining a group of prominent investors pouring money into mining startup KoBold Metals.

The size of Inflection’s new funding round reflects skyrocketing investor enthusiasm for startups pioneering generative A.I., particularly those creating the underlying large language models (LLMs) that are at the heart of the current A.I. boom. In recent months, startups such as Cohere, Anthropic, and Runway have announced funding rounds in the hundreds of millions of dollars. But the $1 billion–plus funding for Inflection also reflects the vast expense of creating these A.I. models, which must be trained on pricey, specialized computer chips in large data centers.

“A powerful benefit of the A.I. revolution is the ability to use natural, conversational language to interact with supercomputers to simplify aspects of our everyday lives,” said Jensen Huang, founder and CEO of Nvidia, in a statement. “The world-class team at Inflection AI is helping to lead this groundbreaking work, deploying Nvidia A.I. technology to develop, train, and deploy massive generative A.I. models that enable amazing personal digital assistants.”

Inflection AI was cofounded by CEO Mustafa Suleyman, who also helped found the advanced A.I. research lab DeepMind along with Hassabis in 2010. Google acquired DeepMind in 2014 for about $650 million. Suleyman took a leave from DeepMind in 2019 following allegations from several employees that he had bullied them, but later returned to the company briefly before joining Google as a vice president in charge of product management and policy for A.I. He then left Google in 2022 to join venture capital firm Greylock Partners, where Inflection AI was incubated. Greylock partner Reid Hoffman, who cofounded LinkedIn, was an early executive at PayPal, and was an early backer of OpenAI, is also an Inflection cofounder.

A number of venture capitalists have predicted the integration of A.I. personal assistants into daily life. Marc Andreessen, the billionaire cofounder of Andreessen Horowitz, wrote in a 7,000-word manifesto that chatbots like Pi would one day be pervasive in every sector, from creative arts to government.

“Every person will have an A.I. assistant/coach/mentor/trainer/advisor/therapist that is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful,” Andreessen wrote in June. “The A.I. assistant will be present through all of life’s opportunities and challenges, maximizing every person’s outcomes.”

Inflection’s first product, a chatbot called Pi, is much more limited than this kind of genie-like personal agent. In fact, it is more limited than many competing chatbots, such as OpenAI’s ChatGPT and Google’s Bard. Pi is designed, Suleyman told Fortune, to simply be an empathetic listener and good conversationalist, rather than a tool for writing research reports, brainstorming marketing ideas, or writing software code—all tasks that rival chatbots can perform.

“I think everybody is going to have a chief of staff in their pocket that is knowledgeable, kind, supportive, and also very practical,” Suleyman said during a Collision livestream on Thursday. “There aren’t any humans that can do all of those skills at once in a single experience and a single person…It’ll be consigliere, it’ll be confidante, it’ll be chief of staff, it’ll be coach, it’ll be educator and teacher, all in one.”

“Personal A.I. is going to be the most transformational tool of our lifetimes. This is truly an inflection point,” Suleyman noted. “We’re excited to collaborate with Nvidia, Microsoft, and CoreWeave, as well as Eric, Bill and many others to bring this vision to life.” 

Pi is a “teacher, coach, confidante, creative partner, and sounding board” that is mainly useful in its readiness to field conversations with users at any time, according to Inflection.

But Inflection’s ambitions are far loftier. The A.I. system that powers Pi, a large language model the company calls Inflection-1, has outperformed a number of competing LLMs—including OpenAI’s GPT-3.5, the system powering the free version of ChatGPT, and LLaMA, an A.I. system created by Meta researchers and released openly for research use—on a range of tasks, according to Inflection. The system still lags the reported performance of the largest LLMs, such as OpenAI’s GPT-4 and Google’s PaLM 2.

In the press release announcing its latest funding round, Inflection signaled its intention to surpass these competing systems. It said it would use the money it has raised to build a supercomputer, in conjunction with chipmaker Nvidia and cloud computing infrastructure company CoreWeave, for training massive A.I. models. The supercomputer will be built around 22,000 of Nvidia’s most powerful chips for A.I. applications, its H100 Tensor Core graphics processing units. This would be one of the most powerful GPU clusters in the world.

“This is going to allow us to train models that are at least an order of magnitude larger than anything that is available in the cutting edge in the world today, that is, GPT-4,” Suleyman said of the supercomputer during the livestream.

The supercomputer recently completed an MLPerf reference training task, an industry benchmark that measures how quickly computing systems can train A.I. models, in a state-of-the-art time of 11 minutes.
