Anthropic’s quest for better, more explainable AI attracts $580 million

Less than a year ago, Anthropic was founded by former OpenAI VP of research Dario Amodei with the intention of doing research in the public interest to make AI more reliable and explainable. Its $124 million in funding was surprising then, but nothing could have prepared us for the company raising $580 million less than a year later.

“With this fundraise, we will explore the predictable scaling properties of machine learning systems while closely examining the unpredictable ways in which capabilities and safety issues can emerge at scale,” Amodei said in the announcement.

His sister Daniela, with whom he co-founded the public benefit corporation, said that having built the company, “We are focused on ensuring that Anthropic has the culture and governance to continue to responsibly research and develop safe AI systems as we grow.”

There’s that word again – scale. Because that’s the problem category Anthropic was created to investigate: how to better understand the AI models increasingly used in every industry as they grow beyond our ability to explain their logic and results.

The company has already published several articles looking at things like reverse engineering the behavior of language models to understand why and how they produce the results they do. Something like GPT-3, probably the most well-known language model out there, is undeniably impressive, but there’s something disturbing about the fact that its inner workings are essentially a mystery, even to its creators.

As the new funding announcement explains:

The aim of this research is to develop the technical components needed to build large-scale models that have better implicit safeguards and require fewer after-the-fact interventions, as well as to develop the tools needed to look further inside these models to be confident that the safeguards actually work.

If you don’t understand how an AI system works, you can only react when it does something wrong – for example, exhibits bias when recognizing faces, or a tendency to draw or describe men when asked about doctors and CEOs. That behavior is baked into the model, and the solution is to filter its output rather than prevent it from having those incorrect “notions” in the first place.

It’s kind of a fundamental change in the way AI is built and understood, and as such it requires big brains and big computers – neither of which is particularly cheap. No doubt $124 million was a good start, but apparently the early results were promising enough that Sam Bankman-Fried led this massive new round, joined by Caroline Ellison, Jim McClave, Nishad Singh, Jaan Tallinn and the Center for Emerging Risk Research.

Interestingly, none of the usual deep tech investors appear in that group – but of course Anthropic isn’t focused on turning a profit, which is kind of a deal-breaker for VCs.

You can follow the latest research from Anthropic here.