The CEOs of Google DeepMind and Anthropic discussed the heavy responsibilities they carry in a recent interview. They both advocated for the establishment of regulatory bodies to oversee artificial intelligence (AI) projects. They emphasized the importance of public understanding and preparation for the risks associated with advanced AI.
When asked whether he fears a fate similar to Robert Oppenheimer's, Google DeepMind CEO Demis Hassabis admitted he often loses sleep over such thoughts. “I worry about those kinds of scenarios all the time. That’s why I don’t sleep very much,” Hassabis explained during an interview with Anthropic CEO Dario Amodei and The Economist’s editor-in-chief, Zanny Minton Beddoes. “There’s a huge amount of responsibility — probably too much — on the people leading this technology,” he added.
Both CEOs agreed that advanced AI carries significant destructive potential as the technology grows more capable. Amodei expressed his concerns, stating, “Almost every decision that I make feels like it’s kind of balanced on the edge of a knife — if we don’t build fast enough, then authoritarian countries could win. If we build too fast, then the risks that Demis is talking about and that we’ve written about a lot could prevail.” He continued, “Either way, I’ll feel that it was my fault that we didn’t make exactly the right decision.”
Hassabis acknowledged that while AI may seem “overhyped” in the short term, the mid-to-long-term ramifications are often underestimated. He advocates for a balanced viewpoint — recognizing the “incredible opportunities” AI presents, especially in science and medicine, while also addressing the related risks. “The two big risks that I talk about are bad actors repurposing this general-purpose technology for harmful ends — how do we enable the good actors and restrict access to the bad actors?” Hassabis said. “Secondly, there’s the risk from AGI, or agentic systems themselves, getting out of control, or not having the right values or goals. Both of those issues are critical to address, and I believe the entire world needs to focus on them.”
Both Amodei and Hassabis called for a governing body to regulate AI initiatives, suggesting the International Atomic Energy Agency as a potential model. “Ideally, it would be something like the UN, but given the geopolitical complexities, that doesn’t seem very feasible,” Hassabis noted. “So, I worry about that all the time, and we try to do everything we can within our influence.”
Hassabis believes that international cooperation is crucial. “My hope is, I’ve talked a lot in the past about a kind of CERN for AGI setup, where we would have an international research collaboration focused on the final steps toward building the first AGIs,” he said. Both leaders emphasized the importance of understanding the immense change they anticipate AI will bring, urging society to begin planning for these changes. “We’re on the eve of something that poses great challenges. It’s going to drastically alter the balance of power,” Amodei remarked. “If someone dropped a new country into the world with 10 million people smarter than any living human, you’d ask, ‘What is their intent? What will they do in the world, especially if they can act autonomously?’”
“I also agree with Demis on the idea of governance structures beyond ourselves — these decisions are too monumental for any one individual,” Amodei concluded. Anthropic and Google DeepMind did not immediately respond to Business Insider’s requests for comment.