
An Interview with OpenAI CEO Sam Altman

Policy on AI works best “Downstream of Science”

During an on-campus interview, I had the opportunity to ask Sam Altman, the CEO of OpenAI, how governments and societies can keep pace with rapidly evolving AI systems. Altman responded that AI policy works best downstream of the science: policymaking should follow and adapt to scientific developments in AI rather than attempt to regulate the technology preemptively. Governance becomes more informed and effective as the full potential and implications of AI grow clearer over time. As the technology evolves to meet the changing needs of business, industry, and society, government policies should be updated accordingly.

For regulating advanced AI systems, Altman suggested creating an international agency similar to the International Atomic Energy Agency (IAEA).

“One idea that’s quite compelling is something like an IAEA for advanced AI systems. For example, if you have one of the 10 systems in the world that is over this threshold, you’re going to have the equivalent of weapons inspectors, and we’re going to focus the international policy on catastrophic risk that affects all of us.”

As companies across industries decide how to incorporate AI, from automation to decision-making, Altman’s comments and stance on AI regulation will be closely watched, as will the technology’s broader impact on society as adoption spreads.

Altman acknowledged the complex questions that arise with the development of AI, stating,

“How do we as a society agree on, how do we negotiate, the sort of ‘here’s what AI systems can never do’? Where do we set the defaults? How much does an individual user get to move things around within those boundaries? How do we think about different countries’ laws? This is an area where we are very actively working…”

However, Altman also recognized the possibility of AI becoming more accessible and the need for adaptable policies.

“And I think that’s a great idea, and it’s workable, if the technology goes like we think it’s going to go. However, if it turns out you can make AGI on a laptop, which I don’t think is the case, but I can’t prove to you you never can, then you need a super different policy approach.”

Artificial intelligence (AI) has the potential to revolutionize many aspects of our lives, from the way we conduct business to how we learn and interact with one another. As AI technologies advance at an unprecedented pace, it is crucial to consider their implications for employment, education, and society as a whole. With AI systems becoming increasingly sophisticated and capable of performing tasks once the exclusive domain of humans, policymakers and industry leaders must navigate the challenges and opportunities presented by this transformative technology.

As the interview drew to a close, Sam and I posed for some photos. In that moment, I made sure to ask him the real question I had. 

“What keeps you up at night about AGI?”

“The unknown,” he replied.

Sam Altman, CEO of OpenAI, visited Harvard on May 1st for a series of events hosted by Xfund, the early-stage VC fund co-founded by Harvard’s School of Engineering and Applied Sciences (SEAS). Altman sat down for a conversation with Xfund’s Patrick Chung, who was his first investor. He also judged a pitch competition where student entrepreneurs presented their startup ideas, had lunch with Computer Science students, and attended a community dinner.

During the visit, Altman was awarded the 2024 Xfund Experiment Cup, which recognizes extraordinary founders from top universities worldwide. The award is co-hosted by Harvard Business School’s Institute for Business in Global Society (BiGS), SEAS, and the Digital Data Design Institute at Harvard.