- Sam Altman said he worried creating ChatGPT was “something really bad” given the risks AI poses.
- The OpenAI CEO was speaking to Satyan Gajwani, vice chairman of Times Internet, in New Delhi.
- He is on a six-nation tour which includes Israel, Jordan, Qatar, the UAE, India and South Korea.
OpenAI CEO Sam Altman has admitted to losing sleep over the dangers of his creation, ChatGPT.
In a conversation during a recent trip to India, Altman said he worries he may have done “something really bad” by creating ChatGPT, which was released in November and sparked a surge of interest in AI.
“What I lose the most sleep over is the hypothetical idea that we already have done something really bad by launching ChatGPT,” Altman told Satyan Gajwani, vice chairman of Times Internet, at an event organized by the Economic Times on Wednesday.
“That maybe there was something hard and complicated in there (the system) that we didn’t understand and have now already kicked it off,” Altman added.
Asked whether AI should be regulated like atomic energy, Altman said there had to be a better system in place to audit the process.
“Let’s have a system in place so that we can audit people who are doing it, licence it, have safety tests before deployment,” he said.
The risks are high
A number of tech leaders and government officials have raised concerns about the pace of development of AI platforms.
In an open letter in March from the Future of Life Institute, Elon Musk and Apple cofounder Steve Wozniak were among tech leaders who warned that powerful AI systems should be developed only once there is confidence that the effects will be positive and the risks are manageable.
The letter called for a six-month pause on the training of AI systems more powerful than GPT-4.
Altman responded to the letter, saying it “lacked technical nuance about where we need the pause.”
Earlier this month, Altman was among more than 350 scientists and tech leaders who signed a statement expressing deep concern over AI risks.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement read.