Speaking at the CEO Council Summit held by the WSJ in London (UK), Eric Schmidt, former CEO of Google, expressed concern that artificial intelligence (AI) could pose an existential risk to humanity.
"I define this risk as one where very, very many people are harmed or killed," Schmidt said. He argued that governments need to ensure the technology is not exploited by bad actors.
The former Google CEO outlined a near-future scenario in which AI is able to find zero-day security vulnerabilities in networks or discover new forms of biology.
"It's fiction today, but its reasoning is likely to be true. When that happens, we need to be ready to make sure these things are not misused by evil people," Schmidt said.
In recent years, the future of AI has become a hot topic of discussion in technology and policy-making circles.
Since ChatGPT's surge in popularity last year, technology companies have been racing to develop and launch AI products. Many experts believe AI will advance rapidly in the near future and needs to be brought within a legal framework.
Schmidt served as CEO of Google from 2001 to 2011. In the interview, he admitted that he does not have a clear answer on how AI should be regulated, calling it "the big question for society". He also said the prospect of the US establishing a regulatory body dedicated to AI is low.
According to CNBC, the former Google CEO is not the only prominent figure in the technology world to warn about the risks of AI. In March, Sam Altman, CEO of OpenAI, admitted to being "a little bit scared" of the technology.
Billionaire Elon Musk has called AI one of the "biggest risks" to human civilization. Sundar Pichai, CEO of Alphabet, said AI will "impact every product of every company", and that society needs to prepare for the changes ahead.
Schmidt chaired the US National Security Commission on Artificial Intelligence, which assessed the technology's potential, including possible regulatory frameworks, starting in 2019. Its final report, published in 2021, concluded that the US was not ready for the age of AI.