OpenAI CEO Sam Altman shared his views on so-called weaponized AI during a Tuesday video call with the Brookings Institution, a U.S. think tank. Michael E. O'Hanlon, a senior fellow and director of research in foreign policy at the institution, asked whether AI systems should be allowed to make decisions that could lead to the loss of human life, for example if South Korea were to use defense robots to intercept manned fighter jets sent by North Korea. Altman replied that such a scenario raises many questions.
Altman said AI programs could be allowed to make interception decisions when North Korean fighter jets are approaching and too little time remains for human controllers to intervene. He added that there are gray areas, such as how confident human decision-makers can be that an attack is actually underway.
Altman clearly affirmed OpenAI's commitment to the United States and its allies, saying he hopes AI technologies will benefit humanity rather than serve as a tool for countries under leaders he may not agree with. OpenAI has also recently advocated for mandatory human intervention in AI-enabled weaponry, and Washington is preparing to hold talks with Beijing this month to foster bilateral regulatory cooperation.
Competition for AI technologies among intelligence agencies around the world is intensifying. Bloomberg reported the same day that Microsoft has developed a top-secret generative AI model, disconnected from the internet, for U.S. intelligence agencies. Such models had previously seen only limited use in intelligence work because internet-connected generative AI systems are vulnerable to data leaks and cyberattacks.
Moon Byung-ki, Washington correspondent, weappon@donga.com