The war against AI-produced sexually exploitative content

Posted September 21, 2024 07:28

Updated September 21, 2024 07:28

On September 11, global pop star Taylor Swift shared a widely praised Instagram post endorsing Democratic presidential nominee Kamala Harris. However, Swift's mention of fake images of herself generated by AI deepfake technology left a bitter aftertaste. Her statement that "the simplest way to combat misinformation is with the truth" was undeniably right, but it also underscored the harsh reality of AI's overwhelming influence.

AI deepfake sexual exploitation content is causing alarm in Korea as well, dangerously blurring the line between truth and falsehood. Manipulated sexual content can ruin reputations regardless of its authenticity, and even if the truth comes out later, the damage cannot be undone. The U.K.'s Independent compared this misuse of AI to turning the ordinary toaster found in every home into a weapon as powerful as a nuclear bomb.

Some may dismiss these concerns as exaggerated. However, a German IT web magazine noted in August that creating deepfakes has become so simple that it deliberately withheld step-by-step instructions for fear of encouraging copycats. All one needs is an open-source AI model and a few "appropriate" prompts. IT experts who reviewed such content explained that, despite slight awkwardness in the images, spotting their AI-generated nature requires careful examination, and that it is shocking this can be done without any specialized knowledge.

As the situation worsens, governments and legislatures in the West are stepping up their efforts. In early September, the California State Legislature passed a bill criminalizing the creation, distribution, and possession of child sexual abuse content generated with AI deepfakes. On September 11, Microsoft, OpenAI, and others pledged at the White House to remove nude images from AI training data to prevent AI-generated sexually exploitative content. The EU has also investigated whether newly developed AI models violated privacy laws by producing such content. The Washington Post praised these efforts, noting that since regulation cannot keep pace with technology, the response may be late, but that without comprehensive measures AI could become a massive disaster.

In Korea, though belated, there is also welcome news. On Thursday, bipartisan bills passed a National Assembly subcommittee to strengthen penalties for AI deepfake sexual exploitation crimes targeting children and youth and to support victims. Nearly identical legislation was proposed last year but abandoned for lack of attention, which underscores the delay. Still, it is never too late to act. Covering the White House pledge, NBC News emphasized that the war against AI deepfakes is a fight for human dignity. Protecting that dignity is the most fundamental duty of government institutions funded by taxpayers.