What needs to be done before facing Europe’s AI barrier

Posted December 23, 2023 08:07

Updated December 23, 2023 08:07

The European Union (EU) has recently reached an agreement on the draft Artificial Intelligence Act (AI Act), which imposes a set of obligations on developers throughout the entire life cycle of an AI system. Companies aiming to enter the European market must ensure compliance with the AI Act at every stage, from service design and deployment to post-market operation.

For instance, consider an artificial intelligence company building a virtual HR (human resources) manager. From the earliest development stages, the company must institute rigorous data governance: it must demonstrate that the training data is free of bias and maintain full transparency about where the data comes from.

During the system development stage, the algorithm must be carefully designed to head off potential controversies over bias or unfairness, in strict adherence to the AI Act. After market release, trust must be sustained through ongoing monitoring; if issues arise, the system should have incident response capabilities in place to promptly report and rectify malfunctions. The company must also measure the system's energy consumption and actively work to minimize its carbon footprint.

If AI companies were assessed under the new legal framework, a majority would likely receive a failing grade. This expectation aligns with the evaluation that Stanford's Center for Research on Foundation Models (CRFM) conducted on AI foundation models in June of this year, using the initial draft of the AI Act.

Companies that fail to comply with the AI Act could face fines of up to 35 million euros (approximately 49.7 billion won), varying with the size of the company. How heavy the fines will be in practice remains to be seen.

Consequently, numerous companies are reassessing their entry into the European AI market. Google, for instance, citing privacy concerns, opted to postpone the European release of its generative AI chatbot Bard. Some critics argue that the AI Act serves as a trade barrier, buying time for artificial intelligence companies in EU countries to achieve self-reliance.

However, there is little time for complaints; instead, a proactive response strategy must be devised in anticipation of Europe's AI regulations being adopted both within the EU and in other regions of the world. Korea's artificial intelligence bill follows the principle of 'permit first, regulate later,' prioritizing technology development and regulating only when problems arise. While this approach aims to foster technological advancement, concerns persist that Korea may fall out of step with international regulations if it does not adapt.

Moreover, it is imperative to engage actively behind the scenes to ensure that our perspectives are thoroughly considered in the formulation of international AI regulations. How the obligations companies must meet under the EU's AI Act are interpreted can vary with context. There is also a significant likelihood that AI-dominant countries such as the U.S. will craft AI regulations that emphasize growth over strict regulation, in contrast to Europe.