The Government held the 11th meeting of the AI Strategy Council jointly with the 1st meeting of the AI Institutional Research Group on August 2nd. Ensuring the safety of AI is essential both for promoting its utilization and for strengthening Japan's AI development capabilities. To meet this requirement, Japan has compiled guidelines covering a wide range of AI-related businesses. At this meeting, the participants agreed to begin examining how the institutional framework should be improved, including whether or not a new regulatory law is required, and to compile an interim report in the fall. The AI Institutional Research Group was established as an expert panel under the AI Strategy Council on the basis of the Integrated Innovation Strategy 2024, which was approved by the Cabinet in June. The Group is composed of 14 members, including experts on AI and legal systems as well as representatives of companies that develop and use AI.
The Group examines the ideal form of AI-related institutional arrangements to ensure safety, security, and reliability. Members of the Group commented that "enhancing AI safety is important for continuously strengthening development capabilities and AI utilization in various fields," "an agile perspective that can withstand technological and market changes is necessary," "it is important to pursue joint public-private regulation and international harmonization by combining soft and hard law under a risk-based approach," "it is necessary to train the people who create and use AI," and "AI is also necessary for protection against cyber-attacks."
Prime Minister Fumio Kishida stated: "The following four points are the basic principles for discussing the ideal system. First, risk response must be compatible with the promotion of innovation. Based on the guidelines, measures that ensure the safety of AI must be established in accordance with the magnitude of the risk. Second, a flexible system that can respond to rapid changes in technology and business must be designed. Third, the system must be internationally interoperable and consistent with international guidelines. Fourth, the government must procure and use AI appropriately. As the government's efforts will have a significant ripple effect on others, the government's own use of AI must be thoroughly examined."
According to the secretariat, a new legal system could include rules allowing the government to collect and disclose information in order to address risks in situations such as the following: an AI created by a developer of large-scale models gives wrong answers despite being widely trusted; or an AI is found to be malicious and to promote prejudice and discrimination, even if it is produced by a small developer.
A wide range of items need to be considered, including the types of risk to be addressed, the types of activity to be regulated, whether regulation should be ex-ante or ex-post, the strength of the regulation, the items to be observed, and the applicable standards. The interim report is scheduled to be compiled by this fall.
This article has been translated by JST with permission from The Science News Ltd. (https://sci-news.co.jp/). Unauthorized reproduction of the article and photographs is prohibited.