AI Utilization Strategy Task Force
Committee on Digital Economy
Keidanren
At the same time, AI may generate new safety risks for users and third parties that are not yet explicitly tackled by product safety legislation. For example, in principle, stand-alone software is not explicitly covered by EU product safety legislation, with the consequence that the risks generated by the probabilistic nature of AI are not yet clearly and specifically addressed by existing safety rules. Additionally, such legislation focuses on safety risks present at the time of placing the product on the market and presupposes "static" products, while AI systems can evolve. In addition to generating new safety risks for users and third parties, the lack of clear safety provisions tackling such risks may give rise to:
- legal uncertainty for businesses that are marketing their products involving AI in the Union, as well as for those using such products in their own processes, and
- challenges for market surveillance and supervisory authorities which may find themselves in a situation where they are uncertain whether they can intervene, because they may not be empowered to act and/or may not have the appropriate tools and means to inspect AI-enabled systems.
Specific challenges on product safety are currently also being addressed by other ongoing initiatives, such as the revisions of the Machinery Directive and of the General Product Safety Directive. The Commission will ensure coherence and complementarity between those initiatives and this initiative.
When considering the introduction of new regulations, full attention must be given to existing regulations and systems, and these must not be extended to the AI technology, including software and services, embedded in hardware. Changing the targets of regulation in this way would hold AI system developers accountable for issues in which they cannot be directly involved, and this may undermine the development and utilization of AI systems.
Since existing safety regulations were not designed with AI technology in mind, resolving the lack of explicit rules to address new safety risks will take time. Discussions on this issue need to balance the benefits for society as a whole, including developers, providers, deployers, and users of AI systems, against the possible risks.
More specifically, the aims are:
- (a) to ensure the effective enforcement of rules of existing EU law meant to protect safety and fundamental rights and avoid illegal discrimination by ensuring the availability of the relevant documentation for the purposes of private and public enforcement of EU rules;
- (b) to provide legal certainty for businesses that are marketing their AI-enabled products or using such solutions in the EU as regards the rules applicable to such products and services;
- (c) to prevent where possible or to minimise significant risks for fundamental rights and safety;
- (d) to create a harmonised framework in order to reduce burdensome compliance costs derived from legal fragmentation, which could jeopardise the functioning of the Single Market.
For business operators, it is desirable that the legal position of their businesses is clear, that the credibility of AI-related products and services is enhanced by legal underpinning, and that compliance costs are reduced through the creation of a harmonized framework. We therefore welcome the discussion in the inception impact assessment.
A discussion on accountability, which is inevitable in the process of introducing regulations, should give consideration to the areas of AI utilization, the social impact, and other background factors. Imposing uniform accountability rules on businesses operating in diverse areas may hinder AI utilization across those areas and is therefore undesirable.
There are cases where it may not be possible to sort out all AI-related rights and obligations through legal provisions, and cases where comprehensive categorization will be difficult. Even in such cases, legal stability for businesses and the safety of users can be ensured by having the parties clarify rights and obligations in a contract. It is desirable to hold discussions on new rules while paying due attention to the security and safety guaranteed by voluntary, flexible contracts freely concluded by the parties involved.
Alternative options to the baseline scenario
Striking a balance between innovation and regulation is important for realizing the interests of an advanced and highly credible data-driven society. Likewise, it is desirable to ensure the harmonization of international rules to prevent unnecessary fragmentation. Lack of clarity on what is and is not allowed under the regulations is one obstacle to the development and deployment of AI solutions in Europe. Concerns about demonstrating full compliance with regulations have undermined many business negotiations with both governments and the private sector.
In principle, it is desirable not to complicate the legal systems with which developers must comply: areas that can be governed by existing rules and systems should be regulated under those frameworks (e.g. AI used in medical equipment should be regulated through the regulations on medical equipment). New regulations should be limited to the minimum required, and when considering their introduction, compatibility with existing rules and systems should be ensured, with particular attention paid to avoiding overlapping administration.
(1) Option 1: EU "soft law" (non-legislative) approach to facilitate and spur industry-led intervention (no EU legislative instrument)
With regard to areas that existing rules and systems are unable to regulate, a "soft law" approach to policy is effective. In the current situation, where it is still premature to define and regulate high-risk AI, imposing ex ante regulations on AI-enabled products and services without any explicit basis may hinder innovation that would contribute to industrial development and help resolve social issues in Europe.
Therefore, it is better to promote industry-led measures and enhance the credibility of AI in the market by developing a joint government-private sector scheme for the appropriate evaluation of the voluntary steps taken by businesses.
Compatibility with international standards is necessary for the voluntary steps initiated by the industrial sector to be widely recognized and adopted. Continuous investigation, discussion, and improvement by all stakeholders are necessary in the process of determining policies to address the risks relating to the development and utilization of AI applications. Corporate self-governance is particularly important for the protection of fundamental rights.
(3) Option 3: EU legislative instrument establishing mandatory requirements for all or certain types of AI applications (see sub-options below).
a. As a first sub-option, the EU legislative instrument could be limited to a specific category of AI applications only, notably remote biometric identification systems (e.g. facial recognition). Without prejudice to applicable EU data protection law, the requirements above could be combined with provisions on the specific circumstances and common safeguards around remote biometric identification only.
A consensus has yet to be reached on the definition and correct understanding of biometric data, such as facial recognition and dactyloscopic (fingerprint) data, which has been identified as a "high-risk" AI application, and legal systems pertaining to its utilization have not been established. It is necessary to clarify the scope and uses of systems such as remote biometric identification and biometric authentication, as well as the difference between the two, and to discuss this issue carefully so as not to unnecessarily restrict utilization by the private sector.
Practical guidelines are needed to clarify the conditions and operational requirements for using remote biometric identification systems. The guidelines must also define remote biometric identification systems and classify their methods of use.
b. As a second sub-option, the EU legislative instrument could be limited to "high-risk" AI applications, which in turn could be identified on the basis of two criteria as set out in the White Paper (sector and specific use/impact on rights or safety) or could be otherwise defined.
Evaluation of AI as "high-risk" must be based on the current discourse on, and definitions of, "risk" at international standardization bodies. Uniform regulations must not be imposed on the diverse sector-specific definitions and methods for risk assessment and management; instead, measures based on the existing regulations in each sector should be considered.
While the criteria for defining high-risk AI set out in the European Commission's White Paper will be indispensable for realizing credible AI in the future, technical validation of these criteria is still very difficult at present. It must also be noted that the criteria must not be applied uniformly in cases where the risks of systems using high-risk AI can be fully eliminated or mitigated through physical safety measures and operations. Therefore, it is desirable to adopt a step-by-step approach, arriving at a realistic timetable and criteria based on a road map drawn up after consultation with experts. The substance of the road map and the criteria must be consistent and conform with European as well as international standards.
c. In a third sub-option, the EU legislative act could cover all AI applications.
There are many AI applications, such as those relating to the optimization of production processes and energy use, that pose none of the risks to fundamental rights and safety being considered in this assessment. Imposing uniform criteria on all applications would bring such applications only disadvantages, such as increased costs. Therefore, the third sub-option under Option 3 is inappropriate.
(4) Option 4: combination of any of the options above taking into account the different levels of risk that could be generated by a particular AI application.
The standards for fairness, safety, and quality differ across countries, cultures, sectors, users, and so forth. Therefore, it is necessary to choose the appropriate measures, such as a "soft law" approach or measures based on existing regulations, for each AI technology and sector, so as to adapt flexibly to AI and other rapidly evolving new technologies.
In introducing regulations, consistency and conformity with the General Data Protection Regulation (GDPR), which is recognized as a vital framework for the privacy and security of European citizens' personal data and regarded as a model by many countries, are very important. To promote such consistency and conformity, cooperation with the European Data Protection Board (EDPB) and with national and international data protection bodies in the implementation of the regulations is advisable.
The public intervention may however impose additional compliance costs, in so far as the development of some AI systems may have to account for new requirements and processes. If compliance costs outweigh the benefits, it may even be the case that some desirable AI systems may not be developed at all.
Imposing burdensome conditions on companies developing AI applications may have economic consequences, including weakened international competitiveness for EU companies, as products and services are launched in the EU market later than in other economic zones, or not launched at all. It must also be noted that this may impede innovation.
It must be noted that even with uniform rules across EU member states, a lack of conformity with international regulations, which are important for businesses engaged in global operations, will increase compliance costs.
The assessment will also have to consider which measures a responsible economic operator would take even without explicit public intervention.
The extent of the economic benefits depends, all other things being equal, on the increase in trust. Other things being equal, users will have more trust when they can rely on legal requirements, which they can enforce in courts if need be, than if they have to rely on voluntary commitments.
Regardless of whether there is explicit public intervention, AI models change through learning after a service is launched. Even if the measures that responsible operators need to take at launch are anticipated, those operators may still face unforeseen litigation risks as a result of subsequent model changes, compromising the social benefits the AI would otherwise have generated. For this reason, the terms that suppliers of products and services must abide by should be clarified to ensure legal certainty.
Due to the high scalability of digital technologies, small and medium enterprises can have an enormous reach, potentially impacting millions of citizens despite their small size.
As stated elsewhere in this document, the extent of AI's impact on society, the environment, fundamental rights, and so forth cannot be measured by the capital or number of employees of the developer; it is largely determined by the number of users and by how the products and services are used. Therefore, it is not necessarily appropriate to use the size of the developer, or whether it is a small or medium-sized enterprise, as a criterion for determining whether it is subject to regulation.
Likely social impacts
We hope for greater credibility of AI applications in society and greater social acceptance of these applications.
It must be noted that imposing burdensome conditions on companies developing AI applications may prevent users in EU nations from enjoying the benefits of cutting-edge technology, as products and services are launched in the EU market later than in other economic zones, or not launched at all, thereby jeopardizing innovation.
Impact assessment
The completion of the impact assessment is scheduled for December 2020.
The start time for the impact assessment of this initiative should be specified. Furthermore, this assessment should be implemented with ample lead time, taking into account the effect of the COVID-19 pandemic, and with sufficient opportunities to engage in dialogue with the industrial sector.