**Target:** Proposal for a regulation — Article 29 a (new)

## Text proposed by the Commission

## Amendment of the European Parliament

Article 29 a

Fundamental rights impact assessment for high-risk AI systems

1. Prior to putting a high-risk AI system as defined in Article 6(2) into use, with the exception of AI systems intended to be used in area 2 of Annex III, deployers shall conduct an assessment of the system's impact in the specific context of use. This assessment shall include, at a minimum, the following elements:

(a) a clear outline of the intended purpose for which the system will be used;

(b) a clear outline of the intended geographic and temporal scope of the system's use;

(c) categories of natural persons and groups likely to be affected by the use of the system;

(d) verification that the use of the system is compliant with relevant Union and national law on fundamental rights;

(e) the reasonably foreseeable impact on fundamental rights of putting the high-risk AI system into use;

(f) specific risks of harm likely to impact marginalised persons or vulnerable groups;

(g) the reasonably foreseeable adverse impact of the use of the system on the environment;

(h) a detailed plan as to how the harms and the negative impact on fundamental rights identified will be mitigated;
(j) the governance system the deployer will put in place, including human oversight, complaint-handling and redress.

2. If a detailed plan to mitigate the risks outlined in the course of the assessment outlined in paragraph 1 cannot be identified, the deployer shall refrain from putting the high-risk AI system into use and shall inform the provider and the national supervisory authority without undue delay. National supervisory authorities, pursuant to Articles 65 and 67, shall take this information into account when investigating systems which present a risk at national level.

3. The obligation outlined under paragraph 1 applies for the first use of the high-risk AI system. The deployer may, in similar cases, draw back on a previously conducted fundamental rights impact assessment or an existing assessment carried out by providers. If, during the use of the high-risk AI system, the deployer considers that the criteria listed in paragraph 1 are no longer met, it shall conduct a new fundamental rights impact assessment.

4. In the course of the impact assessment, the deployer, with the exception of SMEs, shall notify the national supervisory authority and relevant stakeholders and shall, to the best extent possible, involve representatives of the persons or groups of persons that are likely to be affected by the high-risk AI system, as identified in paragraph 1, including but not limited to: equality bodies, consumer protection agencies, social partners and data protection agencies, with a view to receiving input into the impact assessment. The deployer shall allow a period of six weeks for bodies to respond. SMEs may voluntarily apply the provisions laid down in this paragraph.
In the case referred to in Article 47(1), public authorities may be exempted from this obligation.

5. The deployer that is a public authority or an undertaking referred to in Article 51(1a)(b) shall publish a summary of the results of the impact assessment as part of the registration of use pursuant to its obligation under Article 51(2).

6. Where the deployer is already required to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, the fundamental rights impact assessment referred to in paragraph 1 shall be conducted in conjunction with the data protection impact assessment. The data protection impact assessment shall be published as an addendum.
aiact/history/parliament-2023/amendments/413 · 2023-06-14
Amends: article 29 a