**Target:** Proposal for a regulation — Recital 32 a (new)

## Text proposed by the Commission

(32a)

## Amendment of the European Parliament

(32a)
Providers whose AI systems fall under one of the areas and use cases listed in Annex III, and who consider that their system does not pose a significant risk of harm to health, safety, fundamental rights or the environment, should inform the national supervisory authorities by submitting a reasoned notification. This could take the form of a one-page summary of the relevant information on the AI system in question, including its intended purpose and why it would not pose a significant risk of harm to health, safety, fundamental rights or the environment. The Commission should specify criteria to enable companies to assess whether their system would pose such risks, and should develop an easy-to-use and standardised template for the notification. Providers should submit the notification as early as possible, and in any case prior to the placing of the AI system on the market or its putting into service, ideally at the development stage; they should then be free to place it on the market at any time after the notification. However, if the authority considers that the AI system in question has been misclassified, it should object to the notification within a period of three months. The objection should be substantiated and duly explain why the AI system has been misclassified. The provider should retain the right to appeal by providing further arguments. If no objection has been raised within the three months, national supervisory authorities could still intervene if the AI system presents a risk at national level, as for any other AI system on the market. National supervisory authorities should submit annual reports to the AI Office detailing the notifications received and the decisions taken.
aiact/history/parliament-2023/amendments/61 · 2023-06-14
Amends: recital 32 a