**Target:** Proposal for a regulation — Recital 32

## Text proposed by the Commission

(32) As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence, and they are used in a number of specifically pre-defined areas specified in the Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems.

## Amendment of the European Parliament

(32) As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products and that are listed in one of the areas and use cases in Annex III, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a significant risk of harm to the health and safety or the fundamental rights of persons and, where the AI system is used as a safety component of a critical infrastructure, to the environment. Such significant risk of harm should be identified by assessing on the one hand the effect of such risk with respect to its level of severity, intensity, probability of occurrence and duration combined altogether and on the other hand whether the risk can affect an individual, a plurality of persons or a particular group of persons. Such combination could for instance result in a high severity but low probability to affect a natural person, or a high probability to affect a group of persons with a low intensity over a long period of time, depending on the context.
The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems.
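The amendment replaces the Commission's two-factor test (severity and probability) with a combined assessment of severity, intensity, probability and duration, weighted by whether an individual, a plurality of persons or a particular group is affected. As a purely illustrative aid to reading the recital, and in no way part of the legislative text, that combination could be sketched as follows; the 0–1 scales, the scope weights, the threshold and the aggregation rule are all hypothetical assumptions:

```python
from dataclasses import dataclass

# Hypothetical scope weights: the recital names the categories but
# assigns them no numeric values.
SCOPE_WEIGHT = {"individual": 1.0, "plurality": 1.1, "group": 1.2}

@dataclass
class RiskProfile:
    severity: float      # how grave the possible harm is (0-1, hypothetical scale)
    intensity: float     # how strongly those affected are harmed
    probability: float   # likelihood of the harm occurring
    duration: float      # how long the harm persists
    affected: str        # "individual", "plurality" or "group"

def is_significant(r: RiskProfile, threshold: float = 0.15) -> bool:
    """Combine the factors 'altogether', as the recital requires.

    Either a peak factor (high severity) or a sustained one (intensity
    over a long duration) can make the harm weighty; probability and
    the scope of affected persons then scale the overall risk, so no
    single low factor automatically rules out significance.
    """
    harm = max(r.severity, r.intensity * r.duration)
    exposure = r.probability * SCOPE_WEIGHT[r.affected]
    return harm * exposure >= threshold

# The recital's two examples, under these hypothetical scores:
# high severity but low probability, affecting a natural person
rare_but_grave = RiskProfile(0.9, 0.5, 0.2, 0.3, "individual")
# low intensity over a long period, high probability, affecting a group
chronic_low = RiskProfile(0.2, 0.3, 0.9, 0.9, "group")
```

Under this sketch, both of the recital's example combinations cross the significance threshold even though each is low on a different factor, which is the point of assessing the factors together rather than one by one.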
aiact/history/parliament-2023/amendments/60 · 2023-06-14
Amends: recital 32