**Target:** Proposal for a regulation — Article 15 — paragraph 3 — subparagraph 3

## Text proposed by the Commission

High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way to ensure that possibly biased outputs due to outputs used as an input for future operations (‘feedback loops’) are duly addressed with appropriate mitigation measures.

## Amendment of the European Parliament

High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way to ensure that possibly biased outputs influencing input for future operations (‘feedback loops’) and malicious manipulation of inputs used in learning during operation are duly addressed with appropriate mitigation measures.
aiact/history/parliament-2023/amendments/327 · 2023-06-14
Amends: Article 15, ¶3