Article 61 Informed consent to participate in testing in real world conditions outside AI regulatory sandboxes
1. For the purpose of testing in real world conditions under Article 60, freely-given informed consent shall be obtained from the subjects of testing prior to their participation in such testing and after their having been duly informed with concise, clear, relevant, and understandable information regarding:
(a) the nature and objectives of the testing in real world conditions and the possible inconvenience that may be linked to their participation;
(b) the conditions under which the testing in real world conditions is to be conducted, including the expected duration of the subject or subjects’ participation;
(c) their rights, and the guarantees regarding their participation, in particular their right to refuse to participate in, and the right to withdraw from, testing in real world conditions at any time without any resulting detriment and without having to provide any justification;
(d) the arrangements for requesting the reversal or the disregarding of the predictions, recommendations or decisions of the AI system;
(e) the Union-wide unique single identification number of the testing in real world conditions in accordance with Article 60(4), point (c), and the contact details of the provider or its legal representative from whom further information can be obtained.
2. The informed consent shall be dated and documented and a copy shall be given to the subjects of testing or their legal representative.
Recitals (1)

In order to accelerate the process of development and the placing on the market of the high-risk AI systems listed in an annex to this Regulation, it is important that providers or prospective providers of such systems may also benefit from a specific regime for testing those systems in real world c…

Drafting History (2)

2021-04-21 — Commission Proposal — COM(2021) 206 final
Article 61 — Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems

1. Providers shall establish and document a post-market monitoring system in a manner that is proportionate to the nature of the artificial intelligence technologies and the risks of the high-risk AI system.
2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data provided by users or collected through other sources on the performance of high-risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2.
3. The post-market monitoring system shall be based on a post-market monitoring plan. The post-market monitoring plan shall be part of the technical documentation referred to in Annex IV. The Commission shall adopt an implementing act laying down detailed provisions establishing a template for the post-market monitoring plan and the list of elements to be included in the plan.
4. For high-risk AI systems covered by the legal acts referred to in Annex II, where a post-market monitoring system and plan is already established under that legislation, the elements described in paragraphs 1, 2 and 3 shall be integrated into that system and plan as appropriate. The first subparagraph shall also apply to high-risk AI systems referred to in point 5(b) of Annex III placed on the market or put into service by credit institutions regulated by Directive 2013/36/EU.

July 2024 — Final Adopted Text — Regulation (EU) 2024/1689
Article 61 — Informed consent to participate in testing in real world conditions outside AI regulatory sandboxes
2 paragraph(s) · Current text shown above

Case Law (0)

No case law referencing Article 61 yet. As courts and enforcement authorities produce decisions interpreting this provision, they will appear here.

Guidance (1)