**Target:** Proposal for a regulation — Recital 60 h (new)

## Text proposed by the Commission




## Amendment of the European Parliament

(60h) Given the nature of foundation models, expertise in conformity assessment is lacking and third-party auditing methods are still under development. The sector itself is therefore developing new ways to assess foundation models that fulfil in part the objective of auditing (such as model evaluation, red-teaming or machine learning verification and validation techniques). Those internal assessments for foundation models should be broadly applicable (e.g. independent of distribution channels, modality, development methods), to address risks specific to such models taking into account industry state-of-the-art practices, and should focus on developing sufficient technical understanding and control over the model, the management of reasonably foreseeable risks, and extensive analysis and testing of the model through appropriate measures, such as the involvement of independent evaluators. As foundation models are a new and fast-evolving development in the field of artificial intelligence, it is appropriate for the Commission and the AI Office to monitor and periodically assess the legislative and governance framework of such models, and in particular of generative AI systems based on such models, which raise significant questions related to the generation of content in breach of Union law, copyright rules, and potential misuse. It should be clarified that this Regulation should be without prejudice to Union law on copyright and related rights, including Directives 2001/29/EC, 2004/48/EC and (EU) 2019/790 of the European Parliament and of the Council.
