In preparation for the application of the Artificial Intelligence (AI) Act and in connection with the proposed Digital Omnibus, the European Commission is moving forward with several initiatives aimed at helping stakeholders comply with the new rules. These actions are intended to respond to needs expressed by stakeholders and to ensure legal clarity. Within this framework, the Artificial Intelligence Office is drafting guidelines that will offer clear and practical direction on how to apply the AI Act together with other relevant EU legislation.
During 2026, the European Commission will develop guidance covering a broad range of practical issues related to high-risk Artificial Intelligence systems. This will include clarification on how to apply the high-risk classification, meet the transparency obligations under Article 50 of the AI Act, report serious incidents, and comply with the various high-risk requirements. Stakeholders have also requested more detailed clarification on the practical use of the AI Act’s research exemptions, particularly in areas such as pre-clinical research and the development of medical products. The aim is to ensure a smooth implementation of the AI Act, encourage innovation, and strengthen Europe’s position in the safe development of AI.
Additional support is available to stakeholders through the AI Act Service Desk, and further resources can be accessed via the AI Act Single Information Platform.