How public administrations can evaluate the respect of technical, legal and ethical requirements of their AI systems
The Responsible Organisations
The Z-Inspection® Initiative is made up of a network of over 100 individual experts and more than 80 affiliated institutions and laboratories across 40 countries worldwide, who developed a methodology for assessing the trustworthiness of AI systems. One example, implemented by its members, is the coordinated assessment of the AI system for nature monitoring developed by the Province of Friesland.
The Rijks ICT Gilde is an internal community of information and communication technology professionals within the Dutch central government, operating under the Ministry of the Interior and Kingdom Relations. Several members of the Rijks ICT Gilde took part in the working groups of the Z-Inspection® process for the Province of Friesland.
The Province of Friesland (in Dutch, Fryslân) is one of the twelve provinces of the Netherlands, located in the north of the country. Its AI system for nature monitoring was the object of the Z-Inspection® process, in which several representatives of the provincial administration participated.
1. The context
As public administrations across Europe increasingly adopt emerging technologies, a key challenge is ensuring that the use of artificial intelligence (AI) remains aligned with human-centred and democratic values. To support this objective, the EU has developed a growing body of rules and guidance, including the Ethics guidelines for trustworthy AI, published in 2019 by the High-Level Expert Group on Artificial Intelligence (AI HLEG), and the European Artificial Intelligence Act (AI Act), which entered into force in 2024.
While this evolving framework provides important safeguards for the responsible use of AI, public administrations may face difficulties in interpreting and applying the relevant requirements when developing and/or deploying AI-enabled solutions. As a result, there is a growing need for practical guidance to help translate regulatory principles into operational AI solutions that balance innovation with compliance, transparency, accountability, and ethical considerations.
2. The solution: Trustworthy AI assessment with Z-Inspection®
Developed in 2019 by Professor Roberto Zicari and other academic experts and business professionals, Z-Inspection® was designed to support interdisciplinary teams in assessing the ethical, technical, domain-specific and legal implications of AI products and services across a range of contexts, including healthcare, business, and policymaking. It can be defined as a multidisciplinary and participatory framework for assessing the trustworthiness of AI systems, with the primary objective of enabling a “mindful use of AI”.
The methodology evaluates AI systems at different stages of their lifecycle, with a particular focus on identifying and deliberating ethical issues through the development of socio-technical scenarios. In doing so, the methodology offers a practical means of operationalising the AI Ethics Guidelines for Trustworthy AI developed by the High-Level Expert Group on AI (AI HLEG), whose pillars and requirements are reported in the following box.
Box 1 – Focus on the pillars and requirements for trustworthy AI development
🔍 EU Framework for Trustworthy AI
The EU Framework for Trustworthy AI is a human-centric approach to AI development, built on the Ethics Guidelines for Trustworthy AI, focusing on AI systems that respect fundamental rights and democratic values. It guides a comprehensive legal framework that categorises AI by risk to ensure safety and trust while fostering innovation. It defines trustworthy AI as:
- Lawful, respecting all applicable laws and regulations.
- Ethical, respecting ethical principles and values.
- Robust, both from a technical perspective and considering its social environment.
Moreover, it establishes that trustworthy Artificial Intelligence must be developed according to four main pillars:
- Respect for human autonomy.
- Prevention of harm.
- Fairness.
- Explicability.
To meet these pillars, AI systems should comply with seven requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. The published PSTW Report Analysis of the generative AI landscape in the European public sector provides an overview of the documents published by European public administrations according to these requirements. An updated list of the guidelines, policies and procedural documents published by European public administrations on the use of AI and GenAI tools can be found in the GenAI4PA section of the Collection.
2.1 How an assessment with Z-Inspection® is performed
Z-Inspection® aims to support the trustworthy deployment of AI through a rigorous, interdisciplinary methodology that builds on the EU Framework for Trustworthy AI. The methodology adopts a holistic view of the AI socio-technical system, evaluating all its technical, legal, ethical, and social aspects within its specific context of use across the entire AI lifecycle, from design to post-deployment. It is structured around three phases:
- Set up. An interdisciplinary team of experts is gathered (e.g., ethicists, legal experts, technical experts, domain experts, and representatives of end-users/affected parties) to define the scope and objectives of the assessment.
- Assess. This phase is iterative and focuses on identifying ethical, technical, and legal issues through the analysis of socio-technical scenarios, mapping them against the EU’s Trustworthy AI requirements and addressing potential value conflicts and trade-offs.
- Resolve. The identified tensions are addressed and the findings are translated into recommendations for appropriate use, risk mitigation, and redress; this phase also includes ongoing post-deployment monitoring (“ethical maintenance”) to ensure trustworthiness over time.
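As an illustration only, the Assess phase’s mapping of identified issues onto the seven EU Trustworthy AI requirements could be recorded in a simple structure like the following sketch. The issue texts and field names are hypothetical and are not part of the Z-Inspection® toolset; only the seven requirements themselves come from the AI HLEG guidelines.

```python
from dataclasses import dataclass

# The seven requirements for trustworthy AI (AI HLEG, 2019).
REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

@dataclass
class Issue:
    """An ethical/technical/legal issue found in a socio-technical scenario."""
    description: str
    requirements: list[str]  # EU requirements the issue maps onto

def map_issues(issues: list[Issue]) -> dict[str, list[str]]:
    """Group issue descriptions under each EU requirement they touch."""
    mapping: dict[str, list[str]] = {r: [] for r in REQUIREMENTS}
    for issue in issues:
        for req in issue.requirements:
            if req not in mapping:
                raise ValueError(f"Unknown requirement: {req}")
            mapping[req].append(issue.description)
    return mapping

# Hypothetical issues, loosely inspired by the Friesland pilot findings.
issues = [
    Issue("Training dataset too small for representational fairness",
          ["Diversity, non-discrimination and fairness",
           "Technical robustness and safety"]),
    Issue("Model behaves as a black box for end-users",
          ["Transparency"]),
]

mapping = map_issues(issues)
for req, found in mapping.items():
    if found:
        print(f"{req}: {len(found)} issue(s)")
```

Grouping issues by requirement in this way makes visible which requirements are under-examined (here, e.g., Accountability has no mapped issues), which is one input the team can use when deliberating trade-offs.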
2.2 A real-world use case example: assessing the AI system for nature monitoring of the Province of Friesland
Z-Inspection® has been piloted to evaluate AI systems developed by public administrations for environmental monitoring applications. More specifically, an AI system using a deep learning (DL) algorithm developed by the Province of Friesland (Netherlands) to process satellite images and produce reports on nature reserves was assessed in cooperation with the Rijks ICT Gilde of the Dutch Ministry of the Interior and Kingdom Relations. The AI system was developed to evaluate nitrogen levels in the soil faster, more cost-effectively and more accurately, by tracking the diffusion of two nitrogen-sensitive invasive grass species.
The pilot employed a hybrid approach to ensure a comprehensive assessment of trustworthiness and ethical implications, by integrating the methodology with the Fundamental Rights and Algorithms Impact Assessment (FRAIA) tool, which is recommended by the Dutch government for use by public authorities. The Z-Inspection® assessment process included training for all participants, defining the scope and boundaries, parallel evaluations, common workshops for dialogue, and a final report with recommendations, facilitating a holistic evaluation by having the different expert groups engaged in structural dialogue to sharpen findings.
Even though a multidisciplinary composition is fundamental to a holistic assessment, some prerequisites were identified in the creation of the working groups, such as the involvement of independent professionals with no competing interest. In the case of Friesland’s AI system for nature monitoring, the objectivity and robustness of the assessment were ensured through a structured verification of key pre-conditions prior to the start of the evaluation process. Stakeholders were involved at distinct stages of the assessment, according to their roles and expertise. Independent domain experts were included in the working groups to address ecological aspects, while the private company that developed the system contributed through targeted interviews, providing necessary technical information while preserving the independence of the assessment.
Following the methodology, the AI system was examined from three perspectives by separate expert working groups:
- Technical. The team responsible for analysing the technical aspects of the DL system concluded that it outperformed the human-made maps (80% accuracy versus 65%). However, the prototype’s maturity was assessed at Technology Readiness Level (TRL) 6, indicating that it was not ready for deployment. This was attributed to a lack of representational fairness caused by the limited size of the training dataset, and to the fact that robustness could not be reliably estimated, as ground-truth validation occurred too rarely (once every 12 years).
- Ecological. The team responsible for evaluating the ecological model found that it produced conservative results (underestimating grassing effects), and expressed concern about the choice of only two nitrogen-indicator species.
- Ethical and fundamental human rights. Concerning transparency and trust, the interdisciplinary group found that the system functioned as a black box. The ethical assessment focused on clusters of fundamental rights (personal, freedom-related, equality, procedural). While the AI system was found not to directly infringe rights related to personal identity and privacy, autonomy, or territorial privacy, it could indirectly affect the autonomy of certain groups, such as farmers, if its output were to lead to stricter regulations.
3. Benefits
During the pilot, the Z-Inspection® assessment offered a variety of advantages, including:
- Ensuring transparency and verifiability through an evidence-based approach. The methodology is grounded in the “claim–argument–evidence” approach, as defined by Professor Roberto Zicari. The goal is to assess trustworthiness claims made about AI systems, ensuring they are clear, contextualised, and supported by verifiable evidence.
- Guaranteeing robustness through a holistic, domain-agnostic process. By integrating technical, legal, ethical, and social perspectives, it assesses AI systems as complete socio-technical units. The approach is adaptable across sectors while remaining sensitive to specific use contexts.
- Enhancing digital awareness and supporting AI governance. Particularly in public-sector organisations, the methodology supports informed dialogue, enhances digital awareness, and enables structured, transparent, and accountable decision-making, especially for complex or high-risk use cases.
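To make the “claim–argument–evidence” approach concrete, the following minimal sketch shows how such a record might be structured. The field names, the `is_supported` rule, and the example claim are hypothetical illustrations (loosely echoing the Friesland findings), not prescribed by Z-Inspection®.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """A verifiable piece of evidence for or against a claim."""
    description: str
    source: str       # e.g. a test report, dataset audit, or interview note
    supports: bool    # True if it supports the claim, False if it undermines it

@dataclass
class Claim:
    """A trustworthiness claim made about an AI system."""
    statement: str
    argument: str     # reasoning linking the evidence to the claim
    evidence: list[Evidence] = field(default_factory=list)

    def is_supported(self) -> bool:
        """A claim counts as supported only if it has evidence
        and no recorded evidence undermines it."""
        return bool(self.evidence) and all(e.supports for e in self.evidence)

# Hypothetical claim record, loosely inspired by the Friesland pilot.
claim = Claim(
    statement="The DL model outperforms human-made vegetation maps",
    argument="Accuracy on validation data exceeds the human baseline",
    evidence=[
        Evidence("Model accuracy 80% vs 65% human baseline",
                 source="technical working group report", supports=True),
        Evidence("Ground-truth validation occurs only once every 12 years",
                 source="ecological working group report", supports=False),
    ],
)

print(claim.is_supported())  # prints False: the undermining evidence is recorded
```

The point of the structure is that each claim stays explicitly linked to its argument and its evidence, so an assessment team can see at a glance which claims remain contested rather than relying on unstated judgement.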
4. Main challenges
The Z-Inspection® Initiative reports a number of challenges encountered in the trustworthiness assessment of AI systems:
- Scalability and accountability dispersion. A key challenge concerns scaling the methodology to meet regulatory and oversight requirements, particularly when assessing large numbers of AI systems across their lifecycle. In parallel, complex AI supply-chain models involving multiple actors risk fragmenting accountability, highlighting the need for clearer mechanisms to ensure traceability across the AI value chain.
- Subjectivity and scope limitations in assessment. Although the methodology promotes interdisciplinarity, assessment outcomes may still be influenced by the composition and focus of the assessment team. The identification and prioritisation of issues necessarily rely on expert judgement, which may lead to certain ethical dimensions being underrepresented in specific cases. To mitigate this risk, Z-Inspection® encourages broad and diverse stakeholder participation to strengthen the balance and robustness of assessments.
5. Future steps
To address these challenges, the Z-Inspection® Initiative has identified several future developments. The team plans to introduce a more agile assessment process, with the goal of increasing efficiency by leveraging technological tools, including experimental uses of Natural Language Processing (NLP) to assist with the consolidation of findings and facilitate participation. The initiative also plans to develop strategic guidance to help organisations determine when the comprehensive methodology is most appropriate (particularly for complex or high-risk use cases) and when alternative or complementary approaches may be more suitable.
In relation to the EU AI Act, the methodology is expected to play a complementary role by supporting organisations beyond pre-deployment compliance requirements. In fact, while the AI Act places emphasis on ex ante impact assessments, Z-Inspection® can be applied after deployment, providing a valuable evaluation throughout the AI lifecycle. Moreover, the interviewed public officials highlighted its usefulness both for deepening understanding of AI Act requirements and for addressing ethical, legal, and organisational aspects that may not be fully covered by formal compliance procedures.
Future work will also address challenges related to smart governance, auditing, and accountability by designing frameworks that support repeated lifecycle audits under institutional and budgetary constraints, increase accountability across the AI value chain, and enable continuous post-deployment monitoring (“ethical maintenance”) to sustain AI trustworthiness over time.
Website and Contact Information
Useful links:
- Z-Inspection® website: https://z-inspection.org/.
- How to assess trustworthy AI in practice: https://arxiv.org/pdf/2206.09887.
- Lessons Learned in Performing a Trustworthy AI and Fundamental Rights Assessment: https://arxiv.org/abs/2404.14366.
- Z-Inspection® Assessment of Responsible AI: https://www.youtube.com/watch?v=z_RCysclXdk.
Project contact:
- Prof. Roberto Zicari (Z-Inspection® Initiative)
- Willy Tadema (Rijks ICT Gilde)
- Genien Pathuis (Province of Friesland)
Detailed Information
Case Viewer ID: PSTW-1917
Year: 2021
Status: In development
Responsible Organisation: Province of Friesland
Geographical extent: Regional Government
Country: Netherlands
Function of government: Environmental protection
Technology: Artificial intelligence
Interaction: G2G