(A.) Policy and legislation
(A.1) Policy objectives
We are using AI on a daily basis, e.g. to translate text, generate subtitles for videos or block email spam. Beyond making our lives easier, AI is helping us to solve some of the world’s biggest challenges: from treating chronic diseases or reducing traffic accidents (and therefore fatality rates) to fighting climate change or anticipating cybersecurity threats. Like the steam engine or electricity in the past, AI is transforming our world, our society and our industries.
Since the 1950s, research on AI has encompassed a wide variety of computing techniques and spread across many different application areas. In recent years, AI has experienced a period of fast development, driven by three main factors: progress in algorithms and computing techniques, the huge amount of available data generated by advances in ICT and Internet of Things applications, and the affordability of high-performance processing power, even in low-cost personal devices. These factors have contributed to the rapid evolution of AI technologies such as large language models, which could have a strong impact on society.
The way we approach AI will shape our digital future. To enable European citizens, companies and governments to reap the benefits of AI, we need a solid European strategy and framework.
The EU strategy on AI was published on 25 April 2018, in the Commission Communication on Artificial Intelligence for Europe. One of the main elements of the strategy is an ambitious proposal to achieve a major boost in investment in AI-related research and innovation and to facilitate and accelerate the adoption of AI across the economy.
In February 2020 the Commission issued a White Paper on AI, proposing an overall EU strategy built on an ecosystem of excellence and an ecosystem of trust for AI. The ecosystem of excellence in Europe refers to measures which support research, foster collaboration between Member States and increase investment in AI development and deployment. The ecosystem of trust is based on EU values and fundamental rights, and foresees robust requirements that would give citizens the confidence to embrace AI-based solutions, while encouraging businesses to develop them. The European approach to AI ‘aims to promote Europe’s innovation capacity in the area of AI, while supporting the development and uptake of ethical and trustworthy AI across the EU economy. AI should work for people and be a force for good in society.’
Following a public consultation, the objectives of the White Paper were translated into a key AI package adopted by the Commission on 21 April 2021. This package includes a proposal for the first-ever legal framework on AI (the AI Act), which addresses the risks of AI and positions Europe to play a leading role globally, as well as the 2021 review of the Coordinated Plan.
The proposal for a legal framework is aimed at laying down rules to ensure that AI systems used in the EU are safe and do not compromise fundamental rights.
The 2021 review of the Coordinated Plan on AI puts forward a concrete set of joint actions for the European Commission and the Member States on how to create EU global leadership on trustworthy AI. Standardisation is among the areas identified in the 2021 Coordinated Plan for joint action between the European Commission and the Member States.
In December 2023, the co-legislators reached an agreement on the AI Act, which entered into force on 1 August 2024.
The European Commission will now monitor the correct implementation of the legislation through the newly established AI Office. The AI Office aims at enabling the future development, deployment and use of AI in a way that fosters societal and economic benefits and innovation, while mitigating risks. The Office will play a key role in the implementation of the AI Act, especially in relation to general-purpose AI models. It will also work to foster research and innovation in trustworthy AI and position the EU as a leader in international discussions.
(A.2) EC perspective and progress report
The sharp increase in interest and activity around AI in recent years has created a need for a coherent set of AI standards. In response, international and European standardisation bodies alike have created committees on AI, including CEN-CENELEC JTC 21, ETSI OCG AI and ISO/IEC JTC 1/SC 42.
In addition, the AI Act follows the New Legislative Framework approach. Hence, harmonised standards will be key to providing the detailed technical specifications through which economic operators can achieve compliance with the relevant legal requirements. Harmonised standards will thus be a key tool for the implementation of the legislation and will contribute to the specific objective of ensuring that AI systems are safe and trustworthy.
As a consequence, the European Commission intends to intensify the elaboration of standards in the area of AI to ensure that standards are available to operators in time for the application date of the future AI framework. In this respect, the Commission issued a first standardisation request to CEN and CENELEC in accordance with Regulation (EU) No 1025/2012 in May 2023. Harmonised standards developed in response to that request will help companies comply with the legal requirements of the AI Act. In early 2025, the Commission plans to issue an amended standardisation request to take into account the wording of the AI legislation as adopted and published in the OJEU. Other standardisation requests, on energy efficiency and on general-purpose AI models, are also in preparation for the second half of 2025.
(A.3) References
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)
- COM(2020) 65 final: White Paper On Artificial Intelligence - A European approach to excellence and trust
- COM(2018) 237: Artificial Intelligence for Europe
- EC High-Level Expert Group on Artificial Intelligence (AI HLEG): Ethics Guidelines for Trustworthy Artificial Intelligence (AI)
- Coordinated Plan on Artificial Intelligence, 2021 review
- C(2023)3215 – Standardisation request M/593: AI Standardisation Request
(B.) Requested actions and progress in standardisation
(B.1) Requested actions
Action 1: SDOs should establish coordinated linkages with, and adequately consider European requirements or expectations from initiatives, including policy initiatives, and organisations contributing to the discourse on AI standardisation. This in particular includes the contents of the AI Act, the standardisation request on AI issued by the European Commission in 2023, its amendment, as well as the orientations set in the 2021 review of the Coordinated Plan.
Action 2: SDOs should further increase their coordination efforts around AI standardisation, both in Europe and internationally, in order to avoid overlap or unnecessary duplication of efforts, and should aim for the highest quality so as to avoid the creation and use of discriminatory algorithms and to ensure a trustworthy and safe deployment of this technology.
Action 3: ESOs should coordinate with the Commission and appropriately direct their activities to ensure that the objectives set in the standardisation request on AI issued in 2023 (and its amendment) are fulfilled adequately and on time. This includes ensuring active participation of representatives from SMEs and civil society organisations in their activities.
Action 4: SDOs to take into account the cross-sectoral aspects of the AI Act and the interactions between the AI Act and existing or future sectoral safety legislation.
Action 5: EC and ESOs should coordinate to promote mobilisation of stakeholders around AI standardisation activities.
Action 6: Taking into account the gap analysis by EC/JRC, EC/JRC to coordinate with SDOs and other initiatives on a follow-up and ways to address the identified gaps.
(C.) Activities and additional information
(C.1) Related standardisation activities
CEN & CENELEC
A CEN-CENELEC Focus Group on Artificial Intelligence (AI) was first established in December 2018. The Focus Group published two documents: a response to the EC white paper on AI as well as the CEN-CENELEC Roadmap for AI standardisation. Subsequently, CEN-CENELEC created a Joint Technical Committee, namely CEN-CENELEC JTC 21, which started its activities on 1 June 2021.
JTC 21 is producing standardisation deliverables in the field of Artificial Intelligence (AI) and related use of data, as well as providing guidance to other technical committees concerned with Artificial Intelligence. JTC 21 plays a crucial role in developing European Standards for AI technologies, addressing the unique needs and requirements of the European market and societal context. By developing harmonised standards, JTC 21 aims to support the implementation of AI systems that are technologically advanced and align with European values and the AI Act.
CEN and CENELEC have accepted the standardisation request on Artificial Intelligence from the European Commission. In this context, CEN-CLC/JTC 21 is currently developing European standards which will provide manufacturers with a presumption of conformity with the newly adopted Artificial Intelligence Act (AIA).
Five working groups operate under CEN-CENELEC JTC 21:
WG 1: Strategic Advisory Group (SAG),
WG 2: Operational aspects,
WG 3: Engineering aspects,
WG 4: Foundational and societal aspects,
WG 5: Joint standardization on Cybersecurity for AI systems.
The committee is developing ‘homegrown’ European standards in support of the AI Act, including:
- AI Trustworthiness Framework
- AI Risk Management
- AI Quality Management System
- AI Conformity Assessment
For an overview of the work programme and the status of the standards under development, see the CEN-CLC/JTC 21 technical work page.
Additionally, a Task Group led by the European Trade Union Confederation (ETUC), and working under the Strategic Advisory Group in JTC 21/WG 1, publishes the “AI Standardization Inclusiveness” Newsletter to inform about the latest developments and decisions in JTC 21 and in the European AI community. Previous editions can be found here.
ETSI
In addition to CEN and CENELEC, ETSI is also active in the use of AI in ICT and coordinates work across a dozen technical bodies using the OCG AI (Operational Coordination Group for AI). A summary of current work on AI can be found in a dedicated white paper. The OCG AI is also in continual discussion with CEN-CENELEC JTC21.
ETSI TC HF (Human Factors) is organising work on human oversight and transparency/explainability of AI solutions, including the accessibility of explanations to all segments of society (user-oriented explanations for persons with varying physical/mental capabilities). This work also covers requirements for (future) human-AI collaborative systems, for example in manufacturing processes.
The ETSI ISG on Experiential Networked Intelligence (ENI) is defining a Cognitive Network Management architecture which uses Artificial Intelligence (AI) techniques and context-aware policies to adjust offered services based on changes in user needs, environmental conditions and business goals. ISG ENI outputs centre around network optimization and the Cognitive Network Management architecture highlighted at https://eniwiki.etsi.org/index.php?title=ISG_ENI_Activities. This is described further in the ENI vision whitepaper (https://www.etsi.org/images/files/ETSIWhitePapers/etsi-wp44_ENI_Vision.pdf) and a whitepaper on cognitive management (https://www.etsi.org/images/files/ETSIWhitePapers/ETSI_WP51_Understanding_the_Operator_Experience_Using_Cognitive_Manage.pdf). ISG ENI has published many GRs and GSs in the field of AI-based Cognitive Network Management for global availability.
The ETSI ISG on Securing Artificial Intelligence (ISG SAI), created in October 2019, focused on three key areas: using AI to enhance security, mitigating attacks that leverage AI, and securing AI itself from attack. ISG SAI collaborated closely with ENISA. Its outputs have centred around several key topics; the following have been published or are in development to date, in part in response to Action 5 above:
- Problem Statement
- Mitigation Strategy
- Data Supply Chain
- Threat Ontology for AI, to align terminology
- Security testing of AI
- Role of hardware in security of AI
- Explainability and transparency of AI processing
- Privacy and security aspects of AI/ML systems
- Traceability of AI models
- Automated Manipulation of Multimedia Identity Representations
- Collaborative Artificial Intelligence (also known as Generative AI)
- Proofs of Concepts Framework.
The ETSI SAI work programme can be found at: https://portal.etsi.org/Portal_WI/form1.asp?tbid=877&SubTB=877
In September 2023 ETSI TC SAI was created: ISG SAI was converted into TC SAI, and work now continues only at the level of the new TC.
ETSI has several other ISGs working in the domain of AI/ML (Machine Learning). They are all defining specifications for AI/ML functionality to be used in their respective technologies.
- ISG ENI develops standards that use AI mechanisms to assist in the management and orchestration of the network, defining AI/ML functionality that can be used and reused throughout the network, cloud and end devices.
- ISG ZSM is defining the AI/ML enablers in end-to-end service and network management.
- ISG F5G on Fixed 5G will define the application of AI in the fixed network’s evolution towards ‘fibre to everything’.
- ISG NFV on network functions virtualisation studies the application of AI/ML techniques to improve automation capabilities in NFV management and orchestration. GS NFV-IFA 047 defines the use of Management Data Analytics (MDA) Function (MDAF), corresponding service interfaces produced by the MDAF, and related information elements.
GR NFV-EVE 027 studies Model-as-a-Service (MaaS) for AI-based applications and identifies relevant use cases in NFV.
- ISG CIM has published specifications for a data interchange format (ETSI GS CIM 009 V1.7.1, the NGSI-LD API) and a flexible information model (ETSI GS CIM 006 V1.2.1) that support the exchange of information from, e.g., knowledge graphs, including relationships between entities and the signing of information to guarantee its origin; a sketch of the entity format follows this list. The work is applicable to the exchange of data/metadata with AI solutions, including the storage of historical results for later (human) oversight and governance in the context of the AI Act. Additionally, ISG CIM has published ETSI GR CIM 021, which describes property-graph-based approaches to machine learning, able to leverage the additional information in a graph’s relationships, supported by NGSI-LD.
- The ETSI TC MTS provides technologies, tools, and guidelines on conformance and interoperability testing and certification of protocols and other systems, including AI systems, that are under standardisation at various ETSI groups and committees.
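To make the NGSI-LD exchange format referenced in the ISG CIM item above more concrete, here is a minimal, hedged sketch in TypeScript. The entity shape (typed Properties and Relationships plus a JSON-LD @context) follows ETSI GS CIM 009; the broker URL, entity identifiers and attribute names are hypothetical examples, not normative content.

```ts
// Minimal NGSI-LD sketch: one entity with a Property and a Relationship,
// created via the NGSI-LD API (ETSI GS CIM 009). All identifiers and the
// broker URL below are hypothetical.
const entity = {
  id: "urn:ngsi-ld:Room:room001",
  type: "Room",
  temperature: {
    type: "Property",
    value: 21.5,
    unitCode: "CEL",                  // UN/CEFACT code for degrees Celsius
    observedAt: "2025-01-01T12:00:00Z",
  },
  isPartOf: {
    type: "Relationship",             // typed link to another entity
    object: "urn:ngsi-ld:Building:b001",
  },
  "@context": "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld",
};

// Create the entity on a (hypothetical) NGSI-LD context broker.
await fetch("https://broker.example.com/ngsi-ld/v1/entities", {
  method: "POST",
  headers: { "Content-Type": "application/ld+json" },
  body: JSON.stringify(entity),
});
```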
IEC
SEG 10 Ethics in Autonomous and Artificial Intelligence Applications
https://www.iec.ch/dyn/www/f?p=103:186:0::::FSP_ORG_ID,FSP_LANG_ID:22827,25
ISO/IEC JTC 1
ISO/IEC JTC 1 SC 42 Artificial intelligence
SC 42 Artificial intelligence addresses the international standardisation of the entire AI ecosystem. With 33 published standards, 36 projects under development and 6 working groups, its programme of work has been growing rapidly and continues to grow in 2025.
The structure of SC 42 is composed of 10 Working Groups:
- WG1: Foundational standards
- WG2: Data
- WG3: Trustworthiness
- WG4: Use cases and applications
- WG5: Computational approaches and computational characteristics of AI systems
- JWG 2: Joint Working Group ISO/IEC JTC 1/SC 42 - ISO/IEC JTC 1/SC 7: Testing of AI-based systems
- JWG 3: Joint Working Group ISO/IEC JTC 1/SC 42 - ISO/TC 215: AI-enabled health informatics
- JWG 4: Joint Working Group ISO/IEC JTC 1/SC 42 - IEC TC 65/SC 65A: Functional safety and AI systems
- JWG 5: Joint Working Group ISO/IEC JTC 1/SC 42 - ISO/TC 37: Natural language processing
- JWG 6: Joint Working Group ISO/IEC JTC 1/SC 42 - ISO/CASCO: Conformity assessment schemes for AI systems
The following ad hoc groups have also been created:
- AHG 4: Liaison with SC 27
- AHG 7: JTC1 joint development review (to ensure coordination with CEN-CENELEC JTC 21 on projects under the Vienna agreement)
An advisory group has also been created on the topic of AI and sustainability:
- JAG: Joint Advisory Group on AI and Sustainability with ISO/IEC JTC 1/SC 39
The list of published standards and projects under development can be found in:
- Published standards: https://www.iso.org/committee/6794475/x/catalogue/p/1/u/0/w/0/d/0
- Projects under development: https://www.iso.org/committee/6794475/x/catalogue/p/0/u/1/w/0/d/0
In addition to the above projects under development, a number of ad hoc groups in the SC 42 WGs are studying topics that cross multiple areas such as:
a) machine learning computing devices
b) ontologies, knowledge engineering, and representation
c) data quality governance framework
d) testing of AI systems
e) AI standards landscape and roadmap
f) coordination with JTC 1 SC 27 on AI security and privacy proposed standards
g) data quality visualization
In addition, SC 42 has developed over 30 active liaisons with ISO and IEC committees, SDOs and industry organizations to encourage collaboration and build out the industry ecosystem around AI and Big Data.
ISO/IEC JTC 1 SC 7 - Software and systems engineering
ISO/IEC 25012:2008 Software engineering — Software product Quality Requirements and Evaluation (SQuaRE) — Data quality model
ISO/IEC TR 29119-11:2020 Software and systems engineering — Software testing — Part 11: Guidelines on the testing of AI-based systems.
IEEE
IEEE has a significant amount of activity in the fields of Autonomous and Intelligent Systems (A/IS) and related vertical industry domains. IEEE standards and pre-standards address: ethical and societal implications of artificial intelligence; foundational concepts, architecture and ontology; governance and management; data; trustworthiness; etc.
Ethical and Societal Implications:
IEEE’s Global Initiative on Ethics of Autonomous and Intelligent Systems developed “Ethically Aligned Design (EAD): A Vision for Prioritizing Human Wellbeing with Autonomous and Intelligent Systems,” which served as the foundation for many other organizations’ AI principles and the IEEE 7000 Series. The Global Initiative 2.0 now inspires a new paradigm for AI governance that shifts from merely mitigating risks to proactively embedding a “Safety First Principle” and “Safety by Design” into AI’s design and lifecycle assessment, including for generative AI.
- IEEE 7000 Model Process for Addressing Ethical Concerns During System Design
- IEEE 7001, Transparency of Autonomous Systems
- IEEE 7002, Data Privacy Process
- IEEE 7005 Transparent Employer Data Governance
- IEEE 7007, Ontological Standard for Ethically Driven Robotics and Automation Systems
- IEEE 7009, Fail-Safe Design of Autonomous and Semi-Autonomous Systems
- IEEE 7014, Ethical considerations in Emulated Empathy in Autonomous and Intelligent Systems
- IEEE P7003, Algorithmic Bias Considerations
- IEEE P7004, Child and Student Data Governance
- IEEE P7008, Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems
- IEEE P7011, Process of Identifying and Rating the Trustworthiness of News Sources
- IEEE P7012, Machine Readable Personal Privacy Terms
- IEEE P7015, Data and Artificial Intelligence (AI) Literacy, Skills, and Readiness
IEEE 7000-series ethical and governance standards are made available free of charge to support widespread AI literacy.
The IEEE CertifAIEd Program: through certification guidance, assessment and independent verification, IEEE CertifAIEd offers the ability to scale responsible-innovation implementations, thereby helping to increase the quality of AIS and the associated trust of key stakeholders, and to realize the associated benefits.
Foundational Concepts, Architecture, Ontology
- IEEE 1872 Series for Robotics and Automation
- IEEE 2755 Series on Intelligent Process Automation
- IEEE 3079.3, Framework for Evaluating the Quality of Digital Humans
- IEEE 3652.1, Architectural Framework and Application of Federated Machine Learning
- IEEE 11073-10101, IEEE/ISO/IEC International Standard—Health informatics-Device interoperability-Part 10101: Point-of-care medical device communication-Nomenclature
- IEEE 2894, Architectural Framework for Explainable Artificial Intelligence
Governance and Management
- IEEE 1232 Series for Artificial Intelligence Exchange and Service Tie to All Test Environments (AI-ESTATE)
- IEEE 2089, Age Appropriate Digital Services Framework - Based on the 5Rights Principles for Children
- IEEE 2830, Technical Framework and Requirements of Shared Machine Learning
- IEEE 2841, Framework and Process for Deep Learning Evaluation
- IEEE 2941, Artificial Intelligence (AI) Model Representation, Compression, Distribution, and Management
- IEEE P2247.1, Classification of Adaptive Instructional Systems
- IEEE P2802, Performance and Safety Evaluation of Artificial Intelligence Based Medical Device: Terminology
- IEEE P2840, Responsible AI Licensing
- IEEE P2863, Recommended Practice for Organizational Governance of Artificial Intelligence
- IEEE P2937, Performance Benchmarking for AI Server Systems
- IEEE P3119, Procurement of Artificial Intelligence and Automated Decision Systems
- IEEE P3394, Large Language Model Agent Interface
Trustworthiness standards, covering security, quality, transparency, bias and accuracy, include:
- IEEE 2801, Quality Management of Datasets for Medical Artificial Intelligence
- IEEE P2751, 3D Map Data Representation for Robotics and Automation
- IEEE P3156, Requirements of Privacy-preserving Computation Integrated Platforms
- IEEE P3157, Vulnerability Test for Machine Learning Models for Computer Vision Applications
- IEEE P3181, Trusted Environment Based Cryptographic Computing
- IEEE P3187, Framework for Trustworthy Federated Machine Learning
- IEEE P3198, Evaluation Method of Machine Learning Fairness
Other aspects of ML and other AI techniques:
- IEEE 1855, Fuzzy Markup Language
- IEEE 1873, Robot Map Data Representation for Navigation
- IEEE 3079.3.1, Service Application Programming Interfaces (APIs) for Digital Human Authoring and Visualization
- IEEE 3129, Standard for Robustness Testing and Evaluation of Artificial Intelligence (AI)-based Image Recognition Service
- IEEE 3333.1.3, Deep Learning-Based Assessment Of Visual Experience Based On Human Factors
- IEEE 12207.2, Systems and software engineering - Software life cycle processes—Part 2: Relation and mapping between ISO/IEC/IEEE 12207:2017 and ISO/IEC 12207:2008
- IEEE P2874, Spatial Web Protocol, Architecture and Governance
- IEEE P2975 Series on Industrial Artificial Intelligence
- IEEE P2976, XAI—eXplainable Artificial Intelligence—for Achieving Clarity and Interoperability of AI Systems Design
- IEEE P2986, Privacy and Security for Federated Machine Learning
- IEEE P2987, Principles for Design and Operation Addressing Technology-Facilitated Inter-personal Control
- IEEE P3109, Arithmetic Formats for Machine Learning
- IEEE P3110, Computer Vision (CV)—Algorithms, Application Programming Interfaces (API), and Technical Requirements for Deep Learning Framework
- IEEE P3123, AI and ML Terminology and Data Formats
- IEEE P3127, Architectural Framework for Blockchain-based Federated ML
- IEEE P3128, Evaluation of AI Dialogue System Capabilities
- IEEE P3142, Distributed Training and Inference for Large-scale Deep Learning Models
- IEEE P3152, Description of the Natural or Artificial Character of Intelligent Communicators
- IEEE P3168, Robustness Evaluation Test Methods for a NLP Service that uses ML
- Standards on Knowledge Graphs (IEEE 2807 Series, IEEE P3154)
For more information, visit https://ieee-sa.imeetcentral.com/eurollingplan/.
IETF
The IETF Autonomic Networking Integrated Model and Approach (ANIMA) Working Group will develop a system of autonomic functions that carry out the intentions of the network operator without the need for detailed low-level management of individual devices. This will be done by providing a secure closed-loop interaction mechanism whereby network elements cooperate directly to satisfy management intent. The working group will develop a control paradigm where network processes coordinate their decisions and automatically translate them into local actions, based on various sources of information, including operator-supplied configuration or existing protocols such as routing protocols.
Autonomic networking refers to the self-managing characteristics (configuration, protection, healing and optimization) of distributed network elements, adapting to unpredictable changes while hiding intrinsic complexity from operators and users. Autonomic networking, which often involves closed-loop control, is applicable to the complete network (functions) lifecycle (e.g. installation, commissioning, operating). An autonomic function that works in a distributed way across various network elements is a candidate for protocol design. Such functions should allow central guidance and reporting, and co-existence with non-autonomic methods of management. The general objective of this working group is to enable the progressive introduction of autonomic functions into operational networks, as well as reusable autonomic network infrastructure, in order to reduce operating expenses.
https://wiki.ietf.org/en/group/iab/Multi-Stake-Holder-Platform#h-319-artificial-intelligence
ITU
AI for Good is the leading United Nations platform for global and inclusive dialogue on AI. The Summit is hosted each year in Geneva by the ITU in partnership with 40 UN sister agencies.
More info: https://aiforgood.itu.int.
ITU-T SG11 is developing ITU-T Recommendations on implementing AI in signalling exchange, protocols and testing. ITU-T SG11 approved Recommendation ITU-T Q.5023 “Protocol for managing intelligent network slicing with AI-assisted analysis in IMT-2020 network”. Ongoing work includes a protocol for managing energy efficiency with AI-assisted analysis in IMT-2020 networks and beyond; signalling requirements and architecture to support AI-based vertical services in future networks, IMT-2020 and beyond; methods and metrics for monitoring ML/AI in future networks, including IMT-2020; and data management interfaces for intelligent edge-computing-based smart agriculture services.
ITU-T Study Group 13 approved various ITU-T Recommendations covering AI-based networks as well as machine learning in future networks and IMT-2020, including use cases, architectural frameworks, quality-of-service assurance, service provisioning, data handling, learning models, network automation for resource and fault management, marketplace integration, cloud computing and quantum key distribution networks (e.g. Recommendations ITU-T Y.3142, Y.3170, Y.3172, Y.3173, Y.3174, Y.3175, Y.3176, Y.3177, Y.3178, Y.3179, Y.3180-Y.3186, Y.3325, Y.3531, Y.3550, Y.3654, Supplement 55 to the Y.3170-series and Supplement 70 to the Y.3800-series). More info: https://www.itu.int/en/ITU-T/focusgroups/ml5g/Pages
SG13 continues development of Recommendations on the above topics as well as ML for big data driven networking, ML as a tool to better shape traffic, man-like networking. Also, in the framework of 5G, SG13 studies ML and AI to enhance QoS assurance, network slicing, operation management of cloud services, integrated cross-domain network architecture, network automation, framework of user-oriented network service provisioning. It also maintains the AI standards roadmap, Supplement 72 to Y.3000-series, which has a matrix of different document types per vertical versus the related technologies for supporting AI. For more info contact tsbsg13@itu.int.
ITU has been at the forefront of exploring how best to apply AI/ML in future networks, including 5G networks. To advance the use of AI/ML in the telecom industry, ITU launched the AI/ML in 5G Challenge in March 2020. The Challenge rallies like-minded students and professionals from around the globe to study the practical application of AI/ML in emerging and future networks. It also enhances the community driving standardization work for AI/ML, creating new opportunities for industry and academia to influence international standardization. The Challenge solutions can be accessed in several repositories on the Challenge GitHub: https://github.com/ITU-AI-ML-in-5G-Challenge.
Since its inception in 2020, the Challenge has grown to encompass other areas relevant to accelerating the achievement of the Sustainable Development Goals. The Challenge therefore covers the following areas:
- AI/ML in 5G Challenge: https://aiforgood.itu.int/about-ai-for-good/aiml-in-5g-challenge/
- GeoAI Challenge: https://aiforgood.itu.int/about-ai-for-good/geoai-challenge/
- TinyML Challenge: https://aiforgood.itu.int/about-ai-for-good/tinyml-challenge/
ITU-T Study Group 12 (performance, QoS and QoE) offers guidance for the development of machine learning based solutions for QoS/QoE prediction and network performance management in telecommunication scenarios (Recommendation ITU-T P.1402). ITU-T P.565 describes a framework for the creation and performance testing of machine learning based models for the assessment of transmission network impact on speech quality for mobile packet-switched voice services. ITU-T P.565.1 is the first standardized instantiation of the framework. ITU-T E.475 introduces a set of guidelines for intelligent network analytics and diagnostics. SG12 has developed and standardized several quality models leveraging machine learning techniques for the objective estimation of dimensions of QoS and QoE.
AI for Road Safety: The ITU, together with the UN Secretary-General’s Special Envoy for Road Safety and the Envoy on Technology, launched the initiative on AI for Road Safety, which is in line with the UN General Assembly Resolution (UN A/RES/74/299) on Improving global Road Safety, which highlights the role of innovative automotive and digital technologies. AI for Road Safety aims to leverage the use of AI for enhancing the safe system approach to road safety.
The new initiative supports achieving UN SDG target 3.6, to halve by 2030 the number of global deaths and injuries from road traffic accidents, and SDG target 11.2, to provide access to safe, affordable, accessible and sustainable transport systems for all by 2030. See:
https://aiforgood.itu.int/event/ai-for-road-safety/
https://aiforgood.itu.int/about/ai-ml-pre-standardization/ai4roadsafety/
ITU-T SG20 approved Recommendation ITU-T Y.4470 “Reference architecture of artificial intelligence service exposure for smart sustainable cities”, which introduces AI service exposure (AISE) for smart sustainable cities (SSC) and provides its common characteristics, high-level requirements, reference architecture and relevant common capabilities; Recommendation ITU-T Y.4494 “Reference architecture of collaborative decentralized machine learning for intelligent IoT services”; and agreed Supplement ITU-T Y.Suppl.63 “Unlocking Internet of things with artificial intelligence”, which examines how artificial intelligence could step in to bolster the intent of urban stakeholders to deploy IoT technologies and eventually transition to smart cities. ITU-T SG20 is currently working on:
- draft Recommendation ITU-T Y.RA-FML “Requirements and reference architecture of IoT and smart city & community service based on federated machine learning”;
- draft Recommendation ITU-T Y.SF-prediction “Service framework of prediction for intelligent IoT”;
- a draft Supplement to ITU-T Y.4223 on use cases of smart cities and communities supported by AI;
- draft Recommendation ITU-T Y.AIoT-fr “Framework of Artificial Intelligence of Things”;
- draft Recommendation ITU-T Y.AIoT-FRA “Functional requirements and architecture for Artificial Intelligence of Things”;
- draft Recommendation ITU-T Y.AIoT-dfs-arc “Reference architecture of data fusion service in artificial intelligence of things”;
- draft Recommendation ITU-T Y.AIoT-dpsm “Requirements and framework of data processing for smart manufacturing with Artificial Intelligence of Things”; and
- draft Technical Report ITU-T YSTR.GenAI-Sem-Interop “Implications of Generative Artificial Intelligence on Semantic Interoperability for Data Use”.
More info: https://itu.int/go/tsg20
ITU also coordinates the United for Smart Sustainable Cities (U4SSC) initiative, a UN initiative that develops action plans, technical specifications, case studies and guidelines, and offers policy guidance for cities to become smarter and more sustainable. The U4SSC initiative is currently running a thematic group on “Artificial Intelligence in Cities”. The U4SSC deliverable on guiding principles for artificial intelligence in cities, along with five case studies, was published in February 2024.
More info: https://u4ssc.itu.int/
ITU-T Study Group 5 develops international standards, guidelines, technical papers and assessment frameworks that support the sustainable use and deployment of ICTs and digital technologies, and evaluates the environmental performance, including biodiversity impacts, of digital technologies such as, but not limited to, 5G, artificial intelligence (AI), smart manufacturing and automation. ITU-T SG5 approved Recommendation ITU-T L.1305 “Data centre infrastructure management system based on big data and artificial intelligence technology”. This standard contains technical specifications of a data centre infrastructure management (DCIM) system, covering principles, management objects, management system schemes, data collection function requirements, operational function requirements, energy-saving management, capacity management for information and communication technology (ICT) and facilities, other operational function requirements and intelligent control of systems to maximize green energy use. Other aspects, such as maintenance function requirements, early alarm and protection based on big data analysis, and intelligent control of systems to reduce maintenance costs, are also considered. Additionally, SG5 has produced the following supplements: L Suppl. 48 “Data centre energy saving: Application of artificial intelligence technology in improving energy efficiency of telecommunication room and data centre infrastructure” and L Suppl. 53 “Guidelines on the implementation of environmental efficiency criteria for artificial intelligence and other emerging technologies”.
More info: https://itu.int/go/tsg5
The Focus Group on Environmental Efficiency for Artificial Intelligence and other emerging technologies (FG-AI4EE) concluded in December 2022 and identified the standardization needs to develop a sustainable approach to AI and other emerging technologies. The FG-AI4EE developed 21 technical reports and specifications on requirements, assessment and measurement and implementation guidelines of AI and other emerging technologies.
More info: https://itu.int/go/fgai4ee
The ITU-T Focus Group on AI for Autonomous and Assisted Driving (FG-AI4AD) aims to define a minimal performance threshold for the AI systems responsible for driving tasks in vehicles, so that an automated vehicle always operates on the road at least as safely as a competent and careful human driver. The Focus Group has completed the Technical Report “Automated driving safety data protocol – Ethical and legal considerations of continual monitoring” (https://www.itu.int/pub/T-FG-AI4AD-2021-02) and is finalizing three additional TRs on the related protocol specification, practical demonstrators and the benefits of continual monitoring.
More info: https://itu.int/go/fgai4ad
The ITU-T Focus Group on Artificial Intelligence for Health (FG-AI4H), established in partnership between ITU and WHO, is working towards a standardized assessment framework for the evaluation of AI-based methods for health, diagnosis, triage or treatment decisions.
https://www.itu.int/en/ITU-T/focusgroups/ai4h/
The Focus Group on Artificial Intelligence for Natural Disaster Management (FG-AI4NDM) aimed to underscore best practices for leveraging AI to support data collection and modelling across spatiotemporal scales and to provide effective communications in the event of natural disasters. Its activities were conducted in collaboration with the World Meteorological Organization (WMO) and the United Nations Environment Programme (UNEP). This Focus Group completed its work in March 2024. The Global Initiative on Resilience to Natural Hazards through AI Solutions, a collaborative effort led by ITU, WMO, UNEP, the UN Framework Convention on Climate Change (UNFCCC) and the Universal Postal Union (UPU), will further build on its work.
More info: https://itu.int/go/fgai4ndm
https://www.itu.int/en/ITU-T/extcoop/ai4resilience/Pages/default.aspx
Established by ITU-T SG20, the ITU-T Focus Group on Artificial Intelligence (AI) and Internet of Things (IoT) for Digital Agriculture (FG-AI4A) explored emerging technologies, including AI and IoT, in data acquisition and handling, in modelling from a growing volume of agricultural and geospatial data, and in providing communication for the optimization of agricultural production. The activities of this Focus Group were conducted in cooperation with the Food and Agriculture Organization of the United Nations (FAO).
More info: https://itu.int/go/fgai4a
ITU-T Focus Group on Artificial Intelligence Native for Telecommunication Networks (FG-AINN) was established by ITU-T Study Group 13 to explore and define the fundamental changes needed in network architecture to fully harness the potential of AI. This focus group, launched in July 2024, seeks to identify the requirements, challenges, and opportunities that AI-native networks will bring to the global communications landscape.
More info: https://www.itu.int/en/ITU-T/focusgroups/ainn/Pages/default.aspx
ITU-R
AI in Radiocommunication Standards: ITU Radiocommunication Sector (ITU-R) Study Groups and forthcoming reports examine the use of AI in radiocommunications:
- ITU-R Study Group 1 covers all aspects of spectrum management, including spectrum monitoring. Question 241/1 looks at “Methodologies for assessing or predicting spectrum availability”.
- ITU-R Study Group 6, dedicated to broadcasting services, is also studying AI and ML applications:
- Question ITU-R 144/6, “Use of AI for broadcasting”, considers the impact of AI technologies and how they can be deployed to increase efficiency in programme production, quality evaluation, programme assembly and broadcast emission.
- Recommendation ITU-R BS.1387, “Method for objective measurements of perceived audio quality”, was the first application of neural networks, now called AI (artificial intelligence), in the field of broadcasting.
- Report ITU-R BT.2447, “AI systems for programme production and exchange”, discusses current applications and near-term initiatives. This Report is revised regularly to reflect the latest progress on AI applications across the broadcasting industry chain.
OASIS
The Coalition for Secure AI (CoSAI) is a project launched at OASIS in mid-2024 by AI stakeholders to collaborate on open tools to identify and mitigate potential vulnerabilities and threats in AI systems, and lead to the creation of systems that are secure-by-design. Its initial deliverables are expected to include: guidance on software supply chain security for AI systems, including the deployment and structure of adequate provenance data, training and risk mitigation processes, and risks associated with integrating or relying on third-party models; expanding cybersecurity vulnerability and threat management methods, detection, and training, to address AI contexts; and developing a risk and controls taxonomy and scorecard for AI risk governance. For more information, see https://github.com/cosai-oasis/oasis-open-project/blob/main/CHARTER.md.
oneM2M
oneM2M provides a standardized IoT data source for AI/ML applications. Furthermore, the oneM2M work item on “System enhancements to support AI capabilities” (WI-0105) aims to enable oneM2M to utilize Artificial Intelligence models and data management for AI services.
All oneM2M specifications are publicly accessible at Specifications (onem2m.org). See also the section on IoT in the Rolling plan.
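As a rough illustration of how an AI/ML application might consume oneM2M data, the following TypeScript sketch retrieves the latest content instance from a oneM2M container over the HTTP protocol binding. The CSE host, resource path and originator credential are hypothetical placeholders (addressing conventions vary by deployment); the oneM2M specifications remain the normative reference.

```ts
// Hedged sketch: retrieve the latest contentInstance ("la" = latest) from
// a oneM2M container via the HTTP binding. Host, path and originator are
// hypothetical.
async function latestSample(): Promise<unknown> {
  const res = await fetch(
    "https://cse.example.com/~/in-cse/in-name/myAE/sensorData/la",
    {
      headers: {
        "X-M2M-Origin": "CmyAE", // originator (registered AE identifier)
        "X-M2M-RI": "req-0001",  // unique request identifier
        Accept: "application/json",
      },
    },
  );
  if (!res.ok) throw new Error(`oneM2M retrieve failed: ${res.status}`);
  const body = await res.json();
  return body["m2m:cin"]?.con;   // "con" carries the stored data payload
}
```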
W3C
The Web Machine Learning Working Group develops the Web Neural Network API for enabling efficient machine learning inference in web browsers. The Ethical Principles for Web Machine Learning document discusses ethical issues associated with using machine learning and outlines considerations for web technologies that enable related use cases.
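For illustration, here is a minimal sketch of graph construction and inference with the Web Neural Network API in TypeScript. WebNN is still a W3C draft whose surface continues to evolve, so the method names below (createContext, MLGraphBuilder, compute) reflect one draft iteration and may differ in the final specification; shapes and values are illustrative only.

```ts
// Draft-API sketch: build and run a tiny WebNN graph computing c = a + b.
// WebNN typings are not yet in the standard DOM libs, hence the any-casts.
async function runWebNN(): Promise<Float32Array> {
  const ml = (navigator as any).ml;                 // draft entry point
  const context = await ml.createContext({ deviceType: "cpu" });
  const builder = new (globalThis as any).MLGraphBuilder(context);

  const desc = { dataType: "float32", dimensions: [1, 4] };
  const a = builder.input("a", desc);               // named graph inputs
  const b = builder.input("b", desc);
  const graph = await builder.build({ c: builder.add(a, b) });

  // Bind concrete buffers and execute the graph.
  const inputs = {
    a: new Float32Array([1, 2, 3, 4]),
    b: new Float32Array([4, 3, 2, 1]),
  };
  const outputs = { c: new Float32Array(4) };
  const result = await context.compute(graph, inputs, outputs);
  return result.outputs.c;                          // expected [5, 5, 5, 5]
}
```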
The GPU for the Web Working Group develops the WebGPU specification and its companion WebGPU Shading Language to give web applications access to computation capabilities offered by modern GPU cards, allowing them to run AI computations efficiently on the device.
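By contrast, WebGPU is lower-level: the application supplies its own compute shaders. Below is a minimal sketch, assuming a WebGPU-enabled browser and the @webgpu/types typings (buffer sizes and workgroup counts are illustrative), of a compute pass that doubles an array on the GPU; reading results back would additionally require a MAP_READ staging buffer, omitted here for brevity.

```ts
// Minimal WebGPU compute sketch: double every element of a small array.
async function doubleOnGpu(): Promise<void> {
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) throw new Error("WebGPU not available");
  const device = await adapter.requestDevice();

  const shader = device.createShaderModule({
    code: `
      @group(0) @binding(0) var<storage, read_write> data: array<f32>;
      @compute @workgroup_size(64)
      fn main(@builtin(global_invocation_id) id: vec3<u32>) {
        if (id.x < arrayLength(&data)) {   // guard against extra threads
          data[id.x] = data[id.x] * 2.0;
        }
      }`,
  });

  const pipeline = device.createComputePipeline({
    layout: "auto",
    compute: { module: shader, entryPoint: "main" },
  });

  // Upload four floats into a storage buffer.
  const input = new Float32Array([1, 2, 3, 4]);
  const buffer = device.createBuffer({
    size: input.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
    mappedAtCreation: true,
  });
  new Float32Array(buffer.getMappedRange()).set(input);
  buffer.unmap();

  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [{ binding: 0, resource: { buffer } }],
  });

  // Encode and submit one compute pass.
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(1);              // 4 elements fit in one workgroup
  pass.end();
  device.queue.submit([encoder.finish()]);
}
```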
The Web & Networks Interest Group explores solutions for web applications to leverage network capabilities in order to achieve better performance and resource allocation, both on the device and in the network. The group discusses machine learning acceleration scenarios and requirements in its Client-Edge-Cloud Coordination Use Cases and Requirements document.
(C.2) Other activities related to standardisation
The European AI Alliance
https://ec.europa.eu/digital-single-market/en/european-ai-alliance
The High-Level Group on Artificial Intelligence
https://ec.europa.eu/digital-single-market/high-level-group-artificial-intelligence
AI on Demand Platform
H2020
R&D&I projects funded under topic ICT-26 of the H2020 ICT Work Programme 2018-20 can produce relevant input for standardisation.
StandICT.eu
This EU-funded project produced a standardisation landscape report for the technology area of AI.
This overview or landscape document is a static “snapshot” of a dynamically updated database compiled within StandICT.eu.
The database is inclusive (covering many different SDOs and organisations), reusable (available for liaison to other organisations), filterable (to choose a subset of documents and organisations appropriate to a particular use) and easily exportable (CSV, Word, ODT, mind-map).
https://www.standict.eu/landscape-analysis-report/landscape-artificial-intelligence-standards
(C.3) Additional information
European AI Alliance
The European AI Alliance is a forum set up by the European Commission, engaged in a broad and open discussion of all aspects of Artificial Intelligence development and its impacts. Given the scale of the challenge associated with AI, the full mobilisation of a diverse set of participants, including businesses, consumer organisations, trade unions and other representatives of civil society bodies, is essential. The European AI Alliance forms a broad multi-stakeholder platform which complements and supports the work of the AI High-Level Expert Group, in particular in preparing draft AI ethics guidelines and in ensuring the competitiveness of Europe in the burgeoning field of Artificial Intelligence. The Alliance is open to all stakeholders; it is managed by a secretariat and is open for registration.
High-Level Expert Group on Artificial Intelligence (AI HLEG)
The group has now concluded its work by publishing the following four deliverables:
Deliverable 1: Ethics Guidelines for Trustworthy AI
The document puts forward a human-centric approach on AI and lists 7 key requirements that AI systems should meet in order to be trustworthy.
Deliverable 2: Policy and Investment Recommendations for Trustworthy AI
Building on its first deliverable, the HLEG put forward 33 recommendations to guide trustworthy AI towards sustainability, growth, competitiveness, and inclusion. At the same time, the recommendations will empower, benefit, and protect European citizens.
Deliverable 3: Assessment List for Trustworthy AI (ALTAI)
A practical tool that translates the Ethics Guidelines into an accessible and dynamic self-assessment checklist. The checklist can be used by developers and deployers of AI who want to implement the key requirements. This list is available as a prototype web-based tool and in PDF format.
Deliverable 4: Sectoral Considerations on the Policy and Investment Recommendations
The document explores the possible implementation of the HLEG recommendations, previously published, in three specific areas of application: Public Sector, Healthcare and Manufacturing & Internet of Things.
CAI
In September 2019, the Committee of Ministers of the Council of Europe set up an Ad Hoc Committee on Artificial Intelligence – CAHAI. The Committee examined, on the basis of broad multi-stakeholder consultations, the feasibility and potential elements of a legal framework for the development, design and application of artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law. The committee, which brings together representatives from the Member States, had an exchange of views with leading experts on the impact of AI applications on individuals and society, the existing soft-law instruments specifically dealing with AI, and the existing legally binding international frameworks applicable to AI. CAHAI finalised its work at the end of 2021 by adopting the final deliverable “Possible elements of a legal framework on artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law” (https://rm.coe.int/cahai-2021-09rev-elements/1680a6d90d). Based on the results of CAHAI’s work, the Council of Europe established in 2022 a new Committee on Artificial Intelligence – CAI.
CAI drafted and negotiated the text of the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, which was opened for signature on 5 September 2024. It is the first-ever international legally binding treaty on AI and aims to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law, while being conducive to technological progress and innovation.
The Framework Convention covers the use of AI systems by public authorities – including private actors acting on their behalf – and by private actors. The Framework Convention requires states to comply with fundamental-rights principles. It puts in place remedies, procedural rights and safeguards, as well as the obligation to carry out risk and impact assessments and to establish sufficient prevention and mitigation measures. The Council of Europe also developed a methodology for risk and impact assessments, the HUDERIA. HUDERIA is a standalone, non-legally-binding guidance document that parties to the Framework Convention have the flexibility to use or adapt, in whole or in part, to develop new approaches to risk assessment or to refine existing ones, in accordance with their applicable laws.
CAHAI: https://www.coe.int/en/web/artificial-intelligence/cahai
CAI: https://www.coe.int/en/web/artificial-intelligence/cai
Framework Convention: https://www.coe.int/en/web/artificial-intelligence/the-framework-convention-on-artificial-intelligence
HUDERIA: https://www.coe.int/en/web/artificial-intelligence/huderia-risk-and-impact-assessment-of-ai-systems
AI on Demand Platform
From 2014 to 2020, the European Commission funded a large €20 million project on Artificial Intelligence (AI) under the R&D framework programme Horizon 2020. It aimed to mobilise the AI community in Europe to combine efforts, develop synergies among existing initiatives and optimise Europe’s potential.
The Commission plans to increase its investment in AI further, mainly through two programmes: the research and innovation framework programme Horizon Europe, and the Digital Europe programme.
UNESCO International research centre on Artificial Intelligence (IRCAI)
UNESCO has approved the establishment of IRCAI, which will be based in Ljubljana (Slovenia). IRCAI aims to provide an open and transparent environment for AI research and debates on AI, providing expert support to stakeholders around the globe in drafting guidelines and action plans for AI. It will bring together stakeholders with a variety of know-how from around the world to address global challenges, support UNESCO in carrying out its studies and take part in major international AI projects. The centre will advise governments, organisations, legal persons and the public on systemic and strategic solutions for introducing AI in various fields.
AI studies
In addition to the previous initiatives, the Commission is planning to conduct technical studies on AI, including one specifically targeted at identifying safety standardisation needs.
Standard sharing with other domains
AI is a vast scientific and technological domain that overlaps with other domains discussed in this Rolling Plan, e.g. big data, e-health, and robotics and autonomous systems. Many of the standardisation activities in these domains will be beneficial for AI, and vice versa. For more details, please refer to section “(C.1) Related standardisation activities”.