A new report from ETSI, the European standards organization for telecoms, broadcasting and electronic communications networks and services, aims to pave the way for establishing a standard for artificial intelligence (AI) security.
The first step on the path to creating a standard is describing the problem of securing AI-based systems and solutions. That is what the 24-page report ETSI GR SAI 004, the first to be published by the ETSI Securing Artificial Intelligence Industry Specification Group (SAI ISG), does. It defines the problem statement, with a particular focus on machine learning (ML) and on the challenges to confidentiality, integrity and availability at each stage of the machine learning lifecycle. It also points out some of the broader challenges of AI systems, including bias, ethics and explainability, and outlines a number of attack vectors as well as several real-world use cases and attacks.
To identify the issues involved in securing AI, the first step was to define AI. For the ETSI group, artificial intelligence is the ability of a system to handle representations, both explicit and implicit, and procedures to perform tasks that would be considered intelligent if performed by a human. This definition still represents a broad spectrum of possibilities. However, a limited set of technologies are now becoming feasible, largely driven by the evolution of machine learning and deep-learning techniques, and the wide availability of the data and processing power required to train and implement such technologies.
Numerous approaches to machine learning are in common use, including supervised, unsupervised, semi-supervised and reinforcement learning.
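To make the distinction concrete, the following minimal sketch (not taken from the ETSI report; Python with scikit-learn and a synthetic dataset are assumptions for the example) contrasts a supervised model, which is fitted on labelled examples, with an unsupervised one, which must infer structure from the features alone.

```python
# Illustrative sketch only: contrasting supervised and unsupervised learning
# on a synthetic dataset. The library and dataset are assumptions for the
# example, not choices made by the ETSI report.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# Supervised learning: the model is trained on features *and* labels.
clf = LogisticRegression().fit(X, y)
print("supervised predictions:", clf.predict(X[:3]))

# Unsupervised learning: the model sees only the features and infers structure.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster assignments:  ", km.labels_[:3])
```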
Within these paradigms, a variety of model structures might be used, with one of the most common approaches being the use of deep neural networks, where learning is carried out over a series of hierarchical layers that mimic the behaviour of the human brain.
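As a hypothetical illustration of such a hierarchy (the report does not prescribe any framework; PyTorch and the layer sizes below are assumptions), a small feed-forward network might look like this:

```python
# Illustrative sketch only: a small feed-forward deep neural network in PyTorch,
# where each successive layer learns a higher-level representation of its input.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),   # first hidden layer: low-level features
    nn.Linear(32, 32), nn.ReLU(),   # second hidden layer: intermediate features
    nn.Linear(32, 2),               # output layer: scores for two classes
)

x = torch.randn(8, 16)              # a batch of 8 synthetic 16-dimensional inputs
scores = model(x)                   # forward pass through the layer hierarchy
print(scores.shape)                 # torch.Size([8, 2])
```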
Various training techniques can be used as well, notably adversarial learning, where the training set contains not only samples that reflect the desired outcomes but also adversarial samples, intended to challenge or disrupt the expected behaviour.
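The report does not prescribe how adversarial samples are produced. One well-known technique, used here purely as an assumed illustration, is the fast gradient sign method, which perturbs an input in the direction that most increases the model's loss:

```python
# Illustrative sketch only: crafting one adversarial sample with the fast
# gradient sign method (FGSM). The toy model, input and epsilon are assumptions
# for the example, not content from the ETSI report.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 16, requires_grad=True)       # clean input sample
y = torch.tensor([1])                            # its intended label

loss = loss_fn(model(x), y)
loss.backward()                                  # gradient of the loss w.r.t. the input

epsilon = 0.1                                    # assumed perturbation budget
x_adv = (x + epsilon * x.grad.sign()).detach()   # the adversarial sample

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```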
“There are a lot of discussions around AI ethics but none on standards around securing AI. Yet, they are becoming critical to ensure security of AI-based automated networks. This first ETSI report is meant to come up with a comprehensive definition of the challenges faced when securing AI. In parallel, we are working on a threat ontology, on how to secure an AI data supply chain, and how to test it,” explains Alex Leadbeater, Chair of ETSI SAI ISG.
Asked about timelines, Leadbeater told embedded.com, “Another 12 months is a reasonable estimate for technical specifications. There are more technical reports coming over the next couple of quarters (AI Threat Ontology, Data Supply Chain Report, SAI Mitigation Strategy report). In fact, one specification on security testing of AI should be out before, around end of Q2/Q3. The next steps will be to identify specific areas in the problem statement that can be expanded into more detailed informative work items.”
Report outline
Following the definition of AI and machine learning, the report then looks at the data processing chain, covering confidentiality, integrity and availability challenges throughout the lifecycle, from data acquisition, data curation, model design and software build, to training, testing, deployment and inference, and upgrades.
In an AI system, data can be obtained from a multitude of sources, including sensors (such as CCTV cameras, mobile phones, medical devices) and digital assets (such as data from trading platforms, document extracts, log files). Data can also be in many different forms (including text, images, video and audio) and can be structured or unstructured. In addition to security challenges related to the data itself, it is important to consider the security of transmission and storage.
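As a purely illustrative example of protecting stored or transmitted records (the report does not mandate any particular mechanism; the cryptography package's Fernet recipe and the sample record below are assumptions), data could be encrypted before it leaves the device:

```python
# Illustrative sketch only: symmetric encryption of a data record before storage
# or transmission, using the `cryptography` package's Fernet recipe. This is an
# assumed mechanism, not one specified in the ETSI report.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, held in a key-management system
fernet = Fernet(key)

record = b'{"source": "cctv-03", "frame_id": 1841}'   # hypothetical sensor record
token = fernet.encrypt(record)     # ciphertext that is safe to store or transmit
assert fernet.decrypt(token) == record
```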
To give an indication of the integrity challenges in data curation: when repairing, augmenting or converting data sets, it is important to ensure that these processes do not compromise the quality and integrity of the data. For supervised machine learning systems, the data labelling must be accurate and as complete as possible, and the labelling must retain its integrity and not be compromised, for example through poisoning attacks. It is also important to address the challenge of ensuring the data set is unbiased. Data augmentation techniques, too, can affect the integrity of the data.
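One simple way to surface suspicious labels, sketched below as an assumption rather than a method from the report, is to compare the provided labels against cross-validated predictions from a baseline model and flag the disagreements for manual review:

```python
# Illustrative sketch only: flagging potentially compromised labels by comparing
# them with cross-validated predictions from a simple baseline model. The data
# and model are assumptions; a real curation pipeline would combine several
# such checks.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
y_noisy = y.copy()
y_noisy[:10] = 1 - y_noisy[:10]          # simulate ten flipped (poisoned) labels

pred = cross_val_predict(LogisticRegression(max_iter=1000), X, y_noisy, cv=5)
suspects = np.where(pred != y_noisy)[0]  # labels that look inconsistent
print(f"{len(suspects)} samples flagged for manual review")
```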
Another area covered is design challenges arising from unintentional factors such as bias, data ethics and explainability.
For example, bias should be considered not only during the design and training phases, but also after a system has been deployed, since bias can still be introduced then. The report cites an example from 2016, when a chatbot was launched as an experiment in “conversational understanding”. The chatbot would engage with social network users through tweets and direct messages. Within a matter of hours, it was beginning to tweet highly offensive messages. After the chatbot was withdrawn, it was discovered that its account had been manipulated by internet trolls to display biased behaviour. Bias does not necessarily represent a security issue, but it can simply result in the system not meeting its functional requirements.
On ethics, the report highlights several examples, including autonomous cars and healthcare. It cites a paper from the University of Brighton which discussed a hypothetical scenario where a car powered by AI knocks down a pedestrian and explored the legal liabilities that ensue. In March 2018, this scenario became a reality when a self-driving car hit and killed a pedestrian in the city of Tempe, Arizona. This brought into sharp focus not only the legal liabilities, but the potential ethical challenges of the decision-making process itself. In 2016, Massachusetts Institute of Technology (MIT) launched a web site called Moral Machine exploring the challenges of allowing intelligent systems to make decisions that are of an ethical nature. The site attempts to explore how humans behave when faced with ethical dilemmas, and to gain a better understanding of how machines ought to behave.
The report emphasizes that while ethical concerns do not have a direct bearing on the traditional security characteristics of confidentiality, integrity and availability, they can have a significant effect on an individual’s perception of whether a system can be trusted. It is therefore essential that AI system designers and implementers consider the ethical challenges and seek to create robust ethical systems that can build trust among users.
Finally, the report looks at attack types, from poisoning and backdoor attacks to reverse engineering, followed by real-world use cases and attacks.
In a poisoning attack, an attacker seeks to compromise the AI model, normally during the training phase, so that the deployed model behaves in the way the attacker desires. This can mean that the model fails on certain tasks or inputs, or that it learns a set of behaviours that are desirable for the attacker but not intended by the model designer. The report outlines three typical ways in which poisoning attacks can occur.
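To make the idea concrete, the toy sketch below (dataset, model and poisoning rate are all assumptions, not material from the report) flips a fraction of the training labels and shows how the deployed model's accuracy suffers:

```python
# Illustrative sketch only: a toy label-flipping poisoning attack. Flipping a
# fraction of the training labels degrades the accuracy of the deployed model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

rng = np.random.default_rng(0)
idx = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]    # flip 30% of the training labels

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print("clean model accuracy:   ", clean.score(X_te, y_te))
print("poisoned model accuracy:", poisoned.score(X_te, y_te))
```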
While the term ‘artificial intelligence’ originated at a conference in the 1950s at Dartmouth College in Hanover, New Hampshire, USA, the real-life use cases described in the ETSI report show how much the field has evolved since. Such cases include ad-blocker attacks, malware obfuscation, deepfakes, handwriting reproduction, human voice and fake conversation (the last of which has already attracted plenty of comment in the context of chatbots).
What’s next? Ongoing reports as part of this ISG
The industry specification group (ISG) has several further reports in progress under its work items, which will dig deeper into specific areas:
Security testing: The purpose of this work item is to identify objectives, methods and techniques that are appropriate for security testing of AI-based components. The overall goal is to produce guidelines for security testing of AI and AI-based components, taking into account the different algorithms of symbolic and subsymbolic AI and addressing relevant threats from the “AI threat ontology” work item. Security testing of AI has some commonalities with security testing of traditional systems, but presents new challenges and requires different approaches, due to:
(a) significant differences between symbolic and subsymbolic AI and traditional systems that have strong implications on their security and on how to test their security properties;
(b) non-determinism since AI-based systems may evolve over time (self-learning systems) and security properties may degrade;
(c) the test oracle problem: assigning a test verdict is different and more difficult for AI-based systems, since not all expected results are known a priori (one way of approaching this is sketched below); and
(d) data-driven algorithms: in contrast to traditional systems, (training) data shapes the behaviour of subsymbolic AI.
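The report itself only names the test oracle problem; one widely discussed way of working around it (used here as an assumed illustration, not an ETSI recommendation) is metamorphic testing, where instead of knowing each correct output the tester checks that related inputs produce consistently related outputs:

```python
# Illustrative sketch only: a metamorphic test as one way around the test oracle
# problem. We do not know the "correct" output for every input, but we can
# require that a tiny perturbation of an input does not change the model's
# prediction. The model, data and noise scale are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

rng = np.random.default_rng(0)
inputs = X[:50]
perturbed = inputs + rng.normal(scale=1e-3, size=inputs.shape)  # metamorphic relation

violations = int(np.sum(model.predict(inputs) != model.predict(perturbed)))
print(f"metamorphic relation violated on {violations} of {len(inputs)} inputs")
```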
The scope of this work item is to provide guidelines for security testing of AI that take the above challenges into account. The guidelines will use the results of the “AI Threat Ontology” work item to cover the relevant threats to AI through security testing, and will also address the challenges and limitations of testing AI-based systems.
AI threat ontology: The purpose of this work item is to define what would be considered an AI threat and how it might differ from threats to traditional systems. The rationale for this work is that there is currently no common understanding of what constitutes an attack on AI and of how such an attack might be created, hosted and propagated. The “AI threat ontology” deliverable will seek to align terminology across the different stakeholders and multiple industries. The document will define this terminology in the context of cyber and physical security, with an accompanying narrative that should be readily accessible to both experts and less informed audiences across the multiple industries. Note that the threat ontology will address AI as a system, as an adversarial attacker and as a system defender.
Data supply chain report: Data is a critical component in the development of AI systems. This includes raw data as well as information and feedback from other systems and humans in the loop, all of which can be used to change the function of the system by training and retraining the AI. However, access to suitable data is often limited causing a need to resort to less suitable sources of data. Compromising the integrity of training data has been demonstrated to be a viable attack vector against an AI system. This means that securing the supply chain of the data is an important step in securing the AI. This report will summarise the methods currently used to source data for training AI along with the regulations, standards and protocols that can control the handling and sharing of that data. It will then provide gap analysis on this information to scope possible requirements for standards for ensuring traceability and integrity in the data, associated attributes, information and feedback, as well as the confidentiality of these.
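One basic building block for that traceability, sketched below purely as an assumption (the hashing scheme and manifest format are not ETSI requirements), is to record a cryptographic fingerprint of the training data and verify it before every training run:

```python
# Illustrative sketch only: a cryptographic fingerprint of a training dataset,
# recorded in a manifest and re-checked before training. The hashing scheme and
# manifest format are assumptions, not requirements from the ETSI report.
import hashlib
import json
import numpy as np

def dataset_fingerprint(X: np.ndarray, y: np.ndarray) -> str:
    """Return a SHA-256 digest over the dataset's raw bytes."""
    digest = hashlib.sha256()
    digest.update(X.tobytes())
    digest.update(y.tobytes())
    return digest.hexdigest()

X = np.arange(100, dtype=np.float64).reshape(20, 5)   # stand-in training data
y = (np.arange(20) % 2).astype(np.int64)

manifest = {"dataset": "demo-v1", "sha256": dataset_fingerprint(X, y)}
print(json.dumps(manifest, indent=2))

# Later, before training: recompute the fingerprint and compare it to the manifest.
assert dataset_fingerprint(X, y) == manifest["sha256"], "training data has changed"
```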
SAI mitigation strategy report: This work item aims to summarize and analyze existing and potential mitigations against threats to AI-based systems. The goal is to produce guidelines for mitigating the threats introduced by adopting AI into systems. These guidelines will shed light on baselines for securing AI-based systems by mitigating known or potential security threats. They will also address the security capabilities, challenges and limitations of adopting mitigations for AI-based systems in certain potential use cases.
The role of hardware in security of AI: This work item will prepare a report that identifies the role of hardware, both specialised and general-purpose, in the security of AI. It will address the mitigations available in hardware to prevent attacks, as well as the general requirements on hardware to support SAI. In addition, the report will address possible strategies for using AI to protect hardware, provide a summary of academic and industrial experience in hardware security for AI, and address vulnerabilities or weaknesses introduced by hardware that may amplify attack vectors on AI.