AI Security and Decision Processes
In search of simple definitions usable in the security context
In this post, we will talk about definitions of AI that can be useful for the practical security of AI applications when they become targets of attacks. There is a lot of confusion around definitions of AI, resulting from the variety of applications, growing capabilities and potential impact, and the fast development pace, but also from the attractiveness of these technologies to investors and marketing departments. From a practical point of view, we don’t necessarily need a definition that is complete, detailed, and future-proof (especially as AI systems will keep evolving for a while). We do, however, need a shared understanding of the key concepts, described in a simple, correct (based on our current knowledge), and reasonably consistent way, to help us with problems that need to be addressed today. Conversations about the security of AI applications have value if they’re focused on understanding risks, determining limitations and constraints, and defining guardrails and guidelines. They will not be productive if they need to start with questions about terminology every single time.
AI as automation/augmentation of decision processes
There are many attempts to define AI for upcoming regulations and practical guidelines. A great example is the definition proposed by the OECD, which has been adopted by different efforts, such as the NIST AI Risk Management Framework 1.0. Creating a simple definition for a complex system is rarely easy, and many challenges manifest only upon moving to the implementation phase. That can be illustrated by discussions about the definition proposed in the European Union’s AI Act and concerns about designing AI systems in ways that intentionally avoid governance. The problem doesn’t become any simpler when we try to find definitions that would cover both Narrow AI, which works well within the limited context of specific tasks, and General AI (AGI), which is a much broader and more complex concept. On the other hand, definitions of AI can also become too focused when they are built around specific technologies or impactful applications (like, recently, ChatGPT). A good example is defining AI as a system based on deep neural networks (DNNs), even though an AI system doesn’t have to use machine learning at all; it can be built upon Good Old-Fashioned AI or a new approach yet to be invented.
Rather than looking at specific technologies, practical definitions might focus on the role that AI components play or are expected to play. Following that path, we can find an excellent definition of AI:
The computing of appropriate action from context.
This definition is simple and almost beautiful but may still be too general for connecting with engineering efforts. For that, we might look at something more rudimentary, like defining AI as:
Technological solutions for automation or augmentation of our decision processes.
A few comments are needed for this definition:
Technological solutions — AI components these days are still mostly software, but that doesn’t have to be the case in the future. Even today, AI solutions may be equipped with sensors and actuators connecting them with physical reality.
Automation or augmentation — concepts describing the role of AI components as replacing vs. supporting humans in particular tasks, with different levels of autonomy and clarity of responsibilities; the distinction is especially important in high-risk domains such as healthcare.
Decision processes — the term is understood very broadly and includes cognitive tasks as well as the complex, not always conscious, streams of decisions made during creative processes, which can be uniquely individual to a human creator (a prompt is not an inspiration).
A nice thing about this definition is that it can connect AI capabilities with specific business scenarios and underlying technologies used to implement them. AI components are always elements of a bigger system operating in some environment, digital or physical (or both), and embedded in the business, socio-technical, and individual contexts. By connecting these elements with the stack of underlying technologies, we might be able to use familiar techniques and approaches from cybersecurity.
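To make the distinction between automation and augmentation concrete, here is a minimal sketch (not from the original post) of how a single AI-supported decision point could be modeled in code; the names DecisionMode and DecisionPoint, the healthcare-style usage example, and the choice of Python are illustrative assumptions only.

```python
# Illustrative sketch only: names and structure are assumptions, not from the original post.
from dataclasses import dataclass
from enum import Enum
from typing import Any, Callable, Optional


class DecisionMode(Enum):
    AUTOMATION = "automation"      # the AI output is acted on directly
    AUGMENTATION = "augmentation"  # the AI output is a recommendation; a human decides


@dataclass
class DecisionPoint:
    """One step in a decision process that an AI component supports."""
    name: str
    mode: DecisionMode
    model: Callable[[dict], Any]                                # AI component: context -> proposed action
    human_review: Optional[Callable[[Any, dict], Any]] = None  # required for augmentation

    def decide(self, context: dict) -> Any:
        proposal = self.model(context)
        if self.mode is DecisionMode.AUGMENTATION:
            if self.human_review is None:
                raise ValueError(f"{self.name}: augmentation requires a human reviewer")
            return self.human_review(proposal, context)
        return proposal


# Hypothetical usage: an augmented triage decision where a clinician confirms or overrides.
triage = DecisionPoint(
    name="triage-priority",
    mode=DecisionMode.AUGMENTATION,
    model=lambda ctx: "urgent" if ctx.get("risk_score", 0) > 0.8 else "routine",
    human_review=lambda proposal, ctx: proposal,
)
print(triage.decide({"risk_score": 0.9}))  # "urgent", after human confirmation
```

Making the mode explicit in this way also makes it clear who owns the final decision, which matters when responsibilities have to be assigned in high-risk domains.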
Protecting integrity of decisions made with AI
Practical cybersecurity is focused on three main security properties: Confidentiality, Integrity, and Availability (sometimes extended with others, such as Authenticity and Non-repudiation). Protecting a system requires taking care of these properties at the level of infrastructure, software, data, and users (including access control and users’ behavior). Security of these technological foundations is essential for any AI solution, but the protection must go beyond that. Since the value of AI lies in improving our decision processes, we also need to protect the specific way AI components are used. That requires an understanding of the use context, clarity about decision situations and their requirements, the technology’s limitations, and users’ assumptions and expectations. We will need to answer whether AI results can be trusted in a particular context.
When protecting an AI application from attacks and external interference, we need to think about the complete stack of technologies used to enable AI components and the workflows built on top of them. In that context, the security of AI applications can be understood as:
Protecting data, infrastructure, software, and operations of an AI solution, as well as Confidentiality, Integrity, and Availability of decision processes that are automated or augmented with results from AI models and algorithms.
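One way to start operationalizing this definition (an illustrative sketch, not a prescribed method) is a simple review matrix that crosses the layers named above with the core security properties; the helper name empty_review_matrix and the example entries are assumptions for illustration only.

```python
# Illustrative sketch: the layer and property names follow the definition above;
# the matrix structure and example entries are assumptions.
LAYERS = ["data", "infrastructure", "software", "operations", "decision process"]

# Core security properties; extend with Authenticity or Non-repudiation where needed.
PROPERTIES = ["confidentiality", "integrity", "availability"]


def empty_review_matrix():
    """Build a layer x property checklist for recording threats and controls."""
    return {layer: {prop: {"threats": [], "controls": []} for prop in PROPERTIES}
            for layer in LAYERS}


review = empty_review_matrix()
# Example entry: prompt injection undermines the integrity of an augmented decision.
review["decision process"]["integrity"]["threats"].append("prompt injection / model manipulation")
review["decision process"]["integrity"]["controls"].append("input validation, human review for high-impact actions")
```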
Please note that additional requirements may also be applicable depending on the context and on the severity of decisions and their consequences. For example, the need for accountability or transparency may be translated into underlying requirements around the security property of Non-repudiation. The priority of specific requirements may also be context-dependent; even though Integrity will likely remain key in most scenarios, there may be cases where Availability will be more critical than Confidentiality (e.g., medical devices in time-sensitive contexts).
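As a loose illustration of how accountability can translate into Non-repudiation at the record level, the sketch below signs each AI-assisted decision record with an HMAC; the function record_decision, the record fields, and the key handling are assumptions, and a real deployment would rely on managed keys (for example from a KMS) plus an append-only audit store.

```python
# Illustrative sketch: record format and key handling are assumptions, not a prescribed design.
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"replace-with-a-managed-secret"  # in practice, a key from an HSM or KMS


def record_decision(context: dict, model_output: dict, final_action: str, actor: str) -> dict:
    """Create a tamper-evident record of an AI-assisted decision."""
    record = {
        "timestamp": time.time(),
        "actor": actor,                 # who (or what) made the final call
        "context": context,             # inputs the decision was based on
        "model_output": model_output,   # what the AI component proposed
        "final_action": final_action,   # what was actually done
    }
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


# Hypothetical usage: log an augmented decision so it can later be attributed and verified.
entry = record_decision(
    context={"patient_id": "anon-123", "risk_score": 0.9},
    model_output={"recommendation": "urgent"},
    final_action="urgent",
    actor="clinician:jdoe",
)
```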
Faster and broader adoption of AI technologies will lead to an increasing transition of decision and creative tasks from humans to machines. We need to recognize these changes and pay attention to clear rules, transparency, and guardrails, especially for AI in high-risk or mission-critical applications. That will become more difficult as we move from isolated and discrete decisions, with clearly defined inputs and outputs, to more continuous interactions, like using conversational solutions or augmenting creative tasks with AI.
The situation may be even trickier as many of our decisions or sequences of decisions become indirectly based on AI, without clear documentation of how AI components are used. Such indirect dependencies will become more common as AI solutions get more complex and ubiquitous and head further toward AGI. We may quickly lose clarity and understanding of the mechanics behind our decisions, be unaware that AI components are even involved, or miss specific data flows or possible consequences that should be considered (a significant opportunity for functional regulations).
AI Security can provide a structure for addressing these challenges. Eventually, it should not only build on the experience and culture of cybersecurity but also become a part of cybersecurity, with a focus on protecting new types of intelligent products.
Updated on Aug 30th, 2023, with references and minor fixes based on received feedback.