AI applications are being deployed now, the related threats are not yet fully understood, and we know there are gaps in our controls, tools, and processes. We must protect our use of AI, be ready for AI-supported attacks, and use AI effectively in our defenses. This post is about the need for focused security efforts around AI, the dedicated role of an AISO, and how we can build on experience from cybersecurity.
Let's start by clarifying that by AISO (Artificial Intelligence Security Officer), we don't mean AI components operating as a CISO and taking responsibility for an organization's security. There are many opportunities for AI automation and augmentation of decisions and tasks in cybersecurity, but this is one of the domains where humans will likely remain in control (with several red buttons at hand). The term AISO is used here to describe a role responsible for addressing all security challenges related to researching and developing AI capabilities and deploying them in production scenarios.

AI technologies are associated with many hopes and promises, and we need to take proper steps to seize the opportunities while understanding and reducing the risks, including those that are still unknown. That mission comes with unique challenges related to the properties and possible consequences of AI applications. We are seeing an unprecedented rush of deployment from research to production. Many deployments are done with a limited understanding of threats, have known gaps in technologies and practices, or have not addressed non-technical concerns. As a result, we need security efforts that cover all scenarios involving AI, focus on new threats and requirements, and enable integration with non-security initiatives while supporting continuous learning and agile adaptation to local contexts. Similar to the role of a BISO (Business Information Security Officer), who works closely with specific business units to understand their particular security problems, we need an AISO role dedicated to addressing all immediate or imminent problems related to applications of AI in a given context.
Protecting AI, from AI, and with AI
Whenever we talk about practical AI security, we need to remember that this term can be understood in multiple ways that are connected but associated with different objectives and methods. The role of the AISO is to work on awareness, policies, technologies, and culture to cover all the interpretations that apply to the local context of an organization or a project (with different priorities as needed). In a practical and immediate sense, the role usually covers protecting how we want to use AI, defending ourselves from AI-supported attacks, and using AI in our defenses.
Protecting AI Applications – this area seems closest to traditional product security efforts, yet it is still sufficiently different. All non-trivial AI applications are potential targets of attacks, and we need a complete understanding of the practical requirements for the Responsible use of Trustworthy AI, with particular attention paid to high-risk domains and mission-critical scenarios.
Protecting from Malicious Use of AI – this area might be the most urgent, as attackers will be the first to use new AI capabilities. We need to understand and be ready for the use of AI in more effective implementations of old threats (e.g., automated social engineering), but also for the emergence of new ones with different targets and objectives (e.g., reputation, trust, or users).
Protecting with the Help of AI – this area holds the most significant direct opportunities in the context of cybersecurity. AI will help us manage the security and complexity of modern information systems and user interactions. Still, many snake-oil solutions will be built on unrealistic assumptions or expectations, and we should assume that attackers will initially have a significant advantage in this space. A minimal sketch of this defensive use of AI follows below.
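To make the defensive direction concrete, here is a minimal sketch of anomaly detection over login events, one of the simpler ways AI already assists defenders. It assumes scikit-learn is available; the feature set, numbers, and thresholds are purely illustrative and hypothetical, not a production detection pipeline.

```python
# Minimal sketch: an unsupervised model flagging anomalous logins.
# The features per event are hypothetical: hour of day, failed
# attempts in the last hour, and distance (km) from the user's
# usual location.
from sklearn.ensemble import IsolationForest
import numpy as np

baseline = np.array([
    [9, 0, 5], [10, 1, 2], [14, 0, 8], [11, 0, 3], [16, 2, 10],
    [9, 0, 4], [13, 1, 6], [15, 0, 7], [10, 0, 2], [12, 1, 5],
])

detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(baseline)

# A 3 a.m. login with many failures from 4,000 km away should stand out.
suspicious = np.array([[3, 12, 4000]])
print(detector.predict(suspicious))  # -1 means flagged as anomalous
```

Even this toy example shows why attackers keep an early advantage: the defender needs representative baseline data and carefully chosen features, while the attacker only needs one blind spot.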
Security and safety are critical requirements for AI, as new applications must not only provide value but also must not cause harm. There are, of course, many other requirements, some related to security (privacy or accountability), and others fundamentally different in nature and scope, e.g., broad categories of critical ethical and legal concerns. Close cooperation and integration of efforts across all these requirements will be essential for successfully adopting AI. These are significant technical and organizational challenges, and security can be very useful as a central coordination point for these projects. We have more than 30 years of experience with cybersecurity, which by design has been dealing with continuously evolving technologies, unexpected flaws and behaviors, and applications brought to life without proper consideration. Security efforts focus on practical results and challenging ambiguity, and they come with an established working structure, with many workflows and patterns that could be relatively quickly reused in the specific context of AI technologies. With all the hype around AI, and with speed often perceived as a competitive necessity, we need a culture of responsible innovation. It might be easier to build that culture on the foundations of security, with its key principles at the core.
We need security efforts dedicated to AI, based on cybersecurity, but focused on differences, new requirements, and redefining assumptions if needed.
The situation with AI research and development is also sufficiently different that existing cybersecurity tools and practices cannot be automatically extended to cover all new applications and scenarios. We are used to technologies moving fast from research to production, but not at the speed of change we see in the current AI race. All of this is happening with limited transparency, in a dynamic context driven by experimentation, opportunities, and priorities that may not be beneficial or even acceptable in the longer term. There are new types of threats, assumptions, and user expectations (with related vulnerabilities), often connected with novel ways of experiencing and interacting with AI. We already know that we have many gaps in the policies, controls, tools, and processes required to analyze these threats, detect flaws in applications, and mitigate attempts to exploit them. We will also need to develop new solutions for AI, including regulation, certification, threat modeling, pen-testing, securing the platform, and managing all the concerns related to external dependencies and outsourcing of critical decisions to third-party systems (a small sketch of one such control follows below). Because of all these problems, we need security efforts dedicated to AI, based on and extending cybersecurity methodologies, but also focused on differences, identification of new requirements, and redefining existing assumptions where needed.
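As one example of the kind of new control an AISO might require, here is a minimal sketch of screening untrusted input before it reaches a third-party model. The pattern list and the call_model() function are hypothetical; simple filters like this are easy to bypass, so the sketch illustrates the gap rather than closing it.

```python
# Minimal sketch: a guardrail between users and an external AI dependency.
# The patterns below are illustrative, not a complete defense.
import re

INJECTION_PATTERNS = [
    r"ignore (all |any )?previous instructions",
    r"disregard (the )?system prompt",
    r"you are now",
]

def screen_input(user_text: str) -> str:
    """Reject input that looks like a prompt-injection attempt."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, user_text, re.IGNORECASE):
            raise ValueError("Potential prompt injection; escalate for review.")
    return user_text

# Hypothetical usage: the guardrail sits in front of the external system,
# and every rejection is logged as input for threat analysis.
# safe_text = screen_input(request_text)
# response = call_model(safe_text)  # call_model() stands in for a 3rd-party API
```

The design point is not the filter itself but its placement: a named control at the boundary of an outsourced decision, owned by someone accountable for it.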
AI security can become a driving force in shaping the organizational culture around AI projects, so it is focused not only on benefits but also on risks.
The proposed role of an AISO comes with many challenges and some opportunities. Because of the gaps in processes and technologies, this role requires continuous adaptation, learning, and (occasional) improvisation to keep up with new capabilities and their use by attackers. That involves working on awareness and prioritization in the local context, including organizational priorities, culture, and external factors, like competitors moving fast in the AI race. There is an opportunity to use the structure of cybersecurity to facilitate and coordinate other efforts required by AI. The requirements in this space go beyond technology, as we need to cover the broad impact of AI applications on groups, individuals (not only direct users but all affected subjects), and social systems. Addressing ethical or legal concerns and translating them into technical requirements for implementation at all layers of an AI system will require close interdisciplinary cooperation and the definition of new vocabularies and patterns. In the past, cybersecurity changed engineering practices in software development and infrastructure management, which resulted in improvements in other areas, like reliability and privacy. Similarly, AI security could become one of the key driving forces shaping the organizational culture around AI projects, so that it is focused not only on benefits but also takes risks into consideration.
In a way, the role of AISO is already needed regardless of the practical success or failure of AI applications, just because we are trying to implement them.
We don’t know where we are heading with this wave of progress around AI. Many great ideas (and many not-so-great ones) will be tested in practice, and we can be sure there will be plenty of disappointments. From a security point of view, the mere attempt to implement these new capabilities changes the requirements for existing and newly created systems. The outcomes of these projects do not determine their full impact: without proper diligence, both successful and failed projects can cause harm. Failures are often great opportunities for accelerated learning, but only if somebody is in a position to take advantage of the occasion. In a way, the role of AISO is already needed regardless of the practical success or failure of AI applications, just because we are implementing these projects. Eventually, we will know more and have proven mitigations, regulations in high-risk domains, and operational standards, and AI security efforts will become completely embedded in regular operations. Until then, though, we need transparency, accountability, and attention whenever a decision about a new AI application is made. The dedicated role of AISO might be the solution that helps us make the best possible decisions while we still face so many unknowns.
AI applications are new, complex, and quickly becoming ubiquitous without sufficient control, accountability, or transparency. If an AI system is not secure, it will not be safe, fair, inclusive, or trusted. It would also be difficult to talk seriously about human agency and oversight if an AI application can be controlled or influenced by an external entity. Every organization operating in the digital space will soon need practical AI security. The more users and businesses come to depend on AI, the more attractive a target these solutions will become. And even if we don’t plan to use AI yet, we need to prepare for AI to be used against us in offensive ways. The role of AISO can be fundamental in learning about new applications and threats, defining policies and standards, evaluating new mitigations as they mature, and working closely with other Ethical AI initiatives adjusted to unique local contexts. As I mentioned in the first post of this publication, in every non-trivial AI project, we need a person in the room who will keep asking the question: what could possibly go wrong? The idea behind the dedicated role of AISO is to give structure and a development path to such responsibilities. The most straightforward goal of security is to avoid surprises. The goal of the AISO can be defined as avoiding surprises related to AI.