Starting AI Security Programs
We need to remember a few things when introducing AI applications in an organization.
We have a lot of conversations about the security and safety of AI applications. There seems to be broad agreement about the importance of the related requirements and about the gaps in existing security technologies and practices. At the same time, AI capabilities are being applied rapidly and at massive scale in most industries, often in scenarios that should undoubtedly be considered high risk. In this post, we sketch how to start and frame security efforts around AI in an organization to reduce the probability of adverse and unexpected outcomes of these projects.
New AI capabilities have the potential to automate and augment our decision tasks, change our creative processes, and redefine how we interact with software, artificial agents, and, unavoidably, with each other. We hope these new technologies will help us solve problems that are currently too difficult or expensive to approach. Unfortunately, there are also plenty of unreasonable expectations, ridiculous hype, and a perceived necessity to use AI before competitors do, at all costs. Both AI opportunities and challenges are still relatively new; we have a limited understanding of the technology's value, the new threats, and the required mitigations. Even though the importance of security and safety is recognized, we cannot expect that attempts to use AI in practice will be postponed until we have figured out all the technical challenges, best practices, or regulations. Since security and safety are not areas where shortcuts are acceptable, we need to build local AI security programs that enable a staged approach: experimentation and testing the value of new capabilities, development of new applications, and their responsible move to production. We need to provide a structure for practical AI security efforts that addresses the demand for innovation and agility without exposing organizations, their users, or other individuals and groups to unnecessary, unjustified, or unknown risks.
AI security will eventually be integrated into regular cybersecurity programs, but at this point, these new projects can benefit from focused efforts. Existing cybersecurity controls and methods do not fully address the specific threats or characteristics of AI-based applications. All classic security problems still apply to AI platforms and models; in some cases, they can be even more challenging due to the complexity and novelty of AI technologies. But to address all the new challenges, we need dedicated AI security programs built on top of existing cybersecurity efforts, focused specifically on the threats and requirements of developing, hosting, or integrating with AI components. Experience from cybersecurity is a strong foundation, but these efforts will have to go beyond technology to cover the broad impact of AI applications on groups, individuals, and organizational resilience. We need AI security programs that help us make informed decisions about using AI capabilities in specific use contexts, with an understanding of both the benefits and the risks of these new opportunities. The key elements of such programs include proper foundations, structure, execution, and effects, both expected and unintended.
Below are a few comments on each of the elements presented in the diagram above.
Research & Awareness (Foundations)
AI capabilities are still evolving, and we must keep learning, analyzing, and documenting threats, assumptions, and user expectations. Risks depend on the use context, so all the above elements can be specific to an industry, an application domain and its scenarios, or even a single organization. Until mature industry standards and frameworks exist, internal research efforts cannot be skipped.
All stakeholders must have a sufficient understanding of the benefits and risks of new technologies. They need adequate training to make informed decisions about AI applications, which may often result in reducing the project's scope or even its cancellation. We must ensure that risks and unknown variables are acknowledged, documented, and never dismissed or ignored.
Policy & Analysis (Structure)
Scenarios and rules for acceptable use of AI must be defined and justified by potential value in the context of applicable risks. We need clear criteria for identifying high-risk or mission-critical scenarios and for deciding when they are not ready for production. Ongoing efforts should be aligned with upcoming AI regulations and emerging industry standards.
Security reviews should cover all non-trivial AI applications and start with threat modeling adapted to the unique characteristics of AI applications and their dependencies. The results should be translated into requirements that are precise and measurable. Critical parts of the system (e.g., AI models) should be subject to dedicated penetration/red-team testing, an activity we expect to become standardized.
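As an illustration only (the requirement identifiers, metric names, and thresholds below are hypothetical, not drawn from any standard or framework), a "precise and measurable" requirement derived from a threat model can be expressed as data that a review or red-team pipeline checks automatically:

```python
# Illustrative sketch: hypothetical, simplified representation of measurable
# security requirements derived from a threat-modeling session. The names
# and thresholds are invented for this example.

from dataclasses import dataclass

@dataclass
class Requirement:
    identifier: str    # short reference used in reviews and test reports
    description: str   # what must hold, phrased so it can be measured
    metric: str        # name of the measurement produced by testing
    threshold: float   # acceptance boundary for that measurement

REQUIREMENTS = [
    Requirement(
        identifier="REQ-PI-01",
        description="Prompt-injection test suite success rate",
        metric="injection_block_rate",
        threshold=0.99,   # at least 99% of known injection payloads blocked
    ),
    Requirement(
        identifier="REQ-DATA-02",
        description="No personal data echoed back in sampled model outputs",
        metric="pii_leak_rate",
        threshold=0.0,    # zero tolerance in the sampled outputs
    ),
]

def evaluate(measurements: dict[str, float]) -> list[str]:
    """Return identifiers of requirements not met by the measured values."""
    failed = []
    for req in REQUIREMENTS:
        value = measurements.get(req.metric)
        if value is None:
            failed.append(req.identifier)       # not measured at all
        elif req.metric.endswith("_leak_rate"):
            if value > req.threshold:           # lower is better
                failed.append(req.identifier)
        elif value < req.threshold:             # higher is better
            failed.append(req.identifier)
    return failed

if __name__ == "__main__":
    # Example measurements as they might come from a red-team test run.
    print(evaluate({"injection_block_rate": 0.97, "pii_leak_rate": 0.0}))
```

The point of such a structure is not the particular metrics but that every requirement produced by a security review has an owner, a measurement, and a pass/fail boundary that can be revisited as the use context changes.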
Technology & Operations (Execution)
Technical mitigations must be implemented at all stages of the secure AI development lifecycle, along with applicable procedures and best practices. In some cases, we will need new or updated security controls and procedures to mitigate threats that are unique to AI capabilities or critically changed by them. Protection mechanisms must be integrated across the use contexts, AI models, and underlying platforms.
Operational considerations must include managing models (and their use in specific scenarios), experimentation, and integration with data governance. Monitoring must be extended to AI components, behaviors, and interactions to detect critical events and trends. The incident response process should be ready to address unique situations, such as the significant time needed to fix a vulnerability in a model.
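As a minimal sketch of what extending monitoring to AI interactions could look like (the log_ai_event helper and its field names are assumptions made for this example, not part of any specific monitoring product), structured, per-interaction records give later detection and trend analysis something to work with:

```python
# Minimal sketch of structured logging for AI interactions; the schema below
# is an illustrative assumption, not a standard.

import json
import logging
import time
import uuid

logger = logging.getLogger("ai_monitoring")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_ai_event(model_id: str, use_context: str, prompt_chars: int,
                 response_chars: int, flagged: bool) -> None:
    """Emit one structured record per model interaction for later analysis."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_id": model_id,          # which model/version served the request
        "use_context": use_context,    # the approved scenario the call belongs to
        "prompt_chars": prompt_chars,  # sizes only, to avoid logging raw content
        "response_chars": response_chars,
        "flagged": flagged,            # e.g., a content-filter or policy hit
    }
    logger.info(json.dumps(record))

if __name__ == "__main__":
    log_ai_event("support-assistant-v2", "customer_email_drafting",
                 prompt_chars=512, response_chars=1480, flagged=False)
```

Even a record this small ties each interaction to a model version and an approved use context, which is what incident response needs when a model-level vulnerability takes time to fix.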
Culture & Communication (Effects)
Broad applications of AI will impact all organizations and their cultures, even those that are very careful with local adoption of AI capabilities. That impact extends to organizational resilience and vulnerabilities. Work on security and safety will need to go beyond technology and be coordinated with the other tenets of responsible and ethical AI, such as fairness and interpretability.
There is a need for a clear vision and communication strategy around AI, both within the organization and externally. Internal communication needs to address fears and confusion related to AI and support organizational honesty. External communication is essential for building trust - especially relevant with generative AI, which may take over significant parts of daily customer interactions.
All the elements of AI security programs mentioned above need to be applied across the complete lifecycle of an AI application. We need to cover how the application is developed (including training and tuning models), how it is applied to specific problems, and how it is operated in expected and unusual conditions. These are not linear but continuous and repetitive processes - for example, the need for research or training doesn't stop when an AI application becomes operational. Use contexts matter - new applications cannot be automatically approved, as unintentional harm can be caused by using well-tested technologies in a context with different requirements. And those requirements can change over time. At least at the beginning, we also need to pay special attention to all external dependencies and third-party engagements, as early mistakes in external integrations can be very expensive in the long term. That brings us back to the critical importance of transparency and accountability and the need to acknowledge and document any assumptions, unknowns, and issues that must be addressed (and that could limit applications in specific use contexts in the meantime).
We are only at the beginning of the road when it comes to practical applications of AI. It will take time to verify new opportunities, learn the lessons, and fill the gaps. We have some great ideas for AI principles, and many proposed frameworks could eventually make these projects more manageable. But for now, most depends on internal AI security programs and focused efforts executed for specific AI projects in an organization. Mistakes will be made along the way, but we need to do our best so that they are avoided whenever possible or have a significantly reduced impact in all other cases. We shouldn't expect to know all the answers yet, but we should be expected to have at least a complete list of questions. We cannot ignore problems that seem too new or too complex - on the contrary, these problems require our closest attention. No excuses, no surprises. We must try to understand the big picture and the long-term consequences of any non-trivial AI application; the fact that they are hard to predict just means that we need to take our steps more carefully and meticulously document our journeys.