AI Security and Organizational Resilience
Organizations must be ready for AI even if they don’t believe in AI.
As organizations broadly onboard AI capabilities, they need to implement new security controls and practices to protect AI applications and assets. Since AI changes the security landscape, organizations also need to make sure that their employees, customers, and brands are protected from threats that are enabled or enhanced by AI. As a result, AI challenges cyber and business resilience, and the problems cannot be solved with technology alone. If organizations want to benefit from AI, they need to evolve and learn how to effectively manage risks and threats related to current and upcoming AI capabilities. If they are not interested in AI, they need to do basically the same.
AI is very shiny in the organizational context. Visions of automation and augmentation improving productivity and reducing costs sound almost too good to be true. As a result, AI is expected to significantly change how we work, and the use of generative AI continues to climb, even though estimating and demonstrating its value remains difficult and affected individuals (including customers) often lack enthusiasm for those changes. Introducing fundamentally new technologies and application scenarios is a considerable effort for any organization. That was the case with the Internet and Cloud Computing, and it is even more so with AI. These efforts require technical, business, and cultural changes and are critical challenges for organizational resilience, whether considered in a cyber or business scope. Cyber resilience usually refers to cybersecurity risk and the capacity to withstand or recover quickly from difficulties, covering areas like risk management, incident response, or business continuity on a technical level. Business resilience is defined in broader terms as the ability of an organization to absorb the effects of a changing environment and adapt to it, also addressing financial, operational, or reputational aspects. We can generally think of resilience as an organization’s ability to adapt to change, respond effectively to disruption, manage crises, and recover from incidents. Integration with AI technologies may critically challenge those abilities in ways we don’t yet fully understand, and in ways that could be very specific to an organization.
AI capabilities change the security landscape, regardless of whether they are used in production or even considered to be useful for an organization.
When discussing AI security, we need to consider at least three meanings of that term. First, it is about protecting AI applications, as they are exposed to new threats and have unique vulnerabilities and requirements, while guardrails are still under development. Second, we need protection from AI: new attacks that it enables, or old ones that it supports and enhances to become more effective, scalable, and cheaper. Finally, we should think about using AI defensively, which is necessary to deal with information overload and complexity but is also associated with unique risks and challenges. AI capabilities fundamentally change the security landscape, regardless of whether they are used in production or even considered useful for an organization. We may choose to delay onboarding AI applications but still need to be ready for new or AI-enhanced attacks, or for partners and other companies being affected by new threats. It also does not matter how successful AI applications are and whether the promised value is ever delivered – mere attempts to integrate AI with existing systems and environments can make organizations more exposed and vulnerable. If an organization operates in a digital space in any capacity, its security is impacted by the development and deployment of AI capabilities, even if the current push for AI turns out to be hype or a bubble. Below, we will look at some examples of AI-related challenges affecting organizations.
AI integration and dependencies change organizational Attack Surfaces. AI applications can be closely integrated into organizational workflows and individual routines, frequently interacting with users, other systems, and data. The data in scope are usually unstructured and often highly sensitive, adding new requirements for data validation, sanitization, or classification of their sensitivity (especially when it is contextual). That can be a significant problem for organizations with unclear data flows and gaps in their data governance programs. AI applications have more complex dependencies and supply chains, covering not only software but also data and details of the training process. That applies even to internally hosted apps, which still likely rely on foundation models, external frameworks, or external data for training. As a result, introducing AI to production environments can significantly increase the organizational attack surface, making it more dynamic and open. Perimeter defenses are becoming less effective, and added AI components can expose existing vulnerabilities and technical debt in infrastructure, software, and data that were previously internal and isolated. Suddenly, the environment might have many more entry points, regardless of whether the organization is ready for that and whether the benefits actually justify the risks.
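For illustration, here is a minimal sketch (in Python, with purely hypothetical patterns, names, and thresholds) of the kind of pre-processing gate such validation and sanitization requirements imply: flagging or redacting obviously sensitive fragments before a prompt crosses an external trust boundary. It is an assumption-laden example, not a reference implementation; real deployments would rely on organization-specific classifiers and data governance metadata.

```python
# Hypothetical sketch of a prompt pre-processing gate; patterns are illustrative only.
import re
from dataclasses import dataclass

# Very rough indicators of sensitive content (assumed examples, not a complete policy).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "api_key": re.compile(r"\b(sk|tok)_[A-Za-z0-9]{16,}\b"),
}

@dataclass
class GateDecision:
    allowed: bool
    redacted_text: str
    findings: list[str]

def gate_prompt(text: str, allow_redacted: bool = True) -> GateDecision:
    """Flag or redact obviously sensitive fragments before text leaves the organization."""
    findings = []
    redacted = text
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(redacted):
            findings.append(label)
            redacted = pattern.sub(f"[REDACTED:{label}]", redacted)
    allowed = not findings or allow_redacted
    return GateDecision(allowed=allowed, redacted_text=redacted, findings=findings)

if __name__ == "__main__":
    decision = gate_prompt(
        "Summarize the contract for jane.doe@example.com, account DE44500105175407324931."
    )
    print(decision.allowed, decision.findings)
    print(decision.redacted_text)
```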
AI components become the most critical assets for knowledge-based organizations. Interacting with LLM-based agents can often lead users to leak sensitive information to AI systems. Such behavior is often unintentional, resulting from a limited understanding of threats and from more open, conversational user experiences that go beyond traditional HCI. That can be especially problematic with external services and data flows crossing external trust boundaries, unclear user agreements, and a lack of general regulation in this space. Additional risks are related to training and tuning internal models, which in practice amounts to an incremental transfer of institutional and individual knowledge to AI components. As a result, local AI models are becoming the most critical assets in an organization, and we are learning a lot about practical vulnerabilities allowing for the exfiltration of models. Additionally, there are new sensitive data sets, like records of internal AI use (which may often be shared with external companies for security purposes). All these assets can be targeted by attacks aimed at extracting organizational value and competencies. Just as an individual could be replaced by AI, so could an organization that does not protect its AI assets sufficiently.
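As one small sketch of what treating model weights as a critical asset could look like in practice, the Python fragment below restricts and audits every read of a local model file. The service allowlist, paths, and logging destination are all assumptions for illustration; an actual control set would also include storage permissions, encryption at rest, and egress monitoring to detect exfiltration attempts.

```python
# Hypothetical sketch: restrict and audit access to locally hosted model weights.
import hashlib
import logging
from pathlib import Path

logger = logging.getLogger("model_asset_audit")
logging.basicConfig(level=logging.INFO)

# Assumed service identities permitted to read the weights.
ALLOWED_SERVICES = {"inference-gateway", "evaluation-pipeline"}

def read_model_weights(path: Path, service_identity: str) -> bytes:
    """Return model weights only for allow-listed services and record the access."""
    if service_identity not in ALLOWED_SERVICES:
        logger.warning("Denied model access: %s requested %s", service_identity, path)
        raise PermissionError(f"{service_identity} is not authorized to read model weights")
    data = path.read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    logger.info("Model %s (sha256=%s) read by %s", path, digest[:12], service_identity)
    return data
```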
AI capabilities have a downstream impact on the integrity of critical decision processes. The main promise of generative AI is assistance with cognitive, creative, and decision tasks. The more difficult and expensive the task, the more significant the potential value of automating or augmenting it with AI. LLM-based apps can help write an email or summarize an article, but they offer even bigger value in helping with difficult tasks, integrating with predictive models, or explaining complex data and patterns. As our decisions become more dependent on results from AI, these solutions become more attractive targets of attacks. New threats are aimed not only at software, infrastructure, and data but also at the integrity of AI-supported decision processes. Attackers will still try to gain control over IT systems, but also to influence business decisions, with potentially high ROI. Mitigations will include provenance for AI-supported decisions and validation of the quality of AI results, but these are not easy problems to solve. They will not become easier with AI agents operating internally with varying degrees of autonomy, which raises separate concerns about their permissions. As a result, internal trust boundaries become more complex; we will require additional controls and will need to fundamentally rethink concepts of identity for humans and non-humans.
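To make the provenance idea more concrete, here is a minimal Python sketch of a record linking an AI-assisted decision to the model output it was based on and the human who approved it. The field names, identifiers, and hash-based approach are illustrative assumptions about what such a record could contain, not a prescribed schema.

```python
# Hypothetical sketch: a provenance record for an AI-assisted decision.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionProvenance:
    decision_id: str
    model_id: str            # which model/version produced the recommendation
    prompt_sha256: str       # hash of the full input, stored instead of raw sensitive text
    output_sha256: str       # hash of the model output that was acted upon
    human_approver: str | None
    created_at: str

def record_provenance(decision_id: str, model_id: str, prompt: str,
                      output: str, human_approver: str | None = None) -> DecisionProvenance:
    """Build a hash-based record tying a decision to its AI input, output, and approver."""
    return DecisionProvenance(
        decision_id=decision_id,
        model_id=model_id,
        prompt_sha256=hashlib.sha256(prompt.encode()).hexdigest(),
        output_sha256=hashlib.sha256(output.encode()).hexdigest(),
        human_approver=human_approver,
        created_at=datetime.now(timezone.utc).isoformat(),
    )

if __name__ == "__main__":
    record = record_provenance("loan-2024-0007", "internal-risk-llm:v3",
                               "Assess applicant ...", "Recommend: decline",
                               human_approver="analyst_42")
    print(json.dumps(asdict(record), indent=2))
```

Records like this do not prevent manipulation on their own, but they make it possible to audit and re-validate decisions later, which is the first step toward protecting their integrity.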
Security impact of AI on organizations goes beyond technology and may also be critical for employees, customers, reputation, or continuity of the business.
AI technologies promise close cooperation between users and agents within organizations, which, more than ever before, can now be described as socio-technical systems. With the continuous move to digital and online spaces, organizations have become increasingly dependent on technology in their daily operations and effectiveness. AI only accelerates that process, as having all data in online systems is a practical prerequisite for broad use of AI (including scenarios where it should not be used). As a result, the security impact of AI on organizations goes far beyond technology and may also be critical for employees, customers, reputation, or continuity of the business. Cybersecurity has never been only about technology, as users often turn out to be the weakest elements of a system and frequent targets of attacks. As individual work becomes more augmented with AI through interactions with multiple agents, users become even more critical parts of organizational attack surfaces. Attacks can still exploit technical vulnerabilities and use social engineering against humans, but they can also employ new types of techniques targeting AI components or their interactions with users. A new, bigger attack surface of AI-integrated organizations can expose not only technical issues but also security problems on the business or cultural level.
Organizational culture is the key element of security efforts, covering awareness, education, communication, and – most importantly – trust. Technologies can change cultures, the most recent example being remote work during the pandemic. With the perceived AI race, new capabilities are often implemented in chaotic and rushed ways, with wrong motivations, a lack of clear vision, or limited transparency (starting with the decisions about AI projects). Such projects can be confusing and lead to understandable fear of and hesitation towards new technologies, as they might help users but also replace them. This is a general problem with adopting AI, as users can refuse to participate in training models but can also stop sharing their knowledge with co-workers. In a security context, this leads to an evolution of insider threats, where malicious AI agents can play a significant role, but more often, users will simply misuse AI capabilities as a result of actual or pretended incompetence. That will also manifest in the use of unauthorized or unsanctioned AI applications (Shadow AI) to improve work productivity. All these challenges may erode organizational trust and make efforts aimed at securing AI-based socio-technical systems even more difficult. The situation can get uglier if AI is used to offload responsibility and blame for unpopular or questionable decisions.
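A first step toward dealing with Shadow AI is simply gaining visibility into it. The Python sketch below scans web proxy logs for traffic to known AI services that are not on a sanctioned list. The domain names, the log format, and the sanctioned set are assumptions for illustration; real discovery would combine network, SaaS, and endpoint telemetry, and the goal should be informed policy and education rather than blame.

```python
# Hypothetical sketch: first-pass discovery of unsanctioned AI service usage from proxy logs.
from collections import Counter
from urllib.parse import urlparse

# Illustrative examples only; such lists need regular review as part of policy.
KNOWN_AI_DOMAINS = {"api.openai.com", "chat.openai.com", "claude.ai", "gemini.google.com"}
SANCTIONED_AI_DOMAINS = {"api.openai.com"}  # e.g., covered by an enterprise agreement

def find_shadow_ai(proxy_log_lines: list[str]) -> Counter:
    """Count requests per user to AI domains outside the sanctioned set.
    Each log line is assumed to look like '<user> <url>'."""
    hits: Counter = Counter()
    for line in proxy_log_lines:
        try:
            user, url = line.split(maxsplit=1)
        except ValueError:
            continue  # skip malformed lines
        host = urlparse(url.strip()).hostname or ""
        if host in KNOWN_AI_DOMAINS and host not in SANCTIONED_AI_DOMAINS:
            hits[(user, host)] += 1
    return hits

if __name__ == "__main__":
    sample = [
        "alice https://claude.ai/chat",
        "bob https://api.openai.com/v1/chat/completions",
        "carol https://gemini.google.com/app",
    ]
    for (user, host), count in find_shadow_ai(sample).most_common():
        print(f"{user} -> {host}: {count} request(s)")
```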
Eventually, AI should help organizations become more resilient, both through defensive applications of AI and by pushing organizations to resolve previously dismissed security problems. However, that is unlikely to happen anytime soon; in the meantime, we should rather expect AI to increase risks significantly. AI creates great opportunities for adversaries, as publicly available AI models and tools enable faster, cheaper, and more effective attacks against individuals, organizations, and individuals in organizations. Advanced Persistent Threats (APT) will become more targeted, contextual, and scalable, with automatically detected targets, custom content and payloads, and highly coordinated campaigns with continuous learning. Generative AI redefines risks related to social engineering, from deepfakes to persuasive voice phishing, although fortunately the same capabilities can also be used against scammers. Attacks may focus on a specific company but affect a much bigger ecosystem, with actions aimed at employees, key customers, or broader social networks (e.g., families of executives). We should also expect innovative offensive tactics aimed at brand, reputation, and trust, as effective simulation of human behavior will allow for advanced business-level Denial-of-Service. Such attempts may be especially successful against organizations that quickly became too dependent on AI, outsourced their decision processes to external systems, or failed to define appropriate requirements for technology providers.
To successfully benefit from AI, organizations must evolve and learn how to effectively manage risks and threats related to new AI capabilities.
In cybersecurity, the value of information can change dramatically when it is stolen, altered, or removed. That became a much bigger problem when information started to be processed digitally in networked systems. When an organization transfers internal knowledge to AI systems (Individual -> Organizational -> AI), the value of that organization can also change if these systems are untrustworthy or insufficiently protected. Years of unique experience could be extracted (in ready-to-be-applied form!), or a business decision could be silently manipulated, resulting in critical disruptions. The solution to these challenges can be only partially delivered by technology, in the form of new requirements, controls, and processes. To successfully benefit from AI, organizations must therefore adapt and learn how to effectively manage risks and threats related to current and upcoming AI capabilities. In many situations, that means changing how organizations have effectively operated so far, starting with redefining the role of technology. This should be an internal process that benefits from external partners, solutions, and guidance, but it must not be completely outsourced. Many of the practical issues may be very local and specific to the organization; fixing them is not a type of responsibility that should be externalized (or ignored).
The introduction of AI to organizations is a change as critical as connecting a company to the Internet 30 years ago. There must be a difference between experiments and onboarding full capabilities to production. Existing security controls and practices are insufficient, and no silver-bullet solution in a box will quickly solve all our problems. We are dealing with new capabilities that are still evolving, new scenarios that are being tested, and new threats that are appearing faster than mitigations mature. In the meantime, AI apps will keep getting into organizations in more or less official ways, and security teams must update access controls, validate inputs and outputs, and protect the most critical assets. These efforts cannot be random or executed in continuous crisis mode. They should be part of dedicated AI Security Programs covering education, policies, technical guardrails, and operational practices. Such programs should be part of existing cybersecurity efforts but focus on AI-specific risks and opportunities. They need to help us understand the unique threats that must be mitigated and provide security requirements for the full lifecycle of AI applications. Finally, they must also cover AI in incident response planning, including cases when AI could be unreliable or unavailable to the organization.
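One of the operational practices mentioned above, validating outputs, can be illustrated with the minimal Python sketch below: a simple guardrail that checks a model response against basic policy rules before it reaches downstream systems or users. The specific checks, limits, and patterns are placeholders for decisions an AI Security Program would make, not a recommended rule set.

```python
# Hypothetical sketch: a basic output guardrail checked before a model response is released.
import re
from dataclasses import dataclass, field

@dataclass
class OutputPolicy:
    max_length: int = 4000
    blocked_patterns: list[re.Pattern] = field(default_factory=lambda: [
        re.compile(r"(?i)ignore (all|previous) instructions"),  # prompt-injection echo
        re.compile(r"\b(?:\d[ -]*?){13,16}\b"),                 # possible payment card number
    ])

@dataclass
class OutputVerdict:
    allowed: bool
    reasons: list[str]

def validate_output(text: str, policy: OutputPolicy | None = None) -> OutputVerdict:
    """Return whether a model output passes basic policy checks before release."""
    policy = policy or OutputPolicy()
    reasons = []
    if len(text) > policy.max_length:
        reasons.append("output exceeds maximum allowed length")
    for pattern in policy.blocked_patterns:
        if pattern.search(text):
            reasons.append(f"matched blocked pattern: {pattern.pattern}")
    return OutputVerdict(allowed=not reasons, reasons=reasons)

if __name__ == "__main__":
    verdict = validate_output("Your card 4111 1111 1111 1111 has been charged.")
    print(verdict.allowed, verdict.reasons)
```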
In the context of AI, the differences between cyber and business resilience seem to become less relevant. AI-infused companies are complex socio-technical systems where a technical failure can translate into critical consequences for the business. If organizations want to benefit from AI, they must learn how to be more resilient in an AI-based reality. If they are not interested in AI, they still basically need to do the same. Resilience cannot be provided by technology alone. We need a solid understanding of the actual value of AI applications in specific use contexts and must determine whether the benefits justify the (usually non-obvious) risks and costs. We need to define strong requirements for technology, service, or data providers, and ensure that dependencies and responsibilities are clear and that external visions benefit the organization in the long term. Finally, we need to have meaningful conversations about the roles of human and non-human elements, the value of individual contributions, and the relationships between employees and organizations. None of these tasks will be easy, but they may create opportunities to address long-dismissed security problems, reinforce internal architectures, pay down technical debt, or increase transparency and accountability. AI will keep challenging organizations to continuously work on their resilience. Organizations that figure out how to effectively learn and adapt to those challenges will, just by gaining that ability, become more resilient.
The value of an organization depends on individual and institutional knowledge, and when that knowledge gets transferred from humans to AI components, those components must be appropriately protected. Resilience to change and disruption is developed by an organization, but it is also a function of its environment, and environments are changing dramatically with AI. We are only starting to understand the impacts, but even if all AI-related hopes and promises turn out to be disappointments, the challenges for security will not go away. Not all organizations will be affected in the same way, but those operating in high-risk domains, with mission-critical scenarios, or in highly competitive markets need to pay close attention. That includes applications in healthcare, finance, safety, and the military: basically, all the cases where there is the most to gain (or lose) with AI. These organizations must be resilient if they want to stay relevant, and they must be ready for AI if they want to remain resilient. Even if they don’t believe in AI.