AI Security and Risks vs Benefits
The security and safety of AI begin with questions about its value
It’s been a few years since we really started to chase the dream of AI. We’ve seen significant technical progress, and even though new capabilities are often still searching for problems to solve, they have slowly but continuously started to change how we work with information. AI technologies have the potential to revolutionize domains such as healthcare, but new application scenarios always come with risks that impact individuals, groups, and organizations. With new opportunities, we tend to focus on hopes and promises, even when they are still distant, and we too easily accept risks and unknowns as the necessary price of innovation. Decisions about implementing AI in non-trivial scenarios must be informed, transparent, and accountable. We need to determine whether the risks, and the costs of the required controls, are acceptable. Before jumping to risk assessments or threat modeling, we need to ask what we realistically expect from AI and whether we understand what needs to happen to get there.
New threats are unavoidable with any new technology or application scenario. New capabilities lead to new vulnerabilities, and adversaries will find ways to exploit them. The bigger the disruptive potential of a new technology, the bigger the associated risks. In the case of AI, we don’t yet fully understand those risks, but we already know they are different and significant. AI applications are more complex than traditional software: they have different attack surfaces, are more deeply integrated with decision workflows, have convoluted supply chains, operate on highly sensitive data, and enable brand-new user experiences. They can be abused and misused in new ways, further changing the security landscape. The security of AI is also different from classical cybersecurity, with new types of vulnerabilities and the need for new controls and practices. The mere fact of implementing AI applications exposes individuals and organizations to new kinds of threats. Security efforts must be at the center of any non-trivial AI implementation to ensure that threats are adequately mitigated, risks are acceptable, and the benefits are already worth the risks.
The risk analysis starts with understanding the expected and realistic value and benefits of AI technology applied in a specific Use Context.
Practical risk management is a complex effort that covers risk identification, assessment, mitigation, and monitoring. We need to understand the inherent risks of specific applications, as well as the residual risks measured after security controls are applied. Implementing these controls and practices always comes with significant costs (with unknown risks, the costs should tend to infinity). In some cases, even the residual risks may be unacceptable; in others, the costs of the required controls are too high. Successful implementation of AI requires balancing benefits and risks, as we need to confirm that the practical gains from applying AI justify these costs. The risk analysis starts with understanding the expected and realistic value and benefits of AI technology applied in a specific Use Context. In this process, we cannot rely on high hopes, wishful thinking, theoretical opportunities, or unchallenged assumptions. Instead, we need to start with questions about realistic goals, success metrics, associated requirements, and how the risks will be managed. What do we want to solve with AI? How will we know that we did? What needs to happen for our goals to be achieved? Are we ready for the consequences of a failure? Or of a success?
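To make that balancing act a bit more concrete, here is a minimal sketch (in Python, with entirely invented names and numbers) of how expected benefits, residual risks, and control costs for a single Use Context might be compared; it illustrates the reasoning, not an actual risk methodology.

```python
# A toy sketch (not a formal risk methodology): comparing expected benefits
# against residual risk and the cost of controls for a single AI Use Context.
# All names and numbers below are illustrative assumptions, not real data.
from dataclasses import dataclass


@dataclass
class UseContextAssessment:
    name: str
    expected_benefit: float  # estimated annual value, in arbitrary units
    inherent_risk: float     # expected annual loss before any controls
    residual_risk: float     # expected annual loss after controls are applied
    control_cost: float      # cost of implementing and operating the controls

    def net_value(self) -> float:
        # Benefits must cover both the residual risk and the cost of controls.
        return self.expected_benefit - self.residual_risk - self.control_cost

    def is_acceptable(self, risk_tolerance: float) -> bool:
        # A positive net value is not enough if the residual risk exceeds
        # what the organization is willing to absorb.
        return self.net_value() > 0 and self.residual_risk <= risk_tolerance


# Hypothetical example: an internal document-summarization assistant.
assessment = UseContextAssessment(
    name="internal document summarization",
    expected_benefit=120.0,
    inherent_risk=300.0,
    residual_risk=40.0,
    control_cost=50.0,
)
print(assessment.net_value())          # 30.0
print(assessment.is_acceptable(50.0))  # True
```

Even such a naive model makes the dependency visible: if the residual risk or the cost of controls is unknown, the net value of the application is unknown too.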
The benefits of AI are more likely to be overestimated, while unknown risks and threats are easier to ignore or dismiss.
Risk management is challenging because it is all about uncertainties. The task becomes more complicated when we deal with technologies still in the oven, unverified capabilities, and bold marketing promises. And there is even more fun when we are surrounded by a hyped race, a perceived gold rush, and corporate-level FOMO. Innovation is always associated with risks, but even though the practical value of AI is not necessarily obvious yet, and AI's Capability, Urgency, and Inevitability are often exaggerated, we seem to have a fundamental imbalance in priorities. We focus primarily on benefits, on immediately crossing the boundary between experiment and production, or on pushing new capabilities to customers as quickly as possible (before the vision has even been effectively sold). In dynamic environments full of opportunities and competition, any restriction on moving fast may be perceived as hindering innovation and as a competitive risk. Unfortunately, the uncertain benefits of AI are more likely to be overestimated, while unknown risks and threats are easier to ignore or dismiss. In security, unknown variables are potentially critical and must be investigated. We want to know about all possible gaps and vulnerabilities, even, or especially, if we have no fixes. The worst-case scenario is when we get too comfortable with the unknown and unquestioningly accept risks we don't understand.
Eventually, we will start to include the long-term consequences of failures in the costs of AI applications.
There is much to be gained in innovation races, especially with rewards like Artificial General Intelligence (AGI) as the target. In the short term, ignoring or dismissing security and safety concerns might seem like a rational shortcut, but that is rarely a good strategy. Initial results from AI can be awe-inspiring and arrive quickly, but it usually takes more time for gaps, costs, and unintended consequences to fully manifest. At some point, failures and incidents will get much more attention, as a lack of security can destroy potentially promising projects and quickly become a competitive disadvantage. It may not be about getting to the finish line first but about getting there sufficiently early without an unmanageable disaster. Currently, when referring to the costs of AI projects, we usually think about the necessary investments in infrastructure and development. Eventually, we will start to include the long-term consequences of failures in the costs of AI applications. That should be the moment when, hopefully, we will also recognize security and our ability to manage risks as critical foundations for the success of AI applications. AI applications are often connected with quick gains and delayed costs, so doing the rational thing should become an opportunity for a win.
What can we do now?
We should slow down. Make haste slowly. Take a few moments to start conversations about both risks AND benefits. And do not allow ourselves to feel too comfortable with the unknown variables. That includes applying good engineering practices, using new tools for AI-specific problems, and following general principles of Transparency and Accountability when implementing AI projects.
Some more practical rules can also be handy:
The goals of AI projects must be SMART (Specific, Measurable, Achievable, Relevant, and Time-Bound). AI applications require a clear vision and practical motivations that need to be successfully sold to affected individuals ("others are doing it" will not work). We need to manage users’ expectations around hopes, promises, and marketing narratives for AI applications. That includes defining clear goals, deliverables, responsibilities, and requirements for technology providers/enablers. A big part of this work is continuous documentation of practical Limitations, Constraints, and Guidelines, with special emphasis on all the gaps and on insufficient or missing controls.
AI-focused tools and frameworks are available and can provide practical value. The NIST AI Risk Management Framework is a good place to start. The MAP function defined there covers understanding the context, checking related assumptions, and recognizing when systems are beneficial, not functional, or operating outside their intended context. We should assume that AI capabilities, risks, and controls will continue to evolve and surprise us for some time. We need an up-to-date big picture covering our local threat landscape and measured risk posture, and helping with the prioritization of efforts. Such a structure can be especially advantageous when combined with continuous Threat Modeling of AI Use Contexts.
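As a rough illustration of how MAP-style questions could be tracked per Use Context, here is a small sketch; the NIST AI RMF describes the MAP function at a much higher level, so the fields, the example Use Context, and the questions below are my own illustrative assumptions.

```python
# A sketch of tracking MAP-style questions for a single AI Use Context.
# The NIST AI RMF defines the MAP function at a much higher level; the fields,
# example values, and questions below are illustrative assumptions only.
from dataclasses import dataclass, field


@dataclass
class UseContextMap:
    use_context: str
    intended_purpose: str
    assumptions: list[str] = field(default_factory=list)
    out_of_scope_uses: list[str] = field(default_factory=list)
    open_questions: list[str] = field(default_factory=list)

    def needs_more_mapping(self) -> bool:
        # Open questions and unchallenged assumptions should block sign-off.
        return bool(self.open_questions)


triage_bot = UseContextMap(
    use_context="customer support ticket triage",
    intended_purpose="route incoming tickets to the right team",
    assumptions=["ticket text contains no regulated personal data"],
    out_of_scope_uses=["automated replies sent directly to customers"],
    open_questions=["how is routing accuracy measured in production?"],
)
print(triage_bot.needs_more_mapping())  # True -> not ready for risk sign-off
```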
Untested AI capabilities must not be rushed from Experiments to Production. We need to look holistically at Secure Artificial Intelligence Lifecycles, but a critical decision point is the transition from prototyping to production. Experiments should be conducted in a dedicated environment, operating on selected data, without integration with critical workflows. Even with positive results, going beyond a proof of concept should be treated as releasing new software, with all the requirements, guardrails, monitoring, and incident response in place. In addition to technical aspects, moving AI applications to production also requires business, organizational, and cultural readiness.
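One way to keep that transition honest is an explicit production-readiness gate. The sketch below is only an illustration; the checklist items are assumptions and would need to reflect your own release requirements.

```python
# A sketch of a simple "experiment to production" gate. The checklist items
# are illustrative assumptions, not a complete or authoritative list.
PRODUCTION_READINESS = {
    "success_metrics_defined_and_met": True,
    "guardrails_for_inputs_and_outputs": True,
    "monitoring_and_alerting_in_place": False,
    "incident_response_plan_tested": False,
    "rollback_path_documented": True,
    "business_and_operational_owners_assigned": True,
}


def ready_for_production(checklist: dict) -> bool:
    """A prototype moves forward only when every gate item is satisfied."""
    missing = [item for item, done in checklist.items() if not done]
    if missing:
        print("Blocked on:", ", ".join(missing))
        return False
    return True


ready_for_production(PRODUCTION_READINESS)  # prints the blockers, returns False
```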
Last, but not least, decisions about AI projects must be Informed, Transparent, and Accountable. Decisions about integrating AI in a particular scenario should be considered critical from a security POV. They must be informed, based on technical and business analysis, and supported by metrics and the required experimentation. They must be transparent, with clarity about all inputs and outputs, and complete documentation of all assumptions, identified gaps, and items that required research. Mistakes will be made, but with proper track records, we can learn from them. Finally, the decisions must be accountable, with clear owners and responsibilities, and explicit sign-off when a lot is at stake.
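Such decisions can be captured in lightweight decision records. The sketch below is illustrative; the field names and the example decision are assumptions, but the idea is that inputs, assumptions, gaps, owners, and sign-off are written down before anything ships.

```python
# A sketch of a lightweight decision record for an AI integration decision.
# Field names and the example content are illustrative assumptions; the point
# is that inputs, assumptions, gaps, owners, and sign-off are written down.
from datetime import date

decision_record = {
    "decision": "integrate LLM-based summarization into the support portal",
    "date": date.today().isoformat(),
    "inputs": ["benchmark results on an internal ticket sample", "privacy review"],
    "assumptions": ["summaries are always reviewed by an agent before sending"],
    "identified_gaps": ["no prompt-injection testing performed yet"],
    "open_research_items": ["hallucination rate on long ticket threads"],
    "accountable_owner": "support-platform team lead",
    "sign_off": None,  # stays None until the owner explicitly approves
}


def is_accountable(record: dict) -> bool:
    """A decision is accountable only with a named owner and explicit sign-off."""
    return bool(record.get("accountable_owner")) and record.get("sign_off") is not None


print(is_accountable(decision_record))  # False -> not yet signed off
```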
With AI, everything is still new; there are many unknowns, and it is difficult to make decisions, especially under pressure. It's tempting to switch to a temporary decision process until we know better and, as a result, focus too much on the benefits (they're shiny), ignore the risks (they're uncomfortable), and accept the unknowns (they seem irrelevant). Complex decisions require more attention, research, and diligence. We need to avoid situations where individuals could benefit from the success of a risky call but wouldn't have to face the consequences of failures or shortcuts. Similarly, we don't want decisions made by a committee where the actual accountability is diluted and the quality of the process depends solely on concerned individuals willing to step in. There is also the related problem of companies and organizations making risky bets on AI for profit, with the costs and impacts falling on other groups or individuals (e.g., customers or the owners of data), but these challenges are unlikely to be solvable without a mature market and usable regulations in place (which currently seem more and more distant).
We cannot build safe and secure systems without understanding the risks AND benefits of new technologies. If we don’t understand the benefits, we cannot determine whether the risks are acceptable or the costs of controls justified. We can end up with decision processes that are uninformed, obfuscated, or unaccountable. Progress and innovation can be seductive, but we need to manage our expectations and base our calls on things we can measure and validate, while documenting everything that is still unknown. We don’t want to find ourselves in a situation where we have exposed data, users, infrastructure, or software to unprecedented and unjustified risks, and the promised benefits were disappointing or never delivered (and we are not sure why that happened!). Especially with applications in high-risk domains or mission-critical scenarios, we must keep asking questions about the value of new technologies in the present or near-future tense: Are the benefits ALREADY worth it, given the gaps in controls? Are the benefits STILL worth it, given new types of threats? After all, it seems that in the context of AI security, misunderstood, unrealistic, or misleading benefits can actually be a problem.
FOMO, in particular, is a strong driver for companies that want to be seen as leaders, especially publicly listed ones, since that perception can move their share prices in a positive direction.