Responsible use of Trustworthy AI
We cannot control AI development, but we must control how AI is used
Security and safety are critical requirements for AI applications, and Trustworthy and Responsible are commonly used as required attributes of such solutions. These attributes are not the same: one applies to the development of AI products and the other to their use in particular scenarios – stages over which we have different levels of control. This is an opinion post about these key terms in the context of AI security.
I first heard the expression Responsible use of Trustworthy AI while listening to discussions during one of the workshops on building the NIST AI Risk Management Framework. Before that, I had used the adjectives Responsible and Trustworthy almost interchangeably, without paying much attention to the difference. Then, however, I realized that this expression could be handy for AI security, as it captures two key challenges, and each attribute refers to one of them:
Trustworthy refers to an AI application as a product and its properties.
Responsible refers to the use of that product in a particular context.
That disconnection between a product and its use is not typical for most traditional software applications, which are usually developed for specific usage scenarios (even when agile practices are applied). From an AI security point of view, both attributes are necessary, especially in high-risk domains or mission-critical scenarios. However, a separate focus on the product and on its usage can be instrumental as we learn about new applications, interactions, expectations, and consequences of AI implementations.
Recently, we have witnessed an AI arms race that includes the most prominent tech corporations as well as new companies developing AI technologies or trying to take advantage of them in practical applications. There are hopes and fears that new AI applications could give a significant competitive advantage or, in some cases, redefine the foundations of established businesses (with search as the often-discussed example). As a result, AI has become the hottest startup buzzword, replacing blockchains and meta/omni/multiverses, and we see a rush to deploy new technologies to production, often when they are not ready for it. In the background, there are voices of concern about existential threats related to AI development, but also opinions about the limitations of new capabilities or that the next AI winter is unavoidable. We don't know where AI technologies are going, but the intensity of that AI arms race is already changing our reality beyond technology. AI applications already enable the automation of many cognitive tasks and real conversational interactions. They are also changing users' perceptions and expectations about technology and exposing flaws and biases in our decision processes and in the tasks that are getting automated. We will likely be disappointed with many ideas for applications of AI, but that will not prevent us from trying most of them.
We cannot really control whether AI solutions are researched and developed, but we should be able to control how they are legitimately used.
Both the development and application of AI technologies have accelerated dramatically in recent years, and that is unlikely to change unless major technical obstacles are hit or critical incidents happen. Compared to other types of practical research, building AI applications requires relatively modest computing resources and can be based entirely on public research and open-source software. AI applications are getting unprecedented global and mass attention, and there is much to gain or lose for individuals, companies, and, in a bigger scheme, also for states and governments. Concerns about unclear social impact or bias exposed by AI applications are unlikely to affect development work until practical regulations are in place. And since the race participants are not limited to academic institutions, we cannot easily apply approaches that worked before, for example, with genetic research. There might be excellent reasons behind the idea of stopping or slowing down AI research, but such proposals are unrealistic and would be impossible to implement in practice. What is worse, attempts to restrict work on AI might have very undesirable consequences. The development of AI technologies would not stop but would be redefined, hidden in other projects, or moved underground or to locations where regulations are not followed. It is reasonable to assume we cannot really control whether AI solutions are researched and developed, but we should be able to control how they are legitimately used.
The development of AI technologies and their use in real-life scenarios are closely connected but differ significantly in how much control we have over them. It makes sense to approach them separately, and this is where the attributes of Trustworthiness and Responsibility can be helpful in the security context.
TRUSTWORTHY applies to an AI SYSTEM, its development process, quality, and unique properties. This is an attribute of AI as a software product: how it was designed, built, tested, deployed, monitored, and managed. It also includes elements unique to AI applications, like the data used in training a model or the general principles followed along the way. The broader context is also relevant – the owners of the system, its defined objectives, openness, transparency, or the involvement of actual users. In the end, the Trustworthiness of an AI system depends on its ability to meet all requirements and expectations for specific applications (not only security and safety).
RESPONSIBLE applies to the USE of an AI SYSTEM in a particular context and all the associated requirements. This attribute starts with the USE CONTEXT and questions whether AI is suitable for the scenario, what conditions must be met, and what regulations would apply. We need a solid understanding of how AI would be integrated with the environment, of the intended and actual use, and of who would or could be affected. That should lead us to a definition of the requirements for the AI APPLICATION, the MODEL in scope, and the underlying technology PLATFORM. Analyzing how these requirements are (or could be) met with controls and procedures will allow us to verify whether the AI application is acceptable for a specific scenario or whether additional changes are needed.
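To make that separation more concrete, here is a minimal sketch in Python of how the requirements derived from a USE CONTEXT could be recorded and checked against the controls intended to satisfy them. All names, fields, and the scenario are hypothetical illustrations of the idea, not a prescribed structure or framework:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    # A single requirement derived from the use context, e.g. from a regulation or internal policy.
    description: str
    satisfied_by: str | None = None  # control or procedure that meets it, if any

@dataclass
class UseContext:
    # Where and how the AI system would actually be used.
    scenario: str
    high_risk: bool
    affected_parties: list[str]
    applicable_regulations: list[str]
    requirements: list[Requirement] = field(default_factory=list)

def responsible_use_gaps(context: UseContext) -> list[Requirement]:
    # Return the requirements not yet covered by any control or procedure.
    return [r for r in context.requirements if r.satisfied_by is None]

# Hypothetical high-risk scenario: an assistant that screens resumes.
context = UseContext(
    scenario="resume screening assistant",
    high_risk=True,
    affected_parties=["job applicants", "recruiters"],
    applicable_regulations=["employment and anti-discrimination rules"],
    requirements=[
        Requirement("human review of every automated rejection", satisfied_by="review workflow"),
        Requirement("bias evaluation on representative data"),  # no control yet -> a gap
    ],
)

gaps = responsible_use_gaps(context)
print(f"Acceptable for this scenario: {not gaps}")
for r in gaps:
    print(f"Unmet requirement: {r.description}")
```

The point is not the code itself but the shape of the exercise: the use context, not the model, is what generates the requirements, and every requirement needs an identified control before the application can be considered acceptable for that scenario.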
As in the early days of cybersecurity, we need to raise awareness of new problems and ensure that proper attention is paid to AI projects.
In most scenarios, harm is not caused by the mere existence of an AI application running in a lab or a proof of concept operating on test data. The associated risks materialize when that solution gets connected to real-life data flows and is allowed to automate or augment critical decisions and tasks. Release from development to production is a critical step in any software lifecycle. In the case of AI applications, that step is even more complex, as we are looking not only at the properties of the software but also at its actual USE CONTEXT, which could be different than assumed in earlier stages. That creates unique opportunities for implementing rules for legitimate applications and getting control over what AI solutions are used in real scenarios and how. Whether a system counts as TRUSTWORTHY depends on its ability to meet the requirements for its RESPONSIBLE use; gaps may result in a need for limitations, constraints, or guidelines. Eventually, the proper frameworks will be provided by regulation, especially in high-risk domains or mission-critical scenarios, and by industry standards and best practices. Until these elements are mature, we really need transparency and accountability in making decisions about using AI solutions on a case-by-case basis. Also, as in the early days of cybersecurity, we need to work more on awareness of new problems and ensure that proper attention is paid to AI projects despite their perceived urgency and necessity.
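One way to read that is to treat the release step as an explicit gate. The sketch below (again in Python, with hypothetical names and properties) only illustrates the decision logic described above: evidence of Trustworthy properties is compared with the requirements of the intended use, and gaps translate either into constraints on use or into keeping the system out of production:

```python
from enum import Enum

class ReleaseDecision(Enum):
    APPROVE = "approve for production use"
    APPROVE_WITH_CONSTRAINTS = "approve with limitations or guidelines"
    BLOCK = "keep out of production until gaps are closed"

def release_gate(evidence: set[str], mandatory: set[str], recommended: set[str]) -> ReleaseDecision:
    # Compare what the system demonstrably provides (evidence of Trustworthy properties)
    # with what the use context demands (mandatory) or suggests (recommended).
    if not mandatory <= evidence:
        return ReleaseDecision.BLOCK
    if not recommended <= evidence:
        return ReleaseDecision.APPROVE_WITH_CONSTRAINTS
    return ReleaseDecision.APPROVE

# Hypothetical example: one recommended property is still missing.
decision = release_gate(
    evidence={"human oversight", "audit logging"},
    mandatory={"human oversight"},
    recommended={"audit logging", "bias evaluation"},
)
print(decision.value)  # -> approve with limitations or guidelines
```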
AI technologies have the potential to improve our world but also to cause harm and damage if we don't fully understand the benefits and risks of AI applications. We need Trustworthy AI systems that are used in a Responsible way, meeting all the requirements identified for specific scenarios. AI research and development cannot be realistically controlled, but AI applications in real-life scenarios should be. We need Responsible Innovation in AI that balances the promise of great opportunities and competitive fever with the complexity of new security risks and threats (which can be more significant than the benefits). Transparency, accountability, and awareness are necessary and instrumental, but we will also need standards, procedures, patterns, metrics, and tools. Functional regulations should eventually play their role, after many conversations and the mistakes we will likely make. Still, we need to make sure we pay proper attention to aligning technologies, policies, and cultures, both within organizations and in a broader sense. The success of a new technology depends on its acceptance by users, and in public perception AI is both very shiny and very scary. To address that, legitimate AI applications need to be Trustworthy and used Responsibly. Illegitimate or malicious use of AI is obviously a completely different story.
I wonder if the AI space is going to take 10 to 14 years before these practices are the norm, similar to how companies didn't take security seriously until around 2015. That's a long time given the pace at which AI is evolving and how quickly companies are picking it up.