AI Must Be Trusted to Be Successful
That is a necessity but also an opportunity for competitive advantage
Every technical revolution is slightly different, but in every case, we must face unknowns and learn about benefits, risks, and tradeoffs. The AI revolution is no different, even though AI technologies are. AI is not like traditional software. It has different requirements, operates on unstructured and sensitive data, is closely integrated with our decision workflows, supports new types of user interactions, and covers both digital and physical spaces. By design, AI solutions are intended for a broad range of applications, and even though AI technologies are still very new (as are their threats and risks), we can expect their impacts to be significant. Much more seems to be at stake with this revolution, and that is not a threat but rather a business promise. For that reason alone, we could assume that AI applications must be trusted, safe, and secure; however, there is more to it.
Trust in AI can differ in scope and nature, affecting individuals (e.g., trusting results from a chatbot), organizations (e.g., challenges to organizational resilience), or the global level (e.g., the impact of deepfakes and misinformation). Trust in the context of AI is broadly studied, encompassing both purely technical (e.g., emerging protocols for agent interactions) and non-technical requirements. It is a complex and dynamic phenomenon that can lead to situations where trust is placed in a model that is not trustworthy, or where a trustworthy model does not gain trust. But we can start with something more basic. It seems rational that we want to trust AI applications, especially in high-risk domains or mission-critical scenarios. We want to be sure that the technology performs as expected, without surprising consequences, with predictable long-term impacts, and with a limited risk of damaging the entire system along the way. The success of a technology depends on its perception and acceptance by users, so trust should be the top priority for most AI projects. But that is not necessarily the case, at least not yet.
Currently, we have a largely unregulated race driven by hopes and promises of hefty profits, fear of missing out, and a selective focus on shiny benefits while risks and costs are dismissed. That applies equally to big tech companies and startups looking for unique opportunities, and to end-users who often make difficult decisions about moving from AI experimentation to production surprisingly quickly. This would not be such a huge problem if the consequences of failure were also borne mostly by the risk-takers. Unfortunately, end-users typically have limited choices regarding AI, as new features are heavily advertised with a sense of urgency and often presented as a necessity for future relevance. Furthermore, humans are becoming subjects of AI analysis without explicit consent or even awareness, as more and more aspects of our lives are silently covered by decisions that are automated or augmented with AI. It is hard to discuss trust in new technologies when capabilities and applications are very new, business models are not yet fully baked, transparency and accountability are limited, and attempts at regulation are often presented as threats to innovation.
Fears, concerns, and hesitations about AI applications should be considered very real, and it would be unwise to ignore them.
AI capabilities, whether existing, expected, or wished-for, generate a lot of emotions, both positive and negative, as neither side is well understood. Since changes are implemented so quickly, there is usually not much time to investigate them properly. Potential users and customers seem to be expected to share the same level of excitement about AI as developers, investors, or early adopters. However, high trust in the tech sector doesn’t translate to trust in AI (76% vs. 50%), which has also dropped 15 points over the past five years (2024 Edelman Trust Barometer). Based on surveys from the Pew Research Center, experts are far more positive about AI than the public (56% vs. 17%), and the public is more likely to foresee personal harm from AI than benefit (43% vs. 24%). There are numerous misunderstandings and miscommunications (intentional or accidental) regarding the benefits, costs, and risks of AI. Yet, given the levels of investment in AI, broad adoption of these solutions is expected to occur soon. In this context, fears, concerns, and hesitations about AI applications should be considered very real, and it would be unwise to ignore them.
There are multiple issues that need to be addressed. Probably the biggest one is the training of AI models and, specifically, the value of and control over the data used in that process. Initially, data were collected from the Internet, not necessarily with full attention to copyright laws. However, much more data is now needed. The policies governing the use of individual data for training AI are not always clear (especially to individual users), and some companies have officially begun to repurpose data collected over the years. Due to the computational costs of inference, advanced AI capabilities usually require processing in the cloud, where data may be insufficiently protected. Moving forward, the vision of personalized AI assistants comes with the requirement of full access to users’ devices, fundamentally changing patterns around data privacy. We are no longer talking about queries, isolated data points, or even records of our social interactions, but rather data covering complete user activity (no longer limited to the digital realm). Such data enables the creation of unprecedentedly precise behavioral profiles or digital twins, which can be used to benefit their owners (when clearly defined), but not necessarily only for that purpose if adequate guardrails are not in place.
Unfortunately, we don’t have good reasons to assume that rules and regulations will be sufficient anytime soon. Existing privacy protections, already very limited, are being challenged by the new realities of AI. Due to massive investments in AI development and infrastructure, major players need more users and more data. As a result, practices around AI applications often come close to UX dark patterns. In practice, that can mean enabling AI features by default, skipping user consent for uploading data to the cloud, changing functionality (e.g., removing local processing in Amazon Echo), or quietly updating terms of service to allow data to be used for AI training or shared with third parties for that purpose. Such practices only increase concerns about control over personal data, devices, and experiences, as well as the long-term consequences of working with AI. AI models are expected to continuously evolve, which leads to rational worries about the extraction of individual skills and competencies by technology providers, employers, or other entities due to a lack of adequate protection (e.g., individual style is not covered by copyright).
The push for broad adoption of AI may be perceived as rushed and premature, not least because of challenges in demonstrating its practical value. AI applications still struggle with quality, reliability, and hallucinations. Most chatbots, including paid premium ones, continue to present inaccurate answers with alarming confidence and struggle to retrieve and cite news correctly. In the organizational context, 30% of generative AI projects are expected to be scrapped by the end of 2025 due to a lack of return on investment. On the other hand, one area where AI seems to be doing exceptionally well is misuse and abuse, as considerable evidence is emerging of a substantial acceleration in AI-enabled crime. That encompasses traditional attacks that can be scaled or enhanced with AI (e.g., AI agents outperform humans in spear-phishing experiments), as well as new types of attacks enabled by AI (e.g., a deepfake video call with a chief financial officer). Each such incident shapes the public’s perception of the pros and cons of AI, an effect only reinforced by growing exposure to AI-enabled misinformation and deepfakes, which erode public trust in media and communication systems.
The value of new technology must be sold, concerns need to be addressed, accountabilities accepted, and integrations implemented with transparency.
Different requirements can be identified for a new technology to be considered trusted, such as effectiveness, competence, accountability, and transparency. Since trust is a universal and general concept, agreeing even on basic definitions of its contributing characteristics can be challenging. In this blog, we focus on requirements related to security, privacy, and, occasionally, safety (as it is closely related from an execution point of view). Security, without a doubt, is one of the foundational requirements for trust; if an AI system is not secure, it cannot be safe, fair, ethical, accountable, reliable, accurate, or practical. It would also be difficult to discuss human agency and oversight seriously if an AI application can be controlled in any way by an unauthorized entity. Trust is essential for any non-trivial AI application and should be at the center of attention for technology providers, organizations adopting new technologies, and essentially anyone whose reliance on AI might result in severe consequences for others. To establish trust, the value of new technology must be effectively “sold”, concerns need to be acknowledged and addressed, accountabilities accepted, and integrations implemented with transparency.
The value of new technology must be sold to users. Technology must deliver clear, indisputable, and measurable improvements over existing solutions in terms of quality, cost, performance, or any other relevant metric. All benefits, limitations, and constraints should be fully explained in a manner that is useful and accessible to both intended and affected users. In every situation, stakeholders should be empowered to make intentional and informed decisions about whether the benefits of AI justify the risks and costs.
Users’ concerns must be acknowledged and addressed. It should be clear to users how their data are protected, how trustworthy the results are, and what mitigations are required in specific contexts of use. These efforts must explicitly address all unknowns, existing vulnerabilities, gaps in controls, and any areas where further research is needed or still in progress. The process needs to be continuous and involve proactive research into risks and the possible consequences of new scenarios.
Accountabilities and responsibilities must be clearly defined. Regardless of regulations (or the lack thereof), it should be clear how responsibilities are shared and who would be accountable for different failures. That covers determining business models and rules of engagement, also for the long term, to avoid the risk of unmanageable overdependencies. Finally, we need accountability for preventing and responding to harm caused with AI, whether intentionally or through negligence, including any unintended consequences.
Integrations with AI must be implemented with complete transparency. Critical information regarding any aspect of interaction with AI technologies should always be fully available to users. Users and subjects must provide consent and maintain agency and control over their data, experiences, levels of engagement, and dependencies. Navigating the complexity of relationships between users, agents, organizations, and providers will not be possible without a foundation of transparency.
Trust is always contextual. We can have complete trust in a friend to pick us up from the airport, but not necessarily to perform a complex medical diagnosis (unless that’s their actual profession). Decisions regarding trust will be even trickier with AI, as we rarely have sufficient insight and detail to determine whether a particular model or solution is truly suitable for a specific scenario with potentially severe consequences. That applies to all types of user-machine configurations: humans who directly and intentionally interact with AI, humans who are expected to work closely with AI (e.g., tools used by employees), and humans who are subjects affected by decisions automated or augmented with AI (including decisions about their healthcare). In that context, we should hope that users will start asking many more questions.
Trust in AI may start as a business necessity, but eventually, it will become an opportunity for competitive advantage.
The AI race is not slowing down. Companies are investing heavily in expanding their computing infrastructure, improving model capabilities, and betting on the most innovative applications. By 2030, data centers worldwide are projected to require $6.7 trillion to keep pace with the demand for computing power. Meta, Amazon, Alphabet, and Microsoft alone are set to spend $320 billion in 2025. But in the end, it may all come down to trust. Efforts aimed at regulating the AI space are moving very slowly, even though common-sense, evolving rules would benefit the industry. We should not expect a revolution in that area anytime soon, especially given initiatives like the proposed 10-year moratorium on state AI regulations (which would also prevent enforcement of existing laws!). The free-market approach is supposed to be the answer, helping to drive innovation, offer consumer choice, and provide self-correction mechanisms. However, we are talking about situations where we don’t fully understand the benefits, risks, and long-term consequences, have limited transparency and accountability, and must deal with an asymmetry in which risk-takers usually benefit from success while the costs of failure are paid by users of AI products or by the individuals and groups subjected to them.
In practice, that means we are most likely to learn from failures. These are unavoidable with the early adoption of any technology, but the cost of the lessons can vary depending on the chosen strategies. The more impactful the applications, the more severe the consequences of failure, and AI is rapidly moving into high-risk domains and mission-critical scenarios. Currently, users (both individuals and organizations) are primarily focused on hopes and benefits; however, with an increasing number of public incidents, they will start paying closer attention to risks and costs. As more is at stake in their own applications, users will have more serious questions and requests for AI providers and the stakeholders driving AI adoption. After all, in the free-market approach, it is customer demand that is supposed to drive positive change. With continued investment, as the offerings of major AI providers become comparable in terms of infrastructure and model capabilities (at least from the perspective of regular end users), previously small differentiating factors will gain much more relevance. That includes the ability to provide satisfactory answers to customers’ questions and requests regarding trust. Trust in AI may start as a business necessity, but eventually, it will become an opportunity for competitive advantage.
Building trust is a long game, especially in a business context. Trust must be earned and proven; it requires time, consistency, and commitment, and it is rarely something that can be added quickly after a commercial milestone is reached. For providers of AI technologies and the organizations adopting them, investing in trust should yield significant returns, but only if the trust-focused strategy is effectively executed (not just a marketing slogan). That requires analyzing what must, should, and may be done to make platforms and solutions trusted, how to convince users of the value proposition, how to prioritize their concerns, and how to ensure that their priorities are taken into consideration. Given the scope of investments in AI, the resources needed for trust-related efforts are minimal, but they should result in practical improvements, such as readiness for upcoming regulations or the ability to deal with unexpected situations. Both will be crucial in the end, as general and domain-specific regulations will inevitably arrive sooner or later (once the lessons become too expensive), and it helps to have meaningful proof of effort whenever an AI-related incident or emergency occurs. In any context where harm could be significant, trust will play a critical role in the success or failure of projects, solutions, or companies. A lack of trust could lead to increased user resistance, a slowdown in technology adoption, or even the complete rejection of AI in specific scenarios or domains.
In broad perception, AI is both very shiny and very scary. Rightfully so. Due to the promised and expected impact of AI, these new applications need to be trusted; unfortunately, that is rarely possible at the moment. There are numerous discussions and studies on trust, but the topic doesn’t seem to be a priority for the industry, at least not yet. Concerns and hesitations surrounding the speed of AI adoption, our current gaps in understanding and mitigating threats, and the long-term consequences must not be ignored or dismissed. It is reasonable to assume that as more users and businesses onboard and depend on AI, the need for trust will become increasingly critical. At some point, costs and risks will receive as much (or more) attention as benefits, and even modest early investments in that space should ultimately prove invaluable. As efforts to build AI platforms, improve models, and invent vertical applications continue to grow, more attention should be given to establishing trust in these solutions and giving users reasons to remain excited while becoming less concerned.