Digital Trust: Focus on the Forest Rather Than the Trees

Author: Jon Brandt, Director, Professional Practices and Innovation, ISACA
Date Published: 10 May 2023

Digital trust has never been more important than it is today. Recently, the godfather of AI resigned from his position at Google. His reasons were reportedly not singular, but one explanation has captured headlines: the desire to warn freely of AI dangers. He is not the first to raise alarms; others recently called for a pause in further development. The problem with all of this is that we cannot put the technology back in the box. Even if a particular company or country wanted to curtail its AI development, doing so is an unreasonable proposition given the AI arms race among prominent world superpowers. Unfortunately, far too many users are starstruck by AI capabilities and are increasingly using these tools in hopes of decreasing the burden of carrying out responsibilities at home or at work. The latter is extremely problematic, as it puts enterprises at substantial risk.

Digital trust is a concept without a globally accepted, uniform definition. ISACA has defined it as "the confidence in the integrity of the relationships, interactions and transactions among providers and consumers within an associated digital ecosystem. This includes the ability of people, organizations, processes, information and technology to create and maintain a trustworthy digital world." Some may challenge ISACA's lack of specificity, but the variants I have seen are too focused on technological aspects and therefore diminish the complex integration between technology and any business function. The problem with other definitions, and with the free market, is that we end up overlooking the greatest risk: human fallibility. Bias, coupled with a slew of documented ethical issues, should rightfully result in reasonable pessimism about the fairness and transparency of AI, especially when algorithms have already negatively impacted lives.

Not surprisingly, the term "digital trust" has already been hijacked by solution providers. However, no single product, or suite of products, provides digital trust. This is eerily similar to the Zero Trust (ZT) movement, in which far too many solution providers claim to offer ZT products when, in fact, products can only help fulfill components of an overall ZT strategy.

To be clear, digital trust is not just about technology. Behind every service, product and component is human involvement and error. Public alarms now being sounded by tech giants over AI basically amount to responsible parties telling on themselves for insufficient oversight and controls. What we have now is a major mess that not only complicates matters for businesses and consumers but will also heighten geopolitical tensions.

Privacy and fairness remain core to any conversation involving AI, and we now have public awareness of emerging technology that can influence how countries conduct military operations. A recent demonstration by one defense contractor is frightening. No technology is immune to bugs, breaches and weaponization. The lack of transparency in how technology is developed, operated and protected is concerning and, I would go so far as to say, reckless.

Course Correction

The US Navy SEAL mantra, "slow is smooth, smooth is fast," is particularly relevant today. ChatGPT has been garnering a lot of attention, but we must remember that it is just one product; there are others, and there will be more. Recognizing that generative AI is here to stay, enterprises must accept that employees are likely already using these tools, an expansion of shadow IT. As such, business leaders should assume employees have already freely uploaded IP or sensitive information to training models. In many cases, the information users give generative AI tools will be used to shape future outputs, which creates challenges involving copyright infringement. Accepting these realities is ample justification for an ad hoc risk assessment. Ideally, every enterprise already has controls governing the handling of corporate data and the use of unauthorized software and devices. While temporary bans are not unusual, administrative controls (e.g., policies, security education and awareness training) by themselves will not protect IP or other sensitive data.
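To make that last point concrete, a technical control might sit between users and external generative AI services and screen outbound text before it leaves the enterprise. The following is a minimal Python sketch, not a vetted data loss prevention (DLP) implementation; the patterns and the screen_prompt function are illustrative assumptions, and a real deployment would draw its rules from an enterprise data classification program and enforce them at a proxy or commercial DLP product.

```python
import re

# Hypothetical markers an enterprise might treat as sensitive; real
# deployments would derive these from a data classification program.
SENSITIVE_PATTERNS = [
    re.compile(r"\b(confidential|internal use only|trade secret)\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US Social Security number format
    re.compile(r"\b[A-Z]{2}\d{6,10}\b"),    # example: internal document IDs
]

def screen_prompt(prompt: str) -> list[str]:
    """Return any sensitive markers found in text bound for an external AI tool."""
    findings: list[str] = []
    for pattern in SENSITIVE_PATTERNS:
        findings.extend(match.group(0) for match in pattern.finditer(prompt))
    return findings

if __name__ == "__main__":
    prompt = "Summarize this internal use only memo before the board meeting."
    findings = screen_prompt(prompt)
    if findings:
        print(f"Blocked: prompt contains sensitive markers: {findings}")
    else:
        print("Prompt passed screening.")
```

Pattern matching of this kind is easily evaded, which reinforces the broader point: layered administrative and technical controls, not any single measure, are what protect sensitive data.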

To learn more about digital trust, check out related resources on ISACA’s website and join us at Digital Trust World Virtual.

About the author:

Jon Brandt, CISM, CDPSE, CCISO, CISSP, PMP, is director of professional practices and innovation in ISACA's Content Development and Services department. In this role, he leads audit, emerging technology, GRC, IT, information security and privacy thought leadership initiatives relevant to ISACA's constituents. He serves ISACA departments as a subject matter expert on information security, influences innovative workforce readiness solutions and leads the development of performance assessments. Brandt is a highly accomplished US Navy veteran with 30 years of experience spanning multidisciplinary security, cyberoperations and technical workforce development. His formal education includes an MSED in Workforce Education and Development from Southern Illinois University and a BS in Cybersecurity from Champlain College.