AI risk framework adoption lags as productivity gains accelerate

By Craig Nash
Tech writer at All Things Geek. Covers artificial intelligence, semiconductors, and computing hardware.
9 Min Read

The world’s biggest firms are racing to adopt artificial intelligence and reaping real productivity benefits—86% report measurable improvements—yet nearly half lack a critical AI risk framework, creating a dangerous disconnect between speed and safety. As AI adoption accelerates at a projected 36.6% annual growth rate through 2030, this governance gap exposes organizations to unchecked technical, ethical, and legal risks.

Key Takeaways

  • 86% of respondents report improved productivity from AI adoption across major firms.
  • 43% of the world’s biggest firms lack a critical AI risk framework.
  • NIST AI Risk Management Framework provides four core functions: Govern, Map, Measure, and Manage.
  • KPMG’s responsible AI principles address fairness, explainability, accountability, and five other safeguards.
  • AI adoption is growing 36.6% annually, but governance remains incomplete and inconsistent.

Why 43% of Big Firms Are Gambling With AI Risk

The productivity numbers are seductive. When nearly nine out of ten executives see tangible gains from AI tools—whether through automation, insights, or innovation—the business case feels settled. But the KPMG 2023 US AI Risk Survey revealed a troubling blind spot: executives often underestimate AI-specific risks even as they embed these systems into critical operations. The gap between adoption speed and risk governance has widened precisely because traditional risk frameworks—built for cybersecurity, regulatory compliance, and operational resilience—do not address the novel challenges AI introduces.

Modern AI tools are pushing ethical and legal boundaries faster than existing frameworks can accommodate. Generative AI systems like ChatGPT and DALL-E amplified this urgency, forcing organizations to confront questions about bias, explainability, data privacy, and accountability that were theoretical just two years ago. Yet 43% of the world’s biggest firms still operate without a structured approach to identifying and mitigating these risks. That is not negligence—it is the predictable result of AI adoption outpacing governance maturity.

The NIST AI Risk Management Framework: Four Functions for Enterprise Control

The National Institute of Standards and Technology released the AI Risk Management Framework (AI RMF 1.0) on January 26, 2023, as voluntary guidance for organizations seeking to embed trustworthiness into AI systems. Unlike traditional cybersecurity frameworks that focus on external threats, the NIST AI RMF is technology-neutral and sector-agnostic, addressing risks to individuals, organizations, and society at large. The framework organizes risk management around four core functions that apply across the AI lifecycle.

The first function, Govern, establishes organizational oversight, policies, and culture for responsible AI. This is not a compliance checkbox—it requires leadership commitment to embed risk management into decision-making, resource allocation, and accountability structures. The second function, Map, asks organizations to identify and understand both technical risks (like model drift or adversarial attacks) and societal impacts (like algorithmic bias or labor displacement). The third, Measure, demands ongoing evaluation and monitoring of identified risks through metrics, audits, and assessment protocols. The fourth, Manage, prioritizes mitigation actions and implements controls to reduce risk to acceptable levels. Together, these four functions create a closed-loop system that treats AI risk as dynamic and requiring continuous attention.
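
The closed loop described above can be made concrete with a minimal sketch. This is purely illustrative: the NIST AI RMF prescribes no code, and the class and method names here are invented to mirror the four functions, not taken from the framework.

```python
# Illustrative sketch of the NIST AI RMF loop: Govern, Map, Measure, Manage.
# All names are hypothetical; the framework itself is process guidance, not code.
from dataclasses import dataclass, field

@dataclass
class Risk:
    name: str            # e.g. "model drift" or "algorithmic bias"
    severity: float      # assessed score in [0, 1]
    mitigated: bool = False

@dataclass
class RiskRegister:
    tolerance: float = 0.3                      # Govern: policy sets acceptable risk
    risks: list[Risk] = field(default_factory=list)

    def map_risk(self, name: str, severity: float) -> None:
        """Map: record an identified technical or societal risk."""
        self.risks.append(Risk(name, severity))

    def measure(self) -> list[Risk]:
        """Measure: return risks still exceeding the governed tolerance."""
        return [r for r in self.risks
                if r.severity > self.tolerance and not r.mitigated]

    def manage(self) -> None:
        """Manage: apply controls to every out-of-tolerance risk."""
        for r in self.measure():
            r.mitigated = True

register = RiskRegister(tolerance=0.3)
register.map_risk("model drift", 0.6)
register.map_risk("adversarial attacks", 0.2)
print(len(register.measure()))   # 1 risk exceeds tolerance before mitigation
register.manage()
print(len(register.measure()))   # 0 after controls are applied
```

The point of the loop structure is that Measure feeds back into Manage continuously; a register like this would be re-evaluated as models and threats change, not filled in once.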

NIST also emphasizes seven characteristics of trustworthy AI: reliability, safety, security, accountability, explainability, privacy, and fairness—though the framework acknowledges trade-offs exist between these ideals. An organization cannot maximize both explainability and performance without compromise; the framework’s job is to make those trade-offs explicit and deliberate, not hidden.

KPMG’s Eight Principles: A Practical Alternative for Responsible AI

While NIST provides the structural framework, KPMG’s responsible AI program offers eight guiding principles tailored to enterprise deployment. Fairness comes first, implemented through KPMG’s Fairness Maturity Framework to ensure AI systems meet expectations across diverse stakeholder groups. Explainability ensures products are transparent and open for review, so stakeholders understand how decisions are made. Accountability establishes mechanisms for responsibility throughout planning, development, deployment, and use.

Data integrity—often overlooked in AI discussions—embeds trust by enforcing data quality, governance, and enrichment practices upstream. Reliability demands that systems perform at desired precision and consistency. Security adds safeguards against unauthorized access, corruption, and adversarial attacks. Privacy protects data through limitation, retention controls, misuse prevention, transparency, user control, and access management. These principles are not abstract ideals; they are operational levers that reduce the likelihood of costly failures, regulatory penalties, and reputational damage.
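
One way to operationalize principles like these is as a pre-deployment checklist. The sketch below is an assumption, not KPMG's methodology: the principle names follow the ones enumerated above, while the pass/fail scheme and function name are invented for illustration.

```python
# Hypothetical pre-deployment gate over the responsible-AI principles
# named in the text; the assessment values are made-up examples.
PRINCIPLES = ["fairness", "explainability", "accountability",
              "data integrity", "reliability", "security", "privacy"]

def readiness_gaps(assessment: dict[str, bool]) -> list[str]:
    """Return the principles this deployment has not yet satisfied."""
    return [p for p in PRINCIPLES if not assessment.get(p, False)]

assessment = {p: True for p in PRINCIPLES}
assessment["explainability"] = False   # e.g. no model documentation published yet
print(readiness_gaps(assessment))      # ['explainability']
```

A gate like this is deliberately binary; in practice each principle would be backed by graded evidence, such as KPMG's Fairness Maturity Framework for the fairness entry.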

The Adoption-Governance Mismatch: Why It Matters Now

The danger lies not in AI itself but in the speed differential. Organizations adopting AI at 36.6% annual growth are moving faster than they can govern. A firm that implements ChatGPT for customer service, deploys machine learning for hiring decisions, and uses generative AI for code generation simultaneously has created three separate risk surfaces—each with different failure modes, stakeholder impacts, and regulatory exposures. Without a unified framework like NIST AI RMF or KPMG’s principles, these deployments operate in silos, each optimized for speed, none optimized for safety.
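
The silo problem above can be expressed as data: each deployment carries its own failure modes, and a unified register is simply the union across them. The deployment names and risk labels below are illustrative examples, not findings from the survey.

```python
# Hypothetical risk surfaces for the three deployments described above.
deployments = {
    "customer-service chatbot":  {"hallucinated answers", "data privacy"},
    "ml hiring screen":          {"algorithmic bias", "disparate impact"},
    "code-generation assistant": {"insecure output", "license contamination"},
}

# Siloed view: each team sees only its own risks.
# Unified view: one register spanning every deployment, as a framework requires.
unified = set().union(*deployments.values())
print(len(unified))   # 6 distinct risk categories to govern in one place
```

The unified set is what a framework like the NIST AI RMF forces into existence; without it, each of the three teams optimizes locally and no one owns the whole surface.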

NIST acknowledged this challenge by releasing the Generative AI Profile (NIST-AI-600-1) on July 26, 2024, as specialized guidance for the AI systems causing the most immediate concern. The framework is intended for voluntary use and will be updated over time as knowledge and practices evolve, according to NIST IT lab chief of staff Elham Tabassi. This flexibility is both a strength—allowing organizations to adapt the framework to their context—and a weakness, as voluntary guidance cannot enforce adoption.

Closing the Gap: What Firms Must Do

Organizations that have implemented an AI risk framework report stronger governance, clearer accountability, and fewer surprises. Those without one are essentially running an uncontrolled experiment on their business. The gap between the 86% reporting productivity gains and the 43% lacking frameworks is not sustainable. Regulators, customers, and investors will demand evidence of responsible AI practices. The firms that move first to adopt NIST AI RMF or equivalent guidance will set the competitive standard and avoid the costly retrofitting that latecomers will face.

What is the NIST AI Risk Management Framework?

The NIST AI Risk Management Framework is voluntary guidance released in January 2023 to help organizations manage AI risks across design, development, use, and evaluation. It organizes risk management into four functions—Govern, Map, Measure, and Manage—and emphasizes seven characteristics of trustworthy AI: reliability, safety, security, accountability, explainability, privacy, and fairness.

How does KPMG’s approach differ from NIST?

KPMG’s responsible AI program focuses on eight guiding principles tailored to enterprise deployment, among them fairness, explainability, accountability, data integrity, reliability, security, and privacy, while NIST provides a broader structural framework applicable across sectors and technologies. The two are complementary: NIST sets the architecture, KPMG offers operational guardrails.

Why do 43% of major firms still lack an AI risk framework?

AI adoption is outpacing governance maturity. Organizations prioritize speed and productivity gains over risk management because traditional frameworks do not address AI-specific challenges like bias, explainability, and accountability. Implementing a formal framework requires leadership commitment and cross-functional coordination that many firms have not yet mobilized.

The productivity gains from AI are real and compelling. But they are also masking a critical vulnerability: the absence of systematic risk management in nearly half of the world’s biggest firms. As AI adoption accelerates and tools become more autonomous and powerful, that gap will become increasingly untenable. The firms that adopt NIST AI RMF or equivalent guidance today will compete from a position of strength tomorrow. Those that delay are betting that luck will substitute for governance—a gamble that rarely pays off.

Edited by the All Things Geek team.

Source: TechRadar
