
Shaping AI Governance: A Comparative Look at the EU AI Act and U.S. NIST Framework

isabellsheang

In the realm of AI governance and ethical oversight, the European Union (EU) is once again leading the charge, much as it did with the General Data Protection Regulation (GDPR). The EU's AI Act (draft), with its detailed four-tier risk classification, provides a nuanced framework, more fleshed out than the United States' National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF).

We are at a pivotal moment to influence, set, and adopt AI governance at the global, national, and organizational levels. No time to pore through hundreds of pages of documents? Here is a high-level summary, and I encourage everyone to get familiar with both frameworks.


An Overview of the EU AI Act

The EU AI Act is a pioneering regulation that categorizes AI systems into four distinct risk levels: Unacceptable, High, Limited, and Minimal. This system allows for a tailored approach to AI governance, reflecting the varying degrees of impact AI systems can have on society and individuals.


Unacceptable Risk: AI applications falling under this category are deemed too harmful, encompassing technologies like real-time remote biometric identification in public spaces and AI that manipulates human behavior to circumvent users' free will.

High Risk: This includes AI systems used in critical infrastructures, educational or vocational training, employment, essential private and public services, law enforcement, migration, asylum, and border control management, as well as the administration of justice and democratic processes.

Limited Risk: AI applications that interact with humans, such as chatbots, are required to disclose their non-human nature to users.

Minimal Risk: AI applications that pose minimal risk to citizens' rights, like AI-enabled video games or spam filters, are subject to minimal obligations.
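To make the four tiers concrete, here is a minimal sketch in Python of a toy lookup that maps example use cases to the EU AI Act's risk levels. The example use cases and their tier assignments are simplified illustrations drawn from the summaries above, not legal guidance or a compliance tool.

```python
# Toy illustration of the EU AI Act's four-tier risk classification.
# Tier assignments here are simplified examples, not legal determinations.

EU_AI_ACT_TIERS = {
    "Unacceptable": ["real-time biometric identification", "behavioral manipulation"],
    "High": ["credit scoring", "medical diagnostics", "border control management"],
    "Limited": ["customer-service chatbot"],
    "Minimal": ["spam filter", "video game AI"],
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known example use case, else 'Unclassified'."""
    for tier, examples in EU_AI_ACT_TIERS.items():
        if use_case in examples:
            return tier
    return "Unclassified"

print(classify("spam filter"))      # Minimal
print(classify("credit scoring"))   # High
```

In practice, classification under the Act depends on context of use, not just application type, which is why a real assessment requires legal review rather than a lookup table.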


Understanding the U.S. NIST AI RMF

The U.S. NIST AI RMF, while not providing a risk categorization as specific as the EU's, offers a structured methodology for managing AI risks. It focuses on the creation of trustworthy AI systems, prioritizing attributes such as reliability, safety, security, resilience, accountability, transparency, explainability, privacy enhancement, and fairness. The framework encourages a multidisciplinary approach, involving diverse stakeholders in the AI system's lifecycle, from design to deployment and monitoring.

So what’s the major difference between the two frameworks?

While both frameworks aim to foster responsible AI development and use, their approaches differ. The EU AI Act's risk-based classification offers a more immediate and practical guideline for AI application developers and users, especially in identifying and complying with the specific requirements associated with different levels of risk. The NIST AI RMF, conversely, provides a broader set of principles and a flexible process-oriented approach, adaptable to various types of AI applications and business models.





So What Are Some Industry-Specific Implications and Examples?

Healthcare: AI applications in diagnostics and treatment recommendations in healthcare, classified as high risk under the EU AI Act, must ensure accuracy, transparency, and non-discrimination. The NIST AI RMF complements this by advocating for continuous monitoring and validation of these systems.

Finance: AI used for credit scoring or fraud detection, also high risk, necessitates fairness and data protection under the EU framework, while the NIST AI RMF emphasizes the importance of resilience and security against AI-specific threats.

Customer Service: AI chatbots, classified as limited risk, require clear disclosure of their AI nature under the EU Act. The NIST framework reinforces the need for transparency and explainability in these interactions.

Entertainment: AI for personalized content recommendations is a minimal-risk application; although lightly regulated under the EU Act, it can still benefit from the NIST framework's focus on privacy and user data protection.


How Can Organizations Take the First Step?

The proactive adoption of these guidelines will position organizations as leaders in the responsible use of AI.


The EU AI Act mandates stringent compliance for high-risk AI, especially in sectors like healthcare and finance. The NIST AI RMF's structured risk management approach is valuable for businesses operating across sectors, including those with limited- or minimal-risk AI applications. A good first step is to assess your current AI practices against both the EU AI Act and the NIST AI RMF, then prioritize closing the gaps that matter most for building ethical and compliant AI systems and applications.

© 2023 by Satori, LLC. All rights reserved.