
Promoting Innovation Worldwide

April 29, 2022

Via email to: [email protected]

RE: ITI Response to National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework: Initial Draft

The Information Technology Industry Council (ITI) appreciates the opportunity to continue its engagement with the National Institute of Standards and Technology as it seeks to develop an Artificial Intelligence Risk Management Framework. As such, we are pleased to provide comments on the AI Risk Management Framework: Initial Draft.

ITI represents the world's leading information and communications technology (ICT) companies. We promote innovation worldwide, serving as the ICT industry's premier advocate and thought leader in the United States and around the globe. ITI's membership comprises leading innovative companies from all corners of the technology sector, including hardware, software, digital services, semiconductor, network equipment, and other internet and technology-enabled companies that rely on ICT to evolve their businesses. Artificial Intelligence (AI) is a priority technology area for many of our members, who develop and use AI systems to improve technology, facilitate business, and solve problems big and small.

ITI is actively engaged on AI policy around the world. We issued a set of Global AI Policy Recommendations in 2021, aimed at helping governments facilitate an environment that supports AI while simultaneously recognizing that there are challenges that need to be addressed as the uptake of AI grows around the world.[1] We have also actively worked to inform NIST's efforts to foster trust in AI technology, including responding to NIST's RFI on an AI Risk Management Framework[2] and the RFI on the AI RMF Concept Paper.[3]

[1] Our complete Global AI Policy Recommendations are available here: nce/ITI GlobalAIPrinciples 032321 v3.pdf
[2] See ITI response to RFI on AI RMF here: nce/NISTRFIonAIRMFITICommentsFINAL.pdf
[3] See ITI response to RFI on AI RMF Concept Paper here: ITI Comments on AI RMF Concept Paper FINAL.pdf

ITI and our members share the firm belief that building trust in the era of digital transformation is essential and agree there are important questions to address regarding the responsible development and use of AI technology. As this technology evolves, we take seriously our responsibility as enablers of a world with AI, including seeking solutions to address potential negative externalities and helping to train the workforce of the future. To be sure, our members are aware of and are taking steps to understand, identify, and treat the potential for negative outcomes while leveraging opportunities that may be associated with the use of AI systems. As such, we appreciate that NIST is working to establish an AI Risk Management Framework (RMF) and that we can provide input to the Initial Draft of this framework.

Below, we highlight some overarching recommendations that we believe will be helpful in strengthening the AI RMF. Following that, we provide feedback on the questions NIST poses in the Initial Draft.

Overarching Recommendations

At the outset, we would like to thank NIST for considering our previous feedback to the Concept Paper. We also provide additional general comments for NIST to consider as it further builds out the AI RMF, in some instances reiterating our previous recommendations, which we continue to believe will strengthen the ultimate RMF.

NIST should seek to maintain coherence with prior works, clearly establishing a linkage between the AI Risk Management Framework and the Cybersecurity and Privacy Frameworks. We appreciate NIST acknowledging that the RMF aims to fill the gaps related specifically to AI, as any software or information-based system includes risks related to cybersecurity, privacy, safety, and infrastructure. However, it would be helpful for NIST to articulate more clearly what the overlap or interplay between the AI RMF and the Cybersecurity and Privacy Frameworks looks like. This could appear in the form of a Venn diagram, such as the one included in the Privacy Framework, demonstrating the overlap between the three Frameworks, or as a more detailed crosswalk akin to the one between the Privacy and Cyber Frameworks, where the AI Risk Management, Privacy, and Cyber Frameworks are mapped to each other.

NIST should seek to leverage and align the RMF with published standards or those currently under development in international standards bodies. In the Initial Draft, NIST has recognized that certain risks can be positive, which is in alignment with Guide 73:2009 and IEC/ISO 31010. However, we would like to reiterate that NIST should seek to align the RMF with other standards and frameworks, such as ISO/IEC DIS 23894 – Information technology — Artificial intelligence — Risk management and ISO/IEC CD 5338 – Information technology — Artificial intelligence — AI system life cycle processes, which are currently under development.

NIST should seek to maintain and foster consistency internationally to the extent possible. As we have noted in our earlier submission to NIST on the AI RMF, international consistency is essential, particularly as countries around the world are beginning to consider how to address risks and harness benefits that may stem from the use of AI.

In considering risks, NIST should clarify how risks differ for human-facing and non-human-facing AI systems, as well as appropriate risk evaluation criteria. We suggested this in response to the initial Concept Paper and continue to believe this is imperative for the Framework to address moving forward. While some AI applications are human-facing (e.g., face recognition systems, recommender systems, or hiring systems), many AI applications are not (e.g., analysis of weather information, defects on the factory floor, bottlenecks in networks, or the state of the roads). AI systems that are not human-facing typically do not have PII (personally identifiable information) in their data sets and frequently feed analytics to other machines rather than human end users. As a result, human-facing and non-human-facing AI systems have distinct types of risks associated with them. For example, considering privacy risks is essential for human-facing systems, but privacy risks are not present in weather sensor data analysis fed to another system that uses the analytics to assess climate patterns over a longer period of time. Applying the same risk management requirements to both types of AI systems would not allow technologists and evaluators to assess the risks of AI systems in an actionable fashion and would also be onerous to organizations – disproportionately hindering innovation.

NIST should add a function that accounts for contingencies. We continue to believe that adding a separate "Respond" function to account for contingencies would be helpful. Although NIST briefly references "incident response" in the context of the proposed Manage function and as a subcategory in the Governance function, we continue to believe that a separate function that maps practices that organizations might undertake to respond to an AI-related incident would be useful. While we understand the intent of the Manage function is likely to capture activities such as response and contingencies, in the AI context it may be appropriate to include both Respond and Govern functions. Furthermore, it might also be useful to create a database with best practices gathered from the results of such a Respond function so that organizations can leverage such data to anticipate new incidents and deploy mechanisms (some of which may be automated, i.e., MLOps) to consistently check for risk factors. This may also help to encourage stakeholder alignment. It is also worth noting that the OECD is planning to develop a common framework for reporting on AI incidents, and a Respond function would help feed into and align with that process.[4] The current incident database curated by the Partnership on AI may also yield useful insights.[5]

[4] See more information on the OECD Risk Classification Framework here: https://oecd.ai/en/wonk/classification
[5] See more information here: s-database/

Specific Responses to Questions Posed in the Concept Paper

Below, we also offer discrete thoughts on the questions that NIST poses in the Initial Draft.

1. Whether the AI RMF appropriately covers and addresses AI risks, including the right level of specificity for various use cases.

As a general matter, we appreciate that NIST has widened the meaning of risk to include positive occurrences and acknowledged that such occurrences can result in opportunities. While we recognize that NIST's definition of "risk" is aligned with NIST SP 800-160 vol. 1, which notes that risk outcomes can be positive (and can in some cases provide an opportunity), and with the International Organization for Standardization (Guide 73:2009; IEC/ISO 31010), we encourage NIST to make clear in conversations with international stakeholders that this is how positive risk should be interpreted. Oftentimes, risk is only associated with the likelihood of a negative outcome. Alternatively, NIST could consider using the word "opportunity" in the Framework itself. We also encourage NIST to further differentiate between "risk" and "impact," as the RMF confuses the two terms at times. If NIST decides to use both terms in the document, it should either clearly define both terms up front, or clarify that they are used interchangeably throughout the document, or both.

We do, however, have questions related to Section 5: AI Risks and Trustworthiness and the structure that NIST uses to classify different characteristics. Likely every aspect (or almost every aspect) of an AI system has a socio-technical component because of the way that AI interacts with society, so it seems unhelpful to break the characteristics out into two other categories without referencing this potential overlap. For example, the characteristics that constitute "Guiding Principles" bridge across several socio-technical components, which should not be overlooked. Mapping the overlap between the guiding principles and other characteristics could be helpful and provide a more accurate representation. Beyond that, it is somewhat unclear how this taxonomy is leveraged in the AI RMF itself, as it is not integrated into the Framework in a meaningful way, aside from a brief mention that organizations should consider all three classes of characteristics in executing the functions. Additionally, we were pleased to see that NIST incorporated considerations around adversarial influence, as we had recommended in our submission to the Concept Paper, but we encourage NIST to add content to Section 5.1.4 Resilience or ML Security to further reflect the breadth of considerations necessary to sufficiently map security risks in AI/ML systems. It may be helpful to leverage the MITRE ATLAS Matrix, or at least reference it as a starting point, as it provides a solid overview of the myriad security/resiliency risks that may be useful for organizations to consider in identifying their risk profile.[6]

[6] See the MITRE ATLAS Matrix here: https://atlas.mitre.org/

As currently drafted, we also do not believe that the Framework appropriately captures AI risks. Indeed, the nature and severity of risks can vary dramatically based on whether a system is human-facing or non-human-facing, but the Framework lacks any clear distinction between the two. As such, we encourage NIST to include a discussion around the distinction between human-facing and non-human-facing AI systems, whether an AI system can impact a person's safety and fundamental human rights, and how that determination might feed into an organization's overarching risk assessment process.

In the Map phase, NIST addresses the need for organizations to understand the intended purpose of the system, the setting in which the system is to be deployed, and the specific tasks supported; however, more time could be spent addressing the need to understand the potential unintended uses of the system. How could the system be used inappropriately and/or outside of the bounds of its currently scoped intended purpose? If the system is in place, what else could be done with it outside of the current scope? In later phases of the AI RMF, more time could be spent addressing how likely such scenarios would be, and ways to mitigate these unintended uses of the system.

Additionally, it would be helpful to clarify the distinction between fairness and the absence of harmful bias. Section 5.3.1 notes that "(f)airness is increasingly related to the existence of a harmful system, i.e., even if demographic parity and other fairness measures are satisfied, sometimes the harm of a system is in its existence." The section then goes on to state that "[w]hile there are many technical definitions for fairness, determinations of fairness are not generally just a technical exercise." The statement is quite broad, implying an expectation to do more than mitigate harmful bias, yet it fails to elaborate on what else this should encompass.

Finally, we do not believe that the Framework is currently specific enough to enable effective implementation. The Practice Guide will be imperative to making the Framework functional and implementable. We offer additional thoughts on this in response to Question 8.

2. Whether the AI RMF is flexible enough to serve as a continuing resource considering the evolving technology and standards landscape.

We believe that the AI RMF is flexible enough to serve as a continuing resource. We understand the Framework can be updated as things change and the landscape evolves, in the same way that the Cybersecurity Framework has undergone periodic updates. That being said, it is also important to develop the Practice Guides in a similarly flexible way because AI is such a nascent technology, and standards and best practices to address many of the subcategories are still under development. Indeed, it may be that there are no existing standards to address some of the subcategories, and NIST should reflect that in the Practice Guide/companion document. Furthermore, NIST should construct the Practice Guide/companion document in such a way that it is simple for diverse stakeholders to use.

We also urge NIST to develop an online AI Informative Reference program similar to the one that currently exists for Cybersecurity Informative References. The web-based nature of the Online Informative Reference (OLIR) Program makes it easy to update. New resources can be added as they become available, and the database is evergreen in a way that a published PDF document is not. Something similar would be immensely helpful for AI Informative References, recognizing how rapidly things will likely evolve in this space.

3. Whether the AI RMF enables decisions about how an organization can increase understanding of, communication about, and efforts to manage AI risks.

We believe that the Framework is fairly comprehensive, though as referenced above, the companion document will be imperative to facilitating its implementation. That said, we believe several areas of the document could be strengthened to foster additional understanding, which we outline in response to Question 7.

4. Whether the functions, categories, and subcategories are complete, appropriate, and clearly stated.

As we referenced at the outset, it is our view that the Functions are incomplete and that they could better account for contingencies. In cybersecurity, for example, practitioners do their best to avoid, mitigate, share, transfer, and accept risks. However, organizations also establish incident response practices given the inevitability that incidents will occur. In the same way, organizations should also ensure they are adequately prepared to respond should they be unable to avoid, mitigate, transfer, or accept an AI-related risk. We reiterate our recommendation that NIST develop a Respond function, which would map to practices that organizations might undertake to respond to an AI-related incident. In so doing, as we mentioned above, it would be useful to also consider developing a database or other mechanism to log and/or share best practices across organizations, where applicable, as well as to engage with the OECD as it embarks on its effort to develop a common framework to report on AI-related incidents.

Furthermore, documentation (or even technical traceability) is missing from the draft's "technical characteristics" of trustworthy AI. NIST should include documentation as a standard along with accuracy, reliability, robustness, and resilience. Failing to document the thresholds for accuracy, reliability, robustness, and resilience, along with the intended uses and limitations of the AI, will create unnecessary risk.

We also note that one of the functions in the AI RMF focuses specifically on measurement. We appreciate that NIST has included Section 4.2 Challenges for AI Risk Management, and that it includes a discussion around challenges in measuring AI risk. Beyond the fact that some AI risks may not be well-defined or well-understood, or that the opaqueness of an AI system may contribute to measurement challenges, we also think it is worth adding content that further emphasizes that risks might only be describable in a qualitative or semi-quantitative manner due to the current lack of measurements or lack of robust and verifiable measurement methods.

In developing qualitative and quantitative measurements and monitoring, it might be helpful for NIST to look to ISO/IEC 31010 Risk management – Risk assessment techniques. Annexes A and B in particular provide an exhaustive list and comparison of risk assessment methods, some of which could be leveraged or adapted. Both annexes also provide selection criteria and considerations. Leveraging such tools for AI would allow organizations to integrate AI risk management (both of organizations and of AI systems) directly into existing cultures and practices, if any; this would lessen the burden on functions such as engineering quality assurance or internal auditing, and limit overall cost while improving effectiveness.

AI is an emerging technology area, and standards, guidelines, and best practices are still under development. Because of this, we are also still learning about the range of potential risks, their likelihood, and how to measure them. Thus, we continue to believe that it would be helpful for NIST to indicate how the RMF might address a situation where such risks cannot be appropriately measured. We continue to encourage NIST, in developing the AI RMF, to specifically address situations where risk cannot be measured and offer guidance on reasonable steps for treating that risk, without limiting innovation and investments in new, and potentially beneficial, AI technologies. Importantly, NIST should note that the inability to measure AI risk does not imply that an AI system poses high or infinite risk. To put it another way, the absence of data should not be treated as justification for halting all use or development of a technology or use. In the same vein, not every measure of risk is meaningful. NIST should consider these inherent limitations in measuring risk, which could lead to certain harms being overlooked.[7]

[7] See Fazelpour and Lipton's "Algorithmic Fairness from a Non-Ideal Perspective" (https://arxiv.org/abs/2001.09773).

5. Whether the AI RMF is in alignment with or leverages other frameworks and standards such as those developed or being developed by IEEE or ISO/IEC SC42.

As we mentioned in our overarching recommendations section, NIST should seek to further align with international standards to encourage consistency in the way organizations are implementing risk management processes. We particularly encourage NIST to utilize ISO/IEC DIS 23894 AI Risk Management, and it would be helpful for NIST to reference this standard in the body of the AI RMF itself, in addition to including it as an informative reference in the forthcoming Practice Guide.

We also encourage NIST to seek to further align with ISO/IEC 5338 – Information technology – Artificial intelligence – AI system life cycle processes, ISO/IEC DIS 38507 Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations, and ISO/IEC DIS 23894 Table C.1 Risk Management and AI System Lifecycle. As we mentioned in our prior response to the Concept Paper, it would be helpful for NIST to further illustrate the stages following deployment, including the post-market stages, which may engender certain risks across a longer period of time, and the retirement phase, which marks the end of the lifecycle and may also have a different set of risks associated with it. Indeed, risk management does not cease with the deployment of an AI system. NIST should take interdependencies between risks and residual risks into consideration to an appropriate degree.

NIST should also seek to align the terminology used in the AI RMF with the terminology specified in ISO 31000:2018, IEC/ISO 31010:2009, ISO/IEC DIS 23894 (Clauses 6 to 6.7), and ISO/IEC 22989. Alternatively, NIST could map the RMF terminology to these international standards. By doing so, NIST could serve as an example for other regional efforts, demonstrating the importance of alignment with international standards. Additionally, a misalignment in terminology, nomenclature, processes, and methods with those used in international standards will make it difficult for both industry and government to understand and apply the AI RMF. By mapping and seeking to reconcile terminology, guidelines, and requirements across multiple jurisdictions, NIST can help to prevent duplication of efforts, prevent different interpretations of key terms and requirements, and help to facilitate seamless integration into existing organizational risk governance (e.g., safety, security, quality, environmental, and ethical risk management systems).

For example, "Map - Measure - Manage" does not seem to align with the ISO/IEC terminology, though it covers some of the same elements:

- "Map" is covered by ISO/IEC 23894 under 6.2 "Communication and consultation" and 6.3 "Scope, context and criteria."
- "Measure" is referred to in ISO/IEC 23894 as the iterative 6.4 "Risk assessment" (risk identification, risk analysis, risk evaluation) cycle.
- "Manage" corresponds to Risk Treatment in ISO/IEC 23894 and ISO 31000.
- ISO/IEC 23894 and ISO 31000 include a response function as part of "implementing risk treatment plans," inclusive of verification of effectiveness.
- It is our view that elements of ISO/IEC 23894 such as "Monitor and review" and "Recording and reporting" are not sufficiently emphasized throughout the risk management process set forward by the NIST AI RMF, so we would encourage additional alignment there.

NIST should also consider leveraging the definition of AI stakeholders described in ISO/IEC 22989 Information technology — Artificial intelligence — Artificial intelligence concepts and terminology; clause 5.17 AI stakeholder roles defines AI provider, user, customer, partner, and subject roles. In addition, SC 42 work particularly considers the complexity of the AI value chain. AI RMF requirements might thus not apply uniformly across the value chain. Assignment of risk accountability and responsibilities, for example, should consider several factors, such as where the stakeholder is located in the value chain and the type of AI system (e.g., general-purpose, custom-, or special-purpose). The stakeholder roles could also be part of a single organization or broken down across multiple organizations. All of these factors could impact implementation of the AI RMF.

Lastly, we encourage NIST to align with the terminology in ISO/IEC 22989 Information technology — Artificial intelligence — Artificial intelligence concepts and terminology around the AI lifecycle. In particular, we point NIST toward ISO/IEC 22989 Figure 3 — Example of AI system life cycle model stages and Figure 4 — Example AI system life cycle model with AI system-specific processes:

- The stage NIST specifies as "pre-design" is termed "Inception" by ISO/IEC 22989, and the "Data collection" activity in the AI RMF is part of the ISO/IEC "Design and development" stage.
- Additionally, NIST uses the term "Deployment" to describe the entire stage after release of the AI system. ISO/IEC 22989, on the other hand, breaks the post-deployment lifecycle down into several stages: "Deployment," which is the initial release to operation; "Operation & monitoring," which is the longest, sustaining stage; "Re-evaluation"; and "Retirement." Each of these stages incurs different risks, challenges, and opportunities.
- Finally, ISO/IEC 22989 uses the term "retirement," where "decommissioning" is only one of several retirement options for the system.

NIST could also consider leveraging the OECD Framework for the Classification of AI Systems. It is a highly practical and instructional document. In particular, the OECD document provides a detailed matrix which matches contexts, technical and socio-technical characteristics/principles, and lifecycle sub-stages, which could be useful in informing the AI RMF.

6. Whether the AI RMF is in alignment with existing practices and broader risk management practices.

Some of the key modules of a risk management framework are not visible in the AI RMF. First, as we have noted above, the Respond function is not currently included in the AI RMF. This function is critical, as it is not possible to control for all risks and vulnerabilities.

We also note that Record & Report is missing from this Framework – beyond external reports, this should cover internal reporting and awareness. This may fit underneath the Manage function.

We also note that, in many instances throughout the Framework, impact is defined as adverse. However, it is important to consider that there may be impacts due to AI risk which are not adverse at the initial stages but may require fixing to avoid having an adverse effect when merged with other security vulnerabilities.

7. What might be missing from the AI RMF

Something we have advocated for throughout NIST's development of the AI RMF is establishing risk evaluation criteria to help guide organizations as they seek to establish risk thresholds and understand their risk tolerance/appetite. While we recognize this is a significant undertaking, we continue to believe that such a methodology would be helpful for organizations in determining the risk level of a specific AI use case, informing the steps that they should take to mitigate or treat the risk. Such a methodology should also identify the appropriate roles for AI developers, deployers, users, and other stakeholders in making risk determinations. These determinations are also crucial for helping stakeholders identify specific technological mechanisms for measuring, mitigating, and controlling high-risk attributes of AI systems, where applicable. We are not saying that NIST should bucket specific uses of AI into a "high-risk" category, but instead that it should develop criteria that can help the relevant roles with responsibilities and authorities to determine what level of risk a particular use case may pose. Including illustrative examples may be helpful, with the clear caveat that the examples are just that, illustrative, and not meant as a categorical determination. If NIST deems it unfeasible to include evaluation criteria in the AI RMF itself, then we strongly encourage NIST to launch a process with the goal of working with stakeholders to develop such criteria.

As we mentioned in our introductory recommendations, we also think it would be useful for NIST to add additional discussion around the linkage between the Privacy and Cybersecurity Frameworks and the AI RMF. Both privacy and cybersecurity characteristics are discussed in the taxonomy NIST lays out in the AI RMF, but it is not clear how an organization might leverage the AI RMF in conjunction with the other NIST Frameworks, or whether there are aspects of the AI RMF that map to either (or both of) the Privacy and Cyber Frameworks. Section 1.2.1 of the Privacy Framework, for example, discusses the relationship between cybersecurity and privacy risk management, and offers a helpful Venn diagram that very clearly illustrates where cyber and privacy risks overlap.[8] We strongly encourage NIST to add a similar section on cyber, privacy, and AI risk management so as to help organizations understand how these risks appear in the context of AI and how they might use other Frameworks to address these risks together with the AI RMF.

NIST should also consider the implications of including all AI systems within the scope of the AI RMF. Due to the ubiquitous use of AI systems across organizations, it would likely be burdensome to include all AI systems within the AI RMF. Ideally, organizations should have the ability to decide which of their systems are covered by the AI RMF. We recommend that NIST include this as a category or sub-category under the Governance Function.

We also think it would be useful for NIST to add to Section 1 Overview (lines 18-23), where NIST discusses federal and/or legislative initiatives that the AI RMF is consistent with and/or otherwise supports, an explanation of how the AI RMF is also aligned with the principles laid out in OMB Memo M-21-06, Guidance for Regulation of AI Applications.

Finally, on p. 10, NIST notes that "organizations need to establish and maintain the appropriate accountability mechanisms, roles and responsibilities, culture, and incentive structures for risk management to be effective." Specifically, on creating incentive structures, we believe NIST can include more content about how to help people understand how they themselves are stakeholders in the RMF process.

8. Whether the soon-to-be-published draft companion document citing AI risk management practices is useful as a complementary resource and what practices or standards should be added.

The soon-to
