
AI and Third-Party Risk

  • Writer: samdelucia
  • Jan 11
  • 5 min read

I was recently asked by a client to review an Artificial Intelligence risk questionnaire. As I developed suggestions, it started to feel like the makings of a great blog post. Here are some thoughts on Third-Party Risk Management for AI providers and for those who use AI.


First, a little level setting. We commonly see the following AI-use structures; regardless of your specific use case, one of these broad scenarios will describe how most organizations use AI.


  • Providers: Organizations that develop and use their own AI solutions.

  • Deployers: Organizations that use another company's AI solutions to some extent.

  • Hybrids: Organizations that use both internally developed and externally developed AI.


In the last two scenarios - "Deployers" and "Hybrids" - it's important to understand how the company offering the artificial intelligence solution is dealing with regulatory compliance, and how it develops, governs, and implements its product. In cases where the external AI offering has unfettered access to your sensitive data or works collaboratively with internally developed AI, you'll also want to know how any data it accesses and uses is handled.


Below are some suggested areas - by no means an exhaustive list - to clarify in writing before engaging with a firm offering artificial intelligence solutions.


  • Compliance with Regulations: For many types of businesses, specific regulations around the handling and use of data already exist and must be followed (for example, the healthcare field has HIPAA). The use of AI creates another layer where regulatory compliance needs to be monitored, and in some cases it also opens you up to additional regulation, such as the EU AI Act (https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence) and the United States' Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/). Third parties providing you with AI solutions should give you a clear picture of how compliant they are with applicable laws and regulations. There are many great references out there; I like how the IAPP laid this out (https://iapp.org/media/pdf/resource_center/global_ai_law_policy_tracker.pdf). Make sure you know which AI-specific regulations apply to you, and which of these the third party takes responsibility for.

  • Leveraging Frameworks: There are several recognized, continually improving frameworks against which many AI offerings are measured. It's a great sign when the firm offering you AI solutions explains how it follows a trusted framework and continuously monitors itself (or has others monitor it) for compliance with that framework. Two of my favorites are the NIST AI Risk Management Framework (AI RMF 1.0, https://www.nist.gov/itl/ai-risk-management-framework) and ISO/IEC 42001 (https://www.iso.org/standard/81230.html); both are great examples to ask about when interviewing or assessing third parties (a checklist sketch follows this list). If they use another framework, great; be sure to check it out and confirm it covers the areas you care about.

  • Your Data: Your data is one of your most valuable assets, if not the most valuable, and its handling and use carry significant reputational and financial risk for your company and for those whose data you've been entrusted to manage. Will your data be shared? Will it be used to help improve AI models that support your competitors? What will your data be used for beyond what you want it used for? Will your data be confined within your home country's borders, or will it be used by, traverse, or be stored in systems or networks with components abroad? What are the third party's cross-border practices?

  • Third-Party Acquired, Augmented, or Generated Data: Does the third party use live data? Does it use augmented data? Does it use synthetic data? How does each choice impact model performance?

  • Use of PII: How will the third party handle personally identifiable information (PII)? Will it train its models with PII? Who has access to the PII? How long is the data kept? (One way to keep PII out of a vendor's hands entirely is to redact it before it leaves your systems; see the sketch after this list.)

  • Consent Practices: Is the third party following sound privacy practices (see the references to frameworks, above)? Does it inform data owners of all uses of their data? Does it request consent before using anyone's data? Does it provide a means of redress? Does it follow the consent requirements of privacy regulations such as the CCPA?

  • Breach Notification and Indemnification: What are the third party's breach notification practices? Is there a defined process for breach notification? What constitutes a breach? When and how will you be notified? Does the third party notify regulatory agencies? Does its insurance cover you sufficiently in the event of a breach?

  • RA / DPIA / PIA / CA: Does the third party conduct Risk Assessments (RA), Data Protection Impact Assessments (DPIA), Privacy Impact Assessments (PIA), and Conformity Assessments (CA) on its AI solutions? For each of these, how frequently, and what constitutes a successful assessment?

  • Model and Algorithm Training: How does the third party train its models? Does it use real or synthetic data, and how does it handle the loss of model performance that can come with training on synthetic data? Does the third party restrict each model to a single use?

  • Model and Algorithm Monitoring: What is the process for monitoring the models after implementation? What parameters are used to identify model drift? (A simple drift check appears in the sketches after this list.) Are humans involved in monitoring? What level of involvement will your organization have in monitoring models over time? Are there supporting logs and documentation of the monitoring results for each model?

  • Model and Algorithm Documentation: Does the third party have technical documentation for each model? Will your organization have access to it? Can the documentation be followed easily enough to understand each model's logic and decision-making process? How frequently is the technical documentation updated? Does it provide insight into the transparency and explainability of the models? Can the third party provide an example of its technical documentation?

  • Human Intervention: Where in the AI process is human intervention implemented? Are there fail-safes and overrides? (A simple human-review routing sketch appears after this list.)

  • Bias Protections: What safeguards are in place to protect your organization from model bias and discrimination? How do these safeguards align with your firm's diversity, equity, and inclusion commitments? (A basic bias screen appears in the sketches after this list.)

  • Model and Algorithm Ownership: Who owns the models?  Does ownership of models change hands, and if so, at what point?

  • Data Ownership: Who owns the output from the AI solution? Can the third party or anyone else use that output?

  • Intellectual Property: Is your intellectual property, when used by a third-party AI model, still considered your intellectual property? What about internal risk models and algorithms used in your own business analytics... whose intellectual property are those?

  • Security: There is much in the security arena that can be suggested for third-party questionnaires, and what you are protecting your organization against, and how, changes depending upon the implementation. If you haven't checked out the MITRE ATT&CK framework (https://attack.mitre.org/), I suggest you do so; it offers a lot to consider for protecting your organization. There is also the MITRE ATLAS framework, specifically for AI (https://atlas.mitre.org/matrices/ATLAS).
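
To make a few of the areas above concrete, the short Python sketches below are illustrative starting points, not turnkey controls; every name, threshold, and pattern in them is my own assumption rather than something a vendor or framework prescribes. First, frameworks: a minimal sketch of tracking a vendor's answers against the four core functions of NIST AI RMF 1.0 (Govern, Map, Measure, Manage). The questions and the scoring are illustrative, not part of the framework itself.

```python
# Minimal sketch: track vendor answers against the four core functions of
# NIST AI RMF 1.0 (Govern, Map, Measure, Manage). The questions and scoring
# below are illustrative assumptions, not part of the framework itself.

CHECKLIST = {
    "Govern": [
        "Is there a named owner for AI risk at the vendor?",
        "Is conformance (AI RMF, ISO/IEC 42001, etc.) independently assessed?",
    ],
    "Map": [
        "Are intended use cases and known limitations documented per model?",
    ],
    "Measure": [
        "Are bias, drift, and performance metrics defined and reported?",
    ],
    "Manage": [
        "Is there an incident-response and breach-notification process?",
    ],
}

def coverage(responses: dict[str, list[bool]]) -> float:
    """Fraction of checklist questions the vendor answered 'yes' to."""
    answers = [a for function in CHECKLIST for a in responses.get(function, [])]
    return sum(answers) / len(answers) if answers else 0.0

# Example: a vendor that answers 'yes' to everything except the last question.
demo = {fn: [True] * len(qs) for fn, qs in CHECKLIST.items()}
demo["Manage"] = [False]
print(f"Checklist coverage: {coverage(demo):.0%}")  # Checklist coverage: 80%
```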
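
Next, PII: a minimal sketch of redacting obvious identifiers client-side, before text ever reaches a third-party AI endpoint. The regex patterns are simplistic, US-centric assumptions; a real program would pair this with a dedicated PII-detection tool.

```python
# Minimal sketch: strip recognizable PII before sending text to a vendor.
# These patterns catch only obvious US-style identifiers and are illustrative.

import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each recognized identifier with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach Jane at jane.doe@example.com or 555-867-5309."))
# Reach Jane at [EMAIL] or [PHONE].
```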
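
On monitoring: a minimal sketch of one common drift check, the Population Stability Index (PSI), comparing a training-time baseline against live data for a single feature. The 0.1 / 0.25 thresholds are widely used rules of thumb, not standards, and a real deployment would monitor many features plus model outputs.

```python
# Minimal sketch: Population Stability Index (PSI) between a training-time
# baseline and live scoring data for one feature. Rule of thumb: < 0.1 is
# stable, 0.1-0.25 is moderate shift, > 0.25 usually warrants investigation.

import numpy as np

def psi(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """PSI = sum((live% - base%) * ln(live% / base%)) over shared bins."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Clip to avoid division by zero in sparsely populated bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # what the model was trained on
live     = rng.normal(0.5, 1.2, 10_000)  # what production traffic looks like now
value = psi(baseline, live)
print(f"PSI = {value:.3f} -> {'investigate drift' if value > 0.25 else 'stable'}")
```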
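
On human intervention: a minimal sketch of one common fail-safe, routing low-confidence model decisions to a human reviewer instead of acting on them automatically. The 0.90 threshold and the Decision shape are assumptions.

```python
# Minimal sketch: act automatically only on high-confidence decisions;
# everything else is queued for a person. Threshold and types are assumed.

from dataclasses import dataclass

REVIEW_THRESHOLD = 0.90  # assumed cutoff: below this, a person decides

@dataclass
class Decision:
    label: str
    confidence: float

def route(decision: Decision) -> str:
    """Auto-apply high-confidence decisions; queue the rest for review."""
    if decision.confidence >= REVIEW_THRESHOLD:
        return f"auto-applied: {decision.label}"
    return f"queued for human review: {decision.label} ({decision.confidence:.0%})"

print(route(Decision("approve_claim", 0.97)))  # auto-applied: approve_claim
print(route(Decision("deny_claim", 0.62)))     # queued for human review: deny_claim (62%)
```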
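
And on bias: a minimal sketch of a basic screening check that compares favorable-outcome rates across groups and computes the disparate-impact ratio. The four-fifths (0.8) threshold comes from US employment guidance and is a screen, not proof of fairness; the data here is made up.

```python
# Minimal sketch: favorable-outcome rates per group and the disparate-impact
# ratio (min rate / max rate). A ratio below ~0.8 is a common screening flag.

from collections import defaultdict

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Favorable-outcome rate per group, from (group, favorable?) pairs."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        favorable[group] += ok
    return {g: favorable[g] / totals[g] for g in totals}

# Made-up outcomes: group A is favored 80% of the time, group B 55%.
outcomes = ([("A", True)] * 80 + [("A", False)] * 20
            + [("B", True)] * 55 + [("B", False)] * 45)
rates = selection_rates(outcomes)
ratio = min(rates.values()) / max(rates.values())
print(rates)                                    # {'A': 0.8, 'B': 0.55}
print(f"disparate-impact ratio = {ratio:.2f}")  # 0.69, below the 0.8 screen
```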


If You Remember Anything, Remember This: There's a great deal to consider when engaging any AI service provider. The best approach is to start from your own internal AI Governance model and controls, apply them to the third-party relationship, and define where the third party's responsibility ends and yours begins.
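
One way to make that boundary concrete is to write it down control by control. Below is a minimal sketch of a shared-responsibility matrix; the controls and owners are illustrative assumptions for a "Deployer" scenario, and the point is simply that every control ends up with a named owner in writing.

```python
# Minimal sketch: a shared-responsibility matrix for an AI engagement.
# Controls and owners below are illustrative assumptions for a "Deployer".

RESPONSIBILITY = {
    "training data quality and provenance":    "provider",
    "model drift monitoring":                   "shared",
    "PII redaction before input":               "deployer",
    "breach notification to you":               "provider",
    "breach notification to your customers":    "deployer",
    "bias testing in your specific use case":   "deployer",
}

VALID_OWNERS = {"provider", "deployer", "shared"}
unowned = [c for c, owner in RESPONSIBILITY.items() if owner not in VALID_OWNERS]
assert not unowned, f"controls with no valid owner: {unowned}"

for control, owner in sorted(RESPONSIBILITY.items()):
    print(f"{owner:>8}  {control}")
```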


For help with or questions about AI Governance, IT Leadership, Risk, Compliance and Cybersecurity, please reach out to us at: https://www.cobaltshields.com/


 
 
 
