Financial Services And The Biden Artificial Intelligence Executive Order

On October 30, 2023, the White House announced that President Biden had issued an
Executive Order regarding artificial intelligence (“AI”). The
Executive Order was accompanied by a Fact Sheet summarizing
the eight policy goals on AI that the White House wanted to
emphasize: 1) creating new standards for AI safety and security; 2)
advancing bipartisan privacy protections at the Federal level; 3) ensuring AI
advances equity and civil rights; 4) ensuring consumers are
benefited, and not harmed, by AI; 5) ensuring workers are protected
and supported as AI develops; 6) promoting innovation and
competition so that AI development can occur at large and small
companies; 7) advancing American leadership in AI abroad; and 8)
ensuring responsible and effective use of AI by the Federal
Government. The White House previously issued its Blueprint for an AI Bill of Rights in October 2022.

The Executive Order directs executive agencies, including the
Department of the Treasury and the United States Department of Housing
and Urban Development (“HUD”), to undertake a variety of
actions to operationalize aspects of the Executive Order’s
broad policy goals. In addition, the Executive Order makes
recommendations to both Federal consumer protection agencies, the
Federal Trade Commission (“FTC”) and the Consumer
Financial Protection Bureau (“CFPB”), to take aligned
action. Because both the FTC and the CFPB are independent
regulatory agencies that are not part of the Executive Branch, the
White House is limited to making recommendations to them.

While most of the Executive Order addresses technology, workforce
and social concerns raised by AI developments, it also includes
several directives specific to financial services:

  1. The Department of the Treasury was instructed to produce a
    “public report on best practices for financial institutions to
    manage AI-specific cybersecurity risks” within 150 days
    (i.e., by March 2024).
  2. The Executive Order instructed HUD (and encouraged the CFPB) to
    issue guidance within 180 days (i.e., by April 2024)
    “to combat unlawful discrimination enabled by automated or
    algorithmic tools used to make decisions about access to housing
    and in other real estate-related transactions.” The Executive
    Order contemplates three forms of guidance. First, guidance
    addressing the use of tenant screening systems in ways that may
    violate the Fair Housing Act (“FHA”) and the Fair Credit
    Reporting Act (“FCRA”). Second, guidance addressing how the
    FHA, the Consumer Financial Protection Act (“CFPA”) and the
    Equal Credit Opportunity Act (“ECOA”) may apply to “the
    advertising of housing, credit and other real estate-related
    transactions through digital platforms, including those that use
    algorithms to facilitate advertising delivery.” Third, guidance
    providing best practices for these digital platforms to avoid
    violations of Federal law related to these topics.
  3. A curious practitioner’s point is that the CFPA gave the
    CFPB “exclusive” authority to interpret ECOA and the
    FCRA, as well as the CFPA, and courts were instructed to afford the
    CFPB deference “with respect to [its] interpretation of any
    provision of a Federal consumer financial law . . . as if the
    Bureau were the only agency authorized to apply, enforce,
    interpret, or administer the provisions of such Federal consumer
    financial law.” 12 U.S.C. § 5512(b)(4)(B). Accordingly, should
    any guidance be issued only by HUD and not in conjunction with the
    CFPB, that guidance would not have broad effect.
  4. The Federal Trade Commission was “encouraged” to
    exercise any of its existing authorities “to ensure fair
    competition in the AI marketplace and to ensure that consumers and
    workers are protected from harms that may be enabled by the use of
    AI.”
  5. The Executive Order also broadly encouraged independent
    regulatory agencies to “consider using their full range of
    authorities to protect American consumers from fraud,
    discrimination, and threats to privacy” posed by AI
    technologies.

The directives to the Department of the Treasury and HUD (and the
encouragements to the CFPB and the FTC) summarized above directly
impact the financial services industry, but other aspects of the
Executive Order will necessarily affect financial services as
well. For example, the Executive
Order also seeks to address risks posed by synthetic content
(i.e., the use of AI to generate deep-fake photographs,
voice recordings and video recordings), instructing the Secretary
of Commerce to work with other agencies to develop
“science-backed standards and techniques for 1) authenticating
content and tracking its provenance; 2) labeling synthetic content,
such as using watermarking; 3) detecting synthetic content; . . .
4) testing software used for the above purposes; and 5) auditing
and maintaining synthetic content.” Financial services fraud
teams, ever vigilant regarding phishing and other fraudulent schemes
that trick customers into giving up access to their online accounts
or even sending funds from those accounts, are bound to focus
increasingly on synthetic content issues.
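
By way of illustration only, the sketch below shows one simple way the
idea of “authenticating content and tracking its provenance” can work in
principle: a keyed digest binds a piece of content to its claimed origin
metadata, so that later tampering with either the file or the metadata is
detectable. The signing key, function names and metadata fields are
hypothetical assumptions made for this sketch and are not prescribed by
the Executive Order; production provenance schemes such as C2PA rely on
signed manifests and certificate chains rather than a shared secret.

# Illustrative sketch only (assumptions noted above): a keyed digest binding
# content bytes to claimed provenance metadata so that tampering is detectable.
import hashlib
import hmac
import json

SIGNING_KEY = b"hypothetical-issuer-key"  # placeholder; real systems would use PKI


def issue_provenance_tag(content: bytes, metadata: dict) -> str:
    """Compute a tag binding the content's hash to its claimed origin metadata."""
    content_hash = hashlib.sha256(content).hexdigest()
    record = json.dumps({"sha256": content_hash, **metadata}, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, record, hashlib.sha256).hexdigest()


def verify_provenance_tag(content: bytes, metadata: dict, tag: str) -> bool:
    """Recompute the tag; any change to the content or metadata makes it fail."""
    return hmac.compare_digest(issue_provenance_tag(content, metadata), tag)


if __name__ == "__main__":
    clip = b"...bytes of a customer voice recording..."
    meta = {"issuer": "Example Bank", "captured": "2023-11-01"}
    tag = issue_provenance_tag(clip, meta)
    print(verify_provenance_tag(clip, meta, tag))             # True: provenance intact
    print(verify_provenance_tag(b"altered clip", meta, tag))  # False: content changed

The point of the sketch is the workflow a fraud team cares about, namely
recompute, compare, and treat any mismatch as potentially synthetic or
manipulated content; the cryptographic machinery in real standards is more
elaborate but serves the same end.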

Highlighting the risks of synthetic content generally, Vice
President Kamala Harris noted in remarks that she gave at the U.S. Embassy in
London regarding the Future of Artificial Intelligence on
November 1, “when people around the world cannot discern fact
from fiction because of a flood of AI-enabled mis- and
disinformation . . . is that not existential for democracy?”
A Fact Sheet accompanying Vice President Harris’ speech in
London announced that the White House had secured voluntary
commitments from 15 leading AI companies to develop mechanisms for
dealing with synthetic content, but it also recognized that all
nations must “support the development and implementation
of international standards to enable the public to effectively
identify and trace authentic” digital content and to
distinguish it from “harmful synthetic AI-generated or
manipulated” content.

The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.
