As Prepared for Delivery
Introduction
Thank you to the ETA for inviting me to speak here today. My name is Laurie Schaffer, and I serve as the Acting Assistant Secretary for Financial Institutions at the Department of the Treasury. In my current role, I oversee a broad policy portfolio encompassing banks, credit unions, fintechs, the insurance sector, cybersecurity and critical infrastructure protection, community development, and consumer protection.
Since beginning this role, I have observed that the growing prominence of AI touches almost every aspect of my portfolio. AI holds the promise of facilitating innovation and modernization of financial services in product design, distribution, delivery, and cost. These are good things, and we are united in our effort to promote the beneficial uses of technology. At the same time, we must also acknowledge that the use of AI involves potential risks and challenges, including risks related to fairness and privacy.
Treasury has dedicated a substantial amount of time and effort to better understand the implications of developments in AI and their potential to alter our financial environment. Today, I look forward to discussing that growing understanding with you.
I will begin my remarks with a short summary of Treasury’s views about AI broadly and then turn to how we see AI risks and benefits when the technology is applied in financial services, particularly in the consumer lending and insurance spaces. I will then highlight a few of Treasury’s and the Biden-Harris Administration’s key ongoing efforts to evaluate the impact of AI on the financial services sector.
AI and Market Innovation
The use of AI technologies has long been commonplace in the financial services industry. From early rules-based AI models like automated telephone customer service, to automated and algorithmic trading systems, to the machine learning-based systems that assist in fraud detection, to the AI systems that allow us to deposit handwritten checks directly from our phones, financial services companies have implemented emerging technologies to the great benefit of their business models and their customers.
Treasury sees the promise that AI technologies have to assist in creating a more functional, efficient, and accessible financial services industry. In fact, Treasury’s own Bureau of the Fiscal Service implemented an enhanced fraud detection process utilizing AI at the beginning of fiscal year 2023 and has recovered over $375 million as a result of that implementation.
Recent advances in computing capacity and the latest developments in AI, such as generative AI, represent a dramatic step up in AI capabilities. New AI models can ingest a wide range of data, generate personalized content or services based on that data, and self-adjust and automate decision-making to a greater degree. All of these new capabilities, when applied to the sector, could create opportunities to make financial services less costly and easier to access.
Policymakers are not starting from zero when thinking about how best to balance the risks and opportunities presented by advancements in AI technologies. Policymakers have experience with changing technologies and have developed regulatory frameworks focused on building guardrails for the sector regardless of the underlying technology used. I believe that this mindset of “technology-neutral” regulation continues to guide policymakers as we look ahead to the next generation of AI technologies.
I’ve articulated some of the promises of AI, but I’d like to dedicate the next few minutes to discussing its potential risks.
Risks of AI in Financial Services
When thinking about the risks associated with AI, I find it helpful to categorize them into three main areas: 1) risks resulting from the design of the AI tool or system; 2) operational and cyber risks; and 3) risks arising from how humans use or deploy AI.
First, risks related to the design of an AI tool or system can stem from the model, meaning the process by which an AI tool translates data into useful predictions, or from the data itself.
In the case of data, the quality and volume of available data will likely determine the quality of the tool’s resulting analysis. Without sufficient high-quality data, a model is unlikely to produce useful or reliable predictions. As a result, AI requires very large amounts of data from different sources, and gathering that data can create problems regarding data quality, availability, and privacy. Additionally, training a model on historical or low-quality data may perpetuate existing bias represented in those data sets due to, for example, a lack of representation of minority populations. It is therefore critically important that the data used for models is clean, robust, and free of bias.
In the case of the model, machine learning systems cannot always provide transparent reasons for why they generate a specific output. This explainability or “black box” problem can produce, and possibly mask, biased or inaccurate results. Those flawed results could, in turn, create a host of risks, including consumer and investor protection issues.
The second category of risk I mentioned has to do with operational and cybersecurity risks related to the adoption of AI. Recent advancements in AI have been made possible by increases in processing power and data storage capacity. With advancements in cloud computing and cloud storage, many financial institutions will likely rely on third-party cloud services to provide processing and storage for their AI systems or will opt to use an AI system developed entirely by a third party. The involvement of third parties could reduce a firm’s visibility into its AI models and potentially contribute to a consolidation of dependencies on AI and cloud service providers. Additionally, as GenAI tools become more widely available and easier to use, larger groups of threat actors can effectively leverage these tools for cyberattacks, fraud, or other adversarial actions.
The third category of risk I’d like to highlight today is risk arising from the interaction between the AI tool and human stakeholders. Even well-designed AI systems, if misused, can produce incorrect and potentially harmful outcomes. It is important for the individual or group utilizing an AI tool to understand the underlying assumptions of that specific AI model. In addition, the adoption of AI technologies may tempt businesses to rely too heavily on the outputs of these systems, particularly in spaces where speed is key and opportunities for human intervention are limited. When considering these situations, we must remain aware that humans are ultimately responsible for the output of these models, and risk-mitigation efforts should account for that.
I will now move on to how the AI risks and opportunities I mentioned manifest in particular contexts. As you will see, many of these risks cut across different sectors and, as a result, require significant attention and monitoring.
AI in Consumer Finance
Beginning with consumer finance, the use of AI for credit underwriting and scoring has received significant attention recently and serves as a useful example of the potential benefits and risks of AI when used in consumer financial products.
The CFPB has indicated that around 45 million consumers lack a credit score from one of the major credit reporting agencies. Traditional credit scoring methods may exclude potentially creditworthy borrowers because they rely on a static range of data inputs and, in part, because reliance on historical data may perpetuate bias. As we have seen over the past several years, new market entrants such as fintech firms have utilized AI- or machine learning-driven technologies that look at a wider range of data, such as cash flow data, to arrive at different credit analyses that could broaden access to consumer credit. The Department of the Treasury’s November 2022 report, “Assessing the Impact of New Entrant Nonbank Firms on Competition in Consumer Finance Markets,” noted this phenomenon and cited research indicating that these models may indeed improve access, providing cheaper funding to a broader range of customers, including minority borrowers, and perform at least as well as traditional credit scoring in predicting behavior.
At the same time, because these alternative credit scoring tools rely on a greater amount and variety of consumer data, they may also present increased risks related to bias, consumer privacy, and security. First, historical data, whether used in traditional modeling or in AI, may embed historically biased outcomes. A lender’s reliance on such historical data may be particularly problematic if the reasoning of a model is not clear and if a decision may result in a consumer being wrongfully denied service or credit.
Second, machine learning models use many more variables to assess creditworthiness than traditional methodologies, and much of the data used to develop these models is nonfinancial. Treasury’s November 2022 fintech report that I mentioned earlier highlighted how this use of data within consumer lending could pose broad societal surveillance and privacy risks. It observed that including alternative data on consumers’ non-financial behavior in financial decision-making could subject growing amounts of consumer behavior to commercial surveillance, which could have far-reaching and unpredictable effects.
Finally, the “black-box” problem of some AI tools that I mentioned earlier also raises concerns regarding compliance with existing fair lending laws if disparities are reflected in the data and/or an institution cannot explain the basis for an adverse underwriting decision.
As policymakers look more carefully at the risks and benefits of these products, they will need to analyze how these products perform in comparison to traditional credit analysis techniques and will also need to ensure that these new processes remain properly accountable to consumers.
AI in Insurance
I’d like to now turn to AI’s effects on the insurance industry. In surveys conducted between 2022 and 2023, the NAIC’s Big Data and Artificial Intelligence Working Group found that 88 percent of surveyed auto insurers and 70 percent of surveyed homeowners insurers use, plan to use, or plan to explore using AI or machine learning. With the use of AI in insurance expanding, it is critical to ensure that the application of these technologies does not perpetuate unequal treatment.
AI has the potential to increase the efficiency and lower the cost of nearly every aspect of the insurance business, including claims handling, underwriting, customer service, marketing, fraud detection, and ratings. Such benefits could reduce insurance protection gaps by improving the availability, affordability, and accessibility of insurance. On the other hand, a lack of transparency and explainability in both AI models and the data fed into predictive models makes it difficult to know whether decisions reflect accurate risk assessments or perpetuate biases in decision-making processes and outcomes.
Life insurers, for example, are increasingly using AI to accelerate their underwriting processes. Concern has been expressed that AI could quickly identify an applicant’s health risks and corroborate information in lab reports or tests, skipping the medical exams and the slow back-and-forth exchange of information typical of a traditional underwriting process. We are not suggesting that any company is doing this now, but it is an area of concern: if an AI model is trained on data with biases, it is likely to perpetuate them in its decision-making process. For example, algorithmic bias in AI may unfairly result in higher premiums for a specific racial group with historically higher mortality rates, even if individual risk factors differ.
Insurers and regulators can take important steps to better protect consumer privacy through adherence to consumer notification and consent requirements, data retention and deletion policies, data sharing agreements, and data security protocols. Further, by requiring transparency in AI algorithms—through data source tracking, audit trails, or other methods—insurers and regulators can better assess AI systems’ accuracy, fairness, and suitability.
Administration Efforts on AI
Finally, I’d like to conclude my remarks today with a few examples of Treasury’s work to understand and address the risks of AI while fostering responsible innovation in the financial sector.
As many of you know, in 2022 the White House released a Blueprint for an AI Bill of Rights; in January 2023, the National Institute of Standards and Technology released its AI Risk Management Framework; and in October 2023, President Biden issued a landmark Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to establish new standards for AI use. This Executive Order supports a regulatory approach to AI that is intended to ensure its safety and security; promote responsible innovation, competition, and collaboration; advance equity; protect American workers; protect the interests, privacy, and civil liberties of American consumers; and advance American technological and economic leadership.
Following the publication of this Executive Order, Treasury released its AI report in March of this year, “Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector.” This report outlines best practices and a series of next steps to address immediate AI-related operational, cybersecurity, and fraud challenges, and it is intended to catalyze further work and collaboration that responds to the risks I’ve mentioned today.
For example, the report identifies and explores recommendations for ensuring high-integrity data. Treasury met with a wide array of organizations and financial firms to collect feedback on a host of topics, including data. Several of those firms stated that they would benefit from standardized practices for mapping data supply chains. They also proposed a standardized description, similar to a nutrition label, for vendor-provided GenAI systems that would clearly identify what data was used to train a model, where that data came from, and how any data submitted to the model will be incorporated. Treasury is now working with the financial sector, the National Institute of Standards and Technology, and the Cybersecurity and Infrastructure Security Agency to determine whether such a solution should be explored further.
The report also articulates the need for an appropriate, comprehensive framework for testing and auditing black-box AI solutions. This framework would help guide firms through the critical steps of assessing inputs, outputs, the training of models, and the underlying models themselves. Such a framework should be repeatable and scalable to firms of varied sizes and complexity, and the report recommends that the financial sector collaborate to align with frameworks like NIST’s AI Risk Management Framework and to create sector-specific standardized strategies for managing AI-related risk.
While the AI Cybersecurity Risk Report stands as a substantial step toward responding to the President’s Executive Order, Treasury’s work does not stop there. Treasury is continuing its stakeholder engagement to improve our understanding of AI in financial services. Treasury issued a public request for information seeking comments from financial institutions, technology companies, advocates, consumers, academics, and other stakeholders on the uses and potential impacts of AI in the financial services sector and on the opportunities and risks presented by new developments and applications of AI in the sector.
Through this RFI, Treasury seeks to deepen its understanding of the uses, risks, and opportunities of AI, including potential obstacles to facilitating the responsible use of AI within financial institutions, the extent of the impact on consumers and other end users of financial institutions’ use of AI, and potential gaps in legislative, regulatory, and supervisory frameworks related to AI risk management and governance.
Treasury received a broad range of perspectives on this topic and is currently reviewing the more than 100 comments we received, including a comment from the ETA. Thank you to all in this room whose organizations submitted comments responding to this request; these insights will inform our work moving forward.
Finally, on insurance specifically, the Federal Insurance Office conducted a roundtable on AI in the insurance sector just this past Tuesday. The insights gathered at this forum will guide the Office as it continues to monitor the sector and gather information on potential best practices.
Closing
Let me close by thanking you again for having me here today. Events like this one are critical to moving the dialogue on AI forward and to ensuring that agencies are prepared for the days ahead. The Treasury Department will continue to monitor the use of AI in financial services, and we are committed to the responsible innovation and appropriate regulation of technologies that are accurate and fair, protect privacy and security, and advance the financial well-being of the American people.
Thank you.
###