On July 10, 2025, the Massachusetts Attorney General (AGO) entered into an Assurance of Discontinuance (AOD) with a private student loan lender (the Company), resolving allegations that the Company's underwriting practices violated the state's unfair or deceptive acts or practices (UDAP) law and federal fair lending laws. The AOD imposes sweeping reforms on the Company's use of artificial intelligence (AI) and algorithmic models in credit underwriting. The settlement underscores growing regulatory scrutiny of AI-based decision-making in consumer finance at the state level and provides a roadmap for governance expectations around automated lending systems.
At the center of the AGO's investigation were the Company's algorithmic and judgmental underwriting practices. The AGO alleged that the Company failed to prevent disparate outcomes in both types of underwriting, relied on variables in its AI models that allegedly produced discriminatory effects, and issued adverse action notices that were inaccurate, and that, in doing so, the Company engaged in an unfair or deceptive act or practice.
Breaking Down the AI Allegations: From Inputs to Outcomes
The AGO's investigation revealed that the Company used artificial intelligence models—defined in the AOD as "machine-based systems that...make predictions, recommendations, or decisions influencing lending outcomes"—to automate loan approval and pricing decisions. The models operated in three stages: "prescreen decline," "quick decline," and "risk score," with each stage applying algorithmic assessments and "Knockout Rules" to screen applicants. In scrutinizing all three stages of the underwriting model's operation, the AGO underscored that compliance obligations apply not only to final underwriting decisions but to every automated stage that influences who advances in the application pipeline. Companies can reasonably take from this action that the failure to have compliant models (including those using AI) at all automated stages could result in a UDAP action.
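The staged structure described in the AOD can be pictured as a simple decision pipeline. The following is a hypothetical sketch only: the stage names ("prescreen decline," "quick decline," "risk score") come from the AOD, but every rule, threshold, and data field below is invented for illustration and does not reflect the Company's actual model.

```python
from dataclasses import dataclass

@dataclass
class Application:
    # Illustrative fields only; the AOD does not disclose the actual model inputs.
    credit_score: int
    school_accredited: bool
    debt_to_income: float

# Hypothetical "Knockout Rules": hard filters that remove an applicant
# from the pipeline before any risk scoring occurs.
KNOCKOUT_RULES = [
    lambda app: not app.school_accredited,  # a prescreen-decline-style rule
    lambda app: app.credit_score < 600,     # a quick-decline-style rule
]

def underwrite(app: Application) -> str:
    """Three-stage pipeline mirroring the AOD's description: knockout
    screens first, then a scoring model for the applicants who survive."""
    # Stages 1-2: any triggered knockout rule declines the applicant outright.
    for rule in KNOCKOUT_RULES:
        if rule(app):
            return "decline"
    # Stage 3: an invented risk score prices the remaining applicants.
    risk = app.debt_to_income * 100 - app.credit_score / 10
    return "approve" if risk < 0 else "refer"
```

The compliance point the AGO drew is visible in the structure itself: an applicant knocked out at stage one never reaches the scoring model, so disparate-impact testing confined to the final stage would never see those outcomes. Each filter influences who advances and must be tested on its own.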
Key AI-related allegations included:
The matter was resolved without litigation. The Company neither admitted nor denied the allegations in the AOD, including those detailed above. The Company agreed to pay $2.5 million, adopt extensive compliance measures, and submit periodic compliance reports to the AGO over a multi-year period, with the AGO retaining the right to request raw data and documentation.
Algorithmic Governance Mandates: The Compliance Blueprint
The AOD imposes an expansive governance framework for the Company's AI underwriting practices. It reflects a blueprint that is increasingly emerging from both federal and state regulators. Companies that engage in automated decision-making or incorporate AI into their underwriting processes should consider adopting similar strategies to mitigate compliance risk. A variation of this framework may also be valuable for companies that license models developed by third parties. Key elements of the framework include:
Implications for Fintech and AI in Lending
The AOD highlights a growing trend among regulators, particularly at the state level, to hold lenders accountable for the outputs of AI systems, regardless of intent. It is a reminder that reliance on opaque or "black box" models may expose institutions to risk if those models cannot be audited or explained. For fintechs and traditional lenders alike, the settlement underscores the importance of several fundamental compliance controls:
Looking Ahead: AI Risk Management as a Regulatory Imperative
This settlement adds to a growing body of regulatory actions addressing AI in consumer finance. It also aligns with broader initiatives to ensure that AI systems are fair and transparent and that companies are held accountable for their use of these systems. Given the steep monetary penalties and long-term regulatory oversight that more and more states are employing, companies deploying AI in credit underwriting should closely monitor evolving expectations and consider proactive enhancements to their AI governance programs to mitigate compliance risk.
Mark D. Metrey is an associate in the Washington, D.C., office of Hudson Cook, LLP. He can be reached at 202.715.2009 or by email at mmetrey@hudco.com.
Copyright © 2025 CounselorLibrary.com, LLC. All rights reserved.