Actions: HPREF [2] HCPAC/HJC-HCPAC [3] DP-HJC [9] DNP-CS/DP
Scheduled: Not Scheduled
The House Judiciary Committee substitute for House Bill 60 (HJCcs/HB 60) enacts the Artificial Intelligence Act, establishing a regulatory framework for artificial intelligence (AI) systems used in consequential decision-making. The bill imposes disclosure, documentation, risk assessment, and consumer protection requirements on AI developers and deployers, with enforcement authority granted to the State Department of Justice. It requires developers to provide transparency regarding AI system training data, risks of algorithmic discrimination, and intended system use. Deployers of high-risk AI systems must conduct impact assessments, provide notice to consumers, and allow for appeals of adverse decisions. The bill includes exemptions for financial institutions, federal compliance cases, and certain AI applications, ensuring that only systems significantly impacting consumer rights fall within its scope. The Act takes effect July 1, 2026.

Legislation Overview:
House Bill 60 (HB 60) introduces comprehensive regulations to address the ethical, legal, and social implications of AI systems used in consequential decision-making contexts, such as education, employment, healthcare, and housing. It defines key terms to clarify the scope and applicability of the bill.

Definitions and Scope
An Artificial Intelligence System refers to any machine-based system that generates outputs, such as decisions, predictions, or recommendations, based on the data it processes. A High-Risk Artificial Intelligence System is a specific type of AI system that significantly influences consequential decisions affecting consumers’ access to essential services, such as education, employment, financial services, healthcare, housing, or legal services. Algorithmic Discrimination occurs when an AI system results in differential treatment or adverse impacts based on protected characteristics, including age, gender, race, or disability. A Consequential Decision is a material decision that affects a consumer’s access to, or the terms of, essential services, including education, healthcare, and employment opportunities. A Developer is an individual or entity that creates or substantially modifies an AI system, while a Deployer is an individual or entity that implements and uses an AI system in practice.

Developer Obligations
Developers of high-risk AI systems must fulfill several responsibilities to ensure transparency and accountability. They must use reasonable care to protect consumers from foreseeable risks of algorithmic discrimination. Developers must also provide detailed documentation to deployers, including the system’s purpose, the types of data used for training, risk mitigation strategies, and evaluations of system performance and bias. Additionally, they must disclose high-risk AI systems in a public-use inventory, detailing how risks of algorithmic discrimination are managed. If a risk incident, such as algorithmic discrimination, is discovered, developers must notify the State Department of Justice and all recipients of the system within 90 days. Finally, developers must post clear, regularly updated statements on their websites summarizing their AI systems and outlining their risk management practices.

Deployer Obligations
Deployers of high-risk AI systems must implement and maintain risk management policies and programs designed to address known and reasonably foreseeable risks of algorithmic discrimination. They must conduct annual impact assessments for each high-risk AI system, covering the system’s intended uses, risk mitigation strategies, and performance evaluations, as well as the demographics of the data sets used for training and testing. Deployers must also notify consumers whenever an AI system is used in consequential decision-making, providing information about the system’s purpose, the data sources used, and an explanation of the decision-making process. Furthermore, deployers must offer consumers opportunities to correct inaccurate data and to appeal adverse decisions, with human review where feasible. Lastly, deployers must retain detailed records of all impact assessments for at least three years following the final deployment of a high-risk AI system.
Consumer Protections
The bill requires clear, accessible, plain-language communication about the use of AI systems, including details about potential data collection and the risks of algorithmic discrimination, so that consumers have a transparent understanding of how AI systems operate and the potential implications of their use. Where AI systems drive adverse decisions, the bill mandates transparency by requiring deployers to inform consumers of the reasons behind those decisions. Consumers must also be given opportunities to correct errors in their data and to appeal adverse decisions, with a process for human review when feasible. Additionally, consumers must be notified when they are interacting with an AI system, unless the nature of the interaction is self-evident, ensuring they are aware of when AI is involved in their interactions and decision-making processes.

Risk Management and Enforcement
Risk incidents, such as instances of algorithmic discrimination, must be promptly disclosed to the New Mexico Department of Justice (NMDOJ). The legislation provides exemptions to protect trade secrets and proprietary information; however, developers and deployers must notify consumers of any withheld information and justify the non-disclosure. NMDOJ is authorized to enforce compliance through measures such as audits, penalties, and injunctions. Additionally, developers and deployers may invoke affirmative defenses if they can demonstrate proactive compliance with risk management standards, including efforts to identify and mitigate risks.

Exemptions and Public Awareness
The Act provides exemptions for trade secrets, proprietary information, and cases involving national security or compliance with federal regulations, protecting sensitive information while maintaining accountability. The legislation also includes provisions to enhance public understanding of AI systems through mandatory disclosures, educational resources, and outreach initiatives that promote awareness and transparency in the use of AI technologies.

Fiscal Impacts
NMDOJ is likely to incur significant costs for hiring staff, developing expertise in AI oversight, and managing compliance programs. Industries adopting AI systems may face increased compliance costs due to documentation, risk mitigation, and reporting requirements. However, these measures aim to reduce the long-term economic and social costs associated with algorithmic discrimination, fostering public trust in AI systems and promoting equitable outcomes in critical sectors like housing, healthcare, and employment.

Current Law:
New Mexico currently lacks a comprehensive regulatory framework for artificial intelligence systems. The state has no state-level AI regulations governing transparency, consumer rights, algorithmic discrimination, or the mitigation of risks associated with bias in AI systems. AI use in consequential decision-making is regulated primarily at the federal level, with oversight from agencies such as the Equal Employment Opportunity Commission (EEOC) for hiring algorithms and the Consumer Financial Protection Bureau (CFPB) for AI-driven credit assessments.

Committee Substitute:
Committee Substitute February 25, 2025 in HJC: HJCcs/HB 60: House Judiciary Committee Substitute for House Bill 60 enacts the Artificial Intelligence Act, establishing a regulatory framework for artificial intelligence (AI) systems used in consequential decision-making. The bill defines high-risk AI systems as those used in consequential decision-making related to education, employment, financial services, health care, housing, insurance, and legal services.

The bill establishes key responsibilities for both AI developers and deployers. Developers must provide documentation to deployers regarding the AI system’s intended use, training data sources, and potential biases. They are also required to conduct impact assessments and disclose risks of algorithmic discrimination, maintain public transparency by posting summaries of their AI systems and risk mitigation strategies online, and notify the State Department of Justice within 90 days of discovering that an AI system has caused algorithmic discrimination. Deployers are responsible for implementing risk management policies to prevent discriminatory outcomes, conducting annual impact assessments, and reassessing systems after significant modifications. Additionally, they must provide clear notice to consumers before an AI system makes or influences a consequential decision about them and offer consumers appeal rights and the ability to correct erroneous AI-driven decisions.

The bill includes exemptions for AI systems that are regulated by federal law or part of federal contracts, used exclusively for cybersecurity, fraud detection, or routine database functions, or employed by financial institutions that meet equivalent federal compliance standards. The New Mexico Department of Justice (NMDOJ) is responsible for enforcing compliance and may bring legal actions against violators. Consumers can file lawsuits for injunctive or declaratory relief if harmed by a non-compliant AI system. The bill establishes a one-year “right to cure” period, allowing developers and deployers to rectify violations before legal enforcement begins.

The HJC substitute significantly revised the bill, introducing three major changes. First, it narrowed the scope of high-risk AI regulation. The original bill applied broadly to any AI system used in consequential decision-making, while the committee substitute excludes financial institutions, cybersecurity applications, and routine AI functions such as spell-checking and database management, limiting oversight to systems that directly affect consumer rights. Second, the substitute establishes a new compliance and exemption framework. The original bill had no structured exemption process for AI systems used under federal regulations or by financial institutions; the substitute creates explicit exemptions for federally regulated AI systems and for financial institutions that meet equivalent compliance standards, avoiding duplicative oversight. Third, the substitute introduces a one-year compliance window before full enforcement. The original bill imposed immediate legal consequences for violations, but the substitute includes a one-year “right to cure” provision, allowing developers and deployers to fix compliance issues before facing penalties, balancing consumer protection with industry feasibility.

Implications
HJCcs/HB 60 balances AI regulation with economic and technological feasibility by limiting its scope to high-risk AI systems while ensuring robust consumer protections.
The inclusion of transparency and disclosure requirements should increase public trust in AI decision-making, particularly in areas where the risks of bias and discrimination are high. The “right to cure” provision helps AI developers and deployers adjust to compliance requirements before facing penalties, reducing legal uncertainty for businesses; however, this delay in enforcement could allow potentially harmful AI practices to continue for an additional year before regulatory oversight takes effect. The bill’s exemption for financial institutions and federally regulated AI systems prevents duplicative regulation, aligning state law with existing federal oversight mechanisms and ensuring that companies already subject to federal AI regulations are not burdened with conflicting compliance requirements. While the exclusion of cybersecurity and fraud detection AI systems makes sense operationally, it raises concerns that some high-risk AI applications might escape scrutiny, especially if they involve indirect consumer decision-making. By setting July 1, 2026, as the effective date, the bill provides ample time for rulemaking and industry adaptation; its success, however, will depend on NMDOJ’s ability to develop clear enforcement guidelines and effectively oversee AI compliance in the state.