State Lawmakers Seek to Regulate Employer Use of AI for Wage Decisions

As employers continue finding new ways to use artificial intelligence (“AI”) tools and software to support business operations, state legislators have taken notice. Specifically, state lawmakers are increasingly scrutinizing employers’ use of AI and automated decision tools to set or influence employee compensation, with the stated aims of curbing potentially discriminatory impacts of algorithmic wage setting and increasing transparency to employees and applicants regarding the use of such technology.

Recent State Legislative Activity

Several states—including California, Colorado, Georgia, and Illinois—introduced bills in 2025 seeking to place parameters on AI-driven compensation decisions. New York and Maryland lawmakers continued the trend in January 2026, introducing bills containing similar restrictions.

While these proposed state laws are not all identical, they share common features. First, they similarly define “automated decision systems” to include systems, software, or processes—including those that rely on machine learning or AI techniques—that are used to assist or replace human decision making. In the employment context, these definitions encompass automated human resources tools and software systems that use predefined rules to process data through algorithms and assist with the performance of human resources functions. These tools could include everything from basic rule-based systems to sophisticated technologies powered by generative AI.

Additionally, the majority of these proposed state laws provide guidance for conduct that would not constitute unlawful use of algorithmic wage setting. These exclusions include, for example, when employers (1) offer individualized wages based on data related to services workers perform; (2) disclose in plain language their use of automated decision systems, including the data considered by the systems and how the systems consider such data, to employees and applicants whose compensation is influenced or determined by these methods; and (3) develop and implement procedures to ensure the accuracy of the data considered by automated decision systems in setting wages.

Legal Risks Associated With AI-Driven Compensation Decisions

The lawmakers advocating for these proposed state laws have emphasized that the unregulated use of AI by employers in compensation decisions may result in discriminatory compensation results. Indeed, employers’ AI-driven compensation decisions may be covered by and actionable under a variety of employment laws, including Title VII of the Civil Rights Act, the Americans with Disabilities Act, the Age Discrimination in Employment Act, the Equal Pay Act, and/or applicable state and local laws.

The nature of automated decision systems already creates unique legal risk for employers, particularly in the context of relying on these systems to render employee compensation determinations. A key challenge employers face when using AI-driven tools in general is the lack of transparency in how the tools arrive at their conclusions or recommendations. While human decision makers can explain the reasoning motivating compensation decisions, it is difficult—and in some instances may be impossible—to discern the reasoning underlying decisions made by certain AI tools. This leaves employers vulnerable to legal challenges regarding compensation decisions rendered by AI tools or software. The scope of potential liability may be amplified if the processes are used to set or influence the compensation of a large number of employees or applicants.

Takeaways for Employers

For now, employers should ensure compliance with applicable federal and state laws that have been enacted or are scheduled to take effect in 2026. This includes, at a minimum, identifying each AI tool currently used in employment decision making and assessing whether those tools are subject to regulation by any state or local laws. Employers also should establish and implement a comprehensive AI policy that outlines internal procedures for using AI, provides required notice to employees and applicants about AI use, and mandates human oversight of AI-driven recommendations.

Looking forward, employers should actively monitor developments in federal, state, and local legislation and agency regulations aimed at governing the use of AI in decisions related to employee compensation and other employment terms. As states move rapidly to establish boundaries for AI’s role in workplace decision-making, employers who proactively audit their AI-related practices and prioritize transparent human involvement in decision-making processes, including compensation decisions, will be better positioned to minimize legal risks and adapt to evolving regulatory requirements.

  • Partner

    Bob’s practice focuses on representing and advising employers in complex labor relations and employment planning and disputes, including trade secrets/non-compete controversies and wage and hour. Bob has obtained numerous ...

  • Associate

    Keenan provides guidance to clients spanning industries—including retail, financial services, energy, manufacturing, transportation, and telecommunications—on a diverse array of employment law issues at every stage of ...

