Gradual AI Rollouts, Early Planning Benefit In-House Counsel, Bloomberg Law

6 Minute Read
May 27, 2025
Publication

Hunton’s Robert Quackenboss examines the challenges companies face with AI and how leaders can integrate the tools into company culture effectively and legally.

Early enthusiasm for artificial intelligence tools such as ChatGPT has been tempered by the realization that using them involves unexpected pitfalls, operational risks, and communications challenges for employers.

Companies find themselves forced to address a variety of risks and hurdles simultaneously and rapidly, as modified and new versions of products come online.

That job often falls to people managers, including in-house counsel, who may find they lack the resources, time, and guidance to integrate this new technology into company culture effectively and in compliance with the law.

Bumps and Barriers

The rapid introduction of AI tools can confront businesses with speed bumps or outright barriers. For example, employers and AI designers report that employees are often reluctant to use AI tools for work, or lack confidence in doing so. This reluctance, which sometimes follows generational patterns, is often compounded by a lack of adequate training resources.

When designers rush to get new AI tools out the door, first-generation user manuals and training materials are often superficial and unclear—if any exist at all. This leaves managers reaching out to the tool designers themselves to seek guidance or troubleshoot glitches. But the nascency of the AI industry means few vendors and tool designers offer hotlines or consultant teams to address this need.

Another common challenge employers report is incomplete and less-than-candid reporting by employees about the scope and extent of their AI use. Employees have shared that they feel using generative AI is “cheating” and would devalue their work product if customers knew the extent to which they used the tool.

Reporting protocols can be difficult to implement when managers drafting them are still learning about the capabilities of the tools themselves. The lack of full and accurate reporting affects performance evaluation as well, as an employee’s skills can’t be accurately assessed or compared to others if managers don’t know the extent of each employee’s use of AI resources.

The rush to onboard new workplace AI has outpaced employers’ awareness and evaluation of its legal risks. In-house counsel frequently learn about new AI tools only during or after onboarding, leaving them to intervene with legal concerns mid-rollout or after the fact.

When employees are asked to draft content with AI, the tool frequently reproduces or draws from the work of some prior author on the subject. That raises concerns of plagiarism and copyright, particularly in environments where content creation is the core business, such as in digital publishing.

A related legal hurdle arises when a labor union represents a workforce or is in discussions about representation. In those instances, the union may protest integrating new work tools as a violation of a collective bargaining agreement, or of the workers’ rights under the National Labor Relations Act.

A prominent area of legal exposure stems from the use of AI tools for recruiting and hiring, and from concerns that those tools introduce unlawful discrimination into the process. Managers frequently find themselves tapping the brakes mid-stream on AI implementation to allow counsel to develop compliance and risk recommendations, negotiate with a union, or revisit risk-sharing provisions in vendor contracts.

Mood Swings

Extreme swings in the national mood about AI regulation have frustrated employers’ efforts to predict state and federal AI regulation and to conform their practices to it.

In the months following AI’s arrival in workplaces, an atmosphere of aggressive regulation emerged, with states and municipalities imposing guardrails around use. The Biden administration encouraged regulation and caution through executive orders and through agencies such as the Equal Employment Opportunity Commission.

As a result, employers designed—and boards approved—compliance policies to conform to what appeared to be uniform national trends toward transparency requirements, bias elimination, validation, and privacy, to name a few.

With the arrival of the second Trump administration, the national trend toward consolidating regulation is yielding to one of deregulation and limited legislation. President Donald Trump rescinded former President Joe Biden’s executive orders on AI regulation and has championed the unshackling of AI technology and opportunity.

State governments appear to be following that lead. Texas legislators recently softened the language in their proposed AI legislation, while bills modeled after the Virginia and California legislation have failed or appear likely to fail in Vermont and New Mexico. Other states, including Georgia, are entertaining legislation that regulates AI less comprehensively.

For employers—particularly those managing regional or national workforces—the collapse of a national consensus on regulation confounds efforts to create a coherent AI compliance strategy.

Human Replacement

Perhaps the greatest challenge for employers is the pace at which AI tools are replacing human-performed tasks, and the extent to which AI will replace some human workers altogether. This will affect the stability, satisfaction, and efficiency of a company’s workforce, as well as efforts to nurture employee culture.

According to the World Economic Forum’s 2025 Future of Jobs Report, 39% of workers’ skill sets will be transformed or become outdated by 2030. While 85% of employers surveyed plan to upskill their workforce over the next five years, 40% plan to reduce staffing, the report stated.

These projections have shaken workers’ sense of stability and security, placing an outsize burden on managers trying to communicate in an honest and reassuring way. New skills training also opens an entirely new workstream for managers already strained by the other demands of AI integration.

Employers are best served by entertaining proposals for AI integration gradually, considering all the implications above, and involving legal counsel in concept discussions before signing vendor contracts.

Onboarding itself can be gradual, with narrow pilot programs to monitor the effect of AI on smaller work groups before companywide rollouts. Managers should begin early internal conversations about resource needs and challenges.


Copyright 2025 Bloomberg Industry Group, Inc. (800-372-1033) www.bloombergindustry.com. Reproduced with permission.

Media Contact

Lisa Franz
Director of Public Relations

Jeremy Heallen
Public Relations Senior Manager
mediarelations@Hunton.com

