AI in Staffing: Legal Risks and Considerations

AI has become an integral part of the staffing industry, offering automation and increased efficiency for recruiters. However, it’s crucial for us to understand the associated legal risks. Scale Funding sat down with Kate Bischoff, Employment Law & HR Consultant, to discuss legal considerations, vendor analysis, and strategic use of AI for recruiting.

Kate discusses five key legal risks and considerations for AI in the staffing industry.

 

    1. The HR tech industry is booming, with a market value of $20 billion, surpassing consumer sectors like weight loss. At the same time, regulation of AI and machine learning has become a significant concern. The National Artificial Intelligence Initiative Act of 2020 provides a legal definition of AI: machine-based systems that make predictions, recommendations, or decisions influencing real or virtual environments. It’s crucial for us to understand AI within this legal framework, because many processes can be automated without necessarily qualifying as artificial intelligence.
    2. AI and analytics tools fall into four main categories: text analytics, audio and video analytics, usage analytics, and predictive modeling. Text analytics includes converting spoken words into text, as seen in closed captioning; interestingly, some systems, such as Microsoft’s, filter live captions by omitting certain words. Video analytics, on the other hand, can analyze video content, enhance resolution, or zoom in on specific scenes, much like on TV crime shows. Usage analytics focus on monitoring user activity, such as keyboard usage and on-screen engagement. Lastly, predictive modeling uses inputs to forecast future outcomes, as depicted in movies like “Captain America: The Winter Soldier” and “Minority Report.”
    3. AI is set to revolutionize every aspect of the employee life cycle, including sourcing, recruiting, onboarding, performance management, off-boarding, and discipline. As a result, we can anticipate new regulations and laws in this area. For instance, Workday, a software vendor specializing in AI for human capital management, recently faced a lawsuit from an applicant claiming discrimination based on race and disability. The applicant alleged that Workday’s AI-powered applicant tracking system played a role in discriminatory practices. Workday denies liability and asserts that it is taking appropriate measures. This case may set a precedent for similar litigation as plaintiffs’ attorneys become more knowledgeable about AI and how it works. It’s therefore crucial for us to stay informed about the impact of AI on HR practices and the evolving legal landscape.
    4. In practical applications, AI and analytics play a significant role in recruiting and employee management. They are used to find candidates, perform background checks, assess employee productivity, identify flight risks, conduct compensation analysis, assist with succession planning, and track the usage of confidential information. However, it’s crucial to consider legal requirements like the Fair Credit Reporting Act (FCRA) when conducting background checks to ensure proper consent and compliance.
    5. If you are using AI to improve your processes and attract more applicants or better candidates, the most important thing to understand is that the employer is always liable for employment decisions. Only rarely could a vendor be held liable or sued directly under the statutes. Here are two questions to ask a vendor:
      • Has the process demonstrated an adverse impact? Ask the vendor to verify that there is no disparate impact on any protected class. Tools exist for this, including adversarial bias systems that test algorithms to make sure they don’t produce biased results. Request that the vendor show you they are doing this analysis (a rough illustration of an adverse impact check follows this list).
      • What validation evidence has been collected to establish the job-relatedness of the algorithm? Without giving away trade secrets, the vendor should be able to tell you what the correlations are and what validation evidence supports them.
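
As a rough illustration of what checking for adverse impact can look like, the sketch below computes selection rates by group and applies the EEOC’s four-fifths (80%) rule of thumb. This is a generic heuristic rather than the specific analysis any particular vendor performs, and the group names and counts are hypothetical.

```python
# Minimal sketch of an adverse impact check using the EEOC "four-fifths" rule.
# Group names and counts are hypothetical; real analyses are more involved.

def selection_rates(outcomes):
    """outcomes maps group -> (number selected, number of applicants)."""
    return {group: selected / applicants
            for group, (selected, applicants) in outcomes.items()}

def adverse_impact_ratios(outcomes):
    """Compare each group's selection rate to the highest group's rate.
    A ratio below 0.8 is a common red flag for disparate impact."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

if __name__ == "__main__":
    hypothetical = {
        "group_a": (48, 120),  # 40% selection rate
        "group_b": (27, 90),   # 30% selection rate
    }
    for group, ratio in adverse_impact_ratios(hypothetical).items():
        status = "review for disparate impact" if ratio < 0.8 else "within the 4/5 guideline"
        print(f"{group}: impact ratio {ratio:.2f} ({status})")
```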

While AI and analytics offer exciting possibilities, there are concerns to address. Bias is inherent in the data used, performance evaluations can be incomplete or inaccurate, and the algorithms themselves may be proprietary and opaque. Discrimination and bias can still arise despite claims that AI is unbiased. Understanding these challenges is essential for responsible and effective AI implementation.

 

Watch the full webinar here: http://getscalefunding.com/tci-insights/ai-in-staffing-legal-risks-and-considerations-webinar/
