AI has become an integral part of the staffing industry, offering automation and increased efficiency for recruiters. However, it’s crucial for us to understand the associated legal risks. Scale Funding sat down with Kate Bischoff, Employment Law & HR Consultant, to discuss legal considerations, vendor analysis, and strategic use of AI for recruiting.
Kate discusses five key legal risks and considerations when it comes to AI and the staffing industry.
The HR Tech industry is booming, with a market value of $20 billion, surpassing consumer sectors like weight loss. However, the regulation of AI and machine learning has become a significant concern. In 2020, the National Artificial Intelligence Initiative Act was passed, providing a legal definition of AI as machine-based systems that make predictions, recommendations, or decisions to influence real or virtual environments. It’s crucial for us to understand AI within this legal framework, as there are a lot of things that can be automated but wouldn’t necessarily be considered artificial intelligence.
The four main categories of AI and analytic tools are text analytics, audio and video analytics, usage analytics, and predictive modeling. Text analytics involves converting spoken words into text, as seen in closed captioning. Interestingly, some AI systems, such as Microsoft’s, filter this output, omitting certain words during live captioning. Video analytics, on the other hand, can analyze video content and enhance resolution or zoom in on specific scenes, like what we see on TV crime shows. Usage analytics focuses on monitoring user activity, such as keyboard usage and on-screen engagement. Lastly, predictive modeling uses inputs to forecast future outcomes, as depicted in movies like “Captain America: The Winter Soldier” and “Minority Report.”
AI is set to revolutionize every aspect of the employee life cycle, including sourcing, recruiting, onboarding, performance management, off-boarding, and discipline. As a result, we can anticipate new regulations and laws concerning this topic. For instance, Workday, a software vendor specializing in AI for human capital management, recently faced a lawsuit from an applicant claiming discrimination based on race and disability. The applicant alleged that Workday’s AI-powered applicant tracking system played a role in discriminatory practices. Workday denies liability and asserts that it is taking appropriate measures. This case sets a precedent for similar litigation as plaintiff attorneys become more knowledgeable about AI and how it functions. Therefore, it’s crucial for us to stay informed about the impact of AI on HR practices and the evolving legal landscape.
In practical applications, AI and analytics play a significant role in recruiting and employee management. They are used to find candidates, perform background checks, assess employee productivity, identify flight risks, conduct compensation analysis, assist with succession planning, and track the usage of confidential information. However, it’s crucial to consider legal requirements like the Fair Credit Reporting Act (FCRA) when conducting background checks to ensure proper consent and compliance.
If you are using AI to improve your processes and attract more applicants or better candidates, the most important thing to understand is that the employer is always liable for employment decisions. Very rarely could a vendor be held liable or sued under the statutes directly. Here are two questions to ask a vendor:
Has the process demonstrated an adverse impact? Ask them to verify that there is no disparate impact on any protected class. Tools exist, including adversarial bias-testing systems that probe algorithms to confirm they don’t produce biased results. Ask the vendor to show you that they’re performing this analysis.
What validation evidence has been collected to establish the job-relatedness of the algorithm? Without giving away trade secrets, they should be able to tell you what the correlations are and the validation evidence to support those correlations.
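To make the adverse-impact question concrete: one common yardstick (not named in the interview, but standard in employment-selection analysis) is the EEOC’s “four-fifths rule,” which flags a potential problem when the selection rate for any group falls below 80% of the rate for the highest-selected group. The sketch below uses hypothetical applicant counts; the function name and data are illustrative, not part of any vendor’s actual tooling.

```python
def adverse_impact_ratio(selections):
    """Compute per-group selection rates and the four-fifths-rule ratio.

    selections maps a group label -> (number selected, number of applicants).
    Returns (rates, ratio), where ratio = lowest rate / highest rate.
    A ratio below 0.8 is conventionally treated as evidence of adverse impact.
    """
    rates = {group: selected / applicants
             for group, (selected, applicants) in selections.items()}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Hypothetical screening results: 48 of 80 applicants in one group advanced,
# versus 12 of 40 in a comparison group.
rates, ratio = adverse_impact_ratio({"group_a": (48, 80), "group_b": (12, 40)})
print(rates)            # {'group_a': 0.6, 'group_b': 0.3}
print(round(ratio, 2))  # 0.5 -> below 0.8, so this result would warrant scrutiny
```

A vendor’s bias analysis will be more sophisticated than this (statistical significance tests, intersectional groups), but asking how their numbers compare against a threshold like this one is a reasonable starting point.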
While AI and analytics offer exciting possibilities, there are concerns to address. Bias is inherent in the data used, performance evaluations can be incomplete or inaccurate, and the algorithms themselves may be proprietary and opaque. Discrimination and bias can still arise despite claims that AI is unbiased. Understanding these challenges is essential for responsible and effective AI implementation.