Can an Algorithm Fire You? New AI Employment Laws Protecting Workers on Leave
You take FMLA leave for surgery. You follow every rule. When you come back, your manager tells you your position has been "eliminated." What nobody mentions is that an automated system flagged you as a low-productivity employee because it counted your protected medical absences against you. This is already happening, and three states have passed new laws to stop it. Here is what you need to know.
How AI Is Already Making Employment Decisions About You
If you work for a mid-size or large employer, there is a good chance that automated systems are already influencing decisions about your career. These tools operate behind the scenes, and most employees never know they exist.
- Attendance scoring algorithms automatically track absences and generate "reliability scores." If the system does not distinguish between FMLA-protected leave and unexcused absences, your score drops every time you see your doctor.
- Productivity tracking software monitors keystrokes, mouse movements, email volume, and task completion rates. If you take intermittent leave for a chronic condition, your output numbers look lower than your peers, even though the gap is entirely due to protected absences.
- Layoff selection algorithms rank employees by performance metrics and generate recommended termination lists. If the algorithm uses attendance or productivity data that includes FMLA-protected time, it systematically disadvantages workers with disabilities or medical conditions.
- Hiring screening tools filter resumes and rank candidates before a human ever sees them. These systems can penalize employment gaps caused by medical leave, flag disability-related accommodations in application responses, or use proxies (like zip code or graduation year) that correlate with disability status.
The problem is not that employers are intentionally programming these systems to discriminate. The problem is that algorithms trained on historical data will reproduce existing patterns of bias unless they are explicitly designed not to. And most are not.
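To make that failure mode concrete, here is a minimal sketch of an attendance score that filters protected leave out before scoring. The category labels and field names are hypothetical (real HR systems code absences their own way); the point is only that the filtering has to happen upstream of the metric, or the score penalizes every FMLA day.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical category labels; real HR systems use their own coding schemes.
PROTECTED_CATEGORIES = {"fmla", "ada_accommodation", "workers_comp"}

@dataclass
class Absence:
    day: date
    category: str  # e.g. "unexcused", "pto", "fmla"

def reliability_score(absences: list[Absence], days_scheduled: int) -> float:
    """Fraction of scheduled days worked, ignoring protected leave.

    A naive implementation would count len(absences) directly, so every
    FMLA day would lower the score. Filtering protected categories first
    keeps leave status out of the metric.
    """
    countable = sum(1 for a in absences if a.category not in PROTECTED_CATEGORIES)
    return 1.0 - countable / days_scheduled

# Example: 2 unexcused absences plus 10 FMLA days over 200 scheduled days.
absences = (
    [Absence(date(2025, 3, d), "unexcused") for d in (3, 17)]
    + [Absence(date(2025, 4, d), "fmla") for d in range(1, 11)]
)
print(reliability_score(absences, 200))  # 0.99
```

A naive version that counts every absence would score this same employee 0.94 instead of 0.99, pushing them down any ranking built on the metric.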
The New State Laws: Colorado, Illinois, and California
Three states have moved ahead of the federal government to regulate AI in the workplace. Each law takes a different approach, but they share a common goal: making sure humans, not algorithms, are accountable for employment decisions.
Colorado's Artificial Intelligence Act is the broadest of the three. It targets "high-risk AI systems" used in "consequential decisions," including hiring, firing, promotion, and benefits allocation.
- Employers must notify applicants and employees when AI is being used before a consequential decision is made.
- If an AI system contributes to an adverse decision, the employer must explain the AI's role, the main reasons for the decision, and give the affected person a chance to correct inaccurate data the AI relied on.
- Employers must implement a risk-management program, conduct impact assessments, and report any discovered algorithmic discrimination to the Colorado Attorney General within 90 days.
Illinois amended its Human Rights Act to make it a civil rights violation to use AI in recruitment, hiring, or promotion if the AI discriminates based on protected characteristics (or uses proxies like zip codes that correlate with protected status).
- Employers must notify employees when AI is used in employment decisions.
- The AI Video Interview Act amendments (effective February 2026) require explicit written consent before using AI to analyze video interviews. Candidates cannot be disqualified for refusing AI analysis.
- Candidates have the right to request human review of their interview, and the human's decision overrides the AI's recommendation.
California's Civil Rights Council finalized regulations under the Fair Employment and Housing Act (FEHA) that explicitly cover automated decision systems (ADS).
- Employers with 5 or more employees cannot use ADS that produce discriminatory outcomes, even if the discrimination is unintentional.
- Employers are legally liable for discriminatory results even if a third-party vendor built the AI tool.
- The state requires employers to maintain records of algorithmic data used in employment decisions for at least four years.
Mobley v. Workday: The Lawsuit That Could Change Everything
Filed in 2023, Mobley v. Workday is a federal class action lawsuit alleging that Workday's AI screening tools discriminate against job applicants based on race, age, and disability. The case has produced a series of rulings that are reshaping how courts treat AI vendors:
- July 2024: A federal court ruled that Workday can be held liable as an "agent" of the employer. This was a major ruling because it means third-party AI vendors, not just the employers who use their products, can face direct discrimination claims under federal law.
- May 2025: A federal judge granted preliminary certification of a nationwide collective action under the Age Discrimination in Employment Act (ADEA) for applicants over 40 who were allegedly screened out by Workday's software.
- July 2025: The class was expanded to include applicants processed by Workday's "HiredScore" AI screening features.
The significance of this case goes beyond Workday. If the court ultimately finds that AI vendors can be held liable for discriminatory outcomes, every company that builds or sells hiring software will need to prove their algorithms do not systematically exclude protected groups, including people with disabilities and medical conditions.
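One concrete test for that kind of systematic exclusion already exists: the EEOC's "four-fifths" rule of thumb from the Uniform Guidelines on Employee Selection Procedures, under which a group's selection rate below 80% of the most-selected group's rate is treated as preliminary evidence of adverse impact. Here is a minimal sketch; the group labels and pass-through counts are illustrative only, not figures from the Mobley case.

```python
def adverse_impact_ratios(selected: dict[str, int],
                          applicants: dict[str, int]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate.

    Under the EEOC's four-fifths rule of thumb, a ratio below 0.8 is
    treated as preliminary evidence of adverse impact.
    """
    rates = {group: selected[group] / applicants[group] for group in applicants}
    top_rate = max(rates.values())
    return {group: rate / top_rate for group, rate in rates.items()}

# Hypothetical screening outcomes, split at the ADEA's over-40 line.
print(adverse_impact_ratios(
    selected={"under_40": 300, "over_40": 90},
    applicants={"under_40": 1000, "over_40": 500},
))
# {'under_40': 1.0, 'over_40': 0.6} -- 0.6 is well below the 0.8 threshold.
```

The four-fifths rule is a screening heuristic, not a legal verdict, but a vendor that cannot produce numbers like these for its own tools will struggle to defend them.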
How Algorithms Target Workers on Leave
The intersection of AI and medical leave creates a specific, dangerous problem. Here is how it plays out:
An employer uses an automated system to generate layoff lists based on productivity and attendance metrics. The algorithm is not programmed to exclude FMLA-protected absences or ADA-mandated medical leave from its calculations. Employees who took protected leave appear as "low performers" or "high absenteeism risks" in the system's output.
When layoffs happen, those employees show up at the top of the termination list. A manager reviews the list, sees the metrics, and approves the terminations without ever realizing the numbers were skewed by legally protected leave.
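Here is a minimal sketch of what the fix looks like, using hypothetical field names: the metric has to normalize by the days an employee was actually available, not the full review period, before any ranking happens.

```python
import math

def tasks_per_available_day(tasks_completed: int,
                            days_scheduled: int,
                            protected_leave_days: int) -> float:
    """Productivity rate over the days the employee could actually work.

    Dividing by days_scheduled makes anyone on intermittent FMLA leave
    look less productive than peers; dividing by available days
    compares like with like.
    """
    available = days_scheduled - protected_leave_days
    if available <= 0:
        return math.nan  # the whole period was protected leave: nothing to rate
    return tasks_completed / available

# Two employees doing identical daily work, one with 40 days of FMLA leave:
print(tasks_per_available_day(180, 200, 0))   # 0.9
print(tasks_per_available_day(144, 200, 40))  # 0.9 -- the raw rate would be 0.72
```

Ranked on the raw rate, the second employee looks 20% less productive and lands on the layoff list; ranked on the adjusted rate, the two are identical.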
The EEOC and Department of Justice have also issued joint guidance titled "The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees." That guidance identifies three primary ways AI tools can violate the ADA:
- Unlawful screening out: tools that reject candidates because a disability prevents them from meeting an algorithmic metric, such as keystroke tracking that penalizes someone with a motor impairment, or video analysis that flags neurodivergent eye contact patterns.
- Failure to accommodate: employers must provide reasonable accommodations for algorithmic assessments, including waiving the AI tool entirely if it cannot accurately measure a candidate's actual ability to do the job.
- Prohibited inquiries: AI cannot be used to indirectly ask disability-related questions or conduct medical examinations; personality assessments or behavioral analysis tools that screen for mental health conditions may violate this rule.
Your Right to Human Review
One of the most significant changes in the new state laws is the right to have a human being, not software, make the final call on your employment.
| State | Human Review Rights | Effective Date |
|---|---|---|
| Illinois | Right to request human review of AI video interview assessments. Human decision overrides the AI. | February 2026 |
| Colorado | Right to explanation of AI's role in adverse decisions. Right to correct inaccurate data used by the algorithm. | June 2026 |
| California | Employers must maintain human oversight over automated decision systems. Record-keeping required for 4+ years. | October 2025 |
Even if you do not live in one of these states, federal law still applies. If an AI system produces a discriminatory outcome, the employer is liable under the ADA, Title VII, or ADEA whether or not a state AI law exists. The new state laws give you additional tools to challenge the decision.
What to Do If You Suspect an Algorithm Flagged You
If you were terminated, passed over for promotion, or not hired, and you suspect an automated system played a role, here are your next steps:
1. Request a written explanation. Ask your employer (or former employer) for the factors that led to the decision, and ask specifically whether any automated system, algorithm, AI tool, or scoring model was involved. In Colorado, they are legally required to provide this information for adverse decisions after June 2026.
2. Document your protected leave. Gather records of all FMLA leave taken, ADA accommodation requests, disability-related absences, or any other protected activity. If your attendance or productivity was affected by legally protected leave, that documentation is critical for showing the algorithm used tainted data.
3. File a complaint with the right agency. Depending on your situation, you can file with the DOL (for FMLA violations), the EEOC (for ADA or Title VII discrimination), or your state civil rights agency. In Colorado, you can also report algorithmic discrimination directly to the Attorney General. The EEOC has made AI discrimination a strategic enforcement priority.
4. Talk to an employment attorney. AI discrimination cases are complex. An attorney can help you request discovery about the employer's algorithmic systems, subpoena the vendor's data, and build a case based on disparate impact theory. Many employment lawyers offer free initial consultations and work on contingency.
Frequently Asked Questions About AI and Employment Decisions
Can my employer legally use AI to make decisions about my job?
Yes, but new state laws restrict how. Colorado's AI Act (effective June 2026) requires notice, explanation, and data correction rights. Illinois and California have similar protections. Federal anti-discrimination laws also apply, meaning the employer is liable if the AI produces discriminatory outcomes.
What if an algorithm counts my FMLA leave against me?
If an automated system counts your FMLA-protected absences against you, that is illegal FMLA interference under 29 CFR § 825.220. Your employer is liable regardless of whether a human or an algorithm made the calculation. Many AI systems are not programmed to exclude protected leave, which creates serious legal exposure for employers.
What is Mobley v. Workday?
Mobley v. Workday is a federal class action alleging that Workday's AI hiring tools discriminate based on race, age, and disability. A 2024 court ruling found Workday could be liable as an "agent" of the employer. A nationwide ADEA collective action was preliminarily certified in 2025, and the case could set nationwide precedent for AI vendor liability.
Do I have a right to human review of an AI decision?
In some states, yes. Illinois requires human review of AI video interview assessments. Colorado requires explanation and data correction rights for adverse decisions. California requires human oversight of automated decision systems. Even without a specific state law, federal anti-discrimination statutes apply to all AI-driven employment decisions.
What should I do if I suspect an algorithm was behind my termination?
Request a written explanation of the factors behind your termination. Ask specifically whether any automated system or AI tool was involved. Document all protected activity (FMLA leave, ADA accommodations). File a complaint with the DOL or EEOC. Consult an employment attorney who handles AI discrimination cases.
Know Your Leave Rights
Whether your employment decisions are made by a human or an algorithm, your federal and state leave protections still apply. Our free rights check tool evaluates your situation in a few minutes. No data is stored.
Check Your Rights Now