In December 2025 and again in April 2026, Chinese courts ruled that employers cannot legally terminate workers simply because artificial intelligence can now perform their jobs more cheaply or efficiently. In the latest case, the Hangzhou Intermediate People’s Court held that a company’s decision to automate a senior quality-assurance role using large language models did not constitute lawful grounds for dismissal under China’s Labor Contract Law. The employer had demoted the worker, reduced his salary by 40%, and then fired him after he rejected reassignment. The court rejected the argument that AI adoption amounted to an “objective circumstance” making the contract impossible to perform. Instead, it treated automation as a business choice—one that does not erase employer obligations to workers.
China remains one of the world’s fastest-moving AI adopters, embedding AI across logistics, manufacturing, digital infrastructure, and services. Yet its courts are drawing legal boundaries around how companies may deploy automation without destabilising labour markets. Businesses worldwide are increasingly consulting AI law firms and lawyers to understand the legal implications of AI adoption and workforce restructuring.
This undercuts a central policy argument often made in the United States: that stronger AI regulation or labour safeguards would allow China to outcompete America technologically. Instead, China is demonstrating that it is possible to pursue rapid AI integration while still constraining the social costs of labour displacement.
China’s motivations are not purely worker-centric. The country’s economic and political model depends heavily on social stability. With youth unemployment, slowing growth, weak consumer demand, and mounting debt already creating economic fragility, unchecked AI-driven layoffs could threaten broader macroeconomic equilibrium. In this context, limiting AI-based dismissals is as much about preserving governance legitimacy as it is about protecting workers. Many organisations are now seeking guidance from the best technology lawyers to evaluate how emerging AI regulations may impact employment and operational decisions.
Globally, however, China remains an outlier.
Regulations for AI-Driven Layoffs
European Union
In the European Union, labour and AI governance frameworks are significantly more protective than those in the U.S., but they fall short of outright banning AI-related dismissals. Under GDPR Article 22, workers have the right not to be subject to decisions based solely on automated processing where those decisions significantly affect them, which requires meaningful human oversight and avenues for challenge. Meanwhile, the EU AI Act classifies many employment-related AI systems, including hiring, performance monitoring, and termination tools, as “high-risk,” imposing strict compliance, transparency, and worker consultation requirements. Employers in many EU states must also consult works councils or labour representatives before implementing AI, and in some cases, obtain the works council’s approval.
Further, the EU AI Act obliges companies to ensure human oversight, risk assessments, transparency disclosures, logging and documentation, bias monitoring, and accountability mechanisms for high-risk AI systems. The effect is to build a decision-making architecture around AI rather than to let AI replace human involvement. Businesses navigating such compliance frameworks often rely on a top data protection law firm for advice on AI governance, privacy obligations, and employee data protection.
United Kingdom
The UK has no regulatory framework protecting employees from being laid off due to AI adoption. Further, a February 2026 amendment to the UK GDPR allows controllers to deploy automated decision-making software that processes data subjects’ personal data without requiring significant human involvement.
United States
The United States sits even further from the Chinese model. Under at-will employment, employers generally retain broad authority to dismiss workers for automation-related reasons, absent discrimination, union protections, or contractual restrictions. No federal legal framework currently prohibits replacing workers with AI, and policy debates have focused more on bias, surveillance, and national competitiveness than on labour displacement itself. However, several trade unions have taken a strong stance on the use of AI in their industries, especially when it threatens to completely replace humans, notably the Writers Guild of America (WGA) and the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA).
South Korea
Other jurisdictions, such as South Korea, are also moving toward stronger procedural safeguards. South Korea’s AI Basic Act, which took effect in January 2026, classifies certain employment-adjacent systems—such as hiring and other high-impact decision-making tools—as “high-impact AI,” requiring human oversight, explainability, transparency, and user protection plans where AI materially affects fundamental rights. While it does not prohibit employers from replacing workers with AI, it significantly increases employers’ compliance burdens and strengthens workers’ procedural rights in AI-driven employment decisions.
Conclusion
This divergence may ultimately shape not just labour law, but AI adoption itself. Some labour scholars argue that stronger protections could actually accelerate sustainable AI implementation by reducing worker resistance, encouraging retraining, and incentivising employers to redeploy rather than discard human capital.
The broader legal question is no longer whether AI will transform work—it already is. The question is whether legal systems will merely manage the process of automation or actively constrain who bears its costs.
China’s rulings suggest an emerging model in which technological advancement is not treated as an automatic override of social obligations. If replicated elsewhere, these cases could mark the beginning of a global shift toward a new legal doctrine: that AI innovation should enhance productivity without rendering labour protections obsolete.
The future AI race may therefore hinge less on who builds the fastest systems—and more on who develops the most resilient social contract for surviving them.
Authors: Shantanu Mukherjee, Shruti Gupta