When Elon Musk announced earlier this year that Tesla would have fully autonomous robotaxis ready for public service, regulators and investors alike took notice.
However, following Tesla’s first public test of its robotaxis in late June in Austin, Texas, videos showed the vehicles speeding, braking suddenly, driving over a curb, entering the wrong lane, and dropping off passengers in the middle of multilane roads.
The fallout included federal investigations and a proposed shareholder class action accusing Tesla and Musk of securities fraud for concealing the significant risks the robotaxis posed and for overstating the technology’s readiness.
More broadly, the episode highlights the growing legal and ethical challenges that robotaxis face as they move from controlled testing to real-world deployment.
Legal Fault Lines
Robotaxis sit at the intersection of transport safety, product design and massive commercial promises. When a robotaxi behaves unexpectedly on public streets, the consequences can be physical (crashes and injuries), financial (lost market value, insurance claims) and legal (regulatory probes, civil suits, even criminal investigations). Some of the major legal considerations are:
Product liability
To put it simply: who is responsible when the technology fails? If a robotaxi causes an accident, plaintiffs may argue that its hardware, sensors, or self-driving software were defectively designed or manufactured. But determining who bears liability is a legal question – is it the vehicle manufacturer, the software developer that trained the AI, the sensor supplier, or the company operating the fleet?
The courts will need to decide whether to treat software errors the same way they treat a faulty brake system, and how much testing or validation is enough to prove a robotaxi was “reasonably safe.”
Negligence
Companies can face negligence claims if they deploy robotaxis in unsafe conditions, fail to supervise operations properly, or don’t update software to fix known issues. Regulators could also argue negligence where companies prioritize speed-to-market over safety testing or fail to respond to early warning signs. These cases often turn on evidence such as internal communications, safety logs, and incident reports.
Securities and Investor Claims
Robotaxis are not just a technical project; they’re also a major business promise. When CEOs and other executives make bold claims about autonomy, safety, or timelines, those statements can move markets. If later evidence shows that such claims were exaggerated or premature, investors may sue under securities laws for being misled.
Regulatory Oversight and Enforcement
Government agencies are still catching up to the pace of autonomous technology, but they’re watching closely. Regulators such as the U.S. National Highway Traffic Safety Administration (NHTSA) and counterparts in Europe and Asia have the power to open safety investigations, order recalls, or impose fines for non-compliance. Autonomous driving companies are increasingly seeking support from an AI law firm to build compliance frameworks and manage investigations as regulations evolve.
Insurance and Indemnity Disputes
Unlike human drivers, a robotaxi doesn’t have personal liability, so insurance policies and commercial contracts have to fill that gap. Insurers, manufacturers, fleet operators, and software developers are now negotiating new forms of coverage and risk-sharing models. Future court cases will clarify whether liability rests primarily with the company that built the car, the one that trained the AI, or the operator that put it on the street.
Privacy and Data Protection
Robotaxis continuously record video, map streets, log passenger locations, and collect sensor data, all of which can raise privacy and consumer protection concerns. Questions are also emerging around whether passengers must consent to being recorded and how long companies can retain this information.
In this complex landscape, companies often turn to a technology law firm to address privacy compliance, data security obligations, and consumer regulatory requirements.
Notable Litigation and Investigations
1. Tesla
After Tesla’s robotaxi test debacle in June, the NHTSA launched an inquiry into the company’s safety practices and is reviewing reported incidents. Tesla and Elon Musk were also hit with a proposed shareholder class action, filed after the early public tests and an ensuing fall in Tesla’s share price, accusing them of overstating the robotaxi program’s readiness, concealing key safety risks, and misleading investors about deployment timelines.
Separately, in August a Florida jury found Tesla’s driving-assistance software defective in a case arising from a high-profile 2019 fatal crash that killed a 22-year-old woman and injured her boyfriend. The court ordered Tesla to pay about $243 million in damages to the victims.
2. Cruise
Cruise, the autonomous driving company and General Motors subsidiary, faced major backlash after an October 2023 incident in San Francisco where one of its robotaxis hit and dragged a pedestrian.
Cruise admitted to submitting an inaccurate report in connection with the crash and in 2024, entered into a deferred prosecution agreement under which it paid fines and had its operating permit suspended. The regulatory penalties included a $1.5 million fine from the NHTSA and a $500,000 criminal fine from the DOJ.
3. Waymo
Even the industry’s most experienced player, Waymo, has seen legal trouble. In early 2025, a cyclist filed a lawsuit in California claiming serious injury after a Waymo robotaxi allegedly failed to follow safe drop-off protocols, leading to a “dooring” incident. The suit accuses the company of negligent operation and inadequate safeguards in mixed-traffic environments.
4. Uber
The first recorded pedestrian fatality involving an autonomous test vehicle, in Tempe, Arizona, remains one of the defining moments for the entire industry. An Uber test vehicle struck and killed a pedestrian in 2018, leading to criminal charges against the backup safety driver, civil settlements, and a complete halt to Uber’s self-driving program at the time. This case still informs how prosecutors and plaintiffs view corporate liability for autonomous systems.
Conclusion
Just as courts around the world are now grappling with how copyright law applies to AI-generated content, the question of how existing legal frameworks will adapt to robotaxis is only beginning to play out. These vehicles raise entirely new questions about fault, safety, transparency, and accountability – questions that will ultimately be tested and clarified in courtrooms rather than labs.
Authors: Shantanu Mukherjee, Varun Alase