Nearly every auto manufacturer in America today offers some form of Crash Avoidance Technology (or “CAT”). From adaptive headlights and infrared cameras to automatic emergency braking and self-driving features, CAT systems are becoming increasingly common on our roadways, promising a future in which thousands of driving-related deaths could be prevented every year.

However, the push for this technology has not been without its problems. Auto manufacturers have long recognized the consumer demand for CAT systems, as well as their enormous marketing and profit-generating potential, so they have rushed to release new technologies faster than the government can regulate them. Meanwhile, many manufacturers have lobbied to immunize themselves from liability, so that when their relatively untested technology fails and people are hurt, they are not held accountable. Instead, the injured driver is often blamed, or taxpayers foot the bill.

This pattern seems to be recurring with Tesla today. On July 14, 2016, Tesla co-founder and CEO Elon Musk issued several statements, declaring that Tesla’s “Autopilot” system was turned off during a July 1 crash in Pennsylvania involving a Tesla Model X vehicle, and that the “crash would not have occurred if it was on.” This was followed by a tweet stating that Tesla would be working on modifications to its radar system, testing “Tesla radar by itself (decoupled from camera) [with] temporal smoothing to create a coarse point cloud, like lidar.” Then, on July 17, he wrote “[p]romising call today with @BoschGlobal, maker of our radar sensor. Looks like significant improvements possible via OTA software update.” On July 20, he wrote that “[w]hen used correctly… [Autopilot] is already significantly safer than a person driving by themselves and it would therefore be morally reprehensible to delay release simply for fear of bad press or some mercantile calculation of legal liability.”
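Musk’s tweet does not explain what “temporal smoothing to create a coarse point cloud” would actually involve, and Tesla has not published its method. As a rough, purely hypothetical sketch of the general idea (not Tesla’s implementation), one could imagine accumulating the sparse detections from several successive radar sweeps, aligned for the vehicle’s own motion, into a single denser cloud. The short Python snippet below illustrates only that general concept; the frame format, pose handling, and ten-sweep window are all assumptions made for illustration.

```python
# Illustrative sketch only: a generic "temporal smoothing" of sparse radar
# detections into a coarse point cloud. This is NOT Tesla's implementation;
# the data layout, pose handling, and parameters here are hypothetical.
from collections import deque

import numpy as np


def accumulate_radar_frames(frames, ego_poses, window=10):
    """Fuse the last `window` radar sweeps into one coarse point cloud.

    frames:    list of (N_i, 2) arrays of detections in the sensor frame (x, y), meters
    ego_poses: list of (x, y, heading) tuples giving the vehicle pose per sweep
    """
    history = deque(maxlen=window)
    for points, (px, py, yaw) in zip(frames, ego_poses):
        # Rotate/translate sensor-frame detections into a common world frame
        # so returns from successive sweeps line up on static objects.
        c, s = np.cos(yaw), np.sin(yaw)
        rot = np.array([[c, -s], [s, c]])
        history.append(points @ rot.T + np.array([px, py]))

    # A single sweep is sparse and noisy; stacking several sweeps yields a
    # denser "coarse point cloud" that crudely approximates a lidar scan.
    return np.vstack(history)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake data: ten sweeps of ~8 detections each, vehicle creeping forward.
    frames = [rng.normal(loc=(20.0, 0.0), scale=1.0, size=(8, 2)) for _ in range(10)]
    poses = [(0.5 * i, 0.0, 0.0) for i in range(10)]
    cloud = accumulate_radar_frames(frames, poses)
    print(cloud.shape)  # (80, 2) -- the accumulated coarse point cloud
```

Whatever Tesla’s actual approach, the relevant point for this discussion is that a change of this kind is purely a software adjustment, which is why it can be delivered “over the air” rather than through a hardware recall.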

These statements from Musk followed weeks of heightened public attention and scrutiny after it was revealed that Tesla’s “Autopilot” system had malfunctioned, resulting in the death of Joshua Brown, 40, in a Williston, Florida crash on May 7, 2016. Although the Tesla Model S’s system was engaged at the time, it apparently failed to detect a white truck against a bright sky, so it did not automatically slow the car down, as it was designed to do, before hitting the truck. News of the “Autopilot” system’s failure and Brown’s death sparked a media firestorm, as well as investigations into Tesla by government agencies, consumer advocacy groups, and other safety organizations. NHTSA, for instance, opened an investigation into the “Autopilot” malfunction on May 16, 2016, while the SEC began a securities fraud investigation to determine whether Tesla failed to timely notify investors of the crash before completing a $2 billion stock sale on May 18-19, and before informing them of NHTSA’s investigation. Soon afterward, it was revealed that a possible second crash involving a Tesla Model X’s “Autopilot” system had occurred in Pennsylvania on July 1, 2016. Then, on July 11, a third crash in Montana, allegedly involving another Tesla Model X’s system, came to light.

Three days after news of the latest crash broke, Musk began issuing the statements quoted above. While assuring the public that Tesla is busy tweaking the system to approximate state-of-the-art technologies such as LIDAR, he also indicated that Tesla has no plans to disable or recall the “Autopilot” feature on its vehicles. Tesla has, furthermore, repeatedly tried to downplay the likelihood that consumers will over-rely on the “Autopilot” system, claiming that it tells drivers that the software is still in its “beta” testing phase and that engaging it does not mean they can stop paying attention to the road ahead.

It is worth noting that during a press conference in October 2015, when asked his thoughts about Google’s LIDAR technology in the context of self-driving cars, Musk had this to say: “I don’t think you need LIDAR. I think you can do this all with passive optical and then with maybe one forward RADAR… if you are driving fast into rain or snow or dust. I think that completely solves it without the use of LIDAR. I’m not a big fan of LIDAR, I don’t think it makes sense in this context.” He therefore dismissed LIDAR only nine months ago, but is now attempting to mimic it in order to placate the public. Moreover, he hopes to do so with minimal effort and cost by issuing “OTA,” or “over-the-air,” software reprogramming of Tesla’s existing radar technology. In other words, he hopes to avoid the costly product recall process, and to preserve his company’s existing streams of profit, by issuing de minimis tweaks to technology that has already been shown to have safety issues. That is akin to Ford mailing rolls of duct tape to consumers whose vehicles came with defective, puncture-prone airbags.

The question is this: if Tesla and other auto manufacturers are truly so confident in their technologies that restricting access to them would be “morally reprehensible,” as Musk claims, then why do they also demand our forgiveness and indulgence when those technologies fail? When auto manufacturers release technology that has not yet been proven safe, should they be allowed to reap record profits while continuing to educate themselves at the expense of public safety?

As a general proposition, it seems necessary and reasonable to hold auto manufacturers accountable for their failure to adequately test technologies before releasing them to the public. This seems especially appropriate in the case of Tesla’s “Autopilot,” a system branded in a way that gives consumers a false sense of security by suggesting that the technology can drive the car itself, without the need for driver intervention. In fact, Joshua Brown had posted several YouTube videos making clear that this is what he believed the technology was supposed to do.

So, while CAT systems can and do enhance driver safety, we cannot forget the high costs of reckless innovation, especially in the automotive context. The bottom line is that Musk could have, and should have, thoroughly tested the “Autopilot” feature before releasing it to the public. Moreover, he should have considered the kinds of tweaks he is now implementing well before anyone was hurt or killed by its failures.