Tesla is implementing a new hazard light pattern that better captures drivers' attention
Emergency Safety Solutions
Tesla is rolling out a significant safety enhancement through a software update. Teslas, already among the safest vehicles on the road, just got a little safer thanks to a small company based in Texas. While this article highlights the safety advancement itself, it also applauds the work of Emergency Safety Solutions, which dared to challenge the old way of doing things.
Overnight Evolution: The Game Changer
Tesla North America didn't mince words when they announced: "If an airbag is deployed, hazard lights will automatically activate & flash faster to improve visibility." Elon Musk added, "New Tesla safety feature uploaded via over-the-air software update. Your car just got better while you slept."
It got much better thanks to a partnership with Emergency Safety Solutions (ESS), which we spotlighted a year ago. The small company, now just five years old, used a Tesla Model 3 to demonstrate its improvements to the hazard light system, which had remained unchanged for more than 70 years. After numerous studies, the company redesigned the hazard light pattern and approached Tesla with its findings.
Chilling Frequency: Every seven minutes, a disabled vehicle is involved in a crash on American roads. The result? An alarming 15,000 injuries or fatalities annually.
Ancient Flaws: The primary culprit behind these startling figures is a hazard light system that hasn't been updated in over seven decades.
The Solution: Shifting the flash rate from the sluggish 1.5Hz standard to between 4Hz and 6Hz dramatically heightens driver alertness. Hertz is a unit of frequency equal to the number of cycles per second. In this case, the lights go from 1.5 flashes per second up to 4 to 6 flashes per second.
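To put those numbers in concrete terms, a quick sketch of the arithmetic (not Tesla's actual implementation) shows how the flash interval is simply the reciprocal of the frequency:

```python
def flash_period_ms(frequency_hz: float) -> float:
    """Length of one flash cycle in milliseconds (period = 1 / frequency)."""
    return 1000.0 / frequency_hz

# Legacy standard: 1.5 Hz, roughly 667 ms per flash cycle
legacy = flash_period_ms(1.5)

# Enhanced range: 4-6 Hz, 250 ms down to about 167 ms per cycle
fast_low = flash_period_ms(4.0)
fast_high = flash_period_ms(6.0)

print(f"1.5 Hz -> {legacy:.0f} ms per cycle")
print(f"4.0 Hz -> {fast_low:.0f} ms per cycle")
print(f"6.0 Hz -> {fast_high:.0f} ms per cycle")
```

In other words, at 5Hz the lamp completes a full on-off cycle three to four times in the time the legacy system flashed once, which is what makes the pattern so much harder to miss.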
Real-World Outcomes: When a 5Hz flash frequency was tested, drivers reacted a crucial 12 seconds faster. Moreover, they recognized an issue from more than three football fields farther away than with the 70-year-old system. The number of drivers moving to the safer side of a disabled vehicle also shot up dramatically — from 30% to an impressive 87%.
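Those two figures are consistent with each other: at highway speed, a 12-second head start translates directly into distance. A rough back-of-the-envelope check (the 65 mph speed here is our assumption, not from the study) looks like this:

```python
MPH_TO_FPS = 5280 / 3600  # feet per second for each mile per hour

def reaction_distance_ft(speed_mph: float, seconds: float) -> float:
    """Distance covered during a reaction-time window, in feet."""
    return speed_mph * MPH_TO_FPS * seconds

# A 12-second earlier reaction at an assumed 65 mph highway speed
distance = reaction_distance_ft(65, 12)

# A football field is 300 ft between goal lines
fields = distance / 300

print(f"{distance:.0f} ft, about {fields:.1f} football fields")
```

That works out to roughly 1,100 feet — comfortably "more than three football fields," matching the reported result.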
Emergency Safety Solutions also posted on X: "Great step toward making our roads safer for people in disabled and vulnerable vehicles! We appreciate our partnership with Tesla and applaud this major milestone in our mission to protect drivers when they need it most."
Software Update
Tesla said in its post on X that this update is rolling out now in the U.S. to Model 3 and Model Y vehicles, as well as newer Model S and Model X vehicles.
It's not clear whether Tesla means this enhancement is available in update 2023.32, or whether it's in the upcoming 2023.38 update, which is currently in employee testing.
It's more likely that H.E.L.P. is implemented in update 2023.38, but we have yet to receive release notes for vehicles in the U.S., so we'll have to wait and see if this enhancement made it in.
More H.E.L.P. to Come
Keep an eye out for even more safety advancements courtesy of the partnership between ESS and Tesla. The company created the Hazard Enhanced Location Protocol, or H.E.L.P. Beyond the faster flashes, H.E.L.P. integrates with in-car and phone navigation systems, giving drivers a heads-up about potential hazards before they become visible. It's like giving your Tesla a sixth sense.
Unfortunately, that will take longer, as it would require more automakers to get on board with the new system. However, as we've seen, automakers are following Tesla's lead on several fronts, and they may adopt this advanced system as well, improving road safety across the board.
Tesla has always embraced whimsy in its software, packing it with playful Easter eggs and surprises. From transforming the on-screen car into James Bond’s submarine to the ever-entertaining Emissions Testing Mode and the fan-favorite Rainbow Road, these hidden features have become a signature part of Tesla’s software.
Of course, launching a new product like Robotaxi wouldn’t be complete without a fun little Easter egg of its own. The end-of-ride screen in the Robotaxi app presents a familiar option: “Leave a tip.”
Anyone pleased with their Robotaxi ride may be tempted to leave a tip. However, tapping the button presents our favorite hedgehog instead of a payment screen.
The app displays a message, alongside the familiar Tesla hedgehog, that simply states “Just kidding.”
While it's a fun prank, it’s also a nod to what Tesla really wants to reinforce: the economic advantage of an autonomous Robotaxi Network. Without a driver, there is simply no one to tip. The gesture is playful, but it’s a reminder of Tesla’s real aim here.
Over the last few days, we’ve seen some exceptionally smooth performance from the latest version of FSD on Tesla’s Robotaxi Network pilot. However, the entire purpose of an early access program with Safety Monitors is to identify and learn from edge cases.
This week, the public saw the first recorded instance of a Safety Monitor intervention, providing a first look at how they’re expected to stop the vehicle.
The event involved a complex, low-speed interaction with a reversing UPS truck. The Safety Monitor intervened to stop the Robotaxi immediately, potentially avoiding a collision with the delivery truck. Let’s break down this textbook case of real-world unpredictability.
The Intervention [VIDEO]
In a video from a ride in Austin, a Robotaxi is preparing to pull over to its destination on the right side of the road, with its turn signal active. Ahead, a UPS truck comes to a stop. As the Model Y begins turning into the spot, the UPS truck, seemingly without signaling, starts to reverse. At this point, the Safety Monitor stepped in and pressed the In Lane Stop button on the main display, bringing the Robotaxi to an immediate halt.
This is precisely why Tesla has employed Safety Monitors in this initial pilot. They are there to proactively manage ambiguous situations where the intentions of other drivers are unclear. The system worked as designed, but it raises a key question: What would FSD have done on its own? It’s not clear whether the vehicle saw the truck backing up, or what it would have done once it detected it. It’s also unclear whether the UPS driver recognized that the Robotaxi was pulling into the same spot at the exact same time.
It’s possible this wouldn’t have resulted in a collision at all, but the Safety Monitor did the right thing by stepping in to prevent a potential collision, even one at low speed. Any collision just a few days after the Robotaxi Network launch could mean complications for Tesla.
Who Would Be At Fault?
This scenario is a classic edge case. It involves unclear right-of-way and unpredictable human behavior. Even for human drivers, the right-of-way here is complicated. While a reversing vehicle often bears responsibility, a forward-moving vehicle must also take precautions to avoid a collision. This legal and practical gray area is what makes these scenarios so challenging for AI to navigate.
Would the Robotaxi have continued, assuming the reversing truck would stop?
Or would it have identified the potential conflict and used its own ability to stop and reverse?
Without the intervention, it’s impossible to say for sure. However, crucial context comes from a different clip involving, surprisingly, another UPS delivery truck.
A Tale of Two Trucks
In a separate video posted on X, another Robotaxi encounters a remarkably similar situation. In that instance, as another UPS delivery truck obstructs the path forward, the Robotaxi comes to a stop to let its two passengers out just a few feet from their destination.
Once they depart, the Robotaxi successfully reverses and performs a three-point turn to extricate itself from a tight spot. That was all done without human intervention, by correctly identifying the situation.
This second clip is vital because it proves that the Robotaxi's FSD build has the underlying logic and capability to handle these scenarios. It can, and does, use reverse to safely navigate complex situations.
Far from being a failure, this first intervention should be seen as a success for Tesla’s safety methodology. It shows the safety system is working, allowing monitors to mitigate ambiguous events proactively.
More importantly, this incident provides Tesla’s FSD team with an invaluable real-world data point.
By comparing the intervened ride with the successful autonomous one, Tesla’s engineers can fine-tune FSD’s decision-making, which will likely have a positive impact on its edge case handling in the near future.
This is the purpose of a public pilot — to find the final edge cases and build a more robust system, one unpredictable reversing truck at a time.