AI-powered video telematics has revolutionized trucking operations in recent years. Forward-facing cameras capture video of road incidents, often providing important evidence during litigation. Driver-facing cameras monitor driver behavior and fatigue. Combining both views enables continuous assessment of how safely the driver is operating the vehicle, reducing safety incidents. Across Europe and North America, the installed base of video telematics systems has reached five million and is expected to top eleven million by 2027.
Overall, truck drivers are highly professional and safety conscious. Dashcam systems provide a key safety backup.
Key players in the space are Lytx, with 35% market share, followed by Samsara and Motive, each holding approximately 15%. With many other vendors in the field, competition is fierce and the elbowing is getting fiercer.
Motive recently announced the results of a Motive-funded study conducted by the Virginia Tech Transportation Institute (VTTI), a limited-scope evaluation run on VTTI’s test track. The study reported that Motive’s AI Dashcam successfully generated driver alerts for six unsafe driving behaviors 86% of the time, compared to 32% for Lytx and 21% for Samsara.
In releasing the results, Shoaib Makani, co-founder and CEO at Motive, said, “There is an epidemic of road accidents and fatalities that continues to worsen but, with advancements in AI, they should be getting better. Even worse, the principal cause of these accidents – including distracted and unsafe driving behavior – are 100% preventable. The results of VTTI’s research are not just about comparing products. They show that these technologies don’t all perform the same, which can have major implications for accident prevention.”
AI Dashcam capabilities from Motive include:
- Stop sign violation: Identifies stop signs and detects if a driver fails to come to a complete stop. (According to Motive, rolling stops are frequently ticketed and are one of the leading causes of accidents.)
- Driver distraction: Detects and alerts drivers when they are looking downward due to eating, drinking, smoking, drowsiness, cell phone use or general inattentiveness.
- Unsafe lane changes: Alerts drivers when they are swerving, weaving, or changing lanes at high speed, and monitors adherence to road lanes, regardless of lane type.
The VTTI study measured how likely each tested system was to generate alerts for these common unsafe behaviors: close following, rolling stops at stop signs, failure to use seat belts, phone calls, and texting. Motive’s press release highlighted three unsafe driving behaviors where the differences in performance were most prominent:
- Phone Call Overall Alert Rates (Motive: 95%, Samsara: 38%, Lytx: 28%)
- Texting Overall Alert Rates (Motive: 71%, Samsara: 30%, Lytx: 13%)
- Close Following of Vehicle Immediately Ahead (Motive: 67%, Samsara: 18%, Lytx: 28%)
The study methodology involved protocols designed to mimic real-world conditions and driver behaviors on a closed test track. Testing was conducted across three in-cab placement locations and three different times of day (daytime, dusk, and nighttime). The systems were installed by a certified, professional third-party installer to ensure that the placement of the cameras met each technology provider’s installation standards. Factors that could influence system performance, such as weather conditions, driver identity, and system placement, were controlled to maintain the integrity of the study. More info on the full methodology, data, and results can be found here.
What Is And Isn’t Bad Driving Behavior?
Assessing any type of human behavior requires specific criteria. Maybe the most straightforward is seat belt usage: as long as the video processing algorithm is up to the perception task, this item is a pass/fail. Detecting a driver’s activity with a phone is more complex but still relatively straightforward.
But what constitutes close following, or rolling through a stop sign? In response to a request, Motive provided its kinematic criteria for these two behaviors. For stop signs, the study called for an event trigger if vehicle speed did not drop below 6 mph within 7 seconds of the stop sign leaving the camera’s view (i.e., the vehicle moving past the sign). A close-following event required a speed above 35 mph and a time headway of 0.7 seconds or less, sustained for at least 15 seconds.
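To make those criteria concrete, here is a minimal sketch of the two triggers as stated above. The function names, the 10 Hz sampling rate, and the list-of-samples telemetry representation are my own illustrative assumptions, not any vendor’s actual implementation:

```python
# A minimal sketch of the two kinematic triggers as described above.
# Function names, sampling rate, and the telemetry representation are
# illustrative assumptions, not any vendor's code.

def rolling_stop_event(speeds_mph: list[float], sample_hz: float = 10.0) -> bool:
    """Flag a rolling stop: speed never drops below 6 mph within the
    7 seconds after the stop sign leaves the camera's view.
    `speeds_mph` is sampled starting at the frame the sign disappears."""
    window = int(7 * sample_hz)          # samples in the 7-second window
    return min(speeds_mph[:window], default=float("inf")) >= 6.0


def close_following_event(speeds_mph: list[float],
                          headways_s: list[float],
                          sample_hz: float = 10.0) -> bool:
    """Flag close following: speed above 35 mph with a time headway of
    0.7 s or less, sustained for at least 15 consecutive seconds."""
    needed = int(15 * sample_hz)         # consecutive samples required
    run = 0
    for speed, headway in zip(speeds_mph, headways_s):
        run = run + 1 if (speed > 35.0 and headway <= 0.7) else 0
        if run >= needed:
            return True
    return False
```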
There are no industry standards regarding detection of these behaviors. While the Motive kinematic criteria in the study seem reasonable, another company could have similar but different criteria. If the close-following speed threshold for a competitor dashcam system is 45 mph, the VTTI protocol when run at 40 mph would mark this as a missed detection for that system.
But further discussions with Motive made it clear that VTTI empirically identified each vendor’s alerting thresholds for every unsafe behavior, and then designed the experiment so that the protocol for each behavior covered all three vendors’ settings. For close following, for example, the minimum speed was set at 50 mph, which VTTI asserted exceeded the alerting thresholds of all three vendors. Good move, although “discovering” the thresholds and other factors is not the same as having the engineering specs directly from the other two companies. Needless to say, this creates room for debate and rebuttal.
The VTTI report captures this and other factors by stating that “because of the specific tasks evaluated, number of experimental runs, and other study design features, analysis results may not apply to conditions outside of those tested in the current study.”
Not So Fast
Another critical dimension is the promptness of detection. Because seconds matter when rolling down a highway at high speed, it would be ideal to immediately detect a driver’s eyes off the road, thus sounding an alert. But the more quickly this detection is made, the higher the probability of a false positive. There is always a challenging engineering trade-off between accurate detection and early warning.
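To illustrate the trade-off, consider a confirmation window: requiring several consecutive positive frames before alerting suppresses momentary glitches at the cost of delaying every real detection. Everything here is hypothetical; the per-frame classifier and the debouncing scheme are a common engineering pattern, not any vendor’s disclosed design:

```python
# Hypothetical debouncing sketch: require `confirm_frames` consecutive
# positive frames before alerting. Larger values suppress one-frame
# false positives but delay every true detection by the same amount.

def alert_frame(frame_flags: list[bool], confirm_frames: int) -> int | None:
    """Return the index of the frame at which an alert fires, or None
    if the behavior is never confirmed. `frame_flags[i]` is True when a
    (hypothetical) classifier says the driver's eyes are off the road."""
    run = 0
    for i, eyes_off_road in enumerate(frame_flags):
        run = run + 1 if eyes_off_road else 0
        if run >= confirm_frames:
            return i
    return None

# A one-frame glitch never triggers with confirm_frames=3, but a real
# distraction is also confirmed two frames after it begins.
glitchy = [False, True, False, False, True, True, True, True]
print(alert_frame(glitchy, confirm_frames=1))  # 1  (fires on the glitch)
print(alert_frame(glitchy, confirm_frames=3))  # 6  (ignores the glitch)
```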
False positives for driver behaviors irritate both the driver and the fleet safety manager.
As noted in the VTTI report, “For the phone-in-lap task, Motive was associated with statistically significant, higher likelihoods to successfully issue an in-cab alert for phone-in-lap than Lytx and Samsara…. with a significantly shorter time to alert than Lytx over all study conditions.” The term “successfully” raises eyebrows among the other vendors. If Lytx and Samsara intentionally use a longer detection time to reduce false positives, was this considered unsuccessful?
Motive noted that the behaviors were tested in a manner that allowed each system sufficient time to alert. For example, each close-following trial lasted 30 seconds, which the researchers considered well above the trigger thresholds of all three dashcam systems. Even so, VTTI’s results showed the other two vendors’ systems alerting on unsafe behaviors less often than Motive’s did.
Because the testing protocol placed an unsafe event in every test run, a false positive could not occur by design. The rate of false positives is therefore an important factor left unaddressed.
Competitors Weigh In
To put it mildly, comparing the performance of the Motive system to the Lytx and Samsara systems raises some hackles with these competitors.
Jim Brady, Lytx’s VP of Product Management, had this to say: “This study, similar to a previous one also funded by Motive, does not compare precision rates. It only reports how often a behavior is captured. It does not take into account precision, or how often the alerts are correct. In other words, it is missing critical information about false positives. It is entirely possible for a device to capture more events at the expense of being significantly noisier.” Noting that the report only evaluates a small selection of alerts and risk types within Lytx’s portfolio, Mr. Brady also raised questions about the design of the study relative to the real world, noting that “The study was an isolated test, set under artificial conditions, that used one or two devices less than 40 times. Lytx, on the other hand, analyzes tens of thousands of events per week and currently has over 221 billion miles of driving data that continually inform and refine our systems. When properly configured and deployed, Lytx customers enjoy 95% or greater precision in real world environments, the highest in the industry across the widest portfolio of alert types.”
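Mr. Brady’s distinction is, in effect, recall (capture rate) versus precision. A minimal sketch of the two metrics, with alert counts invented purely for illustration:

```python
# Capture rate (recall) and precision measure different failure modes;
# the counts below are invented solely to illustrate the distinction.

def precision(true_alerts: int, false_alerts: int) -> float:
    """Share of issued alerts that correspond to real unsafe events."""
    return true_alerts / (true_alerts + false_alerts)

def recall(true_alerts: int, missed_events: int) -> float:
    """Share of real unsafe events that produced an alert (capture rate)."""
    return true_alerts / (true_alerts + missed_events)

# A "noisy" system: captures 9 of 10 events but adds 6 spurious alerts.
print(recall(9, 1), precision(9, 6))   # 0.90 recall, 0.60 precision
# A conservative system: captures 6 of 10 events, no spurious alerts.
print(recall(6, 4), precision(6, 0))   # 0.60 recall, 1.00 precision
```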
Similar points were raised by a Samsara representative.
The Customer Approach
Fleet operators are the customers. How do they choose which dashcam tech to adopt? Rather than a deep data dive, fleet safety managers typically evaluate several different systems with their drivers on the road. In this setting, false positives or too many alarms will stick out like a sore thumb. The system must be practical and effective, and each vendor has metrics and data showing their system’s capability.
Bottom Line
The dashcam space is complex; I’m certainly not attempting to provide an exhaustive treatment here. In fact, in working through the details above I have a feeling I got in over my head!
But this much is clear: there’s value in doing comparative studies, and because resources are always limited, decisions about scope must be made. Still, the results must be comprehensive enough to be meaningful. This is the core of the external criticism of Motive’s study.
Nevertheless, Motive’s initiative potentially provides a new set of questions that customers can ask of any dashcam vendor.
The conversation moves forward.