Analysis
An accident waiting to happen
So it begins – from zero to one – the first recorded death of a pedestrian struck by a self-driving car operating in autonomous mode.
While the incident remains the subject of ongoing investigations by both the Tempe, Arizona Police Department and the NTSB, in this article Colin Barnden, Principal Analyst at Semicast Research, asks: can we please hit the brakes on this autonomous driving experiment until it has proper regulatory oversight?
The incident raises some basic questions about the capabilities of the autonomous driving system involved. Official reports put the time of the accident at about 10pm; with sunset at approximately 6:30pm that day, the vehicle was therefore operating at night. This raises the question of how reliably the vision systems perform at night, under myriad combinations of lighting conditions and light sources. Further questions can be asked about the dynamic range of the image sensors used, and their suitability for the inevitable case of a pedestrian stepping from darkness straight onto an illuminated highway.
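To put the dynamic-range concern in concrete terms, here is a rough back-of-the-envelope calculation. The luminance figures are illustrative assumptions, not measurements from the Tempe scene:

```python
import math

# Illustrative luminance figures only -- not measurements from the Tempe scene.
darkest_patch = 0.1       # unlit pedestrian in shadow (cd/m^2, assumed)
brightest_patch = 10_000  # road surface in a headlight beam (cd/m^2, assumed)

# Dynamic range of the scene, in decibels: 20 * log10(max / min)
scene_dr_db = 20 * math.log10(brightest_patch / darkest_patch)
print(f"Scene dynamic range: {scene_dr_db:.0f} dB")  # -> 100 dB

# A sensor whose usable night-time range falls short of the scene must clip
# highlights or crush shadows -- losing detail exactly where a pedestrian
# stepping out of darkness would appear.
```

Automotive HDR sensors are often rated at around 120dB, but a headline rating and the usable range at low light are not the same thing, which is precisely the question investigators should ask.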
According to preliminary police reports, the victim was “pushing a bicycle laden with plastic shopping bags” and “abruptly walked from a center median into a lane of traffic and was struck by a self-driving Uber operating in autonomous mode.” This raises the question of the effectiveness of the AI object identification system and whether it could correctly identify a pedestrian pushing, rather than riding, a bicycle. Add shopping bags and the scene becomes still more complex for the AI computers to interpret, and to interpret quickly. A pedestrian walking a bicycle across a street would likely present a minimal radar profile, which raises the question of the effectiveness of the radar sensors under these specific conditions and whether an object was detected at all.
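A minimal sketch, with entirely invented classes, scores and threshold (nothing here reflects the actual Uber software), illustrates how an unusual object can fall between trained categories:

```python
# Hypothetical classifier output for a pedestrian pushing a bag-laden bicycle.
# No single class matches well, so confidence is split across all of them.
detections = {
    "pedestrian": 0.34,  # partial match: upright human shape
    "bicycle":    0.31,  # partial match: frame and wheels visible
    "unknown":    0.35,  # bags and mixed silhouette fit no trained class well
}

ACTION_THRESHOLD = 0.60  # assumed minimum confidence before braking is planned

best_class, best_score = max(detections.items(), key=lambda kv: kv[1])
if best_score < ACTION_THRESHOLD:
    # No single label is trusted enough to act on, even though *something*
    # is clearly in the lane; a safer policy would brake for any object.
    print(f"Low confidence ({best_class}: {best_score:.2f}), no braking decision")
else:
    print(f"Classified as {best_class} ({best_score:.2f}), plan avoidance")
```

The sketch shows why a system that only acts on a single high-confidence label may hesitate exactly when the scene is most unusual.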
It is therefore possible that the vision system and radar sensors provided inconclusive readings and that the Lidar sensors were simply overruled by the AI computer. The NTSB will ask many questions of the autonomous driving software and how it prioritizes and resolves conflicting sensor data, as possibly happened in this case. The argument that AI computers can reduce the worldwide death toll on our roads and highways to zero – from more than one million per year with human drivers – is destroyed if the AI computer neither detected an imminent collision, nor activated the braking systems, prior to impact.
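The following deliberately naive fusion sketch, with invented readings, confidences and threshold, shows how averaging beliefs across sensors can let two uncertain “no object” votes outvote a confident Lidar detection:

```python
sensor_reports = [
    # (sensor, object_detected, confidence) -- all values invented
    ("lidar",  True,  0.90),  # strong return from an object in the lane
    ("radar",  False, 0.80),  # pedestrian plus bicycle: weak radar profile
    ("camera", False, 0.75),  # low light, unusual silhouette
]

def fused_belief(reports):
    """Average each sensor's belief that an object is present."""
    beliefs = [conf if detected else (1.0 - conf) for _, detected, conf in reports]
    return sum(beliefs) / len(beliefs)

belief = fused_belief(sensor_reports)  # (0.90 + 0.20 + 0.25) / 3 = 0.45
BRAKE_THRESHOLD = 0.50                 # assumed decision threshold

print(f"Fused belief an object is present: {belief:.2f}")
if belief < BRAKE_THRESHOLD:
    print("Fusion overrules lidar: no braking")  # the failure mode at issue
```

Real stacks are far more sophisticated than this, but the arbitration question it poses, namely which sensor wins when they disagree, is exactly the one the NTSB must answer.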
Sylvia Moir, Chief of Police, Tempe PD, is reported to have stated: “The driver said it was like a flash, the person walked out in front of them…His first alert to the collision was the sound of the collision.” This raises some basic questions about the qualifications and suitability of the human safety driver, who is the last line of defense when the AI computers misinterpret a situation. It is to be assumed that all companies developing autonomous driving technology employ only qualified test pilots or professional test drivers for the task of human safety driver – individuals with proven exceptional situational awareness, extraordinary reaction times and long concentration spans – to supervise the training of autonomous driving systems on public highways.
The outcome of the NTSB investigation will almost certainly confirm the qualifications and suitability of the safety driver involved, and this is unlikely to be a contributory factor in the accident. It is also to be assumed that all companies developing autonomous driving technology have installed in each test vehicle a driver monitoring system (DMS) to permanently track and record the level of awareness and engagement of the safety driver. The precise head position, eye gaze and engagement of the safety driver in the vital seconds leading up to the accident must also be investigated by the NTSB and made public; such data will have been stored by the on-board event data recorders.
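For illustration only, here is a hypothetical record structure (the field names are invented, not those of any production DMS) showing the kind of driver-state telemetry such a system could log to an event data recorder:

```python
from dataclasses import dataclass

@dataclass
class DriverStateSample:
    timestamp_ms: int      # time of the sample, relative to the event window
    head_yaw_deg: float    # head rotation left/right of straight ahead
    head_pitch_deg: float  # head tilt up/down (negative = looking down)
    gaze_on_road: bool     # whether estimated eye gaze is on the roadway

def attention_summary(samples: list[DriverStateSample]) -> float:
    """Fraction of samples in which the driver's gaze was on the road."""
    if not samples:
        return 0.0
    return sum(s.gaze_on_road for s in samples) / len(samples)

# e.g. the last five seconds before impact, sampled once per second
window = [
    DriverStateSample(0,    -2.0, -35.0, False),  # looking down
    DriverStateSample(1000, -1.5, -33.0, False),
    DriverStateSample(2000, -1.0, -30.0, False),
    DriverStateSample(3000,  0.0,  -5.0, True),   # glance back to the road
    DriverStateSample(4000,  0.0,   0.0, True),
]
print(f"Eyes-on-road fraction: {attention_summary(window):.0%}")  # -> 40%
```

A record of this kind, if it exists for the Tempe vehicle, would let investigators reconstruct the safety driver's attention second by second.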
Semicast questions whether the motivation to replace human drivers with AI computers is based on safety or on profit. Over ninety percent of all light vehicles in use on our roads and highways have no automated or assisted driving features at all (otherwise known as SAE Level 0), and there are clearly better ways to improve safety than turning public highways into testing grounds and pedestrians and other road users into human guinea pigs. It is also unknown whether the residents of Tempe, and of other cities in Arizona, ever agreed to participate in this experiment, or whether that decision was made on their behalf at a city or state level.
Rather than jumping straight to AI, Semicast advocates making humans better drivers, for example through the mandatory installation of in-car camera-based DMS, from suppliers such as Seeing Machines, Smart Eye and Xperi (FotoNation). Aftermarket solutions exist to install similar technology in trucks, from suppliers such as EDGE3 Technologies and Guardian. These changes alone would be a great start toward reducing fatalities and making our roads and highways safer.
Having the most brilliant technical minds work on a problem cannot eliminate risk, chance and failure from an inherently dangerous process: accelerating humans to sixty miles an hour or more inside two tons of metal mounted on four wheels is dangerous no matter how capable the AI computers are. If those two tons of metal stop very suddenly, or collide with skin and bone, lives are lost, as events this week in Arizona show us. It is hoped the tech industry appreciates that the only issue which should stand in the path of the juggernaut of technological innovation is the roadblock of public safety. History teaches us that just because you can do something, that doesn't mean you should. But if you must, tech industry, then please, do it safely.
More information can be found at Semicast.