As we have stated in other posts, we are concerned that "self-driving"
(also known as "autonomous vehicle") technology is not as sophisticated
or as safe as some manufacturers would have you believe. Specifically,
we believe that problems with object
detection and object
recognition are the "Achilles Heel" of autonomous vehicle technology.
In today's post, the automobile accident injury lawyer at
The Doan Law Firm will review a recent, but largely unknown, study suggesting that "autonomous"
or "self-driving" vehicle technology still contains "bugs"
that the automotive industry simply fails to mention.
Are "autonomous" vehicles really "autonomous"?
A "driverless" or, more accurately, an
autonomous vehicle is a vehicle that is capable of
safely operating in all traffic conditions based solely on input received from
its various sensors and its preprogrammed computer
without relying on the actions of a human driver. Although current autonomous vehicle
technology is sophisticated, it still cannot match the human brain
in three critical areas:
- Object detection
- Object recognition
- Decision-making based on prior knowledge of the object's properties
We will take a look at these "must do's" in the following sections.
Seeing, but not seeing
Everyone is familiar with the phenomenon of perspective: large objects
that are far away appear much smaller than they actually are. Although
our brains "learned" to compensate for this optical illusion
while we were very young, most computers do not grasp this concept and
will "estimate" the distance to an object based on its apparent
size. For autonomous driving software, this could lead to problems.
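To make the "apparent size" problem concrete, here is a minimal sketch of the standard pinhole-camera way of estimating distance from image size. The focal length, assumed pedestrian height, and pixel measurements are made-up example values, not figures from any manufacturer's software:

```python
# Illustrative sketch: estimating distance from apparent size using the
# pinhole-camera model. All numbers are assumed example values.

REAL_HEIGHT_FT = 5.8       # assumed real-world height of a pedestrian
FOCAL_LENGTH_PX = 1000.0   # assumed camera focal length, in pixels

def estimate_distance_ft(apparent_height_px: float) -> float:
    """The farther away an object is, the smaller its image appears."""
    return REAL_HEIGHT_FT * FOCAL_LENGTH_PX / apparent_height_px

# A pedestrian whose image is 100 pixels tall is estimated at ~58 ft away;
# at 50 pixels tall, ~116 ft away.
print(round(estimate_distance_ft(100), 1))  # 58.0
print(round(estimate_distance_ft(50), 1))   # 116.0
```

Note the built-in weakness: if the software's assumption about the object's real size is wrong (say, a child assumed to be adult-sized), the distance estimate is wrong by the same ratio.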
Most autonomous vehicles compensate for this problem by restricting the
computer's "vision" to the area covered by its sensors,
such as its
LIDAR (Light Detection and Ranging), ultrasound, and infrared sensors. Since
this coverage is usually cone-shaped and extends about 250 feet ahead of
the vehicle, as far as the computer is concerned, anything outside that
cone simply doesn't exist and therefore will not be detected.
In the next section we will learn about the problems that can arise if
the computer "sees" something but doesn't "recognize"
what it sees.
Is that a pedestrian, or a tree?
For the computer at the heart of an autonomous vehicle to "do its
job," it must know its position relative to other objects. It must also know what
type of object it has detected and, based on information stored in its artificial
intelligence (AI) database, decide what action to take. If an object is
not first detected, and then correctly identified, the autonomous driving software
could make a potentially disastrous decision, as explained below.
Researchers at Georgia Tech recently demonstrated just such a problem when they showed
that the most commonly used AI "object recognition" programs
could fail to correctly identify individuals with dark skin tones. For
those with a background in mathematics and probability theory, the full
text of that study, "Predictive Inequity in Object Detection," is available on the
arXiv.org website. For the rest of us, here is a summary of its findings:
- The researchers used an industry-standard database consisting of digital
photographs of individuals standing in front of different backgrounds
and taken at different times of day. The skin tone of each individual was assessed
using the classification system developed by
Fitzpatrick, where skin tone is assigned a value from 1 (lightest) to 6 (darkest).
- Eight different artificial intelligence (AI) systems were then tested for
their ability to distinguish an individual from a tree or some other background object.
- Regardless of the AI system being evaluated, and after all other variables
were taken into consideration, the AI systems consistently
misidentified about 5% of those "pedestrians" whose skin tone was assessed
a value of 4 to 6 on the Fitzpatrick scale.
At this point, we have learned two things about the computers used to make
an autonomous vehicle function "as advertised:"
- As far as the computer is concerned, anything that is outside the range
of its sensors does not exist.
- Once an object is detected, in some circumstances that object could be
misidentified.
We can now take a look at one example of how the above could lead to a
serious, if not fatal, accident.
What if …
Consider the following scenario:
In "real life,"
the events in this scenario would take place in about 1.42 seconds (the
time that it would take for two vehicles traveling at a combined speed
of 120 mph to cover 250 feet). Modern computers are fast, but they cannot
be programmed to deal with all possible situations that may arise!
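The 1.42-second figure can be checked with a few lines of arithmetic:

```python
# Two vehicles, each at 60 mph, closing on each other.
combined_speed_mph = 60 + 60
feet_per_second = combined_speed_mph * 5280 / 3600  # 176 ft/s
gap_ft = 250                                        # the sensors' detection range
time_to_impact = gap_ft / feet_per_second
print(round(time_to_impact, 2))                     # 1.42 seconds
```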
Imagine a straight and level, two-lane highway, with no obstruction to
vision other than a sign that reads "Caution School Bus Stop."
Two vehicles, one operating in its "autonomous / driverless"
mode, are traveling at 60 mph and each vehicle is in its correct lane.
For some reason, the human driver of the non-autonomous vehicle drifts into the
oncoming lane when the vehicles are 300 feet apart. Unfortunately, the
human driver of the "driverless" vehicle has decided to take a nap.
The autonomous vehicle's sensors detect the oncoming vehicle when the
two are 250 feet apart, and the computer sounds an alert. Since its driver (being asleep)
does not quickly respond, the computer "defaults" to its preprogrammed
instructions for such a scenario and moves to its
right to avoid a head-on collision. Unfortunately, the computer does not recognize
the two first-graders who are waiting for their school bus, and both are killed.
Why These Problems Are Important
This page has briefly discussed another of the shortcomings of "driverless"
vehicles. Since there are many more problems that are known to exist in
this "emerging" technology, we encourage you to visit this site
often to learn more about the legal issues that are certain to arise as
more and more autonomous vehicles take to our roadways.