Can You Sue A Robot? Part I
Who is responsible if a “driverless” vehicle causes an accident?
Believe it or not, the question posed in the title of this post isn’t
“silly,” nor does it belong only in the realm of science fiction.
In fact, it is likely that the answer will ultimately come from an
accident attorney! Consider how robotic technology is currently
affecting your life, or could be doing so in the near future:
- Robotic cars and trucks, also known as “driverless” or “autonomous”
(self-operating) vehicles, are being tested under road conditions that
you and I encounter every time we drive our cars.
- Robotic-assisted surgery is performed daily at most medical centers, while
the technology required to routinely perform remote surgery, also known
as “cybersurgery” or “telesurgery,” is being
perfected even as you read this blog post!
- “Big Tech” companies such as Amazon and Google are developing
both the hardware and software that will allow “drones” to
make deliveries of products that are ordered online and, in the case of
Google, provide high-speed Internet service to remote areas.
In this and in later posts, we will take a look at an interesting problem
that is the subject of debate among accident lawyers and a question
that has yet to be addressed by the courts: Who is responsible if an
injury results from an accident caused by a “driverless” vehicle?
“Robots” in the “real world”
Many late model automobiles and trucks already make use of technology that,
although not usually considered to be robotic, does make driving safer
for us and for other drivers. Examples of such technology include:
- automatic emergency braking (AEB) systems to help prevent rear-end collisions
- “blind spot” monitors to help prevent collisions while changing lanes
- traffic lane monitors to help prevent drifting into oncoming traffic
- monitors and cameras to help prevent accidents when backing up
- GPS navigation systems to help a driver determine their location and
reach their destination
Notice the emphasis placed on the word “help.” These
alerts, with the possible exception of automatic emergency braking, must
still be acted upon by the driver. It is the responsibility of the driver
to 1) hear or see that an alert has been activated, 2) decide what course
of action is the appropriate response, and 3) implement that response. These
decisions and responses must usually be made within a fraction of a
second if an accident is to be avoided. Fortunately, each vehicle
is usually under the control of the most sophisticated computer in existence:
the human brain!
Liability and driverless vehicles
Traditionally, countries whose legal systems derive from
the English common law (e.g., Canada and the United States) have assigned
responsibility for safe driving to the
driver of a vehicle. As the concept of liability for an accident developed, it
was recognized that some accidents could occur that were not related to
the actions of a driver. Factors considered to lie beyond
a driver’s control included unforeseeable mechanical failure in
a critical control component, defects in road conditions and, of course,
the actions of other drivers.
For the greater part of the last century, liability was defined as a consequence of the
failure to perform some duty: an act or a behavior that was legally or morally
owed to someone else. As an example, each of us has a duty to drive our
automobiles in a safe manner. If we drive as if we were in the pole position
at the Daytona 500 when we are actually on our way to work, we have
breached that duty and can be subjected to criminal charges and/or a civil lawsuit.
We can now raise the question: “Is there any technology that could
relieve someone of a duty?”
As far as the law is concerned,
nothing can. Although there is some evidence that current
“smart vehicle” technology has played a role in reducing certain
types of accidents, no one is willing to claim that a “driverless”
or “autonomous” vehicle would be any safer than a vehicle
under human control. As partial proof that “driverless”
technology, as it exists at this time, is not safer
than human control, you need only consider the following.
Two states, along with the District of Columbia, have passed laws that expressly
limit the liability of automobile manufacturers if a vehicle is modified
to enable autonomous operation. One of these, Nevada, is home to the branch
of Tesla Motors that manufactures batteries for that company’s line
of electrically powered, and partially autonomous, automobiles. The other
state is Michigan, and we already know what products make up almost the
entire industrial output of Detroit and its suburbs. An opinion as to
what the District of Columbia produces is left to the reader.
Returning to our original question: we, of course, cannot sue a robot. There
is, however, nothing that prevents us from suing the manufacturers of
“autonomous” or “driverless” vehicles on the grounds
that their products are “defective” in that they cannot be
used as a reliable substitute for human control, nor relieve us of the
duty we owe to others.
In upcoming posts we’ll look at how robotic technology can compensate
for human shortcomings and how, in some cases, robotics can actually amplify