Hands Off the Wheel? How Will Vehicles Respond?

An autonomous automobile is a vehicle that can guide itself without human intervention.  This kind of vehicle has become a concrete reality and may pave the way for future systems where computers take over the art of driving.  An autonomous car is also known as a driverless car, robot car, self-driving car, or autonomous vehicle.  Before autonomous driving becomes reality, there is a wide range of legal and ethical questions to be answered.  If motor vehicles are to be truly autonomous and able to operate responsibly on our roads, they will need to replicate, or do better than, the human decision-making process.  But some decisions are more than just a mechanical application of traffic laws and the plotting of a safe path.  They seem to require a sense of ethics, and this is a notoriously difficult capability to reduce to algorithms for a computer to follow.

In this essay I will discuss the history of automation, providing background on the technological advances that have made this topic so important, as well as the main ethical questions you may or may not have considered before reading this.  Can drivers take their hands off the wheel?  How will vehicles respond in unexpected situations?  And who bears the responsibility?  These are just a few of the many questions regarding autonomous vehicles that still need to be addressed.

While this subject is currently a hotbed of discussion, I am certain that autonomous vehicles are here to stay.  Automation not only makes driving more convenient, but also has the potential for lower emissions and greater safety.  It reduces stress on drivers during monotonous trips in traffic jams or on the highway, while still letting them take the wheel for routes that are more fun to drive.  While I will not be able to touch on all the ethical topics, I will do my best to select a couple of the more difficult and debatable scenarios to discuss.

“It may come as a surprise to some that key breakthroughs in automation technology, covering land, sea, and air, have been in development for hundreds of years, building the foundation for the cars of tomorrow.”  It all began around 1500, when Leonardo da Vinci developed a self-propelled cart that used springs under high tension to provide power that could move the cart along a predetermined path.  Although this technology is only a distant precursor to the car, the device is sometimes considered the world’s first robot.  It wasn’t until the late 1800s that engineers started to reinvestigate the idea of integrating automation into their technology.  Automation can be found in technologies throughout history, such as the torpedo (1868), the aircraft autopilot (1933), cruise control (1945), and even NASA’s lunar rover (1961).

In 1977, Japan’s Tsukuba Mechanical Engineering Laboratory designed the first fully automated passenger vehicle, which could recognize street markings while traveling at nearly 20 miles per hour.  Although this technology had many issues and never took off, the use of cameras to assist in guidance systems can still be found in modern autonomous vehicles today.  “In 1987, the VaMoRs, developed by the German engineer Ernst Dickmanns, took camera technology one step further.  Dickmanns’s key innovation was “dynamic vision,” allowing the image system to filter out extraneous “noise” and focus only on relevant objects.  Today, this type of imaging is crucial in helping self-driving vehicles identify potential hazards and their locations.”

“While we tend to think of autonomous vehicles as a means of converting humans from drivers to passengers, another class of autonomous devices is designed to travel completely alone.  Nowhere is this more visible than in the world of drones, the most noteworthy of which has been General Atomics’ Predator (1995), an unmanned plane that for 20 years has been flying over global hotspots for 14 hours at a time.  Drones aren’t just military vehicles, of course.  The Predator is decked out with technologies being adapted for cars, including radar that can see through smoke or clouds and thermal imaging cameras that enable travel by night.”

“At this point, it
is important to clearly distinguish the tasks of technology from ethics and the
haunting challenges confronting each. On the technology side, an autonomous
car’s system of sensors and control algorithms must be able to substitute for a
human driver and even surpass a driver in relevant capacities. To achieve this,
normal signal processing techniques (sensor fusion, object classification, and
so on) are not enough. The algorithms must incorporate large parts of a human
driver’s accumulated experience. Current sensor data must be integrated and
processed together with the acquired understanding of complex contextual
relations. The result of this process will be a statistical situation
assessment with probabilities (risk and gain) for different possible reactions.
Given the complexity and diversity of possible scenarios, the “right” reaction
cannot be programmed in advance for every concrete situation, but has to be
calculated according to a defined algorithm.”
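
To make the quoted idea of a statistical situation assessment concrete, here is a minimal sketch in Python.  Everything in it, the candidate reactions, the probabilities, and the risk and gain values, is an invented illustration under my own assumptions, not an actual control algorithm from any manufacturer.

    # Hypothetical sketch: each candidate reaction is scored by a
    # probability-weighted combination of its gain and its risk.
    # All names and numbers below are invented for illustration.

    candidate_reactions = {
        # reaction: (p_success, gain_if_success, risk_if_failure)
        "continue in lane": (0.95, 1.0, 9.0),
        "brake hard":       (0.80, 2.0, 4.0),
        "change lanes":     (0.70, 3.0, 7.0),
    }

    def assess(p_success, gain, risk):
        # Expected value of a reaction: weighted gain minus weighted risk.
        return p_success * gain - (1.0 - p_success) * risk

    best = max(candidate_reactions, key=lambda r: assess(*candidate_reactions[r]))
    print(best)  # -> "brake hard" under these invented numbers

The point of the sketch is only that the “right” reaction is computed at run time from probabilities, exactly as the quotation describes, rather than looked up from a table of pre-programmed situations.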

There is no doubt that automation technology has come a long way and become quite remarkable.  It is difficult to resist the idea that something as simple as the daily inconvenience of rush hour could soon be a thing of the past.  In 2015, Tesla was able to grant its cars the ability to self-drive hands-free in highway and freeway driving with a simple over-the-air system update.  Amazing.  This brings me to the topic of our discussion: full automation.  Full automation is defined as the full-time performance of all driving tasks by an automated driving system, under all roadway and environmental conditions that a human driver could manage, with no human intervention needed.  The big question is, how should these automated systems react in different real-world scenarios?  Nearly everything to be discussed is still in front of the industry, so while I will be as objective as possible, there are no definitive answers at such an early stage of the technology.

Let me begin my ethics discussion with a simple scenario that illustrates the importance of ethics when implementing autonomous cars into our society.  Imagine your autonomous car encounters a situation that forces a choice between two terrible outcomes.  Your car must either swerve one direction and strike a child, or swerve the other direction and strike an elderly person.  Given the car’s velocity, either victim would surely be killed on impact.  If your car does not swerve, both victims will be struck and killed; so there is good reason to think that you ought to swerve one way or the other.  The question becomes, what is the ethically correct decision in this scenario?  If you were the company responsible for programming the self-driving vehicle, how would you instruct it to behave if it encountered this choice, even if the odds of it occurring were very low?

To some, the choice of striking the elderly person could seem the lesser evil.  The thought behind this is that the child still has their entire life in front of them, while the elderly person has already lived a full life and had their fair share of experiences.  I believe most of us would agree that the elderly person has a right to life, and that having lived longer does not make their life any less valuable than a child’s; but regardless, something about saving a child’s life seems to outweigh the elder’s, if an accident is unavoidable.  Whatever your views on this scenario, either choice would be considered ethically incorrect according to the relevant professional codes of ethics.  “Among its many pledges, the Institute of Electrical and Electronics Engineers (IEEE), for instance, commits itself to treat fairly all persons and to not engage in acts of discrimination based on race, religion, gender, disability, age, national origin, sexual orientation, gender identity, or gender expression.”  Therefore, to treat individuals differently based on their age, when age is not a relevant factor, would be exactly the kind of discrimination that the IEEE prohibits.

We’ve hit a roadblock and cannot ethically choose a path forward, so what should be done?  As previously discussed, one solution would be to have the vehicle refuse to swerve, killing both the child and the elderly person; but this feels like a much worse outcome than having a single pedestrian die, even if the choice of that single victim involves prejudice.  A second solution would be to program the vehicle to “flip a coin” and choose a victim at random, without prejudice toward either person.  This feels even more ethically troubling, because we would be choosing between lives without any deliberation at all, leaving the choice to chance when there are potentially some reasons to choose one victim over the other, as terrible and uncomfortable as those reasons might be.  The truth is that there is no easy solution to this scenario, and that in itself points to the need for ethics when developing autonomous vehicles.
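
To see just how little deliberation that second solution contains, here is what a literal “coin flip” would look like in code; this is a purely hypothetical sketch, with the scenario labels invented for illustration.

    # The "flip a coin" approach made literal: the victim is selected
    # uniformly at random, with no reasons encoded anywhere in the logic.
    import random

    def choose_victim(victims):
        return random.choice(victims)  # chance alone decides

    print(choose_victim(["child", "elderly person"]))

Written out this way, the discomfort is easy to see: the entire moral question is delegated to a random number generator.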

You may disregard the scenario above, as well as others that follow, saying that there is no way this would occur with autonomous vehicles.  It could be suggested that these vehicles don’t need to confront hard ethical choices, and that stopping the car or passing control back to the driver is the easy way around ethics.  But I will contend here that “braking and relinquishing control will not always be enough.”  “These solutions may perhaps be the best we have today, but if automated cars are to ever operate more broadly outside of limited highway environments, they will need more response-options.”

One possible reply is that, while imperfect, braking could avoid most emergency situations, even if it regrettably makes things worse in a small number of cases.  From a cost-benefit perspective, the benefits far outweigh the risks, and the numbers speak for themselves.  But is this really the case?  Braking and other options won’t be enough for crash-avoidance, because crash-avoidance by itself is not enough.  Some accidents are unavoidable, as when a deer or a pedestrian suddenly runs in front of your moving car.  Therefore, autonomous vehicles will also need to engage in some form of crash-optimization.  By crash-optimization I mean deciding on the course of action that will most likely lead to the least amount of harm, and this could mean having no choice but to choose between two disasters, as in the scenario where you had to strike either the child or the elderly person.
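
A minimal sketch of crash-optimization as expected-harm minimization follows; the maneuvers, probabilities, and harm scores are all hypothetical illustrations under my own assumptions, not values from any real system.

    # Hypothetical sketch: pick the maneuver with the lowest expected harm.
    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        p_collision: float        # estimated probability of a collision
        harm_if_collision: float  # estimated severity (arbitrary units)

        def expected_harm(self) -> float:
            return self.p_collision * self.harm_if_collision

    options = [
        Maneuver("brake in lane", p_collision=0.9, harm_if_collision=8.0),
        Maneuver("swerve left",   p_collision=0.5, harm_if_collision=10.0),
        Maneuver("swerve right",  p_collision=0.4, harm_if_collision=10.0),
    ]
    best = min(options, key=lambda m: m.expected_harm())
    print(best.name)  # -> "swerve right" under these invented numbers

Note that even in this toy version the optimizer happily trades one kind of victim for another; minimizing a harm score does not by itself answer the ethical question of how that score should be assigned.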

There may be reasons, not yet accounted for, to prefer striking the child rather than the elderly person.  If an autonomous vehicle were programmed to protect its occupants, then it would make sense to choose a collision with the lightest object possible.  That means that if the choice were between two vehicles, the car should be programmed to strike the lighter vehicle rather than the heavier one.

On the other hand, what if the car were programmed to protect other drivers and pedestrians over its occupants?  Then the car should be programmed to prefer a collision with the heavier vehicle over the lighter one.  “If vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications are rolled out (or V2X to refer to both), or if an autonomous car can identify the specific models of other cars on the road, then it seems to make sense to collide with a safer vehicle (such as a Volvo SUV that has a reputation for safety) over a car not known for crash-safety (such as a Ford Pinto that’s prone to exploding upon impact).”
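
To illustrate how flipping the programmed objective flips the “target,” here is a toy sketch; the vehicle descriptions, masses, and safety ratings are hypothetical stand-ins, not real crash data.

    # Toy sketch: the same two collision options ranked under two objectives.
    obstacles = [
        {"model": "heavy SUV, high crash-safety rating",   "mass_kg": 2400, "safety": 0.9},
        {"model": "light compact, low crash-safety rating", "mass_kg": 1100, "safety": 0.4},
    ]

    def choose_target(obstacles, protect_occupants):
        if protect_occupants:
            # Occupant-first: collide with the lightest object available.
            return min(obstacles, key=lambda o: o["mass_kg"])
        # Others-first: collide with the vehicle best able to absorb the crash.
        return max(obstacles, key=lambda o: o["safety"])

    print(choose_target(obstacles, True)["model"])   # -> the light compact
    print(choose_target(obstacles, False)["model"])  # -> the heavy SUV

Either way, some identifiable class of vehicle is systematically singled out, which is exactly the targeting-algorithm worry raised below.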

The ethical point
I am trying to convey here is that no matter which strategy is adopted by a manufacturer,
choosing to collide with any specific type of object over another very much
resembles a targeting algorithm.  “Somewhat
related to the military sense of selecting targets, crash-optimization
algorithms may involve the deliberate and systematic discrimination of, say,
large vehicles or Volvos to collide into.  The owners or operators of these targeted vehicles
bear this burden through no fault of their own, other than perhaps that they
care about safety or need an SUV to transport a large family.” 

Thus, reasonable ethical principles, like aiming to save the greatest number of lives, can come under stress in the context of autonomous driving.  “An operator of an autonomous vehicle, rightly or not, may very well value his own life over that of everyone else’s, even that of 29 others; or he may even explicitly reject consequentialism. Even if consequentialism is the best ethical theory and the car’s moral calculations are correct, the problem may not be with the ethics but with a lack of discussion about ethics. Industry, therefore, may do well to have such a discussion and set expectations with the public. Users—and news headlines—may likely be more forgiving if it is explained in advance that self-sacrifice may be a justified feature, not a bug.”  So, what if every manufacturer had a proprietary targeting system?  Would consumers choose which vehicle to purchase based on how it is programmed to react under these circumstances?  Or would the government create a standard that all manufacturers had to abide by?

So here we are, on the verge of autonomous driving, and we don’t really know what the future will look like, but we can already see that much work and consideration remains to be done.  “Part of the problem is our lack of imagination. Brookings Institution director Peter W. Singer observed, ‘We are still at the “horseless carriage” stage of this technology, describing these technologies as what they are not, rather than wrestling with what they truly are.’”

If we don’t take the time to work through these ethical considerations and autonomous vehicles then perform poorly, a powerful case could be made that manufacturers were negligent in designing their product, opening them up to tremendous legal liability.  Autonomous vehicles promise great benefits along with unintended effects that are difficult to predict, and there is no stopping them now.  It is impossible to avoid change, and that isn’t necessarily a bad thing.  But “major disruptions and new harms should be anticipated and avoided where possible. That is the role of ethics in innovation policy: it can pave the way for a better future while enabling beneficial technologies. Without looking at ethics, we are driving with one eye closed.”

“As it applies here, robots aren’t merely replacing human drivers, just as human drivers in the first automobiles weren’t simply replacing horses: that would be like mistaking electricity as merely a replacement for candles. The impact of automating transportation will change society in radical ways, and technology seems to be accelerating. As Singer puts it, ‘Yes, Moore’s Law is operative, but so is Murphy’s Law.’  When technology goes wrong—and it will—thinking in advance about ethical design and policies can help guide us responsibly into the unknown.”