In a world where flying cars represent an idealized version of the future, self-driving
cars seem like a great step in the right direction. They are the ultimate convenience,
allowing people to sit back, relax, and not worry - doesn’t that sound perfect? Not
only do they make travel time more productive, but they also eliminate the possibility of
human error. Having hit my own garage with my car one too many times, I find this
prospect a dream come true.
However, as Luciano Floridi says, with any new technology come unforeseen
ethical issues, and self-driving cars are no exception. Right now, the technology
has moved into what James Moor calls the "permeation stage" in “Why we need
better ethics for emerging technologies.” It is no longer just a curiosity; rather, its
integration into society has begun. With this come many unanswered questions.
If a self-driving car gets in an accident, who is liable? The owner of the car? The
manufacturer? The company that created the AI for the car? Further, what happens
when the car encounters an unavoidable accident and needs to decide what to do,
such as choosing between hitting pedestrians and swerving around them, harming
the passengers instead?
These are the kinds of challenges for which no policy has been developed, and they are
exactly what Moor warns about. Since technology is developing so fast, policy
vacuums are created, meaning there is nothing in place to resolve these issues.
We need to think these things through before or while the technology is developed.
Otherwise, we risk losing control of our creations due to a lack of foresight.
Hey Jacey, reading this article makes me think of all the philosophical scenarios we get asked about, such as the trolley problem, which puts philosophical schools of thought in conflict with one another. Should we let the trolley run its course and kill five people, or should we pull the lever and kill one person on the other track? What if the five people were convicts whereas that one person is a child?
These problems are hard to answer on their own, but as technology evolves to the point of autonomy, someone has to explicitly program how a vehicle should behave in certain scenarios, and there are countless scenarios where there will never be a clear right answer. And if these problems are hard enough to answer on their own, it becomes much harder to make a decision, and to judge that decision, when a non-human agent makes it.
Hi Jacey, I like your introduction. When you introduce Floridi's and Moor's ideas, you should explain what they are as if you're addressing someone unfamiliar with their work. You make good points about the possible complications that come with self-driving cars. I think a good way to support your argument is by using real examples of ways the development of autonomous cars has been harmful. You could mention current ethical issues that have arisen from autonomous parking features or the autonomous pizza delivery vehicles we see on campus.