Initially, I thought the concept of open-sourcing the training data to the public was pretty cool and made sense. But after a group of us gave our input on a few different scenarios, it became clear that opinions differed heavily on many of them. Considering that the number of self-driving cars is going to skyrocket in the near future, the software's actual decision making deserves serious emphasis, because it could decide your fate on the road.
The type of ethical decision future cars will be faced with
This type of thinking made me think of Philip Brey's paper on the issues that emerge as software expands. As Brey describes, while AI software is still in its early stages, it is critical that it be implemented with the correct ethics, because the larger it grows in the years to come, the more difficult it becomes to change.
I began to consider the perspective of the manufacturer. If I were designing an autonomous vehicle that I was ultimately selling to a consumer, I would probably want the consumer to feel that they were the utmost priority at all times. As both a buyer and a seller of an autonomous vehicle, I feel this would be the most intuitive priority for the software system.
Probable future of automobiles.
Funny enough, while this perspective seems logical, it conflicts with the ethics that Brey and the Moral Machine try to consider, where the actual driver of the car is taken out of the equation. Considering that AI software from automobile manufacturers is growing increasingly large, it seems fitting that some kind of standards board be put in place for the ethics that the software must prioritize. Otherwise, for years to come, consumers of autonomous vehicles may be buying cars whose software is less safe than others'. A standards board could bridge the gap between consumer safety and the interests of the manufacturer, which could be paramount for ensuring ethical AI and safe software for consumers of autonomous vehicles.
Great post, your revised version is certainly better, and I can see a lot of the changes you've made to make that happen. Adding pictures definitely makes it easier to hold attention, and your added pictures do a great job of that. The connection with Philip Brey and his theory on AI works well, and I really like how you tied it up in the last paragraph as well. I agree that it will be really tough for autonomous cars to make the right decisions all the time, and I would really like someone to research and find data comparing the accident rates of AI-driven vs. human-driven cars, as well as analyzing the fatalities from those accidents. It's possible AI might cause fewer accidents, but what if those accidents are more severe?
Hi Chrisboi, great improvements in the format of your post. Breaking up your paragraphs and incorporating more pictures made your post easier to read. However, I don't think the second photo you chose was relevant to the previous and following paragraphs. You make good arguments for standardized ethics in autonomous cars. I think you could further improve by providing more explanation of Brey's ideas; for example, you don't explain how your perspective as a manufacturer conflicts with Brey's ethics.
I enjoyed reading your blog post! I think you did a great job of incorporating Philip Brey's reading with the Moral Machine, and I liked how you drew a connection through the conflict between their ideas. You did a great job of pointing out how your perspective actually goes against what Brey and the Moral Machine try to consider. I believe it is important to think about this, as our society and its advancements are changing quickly and will continue to change even faster in the future.