At the beginning of 2018, there was a dispute inside Google about whether the company should cooperate with the military to build a security tool. The executives were happy with the deal. On the one hand, Google needed to restore its reputation in society, especially in the US. In recent years, Google has been criticized for becoming less and less patriotic as a US company expanding its business around the world, and cooperating with the military would help change that impression. Also, the military is generally generous about costs and expenses: if the quality is good, they are willing to pay extra. However, scientists and employees at Google were against the deal. They believe technology should be regarded as technology only; because of its potential dangers, it should not become a tool for certain people's profit.
If we take a broad view, we may find it interesting that the improvement of technology seems to have entered a dilemma. On the one hand, the number of new theories and new findings has soared in recent years. Taking advantage of these, scientists and engineers have reached a peak in several academic fields. Take aerospace science as an example: we can hardly see any huge steps in recent years because the field is nearly mature. On the other hand, there is another category of technology that is just starting up: technology for people. Driverless cars and planes, new-generation electronic devices, and more and more new concepts are broadening our horizons every day. However, it turns out that we still know too little about automation. Things become sensitive when we consider the relationship between humans and robots.
I believe engineers around the world, including those at Google, share an idea expressed by Norbert Wiener: "Businesses are so eager to apply automation into their products. However, as things go, they can't resist selfishly misusing the powers to the detriment of our fellow humans and the planet; also, when they finally realize that, they cannot even stop the machine." Similar things are actually happening now. In 2017, AlphaGo, an artificial intelligence program developed by Google's DeepMind lab, beat the world's No. 1 Go player, Ke Jie. There were also two robots produced by Facebook that talked with each other in their own language. These behaviors are all based on learning from human lives. Though it seems like we can control what is happening now, we still do not know whether companies will overlook the potential consequences of using those robots in the pursuit of money, and in the end fail to control the chaos when the world loses order.
Sadly, we ordinary people have no control over this. We can only place our hope in large businesses, trusting that they will do comprehensive research to ensure their products are safe enough to use. All in all, companies should be fully aware of the sensitivity of partly-known technologies, as Hill wrote in his article: "All of these insist that not only is sorcery a sin leading to Hell but it is a personal peril in this life. It is a two-edged sword, and sooner or later it will cut you deep."
I wasn't quite sure what I was getting into when I clicked on this post; the title is a little vague. Fortunately for me, it was an interesting topic!
Comparing the post to your original, I see you removed the reference to the Manhattan Project; this was disappointing, as I found that comparison powerful and salient. That said, the improvements to the length and argumentative aspect were significant! I would also note that I think the video is too long (~5:00) to serve as anything other than a visual stopgap; perhaps this was intentional.
Hi,
Interesting topic. You brought up some interesting points about industry maturity and Norbert's quote. I thought you could better tie these points into the article and expand on them, because the central focus is kind of fuzzy (perhaps clarifying it in the intro or changing the title of the article would be helpful).
Hi!
I also read and commented on your original post. It's nice to be able to see the development of the post and changes since I last read it.
I like that you made the text bigger; it was much easier to read. I also really like that you included a new example. However, with your introduction remaining roughly the same despite this new content, the intro doesn't fit into the bigger picture as well as it did before. Its theme of nationalism and patriotism doesn't quite match.
With that being said, I like that you explored the idea of control and awareness as designers of technology. The example for this was a good fit. It was also nice that you tried to strengthen your arguments and make your main idea come through more clearly (with the exception of the introduction).
Overall great job on the edits and great post!