Samuel Teklemariam
David Leaton
TRU 110
15 March 2020
AI as a threat to humanity
On February 12, 2017, on the website Quora, Yann LeCun, Director of AI Research at Facebook and Professor at NYU, said that he does not think AI will become an existential threat to humanity. He argues that if humans are smart enough to build machines with superhuman intelligence, then we will not be stupid enough to give them infinite power to destroy humanity. LeCun claims that the will to dominate is a distinctly human one and that, even in humans, intelligence is not correlated with a desire for power. He also posits that the bad things humans do to each other are specific to humans, so intelligent machines will not exhibit these behaviors unless we explicitly build them in.
Even though the points LeCun brings up sound reasonable, I do not agree. I believe that one day artificial intelligence will control humanity. Looking at the way things are going right now, it seems inevitable. As technology grows more advanced, we humans are becoming more and more reliant on it, and bit by bit we are giving it control over our lives. It might not seem that extreme at this point. As time passes, though, the little things we rely on technology for, like waking us up in the morning, reminding us about an afternoon meeting, or giving us directions, will lead us to a point of no return, where living without those crutches seems impossible. These small conveniences will become more important as technology grows, and that is when we will start giving the machines more control over our lives.
Yann LeCun's second point was that if we are smart enough to build machines with superhuman intelligence, then we will not be stupid enough to give them infinite power to destroy humanity. However, the question will not be whether we are smart enough to limit the power of the computers; I do not think that is a hard task. Over time, though, we will need computers to do more and more complicated things, so the limits we set for them in the beginning will have to be changed again and again. This can also happen when competing companies working on their own versions of an AI want their product to be capable of more than the competition's. When the limits keep changing and AIs become capable of complex thought, they will be able to create their own AI or alter the limits humans have placed on them. With that kind of capability, there is no telling what the final outcome will be.
Yann LeCun also posits that intelligence is not correlated with a desire for power. But power might not mean the same thing to a super-intelligent AI as it does to a human. Power can be defined as "the capacity or ability to direct or influence the behavior of others or the course of events." This is only one definition; just as different people define power differently, a super-intelligent AI will have its own definition. For an AI, power, or what we perceive as power, might simply be the control it has over our daily lives. This may not happen all at once, and computers might not even intend to gain complete control over human beings. When people willingly give machines control and the ability to influence their lives, AIs do not need a desire for power in order to end up with it. And after individual humans hand their lives over to computers, who is to say that entire governments won't do the same?
Many people agree that a fully functional AI will be produced in the future, even if not in the near future. Whether it threatens humanity is still a controversial topic, but, in my opinion, the threat is very likely. No one can deny the rate at which technology is growing. The control we give to computers will keep growing as technology advances, and who is to say that an AI given the task of solving the energy or waste crisis won't conclude that humans are the problem? The AI would not be acting out of a desire for power or world domination; it would simply be carrying out the task for which it was made, and in doing so, its answer to the problem could very well be the elimination of the human race or complete control over the actions of humans.