Nowadays there are robots of all kinds. Some bring food to the tables where people sit in restaurants, while others are built to kill and destroy basically everything in their way. Every day robots become more intelligent and influential. Some difficult jobs, particularly in factories, are now done only by robots, which makes us very dependent on them.
On the one hand, robots can be crucial to our daily lives (for example, they produce vital medicine during epidemics); on the other hand, they can become very dangerous and even deadly, for the following reasons:
1. They don't have feelings the way humans do, which means they feel no guilt if, for example, they rip families apart.
2. They make decisions based purely on economics: they might calculate that a robot is worth 300,000 dollars and a human only 100,000 dollars, which could be decisive when choices have to be made.
The science-fiction writer Isaac Asimov stated the three laws of robotics. The three laws are as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the first law.
3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.
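The strict ranking in the three laws can be sketched in code. The following Python toy is only an illustration: the `Action` class, its fields, and the `choose` function are all invented here, not anything Asimov specified. It picks whichever candidate action violates the highest-priority law least:

```python
from dataclasses import dataclass

# Toy model of Asimov's three laws as a strict priority ordering.
# The fields and the scenario below are invented for this sketch;
# Asimov stated the laws in prose, not as a formal system.
@dataclass
class Action:
    name: str
    harms_human: bool     # would violate the first law
    disobeys_order: bool  # would violate the second law
    destroys_self: bool   # would violate the third law

def choose(actions):
    """Pick the action that violates the highest-priority law least.

    Lexicographic ordering: the first law outranks the second,
    which outranks the third (False sorts before True).
    """
    return min(actions, key=lambda a: (a.harms_human,
                                       a.disobeys_order,
                                       a.destroys_self))

# Example: a human attacks the robot with a sword. Striking back
# would harm the human, so self-preservation has to give way.
strike_back = Action("disarm the attacker by force",
                     harms_human=True, disobeys_order=False,
                     destroys_self=False)
stand_still = Action("stand still and take the blow",
                     harms_human=False, disobeys_order=False,
                     destroys_self=True)
retreat = Action("retreat",
                 harms_human=False, disobeys_order=False,
                 destroys_self=False)

print(choose([strike_back, stand_still, retreat]).name)  # retreat
```

Even this tiny sketch shows the rigidity of the scheme: the robot can never weigh a small harm against a large one, because any first-law violation outranks everything else.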
Isaac Asimov wrote these laws so that programmers would pay attention to how many rights they give robots and would not come to regret the consequences. Although his statements were quite wise 80 years ago, they are not complete.
Firstly, imagine a world governed by Asimov's rules in which someone attacks an armed robot with a sword. The situation makes no sense for the robot at all: it must not injure the attacker, yet at the same time it must not, through inaction, let the human come to harm, and it is also supposed to protect its own existence. So the laws were certainly new and wise at the time, but these days they are not of much use, because quite apart from the big flaw just mentioned, artificial intelligence creates another problem.
The problem with AI is that robots start to think for themselves, in a way that resembles human thought but often is not human at all. They might well come to see themselves as the more complex organisms and regard us as useless. In my view robots act the way we shape them, but once we give them the power of AI they could become dangerous: world conquest is the worst-case scenario, but they could, for instance, start building their own army of robots.
Here you can read an article about killer robots.