Artificial intelligence is moving so fast that its progress can sometimes look unbelievable. But if we look at the other side and analyze the dangers posed by artificial intelligence, it becomes another discussion entirely. We are talking about its unintended effects here. If a super-intelligent AI were someday created and its algorithms put to use in criminal activity, the destruction could be massive. What sort of future are we expecting? A jobless society in which everybody enjoys machine-produced wealth? Can we control these machines, or will we have to obey them?

These are some of the questions that pop into mind whenever we discuss AI's future and its impact. Recently, research published by the Future of Humanity Institute, the Centre for the Study of Existential Risk, and the Elon Musk-backed non-profit OpenAI warned that many people want to use AI for immoral, criminal, or malicious purposes.

Can AI prove to be a threat to us?

While we can expect these machines to reduce labor and manpower, they may outsmart us too. There might be some room left for entertainers, writers, and other creative people. Computers would become efficient enough to program themselves and absorb vast quantities of new information. One of the greatest dangers would be to the employment sector, which AI and robotics could damage greatly.


We shouldn't make a fuss about a future AI takeover; the genuine danger is that we put too much trust in the smart systems we are building. We should always recall that machine learning works by training a program to spot patterns in data. Once trained, it is put to work analyzing new, unseen data and producing answers about it. In any case, when the computer delivers an answer, we are commonly unable to perceive how it arrived at it.
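To make that concrete, here is a minimal sketch of the train-then-predict pattern described above. The use of scikit-learn and the numbers themselves are assumptions for illustration, not something from the research cited here:

```python
# A minimal sketch of the train-then-predict pattern described above.
# scikit-learn and the toy numbers are assumptions for illustration only.
from sklearn.ensemble import RandomForestClassifier

# Training step: the program is shown known examples so it can spot patterns.
X_train = [[25, 40000], [35, 60000], [45, 80000], [50, 30000]]  # e.g. age, income
y_train = [0, 1, 1, 0]                                          # e.g. loan repaid (1) or not (0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)

# Prediction step: the trained model is given unseen data to analyze.
print(model.predict([[30, 50000]]))  # an answer comes out, but no human-readable reason for it
```

The last line is the whole point: the model returns a decision, yet nothing in its output explains how it got there.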

Algorithms are being used to make life-changing decisions. But since so much of the data we feed AIs is imperfect, we should not expect perfect answers all the time. Recognizing that is the first step in managing the risk. Decision-making processes built on top of AIs need to be made more open to scrutiny. Since we are building artificial intelligence, it is likely to be both as brilliant and as flawed as we are. The bad part is that we cannot ban super-intelligent computers altogether, because the advantages they bring in economics, the military, and medicine are unmatched and compelling.
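A toy illustration of "imperfect data in, imperfect answers out" (the hiring data below is entirely hypothetical): when the training data encodes a historical bias, the model faithfully reproduces it.

```python
# A toy illustration (hypothetical data) of imperfect data producing imperfect answers.
from sklearn.linear_model import LogisticRegression

# Hiring records: [years_of_experience, group] -> hired (1) or rejected (0).
# Group 1 candidates were historically rejected regardless of experience.
X_train = [[2, 0], [5, 0], [8, 0], [2, 1], [5, 1], [8, 1]]
y_train = [0, 1, 1, 0, 0, 0]

model = LogisticRegression()
model.fit(X_train, y_train)

# An experienced group-1 candidate still scores poorly: the algorithm
# faithfully reproduces the bias baked into its training data.
print(model.predict([[8, 1]]))  # likely [0]
```

Nothing here is a bug; the model does exactly what it was trained to do, which is why scrutiny of the data matters as much as scrutiny of the code.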

See also: Can artificial intelligence take over the world?

What if they take over?

The next question is this: if machines do overtake us, as virtually everyone in the A.I. field believes they will, the real concern will be about values as well. How would one communicate and negotiate with those machines if and when their values differ greatly from our own? How could somebody judge which direction a machine is thinking in when it thinks in dimensions we cannot conceive of? And what if it goes one step beyond Inverse Reinforcement Learning?
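For context, Inverse Reinforcement Learning is the idea of inferring what an agent values by watching how it behaves. Here is a deliberately naive sketch of that idea; the states and trajectories are hypothetical, and the frequency count stands in for the far more involved algorithms used in real IRL:

```python
# A deliberately naive sketch of the Inverse Reinforcement Learning idea:
# infer what an agent values by watching where it chooses to go.
# (Hypothetical demonstrations; real IRL algorithms are far more involved.)
from collections import Counter

# Expert trajectories through a tiny world of states A-E.
demonstrations = [
    ["A", "B", "C", "E"],
    ["A", "C", "E"],
    ["B", "C", "E"],
]

# Estimate reward as normalized visitation frequency: states the expert
# keeps returning to are assumed to be the ones it values.
visits = Counter(state for traj in demonstrations for state in traj)
total = sum(visits.values())
estimated_reward = {state: count / total for state, count in visits.items()}

print(estimated_reward)  # "C" and "E" score highest: the inferred goals
```

The worry in the paragraph above is precisely that a smarter machine might behave in ways this kind of inference can no longer decode.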

Some tech experts have pointed out that it is important to realize that the goals of machines could change as they get smarter. See why Elon Musk wants to ban killer AI bots. Once computers can effectively reprogram and improve themselves, it could lead us to an intelligence explosion. There is already a lot to analyze in Big Data and the advancement of the Internet: large amounts of data are collected and then fed to algorithms to make predictions, yet in practice one has no way of knowing when the data was collected, whether it was validated, or whether it was updated with correct information.
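As a sketch of what such a validation step might look like in practice (the record schema and the age threshold below are assumptions, not something the article prescribes):

```python
# A minimal sketch (hypothetical schema) of the validation step that is often
# missing before collected data is fed to a prediction algorithm.
from datetime import datetime, timedelta, timezone

def validate_record(record, max_age_days=30):
    """Reject records with missing fields or stale timestamps."""
    required = {"user_id", "value", "collected_at"}
    if not required.issubset(record):
        return False  # incomplete data: we cannot trust a prediction built on it
    age = datetime.now(timezone.utc) - record["collected_at"]
    return age.days <= max_age_days  # stale data is as risky as missing data

stale = {"user_id": 42, "value": 3.14,
         "collected_at": datetime.now(timezone.utc) - timedelta(days=90)}
print(validate_record(stale))  # False: collected too long ago to be trusted
```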

Few people would have even dreamt of these AI risks and threats some years ago. What risks lie ahead? Nobody really knows; all of our judgments are based on the current situation.
