Nick Bostrom is a Swedish philosopher. In 2002, he said something about Artificial Intelligence that could shake your confidence and trust in AI. He argued that excessive reliance on AI could permanently damage human intelligence and our inbuilt potential. He calls this an "existential catastrophe".
To explain this, he describes two kinds of AI failure: technical failure and philosophical failure.
What is a technical failure?
When you design an AI and it does not behave the way you intended, that is a technical failure.
Suppose you design an AI to find hidden explosives in forests. During training, you show it two sets of pictures: in the first set there are explosives, in the second set there are none. During testing, the AI succeeds, but in a real-world situation it fails. Follow-up research finds that the AI had differentiated the pictures on the basis of weather: sunny pictures were identified as "explosives" and cloudy pictures as "no explosives". The model passed the test for the wrong reason, so it fails in practice. This kind of technical failure is the most basic problem one can face with AI.
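The shortcut learning in this story can be sketched in a few lines of Python. Everything here is hypothetical and invented for illustration: the tiny dataset, the two made-up features (brightness standing in for weather, and a "trace" feature standing in for the real explosives signal), and a deliberately naive learner that picks a single feature to threshold.

```python
# Toy sketch of the "shortcut" failure described above (hypothetical data):
# each example is (brightness, explosive_trace), label 1 = explosives present.
# In this training set, every explosives picture happens to be sunny (bright),
# so brightness separates the classes just as well as the true feature.
train = [
    ((0.9, 0.8), 1),  # sunny, explosives
    ((0.8, 0.9), 1),  # sunny, explosives
    ((0.2, 0.1), 0),  # cloudy, no explosives
    ((0.3, 0.0), 0),  # cloudy, no explosives
]

def best_threshold_feature(data):
    """Pick the single feature index whose 0.5 threshold best fits the data."""
    best = None
    for i in range(2):
        correct = sum((x[i] > 0.5) == bool(y) for x, y in data)
        if best is None or correct > best[1]:
            best = (i, correct)
    return best[0]

# Brightness (index 0) fits the training data perfectly, so this naive
# learner locks onto it -- the spurious shortcut, not the real signal.
feature = best_threshold_feature(train)

def predict(x):
    return int(x[feature] > 0.5)

# Real-world case: a cloudy day with explosives actually present.
print(predict((0.2, 0.9)))  # prints 0 -- the shortcut model misses them
```

The point of the sketch is that nothing in the training data punishes the model for choosing the wrong feature; only deployment on data where weather and explosives come apart reveals the failure.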
The second kind is philosophical failure. Suppose you want to do good for humanity and design an AI that could bring greater welfare to it. Say I believe in the liberal movement and design an AI that promotes it. That is good for those who actually follow the liberal movement, but those who oppose it, such as some conservatives, may not like my AI at all. Whose values should the AI serve? That is why philosophical failure is also fundamental.
Because AI is open to both technical and philosophical failure, treating it as perfect and infallible could bring existential catastrophe. So if you rely on AI, make sure you use it wisely, because it could damage your own intelligence.
For the video, follow the link



Drop your thoughts below!