Real Risks of Artificial Intelligence

The Real Risks of AI

Discussions of the risks of AI often bark up the wrong tree and tend to be misleading. Basic misunderstandings of what AI is and isn’t are also creating unnecessary fear, and those fears in turn stifle adoption.

We potentially make our worst-case scenario more likely, not less, by running away from AI or making false assumptions about its evolution. We should also be as focused on understanding human intelligence as we are on AI.

One definition of AI, the one most frequently used in business and academic contexts, is a discipline of computer science concerned with building systems that learn from data.

This definition is largely synonymous with machine learning and encompasses computationally complex tasks such as natural language processing, predictive analytics, pattern recognition, computer vision, robotics, and more.
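To make “learns from data” concrete, here is a minimal sketch, added here as an illustration rather than taken from the article. The spam-vs-ordinary-mail framing, the toy feature values, and the choice of scikit-learn’s LogisticRegression are all assumptions made purely for demonstration.

```python
# A minimal sketch of "a system that learns from data": no explicit rules are
# programmed; the model infers a decision boundary from labeled examples.
# The data below is invented purely for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [number of links, number of ALL-CAPS words] in an email (toy features)
X = [
    [0, 1], [1, 0], [0, 0], [1, 1],    # ordinary mail
    [9, 7], [8, 5], [7, 9], [6, 6],    # spam-like mail
]
y = [0, 0, 0, 0, 1, 1, 1, 1]           # 0 = not spam, 1 = spam

model = LogisticRegression()
model.fit(X, y)                         # "learning": fit parameters to the examples

print(model.predict([[0, 2], [8, 8]]))  # expected: [0 1] -- ordinary vs. spam-like
```

The point is that no rule such as “many links means spam” is ever written down; the system derives its own boundary from the examples, which is what separates this kind of AI from ordinary hand-coded software.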

Examples of this type of AI include autonomous cars, robots, chatbots, trading systems, facial recognition, and virtual assistants. These use cases typically combine multiple computational tasks using machine learning to achieve seemingly miraculous results such as self-driving cars.

Virtually every “AI” article is about this type of AI, which will be referenced here as “VAI,” for Vertical Artificial Intelligence.

Faulty systems are the most likely scenario for VAI danger. Weapon systems are the worst case; they obviously carry a much higher risk of killing humans, by accident or at large scale, than a domain like virtual assistants.

Autonomous cars, the most rapidly advancing VAIs and likely the public’s first real experience with this type of technology, also carry risk.

A second definition of AI, which is the more widely perceived pop-cultural meaning, is a self-aware or conscious system that is intelligent in a more profound sense.

This type of AI is more accurately called “AGI,” for Artificial General Intelligence. It is also sometimes called “HLI,” for Human Level Intelligence, which helps further frame what it is and what it might be capable of.

An AGI could be something we have virtually no understanding or recognition of, yet one that may develop a significant understanding of us if given access to the Internet or a large data repository.

True intelligence moves past simple ideas like goal-seeking, which is often considered another cornerstone of varying levels of AI and a potential control mechanism.
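As a hedged illustration of what “goal-seeking” means in this narrow, mechanical sense (this sketch is added here and is not from the article), the loop below simply closes the gap to a fixed numeric target; nothing about it requires understanding, curiosity, or awareness.

```python
# A minimal goal-seeking loop: the "agent" repeatedly nudges its state toward
# a fixed target. It optimizes, but it does not understand, wonder, or want.
def seek_goal(start: float, goal: float, rate: float = 0.5, tolerance: float = 0.01) -> float:
    state = start
    while abs(goal - state) > tolerance:
        state += rate * (goal - state)   # move a fraction of the remaining gap
    return state

print(round(seek_goal(start=0.0, goal=10.0), 2))  # converges toward 10.0, then stops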

Almost all human progress is driven by curiosity, which is a short hop to ambition. Ironically, curiosity is the human trait most likely to lead not just to AGI, but past it to superintelligence.
