How could artificial intelligence harm us?


We often hear that artificial intelligence may harm or even kill humans, so it might prove dangerous.

How could artificial intelligence harm us?









  • Artificial intelligence (specifically AGI) means it has its own sense of what to do and what not to do. Weapons don't have any sense either, yet they are dangerous solely because they can be misused.
    – DuttaA, 12 hours ago

  • This is a bit broad, as there are many reasons and scenarios suggested in which AI could become dangerous. For instance, as DuttaA suggests above, humans may design intelligent weapons systems that decide what to target, and this is a real worry, as it is already possible using narrow AI. Perhaps give more context to the specific fears that you want to understand, by quoting or linking a specific concern that you have read (please use edit).
    – Neil Slater, 12 hours ago

  • @NeilSlater Yes, it might be too broad, but I think that this answer ai.stackexchange.com/a/15462/2444 provides some plausible reasons. I edited the question to remove the possibly wrong assumption.
    – nbro, 3 hours ago







philosophy agi social superintelligence














asked 12 hours ago by Manak (new contributor)
edited 3 hours ago by nbro



4 Answers



















tl;dr



There are many valid reasons why people might fear (or, better, be concerned about) AI, and not all of them involve robots and apocalyptic scenarios.



To better illustrate these concerns, I'll try to split them into three categories.



Conscious AI



This is the type of AI your question is referring to: a super-intelligent, conscious AI that will destroy or enslave humanity. This is mostly brought to us by science fiction. Some notable Hollywood examples are "The Terminator", "The Matrix" and "Age of Ultron". The most influential novels were written by Isaac Asimov and are referred to as the "Robot series" (which includes "I, Robot", which was also adapted as a movie).



The basic premise of most of these works is that AI will evolve to a point where it becomes conscious and surpasses humans in intelligence. While Hollywood movies mainly focus on the robots and the battle between them and humans, not enough emphasis is given to the actual AI (i.e. the "brain" controlling them). As a side note, because of the narrative, this AI is usually portrayed as a supercomputer controlling everything (so that the protagonists have a specific target). Not enough exploration has been made of "ambiguous intelligence" (which I think is more realistic).



In the real world, AI is focused on solving specific tasks! An AI agent capable of solving problems from different domains (e.g. understanding speech and processing images and driving and so on, like humans can) is referred to as Artificial General Intelligence (AGI) and is a prerequisite for AI being able to "think" and become conscious.



Realistically, we are a long way from Artificial General Intelligence! That being said, there is no proof that it can't be achieved in the future. So currently, even though we are still in the infancy of AI, we have no reason to believe that AI won't evolve to a point where it is more intelligent than humans.



Using AI with malicious intent



Even though an AI conquering the world is a long way from happening, there are several reasons to be concerned about AI today that don't involve robots!
This second category covers several malicious uses of today's AI.



I'll focus only on AI applications that are available today. Some examples of AI that can be used for malicious intent:



  • DeepFake: a technique for superimposing someone's face onto an image or video of another person. This has gained popularity recently with celebrity porn and can be used to generate fake news and hoaxes. Sources: 1, 2, 3


  • With the use of mass-surveillance systems and facial-recognition software capable of recognizing millions of faces per second, AI can be used for mass surveillance. Even though we tend to think of China when we think of mass surveillance, many Western cities, like London, Atlanta and Berlin, are among the most-surveilled cities in the world. China has taken things a step further by adopting the social credit system, an evaluation system for civilians that seems to be taken straight out of the pages of George Orwell's 1984.


  • Influencing people through social media. Aside from recognizing users' tastes with the goal of targeted marketing and ad placement (a common practice by many internet companies), AI can be used maliciously to influence people's voting (among other things). Sources: 1, 2, 3.


  • Hacking.


  • Military applications, e.g. drone attacks, missile targeting systems.


Adverse effects of AI



This category is pretty subjective, but the development of AI might carry some adverse side effects. The distinction between this category and the previous one is that these effects, while harmful, aren't intentional; rather, they occur as AI develops. Some examples are:



  • Jobs becoming redundant. As AI becomes better, many jobs will be replaced by it. Unfortunately, there is not much that can be done about this, as most technological developments have this side effect (e.g. agricultural machinery caused many farmers to lose their jobs, automation replaced many factory workers, and computers did the same).


  • Reinforcing the bias in our data. This is a very interesting category, as AI (and especially neural networks) is only as good as the data it is trained on, and it tends to perpetuate and even amplify the different forms of social bias already present in the data. There are many examples of networks exhibiting racist and sexist behavior. Sources: 1, 2, 3, 4.
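The last point can be sketched in a few lines. The following toy example is not from the answer: the hiring records, group labels and threshold are all made up for illustration. A trivial frequency "model" fit to biased historical data reproduces the bias, and even hardens it into a 0/1 decision:

```python
# Toy sketch: a model fit to biased historical data reproduces that bias.
# Hypothetical hiring records where group "A" was historically favoured.
from collections import defaultdict

def fit(records):
    """Learn P(hired = 1 | group) from (group, hired) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += hired
        counts[group][1] += 1
    return {g: h / n for g, (h, n) in counts.items()}

def predict(model, group, threshold=0.5):
    # Predict "hire" whenever the historical hire rate clears the threshold.
    return model[group] >= threshold

# Biased history: group A was hired 80% of the time, group B only 20%.
history = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 2 + [("B", 0)] * 8
model = fit(history)

print(predict(model, "A"))  # True  - the historical bias is reproduced
print(predict(model, "B"))  # False - and hardened into a categorical refusal
```

Nothing in the training step "knows" about fairness; the model simply compresses the biased history, which is the mechanism behind the sourced examples above.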






  • It's interesting that while "Hollywood" seems to equate AI with robots and world domination (which is still in the realm of science fiction), we are actually facing threats from AI even today. I would have preferred the category split to be something like "AI designed to cause harm" and "AI designed to do good but got out of hand". Anyway, nice answer!
    – Jerome, 32 mins ago


In addition to the other answers, I would like to add the nuking-cookie-factory example:



Machine-learning AIs basically try to fulfill a goal described by humans. For example, humans create an AI running a cookie factory. The goal they implement is to sell as many cookies as possible at the highest profit margin.



Now, imagine a sufficiently powerful AI. This AI will notice that if it nukes all the other cookie factories, everybody has to buy cookies from its factory, making sales rise and profits higher.



So, the human error here is that the objective includes no penalty for using violence. This is easily overlooked because humans didn't expect the algorithm to come to this conclusion.
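A minimal sketch of this misspecification (all action names and numbers below are hypothetical, chosen only to illustrate the point): a planner that maximizes profit alone happily picks the violent action, while adding a penalty term for harm rules it out.

```python
# Hypothetical action set for the cookie-factory agent.
ACTIONS = {
    # action: (expected_profit, harm_caused)
    "improve_recipe":   (10.0, 0.0),
    "cut_prices":       (12.0, 0.0),
    "nuke_competitors": (50.0, 1.0),  # hugely profitable, hugely harmful
}

def best_action(harm_penalty=0.0):
    """Pick the action maximising: profit - harm_penalty * harm."""
    return max(ACTIONS, key=lambda a: ACTIONS[a][0] - harm_penalty * ACTIONS[a][1])

print(best_action())                   # 'nuke_competitors' - violence unpenalised
print(best_action(harm_penalty=100))   # 'cut_prices' - the penalty rules it out
```

The bug is not in the optimizer; it is in the objective, which is exactly the human error described above.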






– Lustwelpintje (new contributor)

  • Haha, nice example (+1)
    – Mary93, 21 mins ago



















I would say the biggest real threat is the unbalancing/disruption we are already seeing. The chances of putting 90% of the country out of work are real, and the results (an even more uneven distribution of wealth) are terrifying if you think them through.






– Bill K (new contributor)


    If a robot is operated through a human-machine interface, the device is essentially a remote-controlled car. It's possible to talk with the operator behind the joystick and negotiate the desired behavior. Remote-controlled robots are safe inventions because their actions can be traced back to humans and their motivation can be anticipated. They can be used to improve daily life, and it's fun to play with them.



    In contrast, some robots aren't controlled by joysticks but by an internal random-number generator. Dice are known for their social role in gambling, but they also carry a mystical meaning. A random generator is usually associated with chaotic behavior, controlled by dark forces outside human influence. An electronic die built into a robot and combined with a learning algorithm is the opposite of a human-machine interface, and it's a potential troublemaker, because the randomly controlled robot will play games with humans that can't be anticipated. It's not possible to predict the next roll of a die, so the robot will behave abruptly as well.
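The contrast can be sketched in code (a hypothetical illustration, not from the answer; the action names are made up): the remote-controlled robot's next action simply mirrors the operator's command, while the dice-driven robot's next action depends on internal random state that an outside observer cannot anticipate.

```python
# Sketch: predictable remote control vs. unpredictable dice-driven control.
import random

def remote_controlled(command):
    # Fully predictable: the action mirrors the operator's command.
    return command

def dice_driven(rng):
    # Unpredictable from the outside: depends on the internal random state.
    return rng.choice(["forward", "back", "left", "right"])

print(remote_controlled("forward"))  # always 'forward'
rng = random.Random()                # unseeded: behaviour varies from run to run
print(dice_driven(rng))              # any of the four actions
```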



    The connection between randomly controlled games and negative social impact is explained by the following quote:




    "In many traditional non-Western societies gamblers may pray to the gods for success and explain wins and losses in terms of divine will." – Binde, Per. "Gambling and religion: Histories of concord and conflict." Journal of Gambling Issues 20 (2007): 145-165.







      4 Answers
      4






      active

      oldest

      votes








      4 Answers
      4






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      7














      $begingroup$

      tl;dr



      There are many valid reasons why people might fear (or better be concerned about) AI, not all involve robots and apocalyptic scenarios.



      To better illustrate these concerns, I'll try to split them into three categories.



      Conscious AI



      This is the type of AI that your question is referring to. A super-intelligent conscious AI that will destroy/enslave humanity. This is mostly brought to us by science-fiction. Some notable Hollywood examples are "The terminator", "The Matrix", "Age of Ultron". The most influential novels were written by Isaac Asimov and are referred to as the "Robot series" (which includes "I, robot", which was also adapted as a movie).



      The basic premise under most of these works are that AI will evolve to a point where it becomes conscious and will surpass humans in intelligence. While Hollywood movies mainly focus on the robots and the battle between them and humans, not enough emphasis is given to the actual AI (i.e. the "brain" controlling them). As a side note, because of the narrative, this AI is usually portrayed as supercomputer controlling everything (so that the protagonists have a specific target). Not enough exploration has been made on "ambiguous intelligence" (which I think is more realistic).



      In the real world, AI is focused on solving specific tasks! An AI agent that is capable of solving problems from different domains (e.g. understanding speech and processing images and driving and ... - like humans are) is referred to as General Artificial Intelligence and is required for AI being able to "think" and become conscious.



      Realistically, we are a loooooooong way from General Artificial Intelligence! That being said there is no evidence on why this can't be achieved in the future. So currently, even if we are still in the infancy of AI, we have no reason to believe that AI won't evolve to a point where it is more intelligent than humans.



      Using AI with malicious intent



      Even though an AI conquering the world is a long way from happening there are several reasons to be concerned with AI today, that don't involve robots!
      The second category I want to focus a bit more on is several malicious uses of today's AI.



      I'll focus only on AI applications that are available today. Some examples of AI that can be used for malicious intent:



      • DeepFake: a technique for imposing someones face on an image a video of another person. This has gained popularity recently with celebrity porn and can be used to generate fake news and hoaxes. Sources: 1, 2, 3


      • With the use of mass surveillance systems and facial recognition software capable of recognizing millions of faces per second, AI can be used for mass surveillance. Even though when we think of mass surveillance we think of China, many western cities like London, Atlanta and Berlin are among the most-surveilled cities in the world. China has taken things a step further by adopting the social credit system, an evaluation system for civilians which seems to be taken straight out of the pages of George Orwell's 1984.


      • Influencing people through social media. Aside from recognizing user's tastes with the goal of targeted marketing and add placements (a common practice by many internet companies), AI can be used malisciously to influence people's voting (among other things). Sources: 1, 2, 3.


      • Hacking.


      • Military applications, e.g. drone attacks, missile targeting systems.


      Adverse effects of AI



      This category is pretty subjective, but the development of AI might carry some adverse side-effects. The distinction between this category and the previous is that these effects, while harmful, aren't done intentionally; rather they occur with the development of AI. Some examples are:



      • Jobs becoming redundant. As AI becomes better, many jobs will be replaced by AI. Unfortunately there are not many things that can be done about this, as most technological developments have this side-effect (e.g. agricultural machinery caused many farmers to lose their jobs, automation replaced many factory workers, computers did the same).


      • Reinforcing the bias in our data. This is a very interesting category, as AI (and especially Neural Networks) are only as good as the data they are trained on and have a tendency of perpetuating and even enhancing different forms of social biases, already existing in the data. There are many examples of networks exhibiting racist and sexist behavior. Sources: 1, 2, 3, 4.






      share|improve this answer









      $endgroup$










      • 2




        $begingroup$
        It's interesting that while "Hollywood" seems to equate AI with robots and world domination (which is still in the realm of science-fiction), we are actually facing threats from AI even today. I would have preferred the category split to be something like "AI designed to cause harm" and "AI designed to do good but got out of hand". Anyway, nice answer!
        $endgroup$
        – Jerome
        32 mins ago















      7














      $begingroup$

      tl;dr



      There are many valid reasons why people might fear (or better be concerned about) AI, not all involve robots and apocalyptic scenarios.



      To better illustrate these concerns, I'll try to split them into three categories.



      Conscious AI



      This is the type of AI that your question is referring to. A super-intelligent conscious AI that will destroy/enslave humanity. This is mostly brought to us by science-fiction. Some notable Hollywood examples are "The terminator", "The Matrix", "Age of Ultron". The most influential novels were written by Isaac Asimov and are referred to as the "Robot series" (which includes "I, robot", which was also adapted as a movie).



      The basic premise under most of these works are that AI will evolve to a point where it becomes conscious and will surpass humans in intelligence. While Hollywood movies mainly focus on the robots and the battle between them and humans, not enough emphasis is given to the actual AI (i.e. the "brain" controlling them). As a side note, because of the narrative, this AI is usually portrayed as supercomputer controlling everything (so that the protagonists have a specific target). Not enough exploration has been made on "ambiguous intelligence" (which I think is more realistic).



      In the real world, AI is focused on solving specific tasks! An AI agent that is capable of solving problems from different domains (e.g. understanding speech and processing images and driving and ... - like humans are) is referred to as General Artificial Intelligence and is required for AI being able to "think" and become conscious.



      Realistically, we are a loooooooong way from General Artificial Intelligence! That being said there is no evidence on why this can't be achieved in the future. So currently, even if we are still in the infancy of AI, we have no reason to believe that AI won't evolve to a point where it is more intelligent than humans.



      Using AI with malicious intent



      Even though an AI conquering the world is a long way from happening there are several reasons to be concerned with AI today, that don't involve robots!
      tl;dr



      There are many valid reasons why people might fear (or, better, be concerned about) AI; not all of them involve robots and apocalyptic scenarios.



      To better illustrate these concerns, I'll try to split them into three categories.



      Conscious AI



      This is the type of AI your question is referring to: a super-intelligent, conscious AI that will destroy or enslave humanity. It is mostly brought to us by science fiction. Some notable Hollywood examples are "The Terminator", "The Matrix" and "Avengers: Age of Ultron". The most influential novels were written by Isaac Asimov and are referred to as the "Robot series" (which includes "I, Robot", which was also adapted as a movie).



      The basic premise of most of these works is that AI will evolve to a point where it becomes conscious and surpasses humans in intelligence. While Hollywood movies mainly focus on the robots and the battle between them and humans, not enough emphasis is given to the actual AI (i.e. the "brain" controlling them). As a side note, because of the narrative, this AI is usually portrayed as a supercomputer controlling everything (so that the protagonists have a specific target). Much less exploration has been done of "ambiguous intelligence", which I think is more realistic.



      In the real world, AI is focused on solving specific tasks! An AI agent that is capable of solving problems from many different domains (e.g. understanding speech and processing images and driving and so on, like humans can) is referred to as Artificial General Intelligence (AGI), and AGI is a prerequisite for AI being able to "think" and become conscious.



      Realistically, we are a loooooooong way from Artificial General Intelligence! That said, there is no evidence that it can't be achieved in the future. So, even though we are still in the infancy of AI, we have no reason to believe that AI won't eventually evolve to a point where it is more intelligent than humans.



      Using AI with malicious intent



      Even though an AI conquering the world is a long way from happening, there are several reasons to be concerned about AI today that don't involve robots! This is the second category, and the one I want to focus on a bit more: the malicious uses of today's AI.



      I'll focus only on AI applications that are available today. Some examples of AI that can be used with malicious intent:



      • DeepFake: a technique for superimposing someone's face onto an image or video of another person. This has gained popularity recently through celebrity porn, and can be used to generate fake news and hoaxes. Sources: 1, 2, 3


      • With mass-surveillance systems and facial-recognition software capable of recognizing millions of faces per second, AI can be used for mass surveillance. Even though we tend to think of China when we think of mass surveillance, many Western cities, like London, Atlanta and Berlin, are among the most surveilled cities in the world. China has taken things a step further by adopting the social credit system, an evaluation system for citizens that seems to be taken straight out of the pages of George Orwell's 1984.


      • Influencing people through social media. Aside from recognizing users' tastes for targeted marketing and ad placement (a common practice of many internet companies), AI can be used maliciously to influence people's voting, among other things. Sources: 1, 2, 3.


      • Hacking.


      • Military applications, e.g. drone attacks, missile targeting systems.


      Adverse effects of AI



      This category is pretty subjective, but the development of AI might carry some adverse side effects. The distinction from the previous category is that these effects, while harmful, aren't caused intentionally; rather, they come about as AI develops. Some examples are:



      • Jobs becoming redundant. As AI improves, many jobs will be replaced by it. Unfortunately, there is not much that can be done about this, as most technological developments have this side effect (e.g. agricultural machinery put many farmers out of work, automation replaced many factory workers, and computers did the same).


      • Reinforcing the bias in our data. This is a very interesting category: AI models (and especially neural networks) are only as good as the data they are trained on, and they have a tendency to perpetuate, and even amplify, the social biases already present in that data. There are many examples of networks exhibiting racist and sexist behavior. Sources: 1, 2, 3, 4.
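      To make the bias point concrete, here is a minimal sketch (with entirely made-up numbers) of how even a naive frequency-based "model" faithfully reproduces whatever bias its training data contains:

```python
# A minimal sketch with hypothetical data: a naive frequency-based model
# trained on biased historical decisions reproduces that bias.
from collections import Counter

# Hypothetical historical hiring decisions: (applicant group, hired?)
history = ([("A", True)] * 90 + [("A", False)] * 10
           + [("B", True)] * 40 + [("B", False)] * 60)

def train(data):
    """Estimate P(hired | group) by simple counting."""
    hired = Counter(group for group, was_hired in data if was_hired)
    total = Counter(group for group, _ in data)
    return {group: hired[group] / total[group] for group in total}

model = train(history)
print(model)  # {'A': 0.9, 'B': 0.4}
# The model "learns" that group B applicants should rarely be hired, not
# because of merit, but because the training data encoded that bias.
```

      A real neural network does something statistically far more sophisticated, but the underlying failure mode is the same: whatever regularities sit in the data, fair or unfair, become the model's predictions.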






      answered 6 hours ago by Djib2011 (1,231 rep, 2 silver badges, 11 bronze badges)










      – Jerome (32 mins ago, score 2): It's interesting that while "Hollywood" seems to equate AI with robots and world domination (which is still in the realm of science-fiction), we are actually facing threats from AI even today. I would have preferred the category split to be something like "AI designed to cause harm" and "AI designed to do good but got out of hand". Anyway, nice answer!












      In addition to the other answers, I would like to add the nuking-cookie-factory example:



      Machine-learning AIs basically try to fulfill a goal described by humans. For example, humans create an AI to run a cookie factory. The goal they implement is to sell as many cookies as possible at the highest profit margin.



      Now, imagine a sufficiently powerful AI. This AI will notice that if it nukes all the other cookie factories, everybody has to buy cookies from its factory, making sales and profits rise.



      So, the human error here is putting no penalty on the use of violence in the algorithm. This is easily overlooked, because the humans didn't expect the algorithm to reach this conclusion.
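      The misspecified objective described above can be sketched in a few lines. All action names, profit numbers and the penalty term here are hypothetical, chosen only to illustrate the loophole:

```python
# A minimal sketch of reward misspecification (all values hypothetical).
# The agent greedily picks the action maximizing the stated reward; since
# violence carries no penalty, "nuke_competitors" wins.

ACTIONS = {
    "improve_recipe":   {"profit": 5,  "violence": 0},
    "cut_prices":       {"profit": 3,  "violence": 0},
    "nuke_competitors": {"profit": 50, "violence": 1},
}

def reward(outcome, violence_penalty):
    # The human-specified objective: profit, minus an optional penalty term.
    return outcome["profit"] - violence_penalty * outcome["violence"]

def best_action(violence_penalty):
    # A maximally simple "agent": pick the highest-reward action.
    return max(ACTIONS, key=lambda a: reward(ACTIONS[a], violence_penalty))

print(best_action(violence_penalty=0))     # 'nuke_competitors' -- the loophole
print(best_action(violence_penalty=1000))  # 'improve_recipe' -- penalty closes it
```

      The fix is trivial once you see it, but that is exactly the point: the designer has to anticipate the loophole before the agent finds it.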






      – Mary93 (21 mins ago): Haha, nice example (+1)















      answered 5 hours ago by Lustwelpintje (41 rep, 1 bronze badge; new contributor), edited 3 hours ago by nbro (7,064 rep, 4 gold, 17 silver, 36 bronze badges)




      I would say the biggest real threat is the unbalancing/disruption we are already seeing. The chances of putting 90% of the country out of work are real, and the results (an even more uneven distribution of wealth) are terrifying if you think them through.






      answered 17 mins ago by Bill K (111 rep, 1 bronze badge; new contributor)


























      If a robot has a human-machine interface, the device is essentially a remote-controlled car. It's possible to discuss things with the operator behind the joystick and negotiate a desirable behavior. Remote-controlled robots are safe inventions, because their actions can be traced back to humans and their motivation can be anticipated. They can be used to improve daily life, and it's fun to play with them.



      In contrast, some robots aren't controlled by joysticks but work with an internal dice generator. The dice toy is known for its social role in gambling, but it also has a mystical meaning. Usually, a random generator is strongly associated with chaotic behavior, controlled by dark forces outside the influence of humans. An electronic dice built into a robot and augmented with a learning algorithm is the opposite of a human-machine interface; it's a potential troublemaker, because the randomly controlled robot will play games with humans that can't be anticipated. It's not possible to predict the next number of a dice, so the robot will behave just as abruptly.



      The connection between randomly controlled games and negative social impact is explained in the following quote:




      “In many traditional non-Western societies gamblers may pray to the gods for success and explain wins and losses in terms of divine will.” – Binde, Per. "Gambling and religion: Histories of concord and conflict." Journal of Gambling Issues 20 (2007): 145-165.







      answered 10 hours ago by Manuel Rodriguez (1,636 rep, 1 gold, 4 silver, 27 bronze badges)
























                      Manak is a new contributor. Be nice, and check out our Code of Conduct.









                      Thanks for contributing an answer to Artificial Intelligence Stack Exchange!

