
How could artificial intelligence harm us?




We often hear that artificial intelligence may harm or even kill humans, so it might prove dangerous.



How could artificial intelligence harm us?



















  • DuttaA (12 hours ago, score 2): Artificial intelligence (specifically AGI) means it has its own sense of what to do and what not to do. Also, weapons don't have any sense either, yet they are dangerous solely because they can be misused.

  • Neil Slater (12 hours ago, score 1): This is a bit broad, as there are many reasons and scenarios suggested in which AI could become dangerous. For instance, as DuttaA suggests above, humans may design intelligent weapons systems that decide what to target, and this is a real worry, as it is possible already using narrow AI. Perhaps give more context to the specific fears that you want to understand, by quoting or linking a specific concern that you have read (please use edit).

  • nbro (3 hours ago, score 3): @NeilSlater Yes, it might be too broad, but I think that this answer ai.stackexchange.com/a/15462/2444 provides some plausible reasons. I edited the question to remove the possibly wrong assumption.

















philosophy agi social superintelligence

asked 12 hours ago by Manak (new contributor) · edited 3 hours ago by nbro
4 Answers
7

tl;dr



There are many valid reasons why people might fear (or, better, be concerned about) AI; not all of them involve robots and apocalyptic scenarios.



To better illustrate these concerns, I'll try to split them into three categories.



Conscious AI



This is the type of AI that your question is referring to: a super-intelligent, conscious AI that will destroy or enslave humanity. It is mostly brought to us by science fiction. Some notable Hollywood examples are "The Terminator", "The Matrix", and "Age of Ultron". The most influential novels were written by Isaac Asimov and are referred to as the "Robot series" (which includes "I, Robot", also adapted as a movie).



The basic premise of most of these works is that AI will evolve to a point where it becomes conscious and surpasses humans in intelligence. While Hollywood movies mainly focus on the robots and the battle between them and humans, not enough emphasis is given to the actual AI (i.e. the "brain" controlling them). As a side note, because of the narrative, this AI is usually portrayed as a supercomputer controlling everything (so that the protagonists have a specific target). Not enough exploration has been made of "ambiguous intelligence" (which I think is more realistic).



In the real world, AI is focused on solving specific tasks! An AI agent capable of solving problems from many different domains (e.g. understanding speech, processing images, driving, and so on, like humans can) is referred to as Artificial General Intelligence, and AGI would be required for AI to "think" and become conscious.



Realistically, we are a loooooooong way from Artificial General Intelligence! That being said, there is no evidence that it can't be achieved in the future. So, even though we are still in the infancy of AI, we have no reason to believe that AI won't evolve to a point where it is more intelligent than humans.



Using AI with malicious intent



Even though an AI conquering the world is a long way from happening, there are several reasons to be concerned about AI today that don't involve robots!
The second category, which I want to focus on a bit more, covers several malicious uses of today's AI.



I'll focus only on AI applications that are available today. Some examples of AI that can be used for malicious intent:



  • DeepFake: a technique for superimposing someone's face onto an image or video of another person. This has gained popularity recently with celebrity porn and can be used to generate fake news and hoaxes. Sources: 1, 2, 3


  • With the use of mass-surveillance systems and facial-recognition software capable of recognizing millions of faces per second, AI can be used for mass surveillance. Although we usually associate mass surveillance with China, many Western cities, like London, Atlanta, and Berlin, are among the most surveilled cities in the world. China has taken things a step further by adopting the social credit system, an evaluation system for civilians that seems to be taken straight out of the pages of George Orwell's 1984.


  • Influencing people through social media. Aside from recognizing users' tastes for targeted marketing and ad placement (a common practice of many internet companies), AI can be used maliciously to influence people's voting (among other things). Sources: 1, 2, 3.


  • Hacking.


  • Military applications, e.g. drone attacks, missile targeting systems.


Adverse effects of AI



This category is pretty subjective, but the development of AI might carry some adverse side-effects. The distinction between this category and the previous one is that these effects, while harmful, aren't intentional; rather, they occur as AI develops. Some examples are:



  • Jobs becoming redundant. As AI improves, many jobs will be replaced by it. Unfortunately, there is not much that can be done about this, as most technological developments have this side-effect (e.g. agricultural machinery caused many farmers to lose their jobs, automation replaced many factory workers, and computers did the same).


  • Reinforcing the bias in our data. This is a very interesting category, as AI systems (and especially neural networks) are only as good as the data they are trained on and tend to perpetuate, and even amplify, the social biases already present in that data. There are many examples of networks exhibiting racist and sexist behavior. Sources: 1, 2, 3, 4.
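To make the last point concrete, here is a minimal sketch (my own toy illustration; the groups and numbers are entirely made up and are not from any of the cited sources): a "model" that simply learns historical acceptance rates per group will faithfully reproduce whatever disparity its training data contains, because nothing in the fitting step knows the disparity is undesirable.

```python
from collections import defaultdict

# Hypothetical biased historical decisions: (group, accepted)
history = ([("A", 1)] * 80 + [("A", 0)] * 20 +
           [("B", 1)] * 40 + [("B", 0)] * 60)

def fit_rates(data):
    """Learn per-group acceptance rates straight from the data."""
    counts = defaultdict(lambda: [0, 0])  # group -> [accepted, total]
    for group, accepted in data:
        counts[group][0] += accepted
        counts[group][1] += 1
    return {g: acc / tot for g, (acc, tot) in counts.items()}

rates = fit_rates(history)
print(rates)  # {'A': 0.8, 'B': 0.4} -- the disparity is learned, not corrected
```

Any debiasing would have to be an explicit extra step; it does not happen by itself.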






  • Jerome (32 mins ago, score 2): It's interesting that while "Hollywood" seems to equate AI with robots and world domination (which is still in the realm of science fiction), we are actually facing threats from AI even today. I would have preferred the category split to be something like "AI designed to cause harm" and "AI designed to do good but got out of hand". Anyway, nice answer!


















1

In addition to the other answers, I would like to add the "nuking cookie factory" example:



Machine learning AIs basically try to fulfill a goal described by humans. For example, humans create an AI to run a cookie factory. The goal they implement is to sell as many cookies as possible at the highest possible profit margin.



Now, imagine an AI which is sufficiently powerful. This AI will notice that if it nukes all other cookie factories, everybody has to buy cookies from its factory, so sales and profits rise.



So, the human error here is that the objective includes no penalty for using violence. This is easily overlooked because the humans didn't expect the algorithm to reach that conclusion.
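The failure mode above can be sketched in a few lines (a toy model of my own; the action names and payoff numbers are invented purely for illustration): with no violence term in the objective, the destructive action maximizes reward, and adding the penalty the designers forgot restores the intended choice.

```python
# Toy reward-misspecification sketch: each action has a profit and a
# "violence" flag; the naive objective only looks at profit.
ACTIONS = {
    "improve_recipe":   {"profit": 10,  "violence": 0},
    "cut_prices":       {"profit": 12,  "violence": 0},
    "nuke_competitors": {"profit": 100, "violence": 1},
}

def reward(action, violence_penalty=0):
    a = ACTIONS[action]
    return a["profit"] - violence_penalty * a["violence"]

def best_action(violence_penalty=0):
    # The "AI" just greedily maximizes the reward it was given.
    return max(ACTIONS, key=lambda a: reward(a, violence_penalty))

print(best_action())                       # -> nuke_competitors
print(best_action(violence_penalty=1000))  # -> cut_prices
```

The point is not that real systems literally nuke things, but that the optimizer only ever sees the objective as written, never the designers' intent.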






answered by Lustwelpintje (new contributor)














  • Mary93 (21 mins ago): Haha, nice example (+1)


















1

I would say the biggest real threat is the economic unbalancing and disruption we are already seeing. The chances of putting 90% of the country out of work are real, and the results (an even more uneven distribution of wealth) are terrifying if you think them through.






answered by Bill K (new contributor)






















0

If a robot amounts to a human-machine interface, the device is essentially a remote-controlled car. It is possible to talk with the operator behind the joystick and negotiate a desired behavior. Remote-controlled robots are safe inventions because their actions can be traced back to humans and their motivations can be anticipated. They can improve daily life, and they are fun to play with.



In contrast, some robots aren't controlled by joysticks but work with an internal random-number generator. The dice is known for its social role in gambling, but it also has a mystical meaning: a random generator is usually associated with chaotic behavior, controlled by dark forces outside human influence. An electronic dice built into a robot and combined with a learning algorithm is the opposite of a human-machine interface, and it is a potential troublemaker, because a randomly controlled robot will play games with humans that can't be anticipated. It is not possible to predict the next roll of a die, so the robot will behave just as abruptly.



The connection between randomly controlled games and negative social impact is explained in the following quotation:




“In many traditional non-Western societies gamblers may pray to the gods for success and explain wins and losses in terms of divine will.” — Binde, Per. "Gambling and religion: Histories of concord and conflict." Journal of Gambling Issues 20 (2007): 145-165.







    share|improve this answer









    $endgroup$

















      Your Answer








      StackExchange.ready(function()
      var channelOptions =
      tags: "".split(" "),
      id: "658"
      ;
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function()
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled)
      StackExchange.using("snippets", function()
      createEditor();
      );

      else
      createEditor();

      );

      function createEditor()
      StackExchange.prepareEditor(
      heartbeatType: 'answer',
      autoActivateHeartbeat: false,
      convertImagesToLinks: false,
      noModals: true,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: null,
      bindNavPrevention: true,
      postfix: "",
      imageUploader:
      brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
      contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/4.0/"u003ecc by-sa 4.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
      allowUrls: true
      ,
      noCode: true, onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      );



      );







      Manak is a new contributor. Be nice, and check out our Code of Conduct.









      draft saved

      draft discarded
















      StackExchange.ready(
      function ()
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fai.stackexchange.com%2fquestions%2f15449%2fhow-could-artificial-intelligence-harm-us%23new-answer', 'question_page');

      );

      Post as a guest















      Required, but never shown

























      4 Answers
      4






      active

      oldest

      votes








      4 Answers
      4






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      7














      $begingroup$

      tl;dr



      There are many valid reasons why people might fear (or better be concerned about) AI, not all involve robots and apocalyptic scenarios.



      To better illustrate these concerns, I'll try to split them into three categories.



      Conscious AI



      This is the type of AI that your question is referring to. A super-intelligent conscious AI that will destroy/enslave humanity. This is mostly brought to us by science-fiction. Some notable Hollywood examples are "The terminator", "The Matrix", "Age of Ultron". The most influential novels were written by Isaac Asimov and are referred to as the "Robot series" (which includes "I, robot", which was also adapted as a movie).



      The basic premise under most of these works are that AI will evolve to a point where it becomes conscious and will surpass humans in intelligence. While Hollywood movies mainly focus on the robots and the battle between them and humans, not enough emphasis is given to the actual AI (i.e. the "brain" controlling them). As a side note, because of the narrative, this AI is usually portrayed as supercomputer controlling everything (so that the protagonists have a specific target). Not enough exploration has been made on "ambiguous intelligence" (which I think is more realistic).



      In the real world, AI is focused on solving specific tasks! An AI agent that is capable of solving problems from different domains (e.g. understanding speech and processing images and driving and ... - like humans are) is referred to as General Artificial Intelligence and is required for AI being able to "think" and become conscious.



      Realistically, we are a loooooooong way from General Artificial Intelligence! That being said there is no evidence on why this can't be achieved in the future. So currently, even if we are still in the infancy of AI, we have no reason to believe that AI won't evolve to a point where it is more intelligent than humans.



      Using AI with malicious intent



      Even though an AI conquering the world is a long way from happening there are several reasons to be concerned with AI today, that don't involve robots!
      The second category I want to focus a bit more on is several malicious uses of today's AI.



      I'll focus only on AI applications that are available today. Some examples of AI that can be used for malicious intent:



      • DeepFake: a technique for imposing someones face on an image a video of another person. This has gained popularity recently with celebrity porn and can be used to generate fake news and hoaxes. Sources: 1, 2, 3


      • With the use of mass surveillance systems and facial recognition software capable of recognizing millions of faces per second, AI can be used for mass surveillance. Even though when we think of mass surveillance we think of China, many western cities like London, Atlanta and Berlin are among the most-surveilled cities in the world. China has taken things a step further by adopting the social credit system, an evaluation system for civilians which seems to be taken straight out of the pages of George Orwell's 1984.


      • Influencing people through social media. Aside from recognizing user's tastes with the goal of targeted marketing and add placements (a common practice by many internet companies), AI can be used malisciously to influence people's voting (among other things). Sources: 1, 2, 3.


      • Hacking.


      • Military applications, e.g. drone attacks, missile targeting systems.


      Adverse effects of AI



      This category is pretty subjective, but the development of AI might carry some adverse side-effects. The distinction between this category and the previous is that these effects, while harmful, aren't done intentionally; rather they occur with the development of AI. Some examples are:



      • Jobs becoming redundant. As AI becomes better, many jobs will be replaced by AI. Unfortunately there are not many things that can be done about this, as most technological developments have this side-effect (e.g. agricultural machinery caused many farmers to lose their jobs, automation replaced many factory workers, computers did the same).


      • Reinforcing the bias in our data. This is a very interesting category, as AI (and especially Neural Networks) are only as good as the data they are trained on and have a tendency of perpetuating and even enhancing different forms of social biases, already existing in the data. There are many examples of networks exhibiting racist and sexist behavior. Sources: 1, 2, 3, 4.






      share|improve this answer









      $endgroup$










      • 2




        $begingroup$
        It's interesting that while "Hollywood" seems to equate AI with robots and world domination (which is still in the realm of science-fiction), we are actually facing threats from AI even today. I would have preferred the category split to be something like "AI designed to cause harm" and "AI designed to do good but got out of hand". Anyway, nice answer!
        $endgroup$
        – Jerome
        32 mins ago















      7














      $begingroup$

      tl;dr



      There are many valid reasons why people might fear (or better be concerned about) AI, not all involve robots and apocalyptic scenarios.



      To better illustrate these concerns, I'll try to split them into three categories.



      Conscious AI



      This is the type of AI that your question is referring to. A super-intelligent conscious AI that will destroy/enslave humanity. This is mostly brought to us by science-fiction. Some notable Hollywood examples are "The terminator", "The Matrix", "Age of Ultron". The most influential novels were written by Isaac Asimov and are referred to as the "Robot series" (which includes "I, robot", which was also adapted as a movie).



      The basic premise under most of these works are that AI will evolve to a point where it becomes conscious and will surpass humans in intelligence. While Hollywood movies mainly focus on the robots and the battle between them and humans, not enough emphasis is given to the actual AI (i.e. the "brain" controlling them). As a side note, because of the narrative, this AI is usually portrayed as supercomputer controlling everything (so that the protagonists have a specific target). Not enough exploration has been made on "ambiguous intelligence" (which I think is more realistic).



      In the real world, AI is focused on solving specific tasks! An AI agent that is capable of solving problems from different domains (e.g. understanding speech and processing images and driving and ... - like humans are) is referred to as General Artificial Intelligence and is required for AI being able to "think" and become conscious.



      Realistically, we are a loooooooong way from General Artificial Intelligence! That being said there is no evidence on why this can't be achieved in the future. So currently, even if we are still in the infancy of AI, we have no reason to believe that AI won't evolve to a point where it is more intelligent than humans.



      Using AI with malicious intent



      Even though an AI conquering the world is a long way from happening there are several reasons to be concerned with AI today, that don't involve robots!
      tl;dr



There are many valid reasons why people might fear (or, better, be concerned about) AI; not all of them involve robots and apocalyptic scenarios.



      To better illustrate these concerns, I'll try to split them into three categories.



      Conscious AI



This is the type of AI that your question is referring to: a super-intelligent, conscious AI that will destroy/enslave humanity. This is mostly brought to us by science fiction. Some notable Hollywood examples are "The Terminator", "The Matrix", and "Age of Ultron". The most influential novels were written by Isaac Asimov and are referred to as the "Robot series" (which includes "I, Robot", also adapted as a movie).



The basic premise behind most of these works is that AI will evolve to a point where it becomes conscious and surpasses humans in intelligence. While Hollywood movies mainly focus on the robots and the battle between them and humans, not enough emphasis is given to the actual AI (i.e. the "brain" controlling them). As a side note, because of the narrative, this AI is usually portrayed as a supercomputer controlling everything (so that the protagonists have a specific target). Not enough exploration has been made of "ambiguous intelligence" (which I think is more realistic).



In the real world, AI is focused on solving specific tasks! An AI agent capable of solving problems across different domains (e.g. understanding speech and processing images and driving and ... - like humans can) is referred to as Artificial General Intelligence (AGI), and AGI would be required for an AI to be able to "think" and become conscious.



Realistically, we are a long way from Artificial General Intelligence! That being said, there is no evidence that it can't be achieved in the future. So currently, even though we are still in the infancy of AI, we have no reason to believe that AI won't evolve to a point where it is more intelligent than humans.



      Using AI with malicious intent



Even though an AI conquering the world is a long way from happening, there are several reasons to be concerned about AI today that don't involve robots! This second category, which I want to focus on a bit more, covers malicious uses of today's AI.



      I'll focus only on AI applications that are available today. Some examples of AI that can be used for malicious intent:



• DeepFake: a technique for superimposing someone's face onto an image or video of another person. This has gained popularity recently through fake celebrity porn and can be used to generate fake news and hoaxes. Sources: 1, 2, 3


• With the use of mass-surveillance systems and facial-recognition software capable of recognizing millions of faces per second, AI can be used for mass surveillance. Although China is what usually comes to mind when we think of mass surveillance, many western cities like London, Atlanta and Berlin are among the most-surveilled cities in the world. China has taken things a step further by adopting the social credit system, an evaluation system for civilians that seems to be taken straight out of the pages of George Orwell's 1984.


• Influencing people through social media. Aside from recognizing users' tastes for targeted marketing and ad placements (a common practice by many internet companies), AI can be used maliciously to influence people's voting (among other things). Sources: 1, 2, 3.


      • Hacking.


      • Military applications, e.g. drone attacks, missile targeting systems.


      Adverse effects of AI



This category is pretty subjective, but the development of AI might carry some adverse side-effects. The distinction between this category and the previous one is that these effects, while harmful, aren't caused intentionally; rather, they emerge as AI develops. Some examples are:



      • Jobs becoming redundant. As AI becomes better, many jobs will be replaced by AI. Unfortunately there are not many things that can be done about this, as most technological developments have this side-effect (e.g. agricultural machinery caused many farmers to lose their jobs, automation replaced many factory workers, computers did the same).


• Reinforcing the bias in our data. This is a very interesting category, as AI (and especially neural networks) is only as good as the data it is trained on, and it tends to perpetuate and even amplify the social biases already present in that data. There are many examples of networks exhibiting racist and sexist behavior. Sources: 1, 2, 3, 4.
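The bias point can be made concrete with a deliberately tiny sketch (all group names and numbers here are fabricated for illustration, not taken from the sources above): a "model" that only learns historical hire rates will score equally qualified candidates differently purely by group.

```python
# Hypothetical toy example: a model trained on biased historical hiring
# data simply learns and reproduces the bias. All data is made up.
from collections import defaultdict

# (group, hired) pairs: equally qualified candidates, but group "B" was
# historically hired far less often than group "A".
history = [("A", True)] * 90 + [("A", False)] * 10 \
        + [("B", True)] * 40 + [("B", False)] * 60

def train(rows):
    """The 'model' just memorises the historical hire rate per group."""
    hires, total = defaultdict(int), defaultdict(int)
    for group, hired in rows:
        total[group] += 1
        hires[group] += hired  # bool counts as 0/1
    return {g: hires[g] / total[g] for g in total}

model = train(history)
print(model["A"])  # 0.9 -> group A scored high
print(model["B"])  # 0.4 -> group B scored low, purely from past bias
```

Note that a downstream decision rule can even amplify the disparity: thresholding these scores at 0.5 turns a 90%-vs-40% gap in the data into accepting every "A" candidate and rejecting every "B" candidate.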







      answered 6 hours ago









Djib2011
1,231 · 2 silver badges · 11 bronze badges










• 2
  It's interesting that while "Hollywood" seems to equate AI with robots and world domination (which is still in the realm of science-fiction), we are actually facing threats from AI even today. I would have preferred the category split to be something like "AI designed to cause harm" and "AI designed to do good but got out of hand". Anyway, nice answer!
  – Jerome
  32 mins ago













In addition to the other answers, I would like to add the nuking-cookie-factory example:



Machine-learning AIs basically try to fulfill a goal described by humans. For example, humans create an AI to run a cookie factory. The goal they implement is to sell as many cookies as possible at the highest possible profit margin.



Now, imagine a sufficiently powerful AI. This AI could notice that if it nukes all the other cookie factories, everybody has to buy cookies from its factory, making sales rise and profits grow.



So, the human error here is putting no penalty on violence in the objective. This is easily overlooked because the humans didn't expect the algorithm to come to this conclusion.
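As a hedged sketch of this failure mode (the action names and payoff numbers are invented for the example, not part of the answer): an optimizer that maximizes profit alone picks the violent action, while the same optimizer with a harm penalty in the objective does not.

```python
# Toy objective misspecification: made-up actions and payoffs.
def profit(action):
    return {"improve_recipe": 100, "cut_prices": 120, "nuke_competitors": 500}[action]

def harm(action):
    # Cost to the world of taking the action; the naive objective ignores it.
    return {"improve_recipe": 0, "cut_prices": 0, "nuke_competitors": 10_000}[action]

actions = ["improve_recipe", "cut_prices", "nuke_competitors"]

naive_choice = max(actions, key=profit)                         # profit only
safe_choice  = max(actions, key=lambda a: profit(a) - harm(a))  # harm penalised

print(naive_choice)  # nuke_competitors
print(safe_choice)   # cut_prices
```

The fix is not "smarter" optimization but a better-specified objective: once the harm term is present, the violent action is no longer the argmax.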




















• Haha, nice example (+1)
  – Mary93
  21 mins ago















      edited 3 hours ago









nbro
7,064 · 4 gold badges · 17 silver badges · 36 bronze badges








      answered 5 hours ago









Lustwelpintje
41 · 1 bronze badge





I would say the biggest real threat would be the unbalancing/disruption we are already seeing. The chances of putting 90% of the country out of work are real, and the results (an even more uneven distribution of wealth) are terrifying if you think them through.






          answered 17 mins ago









Bill K
111 · 1 bronze badge



























If a robot amounts to a human-machine interface, the device is like a remote-controlled car: it's possible to talk with the operator behind the joystick and negotiate the desired behavior. Remote-controlled robots are safe inventions, because their actions can be traced back to humans and their motivation can be anticipated. They can be used to improve daily life, and it's fun to play with them.



In contrast, some robots aren't controlled by joysticks but work with an internal random generator. The die is known for its social role in gambling, but it also has a mystical meaning: a random generator is strongly associated with chaotic behavior controlled by dark forces outside human influence. An electronic die built into a robot and augmented with a learning algorithm is the opposite of a human-machine interface, and it's a potential troublemaker, because a randomly controlled robot will play games with humans that can't be anticipated. It's not possible to predict the next number from a die, so the robot will behave just as abruptly.



The connection between randomly controlled games and negative social impact is explained in the following quote:




“In many traditional non-Western societies gamblers may pray to the gods for success and explain wins and losses in terms of divine will.” — Binde, Per. "Gambling and religion: Histories of concord and conflict." Journal of Gambling Issues 20 (2007): 145-165.







                  answered 10 hours ago









Manuel Rodriguez
1,636 · 1 gold badge · 4 silver badges · 27 bronze badges
























                      Thanks for contributing an answer to Artificial Intelligence Stack Exchange!


to seek razing of Israeli wall”„Olmert: Willing to trade land for peace”„Mapping Peace between Syria and Israel”„Egypt: Israel must accept the land-for-peace formula”„Israel: Age structure from 2005 to 2015”„Global, regional, and national disability-adjusted life years (DALYs) for 306 diseases and injuries and healthy life expectancy (HALE) for 188 countries, 1990–2013: quantifying the epidemiological transition”10.1016/S0140-6736(15)61340-X„World Health Statistics 2014”„Life expectancy for Israeli men world's 4th highest”„Family Structure and Well-Being Across Israel's Diverse Population”„Fertility among Jewish and Muslim Women in Israel, by Level of Religiosity, 1979-2009”„Israel leaders in birth rate, but poverty major challenge”„Ethnic Groups”„Israel's population: Over 8.5 million”„Israel - Ethnic groups”„Jews, by country of origin and age”„Minority Communities in Israel: Background & Overview”„Israel”„Language in Israel”„Selected Data from the 2011 Social Survey on Mastery of the Hebrew Language and Usage of Languages”„Religions”„5 facts about Israeli Druze, a unique religious and ethnic group”„Israël”Israel Country Study Guide„Haredi city in Negev – blessing or curse?”„New town Harish harbors hopes of being more than another Pleasantville”„List of localities, in alphabetical order”„Muncitorii români, doriți în Israel”„Prietenia româno-israeliană la nevoie se cunoaște”„The Higher Education System in Israel”„Middle East”„Academic Ranking of World Universities 2016”„Israel”„Israel”„Jewish Nobel Prize Winners”„All Nobel Prizes in Literature”„All Nobel Peace Prizes”„All Prizes in Economic Sciences”„All Nobel Prizes in Chemistry”„List of Fields Medallists”„Sakharov Prize”„Țara care și-a sfidat "destinul" și se bate umăr la umăr cu Silicon Valley”„Apple's R&D center in Israel grew to about 800 employees”„Tim Cook: Apple's Herzliya R&D center second-largest in world”„Lecții de economie de la Israel”„Land use”Israel Investment and Business GuideA Country Study: 
IsraelCentral Bureau of StatisticsFlorin Diaconu, „Kadima: Flexibilitate și pragmatism, dar nici un compromis în chestiuni vitale", în Revista Institutului Diplomatic Român, anul I, numărul I, semestrul I, 2006, pp. 71-72Florin Diaconu, „Likud: Dreapta israeliană constant opusă retrocedării teritoriilor cureite prin luptă în 1967", în Revista Institutului Diplomatic Român, anul I, numărul I, semestrul I, 2006, pp. 73-74MassadaIsraelul a crescut in 50 de ani cât alte state intr-un mileniuIsrael Government PortalIsraelIsraelIsraelmmmmmXX451232cb118646298(data)4027808-634110000 0004 0372 0767n7900328503691455-bb46-37e3-91d2-cb064a35ffcc1003570400564274ge1294033523775214929302638955X146498911146498911

                      Кастелфранко ди Сопра Становништво Референце Спољашње везе Мени за навигацију43°37′18″ СГШ; 11°33′32″ ИГД / 43.62156° СГШ; 11.55885° ИГД / 43.62156; 11.5588543°37′18″ СГШ; 11°33′32″ ИГД / 43.62156° СГШ; 11.55885° ИГД / 43.62156; 11.558853179688„The GeoNames geographical database”„Istituto Nazionale di Statistica”проширитиууWorldCat156923403n850174324558639-1cb14643287r(подаци)