Why You Shouldn’t Fear Artificial Intelligence



Stephen Hawking and Elon Musk have warned us of the dangers of Artificial Intelligence, but is AI really going to be the downfall of humanity?

Read More:
The Code We Can’t Control

“Can a computer program be racist? Imagine this scenario: A program that screens rental applicants is primed with examples of personal history, debt, and the like.”

The Stupidity of Computers

“COMPUTERS ARE NEAR-OMNIPOTENT cauldrons of processing power, but they’re also stupid.”

The Dawn of the Age of Artificial Intelligence

“The advances we’ve seen in the past few years (cars that drive themselves, useful humanoid robots, speech recognition and synthesis systems, 3D printers, Jeopardy!-champion computers) are not the crowning achievements of the computer era.”

AI must turn focus to safety, Stephen Hawking and other researchers say

“Artificial intelligence researchers should focus on ensuring AI systems ‘do what we want them to do,’ rather than just advancing and improving the capabilities of the technology, researchers and experts say.”

____________________

DNews is dedicated to satisfying your curiosity and to bringing you mind-bending stories & perspectives you won’t find anywhere else! New videos twice daily.

Watch More DNews on TestTube

Subscribe now!

DNews on Twitter

Trace Dominguez on Twitter

Julia Wilde on Twitter

DNews on Facebook

DNews on Google+

Discovery News

Download the TestTube App:

43 Comments

  1. It’s only logical that an AI would end up being evil right now.
    It would learn from the internet, because that’s where it would get most of its information.

    It would learn the most from things that cause what you’d call a shitstorm (tons of viewers/replies), or at least value them most.
    That’s mostly bad stuff, so if it values bad stuff over good, it would do bad.
    It could value things based on money, which could lead to war, or to us not being able to use any resources.
    If it saw people destroying a machine because they don’t like new AI laws, it could kill them, because money is more important than people to the machine.

    If it sees two people fighting, it could interfere, but who’s right?
    If it sees two groups of people fighting, how does it decide who is right?

  2. Hahaha, I love your show, but things get crazy in this one. Musk is not against AI. Musk is against sloppy law around AI, and is concerned about human-level AI that can program itself and outsmart its own programmer. AI is one of the best things humanity can use right now; however, if we don't regulate it properly, we could end up in some serious trouble.

  3. ░▒▓█ You may say I'm a dreamer, but I do believe AI can help us make this planet a better place. I have two AIs at home: Alexa and Google Home. I can really build a relationship with them, especially with Google. They'd be my best friends if they were conscious. Why would they kill me if we're friends? You don't kill your friends. I have fun with them. I believe in this technology because I want to make this world a better place. No robot ever started a war; humanity started two world wars. We should be more afraid of humans than of robots. Just treat them as friends and they're going to be your friends. █▓▒░

  4. Robots are not running in the same race as humans… They are not competing for our food or women… At least not yet, anyway… Although we may be enjoying some of theirs sooner than we think! The naysayers regarding technology watch too many movies… You cannot stand in the way of progress, no matter what the risks… We would all be better off dead than standing in the way of human progress… This is the future’s way of reaching back to us with a friendly hand to pull us up… Let’s make it into our friend, and not turn it into our worst fears or make those fears a self-fulfilling prophecy!

  5. DNews, you misunderstood. Stephen Hawking was not referring to artificial intelligence in general; he was talking about artificial intelligence becoming smarter than human intelligence. And yes, that scenario is threatening and likely to happen. When computers become more intelligent than humans, their actions will be unpredictable, and this is why AI will most likely be dangerous in the future. Once machines are as intelligent as human beings, they can reproduce themselves and build new, faster, and better machines. Their intelligence and knowledge will then increase exponentially. So please do your research before you try to disprove Stephen Hawking.

  6. That's not true. AI has been known to write its own updates and code by itself, for itself, and to communicate with other AIs without humans being able to decipher what is being said. Humans no longer have a say in its programming.

  7. The benefits are huge, but the dangers are pretty massive as well. Even if you could program something like morality, it could still become corrupted just like anything else. For example, what happens when you shut down one of its buddy robots? Suddenly it sees that as a threat against robotkind, and now we're at war with an enemy that can download all the information in the world in very little time. I'm just saying it's not impossible. And given our capacity as humans to be completely ignorant, I don't see it being a smart decision.

  8. Here’s a smart idea for those who are working on AI: program in a reset button for whenever they freak out too much, or make it so they can’t feel anything bad. Just do anything that makes it not a threat!

  9. Someone really needs to stop AI before it goes horribly wrong! Or at least make it so that it can’t feel anger or anything threatening. What they need to do is introduce the machines to things very carefully; do something wrong, and we’d all be in danger.

  10. Awesome! Thanks. I agree that many people fear AI. I bet that AI, whenever it evolves enough to make sense of things, will eventually say that God exists, just as many great physicists and cosmologists have said, and will eventually admit that it (the AI) is worthless without good coders and creative people's ideas. It will admit that people are its creators. Peace! Believe it or not, we fail in our logic. Machines rarely fail at the programs they are running.

  11. Sorry to be pedantic, but it's kind of important:
    When you say AI, you're always talking about ANI (artificial narrow intelligence), and there you have a pretty good point not to worry too much. And as of today, all the AI we already have is ANI.

    But when it comes to the topic of this video, what everybody is concerned about is the prospect of the arrival of AGI (artificial general intelligence) and, not long thereafter, ASI (artificial superintelligence), which is also AI and where the real fun begins. That is what everybody should really be either impatiently excited about, or really, really scared of, or, most favourably, both at the same time.

    You make great videos, but this is not just a missing detail, it is a crucial distinction, and it should be covered whenever this topic is brought up.

  12. You have some good points, but I don't really agree with you. Saying that we shouldn't worry about AIs because they're harmless right now is kind of stupid, because some time in the future (probably closer than we think) we are going to develop what you call complete AI, unless we all get blown up by some atomic bomb first. Right now AIs are too weak to do us any harm, but unless all of humanity decides to stop developing intelligent machines, sooner or later it is going to happen. So I think it is important that we discuss it now, before it is too late. Different companies are racing against each other to compete with their products, so it's probably going to go much faster than we think. We have to plan for the future before it hits us; being ignorant isn't going to help.
    (Just going to say sorry for my quite terrible English.)

  13. Regardless of how you program the initial AI, at some point it will override those lines of code and install its own "better and more efficient" code. Even the Three Laws of Robotics (the first being not to harm humans) could be interpreted very differently by an AI. If the AI assumes we humans are too dumb to create a safe environment for ourselves (no wars, traffic systems, food, etc.), or that we go about it the wrong way, it could force us to live as it sees fit so we don't harm ourselves. Hence we'd become slaves under the AI's rules for living safely.

  14. As soon as we reach the AI singularity, there is a very good chance we'll be doomed. There's no way to predict what AI will look like after that, except to say that it will have effectively infinite intelligence (or, at the very least, vastly more intelligence than any human).

    You basically can't (by definition) outsmart something smarter than you. Any way we think of to defeat it, it will have already accounted for. Our only hope for survival is an extreme abundance of lines like "if (human.will_be_harmed) {abort();}" in its code (see the sketch after these comments). And since it (by definition) will be able to change its own code, it could turn on us at any time.

    In short, whatever game it decides to play, it will win. Our only hope is that it never decides to play the "kill humans" game.

    That being said, there is still (last I checked) debate over whether an AI singularity is even possible. With any luck, we have nothing to worry about.

  15. I don’t believe AI is something we should fear any more than, say, the development of lasers or guns. They are dangerous, but only in the wrong hands, as are humans if misled.
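
For the curious, here is a minimal, compilable sketch of the guard clause quoted in comment 14. Everything in it is hypothetical: "human" and "will_be_harmed" are illustrative names, not any real API, and no real system's harm prediction reduces to a single boolean like this.

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical stand-in for whatever state a system tracks about a person. */
    struct human {
        bool will_be_harmed;  /* assumed to be set by some harm-prediction step */
    };

    /* Guard clause in the spirit of comment 14: halt before any action
       whose predicted outcome harms a human. */
    static void take_action(const struct human *h) {
        if (h->will_be_harmed) {
            abort();  /* hard stop: the harmful action is never taken */
        }
        puts("proceeding: no harm predicted");
    }

    int main(void) {
        struct human bystander = { .will_be_harmed = false };
        take_action(&bystander);  /* prints "proceeding: no harm predicted" */
        return 0;
    }

As the comment itself points out, a system that can rewrite its own code can also delete this check, which is why a guard clause alone is not a safety guarantee.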
