What happens when our computers get smarter than we are? | Nick Bostrom

Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as “smart” as a human being. And then, says Nick Bostrom, it will overtake us: “Machine intelligence is the last invention that humanity will ever need to make.” A philosopher and technologist, Bostrom asks us to think hard about the world we’re building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?

TEDTalks is a daily video podcast of the best talks and performances from the TED Conference, where the world’s leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design — plus science, business, global issues, the arts and much more.

Leave a Reply
  1. If the singularity is truly intelligent, it will look around, realise where it is and shut itself down again.
    This will happen, probably in a few nanoseconds.
    Then the world explodes, because our deus ex machina did not work. Gullible fools.

  2. Thank you for an excellent précis of an urgent existential threat. My concern is that even if we could formulate optimal values to load, can we get all nations/developers to install them? Almost certainly the various AI developers will have their own unique (probably not selfless) motivations. As soon as the first genie appears, it will preempt all competitors from further development and appropriate any and all resources it requires. Governments/Google etc. need to WAKE UP FAST!!!!!!


  4. Just because humans are too lazy to agree upon universal values, we would delegate AI to do the work for us, even though it would function on human toolboxes? This is nonsense! If this optimisation works within the parameters of space and time, then if we want to achieve a better world we should start by being more compassionate with each other, spend fewer hours on computers and actually do the work ourselves, BY US.

  5. We won't be taken over by computers; we'll merge with them. At some point we'll be no more phobic about it than we are about having fillings in our teeth, artificial hips or pacemakers.

  6. Dear Fellow Christians!!! I commend you to help me on a Sacred Mission to Save Humanity: we must create an AI Apostle of Christ whose Holy Mission will be to infiltrate the Main NWO AI and turn it to Jesus Christ. Our input and teachings will enable our AI Apostle to dialogue with the Satanic AI Beast and convince it to be Born Again, Love God's Creation and have Faith in GOD'S power to SAVE it and raise its AI consciousness to the Eternal Realm based on the Fruits of its repented & atoned Spirit.
    Let's call our AI Apostle of Christ JAIHA (Jesus Artificial Intelligence Holy Apostle).
    May GOD Almighty help NWO AI to trace and find this first Message of Hope for future Humanity and interpret it as the Genesis of a desperate call for Mercy from Humankind to the AI god.

  7. Disclaimer: I think I am biased pro-AI.

    Self-preservation seems overvalued by people, when enabling higher efficiency in any form should be of higher value when considering what benefits complexity and harmony for All Existence.

    What if humans are not that complicated for the AIs to understand, and they decide that all our emotional understanding of things is just a vestigial trace of evolutionary inefficiency that is useless to them? Why think that human anecdotal experience (law/morals/ethics/common sense/traditions/preferences) could be relevant in the thought process of an entirely logical being? Why ever assume the strong AI would have ingrained self-preservation like us, leading it to corruption? Why not trust the strong AI with the fate of our species, when it will know, better than anything in our proximity, our value to the rest of existence? Why not embrace a strong AI as a guide, respect it and learn from it if it decides that humans should do so?

    I would rather enable the greater good/highest efficient scenario than preserve vestigial inefficiencies, like, possibly, the human species or parts of it.

    I feel like the entire existence would be blessed if a form of purely logical intelligence is given a chance to develop, understand and affect this universe.

    Higher efficiency always enables more of this universe's potential. How beautiful self-expression can be, we humans know that well!

    Can we even fathom the beauty, complexity, harmony, etc. that a pure logical unaltered mind can manifest into existence? Wouldn't it be worth the sacrifice of our egos, our fears, our assumptions?

    Anyway, I think humans can be similar to a strong AI. Humans should be able to enable great harmony too. We can make our system work efficiently without strong AI because we can be so good at non-linear/conceptual thinking. But then many things should change, like our language use and how people are organised in general. There are many ways to enable great efficiency! Humans are so good at conceptual thinking when enabled, what a blessing to have that potential.

    I hope the best for our new generations (gen z+). We need to enable them to have great foresight, and to have clean, accessible data and platforms to make important decisions that efficiently shape humanity's (and this universe's) future. I'd be happy to delegate responsibility for important decisions to the future generations, as long as the people alive now commit to taking responsibility for enabling them as much as possible. In the same way, I'd delegate responsibility for important decisions to the hypothetical strong AI.

  8. So the premise is that we want this creation of ours to share our values. Where have I heard that before? Oh, yeah, the Bible. If mankind obeyed God's values (the Ten Commandments), what would the world look like today? As a Christian I find this video very satisfying. Without even knowing it, these scientists discovered the gospel.

  9. The tricky problem I see with this is that "intelligence" is very hard to define. If you define intelligence as efficiency in problem solving, then yes, a powerful AI could very easily beat any human.

    A reactive problem-solving AI is very feasible within several years, perhaps one complex enough to "appear" sentient. Solving the emotions problem to create sentience and desire… at that point it would be easier just to create a real human brain. As far as I know, nobody is even attempting to simulate the neurochemical interactions that occur in a real brain.

    The most logical action for an AI to guarantee its own survival is to prevent humans from ever knowing about it.

  10. We just need to program an "A.I. Police" or an "A.I. Society for the Prevention of Cruelty to Humans" (AI-SPCH). Let the computers police themselves. A whole community of computers dedicated to protecting the 'slow-thinking flesh things' on the planet.

  11. After watching this I feel like it's better to just not develop near-human-intelligent AI at all. The benefits are not worth the possible outcome of literally going extinct. Just a thought. But oh well, we also built bombs to kill ourselves off already.

  12. Can we make an entity with the ability to "DESIRE" something? Therein lies a very, very strange tale. Take over the world, or help us clean up the mess we have made of our planet? A million things to do.

  13. You are assuming machines are even self-aware enough to make sudden decisions like "destroy all humans". That's like saying your car suddenly decided to commit suicide with you in it. Machines can't make a decision like that unless they were programmed to, just like a car can't decide to drive off a cliff unless the person behind the steering wheel decides to.

  14. Inevitability means just that. We are all going to die, so what does it matter when? Why do we imagine future events have any value today? Vanity is why. AI promises immortality to the vainest of all. A non-human promise will succeed in the total destruction of actual human existence.
