In this short segment on the future after the Singularity, Ray Kurzweil (author, computer scientist, inventor, futurist, and a director of engineering at Google) posits that through AI we will, by the end of this century, be able to harness the stored potential of a rock (or any other physical substance, for that matter) and transmute it into computronium, “turning this rock into a trillion trillion times more powerful than all human brains today”. An AI with a goal of unlimited intelligence ...
Archive for month: June, 2018
Superintelligence author Nick Bostrom talks to Googlers
OHF Admin, Artificial Intelligence, Existential Threat, Singularity
Oxford Professor Nick Bostrom’s book Superintelligence has raised a lot of awareness about the existential risks associated with super-intelligent AI. Can you hear Ray Kurzweil in the front row?
AI Governance Talk with Professor Allan Dafoe
OHF Admin, AI Governance, Artificial Intelligence
Professor Allan Dafoe, from the Governance of AI Program based at the University of Oxford’s Future of Humanity Institute, is interviewed on the 80,000 Hours podcast about the governance of artificial intelligence.
Meet Norman, the psychopathic AI
OHF Admin, Artificial Intelligence, Existential Threat, Singularity
In this article, BBC News reports on a psychopathic algorithm created by a team at MIT as part of an experiment to see what training an AI on data from “the dark corners of the net” would do to its world view. A key issue is the data sets that AIs use for training and learning. But what happens when AIs are given free rein on the internet? Both mainstream and alternative news are well known to have a negative and sensationalistic bias, which attracts hi...
What happens when computers are more intelligent than humans?
OHF Admin, Artificial Intelligence, Existential Threat, Singularity
Oxford philosopher and top-100 global thinker Nick Bostrom poses some terrifying questions and presents a solid argument for dealing with the “alignment issue”: the question of how to align the values of a super-intelligent AI with human values. The solution is not as easy as it first appears, and there is everything to lose if we cannot get this right.