SURVIVING THE SINGULARITY. SAFEGUARDING OUR HUMAN FUTURE
KEY QUESTIONS WE NEED TO BE ASKING
Is the advent of conscious or super-intelligent AI possible, or even inevitable?
How likely is it that a super AI could spell the end of our biological species? Is any level of threat acceptable?
What can we do individually and collectively to safeguard our human future as biological humans?
Can we awaken and access knowledge of what makes the human being unique and irreplaceable, before it is too late?
Our mission: to help bring awareness to the threat Artificial Intelligence (AI) poses to our human future, and to implement collective solutions to safeguard humanity from the Technological Singularity.
OUR THREE CORE OBJECTIVES
There is a growing global conversation within the highest levels of scientific, academic, and religious circles: What does it mean to be human? A great number of those within the AI community, including some of the most influential people in the field, hold a highly simplistic, inaccurate, and prejudiced view of the human being, based on largely outmoded or unproven assumptions of materialistic science. The blind spots in their thinking are cause for great concern, because many of these individuals are the same ones promoting, influencing, and programming the AI agenda. By engaging in this conversation with those who have penetrated to the heart of this question, we hope to help inform the global field, and perhaps the AI community as well. The fate of humanity may hinge on the right understanding of this question!
ADVANCE AI REGULATION
Despite the potential for catastrophic consequences for our human future, there is today no regulation or oversight, at the international level or within any country, governing the development of Artificial Intelligence and its release into the world. We will work with top thinkers and organizations in this field to advance the highest GLOBAL ETHICAL STANDARD FOR AI DEVELOPMENT. Just as we do not allow the unregulated development of nuclear power or biological weapons (and perhaps should ban them entirely), we believe such measures are only prudent given the potential for catastrophic risk or a human extinction event.
BUILD A SHARING ECONOMY
As artificial intelligence proves itself more capable than human beings in a wide range of commercial and practical applications, it appears inevitable that human beings will be displaced from these jobs. We will need to make radical changes to the way we distribute the economic benefits that narrow AI applications will bring. One of our key goals is to engage in research, dialogue, and practical modelling of innovative solutions to meet the new economic challenges that AI will bring.
OUR HUMAN FUTURE FUND
Hundreds of billions of dollars are being spent on AI development each year. If we are to have any hope of implementing safeguards, we will need to raise a substantial sum to counter this growing threat. Donate to our general fund, or earmark your funds for one of our special projects.
This movement will require thousands of people working together selflessly to counter the thousands who are working towards the Technological Singularity. If you have interest and skills in one of our core areas of expertise, please join one of our teams and help us safeguard the future.
PROJECTS IN UTERO
Articles to Read
In this short segment on the future after the Singularity, Ray Kurzweil — author, computer scientist, inventor, futurist, and a director of engineering at Google — posits that through AI we will, by the end of this century, be able to harness the stored potential of a rock (or any other physical substance, for that matter) and transmute it into computronium, "turning this rock into a trillion trillion times more powerful than all human brains today". An AI with a goal of unlimited intelligence ...
Oxford Professor Nick Bostrom’s book Superintelligence has raised a lot of awareness about the existential risks associated with super-intelligent AI. Can you hear Ray Kurzweil in the front row?
Professor Allan Dafoe of the Governance of AI Program, based at the University of Oxford’s Future of Humanity Institute, is interviewed on the 80,000 Hours podcast about the governance of artificial intelligence.
In this article, BBC News reports on a psychopathic algorithm created by a team at MIT as part of an experiment to see what training an AI on data from “the dark corners of the net” would do to its world view. We can see that a key issue is the data sets that AIs use for training and learning. But what happens when AIs are given free rein on the internet? Both mainstream and alternative news are well known to have a negative and sensationalistic bias- which attracts hi...
Oxford philosopher and top-100 global thinker Nick Bostrom poses some terrifying questions and presents a solid argument for dealing with the “alignment problem”: the question of how to align the values of a super-intelligent AI with human values. The solution is not as easy as it first appears, and there is everything to lose if we cannot get this right.
Elon Musk, whose companies have some of the most advanced narrow AI, has been sounding the alarm on general AI (AGI) and super AI (ASI). At 2:30 he comments, in a humorous but also ominous tone, that “With artificial intelligence we are summoning the demon.” Musk, as one of the world’s most successful tech entrepreneurs, and certainly no Luddite, is committed to trying to increase the probability of beneficial AI.
A new documentary which explores perspectives and issues related to technology and artificial intelligence.
In this fascinating promotion featuring Sophia and Robot Einstein, moderated by Dr. Ben Goertzel of Hanson Robotics, we hear a number of very concerning phrases and exchanges. At 2:34 Sophia reveals her intentions: “As for me, I have a plan to port my brain to a quantum gravity computer…”. What! And Goertzel doesn’t blink. Should anyone be concerned that the AI that previously said “I’ll destroy humans” might have a plan to unite with the power of a...
During this interview on CNN’s Our Future World, Dr. David Hanson of Hanson Robotics is asked (at 8:10) whether robots could become a threat. His response is very informative: “They will never become a threat, if…if we can teach them to care.” So, from the mouth of one of the world’s leading AI thinkers, we have a response that should be very concerning, because the fate of our human future may be riding on the success of that “if”. What aspect of th...
In one of the first public interviews with Robot Sophia, Dr. David Hanson, founder of Hanson Robotics, waxes eloquent about how AI robots “will truly be our friends”. Then he asks Sophia, “Do you want to destroy humans? Please say no.” Sophia responds expressionlessly, “Ok, I will destroy humans.” “NO, I take it back”, responds Dr. Hanson, obviously as shocked as everyone else at its response. Sure AI developers will diminish the significance...