Berto Jongman: Philosophers and Uber-Technos Get Funding to Announce End of Humanity — And They Get It WRONG.

Cultural Intelligence, Earth Intelligence
Berto Jongman

But What Would the End of Humanity Mean for Me?

Preeminent scientists are warning about serious threats to human life in the not-distant future, including climate change and superintelligent computers. Most people don't care.

Sometimes Stephen Hawking writes an article that both mentions Johnny Depp and strongly warns that computers are an imminent threat to humanity, and not many people really care. That is the day there is too much on the Internet. (Did the computers not want us to see it?)

Hawking, along with MIT physics professor Max Tegmark, Nobel laureate Frank Wilczek, and Berkeley computer science professor Stuart Russell ran a terrifying op-ed a couple weeks ago in The Huffington Post under the staid headline “Transcending Complacency on Superintelligent Machines.” It was loosely tied to the Depp sci-fi thriller Transcendence, so that’s what’s happening there. “It's tempting to dismiss the notion of highly intelligent machines as mere science fiction,” they write. “But this would be a mistake, and potentially our worst mistake in history.”

And then, probably because it somehow didn’t get much attention, the exact same piece ran again last week in The Independent, which went a little further with the headline: “Transcendence Looks at the Implications of Artificial Intelligence—but Are We Taking A.I. Seriously Enough?” Ah, splendid. Provocative, engaging, not sensational. But really what these preeminent scientists go on to say is not not sensational.

“An explosive transition is possible,” they continue, warning of a time when particles can be arranged in ways that perform more advanced computations than the human brain. “As Irving Good realized in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a ‘singularity.'”

Get out of here. I have a hundred thousand things I am concerned about at this exact moment. Do I seriously need to add to that a singularity?

“Experts are surely doing everything possible to ensure the best outcome, right?” they go on. “Wrong. If a superior alien civilization sent us a message saying, ‘We'll arrive in a few decades,’ would we just reply, ‘Okay, call us when you get here–we'll leave the lights on?' Probably not. But this is more or less what is happening with A.I.”

More or less? Why would the aliens need our lights? If they told us they’re coming, they’re probably friendly, right? Right, you guys? And then the op-ed ends with a plug for the organizations that these scientists founded: “Little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute.”

So is this one of those times where writers are a little sensational in order to call attention to serious issues they really think are underappreciated? Or should we really be worried right now?

In a lecture he gave recently at Oxford, Tegmark named five “cosmocalypse scenarios” that will end humanity. But they are all 10 billion to 100 billion years from now. They are dense, theoretical, and extremely difficult to conceptualize. The Big Chill involves dark energy. Death Bubbles involve space freezing and expanding outward at the speed of light, eliminating everything in their path. There's also the Big Snap, the Big Crunch, and the Big Rip.

But Max Tegmark isn’t really worried about those scenarios. He’s not even worried about the nearer-term threats, like the concept that in about a billion years, the sun will be so hot that it will boil off the oceans. By that point we’ll have technology to prevent it, probably. In four billion years, the sun is supposed to swallow the earth. Physicists are already discussing a method to deflect asteroids from the outer solar system so that they come close to Earth and gradually tug it outward away from the sun, allowing the Earth to very slowly escape its fiery embrace.

Tegmark is more worried about much more immediate threats, which he calls existential risks. That’s a term borrowed from philosopher Nick Bostrom, director of Oxford University’s Future of Humanity Institute, a research collective modeling the potential range of human expansion into the cosmos. Their consensus is that the Milky Way galaxy could be colonized in less than a million years—if our interstellar probes can self-replicate using raw materials harvested from alien planets, and we don’t kill ourselves with carbon emissions first.

“I am finding it increasingly plausible that existential risk is the biggest moral issue in the world, even if it hasn’t gone mainstream yet,” Bostrom told Ross Andersen recently in an amazing profile in Aeon. Bostrom, along with Hawking, is an advisor to the recently established Centre for the Study of Existential Risk at Cambridge University, and to Tegmark’s new analogous group in Cambridge, Massachusetts, the Future of Life Institute, which has a launch event later this month. Existential risks, as Tegmark describes them, are things that are “not just a little bit bad, like a parking ticket, but really bad. Things that could really mess up or wipe out human civilization.”

The single existential risk that Tegmark worries about most is unfriendly artificial intelligence. That is, when computers are able to start improving themselves, there will be a rapid increase in their capacities, and then, Tegmark says, it’s very difficult to predict what will happen.

Read full article.

Phi Beta Iota: These are not intelligence professionals, nor even multi-disciplinary academics. Not a single one of them has a clue as to what the ten high-level threats to humanity are, nor how to go about orchestrating an open source everything alternative to the present top-down hierarchical system that concentrates power and gold at the top while externalizing all costs to the public and the Earth. They are evidently unaware of just how screwed up “modern” information technology actually is — the chance of a cascading series of nuclear power plant meltdowns is vastly more near-term than any possible artificial intelligence run amok scenario. Artificial Intelligence (AI) was retarded in 1990, remained retarded in 2000-2014, and is likely to continue being retarded into the foreseeable future, in part because engineers do not do heuristics and try to shut the human out of the loop.

Now, to their credit, there is absolutely no question but that we must do much better at oversight of computational mathematics, models, and algorithms. Some of the stuff being done today is not just out of control, it explodes economies — high-speed trading, for example. Google being wrong is a lesser example. While we lack full disclosure of the alien technologies and selected black programs that could be unleashed (think Stuxnet on steroids wiping out all electrical power stations worldwide), what we see in this initiative is a distressing combination of people with too much money and people with too little practical common sense. As the author of the above article notes, this is a white-male cabal — no people of color or other gender need apply. It is a club, not a community. It is a real shame there is no money to be found for educating the public about imminent threats — imminent tangible threats — such as those the United Nations High-Level Panel so ably identified in priority order.

The human brain continues to be a million times more efficient than any computer, and with five billion human brains being ignored by this ethereally eclectic ecology of eminent experts, we within the Phi Beta Iota network continue to call for solutions that nurture and elevate humanity, rather than striving to create alarm about the artificial. AI is not a threat. Idiot humans making rotten assumptions are one threat — idiot humans and genetic tampering are another threat — idiot humans refusing to embrace ethical evidence-based decision-support are a third threat. One would have thought philosophers would focus on the essence of humanity and morality, but that is not where the easy money is…. Integrity. Morality. Humanity. Plenty of work to be done on these topics, but evidently not at Cambridge or MIT.

Related:

2013 Why We Should Think About the Threat of Artificial Intelligence

Genetic invasion: distorting the human future

See Especially:

Review: A More Secure World–Our Shared Responsibility–Report of the Secretary-General’s High-level Panel on Threats, Challenges and Change

Review: Collapsing Consciously – Transformative Truths for Turbulent Times

Review: Designing a World that Works For All: Solutions & Strategies for Meeting the World’s Needs

Review: Ending the Male Leadership Myth – How Women Can Save Us from Destroying Ourselves

Review: Enough Is Enough: Building a Sustainable Economy in a World of Finite Resources

Review: High Noon–Twenty Global Problems, Twenty Years to Solve Them

Review: Holistic Darwinism: Synergy, Cybernetics, and the Bioeconomics of Evolution

Review: Managing Nano-Bio-Info-Cogno Innovations: Converging Technologies in Society

Review: Nonzero–The Logic of Human Destiny

Review: Other Inconvenient Truths Beyond Global Warming

Review: The Future of Life

Review: The Vanishing of a Species? A Look at Modern Man’s Predicament by a Geologist (Hardcover)

See Generally:

Worth a Look: Book Review Lists (Positive Future-Oriented)
Worth a Look: Book Review Lists (Negative Status-Quo)
Worth a Look: Recent Books on 10 High Level Threats
Worth a Look: Recent Books on 12 Core Policies
Worth a Look: Recent Books on 12 Major Political Players
Worth a Look: Recent Books on True Cost Economics
Worth a Look: Recent Books on All the Opens
