Yoda: After Language, Whole Earth Mind

Cultural Intelligence
Got Crowd? BE the Force!

What Will Come After Language?

Ben Goertzel

December 27, 2012

A few weeks ago I gave a talk, via Skype from Hong Kong, at the Humanity+ San Francisco conference…. Here are some notes I wrote before the talk, basically summarizing what I said in the talk (though of course, in the talk I ended up phrasing many things a bit differently…).

. . . . . . . .

My suggestion is simple but radical: In the future, the distinction between linguistic utterances and minds is going to dissolve.

In the not too distant future, a linguistic utterance is simply going to be a MIND with a particular sort of cognitive focus and bias.

I came up with this idea in the course of my work on the OpenCog AI system.  OpenCog is an open-source software system that a number of us are building, with the goal of  eventually turning it into an artificial general intelligence system with capability at the human level and beyond.  We’re using it to control intelligent video game characters, and next year we’ll be working with David Hanson to use it to control humanoid robots.

. . . . . . . .

One way is to create a sort of “standard reference mind” — so that, when mind A wants to communicate with mind B, it first expresses its idiosyncratic concepts in terms of the concepts of the standard reference mind.   This is a scheme I invented in the late 1990s — I called it “Psy-nese.”   A standard reference mind is sort of like a language, but without so much mess.  It doesn’t require thoughts to be linearized into sequences of symbols.  It just standardizes the nodes and links in semantic graphs used for communication.
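To make the "standard reference mind" idea a bit more concrete, here is a toy sketch in Python. It is emphatically not OpenCog code, and every name in it is invented for illustration: two minds label the same concepts differently, and communication consists of re-expressing a small semantic graph of nodes and links in a shared, standardized vocabulary, with no linearization into word sequences.

```python
# Toy illustration of a "standard reference mind" (Psy-nese-style) exchange.
# Each mind keeps its own idiosyncratic concept labels; communication maps a
# semantic graph onto a shared, standardized vocabulary of nodes and links.
# All names here are hypothetical.

from dataclasses import dataclass

# The "standard reference mind": a fixed vocabulary of node and link types.
STANDARD_NODES = {"animal", "feline", "affection"}
STANDARD_LINKS = {"is_a", "feels"}

Triple = tuple[str, str, str]  # (source node, link type, target node)

@dataclass
class Mind:
    name: str
    to_standard: dict[str, str]  # idiosyncratic concept -> standard node

    def express(self, thought: list[Triple]) -> list[Triple]:
        """Re-express an idiosyncratic semantic graph in the standard vocabulary."""
        out = []
        for src, link, dst in thought:
            assert link in STANDARD_LINKS, f"unknown link type: {link}"
            out.append((self.to_standard[src], link, self.to_standard[dst]))
        return out

    def interpret(self, message: list[Triple]) -> list[Triple]:
        """Map a standard-vocabulary graph back into this mind's own concepts."""
        back = {v: k for k, v in self.to_standard.items()}
        return [(back[s], link, back[d]) for s, link, d in message]

# Mind A and mind B label the same concepts differently.
mind_a = Mind("A", {"kitty": "feline", "critter": "animal", "warm-fuzzies": "affection"})
mind_b = Mind("B", {"cat": "feline", "beast": "animal", "fondness": "affection"})

thought_a = [("kitty", "is_a", "critter"), ("kitty", "feels", "warm-fuzzies")]
message = mind_a.express(thought_a)   # the graph, in the shared vocabulary
print(mind_b.interpret(message))      # [('cat', 'is_a', 'beast'), ('cat', 'feels', 'fondness')]
```

The point the sketch tries to capture is that what travels between the two minds is still a graph, not a sentence.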

Read full post.

Deep Comment by Eray Ozkural on December 28, 2012 at 12:57 am.

If we can create a sufficiently high-bandwidth interface between two brains (I'm thinking gigabit or more), then I think the brains may adapt to sharing semantic content on their own. If the right regions are connected, we might find that content in all modalities can be shared. It would already be a big improvement just to share sensory/actuator maps, and perhaps an AI program could help tune all the neural codes to a common format/scheme. It'd be great if you could calibrate once, and then just use it with anyone :)
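To illustrate the "calibrate once, then just use it with anyone" idea, here is a toy numerical sketch. It is an assumed setup, not a model of real neural data: each participant fits a linear map between their own private code and a shared reference space from paired calibration samples, after which any two participants can exchange content through that space without ever calibrating against each other directly.

```python
# Toy numerical version of "calibrate once, then use with anyone": each brain's
# private code is aligned to a shared reference space via least squares, and
# content then flows between any pair of brains through that space.
# Dimensions, mixing matrices, and noise levels are all invented.

import numpy as np

rng = np.random.default_rng(0)
d_shared, d_a, d_b, n_calib = 8, 32, 40, 200

# Stand-ins for each brain's unknown, fixed private encoding.
mix_a = rng.normal(size=(d_shared, d_a))
mix_b = rng.normal(size=(d_shared, d_b))

# Shared calibration stimuli, represented in the common reference space.
shared = rng.normal(size=(n_calib, d_shared))
code_a = shared @ mix_a + 0.01 * rng.normal(size=(n_calib, d_a))
code_b = shared @ mix_b + 0.01 * rng.normal(size=(n_calib, d_b))

# Calibrate each participant once against the shared space.
dec_a, *_ = np.linalg.lstsq(code_a, shared, rcond=None)  # A's code -> shared
enc_b, *_ = np.linalg.lstsq(shared, code_b, rcond=None)  # shared -> B's code

# Later, A produces new content; it reaches B through the shared format,
# even though A and B never calibrated against each other directly.
new_shared = rng.normal(size=(1, d_shared))
a_content = new_shared @ mix_a
b_rendering = (a_content @ dec_a) @ enc_b
print(np.allclose(b_rendering, new_shared @ mix_b, atol=0.1))  # expected: True
```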

In the end, a terabit/sec interface may be required (yes, I once made a serious calculation of this :) ). Logically, you want to match at least the bandwidth of the corpus callosum.
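The kind of back-of-the-envelope estimate hinted at here might look something like the following. The axon count, firing rate, and bits-per-spike figures are illustrative assumptions rather than measurements, and the result swings by orders of magnitude depending on which values are plugged in.

```python
# Back-of-the-envelope bandwidth bound in the spirit of the comment above.
# All three parameters are rough assumptions, not measurements.

def callosal_bandwidth_bits_per_sec(n_axons=2e8, mean_rate_hz=20.0, bits_per_spike=2.0):
    """Crude estimate: number of axons x spikes per second x bits per spike."""
    return n_axons * mean_rate_hz * bits_per_spike

low = callosal_bandwidth_bits_per_sec(n_axons=2e8, mean_rate_hz=5, bits_per_spike=1)
high = callosal_bandwidth_bits_per_sec(n_axons=2.5e8, mean_rate_hz=100, bits_per_spike=4)
print(f"{low / 1e9:.1f} Gbit/s to {high / 1e9:.0f} Gbit/s")  # ~1 to ~100 Gbit/s under these guesses
```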

If all modalities could be shared, *then* it would be a simple matter to share not only natural language semantics, syntax, and semantic context, but also things like giving control of your arm to another person, looking through the eyes of another, gauging their emotional state, or accessing their memory, however freaky these might sound. Obviously, the ultimate brain2brain interface would need extensive privacy controls (luckily, such complex software can't be crafted by web hobos).

OTOH, simpler modes of communication will be possible long before such ultimate interfaces. Telepathy through coupling vocalization decoders with projection onto the auditory cortex would be the norm, but visual interfaces may also be possible, and with such interfaces complex computer data might be exchanged through appropriate visual UIs.

Furthermore, going beyond two brains would likely require an artificial neural space that allows multiplexing of neural codes, perhaps combining them in a kind of "ensemble system" in which a neuro-based artificial intelligence forms the top-level control, so that a coherent "self" could emerge from the co-operation of n biological brains.

More simply, such a system can be thought of as an extension of the voting-based ensemble systems often used in machine learning. Possibly, the participants would expose only some parts of their wetware to the system, and would want to monitor both collective and individual decision making.
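A minimal sketch of that voting-ensemble analogy, with invented names and weights: each participant exposes a vote (or abstains, keeping that part of their wetware private), a top-level controller aggregates by weighted majority, and both the collective outcome and the individual contributions remain inspectable.

```python
# Minimal weighted-majority "ensemble of brains" sketch. Participant names,
# votes, and weights are purely illustrative.

from collections import Counter
from typing import Optional

def collective_decision(votes: dict[str, Optional[str]],
                        weights: Optional[dict[str, float]] = None) -> tuple[str, Counter]:
    """Weighted majority vote; a None vote means the participant abstained,
    i.e. chose not to expose that part of their wetware to the system."""
    weights = weights or {}
    tally: Counter = Counter()
    for participant, vote in votes.items():
        if vote is None:
            continue  # abstention: nothing exposed
        tally[vote] += weights.get(participant, 1.0)
    decision = tally.most_common(1)[0][0]
    return decision, tally

votes = {"brain_1": "go left", "brain_2": "go left",
         "brain_3": "go right", "brain_4": None}  # brain_4 keeps this private
decision, tally = collective_decision(votes)
print(decision)     # 'go left' -- the collective, top-level choice
print(dict(tally))  # individual contributions remain visible for monitoring
```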
