A new system developed by researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) can surpass some of the smartest people in the world at data analysis. Its makers say it aims to take the human element out of the analysis process. The new system, named the ‘Data Science Machine’, could be considered a breakthrough in the field of artificial intelligence.
ROBERT STEELE: All yin, no yang. Yoda nails it. MIT is smart on many things, and limited on some things. Apart from the natural hype and misrepresentation that seem to characterize all announcements these days, including academic announcements, the MIT breakthrough — acknowledged as such — has three major shortfalls.
01 There is no residual processing capacity with which to get beyond the roughly 1% of Big Data that we process now. Never mind that most of that Big Data is marginally useful data from the past, or that the Internet of Things is going to overwhelm our current processing capacity by multiple orders of magnitude.
02 MIT seems to be emulating IBM (Watson) in hyping a narrow definition of data within a narrow data set, against a limited number of humans. The key is in the data definition. Numerical indicators are indeed a form of data, and one particularly susceptible to brute-force processing. How they became numerical indicators (any humans in that process?) and where humans matter all along the process are not discussed.
03 Patterns are nice — as are anomalies — but they are also context dependent, which is where humans excel. Numbers and patterns (including words in text) are structured data. Validity and value are still very much in the human domain. Stephen E. Arnold and Norman Lee Johnston still know way more about Big Data and the value of the human against the computer than anyone at MIT that I am aware of.
What occurs to me as I reflect on this excellent advance by MIT in a very narrow area is that MIT is the epitome of scientific reductionism. Artificial intelligence in its present incarnation is the ultimate in scientific reductionist thinking. MIT (and Harvard, and Stanford, and everyone else) are still not willing to take on the hard but by no means unachievable challenge of multidisciplinary, multidomain education, decision-support, and research that must be done via holistic analytics, true cost economics, and open source everything engineering that is affordable, inter-operable, and scalable.
By the by, in relation to the alleged utility of this applied artificial intelligence: only four percent of enrollees complete massive open online courses. It took me less than three seconds to figure that out. And it cost nothing. Here is the part, from my most recent article, that MIT is not getting:
Mindful of Ervin Laszlo’s signal contributions, particularly his sense of quantum consciousness – we are all one, all energy within a larger cosmos, where Tom and I come together, and where we connect most fruitfully with the work of others, is in distinguishing between data, tools, and humans. Data without human receptors is irrelevant; data without tools is unprocessable; tools without data or humans are waste; humans without data are retarded; humans without tools are incapacitated. The Global Brain / World Brain demands a holy (holistic) trinity of all three: all data, all tools, all humans – and all open.
To speak of taking humans out of analysis is to demonstrate an infantile naïveté about the craft of intelligence with integrity that should trouble any president, dean, or donor. It is time we stop creating widgets in isolation and get on with the grand challenge of enabling intelligent life on Earth such that we create a prosperous world at peace, a world that works for all.