Robert Steele: Why Big Data is Stillborn (for Now) + Comments from EIN Technical Council

Robert David STEELE Vivas

Big Data 101

A terabyte a day from a single sensor is a big deal. Put enough of those sensors together and you get a petabyte that would take roughly three years to transfer over existing legacy pipes. The “cloud” is fiction — picture using a straw to suck on the ocean.
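A quick back-of-the-envelope check on that claim, with the link speeds below chosen as illustrative assumptions rather than figures from this post:

```python
# Back-of-envelope check on the "three years to transfer a petabyte" claim.
# The link speeds below are illustrative assumptions, not figures from the post.

PETABYTE_BITS = 1e15 * 8  # 1 PB expressed in bits

link_speeds_bps = {
    "100 Mbps (congested legacy circuit)": 100e6,
    "1 Gbps (typical 'big' pipe)": 1e9,
    "10 Gbps (well-provisioned backbone)": 10e9,
}

for label, bps in link_speeds_bps.items():
    seconds = PETABYTE_BITS / bps
    days = seconds / 86_400
    print(f"{label}: {days:,.0f} days ({days / 365:.1f} years)")

# At roughly 100 Mbps of sustained end-to-end throughput the transfer takes
# about 2.5 years -- the same order of magnitude as the claim above.
```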

01 Most pipes are in the gigabit-per-second range. There are a number in the terabit range, but they tend to be hogged by either secret intelligence or secret finance (e.g., between the UK and US). Most “big data” has to be moved in physical containers (see the shipping-versus-network sketch after this list). Most data centers do not have the excess capacity to handle petabyte-level simultaneous search and pattern discovery.

02 The big data endeavors that ARE successful at distributing massive amounts of data (in the multi-terabyte-per-day range) over legacy networks are successful because they were designed from scratch to do exactly that. The same cannot be said of most, if not all, intelligence collection programs.

03 Persistent surveillance is a pig. A really big pig. Most persistent surveillance offerings have software optimized for the one pig, not for many little pigs contributing to one big pig pen. Quality source-independent software is a HUGE differentiator, and most Contracting Officers and their Technical Representatives (COTRs) do not appear to understand this. On top of that is the analytic mindset and training that goes with making the most of many little pigs penned together under one analytic software umbrella.
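To put numbers on the “physical containers” point in item 01, here is a minimal sketch; the crate size, drive capacity, and courier time are all hypothetical assumptions:

```python
# Effective throughput of shipping drives versus pushing the same payload
# over a legacy network link. Every figure here is a hypothetical assumption.

DISKS = 500            # drives in one shipped crate
DISK_TB = 10           # terabytes per drive
SHIP_HOURS = 48        # door-to-door courier time

payload_bits = DISKS * DISK_TB * 1e12 * 8
crate_bps = payload_bits / (SHIP_HOURS * 3600)
print(f"Crate of drives: {crate_bps / 1e9:,.0f} Gbps effective throughput")

NETWORK_BPS = 1e9      # a sustained 1 Gbps legacy pipe
network_days = payload_bits / NETWORK_BPS / 86_400
print(f"Same payload over 1 Gbps: {network_days:,.0f} days")
```

Under those assumptions the crate delivers the equivalent of a few hundred gigabits per second, while the same payload would tie up a 1 Gbps pipe for over a year.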

All ***very*** interesting. We are indeed on the verge of a new era in the craft of intelligence, but there are some very nasty financial and performance failures clearly visible in the near term. There is no over-arching leadership across the USA for all applications, and the secret world tail is wagging the USG science dog, which is not cool at all. To its credit, the US IC's needs and financial commitments are considerable, but they are also managed in a vacuum, without regard to the great good that could be achieved if OMB were able to manage Whole of Government big data planning, programming, and budgeting — while integrating the good of commercial big data!

Success stories include the Large Hadron Collider and, eventually, the Square Kilometre Array telescope — they are addressing data provenance, data movement, and the pre-planned use of tools such as BitTorrent. However, when you are generating 1 petabyte per second, you cannot save more than a tiny fraction of it.
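To give a sense of the scale of that filtering problem, here is a rough sketch; the keep ratio is a hypothetical illustration, not a published LHC or SKA figure:

```python
# Scale of the save-only-a-fraction problem at 1 petabyte per second.
# The keep ratio is a hypothetical illustration, not a published LHC/SKA figure.

RAW_BYTES_PER_SEC = 1e15   # 1 PB/s, as stated above
KEEP_RATIO = 1e-6          # assume one byte in a million survives filtering

raw_per_day = RAW_BYTES_PER_SEC * 86_400
kept_per_day = raw_per_day * KEEP_RATIO

print(f"Raw per day:  {raw_per_day / 1e18:,.1f} exabytes")
print(f"Kept per day: {kept_per_day / 1e12:,.1f} terabytes")
```

Even discarding 99.9999% of the raw stream, such an instrument would still retain tens of terabytes per day that must be stored, moved, and analyzed.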

Simulations are fiction — however high our resolution and however intelligent our models and assumptions, nature and humanity are vastly higher-resolution and more complicated. This is why human intelligence matters — technology is not a substitute for thinking. Persistent surveillance is not a substitute for human intelligence. We need both, thoughtfully managed and artfully nurtured.

The totality of the national network defines the intelligence and integrity of the nation. Bandwidth, throughput, and how “real time” is defined all come down to the weakest link in the chain, and we have many ***very weak*** links across the chain, especially in Washington, D.C. The bottom line is always “who benefits?” The FCC decision to destroy net neutrality is in error. The citizen, not the corporation, is “root” in a Smart Nation. We who understand all this must continue to fight for an Autonomous Internet, Open Source Everything, and the eventual transparent integration of education, intelligence, and research away from the strait-jackets that now bind them so tightly.


COMMENTS FROM EIN TECHNICAL COUNCIL

IDEN A: A single strand of fiber can be lit with 88 wavelengths of 100 gigabits each. That is, in aggregate, an 8.8 terabit pipe. It is also the Internet2 backbone, and I believe they finally have it built and lit. You might call a wavelength a “pipe,” and if there is anyone designing more than 100 gigabit wavelengths I'd like to hear about it.
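For reference, a quick check of the arithmetic IDEN A describes (88 wavelengths at 100 gigabits each):

```python
# Aggregate capacity of one fiber strand using IDEN A's figures:
# 88 DWDM wavelengths, each lit at 100 gigabits per second.

WAVELENGTHS = 88
GBPS_PER_WAVELENGTH = 100

aggregate_gbps = WAVELENGTHS * GBPS_PER_WAVELENGTH
print(f"Aggregate: {aggregate_gbps / 1000:.1f} Tbps per strand")  # 8.8 Tbps
```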

IDEN B: A lot of undersea fiber is 10+ years old, so it is not able to handle wave division multiplexing as well as modern fiber. Here's a Wikipedia article about FLAG which I'm sure is not 100% accurate, but it mentions 10 and 80 Gb links, and I think that is using WDM. The link to Alaska, by way of example, is 6 pairs, but each pair was only rated for 2 Gb. I think they were hopeful to get it to 4 times 2 Gb with WDM. All I'm saying is that the bandwidth of new fiber exceeds that of old fiber. It's not just the WDM endpoints, but also the regens. One of the newest fibers (scheduled for testing in 2015) is in the 20 Tb range, even more than you mentioned, and is for high-frequency trading between London and Tokyo (routed via the Northwest Passage, which shaves 100 msec from the route). http://arcticfibre.com/ For Internet2, I think their max long-haul links are still 100 Gb, and that is mostly at the switching centers – only a few endpoint connections are at 100 Gbps so far. You might also be thinking of National Lambda Rail [SHUT DOWN] and a few other projects, which pushed to 40 Gbps end-to-end.
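The same arithmetic applied to IDEN B's figures for the old Alaska link versus the new Arctic system; the “upgraded” value assumes the hoped-for 4 x 2 Gb per pair with WDM:

```python
# Old versus new undersea capacity, using the figures IDEN B quotes.
# The "upgraded" value assumes the hoped-for 4 x 2 Gb per pair with WDM.

PAIRS = 6
GBPS_PER_PAIR_AS_BUILT = 2
GBPS_PER_PAIR_WITH_WDM = 4 * 2
NEW_ARCTIC_GBPS = 20_000   # the ~20 Tb system quoted for 2015 testing

as_built = PAIRS * GBPS_PER_PAIR_AS_BUILT
upgraded = PAIRS * GBPS_PER_PAIR_WITH_WDM
print(f"Alaska link as built:         {as_built} Gbps")
print(f"Alaska link with WDM upgrade: {upgraded} Gbps")
print(f"New Arctic system:            {NEW_ARCTIC_GBPS / 1000:.0f} Tbps "
      f"(~{NEW_ARCTIC_GBPS // upgraded}x the upgraded link)")
```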

IDEN C: Good here.

IDEN D: Rock on, dude!  They do NOT want to hear this!

See Especially:

2012 Integrity, Reflexivity, & Open Everything – Presentation to the Washington Academy of Sciences

1976+ Intelligence Models 2.1

1957+ Decision Support Story

See Also:

Big Data @ Phi Beta Iota

Smart Nation @ Phi Beta Iota
