Fun this is.
By John Keller, Editor
Military & Aerospace Electronics, 1 April 2014
It might be a fun exercise to sit with the leading practitioners of high-performance embedded computing (HPEC) to trade opinions about what are the toughest, gnarliest, most knee-buckling HPEC challenges in the foreseeable future.
We would hear the usual — bistatic radar, adaptive electronic warfare (EW), and wide-area communications intelligence. Well, I’ve got one that’s a real beaut, and one that I think we’re all going to be hearing a lot more about: hypertemporal imaging for persistent surveillance.
Yeah, it was a new one on me, too. Put simply, hypertemporal imaging involves multispectral or hyperspectral imaging over time. Where persistent surveillance is concerned, it’s also a gigantic exercise in gathering gazillions of bits of data, and then throwing most of them away.
Multispectral and hyperspectral imaging involves slicing an image into a few or even many different spectral bands to uncover details that otherwise might be lost. This alone presents a formidable digital signal processing challenge. Now add the dimension of time and the problem grows by orders of magnitude.
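To see how fast the problem grows, consider a back-of-envelope sketch of raw sensor output. All of the parameters below — resolution, band count, bit depth, frame rate — are illustrative assumptions, not the specifications of any real system:

```python
# Back-of-envelope data-rate sketch for a hypothetical hypertemporal sensor.
# Every parameter here is an assumption for illustration only.

def raw_data_rate_gbps(width_px, height_px, bands, bit_depth, frames_per_sec):
    """Raw sensor output in gigabits per second."""
    bits_per_frame = width_px * height_px * bands * bit_depth
    return bits_per_frame * frames_per_sec / 1e9

# A single-band imager capturing one frame per second:
mono = raw_data_rate_gbps(4096, 4096, bands=1, bit_depth=12, frames_per_sec=1)

# The same focal plane sliced into 200 spectral bands, sampled at 10 Hz:
hyper = raw_data_rate_gbps(4096, 4096, bands=200, bit_depth=12, frames_per_sec=10)

print(f"single band, 1 Hz: {mono:.2f} Gbit/s")   # ~0.20 Gbit/s
print(f"200 bands, 10 Hz : {hyper:.2f} Gbit/s")  # ~402.65 Gbit/s
print(f"growth factor    : {hyper / mono:.0f}x") # 2000x
```

Multiplying bands by time multiplies the data rate — which is why the computing has to live at the sensor.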
Hypertemporal imaging is about separating the wheat from the chaff; more accurately, it encapsulates the challenge of finding the proverbial needle in the digital haystack. It's finding that little sliver of information that indicates even the tiniest changes of crucial importance to building a reliable intelligence picture.
Detecting and classifying tiny changes is what hypertemporal imaging is all about. It may be a bit of disturbed dirt that wasn't there before, which could indicate the presence of an improvised explosive device (IED). It might be a fleeting spectral shadow under forest canopy that marks the passing of a military or terrorist vehicle. It also might indicate the transport of a dirty bomb through city streets.
Hypertemporal imaging might not be of much use unless intelligence analysts have a pretty good idea what they’re looking for. To make this technology work will require the most powerful high-performance computing coupled with some of the most sophisticated computer algorithms that can detect and identify just a few bits of data — a pixel here and a pixel there — out of oceans of data.
Not a computing challenge for the faint of heart.
Military researchers are just getting started with hypertemporal imaging. Just last week the Air Force Research Laboratory asked Raytheon to develop a space-based hypertemporal sensor to help analysts get a better understanding of what hypertemporal imaging might offer to intelligence.
Once they start to understand the kinds of advantages this technology might bring to persistent surveillance, then look out. First, however, must come the computing capability. Hypertemporal sensors most likely will require vast amounts of computing resources at the sensor; the sheer amount of data could overwhelm data link and telemetry resources.
Then it will take the right kinds of algorithms that can efficiently latch onto only the information of interest and throw out the rest of that fire hose of data.
Remember, only the imagery data that indicates change is of interest; the rest is just garbage. That means filtering complex streaming data down to the one-half of one percent that means something, and it has to happen in real time.
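The core idea — keep only the pixels that changed, discard everything else — can be sketched in a few lines. This is a minimal frame-differencing illustration, assuming frames arrive as equal-length lists of pixel intensities; a real system would operate on calibrated radiance cubes with far more sophisticated change detectors:

```python
# Minimal change-only filtering sketch. The frame format and threshold
# are assumptions for illustration, not a description of any real system.

def changed_pixels(prev_frame, curr_frame, threshold):
    """Return (index, old, new) for pixels whose change exceeds the threshold.
    Everything below the threshold is discarded -- that is the data reduction."""
    return [
        (i, old, new)
        for i, (old, new) in enumerate(zip(prev_frame, curr_frame))
        if abs(new - old) > threshold
    ]

prev = [10, 10, 10, 10, 10, 10, 10, 10]
curr = [10, 10, 42, 10, 10, 10, 11, 10]  # one real change, one bit of noise

hits = changed_pixels(prev, curr, threshold=5)
print(hits)                    # [(2, 10, 42)] -- only the real change survives
print(len(hits) / len(curr))   # 0.125 -- fraction of the stream kept
```

The threshold is doing the "separating wheat from chaff" work here: set it too low and noise floods the downlink, too high and the disturbed dirt goes unnoticed.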
Phi Beta Iota: This concept plays to the traditional US tendency to throw money at collection without regard to the processing, the analysis, or the return on investment. It is most promising when entering ungovernable zones that have never been adequately covered by — and may not even be coverable by — existing legacy constellations. Most people have no idea what this means in terms of data storage and processing: on the order of one terabyte every 24 hours from a single node. This is all very important, but it is not a substitute for the fifteen slices of Human Intelligence (HUMINT), for properly exploiting cultural, historical, and linguistic Open Source Intelligence (OSINT), or for having first-rank human analysts able to connect dots a computer will never intuit. We need both.