Full text with Phi Beta Iota comments and additional links below the fold.
I know that Googlers and Xooglers are absolutely the best. I read “Ex-Google Engineer to Lead Fix-It Team for Government Websites.” I am confident that the Xoogler will bring high magic to the problematic Web sites from numerous Federal entities and quasi-government entities. In the year 2000, there were 36,000 of these puppies. I don’t recall how many were not working the way the developers intended.
I don’t know how many US government Web sites there are today because the nifty free tools I used in 2000 and 2001 no longer work the way they did a decade ago.
How long will it take to address the backend issues of HealthCare.gov or get the other sites with glitches working “just like Google”? I think USA.gov might warrant a quick look too. I suppose one could check out the performance metrics for America Online or Yahoo, two outfits run by Xooglers. There may be some data that help in predicting the fix time.
Phi Beta Iota: Google only covers the shallow end of the pool (2% at best) and it cannot be trusted to actually secure “enterprise” information not intended to be exported to Google’s private cloud. The absolute worst thing the US Government could do right now is transfer custody of its legacy data to Google. What it should be doing is going open source everything and creating a new open cloud – open data – open everything solution for all public information, one that includes reliable, by-name security for individual pieces of data.
I love quotes about Big Data. “Big” is relative. You have heard a doting parent ask a toddler, “How big are you?” The toddler puts up his or her arms and says, “So big.” Yep, big at a couple of years old and 30 inches tall.
“If You Think Big Data’s Big Now, Just Wait” contains a quote attributed to a Big Data company awash in millions in funding money. Here’s the item I flagged for my Quote to Note file:
“The promise of big data has ushered in an era of data intelligence. From machine data to human thought streams, we are now collecting more data each day, so much that 90% of the data in the world today has been created in the last two years alone. In fact, every day, we create 2.5 quintillion bytes of data — by some estimates that’s one new Google every four days, and the rate is only increasing…
I like the 2.5 quintillion bytes of data.
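For scale, a quick back-of-the-envelope calculation (the figures are the quote’s, not independently verified; the Python below just restates them in more familiar units):

```python
# Sanity check of the article's "2.5 quintillion bytes per day" claim.
# 2.5 quintillion = 2.5e18, i.e. 2.5 exabytes per day.
BYTES_PER_DAY = 2.5e18

exabytes_per_day = BYTES_PER_DAY / 1e18          # exabyte = 1e18 bytes
zettabytes_per_two_years = BYTES_PER_DAY * 365 * 2 / 1e21  # zettabyte = 1e21 bytes

print(f"{exabytes_per_day:.1f} EB per day")
print(f"about {zettabytes_per_two_years:.3f} ZB over two years")
```

At that rate, two years of output is roughly 1.8 zettabytes, which is at least consistent with the quote’s “90% of the data in the world today has been created in the last two years” framing, if you accept the underlying estimates.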
I am confident that Helion, IBM’s brain chip, and Google’s sprawling system can make data manageable. Well, more correctly, fancy systems will give the appearance of making quintillions of whatevers yield actionable intelligence.
If you do the Samuel Taylor Coleridge thing and enter into a willing suspension of disbelief, Big Data is just another opportunity.
How do today’s mobile-equipped MBAs make decisions? A Google search, ask someone, or guess? I suggest you consider how you make decisions. How often do you have an appetite for SPSS-style number crunching or a desire to see what’s new from the folks at Moscow State University?
Yep, data intelligence for the tiny percentage of the one percent who paid attention in statistics class. This is a type of saucisson I enjoy so much. Will this information find its way into a Schubmehl-like report about a knowledge quotient? For sure, I think.
Phi Beta Iota: We analyze less than 1% of the big data we have today, and there is no way, with current mind-sets and investments, we will get to exascale processing in the next decade. The above article is, as Brother Stephen so gently puts it, utter crap. To arrive at a multi-disciplinary, multi-lingual, multi-medium big data solution demands an open source everything approach that is affordable, inter-operable across all boundaries, and scalable from local to global. No one, anywhere, is championing this approach, that we know of.