Stephen E. Arnold: Elasticsearch Insights — and IDC Steals Arnold Report Costing $4, Sells for $3500

Commerce, Corruption, IO Impotency, IO Tools
Stephen E. Arnold

Elasticsearch Disrupts Open Source Search

I did a series of reports about open source search. Some of these were published under mysterious circumstances by that leader of the azure chip consultants, IDC. You can see the $3,500-per-report offers on the IDC site. Hey, I am not getting the money, but that’s what some of today’s go-go executives do. The list of [misappropriated] titles appears below my signature.

Elasticsearch, a system that is based on Lucene, evolved after the still-in-use Compass system. What seems to have happened in the last six months is one of those singularities that Googlers seek.

In January 2014, GigaOM, a “real news” outfit, reported that Elasticsearch had moved from free and open source to a commercial model. You can find that report in “6 million Downloads Later, Elasticsearch Launches a Commercial Product.” The write up equates lots of downloads with commercial success. Well, I am not sure that I accept that. I do know that Elasticsearch landed an additional $24 million in series B funding, if Silicon Angle’s information is correct. That leaves Elasticsearch armed with more money than the now aging and repositioning Lucid Works (originally Lucid Imagination). (An interview with one of the founders of Lucid Imagination, the precursor of Lucid Works, is at http://bit.ly/1gvddt5. Mr. Krellenstein left Lucid Imagination abruptly shortly after this interview appeared.)

I noted that in February 2014, InfoWorld, owned by the publisher of the $3,500 report about Elasticsearch, called the company “ultra hip” in “Ultra Hip Elasticsearch Hits Commercial Release.” I don’t see many search companies—proprietary or open source—called “hip.” The write up asserts (although I wonder who provided the content):

Continue reading “Stephen E. Arnold: Elasticsearch Insights — and IDC Steals Arnold Report Costing $4, Sells for $3500”

Berto Jongman: Germany, France to mastermind European data network – bypassing US

07 Other Atrocities, 08 Wild Cards, Corruption, Government, Idiocy, IO Deeds of War, IO Impotency, Military
Berto Jongman

Germany, France to mastermind European data network – bypassing US

Angela Merkel and Francois Hollande will review plans to build up a trustworthy data protection network in Europe. The challenge is to avoid data passing through the US after revelations of mass NSA spying in Germany and France.

Merkel has been one of the biggest supporters of greater data protection in Europe since the revelations that the US tapped her phone emerged in a Der Spiegel news report in October, based on information leaked by former NSA contractor Edward Snowden.

Earlier, France learned from reports in Le Monde that the NSA had also been recording tens of millions of French phone calls, including those of the French authorities. According to the report, in just one month between December 10, 2012 and January 8, 2013, the NSA recorded a total of 70.3 million French phone calls.

Continue reading “Berto Jongman: Germany, France to mastermind European data network – bypassing US”

Howard Rheingold: Multiplexing vs. Multitasking – the Human Computer Interface Enhancing or Degrading?

Cultural Intelligence, IO Impotency
Howard Rheingold

While Google Glass is what most of the world hears about wearable info-devices these days, Steve Mann and Thad Starner were experimenting with (much bulkier!) wearable devices at the Media Lab more than a decade ago. I interviewed Starner back then. He had a head-mounted display, and he also communicated wirelessly with his networks through a one-handed keyboard (“twiddler”), sometimes asking questions about conversations he was engaged in face to face. In this blog post, Kevin Kelly picks out a key passage from an interview with Starner in a book by Michael Chorost. While Cliff Nass' work pretty clearly showed that most (not all!) media multitaskers were degrading rather than enhancing their performance on their tasks, Nass, in conversation with me, noted that he had NOT studied instances in which the multitaskers were working with multiple relevant information streams. Starner calls this multiplexing. We need more research about whether everybody can learn to do this and whether it enhances or degrades performance.

Multiplexing vs Multitasking

Thad Starner is one of several pioneers who have been personally experimenting with continuous visual input devices, sometimes called wearable computing. To most people it looks like he has a screen attached to his eyeball. Starner wore his for years (as have others like Steve Mann, who started doing this earlier). They are living the dream/nightmare of being on the web 24/7, even while walking. So what is it like?


The main question: If your brain is connected to the internet, can you think of anything else? Michael Chorost interviewed Starner (below) in World Wide Mind: The Coming Integration of Humanity, Machines, and the Internet (pp. 142, 160). As far as I can tell, research with the population at large to date suggests that our ability to multitask is not as great as we think it is. In other words, when we multitask we do less well on more tasks. When Chorost asked him about this, Starner made an interesting counterclaim:

Read full article.

Eagle: Huge Hack Shows Weakness of Internet

IO Impotency
300 Million Talons...

Huge hack ‘ugly sign of future’ for internet threats

A massive attack that exploited a key vulnerability in the infrastructure of the internet is the “start of ugly things to come”, it has been warned.

Hosting and security firm Cloudflare said it recorded what was the “biggest ever” attack of its kind on Monday.

Hackers used weaknesses in the Network Time Protocol (NTP), a system used to synchronise computer clocks, to flood servers with huge amounts of data.

The technique could potentially be used to force popular services offline.
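What makes this kind of flood so potent is amplification: a “monlist” query to an older NTP server is a tiny packet, while the reply — a list of the server’s recent clients — can run to dozens of packets, all sent to a spoofed victim address. A back-of-envelope sketch of the math, with assumed packet sizes (real figures vary by server and implementation):

```python
# Rough, illustrative numbers for an NTP "monlist" reflection attack.
# These sizes are assumptions for illustration, not measurements.

REQUEST_BYTES = 64            # one small spoofed mode-7 "monlist" query
RESPONSE_PACKETS = 100        # a busy server can answer with ~100 packets
RESPONSE_PACKET_BYTES = 440   # each listing recent clients of the server

def amplification_factor(request_bytes: int = REQUEST_BYTES,
                         packets: int = RESPONSE_PACKETS,
                         packet_bytes: int = RESPONSE_PACKET_BYTES) -> float:
    """Bytes the victim receives per byte the attacker actually sends."""
    return packets * packet_bytes / request_bytes

print(f"~{amplification_factor():.0f}x amplification")
```

With these assumed numbers the attacker gets several hundred bytes delivered to the victim for every byte sent, which is why a modest botnet can generate the record-breaking traffic Cloudflare reported.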

Read full article.

See Also:

World's largest DDoS strikes US, Europe

Mini-Me: Poverty Facts are Estimates – Time for a Data Revolution?

Corruption, Government, IO Impotency
Who?  Mini-Me?

Huh?

Development data: how accurate are the figures?

The numbers we use in development, and most of what we think of as facts, are actually estimates. It's time for a data revolution

Claire Melamed

The Guardian, 31 January 2014

You know a lot less than you think you do. Around 1.22 billion people live on less than $1.25 (75p) a day? Maybe, maybe not. Malaria deaths fell by 49% in Africa between 2000 and 2013? Perhaps. Maternal mortality in Africa fell from 740 deaths per 100,000 births in 2000 to 500 per 100,000 in 2010? Um … we’re not sure.

These numbers, along with most of what we think of as facts in development, are actually estimates. We have actual numbers on maternal mortality for just 16% of all births, and on malaria for about 15% of all deaths. For six countries in Africa, there is basically no information at all.

In the absence of robust official systems for registering births and deaths, collecting health or demographic data, or the many other things that are known by governments about people in richer countries, the household survey is the foundation on which most development data is built. Numbers from the surveys are used to estimate almost all the things we think we know – from maternal mortality to school attendance to income levels. Household surveys are run by governments or by external agencies such as the World Bank, USAid or Unicef.

But it’s a shaky foundation. First, to make the survey representative of the population, you need to know a lot about the population to make a good sampling frame. This knowledge comes from a population census. But only around 12 of the 49 countries in sub-Saharan Africa have held a census in the past 10 years. So there might be large population groups missing – especially in countries undergoing rapid change. There are likely to be big urban informal settlements, for example, which are not included in the most recent census, and therefore don’t exist for sampling purposes. Surveys also don’t happen very often – 21 African countries haven’t had a survey in the past seven years.
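The sampling-frame problem can be made concrete with a toy calculation. Suppose a country’s census misses a large informal settlement where poverty is higher than in the enumerated population; every number below is invented purely for illustration:

```python
# Hypothetical illustration of sampling-frame bias. A survey drawn from an
# outdated census can only ever see the people the census counted.

census_pop = 8_000_000     # people in the census sampling frame
census_poverty = 0.20      # assumed poverty rate among them
missing_pop = 2_000_000    # settlement dwellers missed by the census
missing_poverty = 0.60     # their assumed (higher) poverty rate

# The survey samples only the frame, so it reports the frame's rate.
survey_estimate = census_poverty

# The true national rate weights both groups by population size.
true_rate = (census_pop * census_poverty + missing_pop * missing_poverty) \
            / (census_pop + missing_pop)

print(f"survey estimate: {survey_estimate:.0%}, true rate: {true_rate:.0%}")
```

In this made-up case the survey reports 20% poverty while the true figure is 28% — the estimate is wrong not because anyone answered badly, but because a fifth of the population was never eligible to be asked.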

And they're not all done in the same way, which makes comparing countries or combining data from different countries very difficult – and illustrates how hard it is to know the “real” number. There are, for example, seven perfectly acceptable ways of asking questions in surveys about how much people eat. A recent experiment by World Bank researchers in Tanzania, comparing results from the different methods, found that estimates of how many people in the country are hungry varied from just under 20% to nearly 70%, depending on the method chosen.

Read full article with more links.

Stephen E. Arnold: IBM Flails at Cloud and Machine Learning

IO Impotency
Stephen E. Arnold

Watson with its Head in the Cloud

IBM’s Watson is proceeding to the cloud. Apparently, though, the journey is proving more challenging than expected. The Register reports, “IBM’s Watson-as-a-Cloud: Is it a Bird? Is it a Plane? No, it’s Another Mainframe.” Writer Jack Clark peers through the marketing hype, maintaining that Watson does not translate to the cloud as easily as IBM would have us believe.

The key to Watson’s functionality is its DeepQA analysis engine, which uses an amalgam of Apache’s Hadoop, Apache’s UIMA, and other tools to achieve machine learning. This means, says Clark, that more work than one might expect must be done to get set up with the cloudy Watson.

He specifies:

“Applying DeepQA to any new domain requires adaptation in three areas:

Continue reading “Stephen E. Arnold: IBM Flails at Cloud and Machine Learning”