Bottom Line Up Front: Indexing systems have a spotty record of identifying weaponized, shaped, or distorted information.
This morning I thought briefly about the Slashdot item “Profanity Laced Academic Paper Exposes Scam Journal.” The item describes a journal write-up filled with nonsense. The paper was nonetheless accepted by the International Journal of Advanced Computer Technology. I have received requests for papers from similar outfits. I am not interested in getting on a tenure track, and the notion of paying someone to publish my writings does not resonate. I either sell my work or give it away in this blog or one of the others available to me.
The question in my mind ping-ponged between two ways to approach this “pay to say” situation.
First, the authors involved in academic pursuits: are these folks chasing the prestige that comes from publishing in an academic journal? My hunch is that the motivation is similar to the force that drives the fake-data crowd.
Second, has the search engine optimization crowd convinced otherwise semi-coherent individuals that a link, any link, is worth money?
As I noted up front, indexing systems have a spotty record of identifying weaponized, shaped, or distorted information. The fallback position for many vendors is that, by processing large volumes of information, the outliers can be easily tagged and then ignored or disproved.
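The vendors’ pitch amounts to frequency-based outlier tagging: if enough sources repeat a claim, the rare, divergent claim stands out. A minimal sketch of that logic, with an invented corpus and an arbitrary threshold of my own choosing:

```python
from collections import Counter

# Hypothetical corpus: each "document" asserts one claim string.
# The vendor pitch: with enough volume, the rare claim stands out.
documents = (
    ["ebola spreads via direct contact"] * 51
    + ["ebola is airborne"] * 2  # the fringe claim
)

counts = Counter(documents)
total = sum(counts.values())

# Tag any claim asserted by fewer than 5% of documents as an outlier.
# The 5% cutoff is purely illustrative, not any vendor's actual value.
OUTLIER_SHARE = 0.05
outliers = {claim for claim, n in counts.items() if n / total < OUTLIER_SHARE}

print(outliers)  # → {'ebola is airborne'}
```

The corpus, claims, and threshold are all invented for illustration; production systems lean on far richer signals than raw counts.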
Sounds good. Does it work? Nope. The idea that open source content is “accurate” may be a false assumption. Run queries on Bing, iSeek, Google, and Yandex for yourself. Check out information related to the Ebola epidemic or modern fighter aircraft. What’s correct? What’s hoo hah? What’s downright craziness? What’s filtered? Figuring out what to accept as close to the truth is expensive and time consuming, and it is not part of today’s business model in most organizations, I fear.
Stephen E Arnold, November 23, 2014