And this is why so many people are wary of handing too much power to algorithms. Techdirt reports, “School Security Software Decided Innocent Parent Is Actually a Registered Sex Offender.” That said, a little common sense on the part of the humans involved would have prevented the unwarranted humiliation. The mismatch took place at an Aurora, Colorado, middle school event, where parent Larry Mitchell presumably just wanted to support his son. When office staff scanned his license, however, the Raptor system flagged him as a potential offender. Reporter Tim Cushing writes:
Full text and strong comment below the fold.
“Not only did these stats [exact name and date of birth] not match, but the photos of registered sex offenders with the same name looked nothing like Larry Mitchell. The journalists covering the story ran Mitchell’s info through the same databases — including Mitchell’s birth name (he was adopted) — and found zero matches. What it did find was a 62-year-old white sex offender who also sported the alias ‘Jesus Christ,’ and a black man roughly the same age as Mitchell, who is white. School administration has little to say about this botched security effort, other than that policies and protocols were followed. But if so, school personnel need better training… or maybe at least an eye check. Raptor, which provides the security system used to misidentify Mitchell, says photo-matching is a key step in the vetting process….
We also noted:
“Even if you move past the glaring mismatch in photos (the photos returned in the Sentinel’s search of Raptor’s system are embedded in the article), neither the school nor Raptor can explain how Raptor’s system returned results that can’t be duplicated by journalists.”
This looks like a mobile version of the PEBCAK error (problem exists between chair and keyboard), and such mistakes will only increase as these verification systems continue to be implemented at schools and other facilities across the country. Cushing rightly points to this problem as “an indictment of the security-over-sanity thinking.” Raptor, a private company, is happy to tout its great success at keeping registered offenders out of schools, but it does not reveal how often its false positives have ruined an innocent family’s evening, or worse. How much control is our society willing to hand over to AIs (and those who program them)?
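Raptor has not published its matching algorithm, so the following is only a hypothetical sketch with invented names and dates. It illustrates the underlying point: a screener that flags on surname alone will surface unrelated registrants, while one that requires an exact full-name and date-of-birth match (the very fields the journalists checked) returns nothing for Mitchell.

```python
# Hypothetical sketch -- NOT Raptor's actual algorithm. All registry
# records, names, and dates below are invented for illustration.
from datetime import date

# Toy "registry" of flagged records.
registry = [
    {"name": "Lawrence Mitchell", "dob": date(1956, 3, 14)},
    {"name": "Larry D. Mitchell", "dob": date(1961, 7, 2)},
]

# The visitor being screened (invented details).
visitor = {"name": "Larry Mitchell", "dob": date(1970, 5, 30)}

def loose_match(visitor, registry):
    """Flag anyone sharing a surname -- prone to false positives."""
    surname = visitor["name"].split()[-1]
    return [r for r in registry if r["name"].split()[-1] == surname]

def strict_match(visitor, registry):
    """Flag only on exact full name AND date of birth."""
    return [r for r in registry
            if r["name"] == visitor["name"] and r["dob"] == visitor["dob"]]

print(len(loose_match(visitor, registry)))   # 2 -- both share "Mitchell"
print(len(strict_match(visitor, registry)))  # 0 -- no true match
```

Even the strict version is only a first filter; as the article notes, the photo-comparison step still depends on a human actually looking at the photos.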
Cynthia Murrell, January 1, 2018
ROBERT STEELE: False positives are inherently defamatory. Certainly a false positive that mistakenly identifies someone as a sexual predator, in a situation where human intelligence and integrity could have negated artificial stupidity, is the worst kind (along with being falsely accused based on falsified evidence, something law enforcement seems to do regularly), but there are others, including unjustified credit-card declines and false arrest. The government is broken and is not protecting We the People from false positives in all their forms. I have thought about this a lot, and concluded that #UNRIG election reform is “root,” along with a truth channel and the outing of elite pedophiles. The truth at any cost lowers all other costs. That is not something the US Government represents today, but I am not giving up on America the Beautiful, and I hope the day comes when we can achieve an honest Congress that makes holistic evidence-based decisions rooted in true cost economics and mindful of Open Source Everything Engineering (OSEE). False positives are Satan’s laughter.