Well folks, we’ve already talked a bit about some ways various artisanal and digitally-printed masks and other costume stuff can fool facial recognition. (And even eyewitness recognition, if it’s done well enough…). Now, some intrepid tech researchers have demonstrated that you might not even need to get so fancy to beat some common state-of-the-art facial recognition software.
A small team of super-nerds from Carnegie Mellon and UNC Chapel Hill just “demonstrated techniques for generating accessories in the form of eyeglass frames that, when printed and worn, can effectively fool state-of-the-art face-recognition systems….” In their article, they “…showed that our eyeglass frames enable subjects to both dodge recognition and to impersonate others.”
This is all possible because facial recognition software is basically geared toward taking incredibly precise measurements of facial landmarks. It’s not easy to physically change those landmarks, and that’s why biometrics generally works: you’re probably not changing your fingerprints or facial bone structure anytime soon. But the software is only looking at a very specific set of data points, and with some clever analysis of exactly what the software looks at and how it authenticates against those data points, the researchers figured out that in most cases you only need to obscure, alter, or “perturb” about 6%-10% of the facial image to throw it off. You may not be fooling a human being, but you will fool the software.
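The shape of that attack can be sketched in a few lines of code. Everything below is a hypothetical toy — a linear “recognizer” over an 8x8 image standing in for the deep neural networks the paper actually attacks, with made-up names like `score` and `mask` — but the key move is the same: compute a gradient of the model’s score and apply it only inside the eyeglass region.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face recognizer: a linear scorer over an 8x8 "face
# image". (Entirely hypothetical -- the systems attacked in the paper are
# deep networks -- but the attack has the same shape.)
SIDE = 8
weights = rng.normal(size=(SIDE, SIDE))

def score(image):
    """Higher score = model is more confident this is the enrolled person."""
    return float(np.sum(weights * image))

# Binary mask marking the eyeglass-frame region: two rows of the grid.
# (That's 25% of this tiny image; the paper perturbs roughly 6-10% of a
# real face image.)
mask = np.zeros((SIDE, SIDE))
mask[2:4, :] = 1.0

face = rng.uniform(0.0, 1.0, size=(SIDE, SIDE))

# Dodging: take one signed gradient step that LOWERS the score, applied
# only inside the mask. For a linear scorer the gradient with respect to
# the image is just `weights`, so a single FGSM-style update suffices.
step = 0.5
adversarial = np.clip(face - step * np.sign(weights) * mask, 0.0, 1.0)

print(f"clean score: {score(face):+.2f}")
print(f"masked-attack score: {score(adversarial):+.2f}")  # lower
```

Every pixel outside the mask is untouched; only the “glasses” change, yet the score drops. Against a real network the gradient comes from backpropagation rather than a weight matrix, and the optimization runs for many iterations, but the masked-update idea carries over.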
Figuring out exactly how to spoof the system is hard, but once you know how to perturb the image enough to throw off the scanner, the implementation is surprisingly simple. You don’t even need a 3D printer, a high-quality realistic costume mask, or any of that other cool stuff from our earlier articles. You just need some hipster-nerd spectacles and an inkjet printer. (The researchers used an Epson XP-830, so like, upscale…)
Because only a relatively small amount of the face needs to be altered, screwing around with the area covered by moderately big eyeglasses will do the trick. The interesting thing is that not only can you stop the system from recognizing you; you can also use this technique to spoof the imagery so the system thinks you’re someone else.
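Impersonation is the targeted version of the same trick: instead of just pushing your own score down, you push some target’s score up relative to yours. Here’s a toy sketch under the same made-up assumptions as before (a linear “recognizer” with two enrolled identities, hypothetical names like `template_target` and `target_margin` — real systems are deep networks):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy: two enrolled identities, each a linear template over
# an 8x8 image; the "recognizer" reports whichever template scores higher.
SIDE = 8
template_me = rng.normal(size=(SIDE, SIDE))
template_target = rng.normal(size=(SIDE, SIDE))

def target_margin(image):
    """Target's score minus mine; higher means 'looks more like the target'."""
    return float(np.sum((template_target - template_me) * image))

mask = np.zeros((SIDE, SIDE))
mask[2:4, :] = 1.0  # the eyeglass-frame region

face = rng.uniform(0.0, 1.0, size=(SIDE, SIDE))

# Impersonation: repeatedly nudge the masked pixels in the direction that
# raises the target's score relative to mine, leaving the rest untouched.
grad = template_target - template_me
adv = face.copy()
for _ in range(20):
    adv = np.clip(adv + 0.1 * np.sign(grad) * mask, 0.0, 1.0)

print(f"clean margin: {target_margin(face):+.2f}")
print(f"adversarial margin: {target_margin(adv):+.2f}")  # higher
```

Same glasses-shaped region, different objective: the optimization now climbs toward “this is the target” rather than away from “this is me.”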
The researchers were able to run these spoofs in both a white-box environment AND a black-box environment, meaning they also fooled software they didn’t have full access to. If you check out the article, you’ll see their success rates were quite good. As the researchers wrote, “As our reliance on technology increases, we sometimes forget that it can fail. In some cases, failures may be devastating and risk lives.” That’s pretty much what we’re talking about here at MakingCrimes.com. Somebody gets paid every time we roll out some new and improved surveillance or security technology that’s supposedly going to improve security in ways humans can’t, but we’d be wise not to rely on it too heavily. Stuff like this illustrates why we can’t let the human element get soft while multimillion-dollar machines and software packages run all our security for us. When we do, we risk having huge investments and amazing technologies invalidated by Urkel glasses.
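The black-box part deserves a sketch of its own: with no access to the model’s internals, an attacker can still estimate a gradient just by submitting images and reading back scores. The snippet below shows one classic query-only technique (finite differences) against the same hypothetical linear stand-in — this is an illustration of the black-box idea generally, not necessarily the exact method the authors used:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical black-box setting: the attacker can only submit an image
# and read back a score -- no weights, no gradients.
N = 64
_hidden_weights = rng.normal(size=N)  # the attacker never sees these

def query(image):
    """The attacker's only interface: one image in, one score out."""
    return float(_hidden_weights @ image)

glasses = np.arange(16, 32)  # flattened indices of the eyeglass region

face = rng.uniform(0.0, 1.0, size=N)
base = query(face)

# Estimate the gradient at just the masked pixels by finite differences,
# spending one query per pixel...
eps = 1e-4
grad_est = np.zeros(N)
for i in glasses:
    probe = face.copy()
    probe[i] += eps
    grad_est[i] = (query(probe) - base) / eps

# ...then take one signed step to dodge (lower the score), glasses-only.
adv = face.copy()
adv[glasses] = np.clip(face[glasses] - 0.5 * np.sign(grad_est[glasses]), 0.0, 1.0)

print(f"clean score: {base:+.2f}")
print(f"black-box attack score: {query(adv):+.2f}")  # lower
```

Restricting the estimate to the glasses region keeps the query count small — one probe per masked pixel here — which is part of what makes attacks on systems you can’t open up feasible at all.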
Check out the original article:
Sharif, Mahmood, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter. “Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition.” In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 1528-1540. ACM, 2016, https://www.cs.cmu.edu/~sbhagava/papers/face-rec-ccs16.pdf