The Urkel Hack: Facial Recognition and the Perturbed 10%

By Mark

Well folks, we’ve already talked a bit about some ways that various artisanal and digitally-printed masks and other costume stuff can fool facial recognition. (And even eyewitness recognition, if it’s done well enough…). Now, some intrepid tech researchers have demonstrated that you might not even need to get so fancy to beat some common state-of-the-art facial recognition software.

A small team of super-nerds from Carnegie Mellon and UNC Chapel Hill just “demonstrated techniques for generating accessories in the form of eyeglass frames that, when printed and worn, can effectively fool state-of-the-art face-recognition systems….” In their article, they “…showed that our eyeglass frames enable subjects to both dodge recognition and to impersonate others.”

This is all possible because facial recognition software is basically geared toward taking incredibly precise measurements of facial landmarks. It’s not easy to physically change those landmarks, and that’s why biometrics generally works: you’re probably not changing your fingerprints or facial bone structure anytime soon. But the software is only looking at a very specific set of data points, and if you know how to alter enough of the critical ones, you may not fool a human being, but you will fool the software.

With some clever analysis to figure out exactly what the software is looking at and how it authenticates against those data points, the researchers figured out that you only really need to obscure, alter, or “perturb” about 6%-10% of the facial image to spoof the software in most cases.
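To make that concrete, here’s a toy sketch of the idea (mine, not the researchers’ code): a recognizer boils a face down to a feature vector and authenticates it against an enrolled template with a similarity threshold. A random linear map stands in for the real deep network, and every name and number below is invented for illustration, but the punchline carries over: nudge a small, well-chosen fraction of the pixels and the decision flips, even though most of the image is untouched.

```python
# Toy model of a face recognizer (not the paper's code): map an image to a
# feature vector, then authenticate by comparing it to an enrolled template.
import numpy as np

rng = np.random.default_rng(0)

D_PIXELS, D_FEATURES = 1024, 128   # a 32x32 "face" and a 128-d feature vector
W = rng.standard_normal((D_FEATURES, D_PIXELS)) / np.sqrt(D_PIXELS)  # stand-in for a trained network

def embed(face):
    """Reduce an image to a unit-length feature vector (the 'data points')."""
    v = W @ face
    return v / np.linalg.norm(v)

def is_same_person(face, template, threshold=0.8):
    """Authenticate by cosine similarity against the enrolled template."""
    return float(embed(face) @ template) >= threshold

enrolled_face = rng.standard_normal(D_PIXELS)
template = embed(enrolled_face)
print(is_same_person(enrolled_face, template))    # True: the data points match

# Perturb only the ~10% of pixels the score is most sensitive to, pushing each
# one in the direction that hurts the similarity most.
grad = W.T @ template                             # sensitivity of the score to each pixel
worst = np.argsort(-np.abs(grad))[: D_PIXELS // 10]
perturbed = enrolled_face.copy()
perturbed[worst] -= 5.0 * np.sign(grad[worst])
print(is_same_person(perturbed, template))        # typically False: same face, failed match
```

The real attack targets a deep network instead of a random matrix, and it has to survive printing and a camera, but the lever is the same: the software only trusts a narrow set of measurements.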

[Image: ladyface]
For example, the image on the left is Eva Longoria. The image on the right is NOT Eva Longoria according to facial recognition software, because the data points have been “perturbed” just enough to fool recognition. If you ask me, they both look fine (Image and attribution in Accessorize to a Crime)

Figuring out exactly how to spoof the system is hard, but once you know how to perturb the image enough to throw off the scanner, the implementation is surprisingly simple. You don’t even need a 3D printer, or a high-quality realistic costume mask, or any of that other cool stuff from our earlier articles. You just need some hipster-nerd spectacles and an inkjet printer. (The researchers used an Epson XP-830, so like, upscale…)

[Image: glasses]

Basically, just print out the right “perturbed” imagery and stick it on existing eyeglass frames, and voila! (Image and attribution in Accessorize to a Crime)

Because only a relatively small amount of the face needs to be altered, screwing around with the area covered by moderately big eyeglasses will do the trick. The interesting thing is not only that you can stop the system from recognizing you, but that you can also use this technique to spoof the imagery so the system thinks you’re someone else.
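If you want the flavor of how the “perturbed imagery” gets designed, here’s a rough, hedged sketch of the recipe. This is not the authors’ code: a tiny randomly initialized network stands in for a real face recognizer, the “glasses” are just a rectangular band over the eyes, and the numbers are arbitrary. The paper does this against actual recognition systems and renders the result onto printable frames, but the core loop — only let the pixels under the glasses change, then optimize them toward the identity you want the system to see — looks roughly like this.

```python
# Hedged sketch of glasses-region adversarial optimization, NOT the paper's code.
# A small random CNN stands in for a real recognizer; the mask and settings are invented.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

N_IDENTITIES = 10
model = nn.Sequential(                            # stand-in recognizer: image -> identity scores
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, N_IDENTITIES),
)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)

face = torch.rand(1, 3, 64, 64)                   # the attacker's own face (toy data)
target_id = 7                                     # the identity we want the system to report

# "Glasses" mask: only pixels in this band may change -- roughly the 6%-10% of
# the image the attack needs.
mask = torch.zeros_like(face)
mask[:, :, 20:28, 8:56] = 1.0

delta = torch.zeros_like(face, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.05)

for step in range(200):
    adv = (face + delta * mask).clamp(0, 1)       # perturbation confined to the glasses region
    loss = F.cross_entropy(model(adv), torch.tensor([target_id]))  # impersonation objective
    # (For a "dodging" attack, you would instead push the true identity's score down.)
    opt.zero_grad()
    loss.backward()
    opt.step()

adv = (face + delta * mask).clamp(0, 1)
print("system now sees identity:", model(adv).argmax(dim=1).item())  # ideally target_id
```

The printed version mostly adds constraints — the paper also requires the pattern to be smooth and printable in real ink — but the masking trick is why a pair of chunky frames is enough real estate.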

[Image: reese]

According to the researchers, if Reese Witherspoon wore exactly these glasses, she’d show up as Russell Crowe. However, I’m assuming the behavioral profiling filters would alert when they realize “Russell Crowe” isn’t beating the shit out of somebody (Image and attributions in Accessorize to a Crime)


[Image: manyfaces]

The researchers were able to make themselves appear invisible to the software, impersonate celebrities, and impersonate each other. In the leftmost column, the researchers’ glasses spoof the system to make the software think it’s not looking at a face at all (aka an invisibility attack). In the three columns on the right, the eyeglasses make the top person appear as the bottom person. So, it’s really a question of whether you’d rather have the system see you as Milla Jovovich, Carson Daly, or Sonia from Accounts Payable (Images and attributions in Accessorize to a Crime)

The researchers were able to run these spoofs in a white-box environment AND a black-box environment, meaning they also fooled software they didn’t have full access to. If you check out the article, you’ll see their success rates were quite good.

As the researchers wrote in their article, “As our reliance on technology increases, we sometimes forget that it can fail. In some cases, failures may be devastating and risk lives.” That’s pretty much what we’re talking about here at MakingCrimes.com. Somebody gets paid every time we roll out some new and improved surveillance or security technology that’s supposedly going to improve security in ways humans can’t. But we’d be wise not to rely too much on it. Stuff like this illustrates why we can’t let the human element get soft while multimillion-dollar machines and software packages run all our security for us. When we do, we risk having huge investments and amazing technologies invalidated by Urkel glasses.

Check out the original article:

Sharif, Mahmood, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter. “Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition.” In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 1528-1540. ACM, 2016, https://www.cs.cmu.edu/~sbhagava/papers/face-rec-ccs16.pdf
