As facial recognition software has spread into the public sphere, some people have relied on masks to protect themselves, but new research from Northeastern University suggests a graphic t-shirt could also do the job.
Described as an ‘adversarial t-shirt,’ the garment works by scrambling the facial recognition software’s ability to single out individuals in the area a camera is recording.
The team behind the t-shirt has so far tested it against two common neural networks used in facial recognition software and found it successfully prevents identification a little over half the time.
‘Deep neural networks are very powerful, but also can be vulnerable to adversarial attacks,’ Northeastern’s Shelley Lin told the university news blog.
‘When you wear the T-shirt, it’s highly possible that the deep neural network won’t identify you in an image.’
Before facial recognition software can begin analyzing a person’s face, it first needs to identify an individual person within the frame.
Most programs do this by drawing a bounding box around each person or object to separate it from the general background.
The t-shirt’s abstract, pixelated pattern was specifically designed to prevent the software from drawing a bounding box around the wearer. Without that box, the AI can’t move on to the next step of analyzing the person’s face.
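The two-stage pipeline described above can be sketched in a few lines of Python. Everything here is illustrative: the detector, the confidence values, and the threshold are hypothetical stand-ins, not details of the Northeastern system.

```python
# Minimal sketch of a detect-then-analyze pipeline. A face analyzer only
# ever sees regions the person detector boxed with enough confidence, so
# suppressing the detector also blinds the face-analysis stage.

CONF_THRESHOLD = 0.5  # illustrative cutoff; detections below it are discarded

def detect_people(frame):
    """Stand-in for an object detector such as YOLOv2 or Faster R-CNN.
    Returns candidate bounding boxes as (x, y, w, h, confidence) tuples."""
    return frame["detections"]

def analyze_faces(frame):
    """Face analysis runs only inside boxes the detector keeps."""
    boxes = [d for d in detect_people(frame) if d[4] >= CONF_THRESHOLD]
    # If an adversarial pattern drives the detector's confidence below the
    # threshold, no box is drawn and this stage never sees the wearer.
    return [f"face-in-box({x},{y})" for (x, y, w, h, c) in boxes]

# One frame with an ordinary pedestrian (high confidence) and a wearer of
# the adversarial shirt (confidence pushed below the threshold).
frame = {"detections": [(10, 20, 50, 120, 0.91), (200, 30, 55, 125, 0.12)]}
print(analyze_faces(frame))  # only the first person reaches face analysis
```

Only the high-confidence detection survives the filter, so the second person is never passed to face analysis at all.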
In preliminary testing, the team found the t-shirt had a success rate of between 11 and 63 percent with two specific pieces of object recognition software, YOLOv2 and Faster R-CNN.
One of the initial challenges was accounting for the fact that t-shirts bend and deform as a person moves, meaning the disruptive effects might not be consistent in every frame of video.
To account for this, the team first recorded a test subject walking in front of a camera while wearing a t-shirt printed with a standard checkerboard pattern.
The team then analyzed how each square of the checkerboard shifted or distorted as the subject moved and recalibrated its graphic accordingly.
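The idea behind the calibration step can be illustrated with a toy sketch: compare where each checkerboard corner appears in the recorded frame to where it sits on the flat shirt, then shift the adversarial pattern's control points by those measured offsets. The function names and coordinates below are made up for illustration, and the real method models fabric deformation more richly than this simple per-corner offset map.

```python
# Toy sketch of deformation calibration via checkerboard corners.
# Assumed setup: 'flat' holds corner positions on the undeformed shirt,
# 'observed' holds where those same corners land in a video frame.

def corner_offsets(flat_corners, observed_corners):
    """Displacement of each corner: observed position minus flat position."""
    return [(ox - fx, oy - fy)
            for (fx, fy), (ox, oy) in zip(flat_corners, observed_corners)]

def warp_points(points, offsets):
    """Shift each pattern control point by its corner's measured offset."""
    return [(px + dx, py + dy)
            for (px, py), (dx, dy) in zip(points, offsets)]

flat = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]       # flat shirt
observed = [(0.0, 0.0), (1.5, 0.25), (0.0, 0.75), (1.25, 1.5)]  # on video
offsets = corner_offsets(flat, observed)
print(warp_points(flat, offsets))  # pattern points warped to match the cloth
```

Applying the offsets to the pattern's own control points pre-distorts the graphic so that, once the fabric bends, it appears to the camera roughly as designed.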
‘We still have difficulties in making it work in the real world because there’s that strong assumption that we know everything about the detection algorithm,’ Lin told Wired.
‘It’s not perfect, so there may be problems here or there.’
The team says it hopes the research will ultimately be useful to facial recognition software companies trying to close technical loopholes in their products.