Face Recognition on Facebook: What Is It? Why Bother?
The face recognition tech Facebook has been rolling out since the early 2010s is getting a lot of media attention due to yet another batch of privacy issues surrounding the company's practices. While the newly offered functions are by all means nice and useful, privacy advocates, regulators, and numerous casual users have expressed significant concerns about the power Facebook could harness with a database of profiles on over two billion users, including their browsing habits, social connections, commutes, and, from now on, their faces.
While announcing the new face recognition features, Facebook made it clear that the data representing users' facial features will be collected and used only with their consent, on an opt-in basis. As stated in the Facebook Help Center: “This technology is currently only available in certain locations, and will only appear in your profile if you are at least 18 years old.” The new features aren't available in Canada or Europe, where the company doesn't “currently offer face recognition technology,” probably due to the stricter data protection laws in place there.
The company claims that it will use the data to improve privacy protection, aid visually impaired users, and tackle identity fraud on the platform.
The actual new features, as mentioned in the official blog post, are as follows:
- Facebook will notify users when they are in an uploaded photo and are included in the audience specified for that particular photo (everybody, friends only, etc.), even if they aren’t tagged yet. As the company puts it: “You’re in control of your image on Facebook and can make choices such as whether to tag yourself, leave yourself untagged, or reach out to the person who posted the photo if you have concerns about it.”
- Facebook will also alert users in case their faces appear on someone else’s profile picture. The company will “begin using face recognition technology to let people know when someone else uploads a photo of them as their profile picture. We’re doing this to prevent people from impersonating others on Facebook.”
- The new features will let visually impaired users know who appears on the photos in their timeline: “Two years ago, we launched an automatic alt-text tool, which describes photos to people with vision loss. Now, with face recognition, people who use screen readers will know who appears in photos in their News Feed even if people aren’t tagged.”
But, of course, these features will be available only to people who have allowed Facebook to collect, store, and use the data.
A generalized face recognition process consists of two stages: face detection, when an algorithm finds the part of an image that represents a face, and face recognition proper, which involves analyzing that particular part and, say, comparing the collected data to previously collected samples.
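The two-stage split can be sketched as a pair of functions: a detector that returns candidate face regions, and a recognizer that compares a cropped region against stored samples. Everything below is a toy illustration under invented assumptions (the "detector" merely thresholds pixel brightness, and the "recognizer" uses raw pixel distance), not any production algorithm:

```python
import numpy as np

def detect_faces(image, window=4, threshold=0.5):
    """Toy 'detection': slide a window over the image and flag
    regions whose mean intensity exceeds a threshold.
    Real detectors use learned features, not raw brightness."""
    h, w = image.shape
    regions = []
    for y in range(0, h - window + 1, window):
        for x in range(0, w - window + 1, window):
            patch = image[y:y + window, x:x + window]
            if patch.mean() > threshold:
                regions.append((y, x, window, window))
    return regions

def recognize_face(face_patch, gallery):
    """Toy 'recognition': compare a flattened patch against stored
    samples by Euclidean distance and return the closest identity."""
    vec = face_patch.flatten()
    best_id, best_dist = None, float("inf")
    for identity, sample in gallery.items():
        dist = np.linalg.norm(vec - sample.flatten())
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id

# Usage: a synthetic 8x8 "image" with one bright (face-like) region.
rng = np.random.default_rng(0)
image = rng.uniform(0.0, 0.3, size=(8, 8))
image[0:4, 4:8] = 0.9                        # the "face"
regions = detect_faces(image)
y, x, h, w = regions[0]
gallery = {"alice": image[0:4, 4:8], "bob": rng.uniform(0, 0.3, (4, 4))}
print(recognize_face(image[y:y + h, x:x + w], gallery))  # alice
```

The point of the split is that the two stages are independent: a system can swap in a better detector or a better recognizer without touching the other half.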
Traditional face recognition algorithms follow either a geometric (feature-based) or a photometric (appearance-based) approach.
- Feature-based face recognition algorithms rely on scanning an image of a face and measuring particular distinguishing features and biometrics, such as the shape of the ears, the distance between the pupils, and so on.
- Appearance-based algorithms, in broad terms, analyze a face as a single object and distill its picture into sets of values, comparing the extracted values with a template that represents “the most mathematically average face” to eliminate possible variances. One of the most notable appearance-based methods is the Eigenface method introduced by Matthew Turk and Alex Pentland in 1991.
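The Eigenface idea can be condensed into a few lines of NumPy: stack training faces as vectors, subtract the mean face (the "mathematically average face" mentioned above), take the principal components of the result (the eigenfaces), and compare images by their coordinates in that reduced space. The tiny random "faces" below are placeholders for real image data:

```python
import numpy as np

rng = np.random.default_rng(42)
# Pretend each "face" is a flattened 6x6 grayscale image (36 pixels).
faces = rng.uniform(size=(10, 36))          # 10 training faces

mean_face = faces.mean(axis=0)              # the "average face"
centered = faces - mean_face

# Principal components of the centered training set are the eigenfaces.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:5]                         # keep the top 5 components

def project(face):
    """Represent a face by its coordinates in eigenface space."""
    return eigenfaces @ (face - mean_face)

# A slightly noisy copy of training face 3 should land nearest face 3.
probe = faces[3] + rng.normal(scale=0.05, size=36)
coords = np.array([project(f) for f in faces])
match = int(np.argmin(np.linalg.norm(coords - project(probe), axis=1)))
print(match)
```

Reducing each face to a handful of eigenface coefficients is what makes the comparison robust to small pixel-level variances.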
The hardest task for most algorithms is recognizing a face and its particular features in the first place. Since a human face is three-dimensional, its appearance changes under different lighting and from different camera angles, which poses a problem for recognition algorithms.
Early solutions in face recognition, namely the work of automated face recognition pioneers Woody Bledsoe, Helen Chan Wolf, and Charles Bisson in the mid-sixties, involved a human operator who would manually extract the coordinates of features such as corners of the eyes, centers of pupils, etc. The coordinates would then be converted into 20 parameters, normalized to reduce the bias related to head position and compared to the respective parameters extracted from sample images.
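The normalize-and-compare step from those early systems can be illustrated with made-up landmarks: dividing every measurement by a reference distance (here, the distance between the pupils) removes scale, so the remaining parameters can be compared across photographs taken at different distances. The landmark names and coordinate values below are invented for illustration, and real normalization also corrected for head rotation and tilt:

```python
import math

def normalized_params(landmarks):
    """Scale-normalize facial measurements by interpupillary distance,
    a crude stand-in for the manual normalization of the 1960s systems."""
    ipd = math.dist(landmarks["left_pupil"], landmarks["right_pupil"])
    mouth = math.dist(landmarks["left_pupil"], landmarks["mouth_center"])
    nose = math.dist(landmarks["left_pupil"], landmarks["nose_tip"])
    return (mouth / ipd, nose / ipd)        # scale-invariant ratios

# The same face photographed at 2x zoom yields the same parameters.
face = {"left_pupil": (30, 40), "right_pupil": (70, 40),
        "nose_tip": (50, 60), "mouth_center": (50, 80)}
zoomed = {k: (2 * x, 2 * y) for k, (x, y) in face.items()}
p1, p2 = normalized_params(face), normalized_params(zoomed)
print(all(math.isclose(a, b) for a, b in zip(p1, p2)))  # True
```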
Modern face recognition systems typically employ machine learning, meaning that humans “teach” algorithms to pick out facial features using a series of training images. The most popular algorithms for the task are neural networks, namely convolutional neural networks (CNNs). To some extent, these algorithms mimic the processes that allow the human brain to distinguish and recognize faces. Essentially, a neural network is an algorithm that can be taught to perform certain tasks while remaining somewhat of a “black box” even to its own creators.
In other words, a “teacher” willing to train an algorithm to distinguish cats from dogs shows it several thousand pictures of cats and several thousand pictures of dogs, telling it which is which. The neural network, in turn, “remembers” the patterns and features associated with both the “cat” and “dog” labels. When put to an actual test, such a network analyzes the given image, finds familiar elements in it, and produces an approximate answer, such as “the probability that it's a puppy is 80 percent.” The “teacher” in this case is unaware of the actual processes occurring inside the algorithm's “black box” and knows only the input and output data.
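The training loop described above can be miniaturized into a logistic-regression sketch: the model sees labeled examples, nudges its weights by gradient descent, and afterwards outputs a probability for unseen inputs. Real networks stack many layers of learned features; this is the single-neuron version on made-up two-feature data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy 2-feature examples: label 1 ("dog") clusters high, 0 ("cat") low.
x = np.vstack([rng.normal(2.0, 0.5, (50, 2)),    # dogs
               rng.normal(-2.0, 0.5, (50, 2))])  # cats
y = np.array([1.0] * 50 + [0.0] * 50)

w, b = np.zeros(2), 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# "Teaching": repeatedly nudge the weights toward correct answers.
for _ in range(500):
    p = sigmoid(x @ w + b)            # current predictions
    grad_w = x.T @ (p - y) / len(y)   # gradient of cross-entropy loss
    grad_b = (p - y).mean()
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

# The trained model outputs a probability, not the rule it learned.
prob = sigmoid(np.array([1.8, 2.1]) @ w + b)
print(f"probability it's a dog: {prob:.2f}")
```

Note that after training, `w` and `b` are just numbers; inspecting them tells the "teacher" little about *why* a given input scores high, which is the "black box" problem scaled down.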
Facebook uses a similar system, DeepFace, based on a convolutional neural network. All the details disclosed about the system are available in a research paper released in 2014. Facebook claims that as soon as a user with the “tag suggestions” or “face recognition” option enabled gets tagged in a photo, its machine learning algorithms run through the picture and create a so-called “template”: a unique string of numbers associated with that particular user's face.
According to the paper, DeepFace uses its “imagination” and a 3D model of an “average” human face as a reference to correct the alignment of the face in the image. The algorithm then converts the image of the properly aligned face, which, like any other digital image, is just a set of numerical values for each pixel, into one long string of numbers: the template for that particular face.
For each newly uploaded picture or video, Facebook runs similar algorithms and compares the machine-readable face templates in it to the templates in its database, suggesting tags when there is a match. The company also claims that templates are created only for users who have enabled the relevant features, and are deleted once those users turn them off.
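Under stated assumptions (templates as fixed-length float vectors, cosine similarity with a tuned cut-off; the paper does not publish Facebook's actual metric or threshold), the matching step can be sketched as:

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two face templates; 1.0 means identical direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def suggest_tags(photo_templates, user_templates, threshold=0.8):
    """Compare every face template found in a photo against enrolled
    users' templates and suggest the user ID of each match."""
    suggestions = []
    for face in photo_templates:
        for user_id, template in user_templates.items():
            if cosine_similarity(face, template) >= threshold:
                suggestions.append(user_id)
                break
    return suggestions

# Usage with made-up 4-dimensional templates (real ones are far longer).
users = {"user_1": np.array([0.9, 0.1, 0.2, 0.4]),
         "user_2": np.array([0.1, 0.8, 0.7, 0.0])}
photo = [np.array([0.88, 0.12, 0.22, 0.41]),   # close to user_1
         np.array([0.5, 0.5, 0.1, 0.9])]       # matches nobody well
print(suggest_tags(photo, users))  # ['user_1']
```

Deleting a user's template, as Facebook promises, would remove the corresponding entry from the enrolled set, so their face could no longer produce a match.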
The introduction of face recognition effectively adds to the growing pool of concerns about the amount and scope of personal information harvested by Facebook and other tech giants. A leak of user information that includes biometric data would be the greatest scandal Facebook has ever experienced and would put the Cambridge Analytica one to shame. In that case, things could potentially go way beyond targeting people with unsolicited advertising and put a lid on privacy as we know it. As for the suggestion to use biometric data for security purposes, Adam Harvey, a counter-surveillance expert, told Gizmodo:
“When any information is co-opted for security purposes it becomes less secure to share. For example, sharing your mother’s maiden name online would not be a good idea. Likewise, Facebook’s proposed facial recognition product would make sharing your face online a security issue, even more so than it already is.”
In addition, users who didn't bother to check their Facebook privacy settings were all automatically signed up for the facial recognition features. Facebook has confirmed that facial recognition was initially turned on by default for all users in serviceable locations, with their data processed and stockpiled until they turned it off, which provoked quite a negative public response.
Accused of collecting and storing users’ biometric data without their consent, Facebook is now being sued for the alleged violation of Illinois’ Biometric Information Privacy Act.
As Bloomberg reports, the Illinois residents involved in the case claim that BIPA rules “give them a ‘property interest’ in the algorithms that constitute their digital identities,” providing grounds to accuse Facebook of real damage. If the consumers win the case, apart from fines of up to $5,000 for each image it scanned, the social network could face new restrictions on its use of biometric data in the US.
Facebook argued that attempts to enforce the Illinois law conflict with its user agreement, which requires disputes to be resolved under the laws of California, but U.S. District Judge James Donato rejected that argument and let the case proceed. The company also claimed that its users “hadn't suffered a concrete injury such as physical harm, loss of money or property; or a denial of their right to free speech or religion,” referring to the “concrete injury” standard for privacy suits set by the U.S. Supreme Court in Spokeo, Inc. v. Robins, 578 U.S. (2016).
“When an online service simply disregards the Illinois procedures, as Facebook is alleged to have done, the right of the individual to maintain her biometric privacy vanishes into thin air,” Judge Donato wrote. “The precise harm the Illinois legislature sought to prevent is then realized.”
Judge Donato concluded that by allegedly acting against the consent requirements, Facebook may have violated “the very privacy rights the Illinois legislature sought to protect.” Comparing the current situation to the aforementioned Spokeo case, which involved mistakes in zip codes, Judge Donato added that this time the “injury is worlds away from the trivial harm of a mishandled zip code or credit card receipt.”
With the EU's upcoming General Data Protection Regulation and the emerging details of the four-year-old controversy involving Facebook and Cambridge Analytica, the question of privacy and of sharing one's personal data with tech giants is getting a lot of attention.
Enthusiasts and activists seek to overthrow the corporations' reign over people's privacy and call for clear regulation. Some of them pick a different approach and work on individual countermeasures, such as anti-recognition wearables and makeup to hide from “Big Brother.” The corporations themselves, Facebook among them, revise their policies and release soothing manifestos, not without some regulatory stimulus.
Among the many vague forecasts about the technocratic future of humanity, one thing stands clear: those willing to protect their privacy from Facebook, Google, or any other tech giant whose products are so deeply intertwined with people's lives should be ever more vigilant while uploading their data to the web, reading yet another EULA, or even walking outside their own homes.