Columns

ECLECTIC RANT: The Privacy Implications of Facial Recognition Technology

By Ralph E. Stone
Friday March 09, 2012 - 08:19:00 AM

By using facial recognition software, a law enforcement agency can use its video surveillance system to pull an image of an individual, run that image through a database of stored images, and identify the person: the software picks a face out of a crowd, extracts it from the rest of the scene, and compares it against the database for a match. 

According to "How Facial Recognition Systems Work" by Kevin Bonsor and Ryan Johnson, the technology rests, in a nutshell, on the ability to detect a face and then measure its various features. Every face has numerous distinguishable features, such as the distance between the eyes, the width of the nose, the depth of the eye sockets, the shape of the cheekbones, and the length of the jaw line. These measurements are converted into a numerical code, called a two-dimensional faceprint, that represents the face in the database. A newly emerging trend in facial recognition technology uses a three-dimensional model instead. 
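Bonsor and Johnson's description can be sketched in a few lines of code: a faceprint reduces to a list of numerical measurements, and recognition becomes a nearest-neighbor search over the stored database. The names, measurements, and threshold below are invented purely for illustration; real systems use far richer features and statistical models.

```python
import math

# Toy faceprint database: each entry is a vector of facial measurements
# (e.g., eye distance, nose width, eye-socket depth, cheekbone shape,
# jaw-line length). All names and numbers here are made up.
database = {
    "alice": [6.2, 3.1, 2.4, 5.0, 11.3],
    "bob":   [5.8, 3.6, 2.1, 4.7, 12.0],
}

def euclidean(a, b):
    """Distance between two faceprints; smaller means more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(probe, db, threshold=1.0):
    """Return the closest identity, or None if nothing is close enough."""
    name, dist = min(((n, euclidean(probe, fp)) for n, fp in db.items()),
                     key=lambda item: item[1])
    return name if dist <= threshold else None

# Measurements extracted from a new surveillance image (invented values).
probe = [6.1, 3.0, 2.5, 5.1, 11.2]
print(best_match(probe, database))  # -> alice
```

The privacy question the column raises is visible even in this toy: once a faceprint is in the database, matching a passerby to an identity is a single cheap lookup.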

Law enforcement agencies have been using this technology for years. As it has become less expensive, banks and airports have adopted it as well. 

Facial recognition software is freely available online, and the technology is fast making its way into mobile phones. It is increasingly used in a variety of ways such as photo tagging on social networking sites, targeting advertisements in stores or public places, and for security and authentication. 

"Photo tagging" is when photos are uploaded, tags or words are added that allow searchers to find uploaded photos. Once a photo has been tagged, only the uploader can remove the tag. 

Someday, you may walk past a sandwich shop and a voice will call your name and invite you in for a sandwich and a cold drink. 

Unless facial recognition software is used with the permission, or at least the knowledge, of the individual, its use raises privacy concerns. 

In U.S. v. Jones, decided January 23, 2012, the Supreme Court held that the use of a GPS device to monitor the whereabouts of the defendant's car was a "search" for purposes of the Fourth Amendment and required a warrant. However, Justice Sotomayor, in her concurring opinion, called into question the longstanding doctrine that individuals have no expectation of privacy over information voluntarily disclosed to third parties. She wrote, "This approach is ill suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks. People disclose the phone numbers that they dial or text to their cellular providers; the URLs that they visit and the e-mail addresses with which they correspond to their Internet service providers; and the books, groceries, and medications they purchase to online retailers." This seems to be a call to revisit what it means to have privacy in public spaces. Perhaps, in the case of facial detection and recognition, people should have rights over who can take and use their faceprints. 

Facebook users, for example, upload about 200 million photos every day. Facebook uses facial recognition software to match images to identities: it recognizes faces in photos and suggests who should be tagged in them. 

Recently, in a case brought by the consumer organization Verbraucherzentrale Bundesverband against Facebook, a German regional court judge ruled that data uploaded to Facebook belongs to the user and that Facebook may use it only with the user's consent. The court further held that Facebook users should be better informed about what happens to their personal data. 

Previously, in November 2011, Facebook settled with the Federal Trade Commission on charges that the social network failed to keep consumer information private. The settlement requires Facebook to warn users about privacy changes and to get their permission before sharing their information more broadly. Facebook has also agreed to 20 years of privacy audits. 

In developing a policy on facial recognition that addresses privacy and security concerns, the U.S. might look to the European Union, which has for many years had a formalized system of privacy legislation, generally regarded as more rigorous than that found in many other parts of the world. The US-EU Safe Harbor is a streamlined process for U.S. companies to comply with the EU Directive on the protection of personal data. It is intended for organizations within the EU or U.S. that store customer data, and its principles are designed to prevent accidental information disclosure or loss. U.S. companies can opt into the program as long as they adhere to the seven principles outlined in the Directive. The seven principles are:  

* Notice - Individuals must be informed that their data is being collected and about how it will be used. 

* Choice - Individuals must have the ability to opt out of the collection and forward transfer of the data to third parties. 

* Onward Transfer - Transfers of data to third parties may only occur to other organizations that follow adequate data protection principles. 

* Security - Reasonable efforts must be made to prevent loss of collected information. 

* Data Integrity - Data must be relevant and reliable for the purpose it was collected for. 

* Access - Individuals must be able to access information held about them, and correct or delete it if it is inaccurate. 

* Enforcement - There must be effective means of enforcing these rules. 

Under the Federal Trade Commission Act, an organization's failure to abide by its commitment to implement the Safe Harbor Privacy Principles may be considered deceptive and actionable by the FTC. 

On December 8, 2011, the FTC held a public workshop, "Face Facts: A Forum on Facial Recognition Technology," which "focused on the current and future commercial applications of facial detection and recognition technologies, and explored an array of current uses of these technologies, possible future uses and benefits, and potential privacy and security concerns." An archival webcast of the proceedings is available on the FTC's website. The public was invited to submit comments by January 31, 2012. 

Some of the questions raised at the workshop and commented upon included: 

* What are the current and future commercial uses of these technologies? 

* How can consumers benefit from the use of these technologies? 

* What are the privacy and security concerns surrounding the adoption of these technologies, and how do they vary depending on how the technologies are implemented? 

* Are there special considerations that should be given for the use of these technologies on or by populations that may be particularly vulnerable, such as children? 

* What are best practices for providing consumers with notice and choice regarding the use of these technologies? 

* Are there situations where notice and choice are not necessary? By contrast, are there contexts or places where these technologies should not be deployed, even with notice and choice? 

* Is notice and choice the best framework for dealing with the privacy concerns surrounding these technologies, or would other solutions be a better fit? If so, what are they? 

* What are best practices for developing and deploying these technologies in a way that protects consumer privacy? 

Clearly, facial recognition technology raises privacy concerns that call for a reexamination of the doctrine that individuals have no expectation of privacy over information voluntarily disclosed to third parties.