Government Can Use Advanced Facial Recognition Software and Photos on Social Websites to Identify You
This is how the Golden Shield will work: Chinese citizens will be watched around the clock through networked CCTV cameras and remote monitoring of computers. They will be listened to on their phone calls, monitored by digital voice-recognition technologies. Their Internet access will be aggressively limited through the country's notorious system of online controls known as the "Great Firewall." Their movements will be tracked through national ID cards with scannable computer chips and photos that are instantly uploaded to police databases and linked to their holder's personal data.

This is the most important element of all: linking all these tools together in a massive, searchable database of names, photos, residency information, work history, and biometric data. When Golden Shield is finished, there will be a photo in those databases for every person in China: 1.3 billion faces.

Cloud-Powered Facial Recognition is Terrifying
By harnessing the vast wealth of publicly available cloud-based data, researchers are taking facial recognition technology to unprecedented levels

September 29, 2011
The Atlantic - ... The cloud never forgets ("cloud" here basically means the Internet).
That's the logic behind a new application developed by Carnegie Mellon University's Heinz College that's designed to take a photograph of a total stranger and, using the facial recognition software PittPatt, track down their real identity in a matter of minutes.
Facial recognition isn't that new -- the rudimentary technology has been around since the late 1960s -- but this system is faster, more efficient, and more thorough than any other system ever used. Why? Because it's powered by the cloud. [...]
With Carnegie Mellon's cloud-centric new mobile app, the process of matching a casual snapshot with a person's online identity takes less than a minute. Tools like PittPatt and other cloud-based facial recognition services rely on finding publicly available pictures of you online, whether it's a profile image from social networks like Facebook and Google Plus or something more official, such as a company website photo or a college athletic portrait.
In their most recent round of facial recognition studies, researchers at Carnegie Mellon were able to not only match unidentified profile photos from a dating website (where the vast majority of users operate pseudonymously) with positively identified Facebook photos, but also match pedestrians on a North American college campus with their online identities.
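The matching step at the core of such a system can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (the names, embedding vectors, and threshold are all invented): a "street snapshot" face embedding is compared against a gallery of labeled embeddings scraped from public profiles, using cosine similarity, and the best match above a confidence threshold is reported. Real systems like PittPatt use far more sophisticated models, but the nearest-neighbor matching logic is the same in spirit.

```python
import numpy as np

# Toy "gallery" of labeled face embeddings, standing in for photos
# harvested from public profiles. Real embeddings come from a
# face-recognition model; these 3-d vectors are invented for illustration.
gallery = {
    "alice": np.array([0.9, 0.1, 0.2]),
    "bob":   np.array([0.1, 0.8, 0.3]),
    "carol": np.array([0.2, 0.2, 0.9]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query, gallery, threshold=0.8):
    """Return the best-matching identity, or None if nothing clears the threshold."""
    best_name, best_score = None, threshold
    for name, embedding in gallery.items():
        score = cosine_similarity(query, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# A "street snapshot" embedding that happens to lie close to alice's gallery photo.
snapshot = np.array([0.85, 0.15, 0.25])
print(identify(snapshot, gallery))  # "alice"
```

The threshold is the knob that trades false matches against missed matches; the campus study described above is essentially this loop run against a gallery of hundreds of thousands of Facebook profile photos.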
The repercussions of these studies go far beyond putting a name with a face. Researchers Alessandro Acquisti, Ralph Gross, and Fred Stutzman see such technology as a leap forward in the convergence of offline and online data, an "augmented reality" in which our online and offline lives increasingly blend into one.
With the use of publicly available Web 2.0 data, the researchers can potentially go from a snapshot to a Social Security number in a matter of minutes:
We use the term augmented reality in a slightly extended sense, to refer to the merging of online and offline data that new technologies make possible. If an individual's face in the street can be identified using a face recognizer and identified images from social network sites such as Facebook or LinkedIn, then it becomes possible not just to identify that individual, but also to infer additional, and more sensitive, information about her, once her name has been (probabilistically) inferred.
In our third experiment, as a proof-of-concept, we predicted the interests and Social Security numbers of some of the participants in the second experiment. We did so by combining face recognition with the algorithms we developed in 2009 to predict SSNs from public data.
SSNs were nothing more than one example of what is possible to predict about a person: conceptually, the goal of Experiment 3 was to show that it is possible to start from an anonymous face in the street, and end up with very sensitive information about that person, in a process of data "accretion." In the context of our experiment, it is this blending of online and offline data -- made possible by the convergence of face recognition, social networks, data mining, and cloud computing -- that we refer to as augmented reality.
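The "accretion" process the researchers describe, starting from a single identifier and pulling in more data from each public source it unlocks, can be sketched with toy data. Everything below (the source names, the records, the fields) is invented for illustration; the point is only the chaining of lookups, not any real data pipeline.

```python
# Hypothetical public data sources, each keyed by the name recovered
# from the face-matching step. All records here are made up.
social_profiles = {"alice": {"employer": "Acme Corp", "city": "Pittsburgh"}}
voter_rolls     = {"alice": {"birth_year": 1985, "birth_state": "PA"}}

def accrete(name):
    """Merge whatever each public source knows about `name` into one record."""
    record = {"name": name}
    for source in (social_profiles, voter_rolls):
        record.update(source.get(name, {}))
    return record

profile = accrete("alice")
print(profile)
```

Each added field can unlock further lookups: in the researchers' experiment, birth date and state were enough to feed their 2009 SSN-prediction algorithm, which is how an anonymous face becomes a Social Security number.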
Jason Mick at DailyTech notes that PittPatt started as a Carnegie Mellon University research project, which spun off into a company post 9/11.
"At the time, U.S. intelligence was obsessed with using advanced facial recognition to identify terrorists," writes Mick. "So the Defense Advanced Research Projects Agency (DARPA) poured millions into PittPatt."While Google purchased the company in July, the potential for such intrusive technology to be used against law-abiding citizens is cause for concern.
England saw this in the wake of the rioting, looting, and arson that swept across the country. A Google group of private citizens called London Riots Facial Recognition emerged with the aim of using publicly available records and facial recognition software to identify rioters, a form of digital vigilantism. The group eventually abandoned its efforts when its experimental app, based on the much-maligned photo-tagging facial software Face.com, yielded disappointing results.
"Bear in mind the amount of time and money that people like Facebook, Google, and governments have put into work on facial recognition compared to a few guys playing around with some code," the group's organizer told Kashmir Hill at Forbes. "Without serious time and money we would never be able to come up with a decent facial recognition system."
Alessandro Acquisti told Steve Hann at MarketWatch after a demonstration that the prospect of selling his new app or making it available to the public "horrifies" him.
And while there are certainly limits to what software like PittPatt can distill from the cloud, the closing gap between life offline and life in the cloud is becoming more observable with each progressive breakthrough:
So far, however, these end-user Web 2.0 applications are limited in scope: They are constrained by, and within, the boundaries of the service in which they are deployed. Our focus, however, was on examining whether the convergence of publicly available Web 2.0 data, cheap cloud computing, data mining, and off-the-shelf face recognition is bringing us closer to a world where anyone may run face recognition on anyone else, online and offline -- and then infer additional, sensitive data about the target subject, starting merely from one anonymous piece of information about her: the face.
As Google's then-CEO Eric Schmidt famously put it in 2009: "I think judgment matters. If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place. If you really need that kind of privacy, the reality is that search engines -- including Google -- do retain this information for some time and it's important, for example, that we are all subject in the United States to the Patriot Act and it is possible that all that information could be made available to the authorities."
These little bits of information exist like digital detritus. With software like PittPatt that can glean vast amounts of cloud-based data when prompted with a single photo, your digital life is becoming inseparable from your analog one. You may be able to change your name or scrub your social networking profiles to throw off the trail of digital footprints you've inadvertently scattered across the Internet, but you can't change your face. And the cloud never forgets a face.