May 6, 2010

Face.com API Can Use Scanned Images to Identify the Location, Orientation, and Identity of Human Faces; Your Face May Become an Access Point to All the Digital Data About You on the Web

New API Takes Facial Recognition From Facebook and Puts It Everywhere

Face.com has launched the alpha of its new API. Now, almost any site can find faces in photos.

May 4, 2010

Singularity Hub - Face.com, the company responsible for the Facebook applications Photo Tagger and Photo Finder, lets you take any photo and quickly identify who is in it and where they are in the photo.

This facial recognition is a boon to those tagging photos, and now Face.com is ready to bring a similar capability to the rest of the internet. On May 3rd the company launched its new open API, which can scan images and rapidly identify the location, orientation, and identity of human faces.
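
As a rough sketch of what calling such a service might look like, the snippet below posts a photo URL to a hypothetical REST face-detection endpoint and reads back per-face results. The endpoint, parameter names, and response fields are illustrative assumptions, not Face.com's documented interface.

    # Sketch of querying a generic REST face-detection service.
    # The URL, parameters, and response fields below are hypothetical.
    import requests

    DETECT_URL = "https://api.example-faces.com/faces/detect"  # assumed endpoint

    def detect_faces(photo_url, api_key):
        """Ask the service to locate faces in the photo at photo_url."""
        resp = requests.post(DETECT_URL, data={
            "api_key": api_key,
            "url": photo_url,
        })
        resp.raise_for_status()
        # Assumed response shape: a list of faces, each with a bounding box,
        # head orientation (yaw/pitch/roll), and an optional identity guess.
        return resp.json().get("faces", [])

    for face in detect_faces("http://example.com/party.jpg", "MY_API_KEY"):
        print(face["center"], face["yaw"], face.get("identity"))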

The API platform is meant for web designers who want to include a facial recognition feature on their own websites. With this API, any company could let you upload a photo of yourself and find other photos of you in its database.
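
One plausible way a site would wire this up is to first train the service on photos it already knows belong to a user, then query newly uploaded photos against those enrolled identities. The flow below is a hypothetical sketch under that assumption; the /faces/train and /faces/recognize paths and field names are invented for illustration.

    # Hypothetical client flow: enroll known photos of a user, then ask
    # which enrolled users appear in a newly uploaded photo.
    import requests

    BASE = "https://api.example-faces.com"  # assumed service base URL

    def enroll_user(user_id, known_photo_urls, api_key):
        """Train the service on photos the site already attributes to user_id."""
        resp = requests.post(BASE + "/faces/train", data={
            "api_key": api_key,
            "user_id": user_id,
            "urls": ",".join(known_photo_urls),
        })
        resp.raise_for_status()
        return resp.json()

    def find_matches(uploaded_photo_url, api_key):
        """Return candidate user_ids for each face found in an uploaded photo."""
        resp = requests.post(BASE + "/faces/recognize", data={
            "api_key": api_key,
            "url": uploaded_photo_url,
        })
        resp.raise_for_status()
        # Assumed response: one entry per detected face, each carrying a
        # confidence-ranked list of candidate user_ids.
        return resp.json().get("faces", [])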

The API is now in alpha testing, and registering to try it is free and very quick. Face.com, operated by Israel-based Vizi Labs, is looking to share the API with the developer community to see whether the next killer application for facial recognition will arise organically. Eventually, platforms like this one may help your face become an access point to all the digital data about you on the web ...

Facebook's "Evil Interfaces"

April 29, 2010

Electronic Frontier Foundation - Social networking companies don't have it easy. Advertisers covet their users' data, and in a niche that often seems to lack a clear business model, selling (or otherwise leveraging) that data is a tremendously tempting opportunity. But most users simply don't want to share as much information with marketers or other "partners" as corporations would like them to. So it's no surprise that some companies try to have it both ways.

Monday evening, after an exasperating few days trying to make sense of Facebook's bizarre new "opt-out" procedures, we asked folks on Twitter and Facebook a question:

The world needs a simple word or term that means "the act of creating deliberately confusing jargon and user-interfaces which trick your users into sharing more info about themselves than they really want to." Suggestions?

And the suggestions rolled in! Our favorites include "bait-and-click", "bait-and-phish", "dot-comfidence games", and "confuser-interface-design".

Although we didn't specifically mention Facebook in our question, by far the most popular suggestions were variations on this one from @heisenthought on Twitter:

How about "zuck"? As in: "That user-interface totally zuckered me into sharing 50 wedding photos. That kinda zucks"
Other suggestions included "Zuckermining", "Infozuckering", "Zuckerpunch" and plenty of other variations on the name of Facebook's Founder and CEO, Mark Zuckerberg. Others suggested words like "Facebooking", "Facebaiting", and "Facebunk".

It's clear why folks would associate this kind of deceptive practice with Zuckerberg. Although Zuckerberg told users back in 2007 that privacy controls are "the vector around which Facebook operates," by January 2010 he had changed his tune, saying that he wouldn't include privacy controls if he were to restart Facebook from scratch. And just a few days ago, a New York Times reporter quoted a Facebook employee as saying Zuckerberg "doesn't believe in privacy".

Despite this, we'd rather not use Zuckerberg's name as a synonym for deceptive practices. Although the popularity of the suggestion shows how personal the need for privacy has become for many Facebook users, we'd prefer to find a term that's less personal and more self-explanatory.

Instead, our favorite idea came from Twitter user @volt4ire, who suggested we use the phrase "Evil Interfaces". The name refers to a talk by West Point Professor Greg Conti at the 2008 Hackers On Planet Earth conference.

Here's Conti explaining Evil Interfaces to a puppet named Weena:

[Embedded video from youtube.com]

As Conti describes it, a good interface is meant to help users achieve their goals as easily as possible. But an "evil" interface is meant to trick users into doing things they don't want to. Conti's examples include aggressive pop-up ads, malware that masquerades as anti-virus software, and pre-checked checkboxes for unwanted "special offers".

The new Facebook is full of similarly deceptive interfaces. A classic is the "Show Friend List to everyone" checkbox. You may remember that when Facebook announced it would begin treating friend-lists as "publicly available information" last December, the change was met with user protests and government investigation. The objections were so strong that Facebook felt the need to take action in response. Just one problem: Facebook didn't actually want to give up any of the rights it had granted itself. The result was the obscure and impotent checkbox described above. It's designed to be hard to find — it's located in an unlikely area of the User Profile page, instead of in the Privacy Settings page. And it's worded to be as weak as possible — notice that the language lets a user set their friend-list's "visibility", but not whether Facebook has the right to use that information elsewhere.

A more recent example is the process introduced last week for opting out of Instant Personalization. This new feature allows select Facebook partner websites to collect and log all of your "publicly available" Facebook information any time you visit their websites. We've already documented the labyrinthine process Facebook requires users to take to protect their data, so we won't repeat it here. Suffice it to say that sharing your data requires radically less work than protecting it.

Of course, Facebook is far from the only social networking company to use this kind of trick. Memorably, users of GMail were surprised last February by the introduction of Google Buzz, which threatened to move private GMail recipients into a public "frequent contacts" list. As we noted at the time, Buzz's needlessly complex "opt-out" user-interface was a big part of the problem.

OK, perhaps the word "evil" is a little strong. There's no doubt that bad user-interfaces can come from good intentions. Design is difficult, and accidents do happen. But when an accident coincidentally bolsters a company's business model at the expense of its users' rights, it begins to look suspicious. And when similar accidents happen over and over again in the same company, around the same issues, it's more than just coincidence. It's a sign something's seriously wrong.
