Wednesday, May 26, 2010

Facebook and Privacy Part II

Attached is a PDF generated from Notable about my thoughts regarding Facebook and privacy settings. As I've written previously in posts regarding privacy, the landscape is changing, morphing by the millisecond, so anything I post in this context will probably be old news before I click the submit button.

Regardless, from an experience, design, and business perspective, I noticed many things that fail to provide the (assumed) user with effective ways of not only configuring settings but also understanding the settings themselves.

High-Level Observations:

- Why does a user have to go to a dashboard or a full-blown state/mode to configure content display models, content access, or screen configuration? In other words, it would be so much more understandable and valuable to users if the settings for privacy were accessible in the context of interacting with the content.

- Why does the "preview" state have to be a state? Why can't it be a "resolution model" which shows me a real-time feedback loop of how what I choose or select impacts the "default view" of my profile from

- multiple perspectives. If you're going to force me into the "Only me, friends, and everyone" model of grouping, at least give me the option to define my own groups and ways of naming them/specifying access control. Facebook has always felt more like an application or platform as opposed to a website made of pages and page turns. Yet they insist on staying "simple and elegant" (which means they are too lazy to think about some fundamental design issues).

- I'm still seeing a lot of fine print, abstraction, and obfuscation: more fine print buried behind links in sub or supporting copy blocks. An organization like Facebook is responding to public outcry. The experience in and of itself is a "brand message" and wholly affects "perception". It's not good enough to simply offer access anymore. If Facebook plans on retaining users or limiting attrition, it is vital that they be completely transparent about policy and about the effect of the user's input.

- How do my privacy settings affect the use of my "social graph" in the form of several syndicatable streams, including Facebook? How does OpenID get affected? How can I manage OpenID/FBConnect privacy settings in this context? Can I?
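
To make the grouping and preview complaints concrete, here is a minimal sketch of what user-defined audiences plus an in-context "view as" check could look like. All of the names and types here are hypothetical illustrations on my part, not Facebook's actual data model or API:

```typescript
// Hypothetical sketch: user-defined audiences instead of the fixed
// "Only me / Friends / Everyone" tiers, plus a "view as" resolver that
// could drive a real-time preview right next to the content itself.

type AudienceId = string;

interface Audience {
  id: AudienceId;
  name: string; // user-chosen, e.g. "College friends" or "Coworkers"
  memberIds: Set<string>;
}

interface ContentItem {
  id: string;
  ownerId: string;
  visibleTo: AudienceId[]; // set inline, where the content is displayed
}

// Resolve visibility from a given viewer's perspective. A live preview
// pane would simply re-run this check whenever a setting changes.
function canView(
  item: ContentItem,
  viewerId: string,
  audiences: Map<AudienceId, Audience>
): boolean {
  if (viewerId === item.ownerId) return true; // owners always see their own content
  return item.visibleTo.some(
    (id) => audiences.get(id)?.memberIds.has(viewerId) ?? false
  );
}

// Usage: preview a photo "as" a coworker without leaving the photo page.
const coworkers: Audience = { id: "a1", name: "Coworkers", memberIds: new Set(["u42"]) };
const photo: ContentItem = { id: "p1", ownerId: "me", visibleTo: ["a1"] };
console.log(canView(photo, "u42", new Map([["a1", coworkers]]))); // true
console.log(canView(photo, "stranger", new Map([["a1", coworkers]]))); // false
```

The point is not this particular data model; it's that the visibility check is cheap enough to re-run on every change, so the "preview" never needs to be a separate state or dashboard.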

Also stated before is the fact that social networking sites were not built to retain or protect a person's sense of privacy because they are about public (or specified-as-private) interactions via a channel called the "internet". In the end, these settings are a knee-jerk panic response by what I assume to be the C-suite and legal fighting some made-up time limitation, with the intent to "get something up" as opposed to providing real value (i.e., clear understanding) to the user. The troubling pattern I am seeing here is that Facebook is in a lose-lose situation. They are trying to control something that is at the core of their value proposition, both to themselves and to the people who use the website. Without the "social graph" and "data trail" people leave, FB diminishes in value in terms of relevancy and experience. By answering to public outcry, Facebook has abandoned this core value structure, capitulating to advertising and revenue streams due to its market position.

We all know that when the user is happy, the company will be too. I wonder when the companies of tomorrow will start realizing that this "game" has changed. That the user is in control now and that the system is expected to provide this control. It's no longer "let's build it and let the user figure it out"; it's "the user dictates everything and we provide the tools to enable them to do so". Still, I see many companies, even ones as new as Facebook, holding tightly to old and failed models, repeating mistakes in favor of the business as opposed to listening to customers. This leaves a great gap for opportunity and competition, if not the death of Facebook to come (at least as we know it today).

My prediction for identity and privacy on the web: user beware and user controls. More and more pieces of our online identity have been moving to the "cloud", which means a syndicated and consistently synced identity where the user chooses what information is accessible to whom, and when, and how. We're not there yet. And the war is with the usual suspects, who most of the time want to be given information without giving anything other than a bad user experience back. The value to all gets lost in the battle, when the solution seems simple to those with experience: be transparent, or don't do anything at all when it comes to my data, my privacy, and the risk of me being harmed or made vulnerable to harm through use of a system. Liability will always be an issue when it comes to privacy because the entire definition and concept of privacy is dependent on multiple people or parties. There are negotiations, norms, implicit and explicit rules of behavior. There are also policies in place that can be leveraged if harm does happen. In the end, it's all about personal responsibility and vigilance by the user to manage what data is provided, and when, and how.
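
For what it's worth, here is a rough sketch of that user-controlled, synced identity, assuming nothing about any real identity protocol (OpenID and friends are far messier in practice): every attribute is private by default, and a consumer sees a piece of it only under an explicit, expiring, revocable grant:

```typescript
// Hypothetical sketch of a user-controlled "cloud identity": every
// attribute is private by default and released only under an explicit,
// expiring grant from the user to a named consumer (site or app).

interface Grant {
  attribute: string;  // e.g. "email", "birthday"
  consumerId: string; // e.g. "some-social-site.example"
  expires: Date;
}

class IdentityVault {
  private attributes = new Map<string, string>();
  private grants: Grant[] = [];

  set(attribute: string, value: string): void {
    this.attributes.set(attribute, value);
  }

  // The user decides who sees what, and for how long.
  grant(attribute: string, consumerId: string, ttlMs: number): void {
    this.grants.push({ attribute, consumerId, expires: new Date(Date.now() + ttlMs) });
  }

  revoke(attribute: string, consumerId: string): void {
    this.grants = this.grants.filter(
      (g) => !(g.attribute === attribute && g.consumerId === consumerId)
    );
  }

  // Default deny: a consumer gets nothing without a live grant.
  read(attribute: string, consumerId: string): string | undefined {
    const allowed = this.grants.some(
      (g) => g.attribute === attribute && g.consumerId === consumerId && g.expires > new Date()
    );
    return allowed ? this.attributes.get(attribute) : undefined;
  }
}
```

Default deny is the whole design choice here: the burden sits on the consumer to ask, not on the user to lock things down after the fact.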

Wednesday, May 19, 2010

Tomorrowland by Daniel D. Castro

I can't repeat this enough: your microwave will be speaking to your tires in the somewhat near future. Sensory input (aka passive influence) into systems will automate much of what we angst over about "privacy" online. Still, I can't help but think back to classes in 1998 and prior where my esteemed professors would speak of such things being common by "2010" (this is when people scoffed at an "expert" proposition that over half of all households in the US would have "broadband" access - ADSL - within the next five years). The point is that predicting the future is AIMING an arrow towards a target while reading factors like wind speed and direction. If you focus on the target, you usually miss, like in pool when you look at the cue ball (a no-no) when lining up the shot. Businesses seem to think in shorter-term intervals (like yesterday, I need this yesterday) without considering the path, the journey, and the constant change in plans along the way. That's not to say that some businesses don't get lucky by blindly charging forward in knee-jerk ways as second movers or fast followers, or through strange interpretations of "following" born of a complete lack of understanding of things like user experience, design, or programming/software engineering...

We used to refer to this as "ubiquitous computing", where you would gain "peripheral awareness" of activity by and from your servant machines. Isn't it ironic that in AI and machine learning people are spending tons of money on understanding concepts of "empathy" over data aggregation or cleansing? Just some thoughts.

Thursday, May 13, 2010

Facebook and Privacy

(this is a blog post... waiting rooms)

http://www.allfacebook.com/2010/05/infographic-the-history-of-facebooks-default-privacy-settings/

This is very interesting and clearly shows default settings over time. I'd love to see a side-by-side as well as callouts to policies related to the shifts in their default settings. Regardless, it does serve as a metaphor for the fluidity of the policies in place, as witnessed with recent court cases involving the FCC and EFF, among other banal acronyms. Harking back to the blog post: expecting privacy in a "social network" without actively learning how to manage it (i.e., spending time and calories) is like getting into a taxi in Chicago and expecting not to pay. There is an understanding implied by the very nature of the website, clearly broadcast in "advertising" often featuring real-time "social graph" threads (posts, photos). What is troubling to me is the belief that regulation is the solution, that our government or someone else can make some very personal and important choices for us when we ourselves have no idea what choice we would make if the situation arose (because we have not experienced it yet).

Defaults (i.e., just in case someone doesn't take the time to review policy, preferences, settings, etc.):

My wallet is private. What I spend my money on is not public knowledge, for obvious security reasons. Besides, banks hate fraud and scams and spam (if they are legitimate banks). When that stuff happens to my money, I can sleep safely (sort of) knowing that my bank wants that info secure as much as I do. Mint.com was able to "open up" their platform in ways extremely useful to their core offering without creating even the perception of risk. Why is it so hard to do this on eCommerce sites? How can eCommerce providers reassure their customers that the information collected will not put someone at risk of theft or harm but will enhance their experience through gained understanding? (Amazon claims this, but recommendations so far out of the realm of what I am interested in keep getting in the way of what I am, so I have yet to see the logic working.) While my wishlist may reflect what I like and perhaps am able to spend money on, what I own and have purchased from them is not public knowledge unless I "opt in" to identify and "rate"...
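
If I had to write down the "defaults" I'm arguing for, they would look something like this sketch (the field names are hypothetical, obviously): everything starts closed, and every widening of the audience is a deliberate opt-in by the user.

```typescript
// Sketch of opt-in defaults: every field starts at the most restrictive
// setting, and nothing widens without an explicit user action.
type Visibility = "only-me" | "friends" | "everyone";

const DEFAULTS: Record<string, Visibility> = {
  purchases: "only-me",
  wishlist: "only-me",
  ratings: "only-me",
  relationshipStatus: "only-me",
};

function optIn(
  settings: Record<string, Visibility>,
  field: string,
  level: Visibility
): Record<string, Visibility> {
  // The only path to a wider audience is a deliberate call like this one.
  return { ...settings, [field]: level };
}

const mine = optIn(DEFAULTS, "wishlist", "friends"); // I chose this; nothing else moved.
```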

I partake in "social networks" because I want to connect with people (friends or otherwise). Whatever my intentions, it has never been anything other than clear to me that what I do will be shared with a "network" of people. When the network was small and limited to those "inside" (logged into) the platform, I feared little about violations of my privacy. It seems that as the network opened up and the whole world could scrutinize my "data-shadow", I began to worry that, say, some ill-intentioned organization or individual would recontextualize and repurpose my data for evil or harmful means. This has never happened to me or any of the people I know. Sure, there has been "drama" between a friend or former lover or family member, some spam, some spam from me to others I had to apologize for...

Most of the time, when I experience harm from being active on a social site, it is when I do something that breaks a collective "norm" of behavior. If I post something inappropriate, if I say something shocking, I get a response, negative or positive. Someone I know posted something to the effect of "why would god do such horrible things to a child if he has so much power?" and I was alarmed and checked in because I was concerned. One time I posted some comments about an agency I did work for and later regretted the rant and took it down. All of this stuff is so new (YouTube, for example, turned FIVE YEARS OLD yesterday). And when things are new, they are "disruptive".

Ultimately and unfortunately, the harms that people experience through violations of their privacy will result in remediation to address and assess risk. We will come up with new ways of monitoring and managing information rapidly in the coming years due to increased connectivity, higher "bandwidth", better devices and infrastructures... But since humans are using all of this, there are social and cultural (and emotional) considerations and frameworks in place that could help in the development of systems and processes that ensure safety online, even on social networks.

It is not the website, it is the PERSON who is responsible for how s/he/they use the website. When we click those EULAs, we agree to this. No social networking site wants people to live in terror or fear when they use their services. If someone gets into a car wreck, the car is seldom to blame (except for Toyota...). In other words, when someone uses information they should not have had access to in the first place and harm is caused, there is usually a consequence to the action. If the harm is widespread and severe enough, there is usually a policy-level reaction. Maybe I'm naive, but I don't know anyone who would maliciously "phreak" someone on a social network and do harm to someone else. Luckily I've not been a victim, nor have I heard of anyone who has.

That's not to say sites like Facebook shouldn't be a little more empathetic to the lives of people, more consoling in their response to questions about their policies. No matter the "channel" in communications, there are always structures, vernaculars, and syntax. Some are less obvious than others. In addition to various levels of channel noise, there are understandings about how to behave or act. Otherwise, there would be no continuity, nothing to engage with. As an example, consider stuff that is "private" unless asked about, like who you may be dating and the status of your relationship. Again, it's hard to blame Facebook, in my opinion, when the "user" has the ability to not fill those fields out. I don't recall, having used Facebook for a long time now, those fields ever being "required".

In the end, eBay comes to mind the most when it comes to "liability" and "policy" on a "social network" where the risks of harm are many due to that leap of faith humanity must take in any marketplace transaction. "Buyer Beware" was a mantra when the site took off. From day one there were reputation management tools that allowed people to flag and file complaints and provide eBay with invaluable feedback to manage changes to their platform before widespread disaster or harm struck.

There are all these models outside of their intended use we can draw upon to render "defaults" for how privacy is managed in an increasingly "connected" age. Behavior will be the ultimate judge of how privacy will shape itself in the coming days, weeks, and decades. People won't participate in social networks that deprive them of their expected right to security and safety of self and their "data-trails". Those who throw their hands in the air and claim naiveté, when ignorance is more appropriately applied, should reconsider why they are participating in a social network or providing information that, no matter what, is at risk of being used in a malicious or harmful manner due to the impossibility of completely securing a "channel" through which information is transmitted.

The coming mantra for 2010-2012 or so will be "User Beware". Not because people or companies are bad but because no one has an answer right now, the stuff is new, we are still shaping it all. Social networks in this sense could be used to share information and awareness about privacy and policy and ways to manage it via the "users" themselves. Which is something we're already seeing.

Monday, May 3, 2010

Thoughts about "The Data Driven Lifesyle" article in the New York Times Weekend Magazine May 2, 2010

From the Author:
"People are not assembly lines. We cannot be tuned to a known standard, because a universal standard for human experience does not exist."

ME: which is why User Experience professionals tend to get frustrated (and designers, but that is an older and much richer story). Pat Whitney said it well when he spoke to the fact that "user research" and "data" based on behavior and automated sensor input have driven down costs and effort. Further, relying on older models that service older media channels (like television and radio advertising) will not provide the awareness or understanding it would take to create competitive experiences in the very near future (see: now).


Comments:
"The map is not the territory." — Alfred Korzybski

ME: Richard Saul Wurman speaks to this. Maps are political artifacts that speak to policy, while the lives of people, culture, etc. form the basis of communities. We're used to looking at the map, and the map is becoming less and less relevant with the rise of what we call "globalism".
"I think the loss of our human-ness is more the result of inadequate tools that make us adapt to them instead of the tools adapting to us.

The philosophical paradigm shift this represents is on a scale with the spread of written language, the development of agriculture, or the Enlightenment. Whether we like it or not, integrating the computer into the minutia of our daily lives means we are changing the game - externalizing the computing power of our own brains. The terror and the excitement people feel at this more and more obvious change is the most convincing evidence I can think of that it's real and it's accelerating."

ME: Jaron Lanier speaks to this in "You Are Not a Gadget". We tend to praise interfaces these days that would have been scoffed at 10 years ago, in favor of flash and glitter. It still amazes me that the wiki is like the bomb these days, still referred to as radical, etc. Seems like we get lost in the end game and end result (or what we want it to be) rather than step back, as Pat Whitney said at AIGA's "The Death of Advertising", and abstract the real problems and human needs, intent, agendas... Further, bad interfaces that we are forced to rely on alter our workflow, our epistemology, our mental constructs, not to mention cause great inefficiency. The last point is a great one. Does the fact that it's happening and being openly discussed mean it's too late to stop it? Do we wish to stop it? Can we slow it down? No. Moore's Law - it applies to us as well as machines.
"I've met people like this.

I usually find them very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very...

BORING."

ME: LMAO!!!! I wonder who wrote that... Anyway, Kurt Vonnegut was once asked if he spends most of his time with his writer buddies and communities. His reply was short and sweet: No. When asked why, he said something to the effect that it would be extremely boring and he would gain little in terms of the insight and awareness he relies on when he crafts stories for people who are not writers (like most of the world). When I attended graduate school, I always counted myself fortunate that my life outside of the campus was not spent with other "human-centered designers". My mom always said "no one is more right or knows more than a graduate student". Not only can they be boring but offensively ignorant of the world outside of their own, specialized realms.
"literacy was once a threat to humanity because of the way it "represented" the vagaries of human life. (I am reminded of the belief in some cultures that photographing the human form is kind of theft of the soul.) I am sure you are right that we will eventually find humanity in data, as we have in the written word.

However, it is not honest or responsible to confidently assert, for example, that early critics of the written word were simply wrong. History does not show that. History shows, rather, that the written word made its wielders more powerful. Don't forget: the written word has often been used to oppress. Think of Martin Luther and the early Protestantism--it was largely a response to the way the Church had used literacy as a tool of oppression. Our idea that literacy liberates is basically a function of the fact that it equalizes the weak with their oppressors, not that it is "inherently" liberating.

Self-tracking will undoubtedly be used to oppress. It will wend its way into mainstream culture, eventually becoming something that employers expect of you as a matter of course. The temporal "productivity gaps" which we use to daydream, think about politics or other non-work related ideas, or simply consolidate memories, will be targeted and eliminated. Also, it is almost inconceivable that self-tracking data will avoid eventually going public.
Only by grasping the subtle seriousness of this issue will we give ourselves a chance at actualizing a future that does not involve blanketing ourselves in highly granular control mechanisms.

It's probably inevitable but that doesn't make it good. Look at it this way: we will never know what the world would be like today if writing hadn't been invented, and conversely, there are an indefinite number of technologies that weren't invented hundreds of years ago, and we will never know what the world would be like today if they had been invented."

ME: Yeah, people's initial reaction to change, usually when it is inevitable and will disrupt current behavior, is to shoot it down. We in the digital innovation group experience this daily, especially when we're right-on in our response to a problem or thinking about something. I know we've done a great job when the reaction to our work is WTF!? Even if it's wrong, the presentation serves as a "probe" to gain insight into what people think would be "right".

Further, what was missing from the comments and the article itself was any mention of how much of the input AND analysis of the "data" about us will be automated, so it won't require a "second life" of "reflection" to make sense of, make use of, or find value in the "personal data stream". They also missed the point about personal control and our tendency to not use stuff we can't control - especially when it has to do with our ability to deny or ignore various aspects of our inner lives.