(this is a blog post... waiting rooms)
http://www.allfacebook.com/2010/05/infographic-the-history-of-facebooks-default-privacy-settings/
This is very interesting and clearly shows how the default settings have shifted over time. I'd love to see a side-by-side view, as well as callouts to the policies behind each shift in the defaults. Regardless, it serves as a metaphor for the fluidity of the policies in place, as witnessed in recent court cases involving the FCC and the EFF, among other banal acronyms. Harking back to the blog post: expecting privacy in a "social network" without actively learning how to manage it (i.e. spending time and calories) is like getting into a taxi in Chicago and expecting not to pay. There is an implicit understanding conveyed by the very nature of the website, clearly broadcast in "advertising" that often features real-time "social graph" threads (posts, photos). What troubles me is the belief that regulation is the solution, that our government or someone else can make very personal and important choices for us when we ourselves have no idea what choice we would make if the situation arose (because we have not experienced it yet).
Defaults (i.e., just in case someone doesn't take the time to review policies, preferences, settings, etc.):
My wallet is private. What I spend my money on is not public knowledge, for obvious security reasons. Besides, legitimate banks hate fraud, scams, and spam. When that stuff happens to my money, I can sleep safely (sort of) knowing that my bank wants that information kept as secure as I do. Mint.com was able to "open up" its platform in ways extremely useful to its core offering without creating even the perception of risk. Why is it so hard to do this on eCommerce sites? How can eCommerce providers reassure their customers that the information they collect will not put anyone at risk of theft or harm, but will instead enhance their experience through better understanding? (Amazon claims this, yet so much of what it shows me is so far outside my actual interests, and so often in the way of what I am interested in, that I fail to see the logic working.) While my wishlist may reflect what I like and perhaps can spend money on, what I own and have purchased from them is not public knowledge unless I "opt in" to identify and "rate" it...
I partake in "social networks" because I want to connect with people (friends or otherwise). Whatever my intentions, it has always been perfectly clear to me that what I do will be shared with a "network" of people. When the network was small and limited to those "inside" (logged into) the platform, I feared little for my privacy. As the network opened up and the whole world could scrutinize my "data-shadow," I began to worry that some ill-intentioned organization or individual would recontextualize and repurpose my data for evil or harmful ends. This has never happened to me or to anyone I know. Sure, there has been "drama" with a friend or former lover or family member, some spam, some spam from me to others that I had to apologize for...
Most of the time when I experience harm from being active on a social site, it is because I did something that broke a collective "norm" of behavior. If I post something inappropriate or say something shocking, I get a response, negative or positive. Someone I know posted something to the effect of "why would god do such horrible things to a child if he has so much power?" and I was alarmed and checked in because I was concerned. Once I posted some comments about an agency I had done work for, later regretted the rant, and took it down. All of this stuff is so new (YouTube, for example, turned FIVE YEARS OLD yesterday). And when things are new they are "disruptive".
Ultimately and unfortunately, the harms that people experience through violations of their privacy will drive remediation to address and assess risk. We will come up with new ways of monitoring and managing information rapidly in the coming years, thanks to increased connectivity, higher "bandwidth," and better devices and infrastructure... But since humans are the ones using all of this, there are social, cultural (and emotional) considerations and frameworks already in place that could help in developing systems and processes that ensure safety online, even on social networks.
It is not the website, it is the PERSON who is responsible for how s/he/they use the website. When we click through those EULAs we agree to this. No social networking site wants people to live in terror or fear when they use its services. If someone gets into a car wreck, the car is seldom to blame (except for Toyota...). In other words, when someone causes harm using information they should not have had access to in the first place, there is usually a consequence to the action. If the harm is widespread and severe enough, there is usually a policy-level reaction. Maybe I'm naive, but I don't know anyone who would maliciously "phreak" someone on a social network and do them harm. Luckily I've not been a victim, nor have I heard of one.
That's not to say sites like Facebook shouldn't be a little more empathetic to people's lives, more consoling in their responses to questions about their policies. No matter the "channel" of communication, there are always structures, vernaculars, and syntax. Some are less obvious than others. In addition to various levels of channel noise, there are shared understandings about how to behave or act. Otherwise there would be no continuity, nothing to engage with. As an example, take the stuff that stays "private" unless someone asks, like who you may be dating and the status of your relationship. Again, in my opinion it's hard to blame Facebook when the "user" has the ability to leave those fields blank. Having used Facebook for a long time now, I don't recall those fields ever being "required".
In the end, eBay comes to mind most when it comes to "liability" and "policy" on a "social network," where the risks of harm are many because of the leap of faith humanity must take in any marketplace transaction. "Buyer Beware" was the mantra when the site took off. From day one there were reputation management tools that allowed people to flag items, file complaints, and provide eBay with invaluable feedback, letting it manage changes to its platform before widespread disaster or harm struck.
There are all these models, even outside their intended use, that we can draw upon to shape "defaults" for how privacy is managed in an increasingly "connected" age. Behavior will be the ultimate judge of how privacy shapes itself in the coming days, weeks, and decades. People won't participate in social networks that deprive them of their expected right to the security and safety of themselves and their "data-trails". Those who throw their hands in the air and claim naivety, when ignorance is the more fitting word, should reconsider why they are participating in a social network or providing information at all; no matter what, that information is at risk of being used in a malicious or harmful manner, because no "channel" through which information is transmitted can ever be completely secured.
The coming mantra for 2010-2012 or so will be "User Beware". Not because people or companies are bad, but because no one has an answer right now; the stuff is new, and we are still shaping it all. In that sense, social networks themselves could be used by their "users" to spread information and awareness about privacy, policy, and ways to manage both. Which is something we're already seeing.