Thursday, July 22, 2010

Priority Mapping

The Digital Innovation Group has been working on evolving a concept introduced by a former colleague of mine, Joseph Dombroski, a User Experience Architect in the Chicago area. A priority map traditionally "road maps" various efforts, contingencies, and influences, and the hierarchy of importance inherent within those efforts. It is traditionally used for engineering and software design, and for some business strategy, from a tactical and mostly logistical perspective. 

Having practiced User Experience for many years now, a thread I've found common to much of my work is what some refer to as "parallel industry" examples: examples that speak to a design problem, issue, or challenge in ways that answer questions or suggest possible directions for innovating in another, parallel industry. An example of this is priority mapping as applied to a design and user experience development and production process. 

One of the challenges of designing in multi-disciplinary and collaborative teams is dealing with the agendas and incentives that drive the various "stakeholders" and "players" working towards an "end goal." No matter what the "end goal" is, I've been on many projects where the line of sight to the end goal(s) is obscured by agenda inserted as the "loudest voice in the room," or derailed by personal-viewpoint anxiety. What becomes more and more apparent during these moments of distraction, channel noise, and argument is that there needs to be a framework in place to guide and corral the discussions and to prioritize efforts from the perspective of the "end goal" (and the business and user needs), focusing all work and conversation on the things that directly address the problems and needs at hand. 

Enter priority mapping for user experience. Priority mapping for UX takes into consideration everything from high-level strategy to the relative proportion of objects, content, and functionality, in addition to "progressive disclosure": answering to "changing modes" within a customer's intent, or the system's responses to that intent. Priority mapping for UX does not specify layout or design language. Priority mapping starts with the human need and expectation for value and backs out to gain a holistic view of an experience, captured within modes and states (a "page," for example). Here's the process as it's evolved thus far:

1. Through collaboration with all parties involved in the ideation and production of a final deliverable or solution, facilitate alignment with the "end use" goals throughout the team.

2. Based on these goals, do a content audit to see where existing assets can be leveraged and where new ones may need to be created. 

3. A user story or scenario helps (but be careful not to stereotype or assume) by providing a structure that demonstrates a "path" through an experience. 

4. Coalescing 1-3, "map" out the "high level" content "blocks" within a "mode" (window, browser...). Once the blocks have been identified, providing high-level themes for an experience offering, it's time to work collaboratively to identify the "priority" and "proportion" of each content block or piece of functionality relative to the other content blocks. 

5. Using the finite space of a box (4:3 or 16:9 ratio), come up with percentages of importance: "primary focus" vs. "peripheral" or "secondary" focus. These percentages can drive the creation of the priority map in the sense that each is represented within the box by the amount of space it takes up (see the sketch after this list). See SmartMoney's "Map of the Market" for an example of how relative proportion can be used to show volume and weight.
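
To make step 5 concrete, here's a minimal sketch of how the agreed percentages could translate into proportional space. The block names, weights, and the simple banded layout are my own illustrative assumptions, not part of the process itself (SmartMoney's "Map of the Market" uses a fancier squarified treemap):

    # Hypothetical example: slice a 16:9 "mode" into bands whose areas
    # match each content block's share of focus. Names/weights are made up.
    CANVAS_W, CANVAS_H = 1600, 900  # a 16:9 box

    blocks = {
        "primary focus (core offer)": 50,          # percent of importance
        "secondary focus (supporting content)": 30,
        "peripheral (navigation, utility)": 20,
    }

    def priority_map(blocks, width, height):
        """Return (name, x, y, w, h) bands sized by relative priority."""
        total = sum(blocks.values())
        y = 0.0
        bands = []
        for name, weight in blocks.items():
            band_h = height * weight / total  # area share equals weight share
            bands.append((name, 0, round(y), width, round(band_h)))
            y += band_h
        return bands

    for name, x, y, w, h in priority_map(blocks, CANVAS_W, CANVAS_H):
        print(f"{name}: {w}x{h} at ({x},{y})")

The point of the sketch is only that the map is driven by agreed-upon proportion, not by layout or design language.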

The priority map, once "finished," can evolve based on discussions and iterations. It can be used as a way to focus efforts and thinking on the end goals and to work actively toward de-scoping channel noise or irrelevancy. It is also a great resource for conveying a solid direction and strategy that answers the understanding needs of non-UX influences within the production process. 

As this is a new process and still evolving, I can show no examples from Sears; the work on the table utilizing this method is proprietary and confidential to Sears internal employees only. If you work at Sears and are interested in priority mapping, please reach out to me so I can walk you through some examples and show the process.

Monday, July 5, 2010

iPad reflections on use (first three months) by a UX grouch

This post began its creation using the Atomic web browser, in a tab holding Blogger's posting UI.
I was able to input the title (though I discovered my breath was a command to hide the keypad) but was unable to begin writing these last two sentences due to some incompatibility between my more "real"-feeling browser and the "open source API schema." Thankfully I was able to switch to Evernote to write this post. I'll copy and paste it into the input box and format it using my laptop, which is sometimes a desktop. Some parts of my post may happen via SMS or cell. These smaller mobile devices feel so sluggish as they catch up to the capabilities I take for granted in my larger, clunkier devices. Five* years ago or so, the iPhone came out. Touch screens on mobile before it depended on stylus input, and touch screens at larger scale were tap-and-point, filled with puffy buttons (well suited for vending, service, and terminal applications).
This is one of the places where the iPad feels less like a "robust" machine and more like a toy version of what's to come. Though I like the thinking around multiple orientations and locking (something I wish the iPhone had), I seem to prefer landscape mode for reasons of more space for more stuff, or breathing room for focus (I tend to use the device on the toilet or horizontal in bed).
I still wish I could fluidly multitask like on a laptop or desktop; instead I feel trapped within the shuffle of transitions that seem and feel redundant while I wait for feed or program loads (sometimes not the fault of the device). My states, however, are saved, like when I spazzed and accidentally hit the recessed hardware home button and closed Evernote without hitting save. But like most novel things initially deemed "cool" in an interface, these can quickly become repetitive nuisances, hindering or breaking the flow of using a tool or application.
I can't deny that it serves as a great photo frame, music player, and portable note taker, as well as a sharing device in a show-you kind of way. I sense slide shows coming back as it gets easier to wirelessly transfer images instantaneously to several places at once, like Flickr, where I can preview and witness the shoot unfold.
Physically, my breath seems to say "close keyboard" in certain positions while typing. Again I think of the next manifestations of keyboard input, like simulated 3D tactile-response inflation of the keys, so I don't have to scrunch or develop bad typing and spelling habits. It's much like a conversation on a cell phone: you're shown a possibility of how what you said could be interpreted, and sometimes you have to repeat yourself several times before the other person can understand, sometimes through a crash or disconnection, and other times through distortion of my intended or expected input as represented by the device, be it a voice channel or a text input channel.
When I switched to the Safari web browser native to the iPad OS, I encountered the same input problems and again switched back to Evernote. At this point it may be fair to outline the pros and cons experienced thus far in my use of my iPad.
Screen brightness and size compared to the other "mobile" or "micro" devices I use and own (this includes a "netbook" loaded with both Windows XP and Ubuntu Linux, an iPhone 3GS, and a 13" MacBook Pro with a 7200 RPM custom hard drive and maxed-out RAM, among other gadgets) are impressive, as is the resolution. What I can admit is that computers and components are in fact shrinking and becoming more mobile in their use. In my early days of design and computers, a desktop was a necessity if one wanted to produce audio, video, or high-resolution graphics. Moore's law came faster (and slower; there are myths here) than many of us professional insiders will admit. The iPad isn't even a year old. All of these "game changing" devices are in their infancy.
Hardware mapping to function: it seems Apple has institutionalized the "home" metaphor by providing a hardware key. It's like the early versions and applications of the Esc key as the universal panic button. If I'm disoriented or want to switch to another application, I hit the home key. This landing-and-routing scheme supports single-tasking by requiring a user to pass through the gate of home before moving on to a sub-level within the architecture.

The screen orientation lock button as hardware, and the orientation scheme in general, are disorienting. There is a conflict between the lock toggle and the volume controls. Despite owning the device and using it daily for several months, I still require trial and error to discern up from down. Then there is the lock button. While I understand its necessity on the iPhone (to decrease butt dialing), I fail to see the value here, especially when cases for the iPad are considered in the mix. A case seems essential to the ownership of an iPad, if only for protection of a relatively frivolous and expensive gadget within the ecosystem of devices I use in my daily life. In my experience the case facilitates easier use by providing an inclined surface for typing on the keyboard, or a stand for when my iPad is in what I refer to (among others) as "passive viewing mode." What the lock breaks is the principle of on/off expectation. There is a mapping to the unlock in software form, yet locking itself is initiated via hardware; there is no software-based lock equivalent. Same goes for the screen lock. And volume. Why make these functions hardware-based when everything else on the device seems to be software-based?

Keyboard: here's where I get overly frustrated. No matter the position I sit in, no matter how hard I concentrate, no matter how much I practice, my rate of error using a touch-screen keyboard is astoundingly high (inefficient). For a while the flashiness of the UI was able to salve my disdain, and at first I welcomed auto-correct. What I don't get is that Apple took something that is a universally understood design vernacular and "innovated" it in ways that rely more on acceptance of a learning curve and the limitations of the interaction than on using the input mode to foster more efficient input into the system: like switching "states" between symbolic/numeric input (see screen shot), or hiding and showing the keyboard (again, discovery initiated with a learning curve). Last, I haven't figured out how "shift" works... 

Oh! That's what the symbolic/numeric toggle button on the keyboard is for. It makes me wonder if Apple is trying to change the game not only with platforms and gadgets but with how we cognitively map our physical world into a virtual one. I assume they own the rights or a patent on this QWERTY keyboard, as well as on the auto-suggest that I have a love/hate relationship with. 
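
To illustrate why these mode toggles breed errors, here's a toy model of my own (not Apple's implementation): the same key position yields a different character depending on a hidden mode, so muscle memory fires before the mode is checked.

    # Hypothetical sketch: layered keyboard modes remap the same keys.
    LAYERS = {
        "alpha":   {"key_1": "q", "key_2": "w", "punct": ","},
        "numeric": {"key_1": "1", "key_2": "2", "punct": "!"},  # punctuation remapped
        "symbol":  {"key_1": "[", "key_2": "]", "punct": "~"},
    }

    class Keyboard:
        def __init__(self):
            self.mode = "alpha"  # hidden state, no persistent physical cue

        def toggle(self, mode):
            self.mode = mode

        def press(self, key):
            return LAYERS[self.mode][key]

    kb = Keyboard()
    kb.toggle("numeric")
    print(kb.press("punct"))  # '!' where habit expected ','

Every toggle adds one more hidden state the typist has to track, which is exactly where my toggle-based mistakes come from.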

Though I can see the value of ownership locking out (and locking in) competition and fostering advocacy and adoption, I can't forget Sony's strategy, among others in the industry deemed overly focused on proprietary nuances that made "open" systems closed to everyone not subscribing to a brand. I can't help but think that this is a very carefully planned and executed strategy on Apple's part. Not only are they innovative in terms of platforms, systems, and hardware/software, but they lead the pack in terms of design thinking and business strategy.

That said, how could a closed system be a long-term strategy when we are barreling towards a more "open" one? In the short term Apple profits from locking out other players, pitching their humanness to the public and positioning the perception of their company as the underdog, the misunderstood creative spirit counter to the business-machine lands of Microsoft and Sun. People who whole-heartedly drink the Jobs punch ignore the fact that none of Apple's work, position in the market, or focus on being different would be possible without competition. Yet, like most businesses trying to eke out market share, the goal seems to be complete control, monopoly. Like their relationship with AT&T over any other carrier. I've never been able to stomach why a device should control the service I use to make it a communications channel. One of the best ways I could see someone being "different" in this space is by providing customers with options and choices, not to mention ubiquitously open systems of syndication, access, consumption, and management (metadata and content/messaging).

What I am trying to say is that Apple isn't as "user friendly" once the surface is peeled back and the motives of their corporation become painfully obvious. Further, I would say their lock-in, and their forcing the user to adapt to shortcomings in thinking or user testing before releasing to the market, actually stifles innovation and human evolution. But I represent only .00000000003% of the people who consume these products, due to my education, interests, history of use, and background in HCI, human-centered design, and product interface design. In other words, I have the vernacular to articulate where, when, and how interfaces fail, while 99.000000007% of the population have no clue and live in a world where technology and gadgets take up far less time and space in their lives than in mine.
What Apple seems to do very well, time and time again, is be first to market with technologies that other companies fail to realize at the same pace or with the same prowess in terms of delivery and value proposition. Perhaps that is where Apple is truly a leader: they are organized in such a way that they are able to produce, in a timely and efficient manner, products and services that appeal to the average "Jane".
Much of what I have written so far is about expectations, both personal and presented by the brand, the device, and the baggage I carry from previous experiences. Yes, I am hard on design and user interfaces. That's because I see the risks involved with what I refer to as the "captive audience" of a "GUI". Periphery disappears and focus on a boxed-in context is intense. At that point the device has undivided attention, and thus control over both physical and cognitive processes. It would not be impossible to actively design user interfaces that alter some very foundational physical and cognitive processes within us all, including what we say and how we say it (think about truncation and abbreviations these days, and the countless reports coming out about the western human's decline of focus, depth, or non-herd adaptation to shortcuts, workarounds, or system failures that actively destroy vital abilities). Like Neal Stephenson and Jaron Lanier have said in many ways in many forums to date: BEWARE. Be very conscious when using new technologies, and note when you are forced to change behavior to adapt to an offering hidden behind messaging like "it's all about you," because it never is when products and services and agendas are involved in the value proposition equation. At the end of the day Apple is a publicly traded company and therefore beholden to shareholder buy-in, like all the other businesses out there.
Back to the iPad... These gripes and critiques aside, I do find much pleasure in using my iPad in several areas not initially intended. There has been much debate about the death of print, and I am one of those old people stuck in a generation of publishing, of citation of source, and of the unmitigable nature of the printed word. The app I seem to use the most is Kindle. And that is ironic, because it integrates with the Amazon product platform and has facilitated much spending by me outside of the Apple Store ecosystem. The conduit to this was my lists on an existing platform focused on, and somewhat good at, a certain kind of product that warrants much of what we deem valuable on the net today and going forward (ubiquitous access to information and experts and social communities of use...).

I am so into the tactile interaction of a "multi-touch" screen. Having designed touch screen interfaces in my past, and hating the poke input model, I love seeing stuff from the early days of Flash (called Spark) in terms of responsive UI that engages users more subtly, less literally or metaphorically, and more "intuitively" through true interaction and communication loops. However, looking through the human interface guidelines document, I realize that within their closed development structure there is little room for variation from, or defiance of, the standard patterns put forth without a great deal of expertise, effort, and an extreme amount of patience in a developer. With the rise of HTML5 I hope we'll see a mass exodus from the App Store and a flocking towards a more open web that truly captures the advantages of the many channels and devices we use every day.

Some promising applications have been slow to realize their potential, like AirDisplay and Mobile Mouse. The lag with screen sharing is prohibitive to use; lag in response to input is death for an interface. Still, it offers hope for a use case I'm waiting on: "token devices" that fluidly share with one another, allowing me to unmoor or shed weight when needed while maintaining a home base or several home bases.

* pieces of the iPhone "GUI" were developed years before the iPhone appeared.

The default keyboard.

From numeric mode, I go to symbol mode. If this is a multitouch device, why not leverage the existing functionality of a multitouch keyboard like I'm used to on a "real" computer?

While in numeric mode, Apple remaps my punctuation keys, which is again disorienting and causes much in the way of toggle-based mistakes on input. Where is my standard shift key?

Wednesday, June 2, 2010

A Response: Natural User Interfaces Are Not Natural

"I believe we will look back on 2010 as the year we expanded beyond the mouse and keyboard and started incorporating more natural forms of interaction such as touch, speech, gestures, handwriting, and vision--what computer scientists call the "NUI" or natural user interface."
— Steve Ballmer, CEO Microsoft

That would be an awesome quote were it not for the FACT that all of this NUI stuff was around at Xerox PARC over 20 years ago (as Norman mentions). What is astounding is how slowly culture, both in and outside of business, has evolved while technology steadily increases its velocity of evolution (Moore's Law is now wrong; we're at a pace exponentially faster, according to people in the know). Why is it taking so long to make GUIs (NUIs) that match the technology's progression? My theory is that this stuff is "new" in the sense that it takes time to incorporate it all into the contexts of our lives, and that the pace of disruptive innovation introductions to the market has reached a level that is overwhelming even for the most spastic of embracers (myself included), "early adopters" or not. As we're in an economy of choice, as opposed to pure scale and demand fulfillment, even innovation seems to be a product category calling for discerning consumption.

Don writes: 
"As usual, the rhetoric is ahead of reality... Fundamental principles of knowledge of results, feedback, and a good conceptual model still rule. The strength of the graphical user interface (GUI) has little to do with its use of graphics: it has to do with the ease of remembering actions, both in what actions are possible and how to invoke them... The important design rule of a GUI is visibility: through the menus, all possible actions can be made visible and, therefore, easily discoverable."
Menus and the vernaculars he and many people rely on (AKA "patterns" and/or "standards") are direct responses to the constraints inherent in the systems (metaphors, proprietary hardware...) that they service. The "desktop" metaphor has been ripped to shreds and shown to be the culturally biased manifestation of a group of highly insular engineers, not to mention detrimental to the development of operating systems that are truly cross-cultural and/or flexible enough to be usable in many contexts. This metaphor has hurt the industry more than helped it in terms of innovation (see "In the Beginning... Was the Command Line," an essay by Neal Stephenson). Standards are good... for programming and system-level platform architecture... for sanity... for stability. But standards are often static, and mistaken for gospel as opposed to dynamic frameworks driven by the evolution of the marketplace and the demands therein; not to mention context, that human reality. When Norman makes statements like "Systems that avoid these well-known methods suffer," I get angry, because statements like that are blatant examples of how ignorant designers can be at times (i.e., generalizing without taking the time to think about the complexities of interactions, the concept of empathic response, and emergent technologies). In other words, systems that avoid usable and appropriate (to the user AND the business) methods suffer. Experiences and interfaces should respond to the demands of the content they are trying to service and provide to end users. For example, the unique facets of products or services should drive a designer to explore the best "vehicles" through which to drive a particular path down the information superhighway. When we live within our comfort zones in the name of stability and sanity, we miss out; we suffer a stagnation of evolution culturally, physically, cognitively, and socially (human factors, user-centered frameworks). And if you want to speak to "affordances," Norman should perhaps look at advertising agencies, or advertising in and of itself: the approaches that speak to the "unique selling points" of products or services as a driver for campaign messaging and positioning. The same applies to GUI or NUI: an interaction is a form of exchange, of rapport. There are many, many things going on outside of a pure form or system-level analysis.
"Because gestures are ephemeral, they do not leave behind any record of their path, which means that if one makes a gesture and either gets no response or the wrong response, there is little information available to help understand why."
Not all contexts are universal. Anthropometrics can apply to two-dimensional realities in the form of feedback from input: indication, understanding, response... There are many layers to the arguments Don positions that are ignored in favor of some call to convergence and standardization of thinking, in a realm that suffers greatly from any algorithm-based application of solutions made without thinking about the problem itself and the humans benefiting from the solution(s). What he speaks of here is handled by the display, the response of the system, and is not entirely dependent on the mode of input, be it gestural or keyboard, etc. I get the sense that because the keyboard and mouse have been around longer in a consumer context, Norman will find no fault in their use, citing "standards." As Jaron Lanier states clearly, we should be extremely angry at the lack of progression of these systems, at how extremely tolerant we are of shortcomings, at how we alter our behavior, much of the time dumbing it down, to accommodate the limitations of systems that should be much more functional.

Norman goes on to talk about standardization of gestures, etc. I assume he's dipping into his "affordances" misinterpretation at that point (or ignoring his own philosophies about that entirely). I mean, non-verbal communication, surfaces of inscription, and modes of channel-based communication had been studied as disciplines for decades prior to the invention of the PC. It scares me to see this foundational knowledge ignored by a so-called "expert" in the field. Going back further, Plato's allegory of the cave would be a great read at this point. It seems that human perception, if not human experience, is abandoned in favor of a full-out rant against a disruptive market release (because it calls into question many of his "standards," based on his interpretation of interaction and technology, as well as a very obvious need to gain market share as an expert in this realm by speaking to the anxieties of his constituency: mostly business, and mostly people who work with user experience professionals as opposed to practicing it on a daily basis).

As a "design historian" he should also be in touch with what the futurists are predicting, some of which is already here like physical feedback mechanisms triggered by neuro stimulation or holography (3D) or interactions which combine multiple input methods and models like voice/sound as a gesture that influences touch in combination with keyboard or key. Multi-combination input is central to gaming. Mapping new commands to actions is commonplace as a learning curve in many realms, even in non-expert user interfaces. Again, generalizing is appropriate in some cases. These generalizations, assumptions and supposedly credible insights about multi-touch and gestural UI are a tremendous disservice to the design community. Then again, looking through the prism of our current technology and how slowly it is catching up to what he called rhetoric ahead of reality, it's understandable to latch onto what is comfortable and requires little effort and expertise to explain or explore or extend.

Wednesday, May 26, 2010

Facebook and Privacy Part II

Attached is a PDF generated from Notable about my thoughts regarding Facebook and Privacy settings. As I've written previously in posts regarding privacy, the landscape is changing, morphing by the millisecond so anything I post in this context will probably be old news before I click the submit button.

Regardless, from an experience, design, and business perspective, I noticed many things that fail to provide the (assumed) user with effective ways of not only configuring settings but understanding the configuration(s) and/or setting(s) in and of themselves. 

High Level Observations:

- Why does a user have to go to a dashboard or a full-blown state/mode to configure content display models, content access, or screen configuration? In other words, it would be so much more understandable and valuable to users if the settings for privacy were accessible in the context of interacting with the content.

- Why does the "preview" state have to be a state? Why can't it be a "resolution model" which shows me a real-time feedback loop of how what I choose or select impacts the "default view" of my profile from

- multiple perspectives. If you're going to force me into the "Only me, friends, and everyone" model of grouping, at least give me the option to define my own groups and ways of naming them/specifying access control. Facebook has always felt more like an application or platform as opposed to a website made of pages and page turns. Yet they insist on staying "simple and elegant" (which means they are too lazy to think about some fundamental design issues).

- Still seeing a lot of fine print, abstraction, and obfuscation burying more fine print behind links in sub- or supporting copy blocks. An organization like Facebook is responding to public outcry. The experience in and of itself is a "brand message" and wholly affects "perception." It's not good enough to simply offer access anymore. What is vital, if Facebook plans on retaining users or limiting attrition, is to be completely transparent in policy and in the effect of the user's input.

- How do my privacy settings affect the use of my "social graph" in the form of several syndicatable streams, including Facebook? How does OpenID get affected? How can I manage OpenID/FBConnect privacy settings in this context? Can I?
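
For illustration, here's a minimal sketch of the user-defined grouping and in-context resolution model I'm describing. All names and structures are hypothetical, mine rather than anything in Facebook's actual platform:

    # Hypothetical model: user-named privacy groups plus a per-item
    # visibility check that could drive a real-time "preview" in context.
    from dataclasses import dataclass, field

    @dataclass
    class PrivacyGroup:
        name: str                  # user-chosen label, e.g. "coworkers"
        members: set = field(default_factory=set)

    @dataclass
    class ContentItem:
        owner: str
        visible_to: list           # groups chosen right next to the content

        def can_view(self, viewer):
            """Resolve visibility from any viewer's perspective."""
            if viewer == self.owner:
                return True
            return any(viewer in g.members for g in self.visible_to)

    family = PrivacyGroup("family", {"mom"})
    coworkers = PrivacyGroup("coworkers", {"alice", "bob"})
    photo = ContentItem(owner="me", visible_to=[family])
    print(photo.can_view("mom"))    # True: previewed as mom sees it
    print(photo.can_view("alice"))  # False: coworkers were never granted

The design point is that the check runs wherever the content lives, so a "preview" is just the same resolution run from another viewer's perspective, not a separate state.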

Also stated before is the fact that social networking sites were not built to retain or protect a person's sense of privacy, because they are about public (or specified-as-private) interactions via a channel called the "internet." In the end, these settings are a knee-jerk, quick panic response by what I assume to be the C-suite and legal fighting some made-up time limitation, with the intent to "get something up" as opposed to providing real value (i.e., clear understanding) to the user. The troubling pattern I am seeing here is that Facebook is in a lose-lose situation. They are trying to control something that is at the core of their value proposition, both to themselves and to the people who use the website. Without the "social graph" and "data trail" people leave, FB diminishes in value returns in terms of relevancy and experience. By answering to public outcry, Facebook has abandoned this core value structure, capitulating to advertising and revenue streams due to its market position.

We all know that when the user is happy, the company will be too. I wonder when the companies of tomorrow will start realizing that this "game" has changed; that the user is in control now and that the system is expected to provide this control. It's no longer "let's build it and let the user figure it out." It's "the user dictates everything and we provide the tools to enable him or her or it to do so." Still, I see many companies, even ones as new as Facebook, holding tightly to old and failed models, repeating mistakes in favor of the business as opposed to listening to customers. This leaves a great gap for opportunity and competition, if not the coming death of Facebook (at least as we know it today).

My prediction for identity and privacy on the web: user beware, and user controls. More and more pieces of our online identity have been moving to the "cloud," which means a syndicated and consistently synced identity where the user chooses where and what information is accessible, to whom, and when, and how. We're not there yet. And the war is with the usual suspects, who most of the time want to be given information without giving anything back other than a bad user experience. The value to all gets lost in the battle, when the solution seems simple to those with experience: be transparent, or don't do anything at all when it comes to my data, my privacy, and any risk of me being harmed or made vulnerable to harm through use of a system. Liability will always be an issue when it comes to privacy because the entire definition and concept of privacy is dependent on multiple people or parties. There are negotiations, norms, and implicit and explicit rules of behavior. There are also policies in place that can be leveraged if harm does happen. In the end, it's all about personal responsibility and vigilance by the user in managing what data is provided, and when, and how.

Wednesday, May 19, 2010

Tomorrowland by Daniel D. Castro

I can't repeat this enough: your microwave will be speaking to your tires in the somewhat near future. Sensory input (aka passive influence) into systems will automate much of what we angst over about "privacy" online. Still, I can't help but think back to classes in 1998 and prior where my esteemed professors would speak of such things being common by "2010" (this was when people scoffed at an "expert" proposition that over half of all households in the US would have "broadband" access, ADSL, within the next five years). The point is that predicting the future is AIMING an arrow towards a target while reading factors like wind speed and direction. If you focus on the target, you usually miss, like in pool when you look at the cue ball (a no-no) when lining up the shot. Businesses seem to think in shorter-term intervals (like yesterday, I need this yesterday) without considering the path walked, the journey, and perhaps a change, constant change, in plans along the way. That's not to say that some businesses don't get lucky by blindly charging forward in knee-jerk ways as second movers or fast followers, or by strange (interpretations of) ways of "following" via a complete lack of understanding in regards to stuff like user experience or design or programming/software engineering...

We used to refer to this as "ubiquitous computing," where you would gain "peripheral awareness" of activity by and from your servant machines. Isn't it ironic that in AI and machine learning people are spending tons of money on understanding concepts of "empathy" over data aggregation or cleansing? Just some thoughts.

Thursday, May 13, 2010

Facebook and Privacy

(this is a blog post... waiting rooms)

http://www.allfacebook.com/2010/05/infographic-the-history-of-facebooks-default-privacy-settings/

This is very interesting and clearly shows default settings over time. I'd love to see a side-by-side, as well as callouts to policies related to the shifts in their default settings. Regardless, it does serve as a metaphor for the fluidity of the policies in place, as witnessed in recent court cases with the FCC and EFF, among other banal acronyms. Harkening back to the blog post: expecting privacy in a "social network" without actively learning how to manage it (i.e. spending time and calories) is like getting into a taxi in Chicago and expecting not to pay. There is an implicit understanding implied by the very nature of the website, clearly broadcast in "advertising" often featuring real-time "social graph" threads (posts, photos). What is troubling to me is the belief that regulation is the solution; that our government or someone else can make some very personal and important choices for us when we ourselves have no idea what choice we would make if the situation arose (because we have not experienced it yet). 

Defaults (i.e. just in case someone doesn't take the time to review policy, preferences, settings, etc...):

My wallet is private. What I spend my money on is not public knowledge, for obvious security reasons. Besides, banks hate fraud and scams and spam (if they are legitimate). When that stuff happens to my money, I can sleep safely (sort of) knowing that my bank wants that info secure as much as I do. Mint.com was able to "open up" their platform in ways extremely useful to their core offering without risking even the perception of risk. Why is it so hard to do this on eCommerce sites? How can eCommerce providers reassure their customers that the information collected will not put someone at risk of theft or harm, but will enhance their experience through gained understanding? (Amazon claims this, but stuff so far outside the realm of what I am interested in keeps getting in the way of what I am interested in, so I fail to see the logic working.) While my wishlist may reflect what I like and perhaps am able to spend money on, what I own and have purchased from them is not public knowledge unless I "opt in" to identify and "rate"...

I partake in "social networks" because I want to connect with people (friends or otherwise). Whatever my intentions, it has never been anything other than clear to me that what I do will be shared with a "network" of people. When the network was small and limited to those "inside" (logged into) the platform, I feared little about violations to my privacy. It seems that as the network opened up and the whole world could scrutinize my "data-shadow" I began to worry that, say, some ill-intentioned organization or individual will recontextualize and repurpose my data for evil or harmful means. This has never happened to me or any of the people I know. Sure, there has been "drama" between a friend or former lover or family member, some spam, some spam from me to others I had to apologize for...

Most of the time when I experience harm from being active on a social site it is when I do something that breaks a collective "norm" of behavior. If I post something inappropriate, if I say something shocking, I get a response, negative or positive. Someone I know posted something to the effect of "why would god do such horrible things to a child if he has so much power?" and I was alarmed and checked in because I was concerned. One time I posted some comments about an agency I did work for and later regretted the rant and took it down. All of this stuff is so new (YouTube, for example, turned FIVE YEARS OLD yesterday). And when things are new they are "disruptive".

Ultimately and unfortunately, the harms that people experience through violations of their privacy will result in remediation to address and assess risk. We will come up with new ways of monitoring and managing information rapidly in the coming years, due to increased connectivity, higher "bandwidth," and better devices and infrastructures... But since humans are using all of this, there are social, cultural (and emotional) considerations and frameworks in place that could help in the development of systems and processes that ensure safety online, even on social networks.

It is not the website, it is the PERSON who is responsible for how s/he/they use the website. When we click those EULAs we agree to this. No social networking site wants people to live in terror or fear when they use their services. If someone gets into a car wreck, the car is seldom to blame (except for Toyota...). In other words, when someone uses information they should not have access to in the first place to cause harm and harm is caused, there is usually a consequence to the action. If the harm is widespread and severe enough, there is usually a policy-level reaction. Maybe I'm naive but I don't know anyone who would maliciously "phreak" someone on a social network and do harm to someone else. Luckily I've not been a victim; nor have I heard about any.

That's not to say sites like Facebook shouldn't be a little more empathetic to the lives of people, more consoling in their responses to questions about their policies. No matter the "channel" in communications, there are always structures, vernaculars, and syntax. Some are less obvious than others. In addition to various levels of channel noise, there are understandings about how to behave or act. Otherwise, there would be no continuity, nothing to engage with. I refer to stuff that is "private" unless asked about as an example here, like who you may be dating and the status of your relationship. Again, it's hard to blame Facebook, in my opinion, when the "user" has the ability not to fill those fields out. I don't recall, having used Facebook for a long time now, those fields ever being "required."

In the end, eBay comes to mind the most when it comes to "liability" and "policy" on a "social network" where the risks of harm are many, due to that leap of faith humanity must take in any marketplace transaction. "Buyer Beware" was a byline mantra when the site took off. From day one there were reputation management tools that allowed people to flag and file complaints and provide eBay with invaluable feedback, letting it manage changes to its platform before widespread disaster or harm struck.

There are all these models, outside of their intended use, that we can draw upon to render "defaults" for how privacy is managed in an increasingly "connected" age. Behavior will be the ultimate judge of how privacy will shape itself in the coming weeks, days, decades. People won't participate in social networks that deprive them of their expected right to security and safety of self and of their "data-trails." Those who throw their hands in the air and claim naivete, when ignorance is the more appropriate word, should reconsider why they are participating in a social network or providing information that, no matter what, is at risk of being used in a malicious or harmful manner due to the impossibility of completely securing a "channel" through which information is transmitted.

The coming mantra for 2010-2012 or so will be "User Beware". Not because people or companies are bad but because no one has an answer right now, the stuff is new, we are still shaping it all. Social networks in this sense could be used to share information and awareness about privacy and policy and ways to manage it via the "users" themselves. Which is something we're already seeing.

Monday, May 3, 2010

Thoughts about "The Data Driven Lifesyle" article in the New York Times Weekend Magazine May 2, 2010

From the Author:
"People are not assembly lines. We cannot be tuned to a known standard, because a universal standard for human experience does not exist."

ME: which is why User Experience professionals tend to get frustrated (and designers, but that is an older and much richer story). Pat Whitney said it well when he spoke to the fact that "user research" and "data" based on behavior and automated sensor input have driven down costs and effort. Further, relying on older models that service older media channels (like television and radio advertising) will not provide the awareness or understanding it would take to create competitive experiences in the very near future (see: now).


Comments:
"The map is not the territory." — Alfred Korzybski

ME: Richard Saul Wurman speaks to this. Maps are political artifacts that speak to policy, while the lives of people, culture, etc. form the basis of communities. We're used to looking at the map, and the map is becoming less and less relevant with the rise of what we call "globalism".
"I think the loss of our human-ness is more the result of inadequate tools that make us adapt to them instead of the tools adapting to us.

The philosophical paradigm shift this represents is on a scale with the spread of written language, the development of agriculture, or the Enlightenment. Whether we like it or not, integrating the computer into the minutia of our daily lives means we are changing the game - externalizing the computing power of our own brains. The terror and the excitement people feel at this more and more obvious change is the most convincing evidence I can think of that it's real and it's accelerating."

ME: Jaron Lanier speaks to this in "You Are Not a Gadget". We tend to praise interfaces these days that would have been scoffed at 10 years ago, in favor of the flash and glitter of the glint. It still amazes me that the wiki is like the bomb these days, still referred to as radical, etc. It seems like we get lost in the end game and end result (or what we want it to be) rather than step back, as Pat Whitney said at AIGA's "The Death of Advertising", and abstract the real problems and human needs, intents, agendas... Further, bad interfaces that we are forced to rely on alter our workflow, our epistemology, our mental constructs, not to mention cause great inefficiency in workflow. The last point is a great one. Does the fact that it's happening and being openly discussed mean it's too late to stop it? Do we wish to stop it? Can we slow it down? No. Moore's Law: it applies to us as well as to machines.
"I've met people like this.

I usually find them very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very...

BORING."

ME: LMAO!!!! I wonder who wrote that... Anyway, Kurt Vonnegut was once asked if he spends most of his time with his writer buddies and communities. His reply was short and sweet: No. When asked why, he said something to the effect that it would be extremely boring and he would gain little in terms of the insight and awareness he relies on when he crafts stories for people who are not writers (like most of the world). When I attended graduate school, I always counted my fortunes when my life outside of the campus was not spent with other "human-centered designers." My mom always said, "no one is more right or knows more than a graduate student." Not only can they be boring, but offensively ignorant of the world outside of their own specialized realms.
"literacy was once a threat to humanity because of the way it "represented" the vagaries of human life. (I am reminded of the belief in some cultures that photographing the human form is kind of theft of the soul.) I am sure you are right that we will eventually find humanity in data, as we have in the written word.

However, it is not honest or responsible to confidently assert, for example, that early critics of the written word were simply wrong. History does not show that. History shows, rather, that the written word made its wielders more powerful. Don't forget: the written word has often been used to oppress. Think of Martin Luther and the early Protestantism--it was largely a response to the way the Church had used literacy as a tool of oppression. Our idea that literacy liberates is basically a function of the fact that it equalizes the weak with their oppressors, not that it is "inherently" liberating.

Self-tracking will undoubtedly be used to oppress. It will wend its way into mainstream culture, eventually becoming something that employers expect of you as a matter of course. The temporal "productivity gaps" which we use to daydream, think about politics or other non-work related ideas, or simply consolidate memories, will be targeted and eliminated. Also, it is almost inconceivable that self-tracking data will avoid eventually going public.
Only by grasping the subtle seriousness of this issue will we give ourselves a chance at actualizing a future that does not involve blanketing ourselves in highly granular control mechanisms.

It's probably inevitable but that doesn't make it good. Look at it this way: we will never know what the world would be like today if writing hadn't been invented, and conversely, there are an indefinite number of technologies that weren't invented hundreds of years ago, and we will never know what the world would be like today if they had been invented."

ME: Yeah, people's initial reaction to change, usually when it is inevitable and will disrupt current behavior, is to shoot it down. We in the Digital Innovation Group experience this daily, especially when we're right-on in our response to a problem or in our thinking about something. I know we've done a great job when the reaction to our work is "WTF!?" Even if it's wrong, the presentation serves as a "probe" to gain insight into what people think would be "right".

Further, what was missing from the comments and the article itself was any mention of how much of the input AND analysis of the "data" about us will be automated, so it won't require a "second life" of "reflection" to make sense of, make use of, or find value in the "personal data stream". They also missed the point about personal control and our tendency not to use stuff we can't control, especially when it has to do with our ability to deny or ignore various aspects of our inner lives.