Tuesday, September 28, 2010

A Response to Gestural Interfaces: A Step Backward in Usability

Gestural Interfaces: A Step Backward in Usability
by Donald Norman and Jakob Nielsen

I find the article I just read in Interactions Magazine offensive, if not an example of the ignorance that has held interactive multimedia back for at least 15 years:

in the rush to develop gestural (or "natural") interfaces, well-tested and understood standards of interaction design were being overthrown.

If either of the writers would get out of their one-way-mirrored focus group "labs" and actually do primary ethnographic observation, much less embrace the revolutions happening in technology and business in the real world (like "Design Thinking"), they would realize that the desktop metaphor is a dead horse we are forced to beat with a mouse and keyboard, and that these are antiquated examples of completely non-sustainable and non-scalable interaction "modalities". I can't tell you how much of my design career has been spent working on seemingly "radical" concepts that were shelved in favor of that all-too-familiar, personally subjective, knee-jerk reaction to anything outside the boxes of limited thinking and fear of the new or unprecedented. See the RAZR, the iPhone, the cell phone in general, or the automobile (a "faster horse" is what would have come out of "HCI" research methods like articulated survey responses and focus groups).

Throughout my career, I had the privilege of working on things that were deemed "too advanced" for the general public (because there were no precedents in the market) and killed before they saw the light of day. I quickly learned that the best reaction from the "stakeholders" when innovating was "WTF!?" because I knew it was something that took these people well outside their comfort zones. Those companies have gone into some seriously painful times as I type this, realizing (too late) that they should have taken some chances in the market and listened to the people with their ears to the ground, who live, breathe and eat design thinking on a 24/7 basis; to say nothing of the "end users" who would have to incorporate these technologies, services, and products into their daily routines.

But the place for such experimentation is in the lab. After all, most new ideas fail, and the more radically they depart from previous best practices, the more likely they are to fail.

This "HCI" stuff Norman/Nielsen cite as gospel is a true example of analytical thinking, data-based engineering, testing that quantifies then qualifies ignorance and limited thinking done in the "lab" as opposed to contextually in the field through "validation." Again, we are not in the age of "technological evolution" but of "technological revolution". Revolutions are bloody and leave in their wake the obsolete thinking of "leaders" who hold humanity back in favor of their personal need for predictability and structure. Again, having worked from many perspectives in the design industry, from products to services to education, I cringe when someone comes into a meeting where innovation is supposed to take place citing some Nielsen/Norman study about how this "button" should be "here" because "x% of users"... Discussions killed in this way ruin human potential. The "lab" is for "rats". The "lab" should be our world of experience. Hence, life is the lab. Humans are not rats when it comes to how we live and interact with each other and the world around us.

Most progress is made through small and sustained incremental steps.

Since when has any "game changing" innovation been made through "sustained and incremental steps"? Inventions? The iPad? I guess you could say they were incremental in the sense that they have been held back since long before the Xerox PARC days by people who were too scared to take a chance, to fail. Hence the design thinking tack: fail often and fail early. And learn. Or keep it all in the "lab" and release tiny portions of brilliance in favor of maintaining some safe growth position in the books and charts. Meanwhile, short-stick the user, the customer, the human being, and ruin growth potential for your organization (differentiation, advantage, unique or core selling points, marketing 101, competition, value to the humans who honor you with their consumption and use of your production...).

The truth is that we now have more evidence from watching these "radical new" products finally come to market after being locked up in the "lab" for far too long (I mean, seriously! In 1997 ubiquity was around the corner and we're still not there yet). I can see the safe thinking they employ and profess being useful in high-liability contexts like healthcare or voting, where risk to a human is high. But social networking? Gaming? Entertainment? Shopping? Anyone who has lived in Asia or Southeast Asia, or travelled to Europe, has seen the future (or the now) that America seems to have ignored for decades.

Why is 3D movie making the "cool thing" again? Why do Hollywood movies seem to be bland, to be safe, to suck? Why do they remake remakes and churn out artistically devoid fodder? Um... Let me take a guess: they're based on demographically targeted planning algorithms as opposed to real thinking about empathic connection with real human beings who have emotions and feel through primary experience. As Roger Martin put it in "The Design of Business: Why Design Thinking is the Next Competitive Advantage": "It's like driving forward while looking in your rear view mirror." It's not wrong to look (glance from time to time) in the rear view mirror—one benefits from a 360º view of the situation while driving. But it is wrong to look solely in the rear-view mirror while driving forward, ignoring the left and right, up and down, for example. And some cars don't have rear view mirrors anymore (like the "image map" quip they inserted to sound like industry old hats). Some cars can park themselves now. Some can even drive themselves now. Don't those offerings and behaviors make the existing principles and standards completely obsolete?

These "fundamental principles of interaction design" are pitfalls 9 times out of 10 (I've studied this by living through it). Ignore them or question them religiously and think about context over prescription. More antiquated thinking:

Discoverability: All operations can be discovered by systematic exploration of menus.

Scalability: The operation should work on all screen sizes, small and large.

Have they read "Mobile Web Design" by Cameron Moll? Have they studied the "experts" in other fields who think about the role context plays in interaction, who study humans as humans and not "nodes" (see Elizabeth Churchill's article "The (Anti)Social Net" in the same publication)? "Menus"? One size fits all? Hence my problem with "HCI" as a relevant approach in this age. It has a place, don't get me wrong (i.e. Engineering, backend, technology; not primary research and not innovation)...

These people remind me of some of the organizations I have worked with and for in the past who laughed hysterically at some ideas or predictions of the future I live with now: like a phone that feels more like playing a game than a tool, like gestures and mind control over pointing devices and some metaphor some nerd applied to something so infinitely free no one can define it: human interaction and rapport. 

Stop being safe. Stop listening to these "statistics" and "fundamental principles" and consider the sources, intents and agendas from which they come. They are based on visions of objects much closer than they appear in the rear view mirror, based on older ways of looking at the world that don't really apply anymore. They are like the slow drivers in the left lane. You want to honk at them and wonder how they are still allowed to drive on your road. Then you pass them and realize they are simply oblivious and/or old, drunk, dumb, incompetent, angry... And you forgive them, pass them and breathe a sigh of relief that you are no longer at their mercy in terms of time abuse. Explore, inquire, absorb, apply and be human. We're imperfect, and standards seldom apply when consciousness is involved.

Friday, September 17, 2010

Hail Halo for Men Chicago (Social Networking Best in Class)

I'll start with a shameless plug: my friend Deanna is the best stylist in the city. You can see my review on Yelp. I wouldn't normally have taken the time to do this if she weren't my friend, and if Halo for Men didn't openly promote their rewards points system in ways that I can understand. If I "check in", if I post a review on Yelp, if I sign up to be a part of their Facebook page, if I tweet about my experience, I get points. When I rack up 450 points, I get a free haircut, a free hand wax (which I decline, yuck!) and a free scalp massage. For posting a review on Yelp, for example, I got 150 points. Sign up for their Facebook page and you get another 150 points. That's 300 right there! 450 points is worth $50 or more of personal beauty care, and for them it is worth many referrals and plenty of awareness across channels (can you say "free advertising and PR" any louder? Can you say "return and repeat and adopted customer" any louder?).

When you visit their salons in person or walk by them, adjacent to their logo and signage are the Facebook and Twitter logos, Yelp and other social streams through which to find, get information, and participate in your own "brand butlering". Before they jumped on the "bandwagon", they had a website that answered to the real needs of someone interested in getting a haircut via a "book online" feature that would offer opt-ins for notifications and calendar synchronization after a one-time registration (along with the option to do it as a "guest", complete with SMS and email notifications, as well as a way to have them actually call you to remind you beforehand). The receptionist is actively involved in "triggering" and "informing" their customers to participate in social networks and very clearly lets a customer know that participation produces award points towards free stuff or discounts. The incentives are endless - refer friends and get points, share your stuff with Halo for Men and get points... Not only can you go to their website to get information but you may see them syndicated in other places while doing other things in the periphery. 

I think I may be getting points for posting this blog. If I don't, I have a strange feeling I could simply mention it and get points. Or at least a mention and a link somewhere, which provides me with as much "social capital" as them because I look like a hip and stylish metrosexual who patronizes "hot" establishments in the name of great style. So if you go to Halo for Men (men only, sorry ladies - this is chock full of pool tables, beer, sports, and video games, comfortable leather recliners, very friendly and stylish hostesses, the ability to have your eyebrows and nose hairs trimmed...) please mention Mike sent you. For your mentioning of my name, I get points. For showing up for your first appointment, you'll get points. Virtually anything you "give" them gets you points, including giving them your birthday. Everyone gets points and everyone is happy in the end. 

The only place I don't see awareness of these social channels and incentives is on their website itself. There is a "press" section, and I like the way they show the sources as opposed to a dense table of article threads or links as an entry point, but there is no mention or linking to their Yelp reviews, Facebook or Twitter updates, FourSquare or Groupon or any of the other places they have strategically partnered with. It's hard to account for everything when the ecosystem is so diverse and extensive. However, missing the most simple of inclusions (like their own site) is something to learn from. 

Some people may say that it is somewhat unethical to "trigger" a review when clearly the motivation is payment (points) but, other than getting the haircut experience of the century from my old friend Deanna, what incentives would I have to take my time to write a review, friend them on Facebook, follow them on Twitter? They never told me the review had to be positive. In fact, I was told to be honest because they need honest feedback. In this case my rewards are points and the warm fuzzy that comes from promoting a good friend toward her success in a service industry. Hair care is a referral business offering an experience good or service, and it is perfect for this kind of "social networking" as more and more people use reviews, ratings, and other websites when researching providers for a need. By leveraging this insight, Halo for Men responds to and facilitates active streams of "social activity". And I would venture to guess that asking the owner of the salons about the "success" of this effort would produce a response like "invaluable to the growth and retention of customers for our business."

What I learned:

If you offer "points" let me know clearly what their value is towards tangible products or services.

Provide me a clear understanding of how many points each action I could take will net me.

On the Halo for Men side, have a system or platform that will "know" when a customer posts a review or friends Halo for Men on Facebook. Right now they require me to send them a "reminder email" to let them know I posted a review on Yelp, for example; from this email, they can assign points to my account. A rough sketch of what such a points ledger could look like follows.
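Here's a minimal sketch (my own illustration, not Halo for Men's actual system; the function and variable names are hypothetical) of that kind of ledger, using only the values quoted above: 150 points for a Yelp review, 150 for joining the Facebook page, and a 450-point reward threshold.

# A minimal points-ledger sketch (illustrative only, not Halo for Men's real system).
# Point values are the ones quoted in this post; the reward threshold is 450.
ACTION_POINTS = {
    "yelp_review": 150,      # value stated by the salon
    "facebook_like": 150,    # value stated by the salon
    # check-ins, referrals, tweets, and birthdays also earn points, but I don't
    # know their exact values, so they're left out here
}
REWARD_THRESHOLD = 450       # free haircut, hand wax, and scalp massage

def record_action(ledger: dict, customer: str, action: str) -> int:
    """Credit a customer's balance for a tracked social action and return it."""
    ledger[customer] = ledger.get(customer, 0) + ACTION_POINTS.get(action, 0)
    return ledger[customer]

ledger = {}
record_action(ledger, "mike", "yelp_review")
balance = record_action(ledger, "mike", "facebook_like")
print(balance, "points;",
      "reward earned" if balance >= REWARD_THRESHOLD
      else f"{REWARD_THRESHOLD - balance} points to go")   # 300 points; 150 points to go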

Again, please book an appointment with Deanna K at the Wicker Park Chicago Halo for Men and tell her or the receptionist that Mike sent you. If you read this post, send them an email telling them this post made you want to check them out. I can't give you some of my points as a gift, but I can assure you the experience of getting your hair done by Deanna will be a worthwhile expenditure, not to mention a really fun time.



Saturday, August 7, 2010

Thoughts About "You Are Not a Gadget" by Jaron Lanier

It didn't take me very long to read this book as it was like listening to someone articulate many of the issues and concerns floating through my subconscious for the last 15 years. Jaron is referred to as the "Godfather of Virtual Reality" and a very loud voice for what he considers a true "fight for the human spirit" in an age of massive technological innovation and disruption.
Jaron will hate my paraphrasing, the slicing and dicing of extractions from his book, citing his treatises about fragmented knowledge and the promotion of shallow understanding when doing so. Though I agree with him in many respects, where I part ways is when I think about the perils of generalization. In other words, if someone (like myself) reads the entire text as it was meant to be consumed (linearly, sequentially) and then extracts the points of interest, I would not consider this act detrimental to the intent of the author, nor to the benefit of the reader in terms of knowledge transfer. If my way of digesting this turgid and massive text about highly abstract social-technological issues is to highlight, revisit and extract - which aids in memory and internalization - I fail to see how every case of chunked extraction promotes ADD. Where it may have a detrimental effect is when you, the reader of this blog post, bypass reading his book as linear text (Jenny speaks of the "codex"), take my interpretations at face value, and never read the source material. As Benjamin speaks to in "The Work of Art in the Age of Mechanical Reproduction", information is diluted the further it travels from its origins; the aura is somewhat lost.

Excerpts from the book will be blockquoted with my comments following:
The words in this book are written for people, not computers... You have to be somebody before you share yourself. ix

Kurzweil would argue against the above. By 2046, he says, we'll be one with computers and technology. Therefore, computers will be "somebody" by then, if not in limited ways now. Turing tests are another counter to this statement. How would the book know it was being read by a human vs. a computer? Books are one-way communication nodes.
Speech is the mirror of the soul; as a man speaks, so is he. — Publilius Syrus

Articulation is tricky. To be able to verbalize is a skill learned over time through several influences including culture, evolution, etc. Non-verbal communication seems to be the major breaking point in our current efforts to understand customers.  I would rephrase this to say "You are what you do, not what you say you do."

[web 2.0] promotes radical freedom on the surface of the web, but that freedom, ironically, is more for machines than people. p.3

This is Jaron's introduction to "lock-in", where computers define the design constraints as opposed to responding to them. Web 2.0, in favor of some "back end" capabilities as well as enhancements to hardware and channels to move information, has helped design and user experience take a large step back, favoring functionality over form. An entire design vernacular has been introduced and followed with flock-like mentality within the industry. Web 2.0 seems to have driven a wedge into an already widening gap between designer and developer by the mere fact that the markup, languages, and systems are evolving quickly enough to warrant specialization.

It is impossible to work with information technology without also engaging in social engineering. p.4

Communication influences. You can't erase it or take it back. Virtual or non-virtual, time keeps on ticking away. When any human "uses" something, s/he is being manipulated and exploited, guided through a taxonomy or construct. The internet has never not been social. It was created for human beings to share information via a syntax (markup) over a network scheme. Sharon Poggenpohl was right on ten years ago when she told me "designers of the future will not be stylists but will be designing frameworks and systems that leverage patterns." I see the transition from Web 2.0 introducing an enduring concept that has become something of a mantra of late: "content is king". The medium will not be the message (right now it is).

Different media designs stimulate different potentials in human nature. We shouldn't seek to make the pack mentality as efficient as possible. We should instead seek to inspire the phenomenon of individual intelligence. p.5

Individual intelligence comes from empathic connection and engagement with objects and others. I too am concerned with the "pack mentality" found throughout the world of Web 2.0. Sure, conformity makes things easier in terms of management and adoption. But I don't think we're far enough into the evolution of our systems to warrant abandoning trying new things. Still, tribes and relationships have been human nature across the span of time, with or without computers (again, the distinction between online and offline is blurring).
Being a person is not a pat formula, but a quest, a mystery, a leap of faith. p.5
Yes, being a person is trial and error and learning and growing. To what extent there is a will to be an individual... that's another story altogether.
We make up extensions to your being, like remote eyes and ears (webcams and mobile phones) and expanded memory (the world of details you can search for online). These become the structures by which you connect to the world and other people. These structures in turn can change how you conceive of yourself and the world. We tinker with your philosophy by direct manipulation of your cognitive experience, not indirectly, through argument. It takes only a tiny group of engineers to create technology that can shape the entire future of human experience with incredible speed. Therefore, crucial arguments about the human relationship with technology should take place between developers and users before such direct manipulations are designed. p.6
This makes me think about libraries and the difference between a physical repository of credited and credible information vs. complete and total trust of a hyper-anonymous ethersphere. What scares me about digital print is the opportunity for revisionism. Jaron goes deeper when stating the above, hinting at the influence the interfaces themselves have on human cognition and physical manipulation. The unintended consequences will become more apparent as the technology evolves at exponential rates of change, faster than anything anyone alive today can fathom (save for people like Jaron and Kurzweil, et al.).
There is a constant confusion between real and ideal computers. p.6
The brittle character of maturing computer programs can cause digital designs to get frozen into place by a process known as lock-in. This happens when many software programs are designed to work with an existing one. p.7

The unintended consequence of lock-in is felt acutely in large organizations with enormous "legacy" issues in their "back end" or "middleware" systems. The cost/benefit equation is used to justify a lack of upgrading at the expense of the customer or the business, in terms of limitations or poor experience offerings. The greatest risks are not to the systems themselves but to the cultures, the people, and the processes that rely on them. Over time, this lock-in can lead to lapses of vision, perspective, or even the ability to survive in the marketplace.
Software is worse than railroads because it must always adhere with absolute perfection to a boundlessly particular, arbitrary, tangled, intractable messiness. p.8
The process of lock-in is like a wave gradually washing over the rulebook of life, culling the ambiguities of flexible thoughts as more and more thought structures are solidified into effectively permanent reality.
The philosopher Karl Popper was correct when he claimed that science is a process that disqualifies thoughts as it proceeds... p.9

Makes me think of that quote hanging at my desk:

"To define is to kill. To suggest is to create." — Stephane Mallarmé

Validity-based vs. analytically based thinking is an age-old "friction" between "design" and "business" or "art" and "science", etc... Some processes like to use past data to project future trends, a process I've heard referred to as "driving forward while looking in the rear-view mirror". Art likes to try stuff out, fail early, refine, try again, and is resistant to the quantifiable modeling of analysis in the empirical or traditional sense. When change in the marketplace was not exponential, analytical (data-based) thinking had a glimmer of hope and relevance. Now, as we are seeing exponential change, validity-based thinking will become more the norm (in successful organizations). As some people in the industry have seen, we've transitioned from an economy of scale to an economy of choice.

Lock-in, however, removes design options based on what is easiest to program, what is politically feasible, what is fashionable, or what is created by chance.
If it's important to find the edge of mystery, to ponder the things that can't quite be defined—or rendered into a digital standard—then we will have to perpetually seek out entirely new ideas and objects, abandoning old ones like musical notes... I'll explore whether people are becoming like MIDI notes—overly defined, and restricted to what can be represented by a computer. p.10

The above is the central argument of his book. In 1999 I tried to present a concept I was working on about computer-generated music. Based on the research done by many AI people on player pianos (also a great book by Vonnegut), I predicted in the presentation that within the decade we would have access, at the consumer level, to software that would allow us not only to compose "MIDI" music but to truly incorporate the nuances of tremolo or sustain, tonality, color, tempo, even human error or deviances within a performance. I was laughed at and walked away with my tail between my legs, but felt redeemed the second Apple released GarageBand. Of course, that was almost 10 years later, and all the people in the room most likely forgot my weak presentation.

The human organism, meanwhile, is based on continuous sensory, cognitive, and motor processes that have to be synchronized in time. UNIX expresses too large a belief in discrete abstract symbols and not enough of a belief in temporal, continuous, non abstract reality... p.11
The ideas expressed by the file include the notion that human expression comes in severable chunks that can be organized as leaves on an abstract tree—and that the chunks have versions and need to be matched to compatible applications. p.13

"network effect." Every element in the system—every computer, every person, every bit—comes to depend on relentlessly detailed adherence to a common standard, a common point of exchange. p.15

The central mistake of recent digital culture is to chop up a network of individuals so finely that you end up with a mush. You then start to care about the abstraction of the network more than the real people who are networked, even though the network by itself is meaningless. Only the people were ever meaningful. p.17
humanism in computer science doesn't seem to correlate with any particular cultural style. p.17
the web 2.0 designs actively demand that people define themselves downward. p.19
Again, see the Mallarmé quote.
• Emphasizing the crowd means deemphasizing the individual humans in the design of society, and when you ask people not to be people, they revert to bad moblike behaviors. This leads not only to empowered trolls, but to a generally unfriendly and unconstructive online world.
• Finance was transformed by computing clouds. Success in finance became increasingly about manipulating the cloud at the expense of sound financial principles.
• There are proposals to transform the conduct of science along similar lines. Scientists would then understand less of what they do.
• Pop culture has entered into a nostalgic malaise. Online culture is dominated by trivial mashups of the culture that existed before the onset of mashups, and by fandom responding to the dwindling outposts of centralized mass media. It is a culture of reaction without action.
• Spirituality is committing suicide. Consciousness is attempting to will itself out of existence.
p.19-20
Someone who has been immersed in orthodoxy needs to experience a figure-ground reversal in order to gain perspective. p.23
The Rapture and the Singularity share one thing in common: they can never be verified by the living. p.26

I think Kurzweil was speaking of the Singularity in the sense of a merger, not one or the other. I'll have to check on that.

A computer isn't even there unless a person experiences it. p.26

Guns are real in a way that computers are not. p.27
The first tenet of this new culture [Silicon Valley, et al, sic] is that all of reality, including humans, is one big information system. p.28
...it promotes a new philosophy: that the computer is evolving into a life-form that can understand people better than people can understand themselves. p.28
I say that information doesn't deserve to be free... What if it's even less inanimate, a mere artifact of human thought? What if only humans are real, and information is not?... there is a technical use of the term "information" that refers to something entirely real. That is the kind of information that is related to entropy... Information is alienated experience. p.28

Experience is the only process that can de-alienate information. p.29

This is what Kurzweil refers to as the utility of data used as information. And then there's that super-dense black hole of a conversation about knowledge vs. information vs. data...

What the [Turing] test really shows us, however, even if it's not necessarily what Turing hoped it would say, is that machine intelligence can only be known in a relative sense, in the eyes of a human beholder. p.31
Chess and computers are both direct descendants of the violence that drives evolution in the natural world. p.33
If that is true, then the objective in chess is to make moves that promote more moves for yourself while limiting the options of the opponent. Which would lead someone to refer to the "violence" as more of a disruption or challenge rather than some harmful attack. Unless, of course, the chess game is real survival. But that is for the movies. 
In order for a computer to beat the human chess champion, two kinds of progress had to converge: an increase in raw hardware power and an improvement in the sophistication and clarity with which the decisions of chess play are represented in software. p.34

When people are told that a computer is intelligent, they become prone to changing themselves in order to make the computer appear to work better, instead of demanding that the computer be changed to become more useful. p.36

Consciousness is situated in time, because you can't experience a lack of time, and you can't experience the future. p.42
Isn't the only way to have a future or a now to have a past? In the case of amnesia... I forgot what I was going to write...
people are encouraged by the economics of free content, crowd dynamics, and lord aggregators to serve up fragments instead of considered whole expressions or arguments. p.47
Yeah. Because we (the consumers and workers, etc.) received more access to a wider and deeper range of content in multiplied contexts. What's the difference between a card catalogue at a library and a feed aggregator? Little in terms of the "codex" or format. There is an arrangement and structure, and a degree of access to information about objects or cards... Since when did we get whole expressions or arguments when engaging with the "media"? For people outside the world of "nerd", computers are largely entertainment centers.
The only hope for social networking sites from a business point of view is for a magic formula to appear in which some method of violating privacy and dignity becomes acceptable. p.55
The value of a tool is its usefulness in accomplishing a task. p.59
If we are to continue to focus the powers of digital technology on the project of making human affairs less personal and more collective, then we ought to consider how that project might interact with human nature. p.62

FUD—fear, uncertainty, doubt. p.67

Information systems need to have information in order to run, but information underrepresents reality. p.69

What computerized analysis of all the country's school tests has done to education is exactly what Facebook has done to friendships. In both cases, life is turned into a database. p.69

The places that work online always turn out to be the beloved projects of individuals, not the automated aggregations of the cloud. p.72

Yeah, but these "individuals" have relationships of opportunity and influence with other people, whether they are in a cloud or in a cubicle. This kind of innovation doesn't happen in a vacuum.

The deep design mystery of how to organize and present multiple threads of conversation on a screen remains as unsolved as ever. p.72

It's the people who make the forum, not the software. p.72
once you have the basics of a given technological leap in place, it's always important to step back and focus on the people for a while. p.72
People will focus on activities other than fighting and killing one another only so long as technologists continue to come up with ways to improve living standards for everyone at once. p.80

If money is flowing to advertising instead of musicians, journalists, and artists, then a society is more concerned with manipulation than truth or beauty. If content is worthless, then people will start to become empty-headed and contentless. p.83
Which usually leads to a backlash of "authentic" expression; some art historians would say this is a cyclical pattern in "post-capitalist" societies.

The limitations of organic human memory and calculation used to put a cap on the intricacies of self-delusion. p.96

There are so many layers of abstraction between the new kind of elite investor and actual events on the ground that the investor no longer has any concept of what is actually being done as a result of investments. p.97

Each layer of digital abstraction, no matter how well it is crafted, contributes some degree of error and obfuscation. No abstraction corresponds to reality perfectly. A lot of such layers become a system unto themselves, one that functions apart from the reality that is obscured far below. p.97

Locks are only amulets of inconvenience that remind us of a social contract we ultimately benefit from. p.107

Economics is about how to best mix a set of rules we cannot change with rules that we can change. p.112

The economy is a tool, and there's no reason it has to be as open and wild as the many open and wild things of our experience. But it also doesn't have to be as tied down as some might want. It should and could have an intermediate level of complexity. p.117

cybernetic totalism will ultimately be bad for spirituality, morality, and business. In my view, people have often respected bits too much, resulting in a creeping degradation of their own qualities as human beings. p.119

And if you look at the evolution of the technology closely, the "big ticket" technology items seem to be about expression, capture, or passive viewing of the human story (TVs, cameras, music, games, etc.). So again, Kurzweil may be onto something when he speaks of convergence... We're using VR and gesture and voice to augment the normally tactile activities in our lives so we can spend more time playing, no?

Ideal computers can be experienced when you write a small program. They seem to offer infinite possibilities and an extraordinary sense of freedom. Real computers are experienced when we deal with large programs. They can trap us in tangles of code and make us slaves to legacy. p.119

If each cultural expression is a brand-new tiny program, then they are all aligned on the same starting line. Each one is created using the same resources as every other one. p.120

That's one reason web 2.0 designs strongly favor flatness in cultural expression. p.120

Let's suppose that back in the 1980s I had said, "In a quarter century, when the digital revolution has made great progress and computer chips are millions of times faster than they are now, humanity will finally win the prize of being able to write a new encyclopedia and a new version of UNIX!" It would have sounded utterly pathetic. p.122
Welcome to my world. We've seen it all coming for a while; back in the 1950s there was the jetpack stuff and The Jetsons, etc. It's like we're bracing ourselves. Somewhere along the way we forgot to think about the social and emotional impacts of technological disruption, innovation, and change at such a rapid scale and pace.
The distinction between first-order expression and derivative expression is lost on true believers in the hive. First-order expression is when someone presents a whole, a work that integrates its own worldview and aesthetic. It is something genuinely new in the world. Second-order expression is made of fragmentary reactions to first order expression. p.122
Only people can make schlock, after all. A bird can't be schlocky when it sings, but a person can. p.123
I've seen computers make some SERIOUS schlock. I mean, SERIOUS. See Makers by Cory Doctorow.
The decentralized nature of architecture makes it almost impossible to track the nature of the information that is flowing through it. p.123
In more recent eras, ideologies related to privacy and anonymity joined a fascination with emerging systems similar to some conceptions of biological evolution to influence engineers to reinforce the opacity of the design of the internet. Each new layer of code has furthered the cause of deliberate obscurity. p.124

The appeal of deliberate obscurity is an interesting anthropological question... One is a desire to see the internet come alive as a metaorganism: many engineers hope for this eventually, and mystifying the workings of the net makes it easier to imagine it is happening. There is also a revolutionary fantasy: engineers sometimes pretend they are assailing a corrupt existing media order and demand both the covering of tracks and anonymity from all involved in order to enhance this fantasy... the result is that we must now measure the internet as if it were a part of nature, instead of from the inside, as if we were examining the books of a financial enterprise. p.124

Some of the youngest, brightest minds have been trapped in a 1970s intellectual framework because they are hypnotized into accepting old software designs as if they were facts of nature. p.126

pattern exhaustion, a phenomenon in which a culture runs out of variations of traditional designs in their pottery and becomes less creative. p.128

Spore addresses an ancient conundrum about causality and deities that was far less expressible before the advent of computers. It shows that digital simulation can explore ideas in the form of direct experiences, which was impossible with previous art forms. p.132

A HYPOTHESIS LINKS the anomaly in popular music to the characteristics of flat information networks that suppress local contexts in favor of global ones. p.133

A digital image of an oil painting is forever a representation not a real thing. p.133

The definition of a digital object is based on assumptions of what aspects of it will turn out to be important. It will be a flat, mute nothing if you ask something of it that exceeds expectations. p.134

Hip-hop is imprisoned within digital tools like the rest of us. But at least it bangs fiercely against the walls of its confinement. p.135

The hive ideology robs musicians and other creative people of the ability to influence the context within which their expressions are perceived, if they are to transition out of the old world of labels and music licensing. p.136

Every artist tries to foresee or even nudge the context in which expression is to be perceived so that the art will make sense. It's not necessarily a matter of overarching ego, or manipulative promotion, but a simple desire for meaning. p.137

Even if a video of a song is seen a million times, it becomes just one dot in a vast pointillist spew of similar songs when it is robbed of its motivating context. Numerical popularity doesn't correlate with intensity of connection in the cloud. p.137

If you grind any information structure up too finely, you can lose the connections of the parts to their local contexts as experienced by the humans who originated them, rendering the structure itself meaningless. p.138

There are two primary strands of cybernetic totalism. In one strand, the computing cloud is supposed to get smart to a superhuman degree on its own, and in the other, a crowd of people connected to the cloud through anonymous, fragmentary contact is supposed to be the superhuman entity that gets smart. p.139

Once organisms became encapsulated, they isolated themselves into distinct species, trading genes only with others of their kind. p.140

you'll generally find for most topics, the Wikipedia entry is the first URL returned by the search engines but not necessarily the best URL available. p.143
One of the negative aspects of Wikipedia is this: because of how its entries are created, the process can result in a softening of ambition or, more specifically, a substitution of ideology for achievement. p.143

The distinction between understanding and creed, between science and ethics, is subtle. p.151

computationalism. This term is usually used more narrowly to describe a philosophy of mind, but I'll extend it to include something like a culture... the world can be understood as a computational process, with people as subprocesses. p.153

My first priority must be to avoid reducing people to mere devices. The best way to do that is to believe that the gadgets I can provide are inert tools and are only useful because people have the magical ability to communicate meaning through them. p.154

The whole point of technology, though, is to change the human situation, so it is absurd for humans to aspire to be inconsequential. p.155
Logical positivism is the idea that a sentence or another fragment—something you can put in a computer file—means something in a freestanding way that doesn't require invoking the subjectivity of a human reader... "The meaning of a sentence is the instructions to verify it."... The new version of the idea is that if you have a lot of data you can make logical positivism work on a large-scale statistical basis. The thinking goes that within the cloud there will be no need for the numinous halves of traditional oppositions such as syntax/semantics, quantity/quality, content/context, and knowledge/wisdom. p.155
"realism." The idea is that humans, considered as information systems, weren't designed yesterday, and are not the abstract playthings of some higher being, such as a web 2.0 programmer in the sky or a cosmic spore player. Instead, I believe that humans are the result of billions of years of implicit, evolutionary study in the school of hard knocks. The cybernetic structure of a person has been refined by a very large, very long, and very deep encounter with physical reality... what can make bits have meaning is that their patterns have been hewn out of so many encounters with reality that they aren't really abstractable bits anymore, but are instead a non-abstract continuation of reality... Realism is based on specifics, but we don't yet know—and might never know—the specifics of personhood from a computational point of view. The best we can do right now is engage in the kind of storytelling that evolutionary biologists sometimes indulge in.   p.157
Fourier Transform. A Fourier transform detects how much action there is at particular "speeds" (frequencies) in a block of digital information. p.161

Gabor wavelet transform... This mathematical process identifies individual blips of action at particular frequencies in particular places, while the Fourier transform just tells you what frequencies are present overall. p.161
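To make that distinction concrete, here's a quick sketch of my own (not from the book; the signal, window size, and frequencies are all made up) using NumPy: a plain Fourier transform reports which frequencies are present in a whole signal, while a Gabor-style windowed transform localizes them in time.

import numpy as np

fs = 1000                                  # sample rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)            # one second of samples
# first half of the signal is a 50 Hz tone, second half a 200 Hz tone
signal = np.where(t < 0.5, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 200 * t))

# Fourier view: both tones show up, but with no sense of *when* they occur
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
print("strongest overall frequencies:", sorted(freqs[np.argsort(spectrum)[-2:]]))

# Gabor-like view: multiply short slices by a Gaussian window, then transform each
window = np.exp(-0.5 * ((np.arange(100) - 50) / 15.0) ** 2)
for start in (100, 800):                   # one slice from each half of the signal
    slice_spectrum = np.abs(np.fft.rfft(signal[start:start + 100] * window))
    slice_freqs = np.fft.rfftfreq(100, 1.0 / fs)
    print(f"dominant frequency near t={start / fs:.1f}s:",
          slice_freqs[np.argmax(slice_spectrum)])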

Odors are completely different, as is the brain's method of sensing them. p.162
The number of distinct odors is limited only by the number of olfactory receptors capable of interacting with them. p.163

There is no way to interpolate between two smell molecules... colors and sounds can be measured with rulers, but odors must be looked up in a dictionary. p.163

smelly chemicals... are tied to the many stages of rotting or ripening of organic materials. As it turns out, there are three major, distinct chemical paths of rotting, each of which appears to define a different stream of entries in the brain's dictionary of smells. p.164

A smell is a synecdoche: a part standing in for the whole. p.164

Olfaction, like language, is built up from entries in a catalog, not from infinitely morphable patterns... the grammar of language is primarily a way of fitting those dictionary words in a larger context. p.165

This is perhaps the most interesting takeaway from the book: the olfactory as a medium, as a sense, as a channel.

One of Darwin's most compelling evolutionary speculations was that music might have preceded language. He was intrigued by the fact that many species use song for sexual display and wondered if human vocalizations might have started out that way too. It might follow, then, that vocalizations could have become varied and complex only later, perhaps when song came to represent actions beyond mating and such basics of survival. p.167

The brain's cerebral cortex areas are specialized for particular sensory systems, such as vision. There are also overlapping regions between these parts—the cross-modal areas I mentioned earlier in connection with olfaction. Rama [V.S. Ramachandran] is interested in determining how the cross-modal areas of the brain may give rise to a core element of language and meaning: the metaphor. p.171

conflict that has been at the heart of information science since its inception: Can meaning be described compactly and precisely, or is it something that can emerge only in approximate form based on statistical associations between large numbers of components? p.173

when you deny the specialness of personhood, you elicit confused, inferior results from people. p.177

Separation anxiety is assuaged by constant connection. p.180

software development doesn't necessarily speed up in sync with improvements in hardware. It often instead slows down as computers get bigger because there are more opportunities for errors in bigger programs. Development becomes slower and more conservative when there is more at stake, and that's what is happening. p.181

Some of the greatest speculative investments in human history continue to converge on silly Silicon Valley schemes that seem to have been named by Dr. Seuss. On any given day, one might hear of tens or hundreds or millions of dollars flowing to a start-up company named Ublibudly or MeTickly. These are names I just made up, but they would make great venture capital bait if they existed. At these companies one finds rooms full of MIT PhD engineers not seeking cancer cures or sources of safe drinking water for the underdeveloped world but schemes to send little digital pictures of teddy bears and dragons between adult members of social networks. At the end of the road of the pursuit of technological sophistication appears to lie a playhouse in which humankind regresses to nursery school. p.182

Yes, I agree whole-heartedly that "social networking" is in its infancy—especially when you approach it from a purely technological viewpoint, as we tend to do in every industry that touches a machine or uses one as a mediation device. If we ditch the computer when thinking about these interactions, we'll find there are several disciplines, both professional and academic, that have been dealing with many of the issues inherent in social networking on the internet.

For more information about Jaron Lanier, see his website: http://www.jaronlanier.com/

Thursday, July 22, 2010

Priority Mapping

The Digital Innovation Group has been working on evolving a concept introduced by a former colleague of mine named Joseph Dombroski, a User Experience Architect in the Chicago area. A priority map traditionally "road maps" various efforts, contingencies and influences, and the hierarchy of importance inherent within those efforts. It is traditionally used in engineering and software design, and in some business strategy from a tactical, mostly logistical perspective.

Practicing user experience for many years now, a thread I've found common to much of my work is looking at "parallel industry" examples that speak to a design problem, issue, or challenge in ways that answer questions or suggest possible directions for innovating in another industry. An example of this would be priority mapping as applied to a design and user experience development and production process.

One of the challenges when designing in multi-disciplinary and collaborative teams is dealing with the agendas and incentives that drive the various "stakeholders" and "players" working towards an "end goal." No matter what the "end goal" is, I've been on many projects where line of sight to the end goal(s) is obscured by agendas inserted as the "loudest voice in the room" or by derailments driven by personal anxieties. What becomes more and more apparent during these moments of distraction, channel noise, and argument is that there needs to be a framework in place to guide and corral the discussions and prioritize efforts from the perspective of the "end goal" (and the business and user needs), focusing all work and conversation around the things that directly address the problems and needs at hand.

Enter priority mapping for user experience. Priority mapping for UX takes into consideration everything from high-level strategy to the relative proportion of objects, content, and functionality, in addition to "progressive disclosure", by answering to "changing modes" within a customer's intent or the system's responses to that intent. Priority mapping for UX does not specify layout or design language. Priority mapping starts with the human need and expectation of value and backs out to gain a holistic view of an experience captured within modes and states (a "page", for example). Here's the process as it's evolved thus far:

1. Through collaboration with all parties involved in the ideation and production of a final deliverable or solution, facilitate alignment with the "end use" goals throughout the team.

2. Based on these goals, do a content audit to see where existing assets can be leveraged and where new ones may need to be created. 

3. A user story or scenario helps (but be careful not to stereotype or assume) to provide a structure to demonstrate a "path" through an experience. 

4. Coalescing 1-3, "map" out the "high level" content "blocks" within a "mode" (window, browser...). Once the blocks have been identified, providing high level themes for an experience offering, it's time to work collaboratively to identify the "priority" and "proportion" of content, blocks or functionality relative to other content blocks. 

5. Using the finite space of a box (4:3 or 16:9 ratio), come up with percentages of importance or "primary focus" vs. "peripheral" or "secondary" focus. These percentages can drive the creation of the priority map in the sense that they are represented within the box by the amount of space each block takes up, as sketched below. See SmartMoney's "Map of the Market" for an example of how relative proportion can be used to show volume and weight.
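To make step 5 concrete, here's a rough sketch of my own (not a formal tool; the block names and weights are hypothetical) that turns agreed-upon percentages of importance into proportional space inside a fixed 16:9 "mode":

# Translate importance weights into proportional space inside a fixed canvas.
# The weights below are made-up examples, not real project data.
CANVAS_W, CANVAS_H = 1600, 900            # a 16:9 box, in arbitrary units

block_weights = {                         # hypothetical content blocks
    "primary focus": 0.55,
    "secondary focus": 0.30,
    "peripheral": 0.15,
}

assert abs(sum(block_weights.values()) - 1.0) < 1e-9   # weights must cover the box

for name, weight in block_weights.items():
    # simplest possible layout rule: slice the canvas into vertical bands whose
    # widths are proportional to importance (a treemap, as in SmartMoney's
    # "Map of the Market", would pack the same proportions more cleverly)
    width = weight * CANVAS_W
    print(f"{name}: {weight:.0%} of the map, about {width:.0f} x {CANVAS_H} units")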

The priority map, once "finished", can evolve based on discussions and iterations. It can be used as a way to focus efforts and thinking on the end goals and to actively work towards de-scoping channel noise or irrelevancy. It is also a great resource for conveying a solid direction and strategy that answers to the understanding needs of non-UX influences within the production process.

As this is a new process and still evolving, I can show no examples from Sears, as the work on the table utilizing this method is proprietary and confidential to Sears internal employees only. If you work at Sears and are interested in priority mapping, please reach out to me so I can walk and talk you through some examples and show the process.

Monday, July 5, 2010

iPad reflections on use (first three months) by a UX grouch

This post initially began its life in the Atomic Web browser, in a tab holding Blogger's posting UI.
I was able to input the title (though I discovered my breath was a command to hide the keypad) but was unable to begin writing these last two sentences due to some incompatibility between my more-like-a-"real"-browser and the "open source API schema". Thankfully I was able to switch to Evernote to write this post. I'll copy and paste it into the input box and format it using my laptop, which is sometimes a desktop. Some parts of my post may happen via SMS or cell. These smaller mobile devices feel so sluggish as they catch up to the capabilities I tend to take for granted in my larger, clunkier devices. Five years ago or so the iPhone came out. Touch screens on mobile before it depended on stylus input, and larger-scale touch screens were tap-and-point affairs filled with puffy buttons (well suited for vending, service, and terminal applications).
This is one of the places where the iPad feels less like a "robust" machine and more like a toy version of what's to come. Though I like the thinking around multiple orientations and orientation locking (something I wish the iPhone had), I seem to prefer landscape mode over the others because it gives more space for more stuff, or breathing room for focus (I tend to use the device on the toilet or lying in bed).
I still wish I could fluidly multitask like on a laptop or desktop, and I feel trapped within the shuffle of transitions that seem and feel redundant when I wait for feeds or programs to load (sometimes not the fault of the device). My states, however, are saved, like when I spazzed and accidentally hit the recessed hardware home button and closed Evernote without hitting save. But like most novel things, what is initially deemed "cool" in an interface can quickly become a repetitive nuisance, hindering or breaking the flow of using a tool or application.
I can't deny that it serves as a great photo frame, music player, and portable note taker, as well as a sharing device in a "show you" kind of way. I sense slide shows coming back as it gets easier to wirelessly transfer images instantaneously to several places at once, like Flickr, where I can preview and witness a shoot unfold.
Physically, my breath seems to tell the keyboard to close in certain positions while typing. Again I think of the next manifestations of keyboard input, like simulated 3D tactile response, an inflation of the keys, so I don't have to scrunch or develop bad typing and spelling habits (it's much like a conversation on a cell phone: you're shown a possibility of how what you said could be interpreted, and sometimes you have to repeat yourself several times before the other person understands, sometimes because of a crash or disconnection and sometimes because of distortion of your intended or expected input as represented by the device, be it a voice channel or a text input channel).
When I switched to the Safari web browser native to the iPad OS I encountered the same input problems and again switched back to Evernote. At this point it may be fair to outline the pros and cons experienced thus far in my use of my iPad.
Screen brightness, size, and resolution compared to the other "mobile" or "micro" devices I use and own (this includes a "netbook" loaded with both Windows XP and Ubuntu Linux, an iPhone 3GS, and a 13" MacBook Pro with a 7200 RPM custom hard drive and maxed-out RAM, among other gadgets) are impressive. What I can admit is that computers and components are in fact shrinking and becoming more mobile in their use. In my early days of design and computers, a desktop was a necessity if one wanted to produce audio or video or high-resolution graphics. Moore's law came faster (and slower - myths here) than many of us professional insiders will admit. The iPad isn't even a year old. All of these "game changing" devices are in their infancy.
Hardware mapping to function: it seems like Apple has institutionalized the "home" metaphor by providing a hardware key. It's like the early versions and applications of the Esc key as the universal panic button. If I'm disoriented or want to switch to another application, I hit the home key. This landing and routing scheme supports single-tasking by requiring a user to pass through the gate of home before moving on to a sub-level within the architecture. The screen orientation lock button as hardware, and the orientation scheme in general, are disorienting. There is a conflict between the lock toggle and the volume controls. Despite owning the device and using it daily for several months, I still require trial and error to discern up from down. Then there is the lock button. While I understand its necessity on the iPhone (to decrease butt dialing), I fail to see the value here, especially when cases for the iPad are considered in the mix. A case seems essential to the ownership of an iPad, if only for protection of a relatively frivolous and expensive gadget in the ecosystem of devices I utilize in my daily life. In my experience the case facilitates easier use by providing an inclined surface for typing on the keyboard, or a stand for when my iPad is in what I refer to (among other names) as "passive viewing mode". What the lock breaks is the principle of on/off expectation. There is a mapping to the unlock in software form, yet locking itself is initiated via hardware; there is no software-based lock equivalent. Same goes for the screen lock. And volume. Why make these hardware-based functions when everything else on the device seems to be software-based?

Keyboard: here's where I get overly frustrated. No matter the position I sit in, no matter how hard I concentrate, no matter how much I practice, my rate of error using a touch-screen keyboard is astoundingly high (inefficient). For a while the flashiness of the UI was able to salve my disdain, and at first I welcomed auto-correct. What I don't get is that Apple took something that is a universally understood design vernacular and "innovated" it in ways that rely more on the acceptance of a learning curve and the limitations of the interaction than on using the input mode to foster more efficient input into the system — like switching "states" between symbolic/numeric input (see screen shot), or hiding and showing the keyboard (again, discovery initiated with a learning curve). Last, I haven't figured out how "shift" works...

Oh! That's what the symbolic/numeric toggle button on the keyboard is for. It makes me wonder if Apple is trying to change the game not only with platforms and gadgets but with how we cognitively map our physical world onto a virtual one. I assume they own the rights or a patent on this QWERTY keyboard, as well as on the auto-suggest I have a love/hate relationship with.

Though I can see the value of ownership in locking out (and locking in) competition and fostering advocacy and adoption, I can't forget Sony's strategy, among others in an industry deemed overly focused on proprietary nuances that made "open" systems closed to everyone not subscribing to a brand. I can't help but think this is a very carefully planned and executed strategy on Apple's part. Not only are they innovative in terms of platforms, systems, and hardware/software, but they lead the pack in terms of design thinking and business strategy.

That said, how can a closed system be a long-term strategy when we are barreling toward a more "open" one? In the short term Apple profits from locking out other players, pitching their humanness to the public and positioning the company as the misunderstood creative underdog, counter to the business-machine lands of Microsoft and Sun. People who wholeheartedly drink the Jobs punch ignore the fact that none of Apple's work, its position in the market, or its focus on being different would be possible without competition. Yet, like most businesses trying to eke out market share, the goal seems to be complete control, monopoly. Take their relationship with AT&T over any other carrier: I've never been able to stomach why a device should dictate the service I use to make it a communications channel. One of the best ways I could see someone being "different" in this space is by providing customers with options and choices, not to mention ubiquitously open systems of syndication, access, consumption, and management (metadata and content/messaging).

What I am trying to say is that Apple isn't as "user friendly" once the surface is peeled back and the motives of the corporation become painfully obvious. Further, I would say their lock-in, and their forcing of the user to adapt to shortcomings in thinking or user testing before releasing to the market, actually stifles innovation and human evolution. But I represent only .00000000003% of the people who consume these products, given my education, interests, history of use, and background in HCI, human-centered design, and product interface design. In other words, I have the vernacular to articulate where, when, and how interfaces fail, while nearly everyone else has no clue and lives in a world where technology and gadgets take up far less time and space than they do in mine.
What Apple seems to do very well, time and time again, is be first to market with technologies that other companies fail to realize at the same pace or with the same prowess in delivery and value proposition. Perhaps that is where Apple is truly a leader: they are organized to produce, in a timely and efficient manner, products and services that appeal to the average "Jane".
Much of what I have written so far is about expectations, both personal and those set by the brand, the device, and the baggage I carry from previous experiences. Yes, I am hard on design and user interfaces. That's because I see the risks involved with what I refer to as the "captive audience" of a GUI: the periphery disappears and focus on a boxed-in context becomes intense. At that point the device has undivided attention and thus control over both physical and cognitive processes. It would not be impossible to design user interfaces that actively alter some very foundational physical and cognitive processes within us all, including what we say and how we say it (think about today's truncation and abbreviations, and the countless reports about the Western decline in focus and depth, or the herd adoption of shortcuts, workarounds, and system failures that erode vital abilities). As Neal Stephenson and Jaron Lanier have said in many ways in many forums: BEWARE. Be very conscious when using new technologies, and note when you are forced to change behavior to adapt to an offering hidden behind messaging like "it's all about you", because it never is when products, services, and agendas are involved in the value proposition equation. At the end of the day Apple is a publicly traded company and therefore beholden to shareholder buy-in, like all the other businesses out there.
Back to the iPad... These gripes and critiques aside, I do find much pleasure in using my iPad in several areas it wasn't initially intended for. There has been much debate about the death of print, and I am one of those old people stuck in a generation of publishing, of citation of sources, and of the unalterable nature of the printed word. The app I seem to use the most is Kindle. That is ironic, because it integrates with the Amazon product platform and has facilitated much spending by me outside of Apple's App Store ecosystem. The conduit to this was my lists on an existing platform focused on, and somewhat good at, a certain kind of product that warrants much of what we deem valuable on the net today and going forward (ubiquitous access to information, experts, and social communities of use...).

I am very much into the tactile interaction of a "multi-touch" screen. Having designed touch screen interfaces in my past and hated the poke-to-input model, I love seeing ideas from the early days of Flash (called Spark) in terms of responsive UI that engages users more subtly, less literally or metaphorically, and more "intuitively" through true interaction and communication loops. However, looking through the Human Interface Guidelines document, I realize that within Apple's closed development structure there is little room for variation from, or defiance of, the standard patterns put forth without a great deal of expertise, effort, and an extreme amount of patience on the developer's part. With the rise of HTML5 I hope we'll see a mass exodus from the App Store and a flocking toward a more open web that truly captures the advantages of the many channels and devices we use every day.

Some promising applications, like AirDisplay and Mobile Mouse, have been slow to deliver. The lag with screen sharing is prohibitive; lag in response to input is death for an interface. Still, it offers hope for the use case I'm waiting for: "token devices" that fluidly share with one another, allowing me to unmoor or shed weight when needed while maintaining a home base, or several home bases.

* pieces of the iPhone "GUI" were developed years before the iPhone appeared.

The default keyboard.

From numeric mode, I go to symbol mode. If this is a multitouch device, why not leverage the existing functionality of a multitouch keyboard like I'm used to on a "real" computer?


While in numeric mode, Apple remaps my punctuation keys, which is again disorienting and causes plenty of toggle-based input mistakes. Where is my standard shift key?
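To make the toggle complaint concrete, here is a minimal sketch of a mode-switched keyboard as a small state machine. This is my own illustration, not Apple's implementation; the mode names, key positions, and layouts are hypothetical stand-ins.

// Minimal sketch (not Apple's implementation) of a mode-switched keyboard.
// Each "mode" remaps the same physical key positions to different symbols,
// which is exactly what makes toggle-based typing error-prone: the key you
// reach for no longer means what muscle memory expects.

type KeyboardMode = "alpha" | "numeric" | "symbol";

// Hypothetical partial layouts keyed by physical position.
const layouts: Record<KeyboardMode, Record<string, string>> = {
  alpha:   { pos1: "q", pos2: "w", pos3: "e", punct: "." },
  numeric: { pos1: "1", pos2: "2", pos3: "3", punct: "-" }, // punctuation remapped
  symbol:  { pos1: "[", pos2: "]", pos3: "{", punct: "%" }, // remapped again
};

class TouchKeyboard {
  private mode: KeyboardMode = "alpha";

  // The "123" / "#+=" style toggle: state changes silently between taps.
  toggle(): void {
    this.mode =
      this.mode === "alpha" ? "numeric" :
      this.mode === "numeric" ? "symbol" : "alpha";
  }

  // The character produced depends on hidden state, not on the position tapped.
  tap(position: string): string {
    return layouts[this.mode][position] ?? "";
  }
}

// The same physical tap yields different output depending on prior toggles:
const kb = new TouchKeyboard();
console.log(kb.tap("punct")); // "."
kb.toggle();
console.log(kb.tap("punct")); // "-"  (the remapping complained about above)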

Wednesday, June 2, 2010

A Response: Natural User Interfaces Are Not Natural

"I believe we will look back on 2010 as the year we expanded beyond the mouse and keyboard and started incorporating more natural forms of interaction such as touch, speech, gestures, handwriting, and vision--what computer scientists call the "NUI" or natural user interface."
— Steve Ballmer, CEO Microsoft

That would be an awesome quote were it not for the FACT that all of this NUI stuff was around at Xerox PARC over 20 years ago (as Norman mentions). What is astounding is how slowly culture, both inside and outside of business, has evolved while technology steadily accelerates (Moore's Law now understates the pace, according to people in the know). Why is it taking so long to make GUIs (NUIs) that match the technological progression? My theory is that this stuff is "new" in the sense that it takes time to incorporate it all into the contexts of our lives, and that the pace of disruptive introductions to the market has become overwhelming even for the most eager adopters (myself included). Since we're in an economy of choice, as opposed to pure scale and demand fulfillment, even innovation seems to be a product category calling for discerning consumption.

Don writes: 
"As usual, the rhetoric is ahead of reality... Fundamental principles of knowledge of results, feedback, and a good conceptual model still rule. The strength of the graphical user interface (GUI) has little to do with its use of graphics: it has to do with the ease of remembering actions, both in what actions are possible and how to invoke them... The important design rule of a GUI is visibility: through the menus, all possible actions can be made visible and, therefore, easily discoverable."
Menus and the vernaculars he and many people rely on (AKA "patterns" and/or "standards") are direct responses to the constraints inherent in the systems they service (metaphors, proprietary hardware...). The "desktop" metaphor has been ripped to shreds and shown to be a culturally biased manifestation of a group of highly insular engineers, not to mention detrimental to the development of operating systems that are truly cross-cultural and/or flexible enough to be usable in many contexts. This metaphor has hurt the industry more than it has helped in terms of innovation (see "In the Beginning... Was the Command Line," an essay by Neal Stephenson). Standards are good... for programming and system-level platform architecture... for sanity... for stability. But standards are often static and mistaken for gospel, as opposed to dynamic frameworks driven by the evolution of the marketplace and the demands therein, not to mention context, that human reality. When Norman makes statements like "Systems that avoid these well-known methods suffer," I get angry, because statements like that are blatant examples of how ignorant designers can be at times (i.e., generalizing without taking the time to think about the complexities of interactions, the concept of empathic response, and emergent technologies). In other words, systems that avoid usable and appropriate (to the user AND the business) methods suffer. Experiences and interfaces should respond to the demands of the content they are trying to service and provide to end users. For example, the unique facets of a product or service should drive a designer to explore the best "vehicles" through which to drive a particular path down the information superhighway. When we live within our comfort zones in the name of stability and sanity, we miss out; we suffer a stagnation of evolution culturally, physically, cognitively, and socially (human factors, user-centered frameworks). And if you want to speak to "affordances", Norman should perhaps look at advertising agencies, or advertising in and of itself: the approaches that speak to the "unique selling points" of products or services as a driver for campaign messaging and positioning. The same applies to GUI or NUI: an interaction is a form of exchange, of rapport. There are many, many things going on outside of a pure form- or system-level analysis.
"Because gestures are ephemeral, they do not leave behind any record of their path, which means that if one makes a gesture and either gets no response or the wrong response, there is little information available to help understand why."
Not all contexts are universal. Anthropometrics can apply to two-dimensional realities in the form of feedback from input: indication, understanding, response... There are many layers to the arguments Don positions that are ignored in favor of some call to convergence and standardization of thinking, in a realm that suffers greatly from any algorithm-based application of solutions made without thinking about the problem itself and the humans benefitting from the solution(s). What he speaks of here is handled by the display and the response of the system, and is not entirely dependent on the mode of input, be it gestural, keyboard, or otherwise. I get the sense that because the keyboard and mouse have been around longer in a consumer context, Norman will find no fault in their use, citing "standards". As Jaron Lanier states clearly, we should be extremely angry at the lack of progress in these systems, at how tolerant we are of their shortcomings, and at how we alter behavior, much of the time dumbing it down, to accommodate the limitations of systems that should be far more capable.

Norman goes on to talk about standardization of gestures, etc. I assume he's dipping into his "affordances" misinterpretation at that point (or ignoring his own philosophies about it entirely). Non-verbal communication, surfaces of inscription, and modes of channel-based communication were studied as disciplines for decades before the invention of the PC. It scares me to see this foundational knowledge ignored by a so-called "expert" in the field. Going back further, Plato's allegory of the cave would be a great read at this point. It seems that human perception, if not human experience, is abandoned in favor of a full-out rant against a disruptive market release, because it calls into question many of his "standards" (based on his interpretation of interaction and technology), as well as a very obvious need to maintain market share as an expert in this realm by speaking to the anxieties of his constituency: mostly businesses, and mostly people who work with user experience professionals rather than practice the craft daily.

As a "design historian" he should also be in touch with what the futurists are predicting, some of which is already here like physical feedback mechanisms triggered by neuro stimulation or holography (3D) or interactions which combine multiple input methods and models like voice/sound as a gesture that influences touch in combination with keyboard or key. Multi-combination input is central to gaming. Mapping new commands to actions is commonplace as a learning curve in many realms, even in non-expert user interfaces. Again, generalizing is appropriate in some cases. These generalizations, assumptions and supposedly credible insights about multi-touch and gestural UI are a tremendous disservice to the design community. Then again, looking through the prism of our current technology and how slowly it is catching up to what he called rhetoric ahead of reality, it's understandable to latch onto what is comfortable and requires little effort and expertise to explain or explore or extend.

Wednesday, May 26, 2010

Facebook and Privacy Part II

Attached is a PDF generated from Notable with my thoughts on Facebook and its privacy settings. As I've written previously in posts regarding privacy, the landscape is changing, morphing by the millisecond, so anything I post in this context will probably be old news before I click the submit button.

Regardless, from an experience, design, and business perspective, I noticed many things that fail to provide the (assumed) user with effective ways of not only configuring settings but also understanding the configuration(s) and/or setting(s) in and of themselves.

High Level Observations:

- Why does a user have to go to a dashboard or a full-blown state/mode to configure content display models, content access, or screen configuration? In other words, it would be so much more understandable and valuable to users if the privacy settings were accessible in the context of interacting with the content.

- Why does the "preview" state have to be a state? Why can't it be a "resolution model" which shows me a real-time feedback loop of how what I choose or select impacts the "default view" of my profile from

- multiple perspectives. If you're going to force me into the "Only me, friends, and everyone" model of grouping, at least give me the option to define my own groups and ways of naming them/specifying access control. Facebook has always felt more like an application or platform as opposed to a website made of pages and page turns. Yet they insist on staying "simple and elegant" (which means they are too lazy to think about some fundamental design issues).

- Still seeing a lot of fine print, abstraction, and obfuscation: more fine print buried behind links in sub- or supporting copy blocks. An organization like Facebook is responding to public outcry. The experience in and of itself is a "brand message" and wholly affects "perception". It's not good enough to simply offer access anymore. If Facebook plans on retaining users or limiting attrition, it is vital to be completely transparent about policy and about the effect of the user's input.

- How do my privacy settings affect the use of my "social graph" in the form of several syndicatable streams, including Facebook? How does OpenID get affected? How can I manage OpenID/FBConnect privacy settings in this context? Can I?
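To make the per-content control idea above concrete, here is a minimal sketch of content whose audience is set inline, with user-defined groups instead of only the fixed "only me / friends / everyone" tiers, plus a preview that resolves visibility per viewer. This is my own illustration, not Facebook's data model; every type and field name here is hypothetical.

// Sketch only: per-content privacy with user-defined audience groups.

type BuiltInAudience = "only_me" | "friends" | "everyone";

interface CustomGroup {
  name: string;          // e.g. "close friends", "coworkers"
  memberIds: string[];   // user-defined membership
}

type Audience = BuiltInAudience | CustomGroup;

interface Post {
  id: string;
  body: string;
  audience: Audience;    // set in context, right next to the content itself
}

// A "resolution model": preview how a post resolves for a given viewer,
// as real-time feedback rather than a separate settings state.
function canView(post: Post, viewerId: string, friendIds: Set<string>): boolean {
  if (post.audience === "everyone") return true;
  if (post.audience === "friends") return friendIds.has(viewerId);
  if (post.audience === "only_me") return false;
  return post.audience.memberIds.includes(viewerId); // custom group
}

// Usage: preview the same post from multiple perspectives before publishing.
const coworkers: CustomGroup = { name: "coworkers", memberIds: ["u42"] };
const post: Post = { id: "p1", body: "Offsite photos", audience: coworkers };
console.log(canView(post, "u42", new Set()));        // true: in the custom group
console.log(canView(post, "u99", new Set(["u99"]))); // false: a friend, but not in the group

The point of the sketch is that the viewer's perspective is just a parameter, not a separate mode buried in a dashboard.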

As I've also stated before, social networking sites were not built to retain or protect a person's sense of privacy, because they are about public (or specified-as-private) interactions via a channel called the "internet". In the end, these settings are a knee-jerk, quick panic response by what I assume to be the C-suite and legal fighting some made-up time limitation, with the intent to "get something up" rather than provide real value (i.e., clear understanding) to the user. The troubling pattern I am seeing is that Facebook is in a lose-lose situation. They are trying to control something that is at the core of their value proposition, both to themselves and to the people who use the website. Without the "social graph" and the "data trail" people leave, FB diminishes in value in terms of relevancy and experience. By answering to public outcry, Facebook has abandoned this core value structure, capitulating to advertising and revenue streams because of its market position.

We all know that when the user is happy, the company will be too. I wonder when the companies of tomorrow will start realizing that this "game" has changed: the user is in control now, and the system is expected to provide that control. It's no longer "build it and let the user figure it out"; it's "the user dictates everything, and we provide the tools to enable that." Still, I see many companies, even ones as new as Facebook, holding tightly to old and failed models, repeating mistakes in favor of the business instead of listening to customers. This leaves a great gap for opportunity and competition, if not the eventual death of Facebook (at least as we know it today).

My prediction for identity and privacy on the web: user beware, and user controls. More and more pieces of our online identity have been moving to the "cloud", which points toward a syndicated, consistently synced identity in which the user chooses what information is accessible, to whom, when, and how. We're not there yet. And the war is with the usual suspects, who most of the time want to be given information without giving anything back other than a bad user experience. The value to everyone gets lost in the battle, when the solution seems simple to those with experience: be transparent, or do nothing at all, when it comes to my data, my privacy, and any risk of me being harmed or made vulnerable through use of a system. Liability will always be an issue when it comes to privacy because the entire definition and concept of privacy depends on multiple people or parties. There are negotiations, norms, and implicit and explicit rules of behavior. There are also policies in place that can be leveraged if harm does happen. In the end, it's all about personal responsibility and vigilance by the user in managing what data is provided, and when and how.
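As a rough illustration of what "the user chooses what information is accessible, to whom, when, and how" could look like in practice, here is a minimal sketch of a user-controlled grant record for a synced cloud identity. This is speculative and entirely my own; none of the names correspond to any real service or standard.

// Speculative sketch of user-controlled access grants for a synced "cloud" identity.
// All names are hypothetical; this mirrors no real service or standard.

interface IdentityAttribute {
  key: "email" | "location" | "purchaseHistory" | "relationshipStatus";
  value: string;
}

interface AccessGrant {
  attributeKey: IdentityAttribute["key"];
  grantedTo: string;        // who may read it: a service or a person
  expiresAt?: Date;         // "when": grants can lapse instead of living forever
  purpose: string;          // "how": the stated use the owner agreed to
}

class CloudIdentity {
  constructor(
    private attributes: IdentityAttribute[],
    private grants: AccessGrant[] = [],
  ) {}

  // The owner, not the service, decides what is readable and by whom.
  grant(g: AccessGrant): void {
    this.grants.push(g);
  }

  read(requester: string, key: IdentityAttribute["key"], now = new Date()): string | null {
    const ok = this.grants.some(
      (g) => g.grantedTo === requester && g.attributeKey === key &&
             (!g.expiresAt || g.expiresAt > now),
    );
    if (!ok) return null; // default is closed: no grant, no data
    return this.attributes.find((a) => a.key === key)?.value ?? null;
  }
}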

Wednesday, May 19, 2010

Tomorrowland by Daniel D. Castro

I can't repeat this enough: your microwave will be speaking to your tires in the somewhat near future. Sensory input (a.k.a. passive influence) into systems will automate much of what we angst over about "privacy" online. Still, I can't help but think back to classes in 1998 and earlier where my esteemed professors would speak of such things being common by "2010" (this was when people scoffed at an "expert" proposition that over half of all households in the US would have "broadband" ADSL access within the next five years). The point is that predicting the future is like aiming an arrow at a target while reading factors like wind speed and direction: if you focus on the target itself, you usually miss, like looking at the cue ball (a no-no) when lining up a shot in pool. Businesses seem to think in short-term intervals ("I need this yesterday") without considering the path, the journey, and the constant changes in plans along the way. That's not to say some businesses don't get lucky by blindly charging forward in knee-jerk ways as second movers or fast followers, or by strange interpretations of "following" rooted in a complete lack of understanding of things like user experience, design, or programming/software engineering...

We used to refer to this as "ubiquitous computing", where you would gain "peripheral awareness" of activity by and from your servant machines. Isn't it ironic that in AI and machine learning people are spending tons of money on understanding concepts of "empathy" over data aggregation or cleansing? Just some thoughts.

Thursday, May 13, 2010

Facebook and Privacy

(this is a blog post... waiting rooms)

http://www.allfacebook.com/2010/05/infographic-the-history-of-facebooks-default-privacy-settings/

This is very interesting and clearly shows the default settings over time. I'd love to see a side-by-side, as well as callouts to the policies related to the shifts in defaults. Regardless, it serves as a metaphor for the fluidity of the policies in place, as witnessed in recent court cases involving the FCC and EFF, among other banal acronyms. Harkening back to the blog post: expecting privacy in a "social network" without actively learning how to manage it (i.e., spending time and calories) is like getting into a taxi in Chicago and expecting not to pay. There is an understanding implied by the very nature of the website, clearly broadcast in "advertising" that often features real-time "social graph" threads (posts, photos). What is troubling to me is the belief that regulation is the solution, that our government or someone else can make some very personal and important choices for us when we ourselves have no idea what choice we would make if the situation arose (because we have not experienced it yet).

Defaults (i.e. just in case someone doesn't take the time to review policy, preferences, settings, etc...):

My wallet is private. What I spend my money on is not public knowledge, for obvious security reasons. Besides, banks hate fraud and scams and spam (the legitimate banks, anyway). When that stuff happens to my money, I can sleep safely (sort of) knowing my bank wants that information kept as secure as I do. Mint.com was able to "open up" its platform in ways extremely useful to its core offering without creating even the perception of risk. Why is it so hard to do this on eCommerce sites? How can eCommerce providers reassure their customers that the information collected will not put someone at risk of theft or harm, but will enhance their experience through gained understanding? (Amazon claims this, but I still see so much that is far outside my interests getting in the way of what I actually want that I fail to see the logic working.) While my wishlist may reflect what I like and perhaps can spend money on, what I own and have purchased from them is not public knowledge unless I "opt in" to identify and "rate"...

I partake in "social networks" because I want to connect with people (friends or otherwise). Whatever my intentions, it has never been anything other than clear to me that what I do will be shared with a "network" of people. When the network was small and limited to those "inside" (logged into) the platform, I feared little about violations to my privacy. It seems that as the network opened up and the whole world could scrutinize my "data-shadow" I began to worry that, say, some ill-intentioned organization or individual will recontextualize and repurpose my data for evil or harmful means. This has never happened to me or any of the people I know. Sure, there has been "drama" between a friend or former lover or family member, some spam, some spam from me to others I had to apologize for...

Most of the time when I experience harm from being active on a social site it is when I do something that breaks a collective "norm" of behavior. If I post something inappropriate, if I say something shocking, I get a response, negative or positive. Someone I know posted something to the effect of "why would god do such horrible things to a child if he has so much power?" and I was alarmed and checked in because I was concerned. One time I posted some comments about an agency I did work for and later regretted the rant and took it down. All of this stuff is so new (YouTube, for example, turned FIVE YEARS OLD yesterday). And when things are new they are "disruptive".

Ultimately and unfortunately, the harms that people experience through violations of their privacy will result in remediation to address and assess risk. We will come up with new ways of monitoring and managing information rapidly in the coming years, thanks to increased connectivity, higher "bandwidth", and better devices and infrastructure... But since humans are using all of this, there are social, cultural, and emotional considerations and frameworks in place that could help in the development of systems and processes that ensure safety online, even on social networks.

It is not the website, it is the PERSON who is responsible for how s/he/they use the website. When we click those EULAs we agree to this. No social networking site wants people to live in terror or fear when they use its services. If someone gets into a car wreck, the car is seldom to blame (except for Toyota...). In other words, when someone uses information they should not have had access to in the first place and harm is caused, there is usually a consequence to the action. If the harm is widespread and severe enough, there is usually a policy-level reaction. Maybe I'm naive, but I don't know anyone who would maliciously "phreak" someone on a social network to do harm. Luckily I've not been a victim, nor have I heard of one firsthand.

That's not to say sites like Facebook shouldn't be a little more empathetic to the lives of people, more consoling in their responses to questions about their policies. No matter the "channel" in communications, there are always structures, vernaculars, and syntax. Some are less obvious than others. In addition to various levels of channel noise, there are understandings about how to behave or act; otherwise, there would be no continuity, nothing to engage with. As an example, take the stuff that stays "private" unless someone asks, like who you may be dating and the status of your relationship. Again, it's hard to blame Facebook, in my opinion, when the "user" has the ability to leave those fields blank. Having used Facebook for a long time now, I don't recall those fields ever being "required".

In the end, eBay comes to mind the most when it comes to "liability" and "policy" on a "social network" where the risks of harm are many, thanks to the leap of faith humanity must take in any marketplace transaction. "Buyer Beware" was a byline mantra when the site took off. From day one there were reputation management tools that allowed people to flag and file complaints and provide eBay with invaluable feedback, letting it adjust its platform before widespread disaster or harm struck.

There are all these models, outside of their intended use, that we can draw upon to render "defaults" for how privacy is managed in an increasingly "connected" age. Behavior will be the ultimate judge of how privacy shapes itself in the coming days, weeks, and decades. People won't participate in social networks that deprive them of their expected right to security and safety of self and of their "data-trails". Those who throw their hands in the air and claim naivety when ignorance is the more appropriate word should reconsider why they are participating in a social network, or why they are providing information that, no matter what, is at risk of being used in a malicious or harmful manner, given the impossibility of completely securing a "channel" through which information is transmitted.

The coming mantra for 2010-2012 or so will be "User Beware". Not because people or companies are bad, but because no one has an answer right now; the stuff is new, and we are still shaping it all. Social networks, in this sense, could be used to spread information and awareness about privacy, policy, and ways to manage it, via the "users" themselves. Which is something we're already seeing.