UX design as contract

design, psychology

Back to William James again, and my favorite quote: “My experience is what I agree to attend to.”

Previously I wrote about what this quote says about the range of experience UX designers can leverage to engage users (UX happens everywhere).  But there’s more behind this statement than the observation that where a person’s attention goes, there goes their experience of the world. There’s an ethical responsibility implicit there as well.

What we attend to, and how we attend, matters to our quality of life. Psychologists, medical doctors, and Buddhists have known this for some time (Buddhists have known it a bit longer). Focused attention is used in mindfulness-based stress reduction programs for cancer patients; an excessive level of difficulty in maintaining focus is a diagnosable disorder; “right mindfulness” is part of the Buddha’s Eightfold Path. The very process of therapy involves drawing attention to specific patterns of behavior.

But attention isn’t the whole story. If William James is correct, then experience involves not just attention, but an agreement to attend. When a user agrees to give us (UX architects) some of their attention, they are in effect agreeing to make us a small part of their experience of the world. They are allowing us to have an effect on their quality of life, small or large depending on what our product or service is.

As the other half of that agreement, we enter into an unspoken contract with users to make that experience worth their while.


Originally posted on alexfiles.com (1998–2018) on January 2, 2011.

UX happens everywhere

design thinking, psychology

“My experience is what I agree to attend to,” said William James. Although James wasn’t talking about user experience as designers think of it, this is my favorite UX quote, and one I believe every UX architect, designer, or strategist should keep in mind. Today I’m writing about the implications this has on where we should focus our attention.

Where a person’s attention goes, there goes their experience of the world. In other words, UX happens everywhere.

Your product may be the ultimate experience you want your users to have, and your web site experience may help get them to purchase it (or be the goal itself, if you’re a social network or some other online service). But long before they land on your site or purchase your product, every interaction of the user with your brand is UX.

Every one of these encounters counts: what people say about your product on social networks or blogs, your advertising (online and off), and how your competitors represent you and your service. Your content lives everywhere, and your existing users and prospects can potentially encounter it everywhere. You can’t control this, but you can add to the milieu in a variety of ways: blogs, forums, social networks, videos, mobile applications, gadgets, rich media advertising, news, and advertising on more targeted sites.

Why does this matter? Because people make decisions in an all-or-nothing manner. Neurologically speaking, every encounter creates a positive or negative moment in a user’s head—a yes/no binary decision. A user’s overall impression comes from the preponderance of the individual binary choices associated with a concept.
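If it helps to see the arithmetic, here is a toy sketch of that tallying; the encounters and their values are invented purely for illustration:

```python
# A toy illustration of "preponderance of binary moments": each brand
# encounter registers as +1 (positive) or -1 (negative), and the overall
# impression follows the balance. Events and values are invented.

encounters = {
    "friend's recommendation": +1,
    "confusing landing page": -1,
    "helpful support forum": +1,
    "intrusive ad": -1,
    "smooth checkout": +1,
}

balance = sum(encounters.values())
print("positive" if balance > 0 else "negative")  # positive (+1 on balance)
```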

Further, in the absence of knowledge, most people tend to go with whatever information gets in first with the most. In this way informational cascades, which may or may not be accurate, spread across a population. (This may be why car salespeople are trained to get customers to say “yes” more than once, and to speak to more than one salesperson. You can read more about binary decision making and informational cascades in The tyranny of dichotomy.)

If you expect users to “agree to attend” and ultimately experience your product, one way is to create more positive binary moments about your brand and product than negative ones. Every encounter with your brand weights a user’s interest in one direction or another. It’s clearly in our interest as UX strategists to create positive user experiences in every relevant context possible.

Update: I don’t think I said clearly enough here that “positive” requires an experience to be honest and to the user’s advantage. So I’m saying it now.


Originally posted on the alexfiles (1998–2018) on January 1, 2011.

Simplicity is not a goal, but a tool

design, design thinking

Simplicity in design is not a goal in itself, but a tool for better experience. The goal is the need of the moment: to sell a product, to express an opinion, to teach a concept, to entertain. While elegance and optimal function in design frequently overlap with simplicity, there are times when simplicity is not only impossible but actively hurts usability. Yet many designers do not understand this, and over the years I’ve seen the desire to “keep it simple, stupid” lead to poor UX.

I was therefore glad to see Francisco Inchauste’s well-thought-out, longer treatment of Einstein’s “as simple as possible, but no simpler” remark.

From the column:

As an interactive designer, my first instinct is to simplify things. There is beauty in a clean and functional interface. But through experience I’ve found that sometimes I can’t remove every piece of complexity in an application. The complexity may be unavoidably inherent to the workflow and tasks that need to be performed, or in the density of the information that needs to be presented. By balancing complexity and what the user needs, I have been able to continue to create successful user experiences.

Plus, as I’ve commented before, messy is fun!


Originally posted on former personal blog UXtraordinary.com.

Evolutional UX

design, design thinking

This was originally posted on the UXtraordinary blog, before I incorporated under that name. Since then this approach has proven successful for me in a variety of contexts, especially Agile (including Scrum, kanban, and Lean UX – which is an offshoot of Agile whether it likes it or not).


I subscribe to the school of evolutional design. In evolution, species change not to reach for some progressively-closer-to-perfection goal, but in response to each other and their ever-changing environment. My user experience must do likewise.

Rather than reach for pixel-perfect, which is relatively unattainable outside of print (and is probably only “perfect” to myself and possibly my client), I reach for what’s best for my users, which is in the interests of my client. I expect that “best” to change as my users change, and as my client’s services and products change. This approach makes it much easier to design for UX.

Part of evolutional design is stepping away from the graceful degradation concept. The goal is not degraded experience, however graceful, but differently adapted experience. In other words, it’s not necessary that one version of a design be best. Two or three versions can be equally good, so long as the experience is valuable. Think of the effect simply resizing a window has on well-planned liquid design, without hurting usability. Are the different sizes bad? Of course not.

This approach strongly supports behavioral design, in which design focuses on the behavior and environment of the user. You might be designing for mobile, or a laptop, or video, or an e-newsletter; you might be designing for people being enticed to cross a pay wall, or people who have already paid and are enjoying your service. You might be appealing to different demographics in different contexts. Evolutional UX thinks in terms of adaptation within the digital (and occasionally analog) ecology.

Evolutional UX also reminds the designer that she herself is part of an evolving class of worker, with many species appearing and adapting and mutating and occasionally dying out. We must adapt, or fall out of the game—and the best way to do that is to design for your ever-changing audience and their ever-changing tools.

And now, some words of wisdom from that foremost evolutional ecologist, Dr. Seuss. Just replace the “nitch” spelling with “niche” and you’ve got sound ecological theory, as every hermit crab knows.

And NUH is the letter I use to spell Nutches,
Who live in small caves, known as Nitches, for hutches.
These Nutches have troubles, the biggest of which is
The fact there are many more Nutches than Nitches.
Each Nutch in a Nitch knows that some other Nutch
Would like to move into his Nitch very much.
So each Nutch in a Nitch has to watch that small Nitch
Or Nutches who haven’t got Nitches will snitch.


Designing for purpose

design thinking

This is the first of several presentations applying different psychological systems to user experience.

Designing for users is a tough job. To optimize our designs and strategy, UX professionals frequently turn to concept and site testing. The problem is that most design strategy and testing thinks in terms of input → output: we provide input, users perform a desired response (click-through, purchase, content creation). How do we break out of this mold?

Perceptual control theory (PCT) assumes that all output is based on the ultimate goal of improved perceptual input. If you replace “input” in the previous sentence with “experience,” you’ll see the direction this discussion is going…
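As a rough sketch of the idea (the loop structure, names, and numbers below are mine for illustration, not taken from any PCT text), behavior can be modeled as a feedback loop that acts to close the gap between a perceptual reference and current perception:

```python
# A minimal sketch of a perceptual control loop, the core idea of PCT:
# output (behavior) exists to bring perceived input closer to an
# internal reference. All names, gains, and values are illustrative.

def control_loop(reference, perceived, gain=0.5, steps=10):
    """Act until perception approaches the reference value."""
    for step in range(steps):
        error = reference - perceived   # the gap is what drives behavior
        action = gain * error           # behave in proportion to the gap
        perceived += action             # acting changes what is perceived
        print(f"step {step}: perceived={perceived:.3f}, error={error:.3f}")
    return perceived

# A user keeps adjusting (scrolling, searching, clicking) until their
# experience matches what they came for.
control_loop(reference=1.0, perceived=0.0)
```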


Originally posted on UXtraordinary, August 3, 2009.

Fun is fundamental

design thinking, game elements, psychology

Fun is a seriously undervalued part of user experience (perhaps of any experience). In fact, a sense of play may be a required characteristic of good UX interaction. But too often, I hear comments like the following, seen on ReadWriteWeb:

When you think of virtual worlds, the first one that probably pops into your head is Second Life, but in reality, there are a number of different virtual worlds out there. There are worlds for socializing, worlds for gaming, even worlds for e-learning. But one thing that most virtual worlds have in common is that they are places for play, not practicality. (Yes, even the e-learning worlds are designed with elements of “fun” in mind).

I was surprised to see the concept of play set in tension with practicality, as if they were incompatible, and to read that “even the e-learning worlds” employed fun. Game elements have been used to promote online learning for well over a decade, and used in offline educational design for much longer.

I certainly don’t mean to imply that every web site can be made fun. But it can employ the techniques of play in order to be more fun. As Clark Aldrich observes, discussing learning environments (emphasis his),

You cannot make most content fun for most people in a formal learning program… At best what you can do is make it more fun for the greatest percentage of the target audience. Using a nice font and a good layout doesn’t make reading a dry text engaging, but it may make it more engaging.

The driving focus, the criteria against which we measure success, should be on making content richer, more engaging, more visual, with better feedback, and more relevant. And of course more fun for most students.

It was while developing an educational site for Nortel Networks that I first discovered the value of game elements in design. Deliberately incorporating mini games, an ongoing “quest” hidden in the process, rewards (including surprise Easter eggs), levels, triggers, and scores (with a printable certificate) made the tedious process of learning how to effectively make use of an intranet database much more fun. We also offered different learning techniques, so users could learn by text, video, or audio, as they preferred.

This can apply to non-learning environments as well. Think about it: online games have already done all the heavy lifting in figuring out the basics of user engagement. Some techniques I’ve found valuable in retail, informational, and social media contexts include the following (a minimal sketch of the levels-and-scores mechanics follows the list):

  • Levels. These provide a sense of achievement for exploration, UGC (user-generated content), or accomplishment. Levels can reduce any possible sense of frustration at the unending quest.
  • Unending quest. There should always be a next step for users. This doesn’t mean users need to be told that they’ll never be through with the site; rather, the site should always provide something engaging that leads them on to a next step, and a next, and so forth.
  • Surprise rewards/triggers. These include Easter egg links, short-term access to previously inaccessible documents, etc.
  • Mini games, which can result in recognition or rewards for the user and can provide research data and UGC for the site.
  • Scores, which can encourage competitiveness and a sense of accomplishment.
  • Avatars and other forms of personalization.
  • User-driven help and feedback. Users (particularly engineers, in my experience) love to be experts. Leverage this to support your help forums if you need them.
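As promised above, here is a minimal sketch of how the levels-and-scores mechanics might hang together. The thresholds, event names, and point values are invented for illustration, not taken from any real product:

```python
# A minimal sketch of the levels-and-scores mechanics listed above.
# Thresholds, events, and point values are invented assumptions.

LEVEL_THRESHOLDS = [0, 100, 250, 500, 1000]   # points required per level

POINTS = {
    "page_explored": 5,
    "content_created": 25,    # UGC earns more than passive browsing
    "easter_egg_found": 15,   # surprise rewards
    "mini_game_won": 50,
}

def level_for(score):
    """Highest level whose threshold the score has reached."""
    return max(i for i, threshold in enumerate(LEVEL_THRESHOLDS)
               if score >= threshold)

score = 0
for event in ("page_explored", "content_created", "mini_game_won",
              "easter_egg_found", "content_created"):
    score += POINTS[event]
    print(f"{event}: score={score}, level={level_for(score)}")
```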

Online, offline, crunching numbers at work, immersed in a game, sitting in a classroom, or building a barn, a sense of fun doesn’t just add surface emotional value; it frequently improves the quality of the work and adds pleasant associations, making us more likely to retrieve useful data for later application. Perhaps this is why so many artists and scientists have been known for a sense of play. And for most of us, it’s during childhood – the time we are learning the most at the fastest rate – that we are typically at our most playful.

All websites are to some extent educational. Even a straightforward retail site wants you to learn what it offers, how to choose an item, and how to pay for it. Perhaps we can take a tip from our childhood and incorporate more fun into the user experience. Then we can learn how best to learn.

Originally posted on former personal blog UXtraordinary.com.

The tyranny of dichotomy

psychology

An informational cascade is a perception—or misperception—spread among people because we tend to let others think for us when we don’t know ourselves. For example, John Tierney (tierneylab.blog.nytimes.com) recently discussed the widely held but little-supported belief that too much fat is nutritionally bad. Peter Duesberg contends that the HIV hypothesis for AIDS is such an error (please note, I am not agreeing with him).

Sometimes cultural assumptions can lead to such errors. Stephen Jay Gould described countless such mistakes, spread by culture or simple lack of data, in The Mismeasure of Man. Gould points out errors such as reifying abstract concepts into entities that exist apart from our abstraction (as has been done with IQ), and forcing measurements into artificial scales, both assumptions that spread readily within and beyond the scientific community without any backing.

Mind, informational cascades do not have to be errors—one could argue that the state of being “cool” comes from an informational cascade. Possibly many accurate understandings come via informational cascades as well, but it’s harder to demonstrate those because of the nature of the creatures.

It works like this: people tend to think in binary, all-or-nothing terms. Shades of gray do not occur. In fact, it seems the closest we come to a non-binary understanding of a concept is to have many differing binary decisions about related concepts, which balance each other out.

So, in the face of absent or incomplete information, we take our cues from the next human. When Alice makes a decision, she decides yes-or-no; then Bob, who knows nothing of the subject, takes his cue from Alice in a similarly binary fashion; Carol takes her cue from Bob; and so it spreads, in a cascade effect.
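Here is a minimal simulation of that cascade, loosely in the spirit of the classic herding models. The signal accuracy and the two-vote margin rule are illustrative assumptions:

```python
# A minimal simulation of the Alice/Bob/Carol cascade described above:
# each person gets a weak private signal but can see every earlier
# choice. All parameters are illustrative assumptions.

import random

def cascade(n_people=10, signal_accuracy=0.6):
    choices = []
    for _ in range(n_people):
        private_signal = random.random() < signal_accuracy  # True = "correct"
        yes, no = choices.count(True), choices.count(False)
        if abs(yes - no) >= 2:
            choice = yes > no         # herd: visible history swamps the signal
        else:
            choice = private_signal   # otherwise trust one's own signal
        choices.append(choice)
    return choices

random.seed(7)
print(cascade())  # once two early choices agree, everyone after follows
```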

Economists and others rely on this binary herd behavior in their calculations.

But.

The problem is that people don’t always think this way; therefore people don’t have to think this way. Some people seem to have the habit of critical thought at an early age. As well, the very concept of binary thinking seems to fit too neatly into our need to measure. It’s much easier to measure all-or-nothing than shades of gray, so a model that assumes we behave in an all-or-nothing manner can easily be measured, and is therefore more easily accepted within the community of discourse.

Things tend to be more complex than we like to acknowledge. As Stephen Wolfram observed in A New Kind of Science,

One might have thought that with all their successes over the past few centuries the existing sciences would long ago have managed to address the issue of complexity. But in fact they have not. And indeed for the most part they have specifically defined their scope in order to avoid direct contact with it.

Which makes me wonder if binary classification isn’t its own informational cascade. In nearly every situation, there are more than two factors and more than two options.

The tradition of imposing a binary taxonomy on our world goes back a long way. Itkonen (2005) describes the binary classifications that permeate mythological reasoning. By presenting different quantities as two aspects of the same concept, the storyteller makes them more accessible to the listener; by placing them in the same concept, the storyteller shows their similarities and uses analogical reasoning to reach the audience.

Philosophy speaks of the law of the excluded middle—something is either this or that, with no in-between—but this is a trick of language. A question that asks for only a yes or no answer does not allow for responses such as “both” or “maybe.”

Neurology tells us that neurons either fire or they don’t. But neurons are much more complex than that. From O’Reilly and Munakata’s Computational Explorations in Cognitive Neuroscience (italics from the authors, boldface mine):

In contrast with the discrete boolean logic and binary memory representations of standard computers, the brain is more graded and analog in nature… Neurons integrate information from a large number of different input sources, producing essentially a continuous, real valued number that represents something like the relative strength of these inputs… The neuron then communicates another graded signal (its rate of firing, or activation) to other neurons as a function of this relative strength value. These graded signals can convey something like the extent or degree to which something is true….

Gradedness is critical for all kinds of perceptual and motor phenomena, which deal with continuous underlying values….

Another important aspect of gradedness has to do with the fact that each neuron in the brain receives inputs from many thousands of other neurons. Thus, each individual neuron is not critical to the functioning of any other—instead, neurons contribute as part of a graded overall signal that reflects the number of other neurons contributing (as well as the strength of their individual contribution). This fact gives rise to the phenomenon of graceful degradation, where function degrades “gracefully” with increasing amounts of damage to neural tissue.
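To see the contrast the authors draw, here is a toy comparison of a binary threshold unit with a graded, rate-coded one. The sigmoid is a standard textbook illustration, not the authors’ specific model:

```python
# A toy contrast between a binary threshold unit and a graded,
# rate-coded unit, as described in the quote above. The sigmoid
# response is a standard illustration; parameters are assumptions.

import math

def binary_unit(total_input, threshold=1.0):
    return 1 if total_input >= threshold else 0   # fires or it doesn't

def graded_unit(total_input, gain=2.0, threshold=1.0):
    # firing *rate* varies continuously with input strength
    return 1.0 / (1.0 + math.exp(-gain * (total_input - threshold)))

for strength in (0.5, 0.9, 1.1, 1.5):
    print(f"input={strength}: binary={binary_unit(strength)}, "
          f"graded={graded_unit(strength):.3f}")
```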

So, now that we have a clue that binary thinking may be an informational cascade all its own, what do we do about it?


References

Itkonen, E. (2005). Analogy as structure and process: Approaches in linguistics, cognitive psychology and philosophy of science. Amsterdam: John Benjamins Publishing.

O’Reilly, R.C., and Y. Munakata. (2000). Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain. Cambridge, MA: MIT Press.


Originally posted on alexfiles.com (1998–2018) on May 5, 2008.

Excluding data limits thought

design thinking

From illustrations in Stephen Jay Gould’s “Wonderful Life”; these creatures were misidentified for decades because of thought-limiting taxonomies. Stippled ink, watercolor.

I have never understood the desire to delete articles in Wikipedia solely on the basis of the highly subjective concept of “notability,” and I’ve fought against deletion of such articles. It’s easy to store the information, and it’s useful to someone or it wouldn’t be there. To these reasons I would add another: the more information you have, the more freedom you have to think flexibly about a subject.

Nicholson Baker supports the concept of a Deletopedia, a wikimorgue where all the “nonnotable” articles removed by the frustrated book-burners on Wikipedia would reside. Baker describes it:

…a bin of broken dreams where all rejects could still be read, as long as they weren’t libelous or otherwise illegal. Like other middens, it would have much to tell us over time.

Why, exactly, is this useful? Because we need taxonomic freedom.

A taxonomy is only as free as its data. The more categories you have, and the more data, the more ways a given piece can move from one category to another and be connected, and the more flexibly and creatively you can arrange and understand the data. Not only does the freedom to connect and associate a given piece of data help; each new piece of data also increases the number of patterns possible.
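To make that growth concrete: if we treat a taxonomy as one way of partitioning a set of items into groups, the number of possible partitions is the Bell number, which explodes as items are added. A small sketch (the framing of taxonomies as set partitions is my own simplification):

```python
# How fast the space of possible taxonomies grows: if a taxonomy is a
# partition of n items into groups, the number of possible partitions
# is the Bell number B(n), computed here via the Bell triangle.

def bell_numbers(n):
    """Return [B(0), B(1), ..., B(n)]."""
    bells = [1]
    row = [1]
    for _ in range(n):
        new_row = [row[-1]]              # next row starts with last entry
        for value in row:
            new_row.append(new_row[-1] + value)
        row = new_row
        bells.append(row[0])
    return bells

print(bell_numbers(10))
# [1, 1, 2, 5, 15, 52, 203, 877, 4140, 21147, 115975]
# ten items already admit 115,975 distinct groupings
```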

How we understand information is driven by the taxonomies—the patterns—we place it in. As Marvin Minsky said, “You don’t understand anything until you learn it more than one way.” Biologists have known this for some time. Initially, biological species classification was based primarily on anatomy and phenotype. But there are many ways to think about organisms: according to evolutionary ancestry (cladistics), according to geography, or according to the niche they occupy ecologically, to name a few. The taxonomy you choose determines how you’re able to perceive and understand a given organism or system.

The moment you begin to exclude and include along any lines, you begin to enforce a taxonomy of sorts. The taxonomies we use determine and limit the direction and options of our thought. We need to apply them to look at things from a given perspective, but we need to be aware of them so we can change them and see different perspectives. So, thinking in terms of deleting what is not notable is implicitly applying a self-limiting taxonomy. You will not be able to change your perspective to one that makes use of the deleted information, because you will not have the information.

This tendency to ignore or remove information that does not fit into one’s personal taxonomy of relevance is present in library cataloging, too. As a former online cataloger, I also support keeping analog card catalogs alongside digital ones. Having project-managed teams that converted card catalogs into databases, I’ve seen first-hand how subjective the choice of which pieces of information on a card get migrated into the database can be. I think every piece of data should be online, but there are plenty of catalogers who skip over descriptive items they find trivial.

Humans are linguistic souls (even the mostly spatial types like myself), and having a new word or symbol attached to a concept immediately adds a tool to our arsenal of thought. This is why one of the first things repressive regimes do is burn the books and suppress the intellectuals. “All the Nazi or Fascist schoolbooks made use of an impoverished vocabulary, and an elementary syntax, in order to limit the instruments for complex and critical reasoning” (Umberto Eco, 22 June 1995, New York Review of Books). We do ourselves a disservice when we close off possible avenues of thought by disregarding data currently not important to us.

Besides, as Flaubert observed, “Anything becomes interesting if you look at it long enough.”

Maybe Wikipedia should make that its motto.


Originally posted on UXtraordinary.com, March 20, 2008.