The man my great-uncle didn’t kill

career, psychology

Cross-posted and expanded from my LinkedIn account.

My grandfather and great-uncle during WWII
My grandfather and great-uncle talking at the German-run camp my great-uncle was in during World War II.

Everyone demonstrates the fundamental attribution error—a variation of correspondence bias (pdf)—to some extent. We look at the action and assume it reflects the character. Even when we know there are extenuating circumstances, we do it. The defense lawyer, doing his duty to provide the best defense possible, is seen as supporting crime. The debate student, assigned to defend a certain position, is seen as believing it; no matter the usefulness of the role or the purity of intent, every devil’s advocate runs the risk of being seen as devilish. And of course, the driver who just cut us off must be a criminally negligent incompetent.

In the workplace this can create misunderstandings, usually small but sometimes project-killing or even career-destroying. It’s a problem because the only way to overcome correspondence bias and not commit the fundamental attribution error is to constantly question your assumptions and opinions, looking for the larger context.

Since we’re all story-driven creatures, sometimes an anecdote can help. This is a story of a time a life was on the line, and it’s the best example of correspondence bias I know.

My mother’s uncle was a man named Jara. He was my grandmother’s brother, an artist when he could be (I saw beautiful sculptures and drawings in his widow’s home). His best friend, whose name I don’t know, was a professional artist.

Watercolor portrait of Uncle Jara by a friend
A watercolor portrait of Jara by his best friend, faded but still showing great talent.

During Hitler’s occupation of Prague, Jara and his best friend were sent to a labor camp. At the camp worked another man whose name I don’t know. Let’s call him Karel. Karel worked as an overseer, managing his fellow citizens for the Nazis. Karel was hated. He treated everyone “like a dog,” Jara said, swearing at them and driving them mercilessly, generally making the labor camp experience every bit as awful as you imagine it to be.

Watching Karel’s behavior, day in, day out, Jara and his friend eventually realized Karel could not be allowed to live. It was obvious to them. Karel was a traitor, a collaborator with the enemy, and responsible for much misery. They were young, and passionate about their country. They made a pact: if all three of them survived the war, Jara and his friend would hunt down Karel and kill him. They viewed it as an execution.

The war ended, the labor camp closed, and life continued for all three men. Jara and his friend discreetly found out where Karel lived. They obtained a gun, and one day they set out to his home.

Karel lived outside Prague, in a somewhat rural area. When Jara and his friend arrived, Karel’s wife was outside, hanging laundry. When they said they’d known Karel at the labor camp, she smiled and invited them in, calling to Karel that friends from the camp had arrived. They followed her to the kitchen, where they found the monster they sought.

Karel was sitting by the table with a large tub of water and baking soda in front of him, soaking his feet. He was wearing rolled-up pants, suspenders, and a collarless button-up shirt, the kind you could wear with different collars under a jacket. He greeted them with a broad smile, immediately calling them by name and introducing them to his wife. Jara said Karel was so happy, he had tears in his eyes. He asked his wife to give them coffee, and she brought out pastries, and they all sat down to talk about old times.

Jara and his friend were dumbfounded, but did not show it. During the conversation they realized that Karel had not thought of himself as collaborating with the Nazis, but as mitigating their presence. He was stepping in so no one worse could. His harshness was protective; the Germans could not easily accuse the workers of under-producing when Karel pushed his fellow Czechs so hard.

They stayed several hours with Karel and his wife, reminiscing and privately realizing no one was getting shot that day, then took their leave. On the way back they threw the gun into a pond. Jara went on to work at the Barrandov film studios, where he met his wife, Alena (she was an accountant). They married, lived a long life, and were happy more often than not.

Jara and Alena with their dog.
Jara and Alena and a canine friend.

Jara was transformed by this experience. Never again would he take any person’s actions as the sum of their character. And I do my best to see things in context and not judge, in part because of the man my great-uncle didn’t kill.

Fun is fundamental

design thinking, game elements, psychology

Fun is a seriously undervalued part of user experience (perhaps of any experience). In fact, a sense of play may be a required characteristic of good UX interaction. But too often, I hear comments like the following, seen on ReadWriteWeb:

When you think of virtual worlds, the first one that probably pops into your head is Second Life, but in reality, there are a number of different virtual worlds out there. There are worlds for socializing, worlds for gaming, even worlds for e-learning. But one thing that most virtual worlds have in common is that they are places for play, not practicality. (Yes, even the e-learning worlds are designed with elements of “fun” in mind).

I was surprised to see the concept of play set in tension with practicality, as if they were incompatible, and to read that “even the e-learning worlds” employed fun. Game elements have been used to promote online learning for well over a decade, and used in offline educational design for much longer.

I certainly don’t mean to imply that every web site can be made fun. But nearly any site can employ the techniques of play in order to be more fun. As Clark Aldrich observes, discussing learning environments (emphasis his),

You cannot make most content fun for most people in a formal learning program… At best what you can do is make it more fun for the greatest percentage of the target audience. Using a nice font and a good layout doesn’t make reading a dry text engaging, but it may make it more engaging.

The driving focus, the criteria against which we measure success, should be on making content richer, more engaging, more visual, with better feedback, and more relevant. And of course more fun for most students.

It was while developing an educational site for Nortel Networks that I first discovered the value of game elements in design. Deliberately incorporating mini games, an ongoing “quest” hidden in the process, rewards (including surprise Easter eggs), levels, triggers, and scores (with a printable certificate) made the tedious process of learning to use an intranet database effectively much more fun. We also offered different learning techniques, so users could learn by text, video, or audio, as they preferred.

This can apply to non-learning environments as well. Think about it: online games have already done all the heavy lifting in figuring out the basics of user engagement. Some techniques I’ve found valuable in retail, informational, and social media include:

  • Levels. These provide a sense of achievement for exploration, UGC (user-generated content) or accomplishment. Levels can reduce any possible sense of frustration at the unending quest.
  • Unending quest. There should always be a next step for users. This doesn’t mean the user needs to be told that they’ll never be through with the site. Instead, it should always provide something engaging, that leads them on to a next step, and a next, and so forth.
  • Surprise rewards/triggers. These include Easter egg links, short-term access to previously inaccessible documents, etc.
  • Mini games, which can result in recognition or rewards for the user and can provide research data and UGC for the site.
  • Scores, which can encourage competitiveness and a sense of accomplishment.
  • Avatars and other forms of personalization.
  • User-driven help and feedback. Users (particularly engineers, in my experience) love to be experts. Leverage this to support your help forums if you need them.
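To make the “levels” and “scores” techniques above concrete, here is a minimal sketch in Python. The point thresholds, the `level_for` helper, and the `Progress` class are all illustrative inventions for this post, not part of any system described above; a real site would tune thresholds against its own engagement data.

```python
# Illustrative sketch: mapping accumulated points to levels, and
# detecting level-ups so the site can surface a sense of achievement.

LEVEL_THRESHOLDS = [0, 100, 250, 500]  # points needed to reach levels 0..3 (made-up values)

def level_for(score: int) -> int:
    """Return the highest level whose threshold the score has met."""
    level = 0
    for i, threshold in enumerate(LEVEL_THRESHOLDS):
        if score >= threshold:
            level = i
    return level

class Progress:
    """Tracks one user's score and reports level-ups as they happen."""

    def __init__(self) -> None:
        self.score = 0

    def add_points(self, points: int) -> bool:
        """Add points for an action; return True if the user leveled up."""
        before = level_for(self.score)
        self.score += points
        return level_for(self.score) > before

p = Progress()
p.add_points(60)             # still level 0
leveled = p.add_points(50)   # crosses 100 points, so now level 1
```

The design choice worth noting is that the level-up check happens at the moment points are awarded; that is what lets a site trigger a surprise reward or congratulatory message immediately, rather than leaving the achievement buried in a profile page.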

Online, offline, crunching numbers at work, immersed in a game, sitting in a classroom, or building a barn, a sense of fun doesn’t just add surface emotional value; it frequently improves the quality of the work and adds pleasant associations, making us more likely to retrieve useful information for later application. Perhaps this is why so many artists and scientists have been known for a sense of play. And for most of us, it’s during childhood – the time we are learning the most at the fastest rate – that we are typically at our most playful.

All websites are to some extent educational. Even a straightforward retail site wants you to learn what they offer, how to choose an item, and how to pay for it. Perhaps we can take a tip from our childhood and incorporate more fun into the user experience. Then we can learn how best to learn.

Originally posted on former personal blog UXtraordinary.com.

Messy is fun: challenging Occam’s razor

design thinking, psychology, taxonomy

The scientific method is the most popular form of scientific inquiry, because it provides measurable testing of a given hypothesis. This means that once an experiment is performed, whether the results were negative or positive, the foundation on which you are building your understanding is a little more solid, and your perspective a little broader. The only failed experiment is a poorly designed one.

So, how to design a good experiment? The nuts and bolts of a given test will vary according to the need at hand, but before you even go about determining what variable to study, take a step back and look at the context. The context in which you are placing your experiment will determine what you’re looking for and what variables you choose. The more limited the system you’re operating in, the easier your test choices will be, but the more likely you are to miss something useful. Think big. Think complicated. Then narrow things down.

But, some say, simple is good! What about Occam’s razor and the law of parsimony (entities should not be unnecessarily multiplied)?

Occam’s razor is a much-loved approach that helps make judgment calls when no other options are available. It’s an excellent rule of thumb for interpreting uncertain results. Applying Occam’s razor, you can act “as if” and move on to the next question, and go back if it doesn’t work out.

Still, too many people tend to use it to set up the context of the question, unconsciously limiting the kind of question they can ask and limiting the data they can study. It’s okay to do this consciously, by focusing on a simple portion of a larger whole, but not in a knee-jerk fashion because “simple is better.” Precisely because of this, several scientists and mathematicians have suggested anti-razors. These do not necessarily undermine Occam’s razor. Instead, they phrase things in a manner that helps keep you focused on the big picture.

Some responses to Occam’s concept include these:

Einstein: Everything should be as simple as possible, but no simpler.

Leibniz: The variety of beings should not rashly be diminished.

Menger: Entities must not be reduced to the point of inadequacy.

My point is not that Occam’s razor is not a good choice in making many decisions, but that one must be aware that there are alternative views. Like choosing the correct taxonomy in systematics, choosing different, equally valid analytic approaches to understand any given question can radically change the dialogue. In fact, one can think of anti-razors as alternative taxonomies for thought: ones that let you freely think about the messy things, the variables you can’t measure, the different perspectives that change the very language of your studies. You’ll understand your question better, because you’ll think about it more than one way. And while you’ll need to pick simple situations to test your ideas, the variety and kind of situations you can look at will be greatly expanded.

Plus, messy is fun.

Originally posted on former personal blog UXtraordinary.com.

Zombie ideas

psychology

In 1974 Robert Kirk wrote about the “zombie idea,” describing the possibility that the universe, the circle of life, humanity, and our moment-to-moment existence could all have developed identically, with “particle-for-particle counterparts,” and yet lack feeling and consciousness. The idea is that, evolutionarily speaking, it was not essential for creatures to evolve consciousness or raw feels in order to evolve rules promoting survival and adaptation. Such a world would be a zombie world, acting and reasoning but just not getting it (whatever “it” is).

I am not writing about Kirk’s idea. (At least, not yet.)

Rather, I’m describing the term in the way it was used in 1998, by four University of Texas Health Science Center doctors, in a paper titled “Lies, Damned Lies, and Health Care Zombies: Discredited Ideas That Will Not Die” (pdf). Here the relevant aspect of the term “zombie” is refusal to die, despite being killed in a reasonable manner. Zombie ideas are discredited concepts that nonetheless continue to be propagated in the culture.

While they (and just today, Paul Krugman) use the term, they don’t explicate it in great detail. I thought it might be fun to explore the extent to which a persistent false concept is similar to a zombie.

  • A zombie idea is dead.
    For the vast majority of the world, the “world is flat” is a dead idea. For a few, though, the “world is flat” virus has caught hold, and this idea persists even in technologically advanced cultures.
  • A zombie idea is contagious.
    Some economists are fond of the concept of “binary herd behavior.” The idea is that when most people don’t know about a subject, they tend to accept the view of the person who tells them about it; and they tend to do that in an all-or-nothing manner. Then they pass that ignorant acceptance on to the next person, who accepts it just as strongly. (More about the tyranny of the dichotomy later.) So, when we’re children and our parents belong to Political Party X, we may be for Political Party X all the way, even though we may barely know what a political party actually is.
  • A zombie idea is hard to kill.
    Some zombie viruses are very persistent. For example, most people still believe that height and weight are a good basis for calculating appropriate calorie intake. Studies, however, repeatedly show that height and weight being equal, other factors can change the body’s response. Poor gut flora, certain bacteria, and even having been slightly overweight in the past can mean that of two people of the same height and weight, one can eat the daily recommended calories and keep their weight steady, while the other will need to consume 15% less in order to maintain the status quo. Yet doctors and nutritionists continue to counsel people to use the national guidelines to determine how much to eat.
  • A zombie idea eats your brain.
    Zombie ideas, being contagious and false, are probably spreading through binary thinking. A part of the brain takes in the data, marks it as correct, and because it works in that all-or-nothing manner, contradictory or different data has a harder time getting the brain’s attention. A zombie idea eats up a part of the brain’s memory, and by requiring more processing power to correct it, eats up your mental processing time as well. It also steals all the useful information you missed because your brain routed the data right past your awareness, thinking it knew the answer.
  • Zombies are sometimes controlled by a sorcerer, or voodoo bokor.
    Being prey to zombie ideas leaves you vulnerable. If you have the wrong information, you are more easily manipulated by the more knowledgeable. Knowledge, says Mr. Bacon, is power.
  • Zombies have no higher purpose than to make other zombies.
    Closely related to the previous point. Even if you are not being manipulated, your decision-making suffers greatly when you are wrongly informed. You are also passing on your wrong information to everyone you talk to about it. Not being able to fulfill your own purposes, you are simply spreading poor data.

So we see that the tendency to irony is not just useful in and of itself, but useful in helping prevent zombie brain infections. As lunchtime is nearly over, and I can’t think of more similarities, I’m stopping here to get something to eat.

[Exit Alex stage right, slouching, mumbling, “Must…eat…brains.”]

Originally posted on former personal blog UXtraordinary.com.