Constant learning is constant humility

career, psychology

This morning I shared a bit of what the learning part of design meant to me in an email, and decided to expand on that.

Working in design means constantly learning. You have to make like a sponge and absorb the field you’re supporting, the user’s language and perspectives, the platform your design is being built with, and so much more. In my career alone I’ve supported users in healthcare SaaS, a social network, several technology companies, shoe retail, a veterinary clinic, an advertising/promotional agency - you get the picture. I’m sure there are agency designers and freelancers with an even more varied set of personas.

Sometimes that can be daunting. Many people in this field suffer from imposter syndrome: the sense that they are faking it, and that someone will notice someday. The fact that no one has noticed, and that their work is excellent, doesn’t seem to ease this underlying insecurity. I would like to suggest that this is less about actual insecurity, and more about the nature of constantly learning things beyond design in order to perform design.

The man my great-uncle didn’t kill

career, psychology

Cross-posted and expanded from my LinkedIn account.

My grandfather and great-uncle during WWII
My grandfather and great-uncle talking at the German-run camp my great-uncle was held in during World War II.

Everyone demonstrates the fundamental attribution error—a variation of correspondence bias (pdf)—to some extent. We look at the action and assume it’s the character. Even when we know there are extenuating circumstances, we do it. The defense lawyer, doing his duty to provide the best defense possible, is seen as supporting crime. The debate student, assigned to defend a certain position, is seen as believing it; no matter the usefulness of the role or the purity of intent, every devil’s advocate runs the risk of being seen as devilish. And of course, the person driving the car that just cut us off is a criminally negligent incompetent.

In the workplace this can create misunderstandings, usually small but sometimes project-killing or even career-destroying. It’s a problem because the only way to overcome correspondence bias and not commit the fundamental attribution error is to constantly question your assumptions and opinions, looking for the larger context.

Since we’re all story-driven creatures, sometimes an anecdote can help. This is a story of a time a life was on the line, and it’s the best example of correspondence bias I know.

My mother’s uncle was a man named Jara. He was my grandmother’s brother, an artist when he could be (I saw beautiful sculptures and drawings in his widow’s home). His best friend, whose name I don’t know, was a professional artist.

Watercolor portrait of Uncle Jara by a friend
A watercolor portrait of Jara by his best friend, faded but still showing great talent.

During Hitler’s occupation of Prague, Jara and his best friend were sent to a labor camp. At the camp worked another man whose name I don’t know. Let’s call him Karel. Karel worked as an overseer, managing his fellow citizens for the Nazis. Karel was hated. He treated everyone “like a dog,” Jara said, swearing at them and driving them mercilessly, generally making the labor camp experience every bit as awful as you imagine it to be.

Watching Karel’s behavior, day in, day out, Jara and his friend eventually realized Karel could not be allowed to live. It was obvious to them. Karel was a traitor, a collaborator with the enemy, and responsible for much misery. They were young, and passionate about their country. They made a pact together: if all three of them survived the war, Jara and his friend would hunt down Karel and kill him. They viewed it as an execution.

The war ended, the labor camp closed, and life continued for all three men. Jara and his friend discreetly found out where Karel lived. They obtained a gun, and one day they set out to his home.

Karel lived outside Prague, in a somewhat rural area. When Jara and his friend arrived, Karel’s wife was outside, hanging laundry. When they said they’d known Karel at the labor camp, she smiled and invited them in, calling to Karel that friends from the camp had arrived. They followed her to the kitchen, where they found the monster they sought.

Karel was sitting by the table with a large tub of water and baking soda in front of him, soaking his feet. He was wearing rolled-up pants, suspenders, and a collarless button shirt, the kind you could attach different collars to and wear under a jacket. He greeted them with a broad smile, immediately calling them by name and introducing them to his wife. Jara said Karel was so happy, he had tears in his eyes. He asked his wife to give them coffee, and she brought out pastries, and they all sat down to talk about old times.

Jara and his friend were dumbfounded, but did not show it. During the conversation they realized that Karel had not thought of himself as collaborating with the Nazis, but as mitigating their presence. He was stepping in so no one worse could. His harshness was protective; the Germans could not easily accuse the workers of under-producing when Karel pushed his fellow Czechs so hard.

They stayed several hours with Karel and his wife, reminiscing and privately realizing no one was getting shot that day, then took their leave. On the way back they threw the gun into a pond. Jara went on to work at the Barrandov film studios, where he met his wife, Alena (she was an accountant). They married, lived a long life, and were happy more often than not.

Jara and Alena with their dog.
Jara and Alena and a canine friend.

Jara was transformed by this experience. Never again would he take any person’s actions as the sum of their character. And I do my best to see things in context and not judge, in part because of the man my great-uncle didn’t kill.

UX design as contract

design, psychology

Back to William James again, and my favorite quote: “My experience is what I agree to attend to.”

Previously I wrote about what this said regarding the range of experience UX designers could leverage to engage users (UX happens everywhere).  But there’s more behind this statement than the observation that where a person’s attention goes, there goes their experience of the world. There’s an ethical responsibility implicit there as well.

What and how we attend to things matters to our quality of life. Psychologists, medical doctors, and Buddhists have known this for some time (Buddhists have known it a bit longer). Focused attention is used in mindfulness-based stress reduction programs for cancer patients; an excessive level of difficulty in maintaining focus is a diagnosable disorder; “right mindfulness” is part of the Buddha’s Eightfold Path. The very process of therapy involves drawing attention to specific patterns of behavior.

But attention isn’t the whole story. If William James is correct, then experience involves not just attention, but an agreement to attend. When a user agrees to give us (UX architects) some of their attention, they are in effect agreeing to make us a small part of their experience of the world. They are allowing us to have an effect on their quality of life, small or large depending on what our product or service is.

As the other half of that agreement, we enter into an unspoken contract with users to make that experience worth their while.


Originally posted on alexfiles.com (1998–2018) on January 2, 2011.

UX happens everywhere

design thinking, psychology

“My experience is what I agree to attend to,” said William James. Although James wasn’t talking about user experience as designers think of it, this is my favorite UX quote, and one I believe every UX architect, designer, or strategist should keep in mind. Today I’m writing about the implications this has on where we should focus our attention.

Where a person’s attention goes, there goes their experience of the world. In other words, UX happens everywhere.

Your product may be the ultimate experience you want your users to have, and your web site experience may help get them to purchase it (or be the goal itself, if you’re a social network or some other online service). But long before they land on your site or purchase your product, every interaction of the user with your brand is UX.

What people say about your product on social networks or blogs, your advertising (online and off), how your competitors represent you and your service. Your content lives everywhere, and your existing users and prospects can potentially encounter it everywhere. You can’t control this, but you can add to the milieu in a variety of ways: blogs, forums, social networks, videos, mobile applications, gadgets, rich media advertising, news, and advertising on more targeted sites.

Why does this matter? Because people make decisions in an all-or-nothing manner. Neurologically speaking, every encounter creates a positive or negative moment in a user’s head—a yes/no binary decision. A user’s overall impression comes from the preponderance of the individual binary choices associated with a concept.

Further, in the absence of knowledge most people tend to go with whatever information gets in first with the most. In this way informational cascades, which may or may not be accurate, spread across a population. (This may be why car salespeople are trained to get customers to say “yes” more than once, and to speak to more than one salesperson. You can read more about binary decision making and informational cascades in The tyranny of dichotomy.)
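To make the “preponderance” idea concrete, here is a toy model in Python (the model and the names in it are my own illustration, not from any study): each encounter registers as a yes (+1) or no (−1) moment, and the overall impression is simply whichever side outweighs the other.

    def overall_impression(moments):
        # moments: +1 for each positive encounter with the brand,
        # -1 for each negative one (the binary yes/no decisions).
        total = sum(moments)
        if total > 0:
            return "positive"
        if total < 0:
            return "negative"
        return "neutral"

    # A friend's praise, a slow website, a clever ad, a bad review, a smooth demo:
    print(overall_impression([+1, -1, +1, -1, +1]))  # -> positive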

If you expect users to “agree to attend” and ultimately experience your product, one way is to create more positive binary moments around your brand and product than negative ones. Every encounter with your brand weights a user’s interest in one direction or another. It’s clearly in our interests as UX strategists to create positive user experiences in every relevant context possible.

Update: I don’t think I said clearly enough here that “positive” requires an experience to be honest and to the user’s advantage. So I’m saying it now.


Originally posted on alexfiles.com (1998–2018) on January 1, 2011.

Fun is fundamental

design thinking, game elements, psychology

Fun is a seriously undervalued part of user experience (perhaps of any experience). In fact, a sense of play may be a required characteristic of good UX interaction. But too often, I hear comments like the following, seen on ReadWriteWeb:

When you think of virtual worlds, the first one that probably pops into your head is Second Life, but in reality, there are a number of different virtual worlds out there. There are worlds for socializing, worlds for gaming, even worlds for e-learning. But one thing that most virtual worlds have in common is that they are places for play, not practicality. (Yes, even the e-learning worlds are designed with elements of “fun” in mind).

I was surprised to see the concept of play set in tension with practicality, as if they were incompatible, and to read that “even the e-learning worlds” employed fun. Game elements have been used to promote online learning for well over a decade, and used in offline educational design for much longer.

I certainly don’t mean to imply that every web site can be made fun. But it can employ the techniques of play in order to be more fun. As Clark Aldrich observes, discussing learning environments (emphasis his),

You cannot make most content fun for most people in a formal learning program… At best what you can do is make it more fun for the greatest percentage of the target audience. Using a nice font and a good layout doesn’t make reading a dry text engaging, but it may make it more engaging.

The driving focus, the criteria against which we measure success, should be on making content richer, more engaging, more visual, with better feedback, and more relevant. And of course more fun for most students.

It was while developing an educational site for Nortel Networks that I first discovered the value of game elements in design. Deliberately incorporating mini games, an ongoing “quest” hidden in the process, rewards (including surprise Easter eggs), levels, triggers, and scores (with a printable certificate) made the tedious process of learning how to effectively make use of an intranet database much more fun. We also offered different learning techniques, so users could learn by text, video, or audio, as they preferred.

This can apply to non-learning environments as well. Think about it: online games have already done all the heavy lifting in figuring out the basics of user engagement. Some techniques I’ve found valuable in retail, informational, and social media include:

  • Levels. These provide a sense of achievement for exploration, UGC (user-generated content) or accomplishment. Levels can reduce any possible sense of frustration at the unending quest.
  • Unending quest. There should always be a next step for users. This doesn’t mean the user needs to be told that they’ll never be through with the site. Instead, the site should always provide something engaging that leads them on to the next step, and the next, and so forth.
  • Surprise rewards/triggers. These include Easter egg links, short-term access to previously inaccessible documents, etc.
  • Mini games, which can result in recognition or rewards for the user and can provide research data and UGC for the site.
  • Scores, which can encourage competitiveness and a sense of accomplishment.
  • Avatars and other forms of personalization.
  • User-driven help and feedback. Users (particularly engineers, in my experience) love to be experts. Leverage this to support your help forums if you need them.

Online, offline, crunching numbers at work, immersed in a game, sitting in a classroom, or building a barn, a sense of fun doesn’t just add surface emotional value, it frequently improves the quality of the work and adds pleasant associations, making us more likely to retrieve useful data for later application. Perhaps this is why so many artists and scientists have been known for a sense of play. And for most of us, it’s during childhood – the time we are learning the most at the fastest rate – that we are typically our most playful.

All websites are to some extent educational. Even a straightforward retail site wants you to learn what they offer, how to choose an item, and how to pay for it. Perhaps we can take a tip from our childhood and incorporate more fun into the user experience. Then we can learn how best to learn.

Originally posted on former personal blog UXtraordinary.com.

The tyranny of dichotomy

psychology

An informational cascade is a perception—or misperception—spread among people because we tend to let others think for us when we don’t know ourselves. For example, John Tierney (tierneylab.blog.nytimes.com) recently discussed the widely held but little-supported belief that too much fat is nutritionally bad. Peter Duesberg contends that the HIV hypothesis for AIDS is such an error (please note, I am not agreeing with him).

Sometimes cultural assumptions can lead to such errors. Stephen Jay Gould described countless such mistakes, spread by culture or simple lack of data, in The Mismeasure of Man. Gould points out errors such as reifying abstract concepts into entities that exist apart from our abstraction (as has been done with IQ), and forcing measurements into artificial scales, both assumptions that spread readily within and beyond the scientific community without any backing.

Mind, informational cascades do not have to be errors—one could argue that the state of being “cool” comes from an informational cascade. Possibly many accurate understandings come via informational cascades as well, but it’s harder to demonstrate those because of the nature of the creatures.

It works like this: people tend to think in binary, all-or-nothing terms. Shades of gray do not occur. In fact, it seems the closest we come to a non-binary understanding of a concept is to have many differing binary decisions about related concepts, which balance each other out.

So, in the face of no or incomplete information, we take our cues from the next human. When Alice makes a decision, she decides yes-or-no; then Bob, who knows nothing of the subject, takes his cue from Alice in a similarly binary fashion, and Carol takes her cue from Bob, and so it spreads, in a cascade effect.

Economists and others rely on this binary herd behavior in their calculations.
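If it helps to see that cascade run, here is a minimal simulation (my own sketch, assuming the simplest possible herd rule: copy the majority of earlier deciders, and fall back on your own noisy signal only when there is no majority yet).

    import random

    def cascade(n_people, p_correct_signal=0.6, seed=1):
        # Each person makes a binary yes (1) / no (0) choice in sequence.
        random.seed(seed)
        choices = []
        for _ in range(n_people):
            yes = sum(choices)
            no = len(choices) - yes
            if yes != no:
                # Take the cue from the next human: copy the majority.
                choices.append(1 if yes > no else 0)
            else:
                # No cue available: use one's own noisy private signal.
                choices.append(1 if random.random() < p_correct_signal else 0)
        return choices

    # Alice decides first; Bob copies Alice; Carol copies the majority...
    # The whole population lines up behind one early choice, right or wrong.
    print(cascade(20))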

But.

The problem is that people don’t always think this way; therefore people don’t have to think this way. Some people seem to have the habit of critical thought at an early age. As well, the very concept of binary thinking seems to fit too neatly into our need to measure. It’s much easier to measure all-or-nothing than shades of gray, so a model that assumes we behave in an all-or-nothing manner can easily be measured, and is therefore more easily accepted within the community of discourse.

Things tend to be more complex than we like to acknowledge. As Stephen Wolfram observed in A New Kind of Science,

One might have thought that with all their successes over the past few centuries the existing sciences would long ago have managed to address the issue of complexity. But in fact they have not. And indeed for the most part they have specifically defined their scope in order to avoid direct contact with it.

Which makes me wonder if binary classification isn’t its own informational cascade. In nearly every situation, there are more than two factors and more than two options.

The tradition of imposing a binary taxonomy on our world goes back a long way. Itkonen (2005) speaks about the binary classifications that permeate all mythological reasoning. By presenting different quantities as two aspects of the same concept, they are made more accessible to the listener. By placing them within a single concept, the storyteller shows their similarities and uses analogical reasoning to reach the audience.

Philosophy speaks of the law of the excluded middle—something is either this or that, and nothing in between—but this is a trick of language. A question that asks for only a yes or no answer does not allow for responses such as “both” or “maybe.”

Neurology tells us that neurons either fire or they don’t. But neurons are much more complex than that. From O’Reilly and Munakata’s Computational Explorations in Cognitive Neuroscience (italics from the authors, boldface mine):

In contrast with the discrete boolean logic and binary memory representations of standard computers, the brain is more graded and analog in nature… Neurons integrate information from a large number of different input sources, producing essentially a continuous, real valued number that represents something like the relative strength of these inputs…The neuron then communicates another graded signal (its rate of firing, or activation) to other neurons as a function of this relative strength value. These graded signals can convey something like the extent or degree to which something is true….

Gradedness is critical for all kinds of perceptual and motor phenomena, which deal with continuous underlying values….

Another important aspect of gradedness has to do with the fact that each neuron in the brain receives inputs from many thousands of other neurons. Thus, each individual neuron is not critical to the functioning of any other—instead, neurons contribute as part of a graded overall signal that reflects the number of other neurons contributing (as well as the strength of their individual contribution). This fact gives rise to the phenomenon of graceful degradation, where function degrades “gracefully” with increasing amounts of damage to neural tissue.
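A back-of-the-envelope model of what O’Reilly and Munakata describe (my own sketch, not code from their book) shows both gradedness and graceful degradation:

    import math
    import random

    def firing_rate(inputs, weights):
        # A graded "activation": a squashed, real-valued function of the
        # relative strength of many inputs, rather than a boolean.
        drive = sum(w * x for w, x in zip(weights, inputs))
        return 1 / (1 + math.exp(-drive))

    random.seed(0)
    weights = [random.gauss(0.002, 0.01) for _ in range(1000)]  # many small inputs
    inputs = [1.0] * 1000

    healthy = firing_rate(inputs, weights)
    # "Damage" a fifth of the inputs: the rate shifts a little, but nothing
    # breaks outright, because no single input is critical.
    damaged = [0.0 if random.random() < 0.2 else x for x in inputs]
    print(healthy, firing_rate(damaged, weights))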

So, now that we have a clue that binary thinking may be an informational cascade all its own, what do we do about it?


References

Itkonen, E. (2005). Analogy as structure and process: Approaches in linguistics, cognitive psychology and philosophy of science. Amsterdam: John Benjamins Publishing.

O’Reilly, R.C., and Y. Munakata. (2000). Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain. Cambridge, MA: MIT Press.


Originally posted on alexfiles.com (1998–2018) on May 5, 2008.

Messy is fun: challenging Occam’s razor

design thinking, psychology, taxonomy

The scientific method is the most popular form of scientific inquiry, because it provides measurable testing of a given hypothesis. This means that once an experiment is performed, whether the results were negative or positive, the foundation on which you are building your understanding is a little more solid, and your perspective a little broader. The only failed experiment is a poorly designed one.

So, how to design a good experiment? The nuts and bolts of a given test will vary according to the need at hand, but before you even go about determining what variable to study, take a step back and look at the context. The context in which you are placing your experiment will determine what you’re looking for and what variables you choose. The more limited the system you’re operating in, the easier your test choices will be, but the more likely you are to miss something useful. Think big. Think complicated. Then narrow things down.

But, some say, simple is good! What about Occam’s razor and the law of parsimony (entities should not be unnecessarily multiplied)?

Occam’s razor is a much-loved approach that helps make judgment calls when no other options are available. It’s an excellent rule of thumb for interpreting uncertain results. Applying Occam’s razor, you can act “as if” and move on to the next question, and go back if it doesn’t work out.

Still, too many people tend to use it to set up the context of the question, unconsciously limiting the kind of question they can ask and limiting the data they can study. It’s okay to do this consciously, by focusing on a simple portion of a larger whole, but not in a knee-jerk fashion because “simple is better.” Precisely because of this, several scientists and mathematicians have suggested anti-razors. These do not necessarily undermine Occam’s razor. Instead, they phrase things in a manner that helps keep you focused on the big picture.

Some responses to Occam’s concept include these:

Einstein: Everything should be as simple as possible, but no simpler.

Leibniz: The variety of beings should not rashly be diminished.

Menger: Entities must not be reduced to the point of inadequacy.

My point is not that Occam’s razor is not a good choice in making many decisions, but that one must be aware that there are alternative views. Like choosing the correct taxonomy in systematics, choosing different, equally valid analytic approaches to understand any given question can radically change the dialogue. In fact, one can think of anti-razors as alternative taxonomies for thought: ones that let you freely think about the messy things, the variables you can’t measure, the different perspectives that change the very language of your studies. You’ll understand your question better, because you’ll think about it more than one way. And while you’ll need to pick simple situations to test your ideas, the variety and kind of situations you can look at will be greatly expanded.

Plus, messy is fun.

Originally posted on former personal blog UXtraordinary.com.

Zombie ideas

psychology

In 1974 Robert Kirk wrote about the “zombie idea,” describing the concept that the universe, the circle of life, humanity, and our moment-to-moment existence could all have developed, identically with “particle-for-particle counterparts,” and yet lack feeling and consciousness. The idea is that, evolutionarily speaking, it is not essential that creatures evolve consciousness or raw feels in order to evolve rules promoting survival and adaptation. Such a world would be a zombie world, acting and reasoning but just not getting it (whatever “it” is).

I am not writing about Kirk’s idea. (At least, not yet.)

Rather, I’m describing the term in the way it was used in 1998, by four University of Texas Health Science Center doctors, in a paper titled “Lies, Damned Lies, and Health Care Zombies: Discredited Ideas That Will Not Die” (pdf). Here the relevant aspect of the term “zombie” is refusal to die, despite being killed in a reasonable manner. Zombie ideas are discredited concepts that nonetheless continue to be propagated in the culture.

While they (and just today, Paul Krugman) use the term, they don’t explicate it in great detail. I thought it might be fun to explore the extent to which a persistent false concept is similar to a zombie.

  • A zombie idea is dead.
    For the vast majority of the world, the “world is flat” is a dead idea. For a few, though, the “world is flat” virus has caught hold, and this idea persists even in technologically advanced cultures.
  • A zombie idea is contagious.
    Some economists are fond of the concept of “binary herd behavior.” The idea is that when most people don’t know about a subject, they tend to accept the view of the person who tells them about it; and they tend to do that in an all-or-nothing manner. Then they pass that ignorant acceptance on to the next person, who accepts it just as strongly. (More about the tyranny of the dichotomy later.) So, when we’re children and our parents belong to Political Party X, we may be for Political Party X all the way, even though we may barely know what a political party actually is.
  • A zombie idea is hard to kill.
    Some zombie viruses are very persistent. For example, most people still believe that height and weight are a good basis for calculating your appropriate calorie intake. Studies, however, repeatedly show that height and weight being equal, other factors can change the body’s response. Poor gut flora, certain bacteria, and even having been slightly overweight in the past can mean that of two people of the same height and weight, one will eat the daily recommended calories and keep their weight steady, and one will need to consume 15% less in order to maintain the status quo. Yet doctors and nutritionists continue to counsel people to use the national guidelines to determine how much to eat.
  • A zombie idea eats your brain.
    Zombie ideas, being contagious and false, are probably spreading through binary thinking. A part of the brain takes in the data, marks it as correct, and because it works in that all-or-nothing manner, contradictory or different data has a harder time getting the brain’s attention. A zombie idea eats up a part of the brain’s memory, and by requiring more processing power to correct it, eats up your mental processing time as well. It also steals all the useful information you missed because your brain just routed the data right past your awareness, thinking it knew the answer.
  • Zombies are sometimes controlled by a sorcerer, or voodoo bokor.
    Being prey to zombie ideas leaves you vulnerable. If you have the wrong information, you are more easily manipulated by the more knowledgeable. Knowledge, says Mr. Bacon, is power.
  • Zombies have no higher purpose than to make other zombies.
    Closely related to the previous point. Even if you are not being manipulated, your decision-making suffers greatly when you are wrongly informed. You are also passing on your wrong information to everyone you talk to about it. Not being able to fulfill your own purposes, you are simply spreading poor data.

So we see that the tendency to irony is not just useful in and of itself, but useful in helping prevent zombie brain infections. As lunchtime is nearly over, and I can’t think of more similarities, I’m stopping here to get something to eat.

[Exit Alex stage right, slouching, mumbling, “Must…eat…brains.”]

Originally posted on former personal blog UXtraordinary.com.

Smart because I’m stupid, stupid because I’m smart

psychology

Smart because I’m stupid

Some people are smart because understanding comes easily to them. I, on the other hand, might argue that what smarts I have come from what I don’t understand.

Take, for example, mirrors. There is a basic rule about mirrors that many take for granted, and that is when we focus on a mirror image, we focus not on the mirror but on the things it’s reflecting. Optometrists use this all the time to avoid having twenty-foot-long rooms. Using mirrors, light bounces from the eye chart letters twenty feet before hitting the patient’s eyes; hence the 20 in 20/20. (20/20 means that at twenty feet, you see at the same clarity as a normal human. Some few see more clearly; 20/10 means you see an object at twenty feet as clearly as it would appear at ten to most people.)

But this presented problems to me because I could not understand how it worked. Here were my questions:

How does my eye know how far to focus in order to see the other object clearly? If I’m looking at a mirror that’s reflecting something out of my line of sight—a plain cube, say—how would I know, without a context, where to focus to see it at the appropriate clarity for my vision? How would I know if it was five inches on a side, or five feet? All my brain knows is that the cube is some distance farther from me than the mirror. Yet somehow my eye focuses. (Briefly I wondered why, if my brain can “decide” how distant a thing is, it can’t “pretend” it’s closer so my myopic eyes see it clearly. I’m a highly near-sighted person who sees clearly about six inches from my eyes, after which the world begins to blur. Don’t worry, I figured it out.)

Why can’t I take a picture of the mirror itself, or glass, for that matter? Saying it’s because they’re transparent only raises the issue of what, exactly, makes transparency possible in the first place. What kind of thing allows light an unimpeded passage through it, yet can also reflect that light almost perfectly?

It was all too weird for me.

But I understood the basics of refraction, and eventually (I’m ashamed to admit how long this took) I came up with a mental picture to understand this at least partly.

First, I had to acknowledge that transparency was not a focusing issue (more about transparency later). Then I imagined a camera, an object, and a mirror at the points of an equilateral triangle. The mirror is angled so that it reflects the object to the camera, and the camera to the object.

The camera’s view includes both the object and the mirror. But the representation of the object the camera sees via the mirror has traveled twice as far, and so is less clear. As was my understanding, until I thought of this.
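The arithmetic behind that mental picture is easy to check (a sketch with made-up coordinates):

    import math

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Camera, mirror, and object at the corners of an equilateral triangle,
    # one meter on a side.
    camera = (0.0, 0.0)
    mirror = (1.0, 0.0)
    obj = (0.5, math.sqrt(3) / 2)

    direct = dist(camera, obj)                             # 1.0 m
    reflected = dist(camera, mirror) + dist(mirror, obj)   # 2.0 m
    print(direct, reflected)  # the reflection focuses twice as far away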

Then another question occurred to me. If the camera is absorbing both its direct view of the object, and the reflected view, perhaps a piece of film with both those bits of information was more accurate somehow than simply seeing the object clearly. Then I imagined multiple mirrors, each angling a different aspect of the object onto the same point. The results would have much more complete information about the object than a simple photograph. We might not interpret it well on a flat surface. Then I thought of holograms, and suddenly my understanding of the interference pattern that creates a hologram, and the apparent chaos of a piece of holographic film, improved sharply.

Most of the things in my life that I understand well originate like this, with something obvious to others but not to me. And that’s why I say that I’m smart because I’m stupid.

Stupid because I’m smart

Still, I’m also stupid because I’m smart. There are a bunch of cuttlefish in a lab in Pennsylvania that are demonstrably better able to learn than I am. Here’s why:

Jean Boal, an associate professor at Millersville University, studies cuttlefish intelligence, as well as that of other cephalopods. She has devised a fairly complex test, demanding not only learned association, but unlearning it and learning a new one, then going back and forth, in a process called serial reversal learning. The cuttlefish are pretty good at it.

The cuttlefish goes through a door, into a tank within a tank. It’s small, with opaque walls. To both sides of the cuttlefish are two openings into the larger tank, and in front of it is an object such as a plastic plant.

The two openings are marked differently, one framed with broad, vivid stripes, one a solid color. Both appear to be open to the cuttlefish, but one is closed using a transparent piece of plastic. If the object in front of the cuttlefish is a plastic plant, then the solid-framed doorway is open. If the object is a rock, then the striped doorway is open.

Cuttlefish are smart enough to figure this out, and act accordingly to obtain access to the rest of the tank.

Here’s why I’m not as swift as a cuttlefish. There are two good routes to my workplace, one on the highway, one on a street paralleling the highway. If I stepped out the door to see a mockingbird sitting on my mailbox, and this was followed by a wreck on the highway necessitating my taking the street route, I would not correlate these things. Even if every time the mockingbird was on the mailbox there was a wreck on the highway, I sincerely doubt I would notice the correlation. For one thing, I’m smart enough to have a lot of things on my mind, which might make noticing and retaining and associating the data more difficult. For another, I’m informed enough to understand the basics of physics, and this tells me that birds on mailboxes are not catalysts for car wrecks. So the very simple association the cuttlefish makes would be beyond me.

Dr. Boal says that “the ultimate question is, am I smart enough to find out how smart they are?” Well, Dr. Boal, I can tell you one thing—they are most definitely smarter than I.

A last note

Transparency and reflection are still interesting to me. Think about it. Here’s a piece of glass, and it allows light to pass through in a straight line, unimpeded so far as we can tell (I’m assuming no impurities are present to tint or otherwise distort the glass). We look through the piece of glass, and we see what’s on the other side. But that same piece of glass can, if I view it from the right perspective, reflect all that light instead of letting it pass through. And my brain lets me see it clearly, despite the focusing distance being farther than the glass itself. Water and other things are similarly challenging.

It’s as though the property of the transparent object—allowing light through, or reflecting it—is dependent on the perspective and behavior of the observer. I know this is very obvious. But it seems to me that this is a good analogy for some of the more mysterious behaviors in the universe. It’s not that something magical is happening. It’s that we’re seeing different facets of the same thing, and we just can’t see the thing itself yet. Because it’s transparent.


Originally posted on LiveJournal.

Why blog?

psychology, writing

So, I tell my beloved I’ve become a user on LiveJournal, and he doesn’t get the appeal. It seems too much like a vanity project. Now, this is the man who normally is the first to “get” anything about me, so I started thinking. At the risk of rationalizing my vanity, I thought I’d work it out here.

At a glance, these reasons appear:

  • It’s a way to make the world smaller; a way to find people with common interests who don’t live in the same neighborhood, or even the same continent.
  • In a media-driven culture, placing ourselves online makes us feel more participatory, as opposed to having little or no influence on the world. [Note: “ourselves” and “we” in this entry mean the blogging community, not the voices in my head ;-)]
  • It’s a safe place to express feelings not acceptable in the workplace, etc.
  • It’s a way of dealing with psychological issues without having to confront the fact that you’re dealing with psychological issues. You’re just sharing.
  • It’s a way of dealing with psychological issues deliberately. Writing and speaking thoughts gives us more ability to analyse and “reprogram” them.*

Ok, so there are some reasons, and probably all of them are true for me to some degree. And just writing it all out has made me feel better. So, regardless of the motive, the practice is useful.

* There was an interesting study done on people who experienced sudden catastrophic trauma (such as the WTC tragedy, or the Oklahoma City Federal Building bombing). They were asked when they first spoke about the trauma with someone – anyone, whether friend, family, police, counselor, or stranger – and how they felt about it. The researchers found that a year later, people who spoke about their experience before they went to sleep displayed less severe PTSD (post-traumatic stress disorder) symptoms than people who slept first. Less pain, fewer night terrors, fewer panic attacks, and so forth.

No reason was given, but my idea is that sleep processes information whether we like it or not, and processes it not only on our conscious level but on all those primitive, emotional, fight-or-flight levels as well. Speaking about it doesn’t remove the fear or anger, but does help you sort out the event a little more clearly, and structure the direction of the “hardwiring” that happens while we sleep. Or maybe “firmwiring” is a better phrase, since these are neurons we’re discussing.


Originally posted on LiveJournal.