Emotional cues in virtual spaces

Filed under Second Life, Usability.

While frequently used to great effect in prose, text is a notoriously poor medium for conveying the emotional metadata humans rely on for face-to-face conversation. How do we know exactly how to interpret someone else’s words, stripped of their emotional context? What was intended as a simple request for information may be taken by one reader as a joke, while another may see it as a personal attack.

The system used by many modern internet users was proposed in the early 1980s by Scott Fahlman, a computer scientist at Carnegie Mellon University. He suggested that users append a short series of characters, evoking the iconic smiley face, to show that their words were meant lightheartedly: :-)

Though the proposal was viewed at the time as somewhat tongue-in-cheek, the smiley quickly caught on, and today it is as recognizable as the letters “www”, demonstrating its effectiveness in clarifying human-to-human interaction in text-based communication. In the decades since, internet users have extended the original system with many other emoticons conveying displeasure, sadness, disgust, exhaustion, and more, adding much-needed emotional context to their chat and email conversations.

Just as virtual environments like Second Life are frequently described as updated MUDs or chatrooms, user interactions within them can be similarly enhanced by body language and gestures modeled on those of real-world humans. Consider the image of an avatar facing another and smiling, looking away disinterestedly, or standing with arms crossed; each conveys a radically different message even when associated with the same text.

But what about cases in which an avatar’s body language is injected into our communications without our explicit permission? The Second Life forums have seen countless posts from newer users, angry and hurt at being deliberately ignored by an established resident with a disdainful, superior manner.

These new users describe an incident that usually follows a set pattern. They approached a Linden employee or an older resident, usually a fairly high-profile content creator, and greeted them. The established resident turned to face them, looked down their nose, and turned back to what they were doing. In fact, this is a client-side avatar animation: when chat is “heard” on the client, avatars appear to turn their heads to face it, without any input from the user controlling that avatar.
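The head-turn the new user reads as a snub can be modeled in a few lines. The Python sketch below uses entirely invented names (it is not the actual Second Life viewer code) to show how a purely client-side animation trigger produces a social signal without the controlling user ever being involved:

```python
# A minimal sketch, NOT real viewer code: all names here are invented to
# model the behavior described above, where the client turns an avatar's
# head toward a chat source with no input from the avatar's user.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Avatar:
    name: str
    look_at: Optional[str] = None   # whom this avatar's head is facing
    user_notified: bool = False     # the controlling user is never told

def on_chat_heard(listener: Avatar, speaker: Avatar) -> None:
    """Client-side reaction to overheard chat: turn the head, nothing more."""
    listener.look_at = speaker.name
    # No event reaches the listener's user; if they are alt-tabbed away,
    # the avatar appears to glance over and "snub" the speaker on its own.

resident = Avatar("Established Resident")
newcomer = Avatar("New User")
on_chat_heard(resident, newcomer)
print(resident.look_at)        # -> New User (the avatar turned...)
print(resident.user_notified)  # -> False (...but its user never knew)
```

The key design point the sketch illustrates is the asymmetry: the animation fires on the listener's client for the benefit of *observers*, while the listener's own user receives no notification at all.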

From the perspective of the Linden or longtime resident, they are unlikely to have even known anyone approached them, as they were busy doing something else: programming, browsing the web, or working on textures, leaving behind a puppet with its strings cut.

This is an example of a “subconscious” message injected into the communications channel. While no information has deliberately been conveyed, to a human observer a clear message has been sent. The body language of the avatar has effectively spoken for its user. Yet, to the recipient of this message, the avatar is the human. From their perspective, they’ve just been snubbed by some standoffish person who clearly can’t be bothered to give them the time of day.

Next: Deliberate subconscious filters and their implications.

9 Responses to “Emotional cues in virtual spaces”

  1. Orlie Omegamu

That is funny, because the avatar turns to look at us so quickly, it should be fairly obvious that it is an automatic reaction. Perhaps some users experience enough lag that it seems to be more of a thoughtful act, as though the other avatar saw our chat message, and then decided, hmmm, this person I want to ignore is talking to me, let me press a key that makes me look at them, so that they know I am ignoring them. More likely, though, we are just used to responding emotionally to the body language, and some of us stop there, without thinking of the technical underpinnings of SL.

  2. Mera

So is there a solution? Do we force ourselves into busy mode as an indicator to others? I don’t expect the new person to pick up on the visual cues right away, so I can’t blame them for the way they feel. I will say that I typically lock on to an object with my camera. This prevents me from looking at others when they talk. I can’t imagine that would help, though. I don’t even acknowledge their existence. :)

  3. Catherine Omega

    Yes, I do believe there’s a solution, and that is to more intelligently assess what’s going on. The current model is flawed, which was my point. It needs to change to become more effective.