While frequently used to great effect in prose, text is a notoriously poor medium for conveying the emotional metadata humans rely on for face-to-face conversation. How do we know exactly how to interpret someone else’s words, stripped of their emotional context? What was intended as a simple request for information may be taken by one reader as a joke, while another may see it as a personal attack.
The system used by many modern internet users was proposed in the early 1980s by Scott Fahlman, a computer scientist at Carnegie Mellon University. He suggested that users employ a short series of characters, evoking the iconic smiley face, to signal that their words were meant lightheartedly: :-)
While the proposal was viewed at the time by many as somewhat tongue-in-cheek, the smiley quickly caught on, and today it is as recognizable as the letters “www” — clear evidence of its effectiveness in clarifying human-to-human interaction in text-based communication. In the decades since, internet users have extended the original system with many other emoticons conveying displeasure, sadness, disgust, exhaustion, and more, inserting much-needed emotional context into their chat and email conversations.
Just as virtual environments like Second Life are frequently described as updated MUDs or chatrooms, user interactions within them can be similarly enhanced by body language and gestures modeled on those of real-world humans. Consider an avatar facing another and smiling, looking away uninterestedly, or standing with arms crossed; each conveys a radically different message even when paired with the same text.
But what about cases in which avatars’ body language is injected into our communications without our explicit permission? There have been countless posts to the Second Life forums by newer users, angry and hurt at the disdainful, superior manner of an established resident and at being deliberately ignored.
These new users describe an incident that usually follows a set pattern. They approached a Linden employee or an older resident, usually a fairly high-profile content creator, and greeted them. The established resident turned to face them, looked down their nose, and turned back to what they were doing. In fact, this is a client-side avatar animation: when chat is “heard” by the client, avatars appear to turn their heads toward its source without any input from the user controlling that avatar.
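The behavior described above can be sketched as a simple event handler. This is only an illustrative model — the class and method names here are hypothetical, and the actual Second Life viewer implements this quite differently — but it captures the key point: the head turn fires on a received chat event, not on any action by the controlling user.

```python
# A minimal sketch of client-driven avatar body language, as described above.
# All names (Avatar, on_chat_heard) are hypothetical illustrations, not the
# real Second Life viewer API.

class Avatar:
    def __init__(self, name):
        self.name = name
        self.look_at = None  # what the avatar's head is currently oriented toward

    def on_chat_heard(self, speaker_position):
        # Fired automatically by the client whenever nearby chat is received.
        # Note that the user controlling this avatar provides no input here:
        # the "acknowledging glance" happens entirely on its own.
        self.look_at = speaker_position


resident = Avatar("EstablishedResident")
resident.on_chat_heard((12.0, 4.5))  # a newcomer says "hello" nearby
print(resident.look_at)              # head now points at the speaker
```

The point of the sketch is that `look_at` changes without the user ever touching the keyboard — which is exactly why the subsequent “turning away” reads as a deliberate snub to the newcomer, even though no one intended it.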
From the perspective of the Linden or longtime resident, they were likely unaware that anyone had approached at all: they were busy programming, browsing the web, or working on textures, having left behind a puppet with its strings cut.
This is an example of a “subconscious” message injected into the communications channel. While no information has deliberately been conveyed, to a human observer a clear message has been sent: the body language of the avatar has effectively spoken for its user. Yet, to the recipient of this message, the avatar is the human. From their perspective, they’ve just been snubbed by some standoffish person who clearly can’t be bothered to give them the time of day.
Next: Deliberate subconscious filters and their implications.