Visual Social Semiotics – Harrison

Harrison, Claire. “Visual Social Semiotics: Understanding How Still Images Make Meaning.” Technical Communication 50.1 (2003): 46.

This article, while focusing on still images and the way they make meaning, is a discussion of visual social semiotics and therefore has many applications to video as well. The concept of social semiotics also relates to my research on the way we use gestures, which constitute visual communication, to form meaning.

Semiotics is, essentially, the study of signs, including the idea that a sign is a signifier of a signified object or idea. Visual social semiotics, then, is the study of visual signs (still or moving images, gestures, facial expressions, etc.) as meaning-making signs, and it holds that all interpretation, the meaning we make from signs and other forms of communication, is culturally and socially constructed. This is to say that the meaning of a sign does not exist in and of itself. Rather, people create the meaning of any sign, and this meaning is based on the individual, social, and cultural experiences of each person.

Harrison acknowledges, without detailing all the factors that brought it about, that “readers/users no longer rely solely on written text for comprehension: they absorb and process all that they see within a document to create meaning for themselves.” She references Robert Horn’s definition of this multimodality, which he deems visual language:

[T]he tight coupling of words, images, and shapes into a unified communication unit. “Tight coupling” means that you cannot remove the words or the images or the shapes from a piece of visual language without destroying or radically diminishing the meaning a reader can obtain from it. (1999, p. 27) (46)

This definition can certainly extend to video as well as to the still images to which the author refers. To apply it accurately to video, however, one would need to add audio to the list of modes, extending the discussion to one of “audio visual social semiotics.” [Did I just coin that?] There is still a tight coupling of all these modalities within a single communication method, and to remove any one of them is to radically diminish or destroy the meaning of the message being conveyed; audio simply becomes one of those modes.

Upon first consideration, “audio” may not fit within the general perception of visual social semiotics. Semiotics is based on signs. If I draw a picture of a table, the actual table is the signified, and the drawing is the sign or signifier; this is within the system of two-dimensional drawing. If I were to write, “the table is blue,” the actual table is the signified and the word “table” in my statement is the signifier; this is in the semiotic system of the written English language. Similarly, that same statement, if spoken, can serve as an example of audio semiotics in which there is an audible signifier (sign) of the signified object.

Harrison details the idea of the representational metafunction, which refers to the people, places, and objects within an image (the represented participants or RPs) and the meaning that they convey. Again, she is really referring to still images. However, the concepts apply directly to the discussion of RPs in video as well.

The interpersonal metafunction
This category is about the actions of the individuals within the image and how they engage the viewer.


For each feature and its processes described below, the description is followed by its application to the OVC.

Image Act and Gaze:
The image act involves the eyeline of the RP(s) in relation to the viewer.

Demand: The RP is looking directly at the viewer. A demand generally causes the viewer to feel a strong engagement with the RP.

Offer: The RP is looking outside the picture or at someone or something within the image. In this case, the RP becomes an object of contemplation for the viewer, creating less engagement than that of the demand.

OVC application: The students generally look directly at the camera, which, from the viewer’s perspective, means the RP is looking right at the viewer. When not looking into the camera, the RP might look down at notes or beyond the camera at some distraction. In those moments, there is a feeling of lost connection, and the RP becomes objectified as an impersonal video character.
Social Distance and Intimacy:
Social distance is determined by how close RPs in an image appear to the viewer, thereby resulting in feelings of intimacy or distance.

The viewer can see an RP in six different ways.

  • Intimate distance: The head and face only
  • Close personal distance: The head and shoulders
  • Far personal distance: From the waist up 
  • Close social distance: The whole figure
  • Far social distance: The whole figure with space around it 
  • Public distance: Torsos of several people

OVC application: With the OVC in the AOC, there is generally a close personal distance. Students sit at their desks or with a laptop, so the distance is almost forced by the space between the user and his or her monitor.

Occasionally, a student posts a video at intimate distance or at far personal distance.

Perspective–The Horizontal Angle and Involvement:
This angle refers to the relationship between the position of the RP(s) and the viewer.
  • The frontal angle: When an RP is presented frontally to the viewer. This angle creates stronger involvement on the part of the viewer as it implies that the RP is “one of us.”
  • The oblique angle: When an RP is presented obliquely to the viewer. This angle creates greater detachment since it implies that the RP is “one of them.”

OVC application: The students invariably shoot their OVC videos from the frontal angle. Since these videos are conversational, students do not get too artsy with them.

Perspective–The Vertical Angle and Power:
There are two possible vertical-angle relationships: 1) that of the RP(s) and the viewer, and 2) that between RPs within an image. 

  • High angle: The RP “looking up” at the viewer has less power.
  • Medium angle: The RP “looking horizontally” has equal power.
  • Low angle: The RP “looking down” at the viewer has more power.

OVC application: Rarely does a student use the high angle, although it has happened. Most videos are shot at the medium angle, since students sit at their desks to record them; in this way, the camera is more or less at eye level. Many of the videos are shot at a low angle, which I attribute to the fact that students are using cameras built into their laptops, which they actually have on their laps when sitting on a bed or the floor.

Considering the metafunctions noted above, students most often record their videos at close personal distance and look directly into the camera, which is at eye level. In this way, it is not unlike sitting across a table from someone. That they usually look directly into the camera engages the viewer. Therefore, while the viewer is not actually across the table from a live person, elements of the setting can engage the viewer in many of the same ways, making it, at times, feel like they are engaged in a conversation.
