Factors of Distraction in a One-Way-Video, Two-Way-Audio Distance Learning Setting

January 29th, 2010

Briggs, Lowell A., and G. Dale Wagner. “Factors of Distraction in a One-Way-Video, Two-Way-Audio Distance Learning Setting.” PAACE Journal of Lifelong Learning 6 (1997): 67-75.

Published in 1997, this article is fairly dated, particularly since Briggs and Wagner cite works that were relatively contemporary at the time but precede their own by ten or more years. That said, the article makes a number of excellent points that are either directly relevant to the topic of online video use in distance classrooms or can at least be adapted to apply to it.

The authors studied …

Characterizing Video Responses in Social Networks

January 27th, 2010

Benevenuto, Fabricio, et al. “Characterizing Video Responses in Social Networks.” arXiv preprint 0804.4865.

Benevenuto et al. characterized over 3.4 million videos and 400,000 video responses collected from YouTube over a 7-day period. Among the reasons they find their characterization interesting, they cite a sociological one, “relating to social networking issues that influence the behavior of users interacting primarily with stream objects, instead of textual content traditionally available on the Web” (p. 1?).

This article is relevant to my research, since it is a study of online video and video responses. However, …

Crossing Textual and Visual Content in Different Application Scenarios

January 24th, 2010

Ah-Pine, J., et al. “Crossing Textual and Visual Content in Different Application Scenarios.” Multimedia Tools and Applications 42.1 (2009): 31-56.

This article lies largely outside the scope of my research and borders on irrelevant to it. It discusses two approaches to text-image information processing in multimodal scenarios. In doing so, the paper is rather thick with formulas and code describing methods by which multimodal documents can be automatically scanned and various types of information (text, image, video, audio, etc.) extracted and coded.

However, I draw on this article for a few points the authors make about the current state of multimodality on the Web and about how we now think differently about the interaction of visual content (image and video) and text.
