As the amount of video data in digital libraries increases, support for fast and easy access to this information has become necessary. Our approach is to empower users with direct control of video surrogates and provide interaction flexibility. A video browsing interface prototype using a slide show-style presentation of video key frames was built and tested for user performance and subjective satisfaction. The interface allows display rates to be adjusted interactively. Subjects in this preliminary study performed two browsing-related tasks, object identification and gist determination, at display rates of 1, 4, 8, 12, and 16 key frames per second (kfps). A possible functional limit in accuracy for object identification (OI) was detected between 8 and 12 kfps. Performance for gist determination (GD) tended to degrade with increased display rates. However, no significant performance differences were detected. Furthermore, it was observed that lower rates were required for object identification than for gist determination. Suggestions for designing fast video browsing interfaces are provided.
Website videos are a natural for event videographers. We use them to demonstrate our work to prospective clients, and they have proven to be a vital marketing medium to showcase our range of products. We might even post short video testimonials from happy clients or put our own talking heads on our sites.
This article analyses amateur video editing software and considers its use within a broadly defined context of cultural practices, or 'everyday cinematic life'. The authors argue that such software must be understood in relation to specific cinematic discourses and in the context of longstanding promises of popular participation in 'movie-making'. They situate the historically sedimented nature of audiovisual experience in terms of a genealogy of non-commercial film editing and filmmaking, and analyse the phenomenological mixture of constraints and potentials embodied by individual amateur filmmakers and implemented in popular consumer-level editing software. The figure of the video editor (the software and the individual), the authors argue, incorporates a compromise inherent to cinematic life between the propensity to 'make' by appropriating forms and materials from the cinema, and the material, economic and legal constraints on making that preserve the organization of entertainment industries.
This article helps technical communicators become better informed producers of interactive, cinema-like new media objects (help systems, public information and ordering kiosks, promotional technical presentations on the web, and so on) by providing a summary of how cinema works, and then by proposing a few ways that some basic cinema editing and display techniques can be integrated into on-screen technical communications practice. The author makes the claim that if we are to begin thinking and working like film makers, the fundamental poetics and information designs we use in our new media design and development work must also change.
What roles does writing play in larger communications also involving physically discrete but related texts of other media? How may the properties of what we normally consider writing be modified in such communications? The intermedial context of much workplace writing has been largely overlooked. This study of an insurance company's communication department describes how (a) three written products served as parts of larger messages in multiple media campaigns, (b) an attempt to combine composing processes for print and video failed, and (c) conflicting generic and stylistic properties of other media caused an intermedial graft to fail. The author's study shows that in the right circumstances, a multiple media "overtext" can override some of the rules that govern what and how one communicates in an individual medium. When a written text is involved, its nature may change as it forms symbiotic relationships with texts of other media.
The videos have been collected by the Inclusive Digital Economy Network. They present many of the challenges that older users often face when interacting with new technologies. The purpose of these videos is to highlight the issues in an insightful and thought-provoking way.
Digital video production in composition courses is both new and exciting. However, this newness comes with challenges and obstacles as well as more questions than answers. What exactly is so fun, attractive, liberating, and transgressive about digital video work? Is it the time invested in editing minutes or hours of footage into seconds of film clips? Is it the sheer thrill of having the power to overlay images, words, and sounds to produce an effect impossible in the real world and highly effective in the multimodal, rhetorical one? Is it that the composition teacher is finally asking for a product where grammar (understood as punctuation and sentence structure) is mostly invisible? Is it the crisis moments when the software, the hard drive, and/or the accompanying hardware crashes and we are still left with a classroom full of students to teach? Or, is it the mesmerizing effect of the screen that promises sustained attention to a composition assignment? The answer, we think, in all cases is 'yes', yet sometimes that yes is a hesitant one.
Nobody really expected the stranglehold that Apple, Microsoft and Real had on the web streaming market in 2003 to be broken. Yet by Spring 2005, just 18 months after that presentation, that is exactly what had happened. Those three web video delivery technologies practically vanished, replaced almost entirely by Flash Video. This is not to say QuickTime and Windows Media are dead technologies. They aren’t by a long shot, but when it comes to putting video on the web, the Flash Player has rapidly become the only game in town.
Despite the proliferation of books and articles dealing with usability testing as a sub-field of technical communication, there remains one artifact that is underdefined: the usability highlights video. Several sources suggest that usability testers should deliver a video to their clients, but there is no common description of the contents of this video or the rhetorical purpose it fulfills. In this article, we examine the use of the highlights video as described in the literature, but we go further to discover some ways practicing usability professionals understand the role of the highlights video. Through the use of generic conventions, rhetorical theory, and industrial practices, we attempt to draw conclusions that point to some common uses of the highlights video that can instruct both teachers in the usability classroom and practicing usability experts as they create videos for client projects.
Screencasting has a problem: it hasn't evolved all that much over the 10 years or so since its inception. We still record the computer screen from a stationary position (dead centered), and we still present this flat, banal presentation to users sitting at their computers, which in and of itself presents problems (you're looking at a computer screen on a computer screen; where does one end and the other begin?).
Screen video alone is not enough. You need to humanize your content by getting in front of the camera and engaging your audience. And no, I’m not talking about long-winded monologues either. Several 5-7 second talking-head elements can go a long way toward winning over and maintaining the interest of your audience.
For about five years I worked for AT&T as a full-time Instructional Designer, and my work involved creating training videos for AT&T employees. I loved it. It was creative, challenging, and not stressful at all.
With so much training being done on computers (along with other tasks being done while training is taking place on that same computer), it’s important to know some best practices for developing training and other modules with screencasts. Amy Tehan demonstrates tips and tricks for making an effective screencast that will hold the viewer’s attention and get the message across.
In a continuation of a previous introductory article about screencasting, Archee continues the discussion by delving deeper into the history, benefits, usefulness, and future of this powerful technical communication tool.
In general, screencasting is a three-step process: capture of audio and video, editing, and production of a compressed deliverable. Camtasia combines all three functions in a single, integrated application, but in principle they're separable. I can imagine using Camtasia (or an equivalent) for capture, Premiere (or an equivalent) for editing, and Camtasia (or an equivalent) to produce a compressed .SWF file.
Screencasting, or sharing your virtual desktop via video presentation, has exploded in popularity with the advent of podcasting, and gives you the ability to bring the classroom feel to a media presentation that can be delivered over the Internet. The medium of screencasting is readily available to everyone and with a few tools of the trade you can be ready to produce your own.
In this all-day seminar we'll explore the basic concepts in the grammars and syntax of kinetic sight-and-sound media: film, video, and multimedia. We'll not discuss how to write scripts. Rather, we'll concentrate on learning how to encode information into kinetic visual images using filmic design techniques. Throughout this seminar we'll view and critique award-winning films and videos, and explore a multimedia flowchart to see how others have applied such filmic techniques to solve specific communication problems.
In this article, I’ll discuss four design-related areas: how to create a simple set for in-house use; how to choose the best background for location shoots like case studies and testimonials; current trends in set design for internet-only media sites; and how to dress your subjects for optimum compression. Many of the set design principles discussed in this article relate to your distribution data rate. If the bitrate of the video you’re delivering is very high, say in the 400Kbps range for 320x240 video or 650Kbps or higher for 640x480, you have a lot more flexibility, since the compressed quality of your video will remain quite high. Once you sink below these rates, quality degrades. Choosing a poor background or set will only make the problem worse.
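The bitrate thresholds above can be made concrete with a rough bits-per-pixel calculation, a common heuristic for comparing encoding budgets across resolutions. This is a sketch, not taken from the article: the 30 fps frame rate and the function name are illustrative assumptions.

```python
def bits_per_pixel(bitrate_bps, width, height, fps):
    """Average compressed bits available per pixel per frame.

    Higher values mean more encoding headroom, so busy backgrounds
    and fine detail survive compression better.
    """
    return bitrate_bps / (width * height * fps)

# The article's two example budgets, assuming 30 fps:
print(bits_per_pixel(400_000, 320, 240, 30))  # ≈ 0.17 bpp at 320x240
print(bits_per_pixel(650_000, 640, 480, 30))  # ≈ 0.07 bpp at 640x480
```

Note that by this measure the 640x480 stream is actually more tightly compressed per pixel than the 320x240 one, which is why set and wardrobe choices matter even more as resolution goes up.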
A screencast is a screen capture of the actions on a user's computer screen, typically with accompanying audio, distributed through RSS. In the same way that a screenshot is a static representation of a computer screen at a point in time, a screencast captures what happens on a monitor over a period of time. The audio track can be the sound from an application being demonstrated, a narrative from the presenter, or background audio from another application. Screencasts can be produced in various formats, and users generally watch them streamed over a network.
Freelance writers, bloggers and independent journalists yearning to use video on the Internet, grab your PDAs. Use these tips to help you begin shooting and editing your own Web video stories.