This report discusses the 'AHA' system for presenting HTML in audio for blind users and others who wish to access the WWW non-visually. AHA is a framework and set of suggestions for HTML presentation based on an initial experiment. Further experimentation and revision of the system are planned.
Technical communicators are skilled listeners. Whether interviewing subject matter experts or working on teams, good communication is essential. But if you have a hearing loss, assistive listening systems (ALSs) can help.
Audacity is a popular sound recorder and audio editor. It is a capable program while still being easy to use. The majority of users are on Windows, but the same Audacity source code also compiles and runs on Linux and Mac.
Audacity is a free, cross-platform, multi-track audio editing program from SourceForge.net. It lets you record, edit, and mix an unlimited number of tracks. Audacity runs on Windows (98 through XP), Mac OS X, and Linux.
Audio signals also help us interact with our environment. Some of these signals are designed: We wake to the buzz of the alarm clock, answer the ringing telephone, and race to the kitchen when the shrill beep of the smoke alarm warns us that dinner is burning on the stove. Other audio signals are not deliberately designed, but help us nonetheless. For instance, we may know the proper sound of the central air conditioning starting, the gentle hum of the PC fan, or the noise of the refrigerator. So, when these systems go awry, we notice it immediately--something doesn't sound right. Likewise, an excellent mechanic might be able to tell what is wrong with a car engine just by listening to it run.
The AHDS made audio recordings of recent seminars with the aim of transcribing the recordings, and presented them to seminar chairs to facilitate their task of completing reports on each event. This case study looks at some of the issues that occurred as the AHDS recorded and transcribed the material from these seminars. While its findings are based on roundtable seminars, some of them may also be of use to those doing other types of audio recording, such as interviews and field notes.
A fluid text is any work that exists in multiple versions. What are the ethics and legality in the creation, sharing, and ownership of textual versions? What are the boundaries of textual appropriation? How does technology abet appropriation; how might it assist in the useful designation of boundaries? Is the law keeping up?
We present a public usability study that provides preliminary results on the effectiveness of a universally designed system that conveys music and other sounds into tactile sensations. The system was displayed at a public science museum as part of a larger multimedia exhibit aimed at presenting a youth perspective on global warming and the environment. We compare two approaches to gathering user feedback about the system in a study that we conducted to assess user responses to the inclusion of a tactile display within the larger audio-visual exhibit: in one version, a human researcher administered the study, and in the other, a touch screen computer was used to obtain responses. Both approaches were used to explore the public's basic understanding of the tactile display within the context of the larger exhibit.
It is hard to make a hat that fits all heads. If one were made, most people would find it uncomfortable. The same is true of web site design: web developers face this issue when creating pages for general use. For deaf and hearing-impaired people, special technologies should be applied to ease web browsing and searching. This report focuses on accessibility for these users.
Because of limited awareness around Deafness and accessibility in the web community, it seems plausible to many of us that good captioning will fix it all. It won’t. Before we can enhance the user experience for all deaf people, we must understand that the needs of deaf, hard of hearing, and big-D Deaf users are often very different.
Lack of inflection pretty much defines the reading voice. If you read a paragraph of text in a normal reading voice, you won’t hear much inflection. But if you listen to a real conversation, or especially if you listen to actors on TV, their voices move up and down the scale with a lot more inflection. It seems the more emotion you add to what you’re saying, the more inflection you end up including.
One of my biggest problems when narrating a screencast is that my throat gets all clogged up. I have to hit the pause and resume key every minute or so to clear my throat. Voiceover actors have learned to deal with this problem, since they often don't have the benefit of a pause and resume key. You can reduce the amount of phlegm that accumulates in your throat chiefly by doing two things.
Getting close to your microphone usually results in something called “the proximity effect.” As you get close, most microphones amplify your voice in a rich, deep way. The proximity effect can make you sound like a late-night DJ. Some microphones give you the best proximity effect when you’re practically kissing the mic. Unfortunately, as you get closer to a microphone, the microphone starts to pick up more sounds from your mouth.
The final tip in my list of techniques for developing a personal voice in audio is to breathe correctly. This is actually the hardest technique for me, so I have saved it for the end. Strangely, in normal conversation, most of us don’t have any trouble breathing. But when we start recording voiceovers, we start talking a little faster, with more energy and fewer pauses.
For several months I’ve been looking for a quiet room to record screencasts at my work. Our building has four floors for more than 600 IT professionals. I investigated more than 20 conference rooms, poked my head in empty offices, walked around unfamiliar floors, inquired here and there. When people see me looking, they don’t understand what I mean by a “quiet” room. What does quiet mean?
One of my first recommendations for achieving a natural, believable voice is to employ more free narration rather than always reading a script. I recommended this because all the video tutorials on Lynda.com are narrated at the same time as they are recorded, and the less you read, the more natural your voice sounds. However, I realize that unscripted narration, even just a few sentences, can be problematic.
I’ve postponed writing about microphones for several reasons. First, there are hundreds of different microphones suited for all kinds of situations, from vocal music to kick drums to broadcasting and more. Also, microphones can get expensive, and not everyone has the same budget. So there is no right voiceover microphone for every person and situation. However, I’ll try to present a simplified view of microphones.
Voiceover professionals often recommend that you smile while you narrate. Smiling injects a touch of warmth and charisma in your voice. Just a few touches here and there can make the entire tone of your voice noticeably warmer.
Traditionally, web applications are accessed via a single mode interface; information is presented and captured with text. However, one can additionally use a voice browser to navigate the Internet. One can navigate or access 'hands free' Internet applications from anywhere; you are not restricted to the desktop or a portable computer. VoiceXML is a language for Internet telephony applications and is based on the XML language. VoiceXML can 'speech-enable' an existing web application to be used through a conversational interface, providing a more natural way of interaction between users and Internet applications.
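As a rough illustration of the conversational interface this abstract describes, here is a minimal VoiceXML document sketch. The element names (`vxml`, `form`, `field`, `prompt`, `filled`, `submit`) come from the VoiceXML 2.0 specification; the prompt wording, the `city` field, the grammar file, and the example.com URL are illustrative placeholders, not part of the original work.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form id="weatherQuery">
    <!-- The voice browser speaks the prompt, then listens for input -->
    <field name="city">
      <prompt>Welcome. Which city's weather would you like to hear?</prompt>
      <!-- Recognized utterances are constrained by a speech grammar -->
      <grammar src="cities.grxml" type="application/srgs+xml"/>
      <filled>
        <!-- The recognized value is submitted back to the web application,
             just as a form field would be in a visual browser -->
        <submit next="http://example.com/weather" namelist="city"/>
      </filled>
    </field>
  </form>
</vxml>
```

In this sketch, a caller's spoken answer plays the role of a typed form field, which is how VoiceXML can 'speech-enable' an existing web application without changing its server-side logic.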
Previous research indicates that voice annotation helps reviewers to express the more complex and social aspects of a collaborative writing task. Little direct evidence exists, however, about the effect of voice annotations on the writers who must use such annotations. To test the effect, we designed an interface intended to alleviate some of the problems associated with the voice modality and undertook a study with two goals: to compare the nature and quantity of voice and written comments, and to evaluate how writers responded to comments produced in each mode. Writers were paired with reviewers who made either written or spoken annotations from which the writers revised. The study provides direct evidence that the greater expressivity of the voice modality, which previous research suggested benefits reviewers, produces annotations that writers also find usable. Interactions of modality with the type of annotation suggest specific advantages of each mode for enhancing the processes of review and revision.