Marketing was once viewed more as art than science, but marketers are increasingly interested in measuring performance. Like it or not, there is a new wave of accountability in the world of marketing, and if you're not prepared, you could get swept under it. Companies are increasingly concerned with ensuring that all activities are profitable. As a result, each dollar invested in marketing is being challenged to demonstrate bottom-line performance. New forms of marketing, escalating ad costs, and tools that purport to measure marketing effectiveness have all contributed to the pressure on traditional media to "prove its worth."
Entrepreneurship is THE economic mode of the digital age, and entrepreneurship is defined by risk. Students who will become workers must be comfortable with, even engaged by, risk-taking.
I have judged a fair number of national and international product design competitions (five in the past three years alone), and each has made the same procedural mistake: products are assembled and categorized, judging criteria are devised, reputable judges are assembled, and yet we judges never see or touch the products in person. Instead, we receive a set of written documents describing each product, its intended function, and its design process. Imagine an art contest conducted by email and you get the gist of what's going on out there.
I fully acknowledge that a whole lot of very clever thinking went into the construction of Acid3 (as was true of Acid2), and that a lot of very smart people have worked very hard to pass it. Congratulations all around, really. I just can’t help feeling like some broader and more important point has been missed.
Over the past several years, I’ve grown increasingly dissatisfied with the vague and somewhat solipsistic nature of the gradations UX professionals typically use to describe the severity of usability issues. High, medium, and low don’t begin to sufficiently explain the potential brand and business impacts usability issues can have. After incrementally iterating on several existing classifications of severity, I finally decided in late 2008 to simply create some new ones, which I’ll present in this column. For lack of a better term, I call them business-aligned usability ratings.
If you want to go beyond the usual limits of a traditional readership survey that tells you how well received a publication is, first clarify your objectives. Then you might include additional "impact" questions on your next survey, conduct in-depth focus groups with readers, and conduct some objective, "audience-free" measurements of the publication to see how well those objectives were met.
Every other team meeting, three team members get 30 minutes each to talk about projects they are working on and to demonstrate some of the cool things they are integrating into those projects. As a team, we look at each project, learn from what they’ve done, and make suggestions on how they might improve it.
The purpose of my research was to analyze web content delivered to the mobile computing environment with two goals in mind: first, to determine whether the content lost contextual relevance in the mobile environment and, second, to see whether a set of effective design principles could be applied to the mobile environment. My research combines a literature review with a survey that uses both quantitative and qualitative methods to analyze top-rated web sites in the mobile environment.
If you run a Web site, you're probably already thinking about tracking and analyzing the traffic it gets. Knowing how many pages are accessed, when, by whom, and for what purposes can mean the difference between simply having a Web site and building a sound Web strategy. Understanding how people use your site can help you--and your sales and marketing team--generate more traffic. If you can track your audience, learn which pages and resources are most popular, and identify technical problems and system bottlenecks, you can deliver a better experience. And that's the best way to keep people coming back to your site.
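As a minimal sketch of the kind of tracking described above — assuming your server writes requests in the Common Log Format (the sample entries and any filenames are purely illustrative) — you can tally which pages are requested most often:

```python
# Minimal sketch: count the most-requested pages from server log lines.
# Assumes Common Log Format entries, e.g.:
#   127.0.0.1 - - [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326
from collections import Counter
import re

# Capture the request path from the quoted request line.
REQUEST_RE = re.compile(r'"(?:GET|POST|HEAD) (\S+)')

def top_pages(lines, n=10):
    """Return the n most frequently requested paths as (path, count) pairs."""
    hits = Counter()
    for line in lines:
        match = REQUEST_RE.search(line)
        if match:
            hits[match.group(1)] += 1
    return hits.most_common(n)

sample = [
    '127.0.0.1 - - [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326',
    '127.0.0.1 - - [10/Oct/2000:13:55:40 -0700] "GET /about.html HTTP/1.0" 200 512',
    '127.0.0.1 - - [10/Oct/2000:13:55:44 -0700] "GET /index.html HTTP/1.0" 200 2326',
]
print(top_pages(sample))  # [('/index.html', 2), ('/about.html', 1)]
```

Dedicated analytics tools do far more, of course, but even a tally like this reveals which pages and resources draw the most traffic.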
When discussing or working with type, it’s not only important to understand the anatomy of the parts of letterforms, but it’s also important to understand how type is measured. We’re accustomed to measuring things in inches, yards, or miles, or, heaven forbid, the metric system. Type, on the other hand, has its own system of measurement, of which most of us have only a vague understanding. For example, most of us understand that normal body text is set between 10 and 12 points, and that 72 points is much too large for everyday use. Few of us, however, know what a point actually is.
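The arithmetic behind the point is simple once you know the modern convention: in digital typography, a point is defined as exactly 1/72 of an inch, and CSS further defines its reference pixel as 1/96 of an inch. A small sketch of the conversions (the function names are my own):

```python
# Conversions based on the modern desktop-publishing point:
# 1 point = 1/72 inch; CSS additionally defines 1 px = 1/96 inch.
POINTS_PER_INCH = 72.0
CSS_PIXELS_PER_INCH = 96.0

def points_to_inches(pt: float) -> float:
    """Convert a size in points to inches."""
    return pt / POINTS_PER_INCH

def points_to_css_pixels(pt: float) -> float:
    """Convert a size in points to CSS reference pixels."""
    return pt * CSS_PIXELS_PER_INCH / POINTS_PER_INCH

print(points_to_inches(72))      # 72 pt is exactly one inch: 1.0
print(points_to_css_pixels(12))  # 12 pt body text: 16.0 px
```

This explains why 72-point type looks roughly an inch tall on paper, and why 12-point body text corresponds to the familiar 16-pixel default in web browsers.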
The ACM SIGDOC Executive Board welcomes letters of nomination for the SIGDOC Rigo and Diana Awards. The Rigo Award celebrates an individual's lifetime contribution to the field of information design, technical communication, and online information design; the Diana Award celebrates the contribution of an organization, institution, or business.
Normally you work in a familiar context on your local computer, whether that is Windows, KDE, Red Hat, Mac OS X, or another platform, but when we open the door to the Internet, these routines are disrupted by something that is not always easy to put a finger on. What is it that makes web solutions difficult to work with and navigate? When you first sit down at a computer, it is usually with a goal: curiosity, getting on the Internet to shop, writing a letter, and many other things. It is usually this drive that carries us through the first laborious weeks with the operating system, which we slowly come to understand and eventually become comfortable with, since it is the platform that gives us access to all our digital experiences. If you cannot work with the platform, you will certainly not be able to achieve your goals with it either.
Appraisals based on objective performance criteria identify and measure the abilities and contributions of technical communicators. This workshop explores how to develop effective performance criteria, specific to technical communication, and how to use these criteria to evaluate performance and foster professional growth and development.
Time-to-market pressure can diminish product testing time and quality. The results are product recalls, shoddy merchandise, and apologies by CEOs about poor quality. The consequence is the loss of consumer confidence. Don’t these companies realise that there’s no compromise on quality? I’m sure that these companies are ISO 9000 certified or have a Total Quality Management (TQM) program, so what is the problem? Perhaps the problem is not with ISO 9000 or TQM but with the way these programs are used.
Guidelines are statements of direction. They’re about looking to the future and what you want to incorporate in the design. Guidelines are aspirational. Heuristics challenge a design with questions. The purpose of heuristics is to provide a way to “test” a design in the absence of data and primary observation by making an inspection. Heuristics are about enforcement. Both guidelines and heuristics are typically broad and interpretable. They’re built to apply to nearly any interface. But they come into play at different points in a design project.
To back up a request for more budget or defend the existing one, you need to know exactly what you’re spending--and what you’re getting in return. But how can you tell if you’re spending too much on communication? This article suggests five approaches to weighing up the cost versus value of your communication activities.
Computer and Internet based tests are used for a variety of purposes. From entering education or employment to improving basic learning, people everywhere are taking electronically formatted tests. With the advancement of testing from traditional paper-based tests to technologically advanced electronic tests, people reap the benefits of easier access to tests, faster response times, and greater reliability and validity. However, persons with disabilities are being left out of the picture and out of many typically administered tests.
In the workplace, communication serves not as an end in itself, with features that are “good” or “bad,” but as a tool for mediating a range of professional activities, and effective documents and presentations are those that achieve their goals. Yet assessment methods in technical and professional communication often continue to rely on an evaluation of features apart from the intended work of the document. In this paper, we use activity theory as a lens to explore both the criteria for effective communication and the degree to which portfolio assessment methods can be applied to effectively assess student learning in this domain.
Increased support for greater accountability and assessment of engineering communication programs has led many schools of engineering and technology to initiate methods of assessing the quality of their students’ engineering communication abilities. At my institution, I have spearheaded the pilot year of such a program and, as anticipated, have learned several valuable lessons that may be of interest to others developing assessment procedures for engineering communication programs.
Though engineers around the world conduct their work in nearly every language on the planet, there are very few who never use English for some aspect of their job. The largest professional engineering organizations use English as their primary language; most of the world’s engineering publications are written in English; and nearly all cooperative ventures with multinational participation choose English for their common language of communication. Unfortunately, most of the world’s engineers are not native speakers of English and thus are considerably disadvantaged in professional terms.
A major challenge in engineering education is to prepare professionals for communicating well in writing and speaking, using appropriate technologies, within professional contexts. Communication in the global engineering world includes collaboration on cross-functional teams, virtual-project team management, and writing for multiple, complex audiences. This tutorial discusses how one small engineering school has integrated technical communication teaching and assessment throughout the curriculum with demonstrated success. The integrated curriculum, formative and summative assessments, and real-world contexts offer one model to address growing communication challenges.
A specially designed instrument that measures the effectiveness of written communication courses taught in-house has been pilot-tested with employees of a major power utility. The instrument showed that, one month after attending the course, participants’ written communication skills increased by twenty percent. A second measurement, recorded four months after the course, showed there had been only a marginal drop in skills compared to the level recorded three months earlier.