Information design (also known as 'information architecture') is the study of the structure of complex systems. Among these are websites, user interactions, databases, technical documentation, and human-computer interfaces.
As information architects, we all know how important it is to keep the user in mind. The same is true in teaching IA: we must keep the learner in mind. Learning objectives are one tool for keeping classes focused on the learner. They will also help you develop the syllabus, lesson plans, and assessment methods.
To better understand grassroots classification, this paper examines user-generated metadata as implemented and applied in two web services designed to share and organize digital media.
We need a word for the class of comparisons that assumes the status quo is cost-free, so that all new work, when it can be shown to have disadvantages relative to the status quo, is assumed to be inferior to the status quo.
Folksonomies are clearly compelling, supporting a serendipitous form of browsing that can be quite useful. But they don't support searching and other types of browsing nearly as well as tags from controlled vocabularies applied by professionals.
The weighted list, known popularly as a 'tag cloud', has appeared on many popular folksonomy-based websites. Flickr, Delicious, Technorati and many others have all featured a tag cloud at some point in their history. However, it is unclear whether the tag cloud is actually useful as an aid to finding information. We conducted an experiment, giving participants the option of using a tag cloud or a traditional search interface to answer various questions. We found that where the information-seeking task required specific information, participants preferred the search interface. Conversely, where the information-seeking task was more general, participants preferred the tag cloud. While the tag cloud is not without value, it is not sufficient as the sole means of navigation for a folksonomy-based dataset.
Fifty years before the web and thirty years before the personal computer, Vannevar Bush envisioned a new machine to make sense of the growing mountains of information, creating the notions of 'hypertext' and the modern link.
There has been a lot of attention to the legal encumbrances in Microsoft's new MS XML format. In this article we'll look at the technical side, and try to show you how the design of these formats affects interoperability. After all, that is the purpose of open standards.
Real programmers love their applications' source code: the faster and more elegant it is, the better. Users are after very different things: they seem to want simplicity, flashy colors, nice icons, and tons of options. In spite of these differences, or perhaps because of them, programmers and users often forget what lies in the middle of it all: information.
Information-seeking behavior varies from situation to situation. Donna Maurer explores different ways in which users look for information and offers tactics for accommodating them.
World Wide Web authors must cope in a hypermedia environment analogous to second-generation computing languages, building and managing most hypermedia links using simple anchors and single-step navigation. Following this analogy, sophisticated application environments on the World Wide Web will require third- and fourth-generation hypermedia features. Implementing third- and fourth-generation hypermedia involves designing both high-level hypermedia features and the high-level authoring environments system developers build for authors to specify them. We present a set of high-level hypermedia features including typed nodes and links, link attributes, structure-based query, transclusions, warm and hot links, private and public links, hypermedia access permissions, computed personalized links, external link databases, link update mechanisms, overviews, trails, guided tours, backtracking, and history-based navigation. We ground our discussion in the hypermedia research literature, and illustrate each feature both from existing implementations and a running scenario. We also give some direction for implementing these on the World Wide Web and in other information systems.
DITA experts Don Day, Michael Priestley, and Gretchen Hargis address the topic architecture of DITA, tips and techniques, and general DITA questions.
DITA supports the proper construction of specialized DTDs from any higher-level DTD or schema. The base DTD is the ditabase DTD, which contains an archetype topic structure and three additional peer topics that are typed specializations of the basic topic: concept, task, and reftopic. The principles of specialization and inheritance resemble the principle of variation in species proposed by Charles Darwin, so the 'Darwin' in DITA's name reminds us of the key extensibility mechanism inherent in the architecture.
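To make the specialization idea concrete, here is a minimal sketch of what a concept topic instance might look like: a concept is a typed specialization of the base topic, with the generic body renamed to a concept-specific one. The `id`, element names, and content are illustrative only and simplified relative to any particular DITA DTD version.

```xml
<!-- Sketch of a DITA concept topic: a specialization of the base topic type.
     Element and attribute details vary by DITA version; this is illustrative. -->
<concept id="specialization-example">
  <title>Specialization in DITA</title>
  <conbody>
    <p>A concept topic inherits its structure from the base topic,
       renaming and constraining elements rather than inventing them.</p>
  </conbody>
</concept>
```

Because every specialized element maps back to a base element, generic topic-level processing (publishing, search, reuse) continues to work on specialized content unchanged.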
Information, once rare and valuable, is now as plentiful as it is meaningless. The constant accessibility rendered by various 'networking' technologies has led to a veritable glut of information. Deluged with data and flooded with facts, we are drowning in a river of communication with no clear direction or purpose. Media-mesmerized and stimuli-saturated, we are caught up in the murky current, making it increasingly difficult to keep our heads above water. Whether we sink or swim will depend on how effective we are at controlling and managing the flow, how efficient we are at fishing for essence and meaning, and how adept we are at preserving the ecology between man and this digital morass.
The world, particularly the ICT world, is abuzz with the term "knowledge sharing." Most of us thoroughly agree that knowledge must be shared. Often, though, the sharing is viewed only in the context of the product development teams. But what about the users? We, the technical communicators, information developers, or whatever we prefer to call ourselves, are supposed to help our users use our products efficiently. So, I guess we are the ones responsible for sharing our knowledge with the users.
Work with structured abstracts--which contain sub-headings in a standard order--has suggested that such abstracts contain more information, are of a higher quality, and are easier to search and to read than are traditional abstracts. The aim of this article is to suggest that this work with structured abstracts can be extended to cover scientific articles as a whole. The article outlines a set of sub-headings--drawn from research on academic writing--that can be used to make the presentation of scientific papers easier to read and to write. Twenty published research papers are then analyzed in terms of these sub-headings. The analysis, with some reservations, supports the viability of this approach.
The need to make software easy to use and to integrate learning information into software products is changing the roles of information developers at DDS. On the one hand, information developers are now an integral part of design teams rather than members of a central technical publications group. On the other hand, decentralized development and online delivery require new types of central management and coordination. There is more need than ever for formal standards, explicit information architecture, and defined best practices. Goals include effectiveness and timely delivery of product information, common look and feel, usability, and elimination of redundant work across departments throughout the life cycle.
Strategic planning is no longer an option for an information-development organization that hopes to survive and thrive in the current climate of downsizing and outsourcing. Information developers must prove their value to their products and their organizations and demonstrate that they are aligned with corporate goals and objectives. Use strategic planning both as a tool to improve your organization and as a sign that you are willing to look closely at the old and comfortable ways of working and make significant quality and process improvements.
Technical communicators have become increasingly interested in how to 'open up' the documentation process - to encourage workers to participate in developing documentation that closely fits their needs. This goal has led technical communicators to engage in usability testing, user-centered design approaches, and, more recently, open source documentation. Although these approaches have all had some success, there are other ways to encourage the participatory citizenship that is implied in these approaches. One way is through an open systems approach in which workers can consensually modify a given system and add their own contributions to the system.
Existing XML processing models are pipelines, controlled by pipeline descriptions which resemble shell scripts. Functional XML allows XML documents to specify their own processing explicitly, without losing the generality of the pipeline script approach.
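The contrast the abstract draws can be illustrated with a sketch of the conventional approach it critiques: an external pipeline description that sequences steps over a document, much like a shell script. The pipeline vocabulary below is hypothetical (not XProc or any specific language), and all step and file names are illustrative.

```xml
<!-- Hypothetical shell-script-style pipeline description: processing is
     specified outside the document it operates on. Element names, step
     names, and file references are illustrative only. -->
<pipeline>
  <step name="validate"  input="source.xml" schema="report.rnc"/>
  <step name="transform" stylesheet="to-html.xsl"/>
  <step name="serialize" output="report.html"/>
</pipeline>
```

The Functional XML approach described in the abstract would instead let `source.xml` itself declare this processing, removing the separate pipeline script while keeping the same staged model.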
In the short term of three to five years, I don't really expect significant changes in the way hypertext is done compared to the currently known systems. Of course new stuff will be invented all the time, but just getting the things we already have in the laboratory out into the world will be more than enough. I expect to see three major changes: the consolidation of the mass market for hypertext; commercial information services on the Internet; the integration of hypertext and other computer facilities.
Discusses how XML is changing the definition of 'information management' and the challenges associated with this change. XML provides endless opportunities for solving the complex data issues companies face today, from data integration to the implementation of Service-Oriented Architectures (SOA). Companies that choose to exploit the advantages of XML will undoubtedly gain an edge over their competitors, but they will also be required to solve the challenges of how best to manage and serve XML data without compromising data security and integrity.
Some of the questions most commonly asked by professionals in a given field are 'where is the field headed?' and 'how will that affect me?' In this article, I give one person's view of where the fields of technical communication, training, and marketing communications are headed and how that might affect people working in those fields.
Technical documentation is often produced only as an afterthought, although there are good reasons to pay more attention to this potential marketing instrument: legal regulations mandate certain information (such as safety notices) as well as the quality and form in which it must be provided. Missing or late documentation causes payment losses running into the millions. Documentation and information are becoming increasingly attractive as an additional service, that is, as added value for the customer. Moreover, the documentation department is the nucleus for technical information systems, for example for knowledge management or quality assurance, since these departments already hold very large amounts of the company's technical know-how. The following article describes how XML and .NET can positively influence the production process.