Information design (also known as 'information architecture') is the study of the details of complex systems. Among these are websites, user interactions, databases, technical documentation, and human-computer interfaces.
Wireframes: at once a singular composition and a collaborative expression, they communicate the vision of both an individual and a team. As a result, they can be stacked with an enormous amount of detail. Are we becoming victims of information pollution in our own wireframes?
Visual and interaction design for successful e-commerce Web sites and Web-based applications requires meticulous attention to detail. Because the smallest matters can ruin the user experience, an orderly process, such as usage-centered design, guided by robust principles is needed; iterative testing and repeated redesign is inadequate to find and address all the diverse matters needing attention. This paper reviews basic principles and then surveys best practices in the detailed aspects of Web design in three broad areas: details of architecture or organization, details of interaction design, and details relating to commercial activity, especially shopping. Specific recommendations in each area are offered as examples of best practices based on usage-centered principles.
To test aspects of a theoretical framework on goal detection in social interaction, an experiment examined dyadic initial interactions wherein one participant pursued a goal unbeknownst to another participant. The level of specificity and efficiency at which a pursuer sought a goal interacted to affect the accuracy of the detector's inference as well as the time of onset for that inference. Consistent with hypotheses, efficiency was unrelated to accuracy and negatively correlated with onset latency when pursuers had an abstract information-seeking goal, whereas efficiency was positively correlated with accuracy and onset latency when detecting a concrete (i.e., specific) information-seeking goal. Unexpectedly, efficiency was unrelated to accuracy and onset latency for a midlevel information-seeking goal. Other results focused on the role of individual differences (i.e., perspective-taking and suspicion in others' motives) and perceived communication competence in the goal detection process. A more controlled, second experiment that employed confederates generally replicated results.
Despite the approval, nearly ten years ago, of the ACRL Guidelines for the Loan of Rare Materials, few special collections departments regularly loan materials to other universities. For the researcher, obtaining rare books and manuscripts (or copies of the same) via interlibrary loan continues to be difficult if not impossible. The last ten years have shown a phenomenal growth in the production and marketing of digital facsimiles of rare books. This article examines research on digital facsimile CD-ROM collection patterns and presents the results of a survey on interlibrary loan lending practices in an effort to understand the impact that CD-ROMs may have on interlibrary loan and access to rare materials.
Previous studies have highlighted the high arrival rate of new content on the web. We study the extent to which this new content can be efficiently discovered by a crawler. Our study has two parts. First, we study the inherent difficulty of the discovery problem using a maximum cover formulation, under an assumption of perfect estimates of likely sources of links to new content. Second, we relax this assumption and study a more realistic setting in which algorithms must use historical statistics to estimate which pages are most likely to yield links to new content. We recommend a simple algorithm that performs comparably to all approaches we consider. We measure the overhead of discovering new content, defined as the average number of fetches required to discover one new page. We show first that with perfect foreknowledge of where to explore for links to new content, it is possible to discover 90% of all new content with under 3% overhead, and 100% of new content with 9% overhead. But actual algorithms, which do not have access to perfect foreknowledge, face a more difficult task: one quarter of new content is simply not amenable to efficient discovery. Of the remaining three quarters, 80% of new content during a given week may be discovered with 160% overhead if content is recrawled fully on a monthly basis.
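The maximum cover formulation above can be sketched as a greedy heuristic: repeatedly fetch the known page estimated to link to the most not-yet-discovered content. The page names and link sets below are hypothetical, and this is an illustration of the formulation rather than the paper's actual algorithm.

```python
# Greedy sketch of the maximum-cover view of content discovery:
# given an estimate of which known pages link to which new pages,
# repeatedly crawl the page that yields the most undiscovered content.

def greedy_discover(links, budget):
    """links: dict mapping a known page -> set of new pages it links to.
    Returns the crawl order and the set of discovered new pages."""
    discovered, order = set(), []
    for _ in range(budget):
        # Pick the page whose links add the most undiscovered content.
        page = max(links, key=lambda p: len(links[p] - discovered))
        if not links[page] - discovered:
            break  # no page yields anything new; stop crawling
        discovered |= links[page]
        order.append(page)
    return order, discovered

# Hypothetical link estimates: hub.html is the best first fetch.
links = {
    "hub.html": {"a", "b", "c"},
    "blog.html": {"c", "d"},
    "old.html": {"a"},
}
order, found = greedy_discover(links, budget=2)
print(order, sorted(found))
```

With a budget of two fetches, the greedy choice covers all four new pages here; the paper's point is that real crawlers must make these choices from noisy historical statistics rather than the perfect link estimates assumed above.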
Lately I’ve been creating context-sensitive help for an online application. As part of my strategy, I’ve been trying to follow Theresa Putkey’s advice in “Usability in Context-Sensitive Help.” In her article, Theresa recommends providing more than just the steps for a specific task in the context-sensitive help window. Instead, she says to show more contextual links, including answers to why, when, and who questions, because too frequently the user who searches for help may have needs outside the specific task you describe.
The International Classification of Diseases (ICD) has been employed by the world's public health officials to chart the nature, frequency, and geographic origins of diseases and causes of death in human populations since the late nineteenth century. The ICD has been modified every decade since the 1890s, and a study by Bowker and Star of these changes, in concert with the work of others on the practices employed in information mapping, can be used to better understand the organization of large-scale web sites. Specifically, web designers must adapt classification schemes to fit multiple social worlds. Additionally, we need to understand that these systems can become so entrenched in our thinking that they become "invisible," thus undermining our ability to adapt them as future needs or insights arise.
An XML data island is a piece of well-formed XML embedded into an HTML file. This article will show you how to retrieve data in an XML format from a database using ADO; you will also learn how to bind this data into an HTML document.
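As a sketch of the embedding half of the technique: a data island is declared with the Internet Explorer-specific `<xml>` element and bound to HTML elements through the equally IE-specific `datasrc`/`datafld` attributes. The IDs and field names below are invented, and this is legacy, IE-only behavior, not modern HTML:

```html
<!-- Legacy Internet Explorer only: <xml> data islands and datasrc/datafld
     binding are proprietary IE features. -->
<html>
  <body>
    <!-- The data island: well-formed XML embedded in the page.
         It could equally be loaded from a file via src="books.xml". -->
    <xml id="books">
      <catalog>
        <book><title>DITA Basics</title><price>19.95</price></book>
        <book><title>XML in Practice</title><price>24.50</price></book>
      </catalog>
    </xml>

    <!-- Bind the island to a table: IE generates one row per <book>. -->
    <table datasrc="#books" border="1">
      <tr>
        <td><span datafld="title"></span></td>
        <td><span datafld="price"></span></td>
      </tr>
    </table>
  </body>
</html>
```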
In technical writing, synonyms and variants should be used judiciously and often avoided altogether. The use of one term consistently to express a given concept is preferred so that communication is clear and so that translation costs are minimized. For this reason, when synonyms and variants do exist in popular usage, it is common practice in commercial environments to choose one of the terms as the “preferred term.” This indicator of preferred usage needs to be documented in glossaries. Due to the limitations of markup languages for creating glossaries, usually the so-called preferred term is identified simply by making it the headword in a glossary entry and providing a definition in this glossary entry.
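In DITA, for instance, the glossary specialization can carry this information more explicitly than headword-plus-definition: the preferred term is the `<glossterm>` headword, and a variant can be recorded as a synonym with a usage note steering writers back to the preferred term. A sketch, with element names per the DITA 1.2 glossary specialization and invented terms:

```xml
<glossentry id="flash-drive">
  <!-- The headword: the preferred term. -->
  <glossterm>USB flash drive</glossterm>
  <glossdef>A small removable storage device that plugs into a USB port.</glossdef>
  <glossBody>
    <!-- A variant in popular usage, documented as not preferred. -->
    <glossAlt>
      <glossSynonym>memory stick</glossSynonym>
      <glossUsage>Do not use; write "USB flash drive" instead.</glossUsage>
    </glossAlt>
  </glossBody>
</glossentry>
```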
Our goal is to bring people together: think of a social network organized around XML, DITA, content management, and related topics. This blog serves as a hub for white papers and URL resources, contains a calendar of XML-related events and conferences, tracks industry trends, and keeps members up to date on "what's new" on the site.
DITA is applicable to many publishing applications, including traditional narrative documents that don't seem, at first look, like candidates for ditification.
How a multi-national, regulated medical device company planned its migration to a DITA CMS by identifying stakeholders and defining personas, establishing a high-level process and system requirements, developing a content model, and figuring out what to do with legacy documents.
I’m coming to the conclusion that there are specific types of content that suit a DITA environment, and that the converse is also true: DITA is not the best solution for every content type. (DITA is the Darwin Information Typing Architecture, an XML architecture for designing, writing, managing, and publishing information.)
Here, in no particular order, I cover a miscellany of DITA challenges – content re-use, maprefs, folder structures, ditamaps, topicsets, and authoring-publishing workflows.
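On the mapref point, for instance, one map can pull in another wholesale, which is also where folder-structure decisions start to bite. The file names below are hypothetical:

```xml
<!-- product.ditamap: the master map. -->
<map>
  <title>Product guide</title>
  <topicref href="overview.dita"/>
  <!-- mapref merges the whole submap at this position; relative hrefs
       inside install.ditamap resolve against that map's own folder. -->
  <mapref href="install/install.ditamap" format="ditamap"/>
</map>
```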
The DITA-OT plugin transforms a map into a single file suitable for publication and automatically calls the XML-RPC API of the blog to publish it. The DITA WordPress plugin adds a CSS stylesheet (a slightly modified version of the DITA-OT commonltr.css) to your WordPress theme to properly render the standard domains.
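For a sense of what the publishing half involves: the classic WordPress XML-RPC method for creating a post is `metaWeblog.newPost`. The sketch below is not the plugin's actual code; the endpoint, credentials, and content are placeholders, and it builds the request locally with Python's standard `xmlrpc.client` so it can be inspected without network access:

```python
# Sketch of publishing DITA-OT output to a WordPress blog over XML-RPC.
# Hypothetical: the credentials and HTML body are placeholders.
import xmlrpc.client

def build_post(title, html_body):
    """Assemble the content struct that metaWeblog.newPost expects."""
    return {"title": title, "description": html_body}

post = build_post("My DITA topic", "<p>Rendered by the DITA-OT.</p>")

# Serialize the call locally to inspect the request that would be sent.
payload = xmlrpc.client.dumps(
    (1, "user", "secret", post, True),  # blog_id, user, password, content, publish
    methodname="metaWeblog.newPost",
)
print("metaWeblog.newPost" in payload)

# To actually publish (network access and real credentials required):
# server = xmlrpc.client.ServerProxy("https://example.com/xmlrpc.php")
# post_id = server.metaWeblog.newPost(1, "user", "secret", post, True)
```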
I’m continuing with my series about DITA. In this post, I explain parent-child page links, content re-use when the content exists in different elements, a one-folder-for-all-files organization, and a better workaround to transferring relative links to Drupal.
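On the re-use point, DITA's conref mechanism covers exactly the case where the same content is needed in different elements or topics: a "warehouse" topic holds the canonical element, and other topics pull it in by ID. The file and ID names below are invented for illustration:

```xml
<!-- warehouse.dita: a topic that exists only to hold reusable elements. -->
<topic id="warehouse">
  <title>Reusable content</title>
  <body>
    <note id="safety-note">Unplug the device before servicing it.</note>
  </body>
</topic>

<!-- Elsewhere, any topic can reuse the note by conref;
     the address pattern is file.dita#topic-id/element-id. -->
<note conref="warehouse.dita#warehouse/safety-note"/>
```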
The DITA Open Toolkit is an open-source implementation of the OASIS DITA Technical Committee's specification for Darwin Information Typing Architecture (DITA) DTDs and schemas. The toolkit is a major upgrade from its predecessor, the developerWorks version known as "dita132." The toolkit uses the open-source technologies Ant, XSLT (currently 1.0), and Java to implement transformations from DITA content (maps and topics) into different deliverable formats.