Information design (also known as 'information architecture') is the study of the details of complex systems. Among these are websites, user interactions, databases, technical writing documentation, and human-computer interfaces.
Rather than spending hours coming up with a complex numbering scheme, consider implementing something far more straightforward, discovered by an extensive readability study at IBM of which I was a part. My work involved sitting behind a one-way mirror with a stopwatch, watching people take tests that involved, among other things, "how fast can you find Figure 3-4?" We had cameras mounted over the participants' shoulders and could watch them thumb through the documents, and we also monitored eye movements. Then we followed up with a short interview where we got feedback.
How is commercial Web site development informed by management decisions, marketing needs, business requirements, and consumer behavior and psychology (in short, the complex rhetorical situation surrounding commercial Web site development)? And how can the development process inform the formulation of a more effective Web commerce solution? I argue that the sense of community on the Web is the building block of retail Web commerce. I use a case study to show that using a communication process model can be an effective method of assessing market needs, business requirements, management decisions, and technology in the development of a retail Web solution.
Over 50% of the work done by the designer on a day-to-day basis is routine design that consists of reusing past design solutions (Moore, 1993). Despite this fact, there are no tools that rationally support the reuse of such solutions. Case-based design (CBD) has been identified as a promising way to remedy this situation. To be of practical use, however, a case-based design system has to be able to use the information that the designer creates during the design process. That information is today mostly weakly structured, e.g. text documents, calculation documents, and 2D drawings. This paper proposes an approach that enables the capture and representation of weakly structured information for the purpose of case-based structural design. The proposed representation allows us to apply most object-oriented abstraction principles to weakly structured information as well. It is also shown how the conceptual framework, the dependency structure, and the design process can be captured, represented, and used in CBD. The approach has been successfully implemented in a prototype for reuse of computerized design calculation documents.
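The core retrieval step of case-based design can be sketched in a few lines: past cases are indexed by a handful of features, and the closest case is returned as a starting point for reuse. This is a toy illustration of the general CBD idea, not the paper's system; the cases, features, and distance weights are invented.

```python
# Toy case base: past structural designs indexed by a few features.
cases = {
    "beam_A": {"span_m": 6, "load_kn": 40, "material": "steel"},
    "beam_B": {"span_m": 12, "load_kn": 90, "material": "concrete"},
}

def closest_case(query):
    """Return the stored case most similar to the query design problem."""
    def distance(case_id):
        feats = cases[case_id]
        return (abs(feats["span_m"] - query["span_m"])          # span difference
                + abs(feats["load_kn"] - query["load_kn"]) / 10  # scaled load difference
                + (feats["material"] != query["material"]))      # penalty for material mismatch
    return min(cases, key=distance)
```

A query for a 7 m steel beam under 45 kN retrieves "beam_A" as the nearest past solution. In a real system the hard part, as the abstract notes, is extracting such index features from weakly structured documents in the first place.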
Generation Y are the first generation to fully put the process of ‘prosumption’ into practice. Individuals are proactively seeking to generate and share creative outputs as a result of their online activities, and this raises a fundamental question for business librarians, information management specialists and consultants: does our profession adhere to a logic of service delivery that is rapidly becoming obsolete in the context of service innovation? Suggestions are offered for how information specialists (called ‘librarian 2.0’ in this article) can participate in the creation of value for users.
The technical communication field lacks a venue for on-line research from historical, theoretical, and philosophical perspectives. For this reason, the Technical Communication Research Center (TCRC) was proposed as a way to shift the emphasis away from pedagogy and towards a more evenly balanced web site for technical communication research. Other sites are currently available for on-line research in technical communication, but not everything fits the academic genre. We have found a market for a comprehensive research site in technical communication. The TCRC will serve both academics and professionals interested in both old and new research in their areas of interest. More than merely a research tool, the TCRC will also be a network for those working in the field. Through databases, email, and periodic updates, its ultimate aim is to connect technical communicators all over the world.
Is it better to have more items on a page, requiring fewer pages to be accessed (greater breadth), or fewer items per page, requiring more pages to be accessed (greater depth)? Based primarily on studies reported three years ago by Larson and Czerwinski (1998) and Zaphiris and Mtei (1998), designers have been encouraged to construct broad, shallow sites.
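The arithmetic behind the trade-off is worth making explicit: with n items and b links per page, a balanced hierarchy needs roughly log-base-b of n page accesses to reach any item. A minimal sketch (the numbers are illustrative, not taken from the cited studies):

```python
def pages_to_reach(n_items, breadth):
    """Page accesses needed to reach any of n_items in a balanced
    hierarchy with `breadth` links per page."""
    pages, reachable = 0, 1
    while reachable < n_items:
        reachable *= breadth  # each level multiplies what is reachable
        pages += 1
    return pages
```

For 64 items, a breadth of 8 reaches everything in 2 page accesses, while a breadth of 4 needs 3; this is the kind of breadth/depth difference the cited studies examined empirically.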
As information architects, we are not just architecting information; we are using information to architect change. Bob Goodman shows us how we can use business and management techniques to help us be more effective agents of change.
We can't force people to look at the work we do, but if we want to make them happy, we need to provide them with the information they need in a manner that makes it easy for the top-down mechanisms to work efficiently. It's our job to help them observe, rather than just see.
Often-conflicting pressures to produce communications that better fit customer demands while staying within tightening constraints on budgets and schedules are leading many technical communications organizations to a topic-based approach to authoring. In fact, 58% of participants in Aberdeen Group's October 2008 DITA and the Technical Communicator’s Transformation study report that they currently author content in a topic-based manner, with a vast majority of those remaining planning to implement such an approach in the future. A topic-based approach promotes greater content reuse and is having a considerable impact on the authoring efficiency of technical communications projects today. The benefits of topic-based authoring can be compelling, with findings from The Technical Communicator’s Transformation study indicating that, when pursued the right way, topic-based authoring can have a broad range of benefits, enabling an organization to meet authoring and localization cost targets as well as documentation quality expectations, among others. However, as the adoption of this approach spreads, the advantages seen by today's leading organizations will flatten out. This Sector Insight provides a guide for current adopters of topic-based authoring and those still considering it, outlining the changes that are expected to take place as topic-based authoring goes mainstream.
Documents are made of characters; XML documents are made of Unicode characters. Compared with SGML, we now have potentially one million characters where SGML provided only a hundred, but on the other hand we lost the option of defining our own SDATA entities. This presents two challenges. The first is: how can we validate that a document, an element, or an attribute contains only those characters that we know how to process, render, sort, seek, hyphenate, capitalise, pronounce...? How can we tell a typesetter which character set they have to find a font for? XML Schema provides a simple way of restricting the set of valid characters in an attribute or a simple element to a regular expression, which can use some of the Unicode character properties, such as the block a character is defined in (like Basic Latin or Latin Extended-B) or the General Category (like Uppercase Letter or Math Symbol), but you cannot use that in mixed content, as is typical in text markup.
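Outside a schema validator, the same repertoire check can be sketched with an ordinary regular expression over explicit code-point ranges. The ranges below correspond to two of the Unicode blocks named above (Basic Latin and Latin Extended-B); this is a hand-rolled illustration of the idea behind XML Schema's pattern facet, not part of any schema implementation.

```python
import re

# Characters we know how to process, render, and sort: printable
# Basic Latin (U+0020-U+007E) plus Latin Extended-B (U+0180-U+024F).
# These ranges stand in for block escapes like \p{IsBasicLatin}.
KNOWN = re.compile(r'^[\u0020-\u007E\u0180-\u024F]*$')

def repertoire_ok(text):
    """True if every character falls within the supported blocks."""
    return bool(KNOWN.match(text))
```

A value like "hello" passes, while "héllo" fails because U+00E9 sits in the Latin-1 Supplement block, outside the declared repertoire. The limitation the abstract points out remains: a check like this applies to a whole attribute or simple element, not to mixed content.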
DocBook and DITA both have their places. They're both excellent for single sourcing. DocBook is better for what I call monolithic single sourcing, while DITA is better suited for discrete single sourcing.
If you follow the latest trends or have been to a conference recently, you may find the idea of choosing an XML schema puzzling. Isn't the question really, 'How should I customize DITA to do what I want'? While there are many good reasons to choose DITA, it's not the only schema in town.
We need to develop a rhetoric of objects to understand the new way in which we must create and deliver content over the Web. We are facing a new multiplicity of audiences—niche groups, and even individuals, to whom we offer customization and personalization. With our new tools and new ways of thinking about what we create, we are inventing informative objects that address the needs of our audiences, letting go of the concept of a document, as we plunge into a world of small chunks of content. In this presentation, I consider how this new approach to technical communication affects our ideas of audience, invention, arrangement, style, delivery, memory, and character—the canons of traditional rhetoric.
Most information on the World Wide Web is gathered in short reference documents that are intended to be read nonsequentially. This is particularly true of sites whose contents are mostly technical or administrative documents. Long before the Web was invented, technical writers discovered that readers appreciate short 'chunks' of information that can be located and scanned quickly.
Current user interfaces for textual database searching leave much to be desired: individually, they are often confusing, and as a group, they are seriously inconsistent. We propose a four-phase framework for user-interface design: the framework provides common structure and terminology for searching while preserving the distinct features of individual collections and search mechanisms. Users will benefit from faster learning, increased comprehension, and better control, leading to more effective searches and higher satisfaction.
Metadata is information about other information. By defining a set of metadata for a given object, we are describing that object, characterizing it. For example, HTML allows metadata to be defined for a web page through its <meta> tag. That metadata (author, keywords...) characterizes the page and describes its content. Metadata, traditionally used in the library world, is proving very useful on the Web, both in Information Retrieval Systems (back-end) and in Navigation Systems (front-end).
XFML (eXchangeable Faceted Metadata Language), created by Peter Van Dijck, is a language or vocabulary with XML syntax for defining, distributing, and exchanging metadata in the form of faceted taxonomies or classifications.
Classification represents an investment that carries costs in the short term but also yields considerable returns in the long term (if set up correctly). Among classification systems, the faceted (or multidimensional) one is surely the most powerful and versatile (even though the schemes established as standards in most libraries are quite far from the faceted model).
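What makes faceted classification powerful is that each item is described along several independent dimensions, and a query can intersect any combination of them. A minimal sketch of that mechanism, with invented facet names and documents:

```python
# Each document is described along independent facets; a query
# intersects one value per requested facet. Data is illustrative.
docs = {
    "doc1": {"topic": "xml", "audience": "developer", "format": "tutorial"},
    "doc2": {"topic": "xml", "audience": "manager", "format": "overview"},
    "doc3": {"topic": "metadata", "audience": "developer", "format": "tutorial"},
}

def select(**facets):
    """Return ids of documents matching every requested facet value."""
    return sorted(
        doc_id for doc_id, values in docs.items()
        if all(values.get(facet) == value for facet, value in facets.items())
    )
```

Selecting on topic alone returns a broad set, and adding a second facet narrows it, without anyone having pre-built a combined "xml tutorials for developers" category, which is exactly what a fixed enumerative scheme would require.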
After exploring the main classification models developed in library science, the article focuses on faceted classification (Faceted Classification), illustrating its possible applications to the web and the clear advantages that derive from them.
In this paper, we present a novel method for the classification of Web sites. This method exploits both structure and content of Web sites in order to discern their functionality. It allows for distinguishing between eight of the most relevant functional classes of Web sites. We show that a pre-classification of Web sites utilizing structural properties considerably improves a subsequent textual classification with standard techniques. We evaluate this approach on a dataset comprising more than 16,000 Web sites with about 20 million crawled and 100 million known Web pages. Our approach achieves an accuracy of 92% for the coarse-grained classification of these Web sites.
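The two-stage idea, structural pre-classification narrowing the candidates before a textual classifier decides, can be illustrated with a deliberately simplified sketch. This is not the authors' pipeline: the structural features, classes, and keyword lists below are all invented, and the textual stage is a bare keyword count rather than a standard text classifier.

```python
# Stage 1: structural properties narrow the candidate classes.
STRUCT_CANDIDATES = {
    ("deep", "many"): {"web portal", "online shop"},      # large, deep sites
    ("flat", "few"): {"personal page", "company brochure"},  # small, flat sites
}

# Stage 2: a toy textual score decides among the survivors.
KEYWORDS = {
    "online shop": {"cart", "checkout", "price"},
    "web portal": {"news", "forum", "search"},
    "personal page": {"hobby", "diary"},
    "company brochure": {"services", "contact"},
}

def classify(depth, size, words):
    """Pre-filter by structure, then pick the best textual match."""
    candidates = STRUCT_CANDIDATES.get((depth, size), set(KEYWORDS))
    return max(candidates, key=lambda cls: len(KEYWORDS[cls] & set(words)))
```

The point the abstract makes shows up even here: a page mentioning "news" once would mislead a purely textual classifier, but the structural stage has already ruled the brochure-like classes out.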
The words 'click here for...' and 'click here to...' serve no purpose within links. Unfortunately, many news sites still use them. According to Google, 'click here' is on about 8,970 pages at sptimes.com alone.
Wide deployment of XML technology in enterprise applications demands a high-performance XML processing framework. This has prompted extensive investigation into building an XML processing infrastructure that leverages a compact, pre-parsed XML format, which can save memory, CPU consumption, and network bandwidth. In this paper, we discuss a project building a compact, schema-aware binary XML processing framework and compare it with existing binary XML technologies. The discussion covers the design of the compact binary XML format; the implementation of the compact binary XML processors, which encode and decode XML documents; and how compact binary XML support is integrated with the existing XML processing stack. Finally, we provide results from testing applications that leverage the compact binary XML processing framework.
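The central intuition behind a schema-aware binary format can be shown in miniature: because the schema fixes the element vocabulary in advance, repeated element names can be replaced by one-byte tokens. The sketch below illustrates that principle only; it is not the format the paper describes, and the vocabulary and framing scheme are invented.

```python
# Vocabulary known in advance from the schema.
NAMES = ["order", "item", "price"]
TOKEN = {name: bytes([i]) for i, name in enumerate(NAMES)}

def encode(events):
    """Encode (name, text) events as: 1-byte name token,
    1-byte text length, then the UTF-8 text."""
    out = bytearray()
    for name, text in events:
        data = text.encode("utf-8")
        out += TOKEN[name] + bytes([len(data)]) + data
    return bytes(out)

def decode(blob):
    """Invert encode(), recovering the (name, text) event stream."""
    events, i = [], 0
    while i < len(blob):
        name = NAMES[blob[i]]          # token -> element name
        length = blob[i + 1]
        events.append((name, blob[i + 2:i + 2 + length].decode("utf-8")))
        i += 2 + length
    return events
```

Even in this toy form, the element names cost one byte each instead of a start tag and an end tag, which hints at why pre-parsed formats save both bandwidth and parse-time CPU.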