Information design (also known as 'information architecture') is the study of how the details of complex systems are organized and presented. Among these systems are websites, user interactions, databases, technical documentation, and human-computer interfaces.
What do cognitive psychology and information architecture have in common? In fact, there is a good deal of common ground between the two disciplines. A background in cognitive psychology certainly supports the practice of information architecture, and it is precisely those interconnections that will be explored here.
Many qualities make deliverables good; coherence, context, and relevance hardly constitute a comprehensive list. But by focusing on techniques that achieve coherence, context, and relevance, information architects can address the challenges of starting a document, focusing it, and explaining its value.
This article examines the process of collaborative information seeking in intercultural computer-mediated communication (CMC) groups. The authors conducted a field experiment in which 86 students from three distant universities (one in the United States, two in Singapore) participated. The students participated in a collaborative learning practice in which they socially recommended information using a CMC system. The results demonstrate that the social context—that is, preexisting social networks, groups, and intergroup boundaries—significantly constrained the flow of information across intercultural CMC groups. The authors also found that the influence of the social context on CMC collaboration could be moderated by other contingent factors such as national culture and individuals' outcome expectancies of Internet use. The authors present the results from testing their hypotheses using multivariate p* and Quadratic Assignment Procedure network regression analyses and conclude with a discussion of the findings and implications for future research.
Companies had decades of experience in using printed materials to persuade readers to contact them, whether by phone, mail, or in person. This model of interaction with customers had worked so well and so predictably that we simply moved it online, largely unmodified. That was by no means wrong, but as Web technology and our comprehension of that technology both evolved, the approach proved limiting.
It seems like a lifetime ago that I asked my boss if I could adopt the title 'Information Architect.' After all, according to Richard Saul Wurman's definition, that is what I was. He laughed at me and said, 'Information Architect isn't a title or a role. It's not a job.' That conversation took place only four years ago.
The Semantic Web really is an attempt to reconceptualize and reengineer AI for the Web. This piece discusses the path forward for successfully selling and developing Semantic Web technology in industry.
The ability to build interactions that support, enable, and improve communication is a valuable skill for help developers, Web-site designers, multimedia content developers, designers of information-rich user interfaces, and anyone else who designs and develops information to be used online. This paper presents the basics of interaction design for information products and describes some of the underlying human factors and user-interface design principles.
Rene Pawlitzek continues to advance the Hamlets framework, which extends Java servlets and enforces the separation of content and presentation. In this article, he proposes a new refinement: a method of compiling Hamlet templates that can improve application performance.
If you can put together an authoring-publishing workflow that is form-fit to DITA, then using DITA can be a good choice. For example, if you’re using Oxygen to publish to Oxygen’s webhelp output, using easyDITA to push to MindTouch, pushing content into Antidot’s Fluid Topics, Mekon’s DITAweb, or Componize’s Alfresco integration, or using some other well-defined DITA publishing solution, then I think DITA can be a good approach.
The past hundred years have seen the development of numerous systems for the structured representation of knowledge and information, including hierarchical classification systems with notation as well as alphabetical indexing systems with sophisticated features for the representation of term relationships. The reasons for the lack of widespread adoption of these systems, particularly in the United States, are discussed. A structure is then suggested for indexing the Internet or other large electronic collections of documents, based on that of book indexes: specific headings with coined modifications.
Complexity theory offers a way of understanding our role within the World Wide Web. Postulating a rhetorical object based on object-oriented analysis and design, we can harness a number of ideas from complexity theory to gain a new perspective on the Web. This paper reviews a number of complexity ideas that may help technical communicators grapple with the exponential growth in the volume of inter-related and interacting rhetorical objects on the Web. It views the rhetorical situation as the result of the law of increasing returns, which has brought us through a phase transition to a new environment with its own emergent properties, creating new roles for writers and new work for managers.
While computing in the Mathematical Sciences is similar to computing in other scientific areas, researchers often lack the resources to carry out their computations. Grid computing and web services offer partial solutions, but they do not address the increasing demand for computing resources and for ad hoc computation networks. This paper describes a solution that uses peer-to-peer technologies to build ad hoc networks of computational agents that all speak XML to carry out computations.
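The core idea of agents that "speak XML" can be illustrated with a minimal sketch. The message vocabulary below (`compute`, `arg`, `result`) is hypothetical, not the paper's actual protocol; it only shows how a task expressed as XML can be handled by any peer that understands the vocabulary:

```python
# Hypothetical XML task format: any peer agent that understands this
# vocabulary can pick up the computation and reply in XML.
import math
import xml.etree.ElementTree as ET

def make_request(op: str, operands) -> bytes:
    """Serialize a computation request as XML."""
    req = ET.Element("compute", op=op)
    for x in operands:
        ET.SubElement(req, "arg").text = str(x)
    return ET.tostring(req)

def handle(message: bytes) -> bytes:
    """A peer agent: parse the XML task, perform it, reply in XML."""
    req = ET.fromstring(message)
    args = [float(a.text) for a in req.findall("arg")]
    ops = {"sum": sum, "product": math.prod}
    res = ET.Element("result", op=req.get("op"))
    res.text = str(ops[req.get("op")](args))
    return ET.tostring(res)

reply = handle(make_request("sum", [1, 2, 3.5]))
```

Because both request and reply are plain XML, they can travel over any peer-to-peer transport and be validated or routed without the intermediaries understanding the mathematics involved.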
As designers of digital products and services, we can dramatically improve our work by generating more concepts early in our projects. In this article, I’ll try to make concept design easier to learn by illustrating three simple tools for generating concepts.
This presentation is for beginning to intermediate users of DITA. It's based on my experience with projects on which I'm project manager, information architect, and writer.
This review of Information Ecologies places the text in the mediating tradition that seeks a middle ground between rigid technological determinism and indifferent value neutrality. The biological metaphors for situated technology use make interesting reading, but the stories may not be compelling evidence that users really can shape technological change from the local level.
Conflict resolution is required wherever we have multiple concurrent changes to a single information set. In practical terms this applies, for example, to concurrent editing environments, to replicated database instances that are being updated independently, and to address-book changes on a PDA that must be merged into a master database that has itself been changed. Resolving these conflicts very often requires human intervention. This paper looks at the use of XML forms of various types to reduce the drudgery involved and to take advantage of some of the greatest strengths of XML, using pipelining and easily-understood representations to allow a decision-maker to work with minimal drag.
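The address-book scenario can be sketched concretely. The following is a minimal three-way merge over XML records, not the paper's implementation: non-conflicting changes are applied automatically, while genuine conflicts are emitted as a small, easily-understood XML report for a human decision-maker (the record format and field names are invented for illustration):

```python
# Three-way merge of an address-book card stored as XML. "ours" is the
# PDA copy, "theirs" is the master database; both diverged from "base".
import xml.etree.ElementTree as ET

BASE   = "<card><name>Ada</name><phone>555-0100</phone></card>"
PDA    = "<card><name>Ada</name><phone>555-0199</phone></card>"            # changed on the PDA
MASTER = "<card><name>Ada Lovelace</name><phone>555-0142</phone></card>"   # changed in master

def fields(xml_text):
    """Flatten a card into a {tag: text} dict."""
    return {child.tag: child.text for child in ET.fromstring(xml_text)}

def merge(base, ours, theirs):
    merged = {}
    conflicts = ET.Element("conflicts")   # XML report for the decision-maker
    for tag in base:
        b, o, t = base[tag], ours[tag], theirs[tag]
        if o == t or t == b:      # identical edits, or only our side changed
            merged[tag] = o
        elif o == b:              # only their side changed
            merged[tag] = t
        else:                     # both sides changed the field differently
            c = ET.SubElement(conflicts, "conflict", field=tag)
            ET.SubElement(c, "ours").text = o
            ET.SubElement(c, "theirs").text = t
    return merged, conflicts

merged, conflicts = merge(fields(BASE), fields(PDA), fields(MASTER))
```

Here the name change merges automatically, while the phone number, edited on both sides, surfaces as a `<conflict>` element. Because the report itself is XML, it can be fed through a pipeline or rendered into whatever form suits the decision-maker.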
Standards for designing data displays—for example, bar graphs, line graphs, pie charts, scatter plots—can be classified into four types: Conventional—emphasis on imitating generic forms that meet readers’ expectations. Perceptual—emphasis on optimizing reader behavior in accessing data visually. Informational—emphasis on transferring information clearly and concisely from designer to reader. Aesthetic—emphasis on taste, cultural values, and expressive elements. While each of these standards has merit, and some overlap occurs among them, they often conflict with each other, leaving the information designer in a quandary as to which standard to follow. Designers can resolve this dilemma by allowing the rhetorical situation—the readers of the display, its purpose, the context in which readers use it—to guide the design process, telling designers when to follow, blend, or flout the standards.
The article presents a point of view about analyzing and designing the user experience within pervasive networks made of distributed services and applications, where the user is the primary actor who freely and opportunistically connects and activates the system components following an activity-driven process. A digital content case study is used to outline the main characteristics of this scenario and to introduce a tool for user experience modelling and design. From the application of this model, the article proposes some considerations about how the design process could change to support this vision.
The building block system includes several types of Connectors that make it possible for designers and architects to link the different areas of a Dashboard together via a consistent, easily understandable navigation model. The system also ensures the resulting information architecture can grow in response to changing needs and content. There's no special stacking hierarchy for the Connectors. However, they do have an official stacking size (most are size 3) in order to keep Dashboards constructed with the building blocks internally consistent.
In this progression we will examine ways that technical communicators can improve both information usability and product usability. The presentation will center on two major points.
With the growing use of dynamic web content generated from relational databases, traditional caching solutions for throughput and latency improvements are ineffective. We describe a middleware layer called Ganesh that reduces the volume of data transmitted without semantic interpretation of queries or results. It achieves this reduction through the use of cryptographic hashing to detect similarities with previous results. These benefits do not require any compromise of the strict consistency semantics provided by the back-end database. Further, Ganesh does not require modifications to applications, web servers, or database servers, and works with closed-source applications and databases. Using two benchmarks representative of dynamic web sites, measurements of our prototype show that it can increase end-to-end throughput by as much as twofold for non-data intensive applications and by as much as tenfold for data intensive ones.
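The hashing technique the abstract describes can be illustrated with a minimal sketch. This is not Ganesh itself; chunk size, wire format, and cache handling are simplified assumptions. The point is that chunks whose hashes the recipient already holds are replaced by short hash references, so only changed parts of a query result cross the network:

```python
# Toy hash-based similarity detection between successive query results.
# Known chunks are sent as ('ref', digest); new chunks as ('raw', bytes).
import hashlib

CHUNK = 16  # toy chunk size; a real system would use much larger chunks

def chunks(data: bytes):
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

def encode(result: bytes, cache: dict) -> list:
    """Sender side: replace previously seen chunks with hash references."""
    out = []
    for c in chunks(result):
        h = hashlib.sha256(c).hexdigest()
        out.append(("ref", h) if h in cache else ("raw", c))
        cache[h] = c
    return out

def decode(stream, cache: dict) -> bytes:
    """Receiver side: reassemble, resolving refs from the local cache."""
    parts = []
    for kind, val in stream:
        c = val if kind == "raw" else cache[val]
        cache[hashlib.sha256(c).hexdigest()] = c
        parts.append(c)
    return b"".join(parts)

sender_cache, receiver_cache = {}, {}
first  = b"SELECT-result: rows 1..40, mostly static content."
second = b"SELECT-result: rows 1..40, mostly static CHANGED."

decode(encode(first, sender_cache), receiver_cache)   # warm both caches
wire = encode(second, sender_cache)
rebuilt = decode(wire, receiver_cache)
```

Because similarity is detected on opaque byte chunks, neither side needs any semantic understanding of the SQL queries or their results, which is consistent with the abstract's claim that Ganesh works without modifying applications or databases.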
Since this is going to be a wild ride across some disciplines that don’t normally talk to each other, let me start with a short, structural overview to get everyone situated. I’m going to begin by defining some terms. They’re all relatively simple, common terms, but I’m going to attempt to bring them together in a particular configuration; in order for that configuration to make sense, I need to settle on some loose definitions and, at the same time, make the terms relevant to our discussion. Next--and this is probably the bulk of the talk--I’ll be outlining a genealogy of work, particularly as it relates to interface design. In this history, I’m interested in understanding, from a critical perspective, what happens to work as it increasingly takes place within the computer interface. I’ll say here that the end of this history is where the terms “postmodernism,” “work,” and “interface” come together. Finally, I’ll offer some suggestions—and examples—of ways that we -- as teachers, researchers, designers, communicators -- can begin to deal productively with some of the problems I see with how interfaces are currently being designed and used.
Most intranets are not all that different from each other - the same content subjects tend to apply to most companies and organizations. Content-Strategy has developed a universal intranet content chart that you can use directly - or modify - for free.