The final decade of the last century witnessed the dramatic rise of hypertext as a literary, technical, social, and intellectual phenomenon. Today, despite the fact that hypertext provides the conceptual underpinnings for the World Wide Web (among other things), 'hypertext' remains a relatively peripheral term. In this talk, I'll track some of the ways that 'hypertext' has been articulated during the last five decades, describing how the social construction of hypertext inscribed the technology(ies) in limiting and ultimately self-defeating ways. I'll then attempt to track (and construct) some possible futures for a dramatically redefined hypertext, one constructed as an 'ethic of reference' within and among social communities rather than a technical practice.
The nature of hypertext challenges many underlying assumptions for traditional literary critics. Literary critics frequently like to think that they have objectively looked at the lexias of the work, thoughtfully considered them, and constructed a solid interpretation or analysis of the work based on those lexias. Hypertext, however, presents the possibility that two critics who are reading the same work may have differing sets of lexias from which to work. Thus, even if critics objectively consider the lexias before them, they cannot free themselves from the subjectivity of the reading performance that made those lexias (and not others) appear. This raises the concern that, if hypertext critics can only present subjective views of the text, there may be little or no benefit to reading or writing those critiques.
Since this is going to be a wild ride across some disciplines that don’t normally talk to each other, let me start with a short, structural overview to get everyone situated. I’m going to begin by defining some terms. They’re all relatively simple, common terms, but I’m going to attempt to bring them together in a particular configuration; in order for that configuration to make sense, I need to settle on some loose definitions and, at the same time, make the terms relevant to our discussion. Next (and this is probably the bulk of the talk), I’ll be outlining a genealogy of work, particularly as it relates to interface design. In this history, I’m interested in understanding, from a critical perspective, what happens to work as it increasingly takes place within the computer interface. I’ll say here that the end of this history is where the terms “postmodernism,” “work,” and “interface” come together. Finally, I’ll offer some suggestions, and examples, of ways that we, as teachers, researchers, designers, and communicators, can begin to deal productively with some of the problems I see with how interfaces are currently being designed and used.
Linking, when properly executed, enhances the value of content by providing a consistent perspective and organizational scheme that enriches the user's experience. Link authoring, like content authoring, is a creative process of making connections between disparate yet related information. Effective link authoring requires intellect, creativity, and domain knowledge to define the relationships among concepts that can support a particular pedagogical objective. The contribution of hypermedia link authoring is often poorly understood and unrecognized by traditional academic and publishing communities. Publishers of commercial and academic hypermedia typically neither formally recognize link authoring as something that should be protected by copyright, nor do they extend to those involved in link authoring the same degree of credit or remuneration given conventional content authors or illustrators.
When I first came to Boeing, my workgroup delivered documents (stored in either Microsoft Word or XyWrite) in hardcopy format. As more modern document delivery options became available to us, I convinced the customers, the development staff, and management to adopt these new technologies to make documentation maintenance and delivery easier. I also converted over 1,000 pages of documentation (such as language reference manuals, quick reference guides, installation guides, and user guides) from strict text formatting to hypertext. This chapter shares what I learned. Here are some guidelines I recommend you follow when you begin to convert your paper-based documents to hypertext. Each guideline is explained in more detail later in this chapter.
This presentation traces the locations and roles of computer documentation over the latter half of the 20th century in order to construct a model of information/knowledge space as it relates to different forms of work. The paper then provides suggestions about future forms of documentation and interface based on ethnographic research of workers in recently emerging forms of work, including nonlinear audio/video production and videogame playing. The final section of the paper provides concrete suggestions about forms of documentation and interface that will be required to support these new forms of work.
Before coding any part of a hypermedia computer-based training (CBT) system, designers need to decide how much control their users should have over their individual paths through the system. Designers can choose from three different levels of control within a hypermedia CBT system: complete computer control, complete user control, and adaptive computer control. Each level of control is suited to different types of audiences and system goals. Current research provides some guidelines for designers—showing which types of audiences and system goals are suited to which methods of control.
The personal computer has had a significant impact on the delivery of educational material. Hypermedia systems give students the ability to explore concepts in innovative ways. Unfortunately, it appears that many hypermedia designers have ignored the critical early planning stages. This paper provides an overview of three of those planning stages: audience analysis, system goals analysis, and control analysis.
There are many models of hypertext, distinguished by a number of factors such as the underlying semantic data model (link typing and node typing), the degree of dynamic linking in the hypertext, and how dynamism and other behaviours are implemented. This essay examines a particular approach to dynamism in hypertext, based on the degree of similarity between a text passage in a source node and the text of a target node. It reviews work carried out over the past decade in creating systems for markup-based querying and dynamic hypertext, with particular emphasis on a model of dynamic hypertext that computes hypertext links on the fly using queries.
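The abstract above describes a model in which links are not stored but computed at read time from the similarity between a source passage and candidate target nodes. As a minimal illustration of that idea (not a reconstruction of any of the systems the essay reviews), the sketch below scores candidate nodes with a bag-of-words cosine similarity and links to those above a threshold; the node names and the threshold value are invented for the example.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two texts treated as bags of words."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = sqrt(sum(v * v for v in ca.values())) * sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def dynamic_links(source_passage, nodes, threshold=0.3):
    """Compute hypertext links on the fly: return ids of target nodes
    whose text is sufficiently similar to the source passage."""
    return [node_id for node_id, text in nodes.items()
            if cosine_similarity(source_passage, text) >= threshold]

# Hypothetical node store for the example.
nodes = {
    "memex": "Bush imagined a memex storing associative trails of documents",
    "recipes": "A collection of soup recipes for cold evenings",
}
print(dynamic_links("associative trails link documents together", nodes))  # ['memex']
```

Because the links are recomputed from a query each time, editing a node's text immediately changes which links appear, with no stored link database to maintain.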
Fifty years before the Web and thirty years before the personal computer, Vannevar Bush envisioned a new machine to make sense of the growing mountains of information, creating the notions of 'hypertext' and the modern link.
World Wide Web authors must cope in a hypermedia environment analogous to second-generation computing languages, building and managing most hypermedia links using simple anchors and single-step navigation. Following this analogy, sophisticated application environments on the World Wide Web will require third- and fourth-generation hypermedia features. Implementing third- and fourth-generation hypermedia involves designing both high-level hypermedia features and the high-level authoring environments system developers build for authors to specify them. We present a set of high-level hypermedia features including typed nodes and links, link attributes, structure-based query, transclusions, warm and hot links, private and public links, hypermedia access permissions, computed personalized links, external link databases, link update mechanisms, overviews, trails, guided tours, backtracking, and history-based navigation. We ground our discussion in the hypermedia research literature, and illustrate each feature both from existing implementations and a running scenario. We also give some direction for implementing these on the World Wide Web and in other information systems.
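Several of the features listed above (typed nodes and links, link attributes, external link databases, and structure-based query) can be illustrated with a small data model. The sketch below is a hypothetical Python rendering of those ideas, not an implementation from the paper; all class and field names are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    node_type: str          # typed nodes, e.g. "definition" or "example"
    content: str

@dataclass
class Link:
    source: str
    target: str
    link_type: str          # typed links, e.g. "cites" or "elaborates"
    attributes: dict = field(default_factory=dict)   # link attributes

class LinkDatabase:
    """External link database: links are stored apart from documents,
    so they can be marked private or public, updated, and queried."""
    def __init__(self):
        self.links = []

    def add(self, link):
        self.links.append(link)

    def query(self, link_type=None, source=None):
        """Structure-based query over stored links by type and source."""
        return [l for l in self.links
                if (link_type is None or l.link_type == link_type)
                and (source is None or l.source == source)]

db = LinkDatabase()
db.add(Link("intro", "memex", "cites", {"visibility": "public"}))
db.add(Link("intro", "glossary", "elaborates"))
print([l.target for l in db.query(link_type="cites")])  # ['memex']
```

Keeping links in an external database rather than embedded in documents is what makes features like link update mechanisms and computed personalized links feasible: the links can be filtered, rewritten, or regenerated without touching the documents themselves.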
In the short term of three to five years, I don't really expect significant changes in the way hypertext is done compared to the currently known systems. Of course new stuff will be invented all the time, but just getting the things we already have in the laboratory out into the world will be more than enough. I expect to see three major changes: the consolidation of the mass market for hypertext; commercial information services on the Internet; the integration of hypertext and other computer facilities.
To understand how much content effluvia we're subjected to, I wanted to see how many links are on the homepages of popular websites. For example, if I go to the homepage of the Huffington Post, I see 720 links, in one shot. Then click inside to a story and you've nearly doubled that number; it adds up pretty quickly. What about the tech blogs? BoingBoing Gadgets, 514. Gizmodo, 468. Engadget, 432. All on one page. And on average, fewer than 1% of the links on news sites and blogs actually point to rich content; 99% are navigation and other article headlines. Aggregation site Techmeme has a whopping 1,081 links.
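Tallies like the ones above can be made by inspecting a page's source. As a rough sketch of the method, the snippet below counts `<a>` anchors that carry an `href` attribute using Python's standard-library html.parser; in practice the homepage HTML would be fetched first, and links that scripts inject after load would be missed by this static count.

```python
from html.parser import HTMLParser

class LinkCounter(HTMLParser):
    """Count <a href=...> anchors in an HTML document."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the start tag.
        if tag == "a" and any(name == "href" for name, _ in attrs):
            self.count += 1

def count_links(html):
    counter = LinkCounter()
    counter.feed(html)
    return counter.count

sample = '<p><a href="/a">one</a> <a href="/b">two</a> <a name="x">anchor</a></p>'
print(count_links(sample))  # 2
```

Note that the `<a name="x">` anchor is not counted, since it has no `href` and so is a link target rather than a navigable link.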
This article revisits three past articles about the implications of hypermedia in the 21st century. Each August, the ACM Journal of Computer Documentation reprints a classic article, book chapter, or report along with several analytical commentaries and a response by the author of the classic document. In this context, a 'classic' document means one that was published at least five years ago but is no longer in print. It also means one that raises issues of lasting importance to the profession.
Few designers explicitly think about their applications' interrelationships. Designers appear not to have a deep enough conceptualization of their domains to identify intuitive relationships and to realize the full scope and interconnections within those domains. RNA (Relationship-Navigation Analysis) gives designers and developers an analysis tool for thinking about an information domain in terms of its interrelationships. RNA incorporates a complete taxonomy of generic relationship types that applies to any application domain.
Hypertext is a novel approach to computer-based information management based on associative indexing. The concept in general and the characteristics of typical systems are briefly reviewed. Strategies for applying hypertext techniques to the process of writing a technical document are examined. The way in which hypertext documents are used is discussed, focusing on a commonly encountered problem -- user disorientation within the document. Hypertext-based technical documents are compared and contrasted against their paper-based antecedents.
The Hypertext Functionality (HTF) field studies techniques for, and the impact of, supplementing everyday computer applications with hypertext (or hypermedia) functionality. The HTF approach encourages system developers to think actively about an application's interrelationships, and about whether users should access and navigate along these relationships directly. It views hypertext as value-added support functionality. The HTF approach fosters three major areas of research: using HTF to improve personal and organizational effectiveness, HTF and application design, and integrating HTF into applications.
The attention of the audience is a writer's most precious possession, and the value of audience attention is seldom more clear than in writing for the Web. The time, care, and expense devoted to creating and promoting a hypertext are lost if readers arrive, glance around, and click elsewhere.
This piece discusses some of the most compelling elements of current hypertext theory. By practicing the theory it preaches, it hopes to model explicitly the theoretical interrogations at issue.
Because of the nature and complexity of collaborative work, there is currently much interest in examining computer support for team endeavors. Hypertext technology is particularly suited to providing such support. Many current hypertext applications support collaborative endeavors in diverse fields. Rensselaer’s Design Conference Room (DCR) is an Electronic Meeting System facility intended to support mechanical and software engineering design teams. Teams meeting or working in the DCR have access to sophisticated networking and hypertext technologies. Careful study of the processes and products of DCR teams will contribute to an understanding of how hypertext (and other computer technologies) can best support team endeavors.
This proposal concerns the management of general information about accelerators and experiments at CERN. It discusses the problems of loss of information about complex evolving systems and derives a solution based on a distributed hypertext system.
Nobody is offering courses in how to prepare hypermedia, nor are there a large number of jobs available for hypermedia authors. As we begin to come up against the limits imposed by the volume of existing knowledge, we will eventually be forced to place more importance on managing our information explosion.
The 'article' approach is better than the 'card' (or 'topic') approach. Concatenate your hypertext nodes and format the headings relatively, for increased comprehensibility of large amounts of conceptual material. Placing node bodies contiguously enhances visibility of information structure.