Technical writers who must maintain complex, delicately interconnected information often look to object-oriented SGML databases as a way of storing, retrieving, reusing, and reassembling the constituent objects of new documents, created on the fly to respond to a particular customer’s needs. The SGML tags help identify structural packages such as procedures, illustrations, or glossary items; in a large database, then, writers can filter out unwanted material, locating only the structural pieces they need for the job in hand. For instance, to produce a quick reference, a writer might pull up the names of procedures and their steps, but not the introductions or explanations. Similarly, a user could search for illustrations only. But illustrations of what? With no subject matter defined, such searches result in hundreds, even tens of thousands of hits. To speed up access to the precise passages wanted, end users and writers need a way to narrow their searches by defining the precise subject matter (the meaning, or semantics) as well as the structural elements they seek.
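The combined structural-plus-semantic filtering described above can be sketched in a few lines. This is a toy illustration (the document fragment, the `subject` attribute, and the element names are all invented for the example, not drawn from any real SGML database): a query pulls only procedure titles and steps, and only for the requested subject.

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment of an SGML-style document database; the "subject"
# attribute carries the semantic metadata the passage calls for.
DOC = """
<manual>
  <procedure subject="printer-setup">
    <title>Install the driver</title>
    <intro>Before you begin, check the model number.</intro>
    <step>Download the driver package.</step>
    <step>Run the installer.</step>
  </procedure>
  <procedure subject="network-config">
    <title>Set a static IP</title>
    <step>Open the network settings.</step>
  </procedure>
</manual>
"""

def quick_reference(root, subject):
    """Pull only procedure titles and steps for one subject,
    skipping introductions and explanations."""
    out = []
    for proc in root.iter("procedure"):
        if proc.get("subject") != subject:
            continue  # semantic filter: wrong subject matter
        out.append(proc.findtext("title"))          # structural filter:
        out.extend(s.text for s in proc.findall("step"))  # titles and steps only
    return out

root = ET.fromstring(DOC)
print(quick_reference(root, "printer-setup"))
```

Without the `subject` filter, the same structural query would return every procedure in the database, which is exactly the flood of hits the passage warns about.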
For programmers, a programming language is a software tool whose interface consists of its lexicon, grammar, and semantic rules. From this view, programming in a language is using that tool to accomplish something. As we will see shortly, languages vary greatly in how closely they approach the programmer's conceptual model.
This paper presents a semantic portal, SEMPort, which provides better user support with personalized views, semantic navigation, ontology-based search and three different kinds of semantic hyperlinks. Distributed content editing and provision is supplied for the maintenance of the contents in real-time. As a case study, SEMPort is tested on the Course Modules Web Page (CMWP) of the School of Electronics and Computer Science (ECS).
The paper addresses the issue of how online natural language question answering, based on deep semantic analysis, may compete with currently popular keyword search, open domain information retrieval systems, covering a horizontal domain. We suggest the multiagent question answering approach, where each domain is represented by an agent which tries to answer questions taking into account its specific knowledge. The meta–agent controls the cooperation between question answering agents and chooses the most relevant answer(s). We argue that multiagent question answering is optimal in terms of access to business and financial knowledge, flexibility in query phrasing, and efficiency and usability of advice. The knowledge and advice encoded in the system are initially prepared by domain experts. We analyze the commercial application of multiagent question answering and the robustness of the meta–agent. The paper suggests that a multiagent architecture is optimal when a real world question answering domain combines a number of vertical ones to form a horizontal domain.
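The control flow of the multiagent arrangement can be sketched minimally as follows. All names and the keyword-overlap scoring are our own illustration, not the paper's method: each domain agent reports how relevant a question is to its knowledge, and the meta-agent forwards the question to the best-scoring agent.

```python
# Hypothetical domain agents for two vertical domains; each holds its own
# vocabulary and (stand-in) answer source.
DOMAIN_AGENTS = {
    "tax": {"keywords": {"deduction", "irs", "income"},
            "answer": "Consult the tax knowledge base."},
    "investment": {"keywords": {"stock", "bond", "portfolio"},
                   "answer": "Consult the investment knowledge base."},
}

def agent_relevance(agent, question_words):
    """A domain agent's confidence: fraction of its keywords in the question."""
    hits = agent["keywords"] & question_words
    return len(hits) / len(agent["keywords"])

def meta_agent(question):
    """The meta-agent: pick the answer from the most relevant domain agent."""
    words = set(question.lower().split())
    best = max(DOMAIN_AGENTS.values(),
               key=lambda a: agent_relevance(a, words))
    return best["answer"]

print(meta_agent("How large a deduction does the IRS allow on income"))
```

A real system would replace the keyword sets with each agent's deep semantic analysis; the point of the sketch is only the division of labor between vertical agents and the coordinating meta-agent.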
In this article, the author demonstrates that the semiotic model proposed by Charles Morris enables us to optimize our understanding of technical communication practices and provides a good point of inquiry. To illustrate this point, the author surveys semiotic approaches taken by scholars in technical communication and elaborates Morris's model by analyzing the visual and verbal elements of technical communication brochures at the semantic, syntactic, and pragmatic levels. The discussion of the semiotic approach, reinforced by various examples, illustrates that the semiotic model can be a tangible theoretical and practical tool to help students and practitioners study and analyze the use of visual and verbal elements in technical communication.
Since the advent of the Web, we've seen a myriad of design schemas evolve--from the simple navigation/content style of site to the cluttered portal. And as this evolution has progressed, so has the war between UI designers and usability experts. On one side, there are usability experts who want to make every website look exactly like Yahoo because users know Yahoo and so they will automatically know how to use the site. On the other side, there are UI designers who want to design entire sites in Flash and Shockwave just because it's cool. Overly dramatic? Well, yes, maybe a little--but it's not entirely a false analysis. Many UI designers who work with usability folk complain that their creativity is hampered, whereas many usability gurus complain that designers are confusing a site's users with their visual semantics. But are the goals of UI designers and the usability folk that far apart?
Hypertext is implied in complex linear texts such as literary analyses. These texts 'model' hypertext nodes and links in their linguistic properties and constitute a system of semantic messages and cohesive ties. A conceptual bridge between conventional and hyper-texts can be constructed for technical and scientific applications based on these linguistic properties. Linguistic analysis can also serve as a cohesive construct extending to visual and auditory displays in hyperdocuments. Hypertext instructional designers are challenged to impose comparable mappings as analogues to these cohesive relationships to test memory, distance, and other variables.
There are many models of hypertext, distinguished by a number of factors such as the underlying semantic data model (link typing and node typing), the degree of dynamic linking in the hypertext, and how dynamism and other behaviours are implemented. This essay examines a particular approach to dynamism in hypertext, based on the degree of similarity between a text passage in a source node and the text of a target node. It reviews work carried out over the past decade in creating systems for markup-based querying and dynamic hypertext, with particular emphasis on a model of dynamic hypertext that computes hypertext links on the fly using queries.
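The core idea of computing links on the fly from text similarity can be shown in miniature. This is an illustrative sketch, not the reviewed model itself: nodes are treated as bags of words, and a link from a source passage is resolved at query time to whichever target node scores highest under cosine similarity.

```python
from collections import Counter
import math

def vector(text):
    """Bag-of-words term vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical target nodes in the hypertext.
NODES = {
    "markup-intro": "sgml markup tags define document structure",
    "query-engine": "queries compute hypertext links on the fly",
}

def dynamic_link(passage):
    """Compute the link destination at query time: the most similar node."""
    src = vector(passage)
    return max(NODES, key=lambda nid: cosine(src, vector(NODES[nid])))

print(dynamic_link("how does the engine compute links from queries"))
```

Because nothing is stored, the link endpoint changes automatically whenever node content changes — the defining property of dynamic hypertext as the essay describes it.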
Semantic technology can be as heavy and stifling for any audience as stem-cell research can be to high-school students. But Carla Thompson of Guidewire did a terrific job of coming up with discussion topics and moderating the panel. Everyone survived the ordeal without any sign of dozing.
The grouping and labelling of thematically related controls within a form is an important aspect of providing semantic information so users can understand and complete a form successfully. Differences in the quality and implementation of support across user agents can hamper some users' ability to benefit from this information. This must not be taken as a disincentive to developers, as the benefits of using these elements outweigh the negatives. But it is clear that some assistive technology vendors need to improve their implementation of HTML features that enhance accessibility, so their users can gain the most benefit.
View-oriented group communication is an important and widely used building block for many distributed applications. Much current research has been dedicated to specifying the semantics and services of view-oriented Group Communication Systems (GCSs). However, the guarantees of different GCSs are formulated using varying terminologies and modeling techniques, and the specifications vary in their rigor. This makes it difficult to analyze and compare the different systems.
Until recently, the Semantic Web was little more than a name for the next-generation Web infrastructure as envisioned by its inventor, Tim Berners-Lee. With the introduction of XML and RDF, and new developments such as RDF Schema and DAML+OIL, the Semantic Web is rapidly taking shape. This paper gives an overview of the state-of-the-art in Semantic Web technology, the key relationships with traditional hypermedia research, and a comprehensive reference list to various sets of literature (hypertext, Web and Semantic Web). A research agenda describes the open research issues in the development of the Semantic Web from the perspective of hypermedia research.
According to experts, the Semantic Web, an enhancement of the conventional web, is paving the way for new functionalities in future, web-based applications. The possible scenarios we could face are reminiscent of fiction and cinema, where one need only think of the question to which an answer is sought. An invisible assistant would then perform the search.
The distinction between information architecture and usability may seem like semantics, but there are significant differences between the two disciplines. Though they are often discussed interchangeably, and practitioners are often well-versed in both, information architecture and usability differ in their scope and areas of focus.
This paper presents a method for semantic Information Retrieval (IR) implemented on a Digital Library. It is well known that a Digital Library should have an IR system through which users can access every kind of media from anywhere. However, little has been done about retrieval errors that stem from individual differences in users' requests. This is a significant problem for search efficiency in IR. Our approach does not use the request itself but the concepts behind it. This makes it possible to retrieve semantic information rather than merely matching the word strings of the request.
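The concept-versus-word-string distinction can be made concrete with a toy example (the concept table and catalogue entries are invented for illustration): both the request and the catalogue entries are mapped to concept identifiers, so differently worded requests retrieve the same items.

```python
# Hypothetical word-to-concept mapping standing in for a real thesaurus.
CONCEPTS = {
    "movie": "MOTION_PICTURE", "film": "MOTION_PICTURE", "cinema": "MOTION_PICTURE",
    "book": "PUBLICATION", "monograph": "PUBLICATION",
}

LIBRARY = {
    "item-1": "early cinema history",
    "item-2": "rare book preservation",
}

def concepts_of(text):
    """Map a piece of text to the set of concepts it mentions."""
    return {CONCEPTS[w] for w in text.lower().split() if w in CONCEPTS}

def semantic_search(request):
    """Match on concepts, not on the literal word strings of the request."""
    wanted = concepts_of(request)
    return [item for item, desc in LIBRARY.items()
            if concepts_of(desc) & wanted]

print(semantic_search("movie archives"))
```

A plain string match for "movie" would find nothing here; the concept layer is what bridges the user's wording and the catalogue's.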
How do you string together disparate pieces of information to generate working knowledge? For now, you need to put it together yourself as part of a document or other type of master repository. Hopefully, this will change in the near future with linked data applications, personalization of content based on semantic interpretation of information, and information aggregators to capture and present usable knowledge.
As DITA diversifies to occupy more roles within an organization, single- application solutions can no longer provide the specialized support each author or product may require. Instead, a cross-application, cross-silo strategy that shares DITA as a common semantic currency lets groups use the toolset most appropriate for their content authoring and management needs, while sharing content and even moving authoring responsibility between groups throughout the content life cycle. Beyond automation of known processes, we now have the flexibility to combine new applications and sources of content as needed, providing processing flexibility and an adaptable, evolutionary content strategy.
Ontology mapping is a key problem to be solved for the success of the Semantic Web and related technologies. An ontology mapping algorithm aims at finding correspondences (or mappings) between entities of the source and target ontologies by combining several matching components, i.e., individual matchers, that exploit one or more sources of information encoded within the ontologies. In this paper, we investigate linguistic techniques for ontology mapping and underline their importance in paving the way to other matching techniques. We define a general mapping model architecture and discuss an implementation in the Lucene ontology matcher (LOM). LOM leverages the features of the Lucene search engine library. The basic idea is to gather the different kinds of linguistic information of the source ontology entities in Lucene documents that will be stored into an index. Mappings are discovered by using the values of entities in the target ontology as search arguments against the index created from the source ontology. Extensive experimental results using a popular benchmark test suite show the effectiveness of this approach in terms of precision, recall, F-measure and execution time as compared to other linguistic approaches.
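The index-then-search workflow described above can be mimicked without Lucene itself. In this sketch a plain Python inverted index stands in for the Lucene index, and the ontology labels are invented for illustration: source entities become indexed "documents", and each target entity's label is run against the index as a search query.

```python
# Hypothetical source and target ontology entities with linguistic labels.
SOURCE = {"src:Author": "author writer of a document",
          "src:Paper": "paper article publication"}
TARGET = {"tgt:Writer": "writer author",
          "tgt:Article": "article paper"}

def build_index(entities):
    """Index each source entity's label terms (the Lucene-index stand-in)."""
    index = {}
    for eid, label in entities.items():
        for term in label.lower().split():
            index.setdefault(term, set()).add(eid)
    return index

def match(index, label):
    """Use a target label as a search query; score by shared terms."""
    scores = {}
    for term in label.lower().split():
        for eid in index.get(term, ()):
            scores[eid] = scores.get(eid, 0) + 1
    return max(scores, key=scores.get) if scores else None

index = build_index(SOURCE)
mappings = {tid: match(index, label) for tid, label in TARGET.items()}
print(mappings)
```

Lucene adds ranking, tokenization, and fielded documents on top of this, but the mapping discovery step — querying the source-ontology index with target-entity values — follows the same shape.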
XHTML is HTML described as an application of XML. It is very similar to HTML; indeed, all the element names and their semantics are identical. It has some important differences, however, and we will look at the more important of these now.
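One of those differences is that XHTML, being XML, must be well-formed: an unclosed element that a browser forgives in HTML is a hard error in XHTML. A quick way to see this is to feed both fragments to an XML parser (the fragments below are our own examples).

```python
import xml.etree.ElementTree as ET

xhtml = '<p>A paragraph with a <br/> line break.</p>'   # every element closed
html  = '<p>A paragraph with a <br> line break.</p>'    # legal HTML, not XML

def well_formed(fragment):
    """True if the fragment parses as XML, as XHTML requires."""
    try:
        ET.fromstring(fragment)
        return True
    except ET.ParseError:
        return False

print(well_formed(xhtml), well_formed(html))
```

The XHTML fragment parses; the HTML one fails on the unclosed `<br>`, which is exactly the kind of difference that matters when documents must be machine-processable.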
What is the semantic web, fondly known as Web 3.0 (and sometimes known as the contextual web)? Here it is in nine words, distilled from a jargon-heavy conference: user-experience, meaning, intent, story, relevance, mobile, data, share, and free.
The concept of thumbnails is common in image representation. A thumbnail is a highly compressed version of an image that provides a small, yet complete visual representation to the human eye. We propose the adaptation of the concept of thumbnails to the domain of documents, whereby a thumbnail of any document can be generated from its semantic content, providing an adequate amount of information about the documents. However, unlike image thumbnails, document thumbnails are mainly for the consumption of software such as search engines, and other content processing systems. With the advent of the semantic web, the requirement for machine processing of documents has become extremely important. We give particular attention to electronic documents in XML and in RDF/XML, with a view towards the processing of documents in the semantic web.
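A minimal sketch of a document thumbnail might look like the following. The term-frequency scheme here is our illustration, not necessarily the authors' method: the document is boiled down to its few most telling content words, a compressed surrogate intended for software such as search engines rather than for human readers.

```python
from collections import Counter

# Small stopword list (illustrative) so function words don't dominate.
STOPWORDS = {"the", "a", "of", "and", "to", "in", "is", "so", "can"}

def thumbnail(text, size=3):
    """Return the `size` most frequent content words as the thumbnail."""
    words = [w for w in text.lower().split() if w not in STOPWORDS]
    return [w for w, _ in Counter(words).most_common(size)]

doc = ("semantic web documents in rdf carry machine readable semantics "
       "so semantic search engines can process web documents")
print(thumbnail(doc))
```

A production version would work from the document's semantic markup (XML or RDF/XML structure) rather than raw word counts, but the contract is the same: a small, machine-consumable surrogate for the full document.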
It is important that people produce clean, interoperable data on the Semantic Web. Some RDF techniques can currently only be learned through the RDF community, through hours of research, or through implementation experience, so this is an attempt to gather some useful but quick hints and tips into one place.