Information design (also known as 'information architecture') is the study of how the details of complex systems are structured and presented. Among these systems are websites, user interactions, databases, technical documentation, and human-computer interfaces.
As XML-formatted content and data become pervasive on intranets and the Internet, the need to minimize individual processing time grows. XPath has evolved into a rich expression language for querying and extracting data in a precise way. While it was designed to be used by a host language such as XSLT or XQuery, an XPath processor can also be used quite usefully standalone or as part of an application framework.
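Standalone XPath use of the kind described here can be sketched with Python's standard library: the `xml.etree.ElementTree` module implements a limited subset of XPath 1.0 outside any host language. The sample document and query below are illustrative, not from the article.

```python
import xml.etree.ElementTree as ET

# A small standalone XPath-style query, independent of any
# host language such as XSLT or XQuery.
doc = ET.fromstring("""
<catalog>
  <book lang="en"><title>XSLT</title><price>39.95</price></book>
  <book lang="de"><title>XQuery</title><price>49.95</price></book>
</catalog>
""")

# ElementTree supports a subset of XPath: this selects every <title>
# under a <book> whose lang attribute equals "en".
titles = [t.text for t in doc.findall(".//book[@lang='en']/title")]
print(titles)  # ['XSLT']
```

A full XPath 1.0/2.0 processor (e.g. in an XSLT or XQuery engine) adds axes, functions, and predicates well beyond this subset.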
This paper presents a semantic portal, SEMPort, which provides better user support through personalized views, semantic navigation, ontology-based search, and three different kinds of semantic hyperlinks. Distributed content editing and provision are supported for real-time maintenance of the contents. As a case study, SEMPort is tested on the Course Modules Web Page (CMWP) of the School of Electronics and Computer Science (ECS).
The Mozilla platform offers rich support for XML techniques, from low-level ones (XPath, RDF, DOM, E4X) to rendering dialects such as XHTML, SVG, XUL, and XForms, making it a natural choice for the XML-inclined and a platform of choice for developing rich connected applications. When building dynamic applications, the developer often faces a common set of programming patterns: gathering data from various remote and local sources, storing data with an optional transformation phase, and updating parts of the GUI to reflect modifications in the data store. With today's ubiquitous use of XML as a data-exchange syntax, a major part of these tasks can be achieved with XML-based solutions. In this article we present an XML-centric solution that aims at minimizing the impedance mismatch between data models that plagues classical architectures involving, for instance, XML/object/relational translation. It combines some of Mozilla's existing capabilities with REX (Remote Events for XML) and a native XML database with XQuery support. REX provides the means to update the XUL-based GUI and the database, while the XML database serves as a versatile storage engine.
This chapter will take you a few steps further by showing you how to add text and markup to your result tree with XSLT templates. First, you'll add literal text to your output. Then you'll work with literal result elements, that is, elements that are represented literally in templates. You'll also learn how to add content with the text, element, attribute, attribute-set, comment, and processing-instruction elements. In addition, you'll get your first encounter with attribute value templates, which provide a way to define templates inside attribute values.
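The instruction elements the chapter covers can be seen together in one small template. This is a minimal sketch using standard XSLT 1.0 element names; the `item` source element and the output structure are hypothetical examples, not taken from the chapter.

```xml
<xsl:template match="item" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- A literal result element, with an attribute value template in {}: -->
  <div class="item-{@id}">
    <xsl:text>Literal text in the output. </xsl:text>
    <!-- Computed markup via the element and attribute instructions: -->
    <xsl:element name="span">
      <xsl:attribute name="lang">en</xsl:attribute>
      <xsl:value-of select="title"/>
    </xsl:element>
    <xsl:comment> generated by the stylesheet </xsl:comment>
    <xsl:processing-instruction name="app">render="fast"</xsl:processing-instruction>
  </div>
</xsl:template>
```

The literal `<div>` is copied to the result tree as-is, while `xsl:element` and `xsl:attribute` build equivalent markup dynamically, which is useful when names must be computed at run time.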
We shall present an interactive demonstration of asynchronous XML pipelines. To begin, we shall show linear XQuery pipelines developed with a recursive pull pattern. We shall demonstrate that this pattern can be improved by developing pipelines using a declarative scheduling language (DPML). We shall demonstrate in-pipe exception handling, and we shall also show pipeline breakpoints and pipeline debugging. In addition, we shall show modular pipeline decomposition and layered pipelines written in both declarative and procedural languages.
Polar Bear book co-author Peter Morville shares the inside stories about the making of the new edition--from its original scribblings on an airsick bag to the ideas that didn’t make it in--and his thoughts about how the field has changed since their book was first published.
Every web designer should know and understand a Web site’s parameters before lifting a finger to start designing the site. In this article, you will learn the basics required to start designing business Web sites. While this information is useful if you want to build sites for others, it can also serve as a checklist article for sites you want to build for yourself.
As companies and consumers rely more on Web services, it becomes increasingly important for Web services developers to know how to properly design, develop, deploy, and ultimately manage a Web services system. However, because of the inherent complexities that can arise with a Web service implementation, it can be difficult to grasp practical fundamentals and devise a step-by-step plan for Web services development.
From a software development viewpoint, model-driven architecture (MDA) encourages efficient use of system models. It also encourages reusing best practices as families of systems are produced. One of the main aims of MDA is to separate design from architecture, which places the business analyst in a unique and potentially powerful position within the organization. Learn how you as a business analyst can take an active role in this type of architecture.
Survey of business information services in corporate information services, based on in-depth interviews with leading business information managers. Key findings are: business information budgets have been stable, with at least inflationary increases built in; business conditions have been turbulent of late, but this has had no real impact on the services to date; with more information rolled out to clients' desktops, the services are all working to add value through a variety of approaches, including training, evaluation and analysis, business and client development, and generally undertaking more complex work; there has been some pressure on staffing headcount in the services during the year; recruiting suitable information professionals is a difficult process; offshoring information and research work has not expanded significantly, but more companies are considering this option; 'techno-centric' knowledge management remains important in some companies, particularly law firms, but is fading as a practice in others; there is great interest and envisaged potential in social technology and Web 2.0 tools and techniques, but not much serious deployment yet; even in mature corporate information environments, marketing business information services is still seen as crucial by 90 per cent of respondents; 55 per cent of the services provide some kind of competitor information function, albeit not at a high level; a significant majority (75 per cent) of services support compliance functions such as 'know your client' and anti-money-laundering checks; existing copyright provisions are seen as a barrier to effective information dissemination within companies by 80 per cent of respondents; LexisNexis takes over at the top of the expenditure league; the demand for information on Asian business markets is growing; almost all services are committed to training users in the discovery and use of digital business information sources; and the organization, management, and sometimes realignment of services is the highest strategic priority.
Gone are the days when a footer merely ended the page. Now it is just as likely to be an all-encompassing launchpad to other areas of the site. Typically a footer will span the full width of the layout, and it is usually used to display information at the bottom of the content hierarchy.
The paper addresses the issue of how online natural language question answering, based on deep semantic analysis, may compete with currently popular keyword search, open domain information retrieval systems, covering a horizontal domain. We suggest the multiagent question answering approach, where each domain is represented by an agent which tries to answer questions taking into account its specific knowledge. The meta-agent controls the cooperation between question answering agents and chooses the most relevant answer(s). We argue that multiagent question answering is optimal in terms of access to business and financial knowledge, flexibility in query phrasing, and efficiency and usability of advice. The knowledge and advice encoded in the system are initially prepared by domain experts. We analyze the commercial application of multiagent question answering and the robustness of the meta-agent. The paper suggests that a multiagent architecture is optimal when a real world question answering domain combines a number of vertical ones to form a horizontal domain.
This is a simple technique that enables one person or a group of people to create a categorisation of objects so that it is understood which objects belong with which other objects. Objects can be anything: menu items, blocks of content, proposed web pages, URLs. This method can be used by practically anybody after a few minutes' practice.
Card sorting is a way to involve users in grouping information for a Web site. Participants in a card sorting session are asked to organize the content from your Web site in a way that makes sense to them. Participants review items from your Web site and then group these items into categories. Participants may even help you label these groups. Card sorting helps you build the structure for your Web site, decide what to put on the home page, and label the home page categories. It also helps to ensure that you organize information on your site in a way that is logical to your users.
Card sorting is a user-testing method for organising data into a structure. There's a lot of information out there on what card sorts are and how to conduct them. The problem is, it's scattered all over the place, and much of it is written by scientists, so it tends to be a little difficult to grasp and bogged down in analysis (which can take over your life if you let it!). I've decided to document my understanding of how to plan, conduct, and analyse a card sort from a practitioner's point of view.
While card sorting is described in a few texts and a number of sites, most descriptions are brief. There is not a definitive article that describes the technique and its variants and explains the issues to watch out for. Given the number of questions posted to discussion groups, and discussions we have had at conferences, we thought it was time to get all of the issues in one place. This article provides a detailed description of the basic technique, with some focus on using the technique for more complex sites.
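The analysis step these articles mention often begins with a card-by-card co-occurrence matrix: for each pair of cards, count how many participants placed them in the same group, then look for clusters of high counts. A minimal sketch in Python follows; the `co_occurrence` function name and the sample sort data are illustrative, not from any of the cited articles.

```python
from itertools import combinations
from collections import Counter

def co_occurrence(sorts):
    """For each pair of cards, count how many participants
    placed both cards in the same group."""
    counts = Counter()
    for groups in sorts:                  # one dict per participant
        for cards in groups.values():     # cards in one of their groups
            # Sort the pair so (A, B) and (B, A) count as one key.
            for pair in combinations(sorted(cards), 2):
                counts[pair] += 1
    return counts

# Two hypothetical participants sorting four cards into labelled groups.
sorts = [
    {"Products": ["Pricing", "Features"], "Help": ["FAQ", "Contact"]},
    {"Buy": ["Pricing", "Features", "FAQ"], "Support": ["Contact"]},
]
pairs = co_occurrence(sorts)
print(pairs[("Features", "Pricing")])  # 2: both participants grouped them
```

Pairs with counts near the number of participants are strong candidates to live together in the site structure; the full matrix can also feed cluster analysis or a dendrogram.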
What is the world (wide web) coming to when even us blue-haired English teachers have something to say about the Net? After all, we're supposed to be consumed with the past--a time long before the binary code when writers still used quills, and men, unfortunately, wore tights. (Sorry for the visual.) Well, in defense of red-pen-wielders everywhere, I have to say that just ain't so. Technology, particularly that which furthers education, is our concern. And the Internet (yes, I just started a sentence with the word 'and') is a source of great conflict. On the one hand, it is a storehouse from which vast amounts of knowledge may be retrieved--it provides information that may otherwise be inaccessible. On the other hand, because of its nature as an abyss, it's an illimitable source for the plagiarist. So, ironically, something that should catalyze learning is actually, in a way, simply making it easier for students not to learn.
Maps take many forms, many themes, and many expressions, ranging from the straightforward city street map to the specialized terrain and statistical displays used by scientists and sports aficionados to highly sophisticated cartograms used by researchers and thesis-writers. All this work stands upon the shoulders of giants. Without hundreds of years of work and refinement, without the maps of Ptolemy and the projections of Mercator, there could never have been the common gas-station street map, and neither could there have been the highly technological GIS systems of today. Designorati:Cartography aims to explore the origins, view the state of the art, and guess at the future of the common (and uncommon) map. Maps abstract the world so we may comprehend it better. Maps also speak their own language, but it's one we can all learn. Maps are sometimes made with an agenda and a message, and sometimes just to inform. And maps, in an indirect way, reflect our own image. It is, in and of itself, an exploration. Designorati:Cartography hopes to become an explorer's guide to that realm.