Information design (often used interchangeably with 'information architecture') is the study of how complex systems are structured and presented. Such systems include websites, user interactions, databases, technical documentation, and human-computer interfaces.
This paper proposes a general synchronization grammar which can describe synchronization rule sets. For example, when handling three input files, we show that changes to elements can be described in terms of just seven possible permutations. Similarly, PCDATA and attribute changes can be described in terms of a fixed set of permutations. Using these permutations a grammar is proposed, allowing precise description of synchronization algorithms and rule sets and providing a testable framework for their implementation. The paper applies the resulting grammar to existing synchronization tools and technologies and shows how the grammar can be applied to provide solutions for specific application areas, including document workflow and translation.
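The paper's claim of "just seven possible permutations" for three input files plausibly corresponds to the non-empty subsets of files in which an element may have changed (2³ − 1 = 7). As a sketch of that counting argument, assuming this interpretation of the permutations:

```python
from itertools import combinations

FILES = ("A", "B", "C")  # three input files being synchronized

def change_permutations(files):
    """Enumerate every non-empty subset of files in which an
    element may have changed. For three files this yields
    2**3 - 1 = 7 possibilities."""
    return [subset
            for r in range(1, len(files) + 1)
            for subset in combinations(files, r)]

perms = change_permutations(FILES)
print(len(perms))  # 7
for p in perms:
    print("changed in:", "+".join(p))
```

A synchronization rule set can then be expressed as a mapping from each of these seven cases to an action (propagate, merge, flag conflict), which is what makes the grammar testable.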
A step-by-step, must-read article on SQL Server 2005 Reporting Services that walks through creating a report and hosting it on an intranet server.
This is the first article in a series concentrating on generating XML Schema dynamically using Visual Basic 2005. The series is mainly targeted at those who are familiar with XML, XML Schema and the .NET framework.
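The series itself uses Visual Basic 2005; as a language-neutral sketch of the same idea, here is how a schema element tree can be assembled dynamically (shown in Python with the standard library; the element and type names are illustrative, not from the article):

```python
import xml.etree.ElementTree as ET

XS = "http://www.w3.org/2001/XMLSchema"
ET.register_namespace("xs", XS)

def build_schema(element_name, fields):
    """Build a minimal XSD for one root element whose complex type
    contains a sequence of child elements, one per (name, xsd_type)
    pair in `fields`."""
    schema = ET.Element(f"{{{XS}}}schema")
    root = ET.SubElement(schema, f"{{{XS}}}element", name=element_name)
    ctype = ET.SubElement(root, f"{{{XS}}}complexType")
    seq = ET.SubElement(ctype, f"{{{XS}}}sequence")
    for name, xsd_type in fields:
        ET.SubElement(seq, f"{{{XS}}}element",
                      name=name, type=f"xs:{xsd_type}")
    return ET.tostring(schema, encoding="unicode")

xsd = build_schema("book", [("title", "string"), ("pages", "int")])
print(xsd)
```

The point of generating the schema programmatically rather than writing it by hand is that the field list can come from a database table or object model at run time.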
Explains GIS (geographic information systems, which capture and display geographically referenced information) and suggests ways that technical communicators can become more involved with this technology.
The next century will be an XML century, make no mistake about it. All our documents, even checks, credit card slips, personal letters, recipes, technical documents, everything, will benefit from XML technologies. Students are already learning XML in schools, and big businesses are using it to publish their databases on the web. The appearance of the electronic spreadsheet ten years ago changed the way we do business. XML will change the way we write documents.
Given the pressures on firms to provide increased value at lower costs, it’s imperative that they find ways to reduce the costs of creating and managing documents and increase their value to clients and personnel. Microsoft SharePoint provides a range of features to make your firm’s documents “smarter,” from capturing rich metadata to automating workflows to intelligent search. As applied, these features can transform passive documents into active, reusable resources. In this article I’ll describe some of the ways that SharePoint can reduce the effort to create, manage and retrieve documents and increase their value, as smart documents, to both your firm and its clients.
XPath 2.0 is the foundation of two essential recommendations currently in the final stages of development at W3C: XSLT 2.0 and XQuery. It is a major rewrite designed to significantly increase the power and efficiency of the language. In this article, Benoît Marchal shows how the new data model enables you to easily write more sophisticated requests.
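To give a flavour of what the sequence-based data model buys you, here are a few expressions that are legal in XPath 2.0 but not in 1.0 (the element names are illustrative, not taken from the article):

```
(: XPath 2.0 adds sequences, iteration, and conditionals :)

for $b in //book[@year > 2000]
return $b/title

if (exists(//chapter)) then count(//chapter) else 0

(1 to 5)[. mod 2 = 1]    (: the sequence 1, 3, 5 :)
```

The `for ... return` and `if/then/else` forms, together with typed atomic values, are what let XSLT 2.0 and XQuery share a single expression language.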
The XML format was developed in the 1990s in the hope of creating a universal format for documents, replacing proprietary binary formats that could not integrate with one another. We are now beginning to see the results. In this talk, Dr. Geoffrey Sauer will present an introduction to XML, with an overview that explains to newcomers why this is a good thing, and how we can begin to use XML formats to our advantage as technical communicators.
“Content types” are among the least understood, and yet most potent, aspects of user experience and web design. Most people encounter them for the first time when implementing a grand-scale content management system (CMS) because you have to define content types before building templates for each kind of content you’re going to publish. Because they associate content types so closely with CMS, some make the mistake of equating content strategy with content management. They’re not the same thing, though they are certainly related. Your content strategy specifies the content types that will then be modeled for your CMS.
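The "define content types before building templates" step can be made concrete with a small model. The content types and field names below are hypothetical examples, not taken from any particular CMS:

```python
from dataclasses import dataclass, field

# Two hypothetical content types: each bundles the fields its
# kind of content needs, and each maps to its own template.

@dataclass
class Article:
    title: str
    body: str
    author: str
    tags: list = field(default_factory=list)

@dataclass
class Recipe:
    title: str
    ingredients: list
    steps: list
    prep_minutes: int

# One template per content type, as the CMS workflow describes:
TEMPLATES = {Article: "article.html", Recipe: "recipe.html"}

post = Article(title="Why content types matter",
               body="...", author="J. Writer", tags=["strategy"])
print(TEMPLATES[type(post)])  # article.html
```

This separation is the article's point: the strategy decides which types exist and which fields they carry; the CMS merely models and renders them.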
The UK Local e-Government Standards Body was established late in 2003, and tasked with compiling an XML based data standards catalogue for use by UK Local Authorities. This is to be achieved by mapping existing standards, identifying gaps to be filled, advising and supporting local Councils, their partners and suppliers on the interpretation and adoption of standards, and establishing processes for developing new standards as required. However, UK Local Authorities have been developing e-services for several years already, so this new effort has to take place in a context where many projects are already under way, using a variety of business models, and with diverse approaches to XML interoperability design. An additional factor is the traditional tension between central and local government, which has led to patchy and inconsistent adoption of the national UK e-Government Interoperability framework. This paper is an account of the methodology developed by CSW Group Ltd and the LeGSB to tackle this situation.
DITA is not another tool like FrameMaker or MS Word. It is a standard and a specification, supported more or less effectively by open-source and commercial tools. As a standard, DITA is a way of working, a way of thinking about the structure of information. Its greatest benefits come from understanding the architecture and deciding whether you're ready to make the leap into a new authoring and publishing environment. It's definitely worth the effort, because the benefits to productivity and quality are huge.
Since 2005 the German National Library of Science and Technology (TIB) has been established as a DOI registration agency for scientific content. Data providers transmit XML files containing DC-based metadata descriptions of the scientific data to a web-service infrastructure at the TIB, which was created by the Research Center L3S during a project funded by the German research association (DFG). This web-service infrastructure is based on the web application framework COCOON, which we have extended with full web-service functionality. Using XSLT, the web service is furthermore able to transform XML metadata files into well-formed PICA files, so that the metadata can be inserted into the TIB's library catalogue.
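The abstract's pipeline step of pulling DC-based fields out of incoming XML metadata can be sketched as follows. The real service performs this mapping with XSLT inside Cocoon; the sample record and DOI below are invented for illustration:

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"

# A hypothetical incoming metadata file from a data provider:
SAMPLE = f"""
<metadata xmlns:dc="{DC}">
  <dc:title>Sea-surface temperature dataset</dc:title>
  <dc:creator>Example Institute</dc:creator>
  <dc:identifier>10.1234/example-doi</dc:identifier>
</metadata>
"""

def dc_to_record(xml_text):
    """Flatten the Dublin Core elements into a dict keyed by
    prefixed element name; the real service does the equivalent
    mapping into PICA fields via an XSLT stylesheet."""
    root = ET.fromstring(xml_text)
    record = {}
    for child in root:
        tag = child.tag.replace(f"{{{DC}}}", "dc:")
        record[tag] = child.text
    return record

rec = dc_to_record(SAMPLE)
print(rec["dc:identifier"])  # 10.1234/example-doi
```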
Think of the library system as something akin to the open-source movement before software existed. Subsidized institutions buy books, subscribe to journals and proprietary databases, and pay people to help you find "stuff", all essentially at no cost to you.
In a world awash with information, finding what you really want can be difficult. Any database or web index can deliver a set of results. But it's particularly difficult to highlight the most relevant 'stuff.' Web search engines such as Google and Yahoo try their best to recommend some items over others, and now libraries are trying to do this for their holdings.
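The "recommend some items over others" idea can be illustrated with a toy term-weighting ranker. Real search engines and library discovery systems combine far more signals; this is only a sketch of relevance scoring, with invented document text:

```python
import math
from collections import Counter

# Three tiny "documents" standing in for a catalogue:
DOCS = {
    "d1": "xml schema tutorial for technical writers",
    "d2": "library catalogue search tips",
    "d3": "xml publishing in library systems",
}

def rank(query, docs):
    """Score each document by term frequency weighted by how rare
    each query term is across the collection (a TF-IDF-style toy),
    then return document ids best-first."""
    tokenized = {d: text.split() for d, text in docs.items()}
    n = len(docs)
    scores = {}
    for d, words in tokenized.items():
        tf = Counter(words)
        s = 0.0
        for term in query.split():
            df = sum(1 for w in tokenized.values() if term in w)
            if df:
                s += tf[term] * math.log(1 + n / df)
        scores[d] = s
    return sorted(scores, key=scores.get, reverse=True)

print(rank("xml library", DOCS))
```

The document matching both query terms outranks those matching only one, which is the whole "highlight the most relevant stuff" problem in miniature.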
Most people who create site structures are not trained practitioners of information architecture. Our natural human capacity to organize and relate things gives many people who aren't IA professionals basic but useful site-structuring skills. But don't be fooled. Creating and maintaining a site structure can be highly complex. Since complexity tends to increase in an active domain, know when to call in an expert such as an information architect.
97% of sites included global links to the site's top-level categories. While global links to top-level categories help reinforce the breadth of a site's offering, they also consume significant screen real estate for links that arguably are less relevant to users than page-specific content. This is particularly true at the lower levels of the hierarchy, where there is greater semantic distance between the global links and the page content.
It’s hard to go to a content management or publishing technology conference these days without there being a presentation on DITA, the Darwin Information Typing Architecture. For the uninitiated, DITA is an XML architecture for authoring and publishing topic-based content, typically technical documentation. The brainchild of IBM, where it is used internally for many documentation projects, DITA is now an open standard under the aegis of OASIS.
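For readers who have never seen topic-based content on the wire, a minimal DITA topic looks like this (DOCTYPE declaration and specialization omitted; the topic content is invented):

```python
import xml.etree.ElementTree as ET

# A minimal DITA topic: an id, a title, and a body of paragraphs.
TOPIC = """
<topic id="install-widget">
  <title>Installing the widget</title>
  <body>
    <p>Download the package and run the installer.</p>
  </body>
</topic>
"""

root = ET.fromstring(TOPIC)
print(root.get("id"), "-", root.findtext("title"))
```

Each topic is a self-contained unit; DITA maps then assemble topics into deliverables, which is what makes the architecture suited to reuse.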
When you’re knee-deep in wireframes or CSS it’s all too easy to end up in a bubble of IxD books and blogs. One option is to take inspiration from vintage art and nature, but what about what other smart people are doing in their respective disciplines? In other words, why not steal from them? Here are my picks of a few other fields with ideas worth appropriating, or at least glancing at.
The Google Sandbox is a filter that appears to have been put in place around March 2004. New websites with new domain names can take 6 to 12 months to earn decent rankings on Google, and some webmasters report stays of up to 18 months. The Sandbox seems to affect nearly all new websites, placing them on probation; websites that have undergone comprehensive redesigns have also been caught up in it. Does the Sandbox really exist, or is it just part of the Google algorithm? This has been a matter of considerable controversy, though most now believe it is part of the algorithm. In either case, the Sandbox keeps new sites from shooting to the top of Google within a few weeks and overtaking quality sites that have been around for many years. It appears to function as an initiation period for new websites.
Google's increasing use of anti-spam features means that optimising websites for Google has become much harder. It is no longer just a case of opening your website's source files in Notepad, adding some keywords to your various HTML tags, uploading the files, and waiting for results. In my opinion, and I'm sure others will agree, this type of optimisation, commonly referred to as on-page optimisation, will only ever be 20% effective at achieving rankings for keywords that are even mildly competitive. Those of us who aced maths at school will know this leaves 80% unaccounted for.