The Extensible Markup Language (XML) is an open, general-purpose specification for creating markup languages. Its primary purpose is to help information systems share structured data, particularly via the Internet, and it is used both to encode documents and to serialize data. It is used in a wide variety of technical communication document formats, including Microsoft Word, OpenOffice, XHTML, DITA, DocBook, and RSS, among others.
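As a minimal sketch of XML's dual role of encoding documents and serializing data, the following uses Python's standard library to build and serialize a small document (the element and attribute names are invented for illustration):

```python
import xml.etree.ElementTree as ET

# Build a small document describing a hypothetical product catalog.
catalog = ET.Element("catalog")
item = ET.SubElement(catalog, "item", attrib={"id": "p42"})
ET.SubElement(item, "name").text = "Widget"
ET.SubElement(item, "price").text = "9.99"

# Serialize the in-memory tree to markup that any XML-aware
# system can consume.
xml_text = ET.tostring(catalog, encoding="unicode")
print(xml_text)
# <catalog><item id="p42"><name>Widget</name><price>9.99</price></item></catalog>
```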
XML is still the hot technology for technical writing. New XML tools are being delivered at a fast pace, and new XML initiatives are announced every day. But when you try to learn about this exciting technology, whether you review the many books appearing on shelves or the sites popping up on the Web, you'll find that most of the available information is aimed at developers. This session focuses on the value that XML brings to technical writers and their users, and will include code samples.
What's wrong with the Web, and how can the new XML technology fix it? This XML introduction is geared toward newcomers who have heard the buzz but don't know what all the fuss is about. The article briefly surveys a number of new Web technologies, such as XLL, XSL, RDF, DOM, MathML, SMIL, and PGML, and explains how they relate to XML. Benefits of XML are stressed, as are potential applications in diverse fields. A reference section provides links to key XML resources, as well as to collections of other introductory articles.
With the move to XML, DITA, and other new standards, the entry cost for new tools is lower relative to established tools like Word and FrameMaker, since all tools need to invest to implement these new standards. New workflows are emerging in some cases, such as topic-based authoring and shared content, which give new tools a distinct advantage. The new tools can start with the new paradigm, rather than trying to migrate existing content and provide “backward” compatibility.
One of the exciting aspects of XML is the number of ways it is being used to provide new means of communicating and gathering new information. One such use is VoiceXML, an emerging W3C standard that brings voice to the Web or the Web to the phone.
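To make the idea concrete, here is a minimal VoiceXML 2.0 document that speaks a single greeting to a caller; this is a bare sketch, not a complete application:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form>
    <block>
      <prompt>Hello, and welcome to the voice Web.</prompt>
    </block>
  </form>
</vxml>
```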
A website that supports syndication publishes something called a “feed”; that feed can either be collected by a program called a feedreader or news aggregator, or it can be combined (“mashed up”) with another feed. In what follows, I’ll introduce you to some resources to help you get started, and discuss some best practices for managing your feeds.
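At its core, a feed reader simply fetches a feed and pulls out the items. The sketch below parses a tiny, hypothetical RSS 2.0 feed embedded as a string (a real reader would fetch it over HTTP):

```python
import xml.etree.ElementTree as ET

# A tiny, hypothetical RSS 2.0 feed; the channel and item contents
# are invented for this example.
feed = """<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>First post</title><link>http://example.com/1</link></item>
    <item><title>Second post</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

root = ET.fromstring(feed)
# Collect the title of every <item> in the channel.
titles = [item.findtext("title") for item in root.iter("item")]
print(titles)  # ['First post', 'Second post']
```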
Since February 1, 2005, millions of invoices have been exchanged between the private and public sectors in Denmark. This paper focuses on real-life problems, experiences, and solutions with the syntactic and semantic validation of millions of electronic invoices. Localization and documentation for regional and national use is a massive and important assignment: for example, decisions on the use of identifiers have to be specified, and local payment methods must be mapped to the international standard. The result is a message with many internal integrity constraints that cannot be validated with the UBL schemas alone. In order to provide even stronger validation, non-normative supplementary schemas have been developed. These schemas perform stronger validation based on decisions about the use of national identifiers for companies and persons. In addition to XML Schema, Schematron is used for the validation of internal referential integrity constraints. Experiences and theoretical considerations on the localization of international vocabularies are discussed.
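The kind of co-occurrence constraint that a grammar-based schema cannot express is exactly what Schematron rules handle. A minimal sketch of such a rule follows; the element names are illustrative, not the actual UBL invoice vocabulary:

```xml
<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <pattern>
    <rule context="Invoice">
      <!-- An integrity constraint across elements: the stated total
           must equal the sum of the line amounts. -->
      <assert test="Total/Amount = sum(InvoiceLine/LineAmount)">
        The invoice total must equal the sum of the line amounts.
      </assert>
    </rule>
  </pattern>
</schema>
```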
The ability to interoperate across disparate vendors, platforms, and infrastructure stacks is critical to the adoption of Web Services technology. For most organizations, cross-platform interoperability and the move to a loosely coupled, Service-Oriented Architecture (SOA) are the main rationale for adopting the underlying Web Services technologies. In this paper we discuss some of the issues and stumbling blocks on the road to interoperability. We also demonstrate, with an example, how an application developed in Java and deployed in a J2EE 1.4-compatible container can interoperate with and be consumed by a client developed in C# on the .NET platform.
From initial data modeling through XML Schema design to programmatic realization, we offer an actionable, comprehensive set of real-world recommendations that can help you formulate a successful XML implementation.
Inline links and citations can be disruptive to the flow of information. Avoid them where possible: a topic is a discrete unit of information that should be meaningful when displayed on its own.
DITA provides a powerful means of linking through relationship tables. The benefit of using a relationship table is that links are created and maintained in one place, the map, rather than in the individual topics. Links can be created both between topics of the same information type and between topics of different information types that are not directly related through parent/child relationships. The best practice for linking in DITA, therefore, is to use a relationship table within a map.
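As a small sketch (the topic file names are invented), a relationship table in a DITA map that links a task topic to a related reference topic looks like this; at output time, processors typically generate reciprocal "Related information" links from each row:

```xml
<map>
  <topicref href="configuring.dita" type="task"/>
  <topicref href="settings.dita" type="reference"/>
  <reltable>
    <relheader>
      <relcolspec type="task"/>
      <relcolspec type="reference"/>
    </relheader>
    <relrow>
      <!-- Topics in the same row are linked to each other. -->
      <relcell><topicref href="configuring.dita"/></relcell>
      <relcell><topicref href="settings.dita"/></relcell>
    </relrow>
  </reltable>
</map>
```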
DITA is XML, so any XML editor can be used to write DITA code. But that doesn’t mean every XML editor is best suited for the job of producing DITA. As DITA has grown in popularity in the technical writing field, the market has matured and a number of XML editors have appeared that are in some way optimized for creating documentation based in DITA.
This paper is about the relationship between the Topic Maps and RDF standards families. It compares the two technologies and looks at ways to make it easier for users to live in a world where both are used: converting information back and forth between the two technologies, converting schema information, and running queries across both information representations. Ways to achieve all of these goals are presented.
XML is a data format concerned primarily with compatibility and flexibility. But as useful as XML is, it's limited without the abilities to find specific portions of a document quickly and to filter and selectively locate data within a document. XPath provides the ability to easily reference specific text, elements, and attributes within a document—and with a fairly low learning curve. Additionally, XPath is key to many other XML vocabularies and technologies, such as XSL and XQuery. This tutorial will teach you the fundamentals of XPath, including all of its various selectors and semantics, in an example-driven and hands-on manner.
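As a taste of those fundamentals, the sketch below evaluates two simple XPath expressions against a small document using Python's standard library, which supports a limited subset of XPath (the document content is invented for illustration):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<library>
  <book genre="fiction"><title>Dune</title></book>
  <book genre="reference"><title>XML in a Nutshell</title></book>
</library>""")

# ".//title" selects every <title> element anywhere in the tree.
all_titles = [t.text for t in doc.findall(".//title")]

# A predicate on an attribute narrows the selection to matching books.
ref_titles = [b.findtext("title")
              for b in doc.findall(".//book[@genre='reference']")]

print(all_titles)  # ['Dune', 'XML in a Nutshell']
print(ref_titles)  # ['XML in a Nutshell']
```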
Since adaptation and reuse are core ideas of DITA (Darwin Information Typing Architecture), perhaps we'll be forgiven if we adapt and reuse old Beatles standards to explain the newest XML standards (hey, maybe it's the only way to make XML sound catchy). DITA is an IBM gift to the technical documentation community that was approved as a standard this spring by OASIS (Organization for the Advancement of Structured Information Standards), the host of many XML interchange standards such as ebXML. Ever since, tech writers have been buzzing about an easier way to get into structured, topic-based writing with DITA XML and asking XML editor vendors to add support for DITA.
When you hear about XML publishing, you mostly hear about databases, workflow tools, and content management systems. These are typically costly systems aimed at the information management needs of larger enterprises, where the sheer volume of information pumped through these systems provides a fairly rapid return on investment. This fosters the perception that you need one of these complex, expensive, enterprise solutions to take advantage of the modularity and flexibility of authoring in XML. That is simply not true. You can realize the benefits of publishing from modularized XML, without the expense of an enterprise publishing system, by implementing the authoring environment on top of nothing more than your operating system's file system. Although this environment is not adequate for enterprise publishing needs, it is more than adequate for the needs of small writing teams, businesses with a limited number of related products, proof-of-concept demonstrations, and even home users. The AIC documentation group at Cisco Systems has implemented such an authoring environment. We have been able to reuse and repurpose modular, XML-based information without implementing a database back end. By examining how the AIC team implemented XML in a flat-file environment, you will see:

* the decisions you need to make before implementing a flat-file XML system
* the trade-offs, drawbacks, and pitfalls of a flat-file environment (as compared to a database publishing environment)
* the benefits of XML that are still available, even without the database
* a migration path to a more traditional publishing environment
Since the first publication of the XML 1.0 Recommendation in 1998, hundreds of applications, data models, document formats, tools, specifications, libraries, references, tutorials, books, and papers, along with tremendous excitement, enthusiasm, and energy, have exploded onto the scene, making this relatively simple idea one of the most important developments in the computer industry since the microprocessor. But it hasn't always been a smooth ride. It's just as easy to misuse and abuse XML as it is to get it right. In this article, I discuss what I feel are the top ten mistakes you can make with XML.
As traditional magazine publishers continue to build out their e-media products, many are looking to new, more efficient ways to manage their content and bridge the gap between separate production systems. One solution is XML content repositories, which convert a magazine’s content to a format that’s easily reproduced both digitally and in print.
Your publishing workflow has been the same for years, but new technology, different customer requirements, and company growth are making you realize you might need a change. Your print-based processes won’t always be sustainable, and XML is looking like a possibility for the future. There’s just one problem: you have thousands of pages of legacy content that you’ll need to convert, and it’s not exactly XML-friendly.
You’ve made the transition to an XML workflow for publishing your technical content, converted all of your legacy content, and started authoring in the new system, as discussed in part 1 of this post. Although you now have a much better outlook on sustainability, you’re still facing a problem: your content creators are having trouble with the idea of separating content from formatting.
In the same way that the system needs to integrate the various components from different suppliers, creating the user documentation poses a challenge: how do you take existing content from partners and incorporate it into your documentation set?