The Extensible Markup Language (XML) is an open, general-purpose specification for creating markup languages. Its primary purpose is to help information systems share structured data, particularly via the Internet, and it is used both to encode documents and to serialize data. It is used in a wide variety of technical communication document formats, including Microsoft Word, OpenOffice, XHTML, DITA, DocBook, and RSS, among others.
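As a concrete illustration of "encoding documents and serializing data" (the snippet and element names below are invented, not from the article), here is a tiny XML document read back with Python's standard library:

```python
import xml.etree.ElementTree as ET

# A minimal invented document: the same structured data that XML-based
# formats such as RSS, DITA, and XHTML all build on.
doc = """<?xml version="1.0" encoding="UTF-8"?>
<note lang="en">
  <to>Reader</to>
  <body>XML encodes documents and serializes data.</body>
</note>"""

root = ET.fromstring(doc)
print(root.tag, root.get("lang"), root.findtext("to"))
# → note en Reader
```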
XPS (XML Paper Specification) is a fixed-layout electronic file format, comparable to competitor Adobe's PDF, that preserves document formatting and enables file sharing without loss of information. The XPS format ensures that when a file is viewed online or printed, it retains the intended layout.
The ebXML specifications have matured rapidly over the past year. New components and capabilities have extended the architecture for service oriented architectures (SOA). Learn about this new comprehensive release of ebXML that is available from OASIS.
The emerging XML-based web increasingly relies on presenting content in a just-in-time manner. Presentation technologies such as SVG and XHTML can do so, yet the power to properly harness them will likely lie in emergent binding languages such as XBL, sXBL, and XTF. This presentation explores bindings and binding languages, illustrating how environments such as the Mozilla Firefox 1.5 browser use XBL to bind components into XHTML, SVG, and XForms interfaces; it also looks at sXBL and the W3C's XBL directions, and details why such binding languages likely represent the future of XML presentation and interaction.
Some browsers have difficulty upon encountering the XML prolog. In some cases, the browser will render all the markup as text. In other cases, when a browser has some XML support, it might attempt to render the document as an XML tree. To avoid these problems, many practicing web professionals prefer to leave the prolog off. This table will help you make that decision by showing you which browsers have known problems with the XML prolog.
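For reference, the prolog in question is the XML declaration that may open a document before the root element, for example at the top of an XHTML page:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<html xmlns="http://www.w3.org/1999/xhtml">
  ...
</html>
```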
RSS (Rich Site Summary, RDF Site Summary, or Really Simple Syndication) has been around since the mid-1990s. Over the years, several variants of the RSS format have appeared, and several claims have been made about its ownership. Despite these differences, RSS has never ceased to be useful for distributing Web content from one Web site to many others. The popularity of RSS gave rise to a new class of Web software called the feed reader, also known as the feed aggregator. Although several feed aggregators are commercially available, it's easy to develop your own, which you can integrate with your Web applications. You'll appreciate this article's fully functional PHP code snippets, which demonstrate the use of PHP-based server-side functions to develop a customizable RSS feed aggregator. In addition, you'll reap instant benefits from the fully functional RSS feed aggregator code, which you can download from this article.
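The core of the aggregator idea can be sketched in a few lines; here in Python rather than the article's PHP, with an invented sample feed, assuming RSS 2.0 structure:

```python
import xml.etree.ElementTree as ET

# An invented RSS 2.0 feed for illustration.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Feed</title>
    <item><title>First post</title><link>http://example.com/1</link></item>
    <item><title>Second post</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

def read_items(rss_text):
    """Return (title, link) pairs for every <item> in an RSS 2.0 document."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

print(read_items(SAMPLE_FEED))
```

A real aggregator would fetch each feed over HTTP and merge the resulting item lists, but the parsing step is no more involved than this.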
The concepts behind Web Services are remarkably simple, and in this article we'll be taking a deeper look at what's involved. Then, with a little help from our good friend PHP, we'll set up our first Web Service.
It is entirely possible to deliver custom, on-demand documentation that is precisely suited to a user's needs. It can be done today, using web-interface strategies and the right document format. This post shows how such a system could be implemented with the DITA format, and shows why it would be an ideal document-delivery system for programmers.
Some folks here are taking a very strong look at DITA. I'm certainly one of them. But we also have a huge legacy of documents in Solbook format (Sun's subset of DocBook). There are tools for editing such documents and tools for processing them, and there are many people who are comfortable with those tools. So DITA isn't going to replace the world just yet. But DITA makes extensive reuse possible. It's a format with a serious future, because "reuse" is a very big deal. It lets you single-source your information content, so you have one place to make an edit. That sort of thing becomes important when you have multiple revisions of a product, and/or multiple variations. It becomes important when different tools and different products use the same information in different ways. It can drastically improve quality and ensure uniformity of presentation. Finally, structured formats like DITA and DocBook create the kind of consistently tagged information that allows for useful automation.
OK. So you have your documents in XML. How do you deliver them to readers? You've heard great things about separation of form and content, and would like different kinds of readers to see the documents styled in different ways. And in order to make the collection of documents more useful, you would like to have full-text search. The quality assurance people would like some help with tools for checking documents and finding errors and inconsistencies in existing ones. Oh, and by the way, we just took a budget cut, so can you do it without breaking the bank?
As XML-formatted content and data become pervasive on intranets and the Internet, the need to minimize the processing time for each document grows. XPath has evolved into a rich expression language for querying and extracting data in a precise way. While it was designed to be used by a host language such as XSLT or XQuery, an XPath processor can be quite useful standalone or as part of an application framework.
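Standalone use of XPath-style selection, outside any XSLT or XQuery host, can be sketched with Python's standard library, which supports a limited XPath subset; the catalog data below is invented:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<catalog>
  <book id="b1"><title>XML Basics</title><price>29.95</price></book>
  <book id="b2"><title>XPath in Depth</title><price>39.95</price></book>
</catalog>""")

# Predicate on an attribute: the title of the book whose id is "b2".
title = doc.find(".//book[@id='b2']/title").text

# Extract every price and aggregate outside the query.
prices = [float(p.text) for p in doc.findall(".//book/price")]
print(title, sum(prices))
```

A full XPath 1.0 processor would also handle axes, functions, and numeric predicates, but even this subset covers much of the precise extraction the abstract describes.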
The Mozilla platform offers rich support for XML techniques, from low-level ones (XPath, RDF, DOM, E4X) to rendering dialects such as XHTML, SVG, XUL and XForms, making it a natural choice for the XML-inclined. It is becoming a platform of choice for developing rich connected applications. When building dynamic applications, the developer often faces a common set of programming patterns: gathering data from various remote and local sources, storing data with an optional transformation phase, and updating parts of the GUI to reflect modifications in the data store. With today's ubiquitous use of XML as a data-exchange syntax, a major part of these tasks can be achieved with XML-based solutions. In this article we present an XML-centric solution that aims to minimize the impedance mismatch between different data models that plagues classical architectures involving, for instance, XML/object/relational translation. It combines some of Mozilla's existing capabilities with REX (Remote Events for XML) and a native XML database with XQuery support. REX provides the means to update the XUL-based GUI and the database, while the XML database serves as a versatile storage engine.
This chapter will take you a few steps further by showing you how to add text and markup to your result tree with XSLT templates. First, you'll add literal text to your output. Then you'll work with literal result elements, that is, elements that are represented literally in templates. You'll also learn how to add content with the text, element, attribute, attribute-set, comment, and processing-instruction elements. In addition, you'll get your first encounter with attribute value templates, which provide a way to define templates inside attribute values.
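A short illustrative stylesheet (not taken from the chapter) showing several of the instructions listed: literal text via xsl:text, xsl:element, xsl:attribute, xsl:comment, a literal result element, and an attribute value template in `href="{link}"`:

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="item">
    <xsl:comment>generated entry</xsl:comment>
    <xsl:element name="p">
      <xsl:attribute name="class">entry</xsl:attribute>
      <!-- literal result element, with an attribute value template -->
      <a href="{link}"><xsl:value-of select="title"/></a>
      <xsl:text> (literal text added to the output)</xsl:text>
    </xsl:element>
  </xsl:template>
</xsl:stylesheet>
```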
We shall present an interactive demonstration of asynchronous XML pipelines. To begin, we shall show linear XQuery pipelines developed with a recursive pull pattern. We shall then demonstrate that this pattern can be improved by developing pipelines with a declarative scheduling language (DPML). We shall demonstrate in-pipe exception handling, as well as pipeline breakpoints and pipeline debugging. In addition, we shall show modular pipeline decomposition and layered pipelines written in both declarative and procedural languages.
In the information age it is widely understood that there is now too much information. Some of this newly created information will most certainly be valuable, but despite marked improvements in search tools, finding it is still like slowly panning for gold. Perhaps in light of this situation, the W3C, under the direction of Berners-Lee, has begun to build the foundation for the next phase of the web. This phase, called the Semantic Web, will make information stored with its technologies much more processable by machines.
As companies and consumers rely more on Web services, it becomes increasingly important for Web services developers to know how to properly design, develop, deploy, and ultimately manage a Web services system. However, because of the inherent complexities that can arise with a Web service implementation, it can be difficult to grasp practical fundamentals and devise a step-by-step plan for Web services development.
DITA is a difficult thing to explain to the uninitiated. It is difficult because we expect it to be a product or a technology, when it is actually a standard and a methodology. DITA provides an approach to technical writing that embraces best practice ideals such as modularity, single-sourcing, and content re-use. The reasons for moving to DITA are business-focused.
Documents are made of characters, and XML documents are made of Unicode characters. Compared with SGML, we now have potentially a million characters where SGML provided only a hundred, but on the other hand we have lost the option of defining our own SDATA entities. This presents us with two challenges. The first is: how can we validate that a document, an element, or an attribute contains only those characters that we know how to process, render, sort, search, hyphenate, capitalise, pronounce, and so on? How can we tell a typesetter which character set to find a font for? XML Schema provides a simple way of restricting the set of valid characters in an attribute or a simple element to a regular expression, which can use some of the Unicode character properties, such as the block a character is defined in (like Basic Latin or Latin Extended-B) or its General Category (like Uppercase Letter or Math Symbol), but you cannot use that in mixed content, as is typical in text markup.
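The first challenge, checking that text contains only characters we know how to handle, can be sketched in Python using Unicode General Categories; the set of allowed category prefixes here is an arbitrary choice for illustration (letters, numbers, punctuation, separators):

```python
import unicodedata

# Allowed Unicode General Category prefixes: Letter, Number,
# Punctuation, Separator. Chosen for illustration only.
ALLOWED_PREFIXES = ("L", "N", "P", "Z")

def only_known_chars(text):
    """True if every character's General Category starts with an allowed prefix."""
    return all(unicodedata.category(ch).startswith(ALLOWED_PREFIXES)
               for ch in text)

print(only_known_chars("Basic Latin, plus café"))  # → True
print(only_known_chars("math symbol: \u2211"))     # → False (Sm, Math Symbol)
```

An XSD pattern facet with `\p{IsBasicLatin}` or `\p{Lu}` expresses the same kind of restriction declaratively, but, as noted above, only on attributes and simple content.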
Sooner or later, most web designers will be called upon to create an internal site, and will quickly learn that their own company can be tougher to deal with than any client. Dave Linabury offers tips on surviving the process (and building something good in spite of it).
Having spent some time working with Cladonia's Exchanger XML Editor, I can attest to the claim that this is a good, solid, well-featured and extensible XML editor. However, the software is not suitable for authoring documents. It is designed for working with XML data in many forms, but it is not designed for textual content. Let me explain.
Wide deployment of XML technology in enterprise applications demands a high-performance XML processing framework. This has prompted extensive investigation into building an XML processing infrastructure on a compact, pre-parsed XML format, which can reduce memory and CPU consumption as well as network bandwidth. In this paper, we will discuss a project building a compact, schema-aware binary XML processing framework and compare it with existing binary XML technologies. The discussion will cover the design of the compact binary XML format, the implementation of the processors that encode and decode XML documents, and how compact binary XML support is integrated with the existing XML processing stack. Finally, we will provide results from testing applications that leverage the compact binary XML processing framework.
The Semantic Web is really an attempt to reconceptualize and reengineer AI for the Web. This piece discusses the path forward for successfully selling and developing Semantic Web technology in industry.
Here is my informal list of firms that are using DITA XML in some form in their documentation efforts. The table is dynamic, so you can search for a particular company by name or industry segment, look up the descriptive Wikipedia link for that organization if it exists, or track down the locations of DITA-using firms close to where you live.
If you can put together an authoring-publishing workflow that is form-fit to DITA, then using DITA can be a good choice. For example, if you’re using Oxygen to publish to Oxygen’s webhelp output, or using easyDITA to push to MindTouch, or pushing content into Antidot’s Fluid Topics, or Mekon’s DITAweb, or Componize’s Alfresco integration, or some other defined DITA publishing solution, then I think DITA can be a good approach.