A directory of resources in the field of technical communication.

Information Design

476-500 of 1,535 found. Page 20 of 62.


« PREVIOUS PAGE 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25  NEXT PAGE »

Information design (also known as 'information architecture') is the study of how complex information systems are structured and presented. Among these systems are websites, user interactions, databases, technical documentation, and human-computer interfaces.

 

476.
#33897

A Generalized Grammar for Three-way XML Synchronization

This paper proposes a general synchronization grammar which can describe synchronization rule sets. For example, when handling three input files, we show that changes to elements can be described in terms of just seven possible permutations. Similarly, PCDATA and attribute changes can be described in terms of a fixed set of permutations. Using these permutations a grammar is proposed, allowing precise description of synchronization algorithms and rule sets and providing a testable framework for their implementation. The paper applies the resulting grammar to existing synchronization tools and technologies and shows how the grammar can be applied to provide solutions for specific application areas, including document workflow and translation.

La Fontaine, Robin and Nigel Whitaker. IDEAlliance (2005). Articles>Information Design>Programming>XML
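The abstract's idea of classifying three-way changes by a small, fixed set of cases can be sketched in a few lines. This is a toy illustration only, not the paper's grammar: it compares a base value against two edited versions and names the five agreement patterns for values; the paper's seven permutations also cover cases such as insertion and deletion.

```python
def classify(base, a, b):
    """Classify a three-way change by which versions agree.

    Toy illustration: base is the common ancestor, a and b are the
    two edited versions. Returns one of five change categories.
    """
    if base == a == b:
        return "unchanged"
    if base == a:           # only b differs from the ancestor
        return "changed-in-b"
    if base == b:           # only a differs from the ancestor
        return "changed-in-a"
    if a == b:              # both sides made the same change
        return "changed-same"
    return "conflict"       # all three values differ

print(classify("x", "x", "y"))  # -> changed-in-b
```

A synchronization rule set can then be written as a mapping from each category to an action (keep, merge, flag for review), which is the kind of precise, testable description the paper's grammar aims at.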

477.
#22044

Generate a Site Plan

Generating a site plan is an effective first step in building your site.

Tech-Writer (2001). Design>Web Design>Information Design>Project Management

478.
#28522

Generating and Hosting a SQL Server Reporting Services Report Using SQL Server 2005 Business Intelligence Development Studio

A step-by-step, must-read article on SQL Server 2005 Reporting Services that creates a report and hosts it on an intranet server.

Krishnaswamy, Jayaram. ASPAlliance (2006). Articles>Information Design>Databases>SQL

479.
#27948

Generating XML Schema Dynamically Using VB.NET 2005: Essentials

This is the first article in a series concentrating on generating XML Schema dynamically using Visual Basic 2005. The series is mainly targeted at those who are familiar with XML, XML Schema and the .NET framework.

Chaterjee, Jagadish. ASP Free (2006). Articles>Information Design>XML>ASP
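The series above works in VB.NET; the core idea of generating an XML Schema dynamically, building the XSD as an element tree rather than concatenating strings, can be sketched with Python's standard library. The `title` element and its type here are arbitrary placeholders.

```python
import xml.etree.ElementTree as ET

# Build a minimal XSD programmatically (illustrative sketch; the
# article itself uses the .NET framework's XML classes instead).
XS = "http://www.w3.org/2001/XMLSchema"
ET.register_namespace("xs", XS)

schema = ET.Element(f"{{{XS}}}schema")
ET.SubElement(schema, f"{{{XS}}}element",
              {"name": "title", "type": "xs:string"})

xsd = ET.tostring(schema, encoding="unicode")
print(xsd)
```

Because the schema is an ordinary tree in memory, element declarations can be added conditionally at runtime, which is what "dynamic" generation means in practice.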

480.
#20545

Geographic Information Systems   (PDF)

Explains GIS (geographic information systems, which capture and display geographically referenced information) and suggests ways that technical communicators can become more involved with this technology.

Pettit Jones, Colleen. Intercom (2003). Articles>Information Design>Technical Illustration>Geography

481.
#31614

Get on Board the XML Train

The next century will be an XML century; make no mistake about it. All our documents, even checks, credit card slips, personal letters, recipes, and technical documents, will benefit from XML technologies. Students are already learning XML in schools, and big businesses are using it to publish their databases on the web. The appearance of the electronic spreadsheet ten years ago changed the way we do business. XML will change the way we write documents.

DuBay, William H. Impact Information (1999). Articles>Information Design>Standards>XML

482.
#35772

Get Smart With SharePoint Documents

Given the pressures on firms to provide increased value at lower costs, it’s imperative that they find ways to reduce the costs of creating and managing documents and increase their value to clients and personnel. Microsoft SharePoint provides a range of features to make your firm’s documents “smarter,” from capturing rich metadata to automating workflows to intelligent search. As applied, these features can transform passive documents into active, reusable resources. In this article I’ll describe some of the ways that SharePoint can reduce the effort to create, manage and retrieve documents and increase their value, as smart documents, to both your firm and its clients.

Gerow, Mark. End User SharePoint (2009). Articles>Content Management>Information Design>Microsoft SharePoint

483.
#27742

Get Started with XPath 2.0

XPath 2.0 is the foundation of two essential recommendations currently in the final stages of development at W3C: XSLT 2.0 and XQuery. It is a major rewrite designed to significantly increase the power and efficiency of the language. In this article, Benoît Marchal shows how the new data model enables you to easily write more sophisticated requests.

Marchal, Benoit. IBM (2006). Articles>Information Design>XML>XSL

484.
#36469

Get to Know XML

The XML format was developed in the 1990s in the hope of creating a universal format for documents, replacing proprietary binary formats that couldn't integrate with one another. And we're beginning to see the results. In this talk, Dr. Geoffrey Sauer will present an introduction to XML, with an overview that will explain to people who're not familiar with it why this is a good thing, and how we can begin to use XML formats to our advantage as technical communicators.

Sauer, Geoffrey. EServer (2010). Presentations>Lectures>Information Design>XML
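The point about XML replacing proprietary binary formats is easy to demonstrate: an XML document is plain text that any conforming parser can read. A minimal sketch (the `note` document here is invented for illustration):

```python
import xml.etree.ElementTree as ET

# A self-describing XML document: plain text, no proprietary
# binary format, readable by any conforming parser.
doc = "<note><to>Reader</to><body>XML is plain text.</body></note>"
root = ET.fromstring(doc)
print(root.find("to").text)   # -> Reader
```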

485.
#34678

Getting a Handle on Your Content Types

“Content types” are among the least understood, and yet most potent, aspects of user experience and web design. Most people encounter them for the first time when implementing a grand-scale content management system (CMS), because content types must be defined before building templates for each kind of content you’re going to publish. Because they associate content types so closely with CMS, some people make the mistake of equating content strategy with content management. They’re not the same thing, though they are certainly related. Your content strategy specifies the content types that will then be modeled for your CMS.

Content Strategy Noob (2009). Articles>Content Management>Information Design>Content Strategy

486.
#33753

Getting Standards to Emerge, or, How to Build a Recipe Book While Everyone's Busy Cooking

The UK Local e-Government Standards Body was established late in 2003, and tasked with compiling an XML-based data standards catalogue for use by UK Local Authorities. This is to be achieved by mapping existing standards, identifying gaps to be filled, advising and supporting local Councils, their partners and suppliers on the interpretation and adoption of standards, and establishing processes for developing new standards as required. However, UK Local Authorities have been developing e-services for several years already, so this new effort has to take place in a context where many projects are already under way, using a variety of business models, and with diverse approaches to XML interoperability design. An additional factor is the traditional tension between central and local government, which has led to patchy and inconsistent adoption of the national UK e-Government Interoperability Framework. This paper is an account of the methodology developed by CSW Group Ltd and the LeGSB to tackle this situation.

Harvey, Anna and Ann Wrightson. IDEAlliance (2004). Articles>Information Design>XML>Standards

487.
#32091

Getting Started with DITA

A brief overview for a couple of fellow Austin writers who have asked me recently how and where to get started with DITA.

Gentle, Anne. Just Write Click (2007). Articles>Information Design>XML>DITA

488.
#38131

Getting Started with DITA

DITA is not another tool like FrameMaker or MS Word. It is a standard and a specification that is supported more or less effectively by open-source and commercial tools. As a standard, DITA is a way of working, a way of thinking about the structure of information. Its greatest benefits come from understanding the architecture and deciding if you're ready to make the leap into a new authoring and publishing environment. It's definitely worth the effort because the benefits to productivity and quality are huge.

Hackos, JoAnn T. Info Management Center (2011). Articles>Information Design>Standards>DITA

489.
#33830

Getting the Most Out of COCOON: An XML-Based Web Service for a Registration Agency

Since 2005, the German National Library of Science and Technology (TIB) has served as a DOI registration agency for scientific content. Data providers transmit XML files containing the DC-based metadata descriptions of the scientific data to a webservice infrastructure at the TIB, which was created by the Research Center L3S during a project funded by the German Research Association (DFG). This webservice infrastructure is based on the web application framework COCOON, which we have extended with full webservice functionality. Using XSLT, the webservice is furthermore able to transform XML metadata files into well-formed PICA files to insert the metadata information into the library catalogue of the TIB.

Brase, Jan. IDEAlliance (2005). Articles>Web Design>Information Design>XML

490.
#32638

Getting The Most Out Of Your Library

Think of the library system as something akin to the open-source movement before software. Subsidized institutions buy books, subscribe to journals and proprietary databases, and pay people to help you find “stuff”, all essentially at no cost to you.

Hicks, William. Digital Web Magazine (2008). Articles>Academic>Information Design

491.
#26795

Getting to "the Right Stuff"

In a world awash with information, finding what you really want can be difficult. Any database or web index can deliver a set of results. But it's particularly difficult to highlight the most relevant 'stuff.' Web search engines such as Google and Yahoo try their best to recommend some items over others, and now libraries are trying to do this for their holdings.

Tennant, Roy. Library Journal (2005). Articles>Information Design>Search
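The ranking problem described above, recommending some items over others, comes down to scoring each item against the query. A deliberately naive sketch (real engines weight terms, e.g. with TF-IDF, and fold in signals such as links and usage; the documents here are invented):

```python
def score(query_terms, document_words):
    """Toy relevance score: how many times the query terms
    appear in the document. Purely illustrative."""
    words = [w.lower() for w in document_words]
    return sum(words.count(t.lower()) for t in query_terms)

docs = {
    "a": "the cat sat on the mat".split(),
    "b": "cat cat dog".split(),
}
best = max(docs, key=lambda d: score(["cat"], docs[d]))
print(best)  # -> b
```

Even this crude count already produces a ranking; the hard part, as the article notes, is making that ranking match what the user actually wants.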

492.
#36199

Getting To Know XML Tutorials

Unlike some other software technologies such as HTML or even Java, XML is a little fuzzier in terms of how it is applied in different scenarios.

BrainBell (2009). Articles>Information Design>XML

493.
#38787

Getting Your Web Site’s Structure Right

Most people who create site structures are not trained practitioners of information architecture. Our natural human capacity to organize and relate things gives many people who aren’t IA professionals basic, but useful site structuring skills. But don’t be fooled. Creating and maintaining a site structure can be highly complex. Since complexity tends to increase in an active domain, know when to call in an expert such as an information architect.

Davis, Nathaniel. UXmatters (2013). Articles>Web Design>Information Design>Usability

494.
#33201

Global Navigation

97% of sites included global links to the site's top-level categories. While global links to top-level categories help reinforce the breadth of a site's offering, they also consume significant screen real estate for links that arguably are not as relevant to users as page-specific content. This is particularly true at the lower levels of the hierarchy, where there is a larger amount of semantic distance between the global links and the page content.

Adkisson, Heidi P. Web Design Practices (2005). Articles>Web Design>Information Design>E Commerce

495.
#29465

Global XML   (PDF)

XML alone is not enough to effectively manage your organization's global content. Explore global XML and its benefits.

Hurst, Sophie. Intercom (2007). Articles>Information Design>XML>International

496.
#33727

Going DITA

It’s hard to go to a content management or publishing technology conference these days without there being a presentation on DITA — the Darwin Information Typing Architecture. For the uninitiated, DITA is an XML architecture for authoring and publishing topic-based content, typically technical documentation. The brainchild of IBM, where it is used internally for many documentation projects, DITA is now an open standard under the aegis of OASIS.

Hondros, Constantine. O'Reilly and Associates (2005). Articles>Information Design>XML>DITA
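The "topic-based content" the article describes is just small, typed XML files. A minimal sketch of a DITA concept topic, parsed with Python's standard library (illustrative only; real topics also declare a DOCTYPE so they can be validated against the DITA DTDs, and the id and text here are invented):

```python
import xml.etree.ElementTree as ET

# A minimal DITA concept topic: one typed unit of content.
topic = """<concept id="going-dita">
  <title>Going DITA</title>
  <conbody><p>Topic-based content lives in small, typed XML files.</p></conbody>
</concept>"""

root = ET.fromstring(topic)
print(root.find("title").text)   # -> Going DITA
```

Other topic types (task, reference) follow the same pattern with different element vocabularies, which is what "information typing" means in the DITA name.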

497.
#23180

Good Information Architecture Increases Online Sales

This article explains how information architecture can impact the sales process, and how an effective information architecture can help a site flourish.

Walsh, Ivan. SitePoint (2003). Design>Web Design>Information Design>E Commerce

498.
#35228

Good Interaction Designers Borrow, Great Ones Steal...

When you’re knee-deep in wireframes or CSS it’s all too easy to end up in a bubble of IxD books and blogs. One option is to take inspiration from vintage art and nature, but what about what other smart people are doing in their respective disciplines? In other words, why not steal from them? Here are my picks of a few other fields with ideas worth appropriating, or at least glancing at.

Telnaki, Vicky. Johnny Holland (2009). Articles>Information Design>Interaction Design

499.
#29491

The Google Sandbox and How To Get Out

The Google Sandbox is a filter that was put in place in about March of 2004. New websites with new domain names can take 6 to 12 months to get decent rankings on Google; some are reporting stays of up to 18 months. The Sandbox seems to affect nearly all new websites, placing them on probation. Similarly, websites that have made comprehensive redesigns have been caught up in this Sandbox. Does this Sandbox really exist, or is it just part of the Google algorithm? This has been a big controversy with many different opinions; most now believe that it is part of the algorithm. In either case, the Sandbox functions to keep new sites from shooting to the top of Google in just a few weeks and overtaking quality sites that have been around for many years. It appears to be an initiation period for new websites.

Williams, Doug. stevenforsyth.com (2006). Articles>Web Design>Information Design>Search Engine Optimization

500.
#29489

Google Search Engine Optimisation and their 80/20 Rule

Google's increasing use of anti-spam features has meant that optimising websites for Google has become much harder. It's now not just a case of opening your website's source files in Notepad, adding some keywords into your various HTML tags, uploading your files and waiting for the results. In fact, in my opinion (and I'm sure others will agree with me), this type of optimisation, commonly referred to as on-page optimisation, will only ever be 20% effective at achieving rankings for any keywords which are even mildly competitive. Those of us who aced maths in school will know this leaves us with 80% unaccounted for.

Callan, David. stevenforsyth.com (2005). Articles>Web Design>Information Design>Search Engine Optimization

 