Originally published: Angeles, M. (April 15, 2002). K-Logging: Supporting KM with Web Logs. Library Journal.
Web-logging software has received plenty of attention as a quick and easy way to post content to a web site. Web logs (blogs) tend to fall into two categories: personal web logs that function sort of like diaries, and informational blogs that target a readership with a shared interest. But web logging can also be used to support knowledge management (KM)—the effort within an organization to share knowledge and help the organization achieve its mission. This form of web logging, called knowledge logging, or k-logging, is emerging as an inexpensive alternative to large-scale KM solutions.
Within your organization—whether a corporation, school, or museum—there are individuals who either already maintain blogs or could be encouraged to do so. They may be researchers, faculty, curators, or students. The blogs may support research development, share industry information, capture and disperse project information among a team, or just annotate relevant literature for colleagues. Clearly, these k-logs are valuable information capital within your organization. How can librarians support k-loggers? What systems can the library create to make these knowledge assets—the content of the web logs—findable and accessible?
Diversity: a good thing
Bonnie Nardi and Vicki O'Day (Information Ecologies: Using Technology with Heart, MIT, 2000) define an information ecology as a system of people, practices, values, and technologies at work in a local environment. A healthy ecology is one that is dynamic and diverse—composed of different types of people and technologies. Contrast this with ecologies where homogeneity is enforced through only one technology—where, for example, Lotus Notes is the only means available for knowledge creation.
The idea that diversity in information systems is good might seem contrary to what many IT pundits, consultants, and systems vendors proffer. But for many organizations, a single-solution KM package isn't practical. A recent market brief by Forrester Research suggests that organizations have begun to move away from single-solution KM packages, and a report by Deloitte Research supports this view. The Deloitte report suggests that bridging the gap between people and systems depends on first creating conditions that allow people to participate in KM locally, rather than implementing an über-KM solution. These local activities are then connected through networked information appliances, via standard XML formats.
Why this shift? A single-solution model requires people to push content into a centralized system in a specific format. The burden of contribution lies with the researcher, scientist, or faculty member. With a decentralized model, content—whether an e-mail, a Word document, or a k-log—can be pulled into a central repository. The burden of contribution is lifted from participants. This is key, since the success of KM depends on the willingness of individuals to participate, using tools that are transparent to the knowledge worker.
There are many robust web log tools that are inexpensive or even free. Popular software includes Movable Type, Radio Userland, any of the variations of Slashcode, and my favorite, Drupal. They allow individuals to publish content to a web site easily, and some packages even allow for categorization of entries. Most packages also permit authors to publish an XML feed of content. These low-cost tools help knowledge workers with two core concerns of KM: knowledge creation and knowledge sharing.
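To make the feed idea concrete, here is a minimal sketch of the kind of RSS 2.0 document a k-logging package might publish for a single entry. It uses only the Python standard library, and the channel name, URLs, entry text, and category are invented for illustration rather than drawn from any particular tool.

```python
# Build a tiny RSS 2.0 feed for one hypothetical k-log entry.
from xml.etree import ElementTree as ET

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Research notes k-log"
ET.SubElement(channel, "link").text = "http://intranet.example.com/klogs/researcher1/"
ET.SubElement(channel, "description").text = "Notes and annotations on industry literature"

item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "New arthritis medication announced"
ET.SubElement(item, "link").text = "http://intranet.example.com/klogs/researcher1/entries/42.html"
ET.SubElement(item, "description").text = "Brief annotation and a link to the press release."
ET.SubElement(item, "category").text = "Families of medication/Arthritis"

print(ET.tostring(rss, encoding="unicode"))
```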
Web-logging technology is simple to install, typically requiring 15 minutes and no expertise in web publishing. More importantly, it personalizes knowledge development: the software supports the individual's own knowledge capturing. By posting notes, annotations, links to pertinent articles, or just observations, researchers are pursuing their own interests while contributing to KM. In short, k-logging has a good chance of succeeding as a KM solution because it lowers barriers to participation, empowers individuals locally, and requires fewer resources.
The library role: support
Librarians may play different roles as intermediaries between k-loggers and the corporate environment, but the library should be responsible for certain basic actions: support k-loggers by providing content; provide access (taxonomies) to their content; and share knowledge through aggregation and publishing.
K-loggers often create new content by monitoring news feeds in their subject area, sometimes in a news reader integrated with their k-logging tool. When they see a news item of interest, they k-log it and add commentary. Librarians can offer k-loggers XML (RSS) access to vendor-supplied data, such as business news and market reports, and to internal data, such as documents and e-mail discussion groups. This data allows k-loggers to monitor their industry or subject area without leaving the firewall. Libraries might also want to pair RSS feeds with any standard topic-tracking e-mail alerts they already offer.
Here's how this might work. My organization includes the standard XML/RSS buttons in some search results screens to indicate that results are available in RSS format (see Figure 1). When k-loggers execute a search on a source such as our news database or our market reports database, they see a variety of options for tracking that search. They usually set up an e-mail alert on the search and also get the results in XML. Users can copy the URL for the RSS results and track those results in a personal aggregator or news reader such as Radio Userland, Feedreader, or NetNewsWire. When they see something of interest in their personal aggregator, they can k-log it.
Figure 1: Presenting XML feeds of database search results
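For readers who prefer a script to a desktop news reader, the polling step might look like the following sketch. It assumes a hypothetical feed URL for saved search results and uses feedparser, a common Python feed-parsing library (not one of the readers named above).

```python
import feedparser

# Placeholder URL for an RSS feed of saved search results.
FEED_URL = "http://library.example.com/search/rss?query=arthritis+medication"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    # Bibliographic feeds typically carry a title, link, and short abstract.
    print(entry.title)
    print(entry.link)
    print(entry.get("summary", ""))
    print("-" * 40)
```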
There may be copyright, content licensing, and distribution issues to resolve before you make this content available. You will probably want to offer only RSS feeds of bibliographic data—title, abstract, and URL are common in RSS feeds. We protect our feeds by requiring password authentication for feed URLs on our site, so usage is closely tracked at the user level. RSS aggregators such as Radio Userland support a login and password.
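Assuming the protected feed uses HTTP Basic authentication (one common way to password-protect feed URLs, though not the only one), a script-based reader could fetch it along these lines; the URL and credentials are placeholders.

```python
import feedparser
import requests

# Per-user credentials are what make usage trackable at the user level.
resp = requests.get(
    "http://library.example.com/search/rss?query=market+reports",
    auth=("klogger1", "secret"),
)
feed = feedparser.parse(resp.text)
for entry in feed.entries:
    print(entry.title, entry.link)
```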
Subject taxonomies can be used for categorizing k-log entries. If libraries offer a subject taxonomy for enterprise use, k-loggers can tag new entries using the same controlled terminology. When collected by the library's aggregator, entries from different k-logs can then be viewed by subject. To take this even further, the aggregate of k-log entries can be redistributed, classifications intact, to any application capable of connecting content via the standardized taxonomy. Many web log tools can export blog entries with subject headings using the latest RSS or XFML standards, and these formats can be used to remap individual subject headings to a centralized subject taxonomy.
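As a toy illustration of that remapping step, the sketch below translates a k-logger's local headings into centralized taxonomy terms. The headings and the mapping itself are invented; in practice the library would maintain this crosswalk as part of the taxonomy.

```python
# Invented crosswalk from local k-log headings to centralized taxonomy terms.
LOCAL_TO_CENTRAL = {
    "jnj": "Pharmaceutical vendors/Johnson & Johnson",
    "arthritis drugs": "Families of medication/Arthritis",
    "fda news": "Regulation/FDA",
}

def remap(local_categories):
    """Translate a k-log entry's local categories into centralized taxonomy terms."""
    return [LOCAL_TO_CENTRAL[c.lower()]
            for c in local_categories
            if c.lower() in LOCAL_TO_CENTRAL]

print(remap(["JnJ", "Arthritis drugs"]))
# ['Pharmaceutical vendors/Johnson & Johnson', 'Families of medication/Arthritis']
```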
How might this work from the k-logger's perspective? Imagine that a library maintains a taxonomy of subject headings for the pharmaceuticals industry in XML and HTML formats, along with scope notes and indications of broader and narrower (parent/child) relationships between terms. Users can select slices of this taxonomy (e.g., pharmaceutical vendors and families of medication), enter these terms into their k-log application, and later use them to categorize their k-log entries. Some packages, such as Drupal, will even allow you to insert entire polyhierarchical taxonomies into the k-log application so that k-loggers can do more granular classification of their entries. When k-loggers choose to k-log about a new arthritis medication being distributed by Johnson & Johnson, they can apply these terms from the subject taxonomy. The categorized entries will be distributed with their XML feed. When the KM aggregator reads that new feed entry, it can republish it with the category intact. In this way, readers can find stories about Johnson & Johnson and/or arthritis medications.
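On the aggregator side, the terms a k-logger applied travel in the feed's category elements and can simply be read off and carried forward when the entry is republished. Here is a rough sketch using the feedparser library; the feed URL is a placeholder for an internal k-log.

```python
import feedparser

feed = feedparser.parse("http://intranet.example.com/klogs/researcher1/rss.xml")
for entry in feed.entries:
    # feedparser exposes <category> and dc:subject elements as entry.tags.
    subjects = [tag.term for tag in entry.get("tags", [])]
    print(entry.title, "->", subjects)
```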
Aggregate and share
While this example briefly shows how k-log feeds integrate with our KM aggregator, it glosses over the complexity involved when the library undertakes an aggregator role. When aggregating XML from k-logs, we could simply install aggregator software and display the stream of new data as it is collected. Figure 2A shows the process and result of this approach. This is serviceable, but shouldn't we add value to this archiving and redistribution by further classifying k-log entries?
Figure 2: Aggregation methods
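A minimal version of the "collect and display the stream" approach in Figure 2A might look like the following sketch, assuming a hand-maintained list of k-log feed URLs (all placeholders) and the feedparser library.

```python
import time
import feedparser

# Hand-maintained list of enterprise k-log feeds to collect.
KLOG_FEEDS = [
    "http://intranet.example.com/klogs/researcher1/rss.xml",
    "http://intranet.example.com/klogs/curator2/rss.xml",
]

entries = []
for url in KLOG_FEEDS:
    for entry in feedparser.parse(url).entries:
        entries.append(entry)

# Newest first; feedparser normalizes publication dates into published_parsed when present.
entries.sort(
    key=lambda e: time.mktime(e.published_parsed) if e.get("published_parsed") else 0,
    reverse=True,
)

for entry in entries[:20]:
    print(entry.get("published", ""), "-", entry.title)
```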
Some k-loggers will use subject taxonomies to categorize k-log entries at the point of authorship, but not all will. While it's important to establish the conditions that allow creation and sharing, it's also important not to mandate behaviors that interfere with that process. Expect some indexing to be required on your end. After all, if knowledge isn't findable and usable, it's just more useless data in the organization's ether.
Open-source, server-based news aggregator software can collect and publish k-log entries on a periodic or on-demand basis, but most packages do not offer the classification features mentioned above. These features may be available soon, but if you want them now you will need to do some custom development involving text parsing and automatic classification of newly collected entries. This can be a significant undertaking. Furthermore, while improvements have been made to autoclassification in CMS tools, many believe that the best direction is a semi-automated approach: automated classification in conjunction with some manual classification. Figure 2B illustrates what this process might look like.
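In the absence of built-in classification features, one crude starting point for the automated half of that process is keyword matching against taxonomy terms, with anything that fails to match routed to a human indexer. The keyword map below is invented for illustration and stands in for whatever text-parsing or autoclassification tool you might actually adopt.

```python
# Invented keyword map from taxonomy terms to trigger words.
TAXONOMY_KEYWORDS = {
    "Families of medication/Arthritis": ["arthritis", "rheumatoid"],
    "Pharmaceutical vendors/Johnson & Johnson": ["johnson & johnson", "j&j"],
}

def auto_classify(text):
    """Return taxonomy terms whose keywords appear in the text."""
    text = text.lower()
    return [term for term, keywords in TAXONOMY_KEYWORDS.items()
            if any(k in text for k in keywords)]

def route(title, summary):
    """Auto-classify an entry, or queue it for manual indexing."""
    terms = auto_classify(title + " " + summary)
    if terms:
        return ("publish", terms)
    return ("manual_review", [])

print(route("New arthritis medication", "Johnson & Johnson announces distribution plans."))
```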
Another method requires watching news feeds as they arrive and classifying them manually, effectively acting as another k-logger who watches only the k-logs in your enterprise. This brute-force method is fairly easy to implement, provided you have the human resources. This is the process I used in the two years that I blogged information architecture news on the iaslash.org site.
Factors for success
There are a few additional factors to think about when considering how the library will get involved with k-loggers. These issues affect how your solution will be received, its success, and its sustainability. The greatest consideration is strategic: do you have the connections and influence to insert the library into the minds and processes of k-loggers? Building tools doesn't necessarily mean that people will use them; you must create awareness of the benefits of sharing knowledge.
Training is important as well. Do you have the resources to train staff to use the new tools? Most will not have experience using subject taxonomies.
Do you have the money and human resources to support taxonomy and information system development and maintenance? If you plan on a semi-automated system, manual classification will still require staff time. And when the push comes to extend functionality, custom programming will be required as well.
Web-logging is more than just a cool form of web publishing. Savvy corporate k-loggers have proven that it can be a useful way to capture and share knowledge, but the tools people choose for k-logging are diverse. This shouldn't be a problem in your information ecology, however, because XML-formatted data feeds are the glue that will pull together content from disparate k-logs. The final message is not to fear the k-loggers but to embrace their willingness to share knowledge. Empower them to spread their message across the enterprise. Remember that if the right messages reach the right people at the right time, you will positively affect your organization.