Facilitating the social annotation and commentary of web pages

UPDATE: For a response to some comments by James Farmer, Stephen Downes and Ian Kallen, see the bottom of this post.

Subtitle: A postscript to my work on Distributed Textual Discourse (DTD)

At last year’s 16th Annual Instructional Technology Institute Conference at Utah State University, I presented a paper on Distributed Textual Discourse. DTD is a model for facilitating asynchronous online conversations right at the source of the content, and can be used to enhance the collaborative features of existing tools such as blogs and wikis, or simply to provide the opportunity to annotate and comment on any section of a web page.

Although to date I have not been able to produce a proof-of-concept, I have continued to conduct research in this area. Here, I would like to review some of the projects I have encountered that offer solutions to problems similar to the ones I tackled theoretically in my paper. [Some of the work I will discuss obviously precedes my paper on DTD, although I was unaware of it at the time. Other projects seem to have appeared afterwards, although I doubt the developers had any knowledge of my work. Viva el collective unconscious!]

Distributed Textual Discourse

Before reviewing these projects, I would like to summarize what I consider to be the challenges that new forms of online discourse face, and how I thought DTD would help address them.

In the paper Online Discourse: Past, Present and Future, I identified two characteristics of online discourse that differentiate it from other forms of discourse such as oral and written speech; I refer to these two characteristics as hypertextuality and distributed discursivity. Based on an analysis of these characteristics, I discussed the limitations of some current online discourse tools (discussion boards, blogs and wikis), and described the challenges that new models and tools would need to address. In my opinion, new online discourse technologies need to possess the following characteristics:

  • Discourse on demand. Online discourse can be initiated right at the source of the content published online. There is no need to leave the content behind to go somewhere where discourse can be supported.
  • Hypertextual granularity. Discourse participants are able to hypertextually annotate every fragment of an online text, instead of having to refer to online texts as wholes which cannot be annotated.
  • No separation of author and respondent roles. Hypertextual features are available to all participants, not just to the authors of speech acts.
  • Balance between local and networked, individual and collective. New models of distributed discursivity combine the best features of individual-based, dispersed discourse (like blogs) and community-based, centralized discourse (like discussion boards). A balance is achieved between the needs of the individual and the community to create discursive meaning through hypertextual aggregation.
  • Social filtering. To manage information overload, discourse participants have access to filters that sort content according to group membership or peer-reviewed quality assessment.
  • Decentralized, open infrastructure. If desired, discourse participants are able to collaborate directly and spontaneously, without unwanted mediation or management. Access to online discourse tools is free or very low-cost, and discourse can unfold without access to privately-owned servers.

Although in the paper I describe how DTD would address each one of those challenges, here I will merely summarize the functionality in broad terms.

Imagine that the figure below is any web page, or a blog or wiki page. As the participant moves her mouse over this DTD-enabled online text, each word becomes highlighted. This signifies an opportunity to click on that word and enter a new comment. Doing so displays a form where the participant enters the comment related to that location in the text. Upon closing the form, the new comment is saved and “[1]” appears next to the word in question (if previous comments already exist, the number in brackets would indicate the total number of comments related to that word; e.g. “[3]” indicates a total of three comments at that location):

The possibility of word-by-word annotation results in a highly granular mode of discourse. It would also be possible to insert comments related to whole sentences, paragraphs or pages.
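The mechanics just described can be sketched as a toy data model. This is my own illustration, not code from the DTD paper; all names are hypothetical, and comments are simply keyed by word position:

```python
# Hypothetical sketch of a DTD-style comment store: each comment is keyed
# by (page URL, word index), and a word's "[n]" marker shows the running total.
from collections import defaultdict

class CommentStore:
    def __init__(self):
        # (url, word_index) -> list of comment strings
        self._comments = defaultdict(list)

    def add(self, url, word_index, text):
        self._comments[(url, word_index)].append(text)

    def count(self, url, word_index):
        return len(self._comments[(url, word_index)])

    def render(self, url, words):
        """Append '[n]' after each word that has n > 0 comments."""
        out = []
        for i, word in enumerate(words):
            n = self.count(url, i)
            out.append(f"{word} [{n}]" if n else word)
        return " ".join(out)

store = CommentStore()
store.add("http://example.org/page", 1, "Is this term well defined?")
store.add("http://example.org/page", 1, "See also the 2004 paper.")
print(store.render("http://example.org/page", ["Distributed", "Textual", "Discourse"]))
# "Textual" has two comments, so it renders as "Textual [2]"
```

Because each comment is itself addressable, the same store could hold comments on comments, yielding the network of linked speech acts described below.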

In order to access the comments, the user simply clicks on the number in brackets. This displays a new window with a list of the corresponding comments:

Clicking on an item in the list opens a new page with the comment. The text of this comment is DTD-enabled, so that users can add comments to it in the same manner described above. Thus, discourse participants can add comments to the comments of the comments, and so on. In other words, a comment becomes a new speech act that can be further annotated by participants in the same way that the original speech act was annotated. This way, discourse takes the form of a network of hypertextually linked speech acts.

Below the list of comments is a menu of options. Discourse participants can take advantage of a variety of features for aggregating or filtering comments in the list (useful when that list becomes too large). Items can be filtered by searching for specific text strings, or arranged by author, date, etc. (much like we can filter long lists of emails). Comments can also be filtered by setting quality thresholds (like those used in Slashdot or Plastic) or by applying various social filters (e.g., displaying only comments that belong to a particular group of people, etc.).
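As a sketch, the filtering menu described above might work like this in code. The field names and scores are hypothetical, loosely modeled on Slashdot-style quality thresholds and group-based social filters:

```python
# Hypothetical comment records: each carries author, date, a moderation
# score, and group memberships, so the list can be narrowed much like
# an email client narrows a mailbox.
from datetime import date

comments = [
    {"author": "alice", "date": date(2005, 5, 1), "score": 4, "groups": {"class-a"}, "text": "Agreed."},
    {"author": "bob",   "date": date(2005, 5, 3), "score": 1, "groups": {"class-b"}, "text": "Citation?"},
    {"author": "alice", "date": date(2005, 5, 2), "score": 5, "groups": {"class-a"}, "text": "See p. 12."},
]

def filter_comments(comments, min_score=0, group=None, contains=None):
    """Apply a quality threshold, a social (group) filter, and a text search."""
    result = [c for c in comments if c["score"] >= min_score]
    if group is not None:
        result = [c for c in result if group in c["groups"]]
    if contains is not None:
        result = [c for c in result if contains.lower() in c["text"].lower()]
    return sorted(result, key=lambda c: c["date"])

# Slashdot-style threshold plus a social filter:
visible = filter_comments(comments, min_score=3, group="class-a")
print([c["text"] for c in visible])  # only class-a comments scoring 3 or more
```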

In the paper, I also discuss how DTD can be implemented as a peer-to-peer online discourse management system, allowing participants to collaborate, if desired, without the use of software running on centralized servers. All of the features described are well within the realm of what is technologically possible today.

Alternative approaches to web page annotation

While a DTD proof-of-concept remains to be built, there are many projects out there that take different approaches to the same challenges. Some of these precede my work on DTD, although I only became aware of them afterwards. Others seem to have appeared later, although like I said, I doubt the developers had any knowledge of my work. Because the project sites don’t always give dates, I cannot make any definitive claims about the chronology. Furthermore, I have chosen to list them below not chronologically but in an order that roughly reflects the complexity of each approach (which in some cases may indeed parallel chronological development). This list is by no means exhaustive. I have merely chosen these projects because they are representative of different approaches.

Fisking

Fisking is not a piece of software, but an online behavior. According to its Wikipedia entry:

Fisking, or to Fisk, refers to the act of critiquing, often in minute detail, an article, essay, argument, etc. with the intent of challenging its conclusion or theses by highlighting logical fallacies and incorrect facts. The practice was named after British journalist Robert Fisk… Fisking can be thought of as a side effect of the way weblogs behave as social software. Someone (often a mainstream media columnist) writes an article claiming “X because of Y”. Someone else (usually a blogger) takes it apart [by] showing that not only is X not correct, neither is Y, and the author egregiously failed to mention A and B…” (my emphasis; retrieved on May 17, 2005)

I am listing it here because I believe Fisking is a symptomatic recognition of the need for some form of granular annotation in online discourse. The low-tech version of Fisking consists of simply quoting (cutting and pasting) the selected words, sentences or paragraphs from the original online text and then inserting one’s comments. This approach does not take full advantage of hypertextuality or distributed discursivity. Clearly, there is a need to electronically facilitate the process, as the projects below attempt to do.

Purple Numbers


To be able to comment or annotate an online text at a granular level requires that one can single out and link to a specific part of the text. An early attempt to do this was Eugene Eric Kim’s Purple Numbers (inspired by the work of Doug Engelbart, whose influence is also felt in the Liquid Information project, below). The project site states:

“Its purpose is simple: produce HTML documents that can be addressed at the paragraph level. It does this by automatically creating name anchors with static and hierarchical addresses at the beginning of each text node, and by displaying these addresses as links at the end of each text node… If you want to link to a paragraph or a list item, you use the purple number as an address. Because the purple numbers are links, you can copy and paste the link into your own document, rather than typing the whole thing from scratch.”

Below is an example of a Purple-annotated text, which can be accessed here.

While the approach is relatively simple, it does not appear to have been widely adopted. One important characteristic to note is that the program, not the user, decides with what degree of granularity the text is indexed for annotation (in this case, paragraphs). Also, while the hyperlinks provide a convenient way to link to a particular part of the text from somewhere else, there is no way to facilitate a discussion right on the page, at the source of the content.
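A rough sketch of the Purple Numbers transformation, as I understand it from the project description. The real implementation generates stable, hierarchical IDs; the flat counter, regex, and naming here are simplifications of my own:

```python
# Sketch: give every HTML paragraph a named anchor and append a visible
# link to itself, in the spirit of Purple Numbers. Toy version only:
# a real tool would use a proper HTML parser and stable hierarchical IDs.
import re

def purple_annotate(html, base_url="doc.html"):
    count = 0
    def add_anchor(match):
        nonlocal count
        count += 1
        nid = f"nid{count:03d}"
        body = match.group(1)
        link = f' <a class="purple" href="{base_url}#{nid}">#{nid}</a>'
        return f'<p><a name="{nid}"></a>{body}{link}</p>'
    return re.sub(r"<p>(.*?)</p>", add_anchor, html, flags=re.DOTALL)

print(purple_annotate("<p>First paragraph.</p><p>Second paragraph.</p>"))
```

Anyone can then copy the `#nid…` link to cite that paragraph from elsewhere, which is exactly the affordance (and the limit) discussed above.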

Liquid Information


Liquid Information, a research project headed by Frode Hegland at UCLiC in London in cooperation with Doug Engelbart in California, takes a different approach to making online text more interactive.

A January 2005 article in Wired News summarizes the mechanics of the project:

“Hegland’s idea is simple — he plans to move beyond the basic hypertext linking of the web, and change every word into a “hyperword.” Instead of one or two links in a document, every single word becomes a link. Further, every link can point to more than one place, pulling up all kinds of background context from the web as a whole.”

The animation below is a simple illustration of what this is intended to look like:

Moving the cursor over the word Fleur opens a menu that allows the user to show only those sections of the text that contain (or do not contain) that word, as well as perform different operations on the word such as a Google search, etc. The same thing can be done with any word of an online text (a more complex demo of a Liquid Information-enabled BBC page is available here).

The initial focus of the project is on the hypertextual possibilities of an online text, not so much on the possibilities for facilitating distributed discourse. As the FAQ page declares, the focus is on “helping people navigate through any web page text”, not on providing them means to insert their own comments or annotations on a page.

Annozilla (Annotea)


Annozilla is a Mozilla project owned by Matthew Wilson. The project is

“designed to view and create annotations associated with a web page, as defined by the W3C Annotea project. The idea is to store annotations as RDF on a server, using XPointer (or at least XPointer-like constructs) to identify the region of the document being annotated.”

The Annotea project (led by Marja-Riitta Koivunen) defines annotations as “comments, notes, explanations, or other types of external remarks that can be attached to any Web document or a selected part of the document without actually needing to touch the document.”

The screenshot below shows a web page where various comments have been inserted at the beginning of the first two paragraphs (notice the pencil icons), and the comments are available for viewing in the sidebar.

This approach starts to combine hypertextuality and distributed discursivity opportunities. Users can annotate sections of a text or the whole text, and they can also reply to each other’s annotations (see this page for screenshots of the process). Another important feature is that the process does not involve making changes to the original online content (clearly, any solution that asks authors to adopt this or that standard and re-publish all their content is going to run into adoption problems).
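To make the Annotea model concrete, here is a loose sketch of what an annotation record stored on a separate server might contain. The field names are illustrative, not the exact Annotea RDF schema; the key point is that the note points back into the target page with an XPointer-like selector, so the original document is never touched:

```python
# Hypothetical Annotea-style annotation record. The annotation lives on
# an annotation server and references the target page plus a selector;
# replies are threaded by pointing at another annotation.
annotation = {
    "annotates": "http://example.org/article.html",
    "context": "xpointer(string-range(/html/body/p[2], 'social annotation', 0, 17))",
    "author": "reader@example.org",
    "created": "2005-05-17T10:30:00Z",
    "body": "This paragraph overstates the adoption problem.",
    "reply_to": None,  # set to another annotation's ID to make this a reply
}

def is_reply(a):
    return a["reply_to"] is not None

print(is_reply(annotation))  # a top-level annotation, not a reply
```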


Wikalong

Don’t wikis allow for many of the behaviors being discussed? The answer is yes, although users would have to insert annotations manually, and the content to be annotated would have to be part of the wiki (i.e., not any external web page). However, John Cappiello has come up with a way to attach a wiki to a web page in order to facilitate social annotation and commentary. The name of his project is Wikalong:

Wikalong is a FirefoxExtension that embeds a wiki in the SideBar of your browser, indexed off the url of your current page. It is probably most simply described as a wiki-margin for the internet.

The image below is a screenshot of a Wikalong-enhanced web page (notice that it provides an RSS subscription button, which opens up a whole new set of social affordances):

As I argue in another one of my papers, Online Discourse: Past, Present and Future, wikis are not the most elegant solution for managing turn-based online discourse (although, as I point out here, they do promote interesting new forms of social literacy). However, wikis do provide hypertextual affordances that can empower all discourse participants, not just the original authors of the content. Plus, wikis are becoming more popular, so using them as tools for annotating web pages seems like a logical step (perhaps Ross Mayfield and his group have some ideas regarding this topic).


GoodNotes

GoodNotes is the final thesis project of Christina Goodness, a Master’s candidate at NYU’s Interactive Telecommunications Program. GoodNotes is built on the Annozilla code library, but it goes beyond the previously discussed projects by adding some nice collaborative features that directly address many of the items in my list of challenges to online discourse (see above). The web site states:

“GoodNotes helps you categorize, leave notes on and share web pages with your fellow students without leaving the browser… The core of GoodNotes allows you to add certain people as part of a group, then share your tagged bookmarks with them online. Additionally, you can leave threaded annotations… [which] can be accessed by anyone who you consider a friend. Special people who you think influence your thought can be marked “guru.””

Besides the opportunities for hypertextuality and distributed discursivity that previous solutions offer, GoodNotes addresses the issue of social filtering by facilitating the formation of groups and the opportunity to tag or label comments.

The image below is a screenshot of the interface:


Co-Link

The Co-Link project began in 2003 with a concept by Alex Primo, and the programming was done by Ricardo Araújo. The project website offers various papers describing the approach (e.g., Participatory creation of multidirectional links aided by the use of Colink technology (PDF)). The project is summarized as follows:

“Several associations can be made from one single word. As one knows, while a text is read many previous readings are mentally articulated. With co-links technology this remission network can be registered in the very text. But, more than an individual storage, the hypertextual document becomes the cooperative registration of different particular visions.”

The following is a screenshot of a Co-Link-enabled page (the demo can be accessed here):

When accessing a Co-Link’ed text, hyperlinked words signify that one or more notes have been entered there (unfortunately, there is no number, as in DTD, to indicate how many notes exist at a particular location, which would help users recognize which parts of the text are generating more activity). Notes in this case mean strictly hyperlinks, not comments by other users. Clicking on one of these words opens a pop-up menu with a list of the links. Filtering or searching the list is not possible, but clicking on a magnifying glass icon allows users to access information about the person who posted the link, along with a short description of the link (if entered by the author), date stamps, etc. There’s also a menu option to add another link to the current word.

If users desire to add links to a word not currently active (i.e., a word that is not yet a hyperlink), they click on a global menu option called “Insert new link” which turns hyperlinks off and allows users to click on the words that do not currently have any links attached to them. This switching back and forth between the two modes would be unnecessary if all words were clickable, and some visual icon (a pencil like in Annotea, or a number like in DTD) indicated which words had links attached to them.


Gibeo

The Gibeo Network incorporates many of the features that have been discussed here, but it ain’t free (it’s free to register and experiment, but to have access to all of the features as well as unlimited publishing, one has to deposit funds in an account). Once registered and logged in, one simply adds .gibeo.net to any URL (e.g. www.cnn.com.gibeo.net). Then, when one highlights any portion of the corresponding online text (between 3 and 300 characters), a menu pops up that offers various annotation and commenting options. In the words of the Gibeo web site:

“It’s easy, simply register and then view web sites through gibeo.net and we instantly and transparently add cool stuff to every page from anywhere and with any browser. Select any text on any page and a menu will pop up, allowing you to highlight that text so that everyone can see it. Who hasn’t been at a site and wanted to yell “this is wrong!” or “wow, that’s amazing”? Now you can, and you can share it with your friends, instantly search, translate, or blog it.”

This is what the menu looks like:

The basic functionality of Gibeo is described in the FAQ page. The service offers some neat features such as the ability to create an RSS feed of the text you highlight, define friends, create groups, and choose a privacy setting that allows only your friends to read your comments. Gibeo offers most of the hypertextual and distributed discursivity characteristics I had envisioned in DTD, although the UI is very different. The other major difference is that it is highly centralized: everything is stored in the Gibeo servers. On the other hand, Gibeo is currently operational, while DTD is merely vaporware 😉
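The URL scheme Gibeo uses can be sketched in a few lines; this is my own reconstruction from the description above, not Gibeo’s actual code:

```python
# Sketch of the Gibeo access scheme: the proxied address is formed by
# appending ".gibeo.net" to the page's host name, leaving the path intact.
from urllib.parse import urlsplit, urlunsplit

def gibeo_url(url):
    # Accept bare hosts like "www.cnn.com" by assuming http://
    parts = urlsplit(url if "//" in url else "http://" + url)
    return urlunsplit((parts.scheme, parts.netloc + ".gibeo.net",
                       parts.path, parts.query, parts.fragment))

print(gibeo_url("http://www.cnn.com/2005/TECH/index.html"))
# -> http://www.cnn.com.gibeo.net/2005/TECH/index.html
```

Because all traffic flows through that proxy hostname, every annotation ends up on Gibeo’s servers, which is the centralization trade-off noted below.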

I hope this review of some current tools for the annotation and commentary of web pages has been useful. If you know of other projects I haven’t listed here, please share the URLs. Also, if anyone wants to tackle a DTD proof-of-concept (say, for a school project!), please contact me (see About me page).

Response to comments:

James Farmer wonders whether we will annotate the web, or whether it will annotate itself. I don’t think it’s an either/or issue. We depend on search engines and software to make semantic connections automatically. But there will always be a need, I think, to insert our own comments. What is a blog if not a tool to annotate the world? The difference is that a blog collects notes in a central location associated with the user, whereas some of the tools I discuss collect comments and notes at the source of the content. Again, there is probably a need for both of these approaches (see below).

Stephen Downes points out the obvious: out of billions of online documents, how many will we want to annotate with very exact granularity? A very small number, most probably. But does that mean that the functionality should not be available? Sometimes it is sufficient to cite a document as a whole. But sometimes one wants to deconstruct a text word by word, or sentence by sentence (cf. Fisking above). Furthermore, one may want to start a conversation amongst various users stemming from a single word in an online document. Think, for example, how valuable that would be in textual hermeneutics exercises.

Ian Kallen has similar concerns, and argues that there already are HTML mechanisms to deal with linking and citations, which makes highly granular annotation along the lines of what I am suggesting “gratuitous complexity.” He admonishes me to “Let the web be the web.” 😉 I wasn’t aware that the web was an exercise in simplicity and minimalism which must remain static in perpetuity. As far as the necessity for highly granular annotation, I would say to him what I said to Stephen.

Additionally, I find the way in which I became aware of the above three responses an interesting example of one of the problems I tried to tackle in DTD: the ability to begin and continue a dialogue at the source of the content. To elaborate: I found out about James’ response because I am subscribed to his blog; in looking through my referrer logs, I saw a hit from Stephen’s domain, and that’s how I got to read his comments; and PubSub picked up the mention of my name in Ian’s blog, and that’s how I became aware of his post. In other words, there was no way for me to know about these gentlemen’s responses to my post if it weren’t for the ‘gratuitously complex’ technologies I just mentioned (RSS, PubSub, logs). Comment forms on blog posts and Trackback were supposed to address this issue, but as we know, they are becoming less and less used (as we can see, none of the above authors decided to leave a comment on my blog or send a trackback, just as I have also decided it is more convenient to post my reply in my own blog).

Part of what I am suggesting is that there is a need to balance the ability to post my responses at an external location (such as one’s own blog), where the original author may never become aware of them, with the ability to insert comments right at the point of origination so that everyone reading the original content knows that I have something to say about it. Maybe the current mechanisms we have (the mechanisms that alerted me to the responses by James, Stephen and Ian) are good enough. Most likely, however, we will continue to see new and interesting (and complex) solutions such as the ones I described in this article. I think any attempts to make the web more dialogical are a wonderful thing.

Join the Conversation


  1. links for 2005-05-21

    ideant: Facilitating the social annotation and commentary of web pages. Requirements for distributed discourse technologies (tags: ftr purple collaboration)…

  2. I have been working on an annotation feature for the forums in Moodle. It allows for highlighting and margin notes. Unlike the other web annotation systems I’ve seen, it does not require special browser software; rather, it is integrated with the host application (Moodle for now).

    This means it allows for closer integration; so, for example, annotations are associated not only with a web page and a chunk of text, but also with the title and author of that highlighted text. This makes integrated searches and summaries possible (show me all of the annotations in a given discussion thread, for example). Annotations are tied to a chunk of content, so even if it appears on a different page (e.g. a different forum message view), the annotation can still be present. The great disadvantage is that this requires integration with the host application: it can’t be used to annotate arbitrary web pages.

    One other feature I’m playing with is generating an RSS/Atom syndication feed of annotations, so that it’s possible to subscribe to a user’s annotations and read them – along with excerpted text, author, title, and a link to their context – in a feed aggregator.

    I have more information, including GPL source code, at http://www.geof.net/code/annotation/ (or click on my name below).

  3. Here at the Center for New Media Teaching and Learning, we have been developing a GPL annotation implementation, initially within the Plone CMS, called PloneStickies, based on the metaphor of sticky notes.

    – The PloneStickies home page.
    – A roadmap of future development.
    – Notes on motivations, challenges, and future directions
    – A working demo of just the pure js/css sticky notes, that are designed to be adapted to any online environment.

    In our experience, we have discovered the differences between annotating arbitrary content on the web, and annotating content within an environment where you control both the client and the server (i.e., a CMS or an LMS).

    The possibilities afforded by taking advantage of server-side intelligence are tough to pass up – things like searchable stickies, workflowable stickies, keyword stickies (think tagging), and runtime configurable placement policy are all problems that are much more tractable when annotating content within the system.

    If anyone is interested in learning more about our project, or potentially collaborating on its future development, please contact me.

  4. I used a product as long ago as the 1990s that provided much of this functionality. It was called iMarkup. It allowed annotation and comments to web pages that were stored locally, but could also be shared via e-mail or a dedicated server. I believe it was specific to Internet Explorer, but it offered features similar to many of the Mozilla add-ons you cover here.

    It seems the company has de-emphasised this aspect of their business to concentrate on server-based knowledge management software, but the product I once used is still available as iMarkup Desktop Client at the included URL (http://www.imarkup.com/client/imarkup_client.asp). It’s priced at about $40.

  5. i think your ideas are great: why not allow granular comments & social discussion at the source document?

    seems like a good idea to me

    i see it as particularly valuable for group discussion of documents (like goodnotes above)

    maybe “web pages” are only an early iteration of what highly networked communication can be

    web 2.0 is allowing more interactivity in any case – extending the page idea into new forms

    interesting, thanks …
