Technology Without Ends: A Critique of Technocracy as a Threat to Being

“…we are potentially most ignorant of the impact of technology at the very time when we are most assured that we understand it.”

T.J. Rivers (1993, p. 20)

Although a lot is said about the endless possibilities and futures that technology can place at our feet, and the innovative opportunities for “identity production” that it affords the networked self, I sometimes get the feeling that technology can in fact guarantee only one possible outcome: uniformity (i.e. more of the same, as in standardized futures and homogenous identities at the service of a single driving force). How dare I say this, when we live in a period of endless innovation and relentless progress? Well, in part precisely because of the endless and relentless nature of change. Yes, it is more than obvious that the world is changing as a result of our use of technology. But is this the kind of change that signifies new horizons for humanity, or merely a continuation of changes that, since the Industrial Revolution, are predictable and (more forebodingly) unstoppable? In other words, is “Human 2.0” really a testament to the greatness of the spirit, or simply a collection of useless features that not only fail to improve on the original, but in fact bar the doors to any kind of evolution that deviates from a particular path?

Such are the concerns that, although framed differently, also seem to preoccupy Theodore John Rivers in his book Contra Technologiam (1993, University Press of America). [I had not encountered any reference to Rivers in my previous readings on technology and philosophy. Serendipitously, I stumbled upon an article by him in the journal Technology in Society, which led me to his out-of-print book.] Rivers gives us what I think is one of the most concise and thought-provoking philosophical critiques of technology for our times, devoid of the sensationalism and jargon that characterize more popular offerings of the same genre.

Specifically, Rivers attacks our liberal rationalization of technology: our defense of technology by choosing to focus on the positive even when it is outweighed by the negative, so that the good is used not to provide a counterbalance to the bad, but to deny the existence of the bad altogether (a denial that is necessary, because a genuine assessment would lead us to realize that to truly consider the bad in technology would render it unsustainable). It is this critique of liberalism that will make it a difficult read for most folks. The book is not constructed as a traditional scholarly work, replete with references and research data to support the arguments. If anything, it is more of a polemic, a philosophical paralogy for a society obsessed with technology; and it is because of this and its rhetorical power that I appreciate it (which does not mean I agree completely with it). In Rivers we find no superficial neo-Luddism, but an insightful analysis of how technology limits our choices even while proclaiming to expand them. In the context of our current narratives about how technology can redefine social structures and enhance our ways of knowing, I think these are critiques that need to be taken seriously. Rivers forces us to confront the Faustian bargain we have made with technology and ask: Are we not in fact accelerating our de-humanization while believing we are struggling for our freedom?

Technology as a threat to being

Rivers starts by establishing that technology exists because we “invariably see the world in need of alteration” (1993, p. 1). Our needs and desires dictate that we act upon the world in order to transform it, and for that we need technology. We should not conclude, however, that technology is ‘natural’ to our being (the very essence of technology implies artificiality, after all). Rivers makes an important distinction between the ontological status of our openness to being, and the non-ontological status of technology. According to him, our being is open in the sense that it is flexible and dynamic. In other words, the self is continuously undergoing change. These changes generate different demands from the world, which we seek to satisfy through the application of technology. Thus, technology “is a situation conditioned by our being” (1993, p. 9) as a result of encountering the world, but it is not a natural part of our being (which is what Heidegger would try to argue, I think). Rivers’ premise is that technology can in fact threaten whatever is natural about being:

Although openness to being allows technology to come into the world, this truth does not also mean that being is aided by technology because technology inherently is an artificiality. What is natural to us is openness to being, definable by ontological freedom, which in itself cannot account for its own naturalness. The more there is technology in the world, the more this naturalness is challenged. (2005, p. 16)

Not only is being not aided by technology, but technology has a way of subverting being by demanding that our attention and efforts be placed at its service. This is because technology is concerned with action, with doing, and nothing else. “Technology inhibits deep thinking because it is concerned primarily with activity, not contemplation. Because thinking is fundamental to self-awareness, technology is an obstacle to self-identity. It is a threat to internality” (2005, p. 23).

Whereas in pre-modernity actions were viewed as emanating from being, nowadays being is seen as emanating from action. I do, therefore I am. Technology exists only as long as we are engaged in doing things with it, and is unconcerned with what kind of being results from the doing. As Rivers puts it: "[t]he relationship has been reversed: that is, technology is no longer an aid in the perfection of being, but rather being is now an aid to the perfection of technology" (1993, p. 10).

Against the liberal narratives that endow technology with the power to help us re-define or re-discover the self, Rivers argues that technology in fact obstructs and distorts the most fundamental human enterprise: Know Thyself.

One assumption made of technology is that it allows us to think about ourselves, presumably because it gives us more leisure time for reflection; but it does not. Technology fails because we become dominated by its very presence, by its devices and techniques, by the complexities of its rationality and the convolutions of its methodology. Technology cannot help but drive a wedge between us and self-awareness, between us and that relational phenomenon which is grounded in inwardness, that is, in the awareness of the individual of himself [sic], of a kind of self-directedness, a reflection of the self to the self. Until we make a conscious effort to remove ourselves from technology’s driving forces, it will continue to reduce our prospects of liberation. (1993, p. 110)

Technology and (a)morality

Rivers is not the first one to point out the fissure that modernity introduces between the use of technology as a means towards a specific end, and the use of technology as pure means, as action without a particular end (Simpson, 1995, comes to mind as a recent author who explored the dichotomy between praxis and techne). And the preoccupation with how this shift has affected our system of values has been an old concern with philosophers of technology. But what Rivers does particularly well is to look beyond the veil of liberal discourse and expose in no tentative terms the deficiencies of a morality based on a technology without ends, a technology whose only goal is to preserve itself:

… technology, which is never satisfied with its present state of being and continually on the way to its replacement, becomes a perfectionist’s fantasy. It is so consumed by its own means that ends have become anathema to it, and thus the meaning and even the possibility of its ends are lost to itself… the absence of ends is a cause of much devastation, both to nature and to man [sic, and sic for every time the masculine is used exclusively]. (1993, p. 7)

We are presently, according to Rivers, unconcerned with the consequences of the application of technology. All that we care about is that it works. We celebrate new technologies for their affordances, because they let us do, and we dive right into the doing without paying much attention to the absence of ends. In fact, rather than a moral system, technocracy can be best described as a system of amorality:

…[technology] has been transformed into a way of life. It must not be considered merely in its effect as a morality; whereas morality is always projected toward some end, the end of technology is forever more technique, that is, unending increase in its impact as a means, and ever-continuing augmentation of its influence in the world. (p. 12)

In what follows, I will summarize Rivers’ attack on liberal discourses of technology. I will quote from his work extensively in an effort to retain as much of his voice as possible. While I tend to agree with most of his analysis, I will identify at the end some of the reservations I have about his argument, and in doing so try to suggest some way out.

Technology does not engender freedom, but curtails it

Technology’s raison d’être assumes that if we can do something, we ought to do it… It is for this reason that technology limits human choices–for if we are powerless to resist technology’s latent power, we can hardly call ourselves free. (p. 30)

Change that is contingent on a limited set of possibilities cannot really be said to be the expression of freedom. In Rivers’ words: “The choices that technology offers are all within the system. Any increase in technology makes the system more, not less, restrictive” (p. 62). This is because “[a]lthough in theory alterable, in practice technology is rigid because its flexibility is manifested only within the perimeters of its rationality, because it is evident only within the boundaries of its methodology” (p. 55). So if technology limits our freedom by making it irresistible to do what it affords, then more technology offers only more opportunities to act against our freedom, even while seemingly promoting it:

[Technology] creates the impression that it liberates us, that it enables us to accomplish more with its aid than without it. But this is a delusion because although technology enhances possibilities on the one hand, it limits them on the other. (p. 20)

It is not simply that for every door that it opens technology closes others, but that technology, not us, determines the path to the doors to be opened. Rivers is unapologetically a technological determinist (under the grip of technology’s logic, he would say, there is little society can do to determine how technology develops —although there is the illusion that we are in control). Our surrender to technology is, in his view, a dangerous compromise: we may stand to gain a few things, but in return we put in jeopardy the authenticity of our being. “Technology gives us the feeling that we no longer have to be authentic in order to act authentically” (p. 105). In other words, as long as technology can help us ‘fake’ authentic being through action, it makes our surrender to it seem OK.

Technology does not engender democracy, but mass mediocrity

Rivers points out that population growth is “both a result of technological progress and a cause of it” (p. 67). New technologies make it possible to sustain more human lives, which in turn requires more technology, thereby securing its perpetuation. “The more there is technology, the more there are people” (ibid); not just any kind of people, but people who “contribute little out of the ordinary” (ibid). Technology requires not individuals capable of asserting their freedom, but compliant, ordinary, mediocre masses. Rivers sees the computer as the ultimate exponent of a technology for these masses:

The computer is the universal machine of an egalitarian and civilized world, and it permits anyone to use it. It is the great equalizer, requiring neither unique talents, nor special skills, nor moral preference, nor acute wisdom. It is devised for anyone and everyone. It is the machine par excellence for the masses. (p. 18) [We should keep in mind that he is talking about using a computer, not more specialized tasks like designing software for it, which not everyone can do.]


In Rivers’ mind, the kind of collectivism that technology facilitates does not lead to democracy, but to the stamping out of anything exceptional, to the erasure of the individual by the mass (a similar argument warning against Web 2.0’s uncritical preference for the collective has been made recently by Jaron Lanier. I have some reservations about framing the issue without accounting for the intersections between the individual and the collective, but I will address those elsewhere). While Rivers’ views of what constitutes exceptional individualism are a bit Eurocentric, his point is that “[t]he implementation of technology is the manner by which individuals are mechanized into masses” (p. 61). Looking at the phenomenon of mass education, it would be hard to disagree. Because individuals who achieve higher levels of development are threats to the status quo, technology is about lowering everyone to the lowest common denominator, the mass.

Mechanization is the very organization of technology, so that as the whole world becomes increasingly similar, we have a greater tendency to become trite, banal and commonplace in everything that we do. (p. 20)

Technology does not foster community, it destroys it

Masses are not sites of rich social interaction. If anything, it is the norm to feel totally alone in a mass. While technology advertises new means to ‘reach out and touch someone’ that supposedly make distance meaningless and the world smaller, according to Rivers technology “removes the tangibility between men” (p. 58). He asserts: “Ironically, the sheer numbers of the masses are not the only thing that is onerous to an age dominated by technology, for there is also the very inability of the world to bring the individuals in the mass together” (ibid). Technology inserts itself even in our most intimate interactions, becoming our intermediary and deepening our dependence on it. No form of communication is outside its scope. “We are more at a loss in a technological age than in former ages because we have rendered ourselves helpless without it” (p. 120).

Furthermore, according to Rivers, access to technology does not guarantee equality, nor does it promote tolerance:

[A technological age] leads to fission, not fusion. Its subjects are incapable of attaining homogeneity. It makes everyone ethnically and racially conscious, that is, technology makes us more aware of ourselves: it enhances a greater awareness of not only who one is but also who one is not. Although racism should never become respectable, it is a direct result of life in a technological age. In fact, there is an appreciable difference between racism in the past, which was based on ignorance, and today’s racism, which is based on confrontation, upon a kind of face-to-face conflict. (p. 51)

Elias (1998; see this) had already remarked on how technology’s propensity to shrink the world can result in conflict. But while he held out hope for an eventual “organized unification of humankind,” Rivers is more skeptical:

…a politically democratic multi-ethnic and multi-racial pluralistic civilization is not a victory for mankind, but a permanent obstacle to greatness because a social egalitarianism in which all people intermingle produces a monolithic culture, a massive and uniform obstacle to man’s betterment. This common civilization, this democratization, is most representative of technology in the West and a cause of its sterility.

Rivers’ critique may sound aggressively insular and prejudiced to our liberal-trained ears, but what he critiques is not diversity but precisely the lack of it, the construction of a monolithic culture in which all difference is subsumed under the logic of technology (which is, as I see it, the foundation of technocracy).

Technology prevents critical thinking and political action

The recent trend to simply number new movements sequentially (e.g. Web 2.0, Life 2.0, Learning 2.0, etc.), following software naming patterns, is probably an indication that innovation has become incredibly constrained and predictable. “No irony is meant by saying that a technological age fosters change so long as things remain the same” (p. 46).

To Rivers, it follows that surrendering difference to the logic of technology can result in nothing but the loss of critical thinking: “Certainly the last thing that would result from mechanization is the development of a critical, acute and refined discrimination” (p. 20). While technology has increased the amount of measurements we can derive from reality, and given us new ways of absorbing that information, Rivers does not equate that with an increase in self-knowledge per se. If anything, the fragmentation of knowledge prevents us from seeing the big picture:

Because the rapid and seemingly endless proliferation of information has led to the fragmentation of learning, more and more areas of information have resulted in a greater ignorance of all of reality. Although we know more today than we did yesterday, we also know these things from a more limited point of view, as from the perspective of a microscope… (p. 94)

In opposition to techno-liberal discourse, Rivers argues that an increase of specialized knowledge does not signify a transition to a better future when all of that information will suddenly mean something, but is an indication of immobility and impermanence (information without end, and therefore, without meaning):

Indeed, a technological age is not in the least transitory even though it strives to be both current and fashionable. It is an age that produces nothing lasting, marked by ideas which have no chance of introducing truly meaningful changes into the world. (1993, p. 23)

This inability to introduce ‘truly meaningful change into the world’ is perhaps technology’s most dehumanizing effect. We live in an age, according to Rivers, when political action is increasingly seen as unnecessary. Not only does technological doing occupy our minds and distract us from the need to act politically, but in its perverse logic technology represents itself as a tool for political action. Hence, we have started to see the act of doing with technology as satisfactorily political (the premise behind e-democracy). Technologized politics becomes endless means without substantive political ends. This undermines any challenge to the status quo by free-thinking individuals:

Nor is it surprising that there is so little real political struggle in an age that surrenders itself overwhelmingly to technology because politics on the grand scale, when individuals organize and oppose the established order, are rendered meaningless, since technology proposes to do everything for us. Above all, it becomes the spearhead of the democratization of the world; that is, technology becomes the agent of the world’s mediocrity. (p. 70)

In this context, even direct challenges to the system become perfectly circumscribed by technology’s logic. “In our present condition, deliberate acts of defiance and their concomitant confrontation rarely happen, except if they conform to technology’s manner of doing things, that is, if they adhere to technology’s methodology or conform to its democratization” (p. 120). Web sit-ins, e-mail petitions, online voting, echo blog journalism, and open source disaster recovery are a few examples of the new form of activism that has replaced meaningful action while presenting the illusion of progress. “[T]echnology promotes the illusion that it is able to respond to changing situations, that it is able to take emergency measures in an endangered world, but in fact, technology is slow to act and slow to remedy problems, and slower still to remedy problems directly caused by it” (p. 55).

Conclusions: Philosophy before programming

To put it simply, we have forgotten how to say no. Because technology is compulsive, we feel driven to do whatever [it makes] possible. (p. 30)

Rivers’ critique is useful only if we acknowledge that he is not talking about technology per se in some reifying manner, but about how we use technology in a particular way. That is, his critique is not of technology but of technocracy (a social system dominated by technology, in which everything must give way to the advancement of technology; cf. Postman, 1992). It is technocracy that brings about the kind of homogenization and mediocrity that Rivers describes by subsuming all human agency under its needs. It is technocracy that needs to be challenged on all fronts because its impact is truly global: it knows no ideological or geographical boundaries (democracies, oligarchies and theocracies can be equally technocratic).

It is important to make this distinction between critiquing technology and critiquing technocracy because otherwise technological determinism (i.e., the idea that technology shapes us, not the other way around) becomes too much of a metanarrative, an immutable given. In order to critique technology, Rivers gives technological determinism too much credence, setting it up as a process that applies to all technologies at all times across all situations. This approach gives us the possibility of rejecting technology wholesale on moral grounds, but reduces our agency and limits our opportunities to act, and in the end this paralysis allows technology to take over. Yes, technology robs us of critical agency, but it does not eliminate the possibility that, once aware of this process, we can re-assert our will over technology. So while determinism allows for the opportunity to discursively oppose technocracy, it prevents a more active engagement that can actually contest or rival it (this insight was inspired by a recent post by Tim, who cites Badiou’s remark that “anti-capitalists are not simply opponents of capitalism, but more importantly rivals”). In short, to rival technocracy we might very well have to use technology, something which Rivers’ version of technological determinism would leave us little moral grounds to do. The master’s tools in the hands of a freed slave are no longer the master’s tools (provided the freed slave is acting as a subject, not an object, of history —to paraphrase Freire).

While Rivers’ analysis accurately describes the ways in which we surrender our agency to technology, some of his solutions appear simplistic because a deterministic approach leaves little room for nuanced analysis. Given that a world without the technologies we already have is impossible, Rivers suggests that we should pick and choose from these technologies according to the values they espouse: “We must not look at technology’s values, but through them, questioning every aspect of their manifestation. If they promote well-being, we should keep them. If they do not, we should discard them” (1993, p. 120). But this ignores the complex entanglement of technologies in our world. Almost always, to choose a technology that promotes well-being we must make use of other technologies that do not, oftentimes even without our awareness. This is what makes Actor-Network Theory, with its tracing of complex associations between human and technological actors, such a valuable but difficult exercise.

More practical than the ‘keep the right technologies’ argument is Rivers’ call for a paralogical space to think outside technology (a notion I have been exploring lately in my attempts to re-conceptualize the digital divide). I think Rivers and I agree on the need to secure a (psychological, if not physical) space to take a break from the impulse to act with technology and experience being without it:

It is only when at rest that we have the optimum opportunity to think. In fact, what mobility demonstrates is that an age always in motion makes little substantive progress. Despite high speed travel, we are an age going nowhere fast. (p. 46)

Ubiquitous computing, in other words, is the worst idea in the world. Reclaiming a space without technology does not mean rejecting technology, but exercising the only chance we have to estimate its true meaning and potential. Those outside the grip of technology are best qualified to discern its effects. We must strive not for universal access to technology, but for universal freedom from the all-pervasive influence of technology. The latter jihad is more difficult than the former. But it is also more important because it seeks to foster what technology, by its nature, ends up blocking: a deeper understanding of ourselves. In Rivers’ words:

Because many of our actions can be unconscious, it is imperative that the world in all its diverse forms, including technology, be filtered out by us when we need to understand ourselves. Not that we should say no to the world (how could we do otherwise?), but that we should say no to an automatic, unthinking response to technology’s eternal presence in the world. Otherwise, we may never allow ourselves the opportunity to do so because we will never be alone with ourselves. Since technology is possessed of systems and rationalities already devised and set in place, which in turn are augmented by instantaneous gratifications and self-deceptions, we are at a great risk. But technology posits a threat in other ways because it gives us a course of evasion. It gives us an excuse when we wish to live inauthentically. (p. 108)

Nonetheless, technology is our creation, and although it acquires agency of its own we gain little by demonizing it. Technology should be viewed for what it is: an expression of our openness to being that reflects our historical and cultural conditions:

… the essence of technology is linked with ontological freedom, which means that what we build and create is the result of what we choose. How we choose and act is defined within specific historical and cultural situations that vary over time and place. Technology reflects and augments these situations. If we change present conditions and the demands they make upon us, then we can change technology. (Rivers, 2005, pp. 3-4)

The way to proceed, then, is to discontinue the search for technologies that will supposedly liberate us (a search which technology conducts on its own behalf, with us merely as its enablers). Instead, we should begin in earnest the search for ourselves. We should become philosophers before programmers (or even users). We need to take stock of where we have surrendered our agency to technology, and figure out how to transform unconscious surrender into intentional delegation. We need to give technology an end; or to put it differently: we need to counter technology’s bias for means-without-end with our own formulation of ends, ends which are beyond the scope of technology but which may benefit from the application of technology when it’s approached as a delegation, not a surrender. This is very much a task that reflects the ongoing process of becoming, the openness of being, and as such it is always an unfinished exercise. To paraphrase Rivers (who is channeling philosophers across time): one is not what one is, but is what one is not yet (Rivers 1993, p. 106).

Offline References:

Elias, N., Goudsblom, J., & Mennell, S. (1998). The Norbert Elias reader: A biographical selection. Oxford, UK; Malden, Mass.: Blackwell Publishers.

Postman, N. (1992). Technopoly: The surrender of culture to technology (1st ed.). New York: Knopf.

Rivers, T. J. (1993). Contra technologiam: The crisis of value in a technological age. Lanham [Md.]: University Press of America.

Rivers, T. J. (2005). An introduction to the metaphysics of technology. Technology in Society, 27, 551–574.

Simpson, L. C. (1995). Technology, time, and the conversations of modernity. New York: Routledge.


In Defense of the Digital Divide as Paralogy (v 1.0)

by Ulises A. Mejias

Introduction: Why Won’t Lyotard Go Away?

As I have suggested before, we have not done enough in the field of Education and Technology to address Lyotard’s concerns about the commodification of knowledge through the digital technologies we use (commodification means the transformation of things with no monetary value into things with monetary value —or commodities— through their subordination to the logic of capitalism). To put it in alarmist terms that are certain to catch your attention: If we are to take Lyotard’s analysis seriously, the gadgets and gizmos we are currently enamored with —edublogs, eduwikis, eduRSS feeds, and such— are nothing more than the tools of hegemonic capitalism.

Even if that sounds a bit harsh, the fact is that Lyotard provides a fertile framework for us to engage in an internal critique of our tools and methods. Only by engaging in such a critique can we guarantee the sustainability of our practice. To that end, Gane (2003) has done us all a big favor by summarizing the central concepts of Lyotard’s theory in his article Computerized capitalism: The media theory of Jean-François Lyotard. Gane describes the central themes in Lyotard’s critique as they relate to the new media, namely:

that the computerization of society is accompanied by a new stage in the commodification of knowledge (The Postmodern Condition); that we are witnessing the speedup and extension of capitalist culture through the reduction of knowledge to information and information to bits (The Inhuman); and that new media technologies promote the streaming of culture (even oppositional culture) into homogeneous forms of capital that can be exchanged, received and consumed almost ahead of time (Postmodern Fables). (Gane, 2003, p.1)

In this post, I use elements from Lyotard’s theories to explore how the information and communication technologies that facilitate the social construction and aggregation of knowledge contribute to its commodification. I argue for a reframing of the concept of the digital divide as an important paralogical tool to resist this logic. This reframing is necessary because, currently, the digital divide is used in just the opposite way: to rationalize a model of progress and development where those aspects of our lives that are not technologized must become technologized, to the point where ubiquitous computing is normalized as the goal of innovation (and since technologizing and commodification are closely tied in capitalism, ubiquitous computing means ubiquitous commodification). In short, I attempt to reframe the digital divide as an instrument of resistance against the increasing commodification of knowledge, not as an ailment of the underprivileged.

What Is Paralogy?

Challenging the commodification of knowledge requires methods to un-think the logic of capitalism. One such method can perhaps be found in Lyotard’s notion of paralogy.

The etymology of this word resides in the Greek words para —beside, past, beyond— and logos in its sense as “reason.” Thus paralogy is the movement beyond or against reason. Lyotard sees reason not as a universal and immutable human faculty or principle but as a specific and variable human production; “paralogy” for him means the movement against an established way of reasoning. (Woodward, 2006)

A paralogy is a way to see things as more than commodities, to think outside the logic of capitalism. Paralogy plays an important role in challenging the role of innovation as it is traditionally understood (e.g., innovation as the creation of new things to consume and new methods for turning things into commodities). Lyotard sees innovation as “under the command of the system, or at least used by it to improve its efficiency” (1984, p. 61). Paralogy is diametrically opposed to innovation in the sense that it is a “creative and productive resistance to totalizing metanarratives” (Readings, 1991, pp. 73-74). Paralogy, according to Gane (2003),

concerns itself with everything that cannot be resolved within the (capitalist) system. In so doing, this form of resistance works by disrupting the instrumental logic of the modern order, producing, for example, the unknown out of the known, dissensus out of consensus, and with this generating a space for micro-narratives that had previously been silenced. (p.8)

But what ‘totalizing metanarratives’ supported by modern technologies (including technologies that facilitate the social construction and aggregation of knowledge) need to be resisted? Haven’t these technologies improved our lives by giving us new ways of re-assembling the social? Don’t they have the potential to engender more constructivist, active, distributed, connected or [insert your fav buzz word here] forms of learning? In short, what’s there to resist?

The Management of Social Knowledge

Lyotard’s critique of the new media is that they have established commodification and efficiency as the ultimate measures of the value of knowledge. According to Gane’s reading of Lyotard,

the emergence of new media has changed the form and status of knowledge, which is now judged less by its intrinsic value than by its performance, or rather by how economically valuable, efficient and programmable it is. Lyotard’s thesis then is that culture has been transformed by digital technology, which… follows the principle of ‘optimal performance’: ‘maximizing output (the modifications obtained) and minimizing input (the energy expended in the process)’ (Lyotard 1984: 44). (Gane, 2003, p.5)

The Knowledge Management movement, precursor and inspiration for the Social Software movement, sought to capitalize on knowledge that was held collectively by communities of practice. According to Chan & Garrick,

[t]his functional emphasis is traceable in its lineage to the popular belief, characterized by Nonaka and Takeuchi (1995), that tacit knowledge can be converted into explicit knowledge through IT systems. By capturing knowledge, it can be more widely replicated and shared. By inserting human agency into the equation, these authors see possibilities to sort, convert, retrieve, and share knowledge actively. Henceforth, knowledge is transformed into a more tangible commodity. (Chan & Garrick, 2003, p.1)

Is this not the underlying principle and unmentioned mission of social software: to convert tacit individual knowledge into explicit —and commodified— social knowledge? As I have pointed out before in the context of social bookmarking and tagging: “the aggregation of inherently private goods (tags and what they describe) has public value” (Mejias 2005). But what happens when the public value is ultimately controlled by private interests? As anyone reading the blogs when Yahoo! acquired Flickr or del.icio.us could see, the commodification of social knowledge has very important consequences for the interests of users vs. corporations.

Hence the emphasis in the Open Source / Open Content movement to ensure that aggregated knowledge remains in the hands and at the service of the “public” (however one wishes to define this loaded term). This subversive application of technology is possible because the affordances of technology can be exploited either in the interests of commodification or against them, as Lyotard himself recognized even before the Open Source / Open Content movement gained mainstream recognition:

…Lyotard states, in the final passage of The Postmodern Condition, that new media technologies can be more than simply tools of market capitalism, for they can be used to supply groups with the information needed to question and undermine dominant metaprescriptives (or what might be called ‘grand narratives’). The preferred choice of development, for him at least, is thus clear: ‘The line to follow for computerization to take . . . is, in principle, quite simple: give the public free access to the memory and data banks’ (Lyotard 1984: 67). (Gane, 2003, p.9)

But the argument is larger than merely who owns the technology, and hence, who owns the accumulated knowledge. Open Source / Open Content projects offer an alternative in terms of ownership (and that alternative is indeed important), but not in terms of what digitization does to knowledge. Even then, according to Lyotard, having free access to the technology would at least allow us to gather the information needed to ‘question and undermine dominant metaprescriptives,’ including the one that says that knowledge should be judged by how “economically valuable, efficient and programmable it is” (Gane, 2003, p.5). Are we in fact engaged in such questioning, or have we become distracted by the speed of innovation?

Innovation as an End

How is recognition achieved in the field of educational technology? While research and analysis are important, nothing generates ‘buzz’ like releasing the next big thing in educational technology: a new software program, a new platform, a new service, a new community or a new collection of content. In a system where attention has become a commodity, and the basis for a new economy, even code or content released under an Open paradigm needs to behave as a commodity in so far as it is forced to compete for the attention of users. Thus, the focus is not on questioning the logic of the system, but on creating more code and content. Even ‘successful’ blogging is characterized by a simple formula: s/he with the most content generated/aggregated in the least time wins. I sympathize with Suchman when she expresses

… hope for genuinely new reconfigurings of the technological, based not in inventor heroes or extraordinary new devices, but in mundane, and innovative, practices of collective sociomaterial infrastructure building. (Suchman, 2005, p. 11)

Instead of the slow and painful work of infrastructure building, we pursue innovation in the form of vertiginous technological and content development. As Lyotard said: “To go fast is to forget fast, to retain only the information that is useful afterwards, as in ‘rapid reading'” (in Gane, 2003, p.10). Suchman (2005), quoting Barry (1999), suggests that:

there might actually be an inverse relation between the speed of change, and the expansion of inventiveness – that “moving things rapidly may increase a general state of inertia; fixing things in place before alternatives have the chance of developing.” (Suchman, 2005, p. 11)

Gane, in his summarizing of Lyotard, remarks:

Technological development then speeds up life and culture, while at the same time subjecting them to principles of efficiency, performance and control. The digital transformation of culture, however, also has a further consequence, namely that in our day-to-day processing of short ‘bytes’ of information we ourselves become more like machines. In other words, through our use of new media technologies, we, as humans, become increasingly ‘inhuman’. (Gane, 2003, p.12)

What does this increasing inhumanity look like, and where does it lead?

Ubiquitous Computing, Ubiquitous Commodification

As educational technologists, we are often invested in augmenting the application of technology, which we justify by calling attention to the enhanced learning opportunities engendered in the process. However noble the intentions, this does not detract from the fact that more technology means an accentuation, not a reduction, of the symptoms that Lyotard describes. Augmenting the application of technology points, logically, to ubiquitous computing, which —we tell ourselves— is all about empowering humans and privileging knowledge by getting the machine out of the way. Galloway (2004) summarizes the ethos of the movement thus:

ubiquitous computing was meant to go beyond the machine —render it invisible— and privilege the social and material worlds. In this sense, ubiquitous computing was positioned to bring computers to ‘our world’ (domesticating them), rather than us having to adapt to the ‘computer world’ (domesticating us). (Galloway, 2004)

But this rationalization of ubiquitous computing is flawed because it equates invisibility with the absence of influence. Ubiquitous computing is precisely about ‘domesticating’ our behavior to conform to the machine’s rules. Conditioning ourselves to ignore the machines means that they disappear only from our perspective, not from the perspective of someone without the technology, and certainly not from the machine’s perspective. Their agency and their impact on our behavior do not vanish. On the contrary, it is when we reach this state of conditioned forgetfulness that the commodification of knowledge becomes absolute, and that the status of certain metanarratives becomes incontestable.

We should not seek to domesticate or naturalize technology. Instead, we should strive to retain its artificiality in our lives. This is not the same as prescribing a Neo-Luddism. I firmly believe, like Lyotard, that the flexible affordances of technology can open up ways for critique and introspection. But this can only be achieved if we avoid taking technology for granted to the point of making it invisible. The key is to reaffirm the differences between those aspects of our lives that we have (intentionally or unintentionally) opened up to technologizing and those that we have not, not with the intention of establishing a wall between them but, on the contrary, with the intention of mapping the tensions, influences, and overlaps between the two. This is where I believe the digital divide as paralogy can re-enter the picture.

Digital Divide Redux

Most of the discourse surrounding the digital divide (cf. Sassi 2005) centers on the ‘problem’ of those who have no access to technology, and what the role of those who do have access should be in addressing this problem. The digital divide has become a metanarrative in its own right, establishing that the inevitable goal is more technology, applied to more aspects of our lives, and available to more people. Only then will the playing field be leveled, and true progress will be achieved, we are told.

I do not mean to suggest that some of the problems of our age could not be alleviated with more technology or, more accurately perhaps, with a more even distribution of the technology we already have. But here I am interested in the discourse invoked by the word ‘divide.’ As I have summarized elsewhere (Mejias, 1999), the discourse of Modernity relies on a distinction between modern societies and pre-modern societies to establish a primacy of the former over the latter, a primacy defined to a large extent in terms of technological progress that pre-modern societies must strive to achieve. Massey (1999) has argued that this dynamic enacts in space what is assumed to be a lag in time:

When we use terms such as ‘‘advanced’’, ‘‘backward’’, ‘‘developing’’, ‘‘modern’’ in reference to different regions of the planet what is happening is that spatial differences are being imagined as temporal… The implication is that places are not genuinely different; rather they are just ahead or behind in the same story: their ‘‘difference’’ consists only in their place in the historical queue. (Massey 1999, quoted in Rodgers 2004, p.14)

However, it is not simply a matter of waiting for those ‘laggards’ to catch up. As anyone who has seriously studied the development of the so-called Third World can surmise, capitalism requires the existence of lack for the many in order to generate plenty for the few. The digital divide, in other words, is there by design. May (2004), reviewing Huws’s The Making of a Cybertariat: Virtual Work in a Real World (2003), remarks:

Huws’s corrective reminds us that the ‘freedom’ of the nomad is bought at the cost of the call-centre operatives’ lack of control over their working life (always being available for us means not being available for themselves)… [R]emember that you can only detach yourself from the real because of the continuing drudgery of the cybertariat [the cyber proletariat]. (May, 2004, p. 3, my note)

To reduce very complex political and economic processes to their most simplistic form: the wealth and materials necessary to maintain the lifestyle of the ubiquitous computing nomad are abstracted from the labor of those who are —to paraphrase Freire (1970)— objects, not subjects in the system. [The image of the nomad used here implies ‘wireless’ mobility, and is not related to the Deleuzian imagery that implies the dynamism of being, which I have referred to before.]

Thus, the current discourse on ubiquitous computing operates at two levels. At the personal level, ubiquitous computing implies a decrease of the digital divide by diminishing the demarcation between the technologized and non-technologized aspects of our lives. At the social level, ubiquitous computing implies an increase of the digital divide in the form of a greater demarcation between the digital haves and the digital have-nots.

It is in this sense that I argue for the need to reclaim the digital divide as a paralogy to resist the ‘rationality’ of capitalism and ubiquitous computing: At the personal level, the digital divide can help us question the ontological assumptions we make with each new introduction of technology into our lives. At the social level, the digital divide can help us disrupt the narrative of underdeveloped digital have-nots who need to ‘catch up.’

Bridging Divides Through Reconfigured Nearness

By this I do not mean to suggest that we embrace the paralogy of the digital divide as a way to protest the supposed separation that technology effects between us and reality. For example, I begin to suspect a move in the wrong direction when Gane writes:

The point here is not simply that machines are taking over operations that used to be performed by human minds, nor is it that information is evaluated according to instrumental principles of ‘use’. It is rather that the digitalization of data tears both cultural artefacts and sensory experience from their moorings in physical time and space. The result is what Lyotard terms a ‘hegemonic teleculture’, in which writing, the memorization or inscription of culture, and even events themselves take place at a distance. And this situation demands, in turn, that the very idea of experience in the ‘here and now’ be rethought. (Gane 2003, p.13)

Indeed it needs to be rethought, but perhaps Lyotard did not go far enough in reconceptualizing nearness when he wrote:

What does ‘here’ mean on the phone, on television, at the receiver of an electronic telescope? And the ‘now’? Does not the ‘tele’-element necessarily destroy presence, the ‘here-and-now’ of the forms and their ‘carnal’ reception? What is a place, a moment, not anchored in the immediate ‘passion’ of what happens? Is a computer in any way here and now? Can anything happen with it? Can anything happen to it?’ (Lyotard 1991, p.118; in Gane 2003, p.13)

According to Gane, Lyotard “argues that these combined processes ‘abolish local and singular experience’, hammer ‘the mind with gross stereotypes’, and leave ‘no place for reflection and education’” (Lyotard 1991, p. 64; in Gane, 2003, p.14, my emphasis). Here I must take exception to Lyotard. The splitting of reality into two —one local reality, one online— is unsustainable, as it leaves us with a ‘virtual’ reality that we have no way of transforming with the tools of our ‘real’ reality. It also ignores the multiple and complex relations between the online realm and the local (starting with the human body itself!), connections which must be critically explored in order to reaffirm technology’s potential to facilitate “reflection and education.” Not only is the computer indeed here and now, but it serves as a plane in which other here and now’s are actualized. This is the reconceptualization of nearness that we must undertake, in light of the new —and not always unproblematic— dynamics of telepresence and telepistemology. [I am attempting to do this, as well as deal with the misconception of virtuality (using the work of Deleuze and others) in upcoming work.]

Technology And Choice, And The Choice Not To Choose Technology

What will a reconceptualized theory of nearness, a theory of nearness that takes virtuality and telepresence into account, give us? For one thing, it will open up ways to use the paralogy of the digital divide in redefining our relationships with technology. We know that humans do not exclusively determine technology (social determinism), and that technology does not exclusively determine humans (technological determinism), but that both mutually determine each other. But, as Suchman (2005) argues, “mutuality does not necessarily imply symmetry… persons and artifacts do not constitute each other in the same way” (Suchman, p. 5).

The acknowledgment of this asymmetry, engendered by the paralogy of the digital divide, is important because it can help us realize the ways in which technology cannot improve us. According to Rivers (2005), technology can’t improve us not because technology is evil or biased or anything like that, but because from an ontological perspective, it is we who improve technology:

The assumption that technology is innate in humans implies that it may lead to improvements in ourselves, but this assumption is untenable. The reverse is true: it is we who help to improve technology. It would be naive to suppose that technology is capable of improving us ontologically. It cannot be used to improve and perfect our own being. (Rivers, 2005, p. 3)

Rivers means that, although Being is fluid and dynamic, ontologically Being is what it is, and cannot be improved into Being Plus, or Super-Sized Being. Technology and science can make our lives easier, or laden with commodities (usually at the cost of making other people’s lives less easy and less filled with commodities), but they cannot really change the nature of our being.

But if we improve technology, and not the other way around, how do we realize technology’s potential for reflection and education? Not through technology itself, but through the choices we make about the use of technology in our lives, choices that can become clearer if we keep the paralogy of the digital divide in mind. Rivers explains how, on the positive side, technology shapes the world by offering us more possibilities to act upon it, while on the negative side, technology often presents us with the illusion that the only way to change the world is by choosing technology:

Technology influences the manner by which we express ourselves in the world, and it does this by creating possibilities. These possibilities exist in a relationship to technology which makes them possible; that is, technology gives rise to possibilities that did not exist before. Yet these possibilities have a direct bearing on how we act. The world is continually shaped by technology because it is one of the several conditions that allow us to choose. But technology also gives the false impression that it is the only possibility, especially the only solution to problems, which either ignores or distorts the diversity of options. Although technology increases some possibilities, its success suppresses others; that is, increased technology reduces alternative possibilities. (Rivers, 2005, p. 9)

In other words, more technology (à la ubiquitous computing) reduces alternative choices. The digital divide as paralogy increases our awareness of the diversity of options, in the sense that it introduces the possibility not to use technology in some aspects of our lives, or to choose to use it purposefully by thinking about how the technologized parts of our lives should relate to the non-technologized parts. As Suchman argues, citing Barad (1998), it is in establishing these boundaries that we create meaning and assume accountability:

As Barad points out, boundaries are necessary for the creation of meaning… Because the cuts implied in boundary making are always agentially positioned rather than naturally occurring, and because boundaries have real consequences, “accountability is mandatory” (187)… As Barad puts it: We are responsible for the world in which we live not because it is an arbitrary construction of our choosing, but because it is sedimented out of particular practices that we have a role in shaping (1998: 102). (Suchman, 2005, p.10-11)

The Digital Divide, Technology, and Openness To Being

Establishing the digital divide in our lives is not about drawing a clean boundary that delineates mutually exclusive realms of action (the digital and the non-digital). On the contrary, the digital divide is a fuzzy, permeable, ever-shifting boundary that calls attention to the fact that we are constantly negotiating how meaning is created across these two realms. Thus, drawing the digital divide at a personal and inter-personal level implies making a series of conscious decisions about the extent to which, for example: our participation in online communities is balanced with our participation in onsite communities; the knowledge gained online is applied onsite, or vice-versa; social agency is delegated to code in order to create new social formations; etc.

To Gochenour (2006), for instance, it no longer makes sense to talk about online communities versus onsite communities. Instead, he talks about distributed communities that encompass both online and onsite elements. These distributed communities are assemblages of digital and non-digital actors where the boundaries of the digital divide must constantly be redefined, and this is in itself a form of reflection and action, a praxis:

In the realm of distributed communities, we engage in daily acts of bringing forth new worlds with others, we recognize others as human subjects with whom we wish to co-exist, and who have equal rights to realizing their individual becoming… We do this not in an immaterial realm devoid of relation to the real [‘cyberspace’, or ‘virtual reality’], but in a concrete world of community, where the linguistic worlds that we bring forth with others have the potential to bloom into worlds of action. (Gochenour, 2006, p.16, my note)

Or as Rodgers (2004) puts it:

There is no such place as ‘cyberspace’. Rather, there are millions of on- and offline spaces, frequently intersecting and each having an impact on both the user and non-user in how space is constructed and how it evolves. (Rodgers, 2004, p. 15-16)

To become aware of the intersections we have come to occupy, and to become involved in critically asking how our choices about technology have given shape to those particular intersections, is to use the digital divide as a paralogy, and to create opportunities for authentic reflection and education.

However, the increasing presence of technology in the world, its move towards ubiquity, can make it more difficult to engage in this critical exercise. Technology is an expression of our openness to being (Rivers 2005), our freedom to choose even against freedom. Because technology simply mirrors and amplifies the context and conditions that have given it shape, it can result in less freedom, not more. Thus, in order to change technology we need first to change ourselves: “If we change present conditions and the demands they make upon us, then we can change technology” (Rivers, 2005, pp. 3-4). This process is, however, sometimes made more difficult by technology. To quote Rivers again: “although technology increases some possibilities, its success suppresses others” (Rivers, 2005, p. 9).

For a long time, educational technologists have put their faith in technology as a way to change education, and even the world. But meaningful change cannot come from technology as long as the technology contributes to the commodification of knowledge. Thus, if we hope to change technology we must first change “the present conditions and the demands they make upon us” (Rivers, ibid). This might not require something as dramatic as a revolution or the overthrow of capitalism, but rather a ‘simple’ reaffirmation of the digital divide: a critical awareness of what aspects of our lives have been commodified by technology, which ones we want to reclaim, and the tension that results from this division. Recognizing this divide is key to challenging the logic of ubiquitous commodification.


Barad, K. (1998). Getting real: Technoscientific practices and the materialization of reality. differences: A Journal of Feminist Cultural Studies, 10, 88-128.

Chan, A. & Garrick, J. (2003). The moral ‘technologies’ of knowledge management. Information, Communication & Society, 6(3), 291–306.

Freire, P. (1970). Pedagogy of the oppressed. New York: Herder and Herder.

Galloway, A. (2004). Intimations of everyday life: Ubiquitous computing and the city. Cultural Studies, 18(2-3), 384-408.

Gane, N. (2003). Computerized capitalism: The media theory of Jean-François Lyotard. Information, Communication & Society, 6(3), 430–450.

Gochenour, P. H. (2006). Distributed communities and nodal subjects. New Media & Society, 8(1), 33–51.

Huws, U. (2003). The Making of a Cybertariat: Virtual Work in a Real World. New York: Monthly Review Press.

Lyotard, J.-F. (1984). The postmodern condition: A report on knowledge. Minneapolis: University of Minnesota Press.

Lyotard, J.-F. (1991). The inhuman: Reflections on time. Cambridge: Polity.

Lyotard, J.-F. (1997). Postmodern fables. Minneapolis: University of Minnesota Press.

May, C. (2004). The cybertariat and the nomad [Review of the book The Making of a Cybertariat: Virtual Work in a Real World]. Information, Communication & Society, 7 (3), 423–439.

Massey, D. (1999). Power-geometries and the politics of space-time. Heidelberg: University of Heidelberg Press.

Mejias, U. A. (1999). Sustainable Communicational Realities in the Age of Virtuality. Critical Studies in Media Communication, 18(2), 211-228. [A version of this article can be found at http://ideant.typepad.com/ideant/2003/08/sustainable_tec.html]

Mejias, U. A. (2005). Tag literacy. Accessed on Feb. 24, 2006 from http://ideant.typepad.com/ideant/2005/04/tag_literacy.html

Readings, B. (1991). Introducing Lyotard: Art and politics. London: Routledge.

Rivers, T. J. (2005). An introduction to the metaphysics of technology. Technology in Society, 27, 551–574.

Rodgers, J. (2004). Doreen Massey: Space, relations, communications. Information, Communication & Society, 7(2), 273-291.

Sassi, S. (2005). Cultural differentiation or social segregation? Four approaches to the digital divide. New Media & Society, 7(5), 684–700.

Suchman, L. (2005). Agencies in Technology Design: Feminist Reconfigurations. Published by the Department of Sociology, Lancaster University. Accessed on Feb. 24 2006 at http://www.lancs.ac.uk/fss/sociology/papers/suchman-agenciestechnodesign.pdf

Woodward, A. (2006). Jean-François Lyotard. In The Internet Encyclopedia of Philosophy. Accessed Feb. 21, 2006 from http://www.iep.utm.edu/l/Lyotard.htm