“Socialist” Software

A case can be made that Social Software contributes to the commodification of knowledge and social interactions, or that it is simply a way for companies to make money off your labor/data. But as we know, there’s more to it than that. Social Software can also embody a set of social practices that are downright, well, socialist!

I was thinking of that as I was reading Andrew Feenberg’s essay Democratic Rationalization: Technology, Power, and Freedom (originally published in 1992, before social software and the internet were really mainstream). Feenberg speaks of technology in the context of democracy. A truly democratic society is one where people have a say in determining what technology will produce through their labor, and Feenberg uses Marx’s concept of socialism to refer to a society where political agency is derived from work:

[Marx] claimed that we will remain disenfranchised and alienated so long as we have no say in industrial decision-making. Democracy must be extended from the political domain into the world of work. This is the underlying demand behind the idea of socialism. (p. 652)

How we work is a very political issue, and democracy (in this Marxist view) is the result of a system where workers have control over production processes and the fruits of those processes.

Of course, technology is a part of all aspects of our lives, not just work. Accordingly, Feenberg sees democracy as being enacted in everyday social life through the technologies we use. In other words, democracy is closely tied to how technology is actualized or put into practice. One of the problems of our age is that we tend to see our use of technology as inherently de-politicized. To save democracy, according to Feenberg, we need to stop thinking of it as something that politicians enact in government buildings, and start thinking of it in terms of our everyday technological practices:

The common sense view of technology limits democracy to the state. By contrast, I believe that unless democracy can be extended beyond its traditional bounds into the technically mediated domains of social life, its use value will continue to decline, participation will wither, and the institutions we identify with a free society will gradually disappear. (p. 653, my emphasis)

This point might sound familiar to those who have read Lessig’s (2004) views on free culture, in particular the way he associates the technological practice of ‘re-mixing’ content with a healthy democratic culture, and the way this practice is currently endangered by those who put unreasonable costs on our ability to remix. The irony is that many times those costs can be enforced by the same technologies that allow re-mixing! That is why Feenberg rejects views of technology as deterministic or neutral, and instead sees technology as “a scene of social struggle, a ‘parliament of things,’ on which civilizational alternatives contend” (p. 656). To him, technology is not a static given but something that needs to be interpreted:

As a social object, technology ought to be subject to interpretation like any other cultural artifact, but is generally excluded from humanistic study. We are assured that its essence lies in a technically explainable function rather than a hermeneutically interpretable meaning. (p. 656)

Which is why Actor-Network Theory, I guess, sees technology as an actor in a complex network of associations, an actor whose role is open to interpretation depending on where you are standing. When I speak of the open affordances of technology, I refer to this issue: the fact that the same technologies can be used for different purposes according to different political agendas, and evolve accordingly. Feenberg argues that:

…differences in the way social groups interpret and use technical objects are not merely extrinsic but make a difference in the nature of the objects themselves. What the object is for the groups that ultimately decide its fate determines what it becomes as it is redesigned and improved over time. If this is true, then we can only understand technological development by studying the sociopolitical situation of the various groups involved in it. (p. 657)

So when people complain that social media undermines final communities and real commitment (Dreyfus, Borgmann), that it commodifies knowledge (Lyotard), or that it sets up a virtual domain that undermines reality (Baudrillard et al.), they are right to the extent that they are describing how technology is being used by a hegemonic authoritarian system. But that doesn’t mean that this ‘machine v. (human) nature’ model is the ONLY way technology can be used:

This is the point of Herbert Marcuse’s important critique of Weber. Marcuse shows that the concept of rationalization confounds the control of labor by management with control of nature by technology. The search for control of nature is generic, but management only arises against a specific social background, the capitalist wage system. Workers have no immediate interest in output in this system, unlike earlier forms of farm and craft labor, since their wage is not essentially linked to the income of the firm. Control of human beings becomes all-important in this context. (p. 657)

Which brings us back to technology, socialism, and democracy. Technological rationalization that puts emphasis on efficiency at the cost of the workers’ freedom is a function of capitalist reasoning, not just any kind of logic. Alternatives exist. Of course, some of those alternatives are now failed experiments (the wise words of Homer Simpson come to mind: “In theory, Communism works. In theory.”). But as Feenberg acknowledges, at least in socialism the democratization of technology was formulated as a goal. Unfortunately, because this point was made by Marx (and anything related to Marx must be evil and why don’t I go back to Russia), the power of this critique has been lost:

Machine design mirrors back the social factors operative in the prevailing rationality. The fact that the argument for the social relativity of modern technology originated in a Marxist context has obscured its most radical implications. We are not dealing here with a mere critique of the property system, but have extended the force of that critique down into the technical “base.” This approach goes well beyond the old economic distinction between capitalism and socialism, market and plan. Instead, one arrives at a different distinction between societies in which power rests on the technical mediation of social activities and those that democratize technical control and, correspondingly, technological design. (p. 658)

What Feenberg describes here (democratizing technological control and design) is starting to sound a lot like (certain applications of) Social Software. But the majority of applications do not aspire to this goal because, as Feenberg argues, hegemonies legitimize certain applications of technology and not others:

The narrow focus of modern technology meets the needs of a particular hegemony; it is not a metaphysical condition. Under that hegemony technological design is unusually decontextualized and destructive. It is that hegemony that is called to account, not technology per se, when we point out that today technical means form an increasingly threatening life environment. It is that hegemony, as it has embodied itself in technology, that must be challenged in the struggle for technological reform. (p. 663)

But how do we challenge the hegemony that has been coded into the technology? How do we set about reforming technology? Is violent revolution necessary or do we need, as Latour would say, to change the way we change?

The legitimating effectiveness of technology depends on unconsciousness of the cultural-political horizon under which it was designed. A recontextualizing critique of technology can uncover that horizon, demystify the illusion of technical necessity, and expose the relativity of the prevailing technical choices. (p. 658)

In other words, we need to un-think the encoded hegemony by becoming conscious of the agendas that motivate a particular application of technology, by questioning the choices embedded in the machine. This is similar to the notion of the digital divide as paralogy I’ve been thinking about recently.

But we must be careful to avoid falling into a chicken-and-egg trap here: Which comes first, the sociopolitical systems that engender truly democratic technologies, or the technologies that facilitate more democratic societies? Neither. Remember, we are talking about “a scene of social struggle, a ‘parliament of things,’ on which civilizational alternatives contend” (Feenberg, p. 656), not a zero-sum game of good v. evil that will be decisively won at some point in the future.

Technology can facilitate more than one type of technological civilization, and each generation must struggle to define which type of civilization it wants, or have someone else’s desires imposed on it. There is no point in waiting for the democratic technologies of the future, because they have always been within our reach. This is certainly true when we look at what is going on in the Open Source, Open Content and Open Learning movements (greatly facilitated by Social Software). And it is also true when we look at other grassroots expressions of democracy that do not require the kind of affordances embodied by Social Software (let’s not assume that only a society with access to these technologies can give expression to democracy!).

The Open Source, Open Content and Open Learning movements might seem like an insignificant contribution in light of the magnitude of the world’s problems, in particular when we take into account the small percentage of people involved in these movements. But as I have noted before, these movements can transform the benefits of Social Software into other kinds of benefits for larger sections of the world. And as far as manifestations of democracy go, I believe they are a worthy challenge to a status quo that revolves around private ownership and profit.

If —by whatever combination of strategies and happy historical accidents— Social Software manages to change the way we produce things (artifacts, knowledge), will these changes in the means of production result in more egalitarian societies? In other words, will Social Software prove Marx was right about the link between democracy and technology?

Feenberg, A. (2003). Democratic rationalization: Technology, power and freedom. In R. C. Scharff & V. Dusek (Eds.), Philosophy of technology: The technological condition: An anthology (pp. 652-665). Malden, MA: Blackwell Publishers. Accessed on May 5, 2006 from http://dogma.free.fr/txt/AF_democratic-rationalization.htm

Flickr Photo Credit:
Kim Pierro


Join the Conversation


  1. “But how do we challenge the hegemony that has been coded into the technology?”

    The important step is to recognize that it has in fact been coded into the technology, which means that the challenge to the hegemony can also be coded into the technology.

    My own view is that a denser and more distributed network of connections acts directly counter to the hegemony, because it lessens the influence and importance of the central nodes. This is the view I try to advance in Community Blogging.

    In particular, the sorts of network applications that will promote just such a network can be, again in my view, identified via four salient properties:
    – autonomy – they empower individual users
    – interaction – they foster peer-to-peer connections between users
    – openness – anybody can read anything, anybody can write anything
    – diversity – a multitude of technical, social and political systems is supported
    These are just my rough characterizations; I would not say that this is the definitive list, but this is an approximation of what the list would look like. I argue for this list in Connective Knowledge.

    The answer to the question posed in this essay, therefore, boils down, in my view, to this: we build and select and use software that instantiates the four principles, and where possible, we foster and encourage the use of such software in our institutions.

    It will be very difficult for the hegemony to resist the use of such software, for the principles it embodies are fundamentally democratic principles, which means that efforts to oppose such software will appear more and more authoritarian (as does, for example, the recent campaign against social software).

  2. Always good to have you drop by, Stephen (and I hope your retreat was a meaningful one).

    I think the problem is not that a hegemony “resists” networks configured according to the properties you describe, but that it finds a way to co-opt those same properties and apply them in anti-democratic ways. After all, the beauty (if we can call it that) of a hegemony is that it does not need to rely on force or coercion at all — by ‘naturalizing’ its dominance it can count on the consensus of the groups it dominates. To paraphrase a Love & Rockets song: You can not change hegemony / because even if you do / that’s part of hegemony too. Thus, the properties you describe become corrupted: autonomy becomes rampant individualism, interaction and openness become commodification, and diversity becomes tokenism. It’s not the authoritarian responses we have to worry about, but the ones that we ourselves endorse because they look like perfectly reasonable proposals.

    Effecting change under these circumstances takes more than just fostering and encouraging the use of a particular kind of software in our institutions. It takes more sustained ways of paralogical thinking, and that’s something I’m not sure we can encode into the technology (although I do believe we can use new behaviors enabled by technology as opportunities for deterritorializing the self).

  3. To be sure, co-option is a problem. Social activism becomes a marketing tool. The image of Che becomes a brand logo.

    But still…

    “Thus, the properties you describe become corrupted: autonomy becomes rampant individualism, interaction and openness become commodification, and diversity becomes tokenism.”

    This describes how the hegemony responds, but it does not describe how technology or individuals need to change.

    For example, the hegemony (which we would have to define at some point, but we can leave it implicit for now) would like us to recast ‘autonomy’ as ‘rampant individualism’ but in technology that allows us to make our own decisions there is nothing that would suggest that we need to disregard and cease contact with the rest of humanity.

    Indeed, this observation gives us a tool that helps us analyze arguments fostering a hegemonic point of view. For example, it is suggested that personally designed news aggregators will cause people to listen only to voices that echo their own opinions, that serendipity would be lost – and that therefore they should be forced or required to hear some (presumably, their) other point of view.

    But just as there is no reason to believe autonomy becomes individualism, there is no reason to believe this argument. People, when given the choice, are just as likely to choose community. This fact gives us our response to the serendipity argument, and the empirical basis to support that response.

    Each of the other three migrations has a characteristic signature that helps us recognize and react to the arguments of the hegemony. Commodification, for example, depends entirely on the restriction of channels (what I once called ‘channeling’); otherwise, there is no ‘mainstream’ to commoditize.

    What is important to note, as well, is that these migrations are not inherently present in the technology. That is why the technology becomes a tool that can create, and not merely permit, the type of democracy we are looking for (‘create’, not in the sense of historical determinism, but in the sense of, ‘all other things being equal, would lead toward’).

    Consider software that supports autonomy, for example. Part of autonomy is having the choice of who to talk to and listen to. So long as that software exists, the choice remains, and our disposition to engage with others is supported. It would require different software to foster rampant individualism, software that stressed individual over collective action.

    “Effecting change under these circumstances takes more than just fostering and encouraging the use of a particular kind of software in our institutions. It takes more sustained ways of paralogical thinking, and that’s something I’m not sure we can encode into the technology…”

    In my view, if we require that the population as a whole change its way of thinking before we can have change, then we will not have change. The change in the way of thinking comes as a result of a change in the environment, and not as a cause of it. The only way, for example, the population as a whole will want to talk to strangers halfway around the world is if there exists an environment where they can do so. The concept, and defense of, ‘freedom of the press’, comes after the invention of the press, and not before.

    What is really important is that the changes in social attitude fostered by technological innovation are not limited to that innovation. ‘Freedom of the press’ was accompanied by ‘freedom of speech’, ‘freedom of conscience’, and more. And in a similar way, the four freedoms (autonomy, openness, diversity and interaction) described in my paper will very likely spawn additional, non-technology-based, freedoms. Such as, say, the rise of digital cultures and digital nations. To speculate.
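A brief aside on the network claim in the first comment above, that a denser and more distributed set of connections lessens the influence and importance of the central nodes: the point can be made concrete with a toy calculation. The sketch below is an illustration only, not part of the original thread; the network sizes and topologies are arbitrary assumptions chosen for clarity.

```python
# A toy comparison of connection concentration in two 8-node networks:
# a centralized "star" (every node linked only to a hub) versus a denser,
# distributed "ring plus chords" network with no hub at all.

def degree_share(edges, n):
    """Fraction of all link endpoints held by the best-connected node."""
    degree = [0] * n
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return max(degree) / sum(degree)

n = 8

# Star: nodes 1..7 each connect only to the central node 0.
star = [(0, i) for i in range(1, n)]

# Distributed network: a ring, plus a chord from each node to the node
# halfway around the ring, so every node ends up with exactly three links.
ring = [(i, (i + 1) % n) for i in range(n)]
chords = [(i, (i + n // 2) % n) for i in range(n // 2)]
mesh = ring + chords

print(degree_share(star, n))  # 0.5   -- the hub holds half of all links
print(degree_share(mesh, n))  # 0.125 -- influence is spread evenly (1/8)
```

On this reading, part of what “democratizing” a network means is a measurable design property: the flatter the distribution of connections, the less any single node can act as a gatekeeper.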
