Archive for the ‘Summer Doctoral Programme’ Category

Liveblog – Stephen Coleman’s lecture

July 24, 2006

Stephen Coleman

The aim is to cover and map out contemporary research going on in the area of e-democracy.  An opening question is: how do we frame a definition?


One approach might be to think of historical context: the questions raised by e-democracy are similar to the questions raised by older technological developments during their early phases, e.g. TV in the 1950s and 60s.  There are definite parallels. 


The historical-context question is whether we are seeing a recurring pattern of technological emergence or whether the Internet represents a discontinuity. 


What is government?  Theoretically, the study of government has slipped away from a focus on hierarchical institutions (Rose, Barry and Foucault).  We are moving beyond traditional institutional politics and seeing a process of political slippage.  From a policy point of view, this process is extremely important.  We now live in a push-pull landscape, not a one-way system in which government tells people what to do. 


Questions of e-democracy have been subject to four interpretative strategies:

  1. The utopian view (which can also be thought of as a hyperbolic view): Deterministic, with an assumption that technology is somehow magic.  Regards the Internet as an unassailable democratic force.
  2. The mobilisation theory.  Something happens when people have access to new media that makes it more convenient for them to partake in politics.  There are fewer barriers and they no longer fear the consequences.  Groups can more easily participate in collective action.  Essentially, “things become easier”. 
  3. The normalisation thesis (Resnick and Margolis, Davis, etc.).  What people do online is a replication of what they do offline.  The barriers that previously existed remain in place. 
  4. The dystopian interpretation.  Politics gets harder and more atomised (the balkanisation hypothesis).


Instead, let’s look at this as an historical process, as a medium- to long-term change.  What will things be like in ten or fifty years’ time?  Lots of questions that were asked about TV can be re-asked about the Internet (Jay Blumler, a leading writer on the rise of television, is highly influential here).  There is value in asking the same questions, even if we don’t get the same answers. 


Theory, practice and policy.  The aim of a forthcoming book is to link these three things together.  Lots of these ideas are linked to democratic theory – freedom, access to resources, social justice etc.  In the process of design, cultural and technical matters become indistinguishable from each other. 


Micro-questions: What happens to the individual user?

  1. Resources required to take part in a democratic system 
  2. Uses
  3. Effects


Macro-questions (systemic): What happens to institutions, culture, relationships etc.? 


The question of policy in this area is of interest to lots of big players and important people: the UN, the EU, the UK and US governments. 

 Four dimensions of practice:

  1. Information.  Democracy relies on common knowledge.  Social protocol on the tube.  Certain things are put into the social system etc.  However, information is costly to disseminate and costly to access, and there is an epistemological issue around information status – official vs. unofficial.  The Internet may change this relationship and mechanism. 
  2. Consultation.  The way in which governments learn from the public.  The most basic way of finding out is through elections. 
  3. Participation.  We are in a period of change in how we think about participation – originally a behavioural approach was taken.  We are now thinking about a different approach, because citizens get to define participation.  We look at activities and then ask “what is the impact?”.
  4. Representation.  The process of speaking for the public (and the process of inventing the public). 


Information – Theory

  1. Abundance (Bimber).  Raises all kinds of questions about how information environments change the way people are able to act.  “Accelerated pluralism” – facilitates small groups, their position improves. 
  2. Value.  Generally, scarce information is of greatest value (e.g. how to make a useful machine).  However, civic information reverses this equation, e.g. traffic lights – they are completely valueless unless everyone knows about them.   
  3. Trust.  How do we know what to believe?  This is a question of power. 


Information – Practice

  1. Searching (Richard Rogers). 
  2. Transparency.  Counting features.  This created a normative framework, but didn’t get into the bigger questions about transparency. 
  3. Literacy.  How do you make sense of it?  How do you convert information into knowledge?  Information in isolation is fairly valueless. 


Information – Policy

  1. FOI, data protection, PSB.  How do you create public spaces online?
  2. Public space needs to be designed and protected.


Consultation – Theory

  1. Co-governance.  Governments must become learning organisations.  This is a very trendy notion at the moment.  How does the Internet fit into this process?  Can you create additional linkages (Ostrom, Rhodes)? 
  2. Deliberation.  30 per cent of all work on e-democracy is on deliberation.  There is increasing focus on the institutional context of these changes, e.g. Shulman. 
  3. Design.  Limited work done on institutional design (Novak etc, Street and Wright). 


Consultation – Practice

  1. Why are we doing this?  A very fundamental question that is not asked often enough. 
  2. Inclusiveness?  Existing offline consultation inequalities are replicated when those processes move online. 
  3. Outcomes.  We only ask simple questions – how many people took part?  Etc.


Consultation – Policy

  1. Regulation (Arthur Edwards).  Some work has been done on moderation and facilitation.  
  2. Policy cycle.  When in the policy development process should you run an online consultation?  The Internet is a very fast moving place. 
  3. Devolution.  How, when and at what levels do we devolve? 


Participation – Theory

We are moving away from formal notions of participation.  Online spaces become political despite their creators’ intentions. 


[Note: There seems to be a tension between the intentionally conceived, preserved and regulated political spaces in strand one and these organic political institutions – will there be conflict between them?]. 


For example, Big Brother viewers were politicised.  Football fansites used the web to influence their teams’ management. 


Participation – Practice

Collusion:  To what extent do people become domesticated and sucked in?  Politicians (who, unlike academics, have a more practical bent) will always ask: “how do we make this more convenient?”  If they are social democrats, they will tend to ask: “how do we make it easier for people with fewer resources?” 


Participation – Policy

New methods:  More people can participate, but we need to ask what impact that will have.


Representation – Theory

This presents lots of political-theory problems.  Representation occurs when a group needs to appear to be present but cannot physically be so.  Essentially we are creating a form of technological ventriloquism.  This creates significant problems of dealing with time, distance and cognitive inequalities, as well as matters of inclusion and recognising different forms of expression (Iris Young).  We are partaking in an age-old discussion about the relative benefits of plebiscitary and indirect democracy.


Representation – Practice

  1. Online polling.  Online polling impacts upon the results you get.   People are, for example, more likely to say “don’t know” than they are in offline polls.  None the less, structural inequalities of Internet access may damage sample size. 
  2. Semantic analysis of mass-deliberation.  Take large-scale conversations and examine the semantic connections between one word and another.  “A weather map of research”. 
  3. Visualisation.  How do you turn it into a text? 
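The semantic-mapping idea in point 2 can be illustrated, very crudely, by counting which words co-occur in the same contributions to a conversation.  This is purely a hypothetical sketch of the general technique, not the method used in the research described above:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_map(posts):
    """Count how often pairs of distinct words appear in the same post.

    Each post is tokenised naively on whitespace; every unordered pair
    of distinct words in a post increments a shared counter.  The
    resulting counts form the edges of a word co-occurrence network.
    """
    pairs = Counter()
    for post in posts:
        words = sorted(set(post.lower().split()))
        for a, b in combinations(words, 2):
            pairs[(a, b)] += 1
    return pairs

# Toy "mass-deliberation" corpus (invented for illustration)
posts = [
    "tax cuts help growth",
    "tax rises hurt growth",
    "growth needs investment",
]
pairs = cooccurrence_map(posts)
print(pairs[("growth", "tax")])  # "growth" and "tax" co-occur in two posts
```

A real system would need stop-word removal, stemming and some way of visualising the resulting network, but the core idea – that the strength of association between two words can be read off their co-occurrence count – is as simple as this.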


Aggregation.  What are groups of people thinking?  Could we use an eBay-style recommendation system?  What voice do we speak with?  How do we represent ourselves online? 
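As a toy illustration of the eBay-style aggregation floated above (my own sketch, not anything proposed in the lecture): suppose each contribution to a debate collects up/down ratings, and the crowd’s “voice” is read off an aggregate score, much as eBay summarises seller feedback:

```python
def crowd_voice(ratings):
    """Rank ideas by an eBay-style feedback score.

    ratings maps each idea to a list of +1 / -1 votes.  Ideas are
    ordered by net approval (positives minus negatives), with the
    percent-positive figure breaking ties.
    """
    def score(votes):
        pos = votes.count(1)
        neg = votes.count(-1)
        return (pos - neg, pos / len(votes) if votes else 0.0)
    return sorted(ratings, key=lambda idea: score(ratings[idea]), reverse=True)

# Invented example data
ratings = {
    "more consultation": [1, 1, 1, -1],
    "online voting": [1, -1, -1],
    "open data": [1, 1],
}
print(crowd_voice(ratings)[0])  # "open data" – tied on net score, higher approval
```

Even this trivial mechanism raises the representation questions in the paragraph above: who counts as the crowd, and does a net score really constitute a voice?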


Future research:

  • There are huge gaps.  Putting e-democracy back in the context of wider e-government.  This distinction is more blurred than it used to be.  There is a huge research and policy question there. 
  • How do we think about questions of global accountability (i.e. UN, World Bank, IMF etc)?
  • There is a remarkably small amount of work on journalism.  Bloggers and journalists regard each other with mutual distrust.
  • Policy research – social side; inequality. 
  • Techniques of listening – how do you create democratic outcomes from participatory activity online?  “Conversations”; encourage big-name bloggers not to behave as publicity freaks.     


[Leadership – what role does it play.  Is it different in the traditional context?]. 


Leadership certainly has a role to play.  It often requires one person to really want to make these things work.  However, this comes with two important qualifications.  Firstly, we are not necessarily talking about traditional political charisma.  Often the people concerned are fairly quiet and fairly non-ideological, but very powerful in their own e-domains.  Secondly, there is the capacity for collective leadership, which does not focus on an individual. 


[Disjuncture between designed and organic institutions]. 


To create democratic space you need money.  But it is also true that all the most empowering activities seem to be happening from the bottom up, and we are seeing leakages from the conventionally political.  Government involvement is often seen as a curse, e.g. Netmums.  We have not escaped the conventional problems of political engagement.  Civil society stops being civil society when government tries to manage it, but equally civil society that does not interface with government just creates hot air.  We need a continual dialectic in which these methods of interactivity are critiqued from the alternative perspective. 


[Measures of public participation]


A practical example of research going on here concerns humour.  In many ways what is being studied is democratic participation.  Many of the jokers are making a statement about power.  A joke can be a statement of values, position etc.  Our political space is constantly expanding through these kinds of associations. 


Big Brother voters are interested in simple moral values – whom they trust, whom they would like to spend time with.  BB viewers are the same as political voters in this way.  This is a process of translation. 


However, concepts must have borders; we cannot just say everything is a political act.  Drawing these borders is problematic, but perhaps we can observe them more easily online. 


[Freespace vs. government creation]


Democracy needs to be able to operate in an environment where people have limited resources.  What we need to do is open up spaces that have the capacity to impinge upon power but do not require rebuilding every time that they are used. 


The problem of citizenship is that citizens are strangers to one another.  We can never create a common understanding, but must look to build common acknowledgement and common respect.  We are necessarily building structures for this imperfect relationship. 


[Transnational E-democracy]. 


Huge potential for comparative political learning online.  Far better than lots of the development work currently being done in the area.  Also trans-national organisations should be thinking a lot harder about what they can do online. 


[Archiving things – is this a part of creating these environments?].


The BBC is trying to archive.  Making things freely available is very important. 


[Does PSB already exist]

The existing public broadcaster might be the right space, but that would require the creation of appropriate policy.  BBC Parliament receives hundreds of emails, all of which they throw away.


Some ideas derived from the Z-theory seminar

July 24, 2006

I’ve already posted a live blog of the Z-theory lecture given by John Palfrey last week.  In that post, I tried really hard to capture John’s lecture and the comments that came from the floor during the course of it.  The ideas that make up Z-theory are really compelling, indeed to the point of being quite invasive – I have been unable to get them out of my head for the past couple of days and have been trying to figure them out a bit more.  As a result I wanted to post some of my own thoughts.  I should stress that I haven’t had a chance to look at Jonathan Zittrain’s articles on the subject yet (although I do intend to do so when I get the time; probably when the SDP is finished).  This is very much a response based on my interpretation (or misinterpretation) of John’s seminar, and thus might deal with issues that have already been addressed elsewhere.

There are a couple of points I would like to raise.  Firstly, I want to consider whether generativity is an absolute or a subjective concept, and whether different forms of generativity stand in opposition to each other.  Secondly, I want to think about the issue of conflict between the two prescriptive principles (principles three and four) that make up Z-theory, and how it might be resolved. 

A problem I think a lot of people had with the generativity concept, as John originally expressed it (by linking it with .exe files), was that it seemed to relate to very few computer users.  We did a quick survey around the room, which, let’s face it, is probably as likely as anywhere on earth to be crammed with members of the digital elite, and less than ten per cent of us had ever written a .exe file.  At the time, this seemed to represent quite a problem – it seemed like we were talking about a theory that was wholly constructed around a very few computer users. 

We got around this problem by starting to think about different definitions of generativity, and especially “creative generativity” (i.e. blogging) and “information generativity” (i.e. Wikipedia).  At the time of the discussion, this actually seemed to me to be a fairly satisfactory resolution of the difficulty, not least because it seemed likely that far more people engage in these activities than have ever created .exe files (although I concede that the point at which an elite activity becomes a mainstream activity is wholly subjective – blogging might be a more popular activity than programming, but it still isn’t a majority activity). 

However, it has subsequently occurred to me that there might be a huge problem with this broadened definition of generativity.  Actually, I say “subsequently occurred to me”… when I really have to thank a conversation with Tanya for organising this argument. 

Let’s think about two blogging platforms that both seemingly offer creative generativity – the original WordPress tool and hosted WordPress.  The original WordPress tool gives the user tremendous (almost absolute) power to control their blog and its design.  Furthermore, as a piece of OSS software, it gives its users control of its source code too.  It is probably pretty fair to say that the user’s skill and imagination are the only sizable limitations on what they can do.     

Although hosted WordPress offers a load of options to its users when they create their blog, in contrast to its OSS cousin it is undoubtedly a limiting piece of software – there are only so many presentation options you can cycle through.  OK, there are hundreds, maybe thousands, of combinations, but it is essentially a locked-down system.  Take the example of the master templates.  There are a limited number of them to choose from (I was using it today, and I think there are possibly about twenty-five).  If you don’t like any of the options, you cannot create your own (as you could in OSS WordPress); instead you have to wait for the webmasters to upload new ones.  This would seem to make it a simple question, then.  If generativity is good, OSS WordPress is good and hosted WordPress is bad. 

However, I think that is too simplistic a statement.  Any assessment of whether something is “generativity good” or “generativity bad” requires a more dynamic calculation of loss and benefit.  If we accept the notion of “creative generativity”, that equation becomes even more complex, as the potential for greater “creative generativity” may, in certain circumstances, be inversely proportional to the potential for “programmable generativity”.  I envisage this conflict will become even more common as drag-and-drop-type packages (for example Windows Video Editor) become more ubiquitous and enable far more people to act in a creatively generative way and do things they have never done before, while simultaneously limiting the options available to them heavily.  

If different versions of generativity can conflict with each other, we are then faced with far more difficult choices when dealing with the third proposition – crucially, the simple statement that generativity is good becomes insufficient.  

I want to move on to the second issue that has been running around my head – the question of primacy amongst the principles which make up Z-theory, and in particular how a conflict between the third (generativity) and fourth (wisdom of the crowd) principles would be resolved.

There are, as far as I can think, three possible solutions to a hypothetical conflict between the third and fourth principles.  Firstly, it could be claimed that the third and fourth principles are intrinsically compatible with each other and that any hypothetical conflict could not become a reality.  Secondly, the principles could be structured in such a way that one of the two enjoys lexical primacy over the other – that is, the prime principle must be fulfilled to the absolute maximum degree possible before the secondary principle is even considered.  And thirdly, any conflict between the two could be resolved through some kind of dialectic mechanism.

In order to consider the first solution, I think we need to track back to one of the original justifications for Z-theory.  It has been argued that the Internet is inherently unpredictable, and that there is no way we can envisage the social and communicative structures it will give rise to in the future.  As evidence of this: could anyone have predicted the development of Wikipedia or eBay fifteen years ago?  It thus follows, it is claimed, that those thinking about the regulation and organisation of computing and the Internet, rather than trying to regulate the world we have at the moment (or, for that matter, the world we imagine we will have in the future), should instead seek to avoid any regulations or lock-downs that hamper unexpected developments.  In this sense, the generativity argument is almost Rawlsian, being conceived behind what we could term a veil of ignorance.  The propositions do not derive their rationality from the world, Internet and applications of computing that currently exist, but rather from the contribution they will make to the creation of a future world – a future world of which those creating the principles have little or no knowledge.  However, it is the aim of creating that unknown world which makes the generativity principle rational, none the less. 

If we can think of the generativity principle in these terms and agree that it is something that would be agreed by rational actors, then we might arrive at a resolution to any conflict between principles three and four.  As long as we believe, firstly, that generativity is rational (for the reasons outlined above), and, secondly, that the crowd is comprised of rational actors, then it might be argued that there is no possibility of there ever being a clash.  Such an argument is in many ways problematic, not least because it relies on the assumption that human beings are rational actors.  But, if accepted, it also has huge implications – essentially it destroys the distinction between the third and the fourth principles of Z-theory; they become manifestations of the same idea, derived from the same source.

If we accept that it is possible for the crowd to act in a manner divergent from the generativity principle, then we have to look for an alternative solution.  One possible method for solving any conflict between the third and fourth principles is the assertion of lexical primacy, which I shall define as a process whereby one of the principles is satisfied to the absolute maximum degree possible before any attempt is made to satisfy the other principle in the slightest.  Obviously, there would be two ways of structuring such a solution – assigning lexical primacy to principle three, or assigning lexical primacy to principle four.

Let’s start with the first approach.  Essentially, it requires us to claim that generativity is the most important concept, and that if at any point the will of the crowd contravenes it, the crowd’s power will be restrained.  I find this solution deeply problematic.  For starters, as I have already outlined above, I am not wholly convinced that “generativity” can be defined as an absolute concept.  As a result, it might not always be abundantly clear whether the crowd’s preferences endorse or confound generativity (or, indeed, simultaneously do both).  A second major difficulty with this approach is that it is necessarily elite-driven.  As our quick survey around the class showed, very few people program files, even amongst our group of disproportionately able, computer-aware people.  Yet, by assigning lexical primacy to principle three, we are saying that the absolute maximum level of generativity must always be created, regardless of the wishes of the crowd as a whole (I would add that, for the purposes of this scenario, I am assuming the crowd represents the population and is not constituted of an elite; that is another, also problematic, question).  This is a scenario that, certainly on some occasions, has the potential to benefit a very small proportion of the population whilst restraining the will of many more. 

Giving lexical primacy to the fourth principle over the third is even more problematic.  Indeed, it rather ruins the purpose of the whole theory, because it becomes very hard to see where generativity fits into the crowd’s deliberations at all.  If the word of the people is the word of god, then more complex principles, no matter how compelling or how attractive, are no more than ideas floating around in the ether, which the crowd is at liberty to adopt or ignore as it wishes.  But that reduces generativity to the level of being just “another argument for something that is good” (unless, that is, we hark back to the first argument and believe that generativity is not just a good argument but a uniquely rational solution; that, however, also takes us back to the problems with the first argument).

When I was in the lecture, I assumed that lexical primacy would be necessary in order to make the theory work.  After some consideration, though, I now think quite the reverse.  The notion of lexical primacy seems like a busted flush; whichever way we go, we end up creating huge problems for ourselves and an unworkable mechanism.

This leaves us with a third and final solution to a conflict between principles three and four – some kind of dialectic arrangement which balances the principle of generativity with the concerns and wishes of the crowd.  In many ways, I think this fits with some of the things that Ken Cukier said after John’s lecture: that we shouldn’t think of this as a binary choice between generativity and sealed boxes, but instead as a spectrum of arrangements, all of which can co-exist. 

On the face of it, this is probably the most attractive solution we have.  And, if one thinks about (and takes at face value) the lecture given by Zaid Hamzah, which I blogged on here, and in particular his argument that there is a growing détente between Microsoft and the Open Source community, then maybe we can actually see some of these dialectic processes at work.  However, despite this optimism, there are still a lot of questions.  How might this dialectic arrangement be managed?  Indeed, should it be managed at all – or does a market offer the best solution?  Would a market mechanism inevitably lead to the neglect of the wishes of some members of the population, whose desires are out of step with the vast majority?  Furthermore, will the legal and political advantages enjoyed by big corporate players constitute an element (and arguably a distorting element) of the dialectic?  Would this necessitate some kind of government intervention to level the playing field?

Generativity is a compelling and very attractive theory.  As well as giving a compelling answer, I think its greatest strength is that it offers a powerful framework for asking many further questions about what exactly we desire in Internet and ICT development.

Mr Microsoft

July 24, 2006

Live-Blog: Software Piracy, Proprietary vs. Free Software

Driving Software Value In The New Innovation-driven Business Equation (Zaid Hamzah)

Zaid Hamzah works with intellectual property at Microsoft; his specific responsibility is for value creation, viewed with a holistic perspective. 

Central to understanding the situation MS finds itself in is an appreciation that there is a new economic model.  In industry, everyone is reaching for the centre of the OSS / corporate spectrum; as a result the MS model of production is very similar to OSS.  Governments, for example, have access to source code.  The boundaries between the two approaches that have existed in the past are shifting. 

The new environment is best understood as a software ecosystem, with all the connotations and implications that has.  As a part of this, there is evolution occurring in MS – they are moving to a patent-driven approach, rather than relying on copyright. 

Five key points

  1. This is a hybrid environment.  MS accept the reality of OSS.  This is “The shift to the centre” (as an aside to this, it is important to appreciate that access and ownership are different concepts). 
  2. It is all about innovation.  An essential element of this is peaceful coexistence between corporate and OSS creators.  We will compete based on value and quality of product.  Governments should choose the model that best meets their requirements. 
  3. This neutral approach is beneficial to innovation.
  4. Under copyright law, you can protect code only; the idea is not protected.  Patents in contrast will protect the idea.  This is beneficial to innovation.   
  5. The protection of software with patents does raise other issues we have to consider.   

It is important to realise that software no longer acts as just an enabler.  It is now at the very heart of the innovation that is occurring.  For that reason governments cannot take a prescriptive approach.  The key thing that drives development is incentive.  It is also important to remember that OSS is a commercial model too – just a different one. 

The challenges

There is no system for registration of copyright; it arises automatically.  Because copyright isn’t registrable, it becomes really hard to prove any kind of infringement: for it to stand up in court, proof of ownership requires evidence.  A patent (which you do register) makes this considerably easier to prove. 

MS seeks to demonstrate that countries should adopt a neutral model of procurement, and not have a preconceived idea of what model (corporate or OSS) is best.  It can be argued that adopting OSS immediately deals with any issues of legality; this is a mistaken solution. 

[Clarification questions: What is a neutral procurement policy?]

Let’s look at an example.  The Malaysian government says that, all other things being equal, it will prefer OSS software.  By neutral, MS means governments should adopt the solution that suits them best.

[Hang on; OSS is free, and the code can be changed to make it specific to the end user.  Surely this makes it ideal for government].

The distinction between company and country has broken down – large companies all have a government relations officer. 

[Do MS try to change the legislative framework in the countries it is doing business with?  Does this raise social justice issues?]. 

Yes, MS does try to influence the legislative environment.  MS also provides resource training, for example to judges. 

[In Nigeria, the work that MS is doing isn’t having an effect; they are training the wrong people.  For this reason OSS is an option for many people].

Different approaches are adopted in different countries – in Asia, MS has done lots of stuff, for all the different elements of society.  This is not just an elite thing.   

[It is possible to benefit from not having a strict IP system i.e. Singapore.  How will the patent system be better?]. 

A company that owns a patent gets a higher premium than one that has only copyright protection.  Generally copyright is regarded as a weak level of protection.  Software patents encourage innovators to continue to invest, because the benefits of the investment are likely to be larger and the investment carries less risk. 

[Patenting does not lead to transparency in Europe, but the reverse]. 

MS does not have a global patenting strategy, but operates locally, dependent on the conditions it finds in different markets.

John Palfrey’s Response – in italics

The idea of a move to the centre is certainly true.  Ten years ago, the relationship between Microsoft and OSS would have been very different.  This is the logic for the change:

Step One:  There is rampant piracy, especially in East Asia (in some countries as high as 90 per cent – that is, 90 per cent of people using MS products did not pay for them).

Step Two:  The US and other western countries start to argue for greater IP protection in other markets.

Step Three A:  The China example; in order to comply with American demands to cut the piracy of MS products, the government starts to favour Open Source.  MS responds by saying that this is a bad idea.

Step Three B:  It starts to be argued that copyright protection is insufficient.  In the US, you had to register copyright until 1976; that system was abolished and it has become a more informal arrangement.  This is the same in most parts of the world.

Step Four B:  We should do software patents instead of copyright.

There is a twin-track logical process occurring.  Why is there Open Source Software anyway?

[In order to change the software, you need access to the code.  It can also be linked to the desire for / attractiveness of anarchy, and people’s negative reaction to MS].

It is important to remember that people hated MS.  This idea was closely linked to the hacker mentality.  From this desire, we can state that it is important to have a legal regime that permits and facilitates the creation of free software.

Three versions of the law, seeking to do three different things:

  1. You can’t do this (you can’t drive faster than 70 mph);
  2. Levelling the playing field to enable the ecosystem;
  3. Enabling something, e.g. Skype voice calls (could also be grant-based, as in offering financial support). 

MS has a huge advantage in the market place.  We might be less concerned about the 90 per cent piracy than the 90 per cent penetration.   Government needs to level the playing field.  Patents would prevent OSS innovation; and then we reach an argument against patenting.  Don’t give these monopolies to already very powerful people.   

[Is this about geopolitics?].

This is about geopolitics and people are fighting for different things in different regions.  The US has been very prominent in campaigning for its interests.  But there is also a localist element to the discussion. 

[We should not legislate].

This does not happen anywhere.  US pressure has been instrumental in making most countries create copyright protection to some level. 

JP: The US uses trade threats to push countries to fix their piracy problems.  Even though some governments have moved to OSS, it is not obvious that this solves the problem; after all, it does not affect business or private users, who continue to use pirated MS products. 

[The Internet / operating system are not the same as other technology forms.  MS’s success is based on its big market share, not innovation]. 

This is an argument that cannot be settled.  MS invests lots of money, and the patent offers protection for that investment.  This isn’t really about MS, but about the small company, which should get returns on its innovation. 

[A patent lawsuit costs in the region of $4-5 million.  Small companies will not be able to afford it]. 

A patent would be a major asset to a country and a company. 

[Normally a cease and desist letter will crush innovation.  No one wants to go to court; money speaks.  Also, how diligent is the patent regime?  The average time spent per patent in the US is eleven minutes].

Stephen Ward’s Lecture – My notes

July 24, 2006


  1. Research agenda
  2. Method and data problems
  3. Research evidence – some key findings
  4. Shaping online campaigns and conclusions

Party Competition: Increases pluralism, fringe and minor party candidates will be helped – it will level the playing field. 


Why is this?

  1. The Internet is an unmediated communications channel, unlike other media, which are filtered.
  2. Low cost (in comparison with TV, billboards etc).
  3. Anarchic and favours fringe interests.
  4. The multiplier effect is in play; appearance of size and power can be created online.   

Participation:  There is a conflict about whether we are seeing mobilisation or reinforcement.  Which aspects of the Internet lead to increased participation?

  1. Efficiency and convenience.
  2. Increases the options for participation and the channels through which it can occur.
  3. Increased information, which generates participation.
  4. Allows for the creation of new virtual networks.
  5. Increases depth and quality of participation. 

Post-modern electioneering (Norris, 2000; Farrell and Webb, 2001; Blumler & Kavanagh, 1999).  Norris sees three ages of political campaigning (essentially pre-television, television and new media) and holds a very technocentric view of the world.

The post-modern campaign has a number of aspects:

  1. Permanency of campaigning.
  2. Increased targeting – this is regarded as the magic bullet of politics; the ability to swing the voters who are able to win the election in key constituencies.
  3. Increased interactivity (the electorate moves from being passive to active).
  4. Decentralisation of campaigning.  TV is regarded as a centralising power, whilst the Internet has decentralised it. 
  5. Americanisation / globalisation. 

Researching Elections Online

  1. The public face of online political campaigning – the websites themselves.  There is also some experimental work going on regarding hyperlink analysis. 
  2. The private face of online campaigning.  Interviews are often used to understand it, e.g. Newell, 2003.  However, as time has gone by, parties have become far less likely to grant interviews to academic researchers.  Alternatively, we can turn to minor parties and MPs.  Internal surveys of party members are very hard to do, as parties are not hugely supportive (some examples include Pedersen & Saglie, 2005; Lusoli and Ward, 2004).  Log file analysis of BBC data from the day after the election in 2005. 
  3. The public response to online campaigning.  Lots of US work, but limited study elsewhere. 
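The log-file analysis mentioned in the methods above can be sketched very simply. The example below is entirely hypothetical – the Apache-style access log lines, the URL paths and the field layout are my assumptions for illustration, not a description of the actual BBC dataset – but it shows the basic idea: parsing raw request lines and counting which pages were viewed.

```python
import re
from collections import Counter

# Matches an (assumed) Apache-style access log line:
# host ident user [timestamp] "METHOD /path HTTP/x.y" status bytes
LOG_LINE = re.compile(
    r'\S+ \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+) [^"]*" (\d{3}) \S+'
)

def count_page_views(log_lines):
    """Return a Counter of URLs that were successfully served (2xx status)."""
    views = Counter()
    for line in log_lines:
        m = LOG_LINE.match(line)
        if m and m.group(2).startswith("2"):
            views[m.group(1)] += 1
    return views

# Illustrative, made-up log lines:
sample = [
    '1.2.3.4 - - [06/May/2005:00:01:02 +0100] "GET /election/results HTTP/1.1" 200 5120',
    '1.2.3.5 - - [06/May/2005:00:01:03 +0100] "GET /election/results HTTP/1.1" 200 5120',
    '1.2.3.6 - - [06/May/2005:00:01:04 +0100] "GET /missing HTTP/1.1" 404 312',
]

print(count_page_views(sample).most_common(1))  # → [('/election/results', 2)]
```

Real research of this kind would of course work over millions of lines and add sessionisation, referrer analysis and so on, but the core of the method is this kind of aggregation.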

Party Competition: Research Evidence

Rhetoric.  Small parties are offered opportunities by the web, for example the far right and the Greens.  A participatory culture has long existed in the Green Party, while the far right does not get attention in the mainstream media.
Content.  Major parties still dominate and have a far greater level of content than smaller parties.
Organisational tools.  Great for small parties – or so argued the BNP, who are heavily reliant on email.  

It might be the case that the Internet allows small parties to survive rather than prosper. 

The public response to online campaigns: 

  • Online campaigns in the UK have small audiences (in the UK in 2005, 3 per cent viewed a party website and 1 per cent went to a candidate). 
  • Generally it seems the Internet is preaching to the politically converted.  There might be a slight widening effect amongst the 18-24 yr old group. 
  • The “student” effect – one group who is very likely to engage online. 
  • Intensification of action amongst the engaged.  Are we seeing the birth of the 24/7 activist?  Will the Internet facilitate changes that turn the member into an activist?  Generally people who have joined parties online tend to become passive members. 
  • Different Internet tools have different effects – websites, email etc. 

A direct comparison with Coleman (2001) shows that lots of aspects of “Internet politics” did go up between 2001 and 2005. 

A Postmodern Election?

The web is a top-down tool.  It plays an informational role and offers a fundraising facility – most of all in the US, to some degree in the UK, and not very much in Europe.
Low level of interactivity on political websites – widely perceived that the costs outweigh the benefits.
Interest in targeting swing voters.  Voter Vault has been very successful for the Republicans, but it didn’t really work for the Conservatives in the UK in 2005.
Decentralisation of elections argument doesn’t stand up.  National parties are still very strong, although often we see a “top down localism”.
Americanisation is not occurring.  Instead, tools are being adapted to fit the circumstances.   

Shaping Online Campaigns

Systemic factors: media environment (ownership rules, access etc), campaign environment, electoral law.  Candidate and party organisational factors: resources and capacity, full-time staff, culture and goals (i.e. whether parties are vote-maximising or participatory). 

Sub-systemic factors: the marginality of the constituency, whether an MP is an incumbent, and the profile of the candidate.

A different kind of talk

July 21, 2006

I have been to loads of great seminars and events in the first week of the conference, but I think my favourite thus far was given by Seeta Peña Gangadharan, who is a student on the programme and doing a doctorate at Stanford University.  Seeta is interested in the relationship between the Internet and deliberative democracy.

This is in itself very interesting when viewed from an instrumental point of view, but what struck me most about this particular presentation was how normative Seeta’s approach was – she was quite happy to say that she thinks deliberative democracy is a good model for government, that she fears for the future of democracy in the US, and that she is interested, as part of her study, in developing methods of enhancing and safeguarding it.  We can of course have the argument about whether her normative premise is correct or not, but it felt really refreshing for someone to unashamedly make a value statement and link it with some fantastic academic work… and in the process trigger off what was perhaps the best ding-dong discussion at the SDP so far.
Perhaps my admiration for this taps into my own fears that the more educated I become, the more I tend to adopt the role of the impartial observer.  Of course, I know that on some occasions that is a good thing.  But equally, it is good to be reminded that we have a responsibility to change the world as well as describe it.


July 20, 2006

John Palfrey asked me to liveblog his lecture and the subsequent discussion on Jonathan Zittrain’s Z-Theory, which took place on Wednesday morning at the OII.  I have done my best to get across a flavour of the theory and the debate that followed.  John’s take on Jonathan’s argument was clear and had a beautiful logical flow to it, so I have tried to encapsulate that as clearly as possible in the notes.  Where a question or comment was made from the floor, I have placed it in square brackets.  As ever, any errors, omissions, misinterpretations and misattributions are entirely my fault (this blog is in fact an edited and tidied up version of what I took down during the seminar – you can find a pdf of the original notes here). 

Z-Theory (Jonathan Zittrain’s Theory Of Generativity) 

A brief history of related arguments

1984 – The end-to-end argument (Saltzer, Reed and Clark)

1996 – Two major arguments.  John Perry Barlow: A Declaration of the Independence of Cyberspace – largely a rhetorical argument.  Also, the Post and Johnson paper in the Stanford Law Review on law and borders.  In many ways, this is Barlow’s argument extended and placed in a legal framework.  It certainly shares a similar libertarian view, but Post and Johnson are less rhetorical and more descriptive.  They note that the Internet makes it harder for governments to regulate.  In observing this, they raise a key disciplinary question: is the Internet different?  Post and Johnson’s answer is yes, because of its transnational nature. 

1999 – Larry Lessig, in a direct response to Johnson and Post, offers the most forceful legal argument to date.  This argues that the Internet can be regulated.  Although we might not be strongly aware of it, it is regulated through four means:

  1. Technology
  2. Law
  3. Social norms
  4. Markets (this factor, although now regarded as very important, was a later addition to Lessig’s original three-strand typology). 

It is significant to note that these modes of regulation shape one another, and are capable of acting both in tandem and against each other.  This is very new for lawyers and a radical argument; it moves the discipline away from a reliance on statute law. 

Lessig’s conclusion is also significant.  As well as describing the Internet as regulated, he also says he does not like it. 

2002-3 – The rise of the notion of the wisdom of the crowds, argued for by Benkler.  This relies on a different view of Internet structure, claiming that it has an hourglass architecture.  As the Internet is constructed in layers, different elements are subject to different forms of regulation. 

Central to this idea is the belief that Internet activity is different to what has gone before and powerfully subverts notions that are central to legal and economic theory.  This starts with an interest in Open Source software, which is claimed to represent a new model of production, as it is uncompensated.  Later we can, in addition, think of uncompensated activities such as blogging.  That this creativity is occurring leads to a “do no harm” argument, wherein it is claimed that regulation would be detrimental to the “good things” that we see on the Internet.  This is where we arrive at Jonathan’s theory. 


The theory consists of four claims (two descriptive, two normative):

  1. There is a huge security threat online, and a “Digital 9/11” is not only possible but probable.  At the heart of the Internet’s vulnerability is its end-to-end network design, which could be fatally undermined by viruses, worms etc.
  2. The response to that very real security threat is the lockdown of the PC.  The desktop element of electronic interaction is as important as any of the network layers.  At the moment, we see automation and the end user losing control over their computer and the products they use.  This is the technological solution to problem one that is currently practiced by the likes of Microsoft. 

In reality, the value of the Internet is not to be found in its end-to-end architecture, but in the concept of generativity.  We should care about systems that are generative, which allow us to build things on top of them, e.g. the Microsoft operating system and Microsoft Office.  From this we derive the third and fourth claims:

  3. Therefore, “if it is generative, it is good”.   
  4. The way to get to a generative environment is to think of new solutions.  We need to rely on the wisdom of the crowds and develop a peer production model to create decision-making institutions. 

[This raises an important question.  The crowd might be an elite community, such as the open source community or the blogging community.  Do their views reflect everyone else’s? How mass can the crowd really be?] 

An important element of this is to refocus the debate on the PC.  A core element of the PC is the .exe file.  Anyone can (if they have the skills) code.  Furthermore, the Internet’s architecture is open source; in theory anyone can understand it.  But are we then tailoring the Internet for a small elite? 

IETF (Internet Engineering Task Force) principles:

  1. Keep it simple;
  2. Keep it open – growth could come from anywhere;
  3. Technical meritocracy;
  4. People are reasonable;
  5. People are nice. 

How do these principles work in practice?  Someone will issue a Request for Comments (RFC) and then a consensus decision will be taken.  Everything on the Internet thus far has been decided through this process, and thus far it seems to work (from which we can infer that propositions 4 and 5, above, must be true).   

Crucially, we cannot predict how the Internet will work and what will develop.  For example, the idea of Wikipedia was completely unpredictable, and wholly reliant on generativity.  If the future Internet is engineered or structured in such a way that generativity is relinquished, many important social forms that we cannot imagine today may never be created. 

The lock-down of the PC has been fuelled by events and public policy.  The peer-to-peer crisis saw lots of changes, whilst cyberspace security has also become a matter of national security. 

Crucially, much of the lockdown has become automatic.  When we work with security certificates, how many of us really understand what is happening?  The whole computing environment has become “scary”.  At the moment, in order to deal with our fear, a huge proportion of people place a huge amount of trust in a single corporation. 

In many ways this pulls us to the central question: Whom are we going to trust?  A large multinational corporation?  Or should we trust our peers – “the wisdom of the crowds?” 

[This raises the question of primacy amongst the last two principles – what if the wisdom of the crowd calls for generativity to be sacrificed?  This leads to a potential fifth principle, wherein we would configure the relationship between the four principles].

Let’s go through the statements.  How sure are we of the descriptive statements?

[A few sceptical voices on the first principle, but by and large accepted by the group.  It is argued that the security threat might be a product of the wisdom of the crowd – a perception that there is a security threat].

We might think of two critiques of proposition one:

  1. It couldn’t happen (a variation of this is that it is real, but not significant).  
  2. There are real solutions that make the issue redundant. 

Moving on to the second principle:

[It is possible to be more convinced of this principle than the first one.  Could there be a self-correcting mechanism, which will kick in when people see the downsides of the solutions they have adopted?  Furthermore, what about Macs and Linux?  Do they not show that people are making different decisions?  But this is a very elite scenario, and by far the dominant response continues to be lockdown.] 

Jonathan argues that we may see a red zone and a green zone develop: safe PCs for those who want them, and Linux for everyone else.  This will create a two-tier world, in which it is possible to get Grandma a safe computer. 

[A quick survey found that only two people in the room had ever written an .exe program.  However, if we broaden the term generativity to blogging, wikis etc., this changes the equation]. 

What do we think of principle three?

[Would it include technical and cultural products?  Blogger is culturally generative – you can’t code, but you can write, post videos etc.]

End-to-end says that all things are permissible on the Internet.  Generativity is geared to activities that are socially desirable.  Normative value judgements need to be made – generativity can be both good and not so good.  But now we are dealing with value judgements, which is very problematic.

The fourth principle raises a number of key questions.  What we want to know is: what software should I run?  Whom should I trust?  And how do you establish a mechanism for organising it? 

This leads to a number of core critiques:

  1. This might be the preserve of elites. 
  2. Solutions of this sort are still governed by money and social norms. 
  3. No recourse.  What happens if the crowd gets it wrong or is prejudiced?  Is there a way back?
  4. Privacy critique.  What happens if the crowd puts about information about you?  Again, a lack of recourse. 

[Does this not lead to a great danger of information overload, especially if everyone can / has to choose their own expert?]

This fits with the two-tier solution.  The argument is not really about someone who spends a lot of time working with computers. 

How do we define the institutions that allow the crowd to communicate?  This is a design question.  Informal associations can be both good and bad.  eBay has become the paradigmatic example of a crowd-driven institution (although it is by no means perfect).

[How different is this from Barlow?  Is this necessarily suggesting a rule of the cyber-people?] 

It is important to realise that Z-theory does not describe a law-free zone – this is an additional mode of regulation. 

Ken Cukier’s comments: This all seems very sensible.  However, are we wrong to posit this as either/or – is it really Microsoft or the wisdom of the crowds?  This is a multi-tier system, or a marketplace.  The choices we are faced with comprise a spectrum of options.  This is not MS versus the mob.  Already the Internet we have is not the Internet we idealise. 

My Presentation

July 20, 2006

This post really dates back from Monday, but I have been having so many problems with my blog it has taken me this long to get it online anywhere.  On the first day of the course, I was scheduled to give my presentation.  My seminar was tutored by Steve Ward, who was really supportive and offered great advice. Additionally, all the doctoral students (many of whom are further advanced on their research than I am) offered really good tips on how to develop my ideas and move on with my work. I will now make the admission that this was my first proper presentation as a research student, and I am genuinely very happy with the way it went. The best thing about it, from my perspective, was that the seminar had a really good atmosphere that was in no way intimidating. Although I was very nervous to start out with, within a few minutes I felt very happy running through my slides and talking about my ideas. If anyone is interested, you can find the slides here and the bibliography for the presentation here.


July 19, 2006

Right, first of all big apologies.  I was planning to blog the Oxford Internet Institute’s Summer Doctoral Programme at my normal domain, but for some reason the ftp has packed up completely.  I don’t want to miss this opportunity to do some exciting and useful blogging, so I have set up this temporary blog to store SDP stuff.