How Do You Measure Democracy Work?

How do you measure democracy work? There is no balance sheet for political parties that records how healthy they are. We can’t give inoculations against authoritarian ruling schemes. In the absence of such measuring sticks, most of us engaged in democracy and governance (D&G) work around the globe have relied on anecdotal reporting and on numbers: the number of people trained, polling data and election results.

But over the past few years there has been increased attention to the need for a more rigorous assessment of the impact of D&G programs. Impact refers to the behavioral or attitudinal change connected to programming: after training parties to develop issue-oriented platforms, do more voters vote for parties based on their platforms? That’s impact.

In 2008, a U.S. Agency for International Development (USAID)-funded study entitled Improving Democracy Assistance recommended that the agency dedicate more resources to measuring the impact of D&G programs using randomized studies and case studies. Many others are following in USAID’s footsteps as attention turns to more substantive evaluations of D&G programs.

The International Republican Institute (IRI) is one of the organizations applying new methods of evaluating the impact of democracy assistance in the field. In Iraq, IRI’s team working with political parties and their candidates came up with a way to track and measure the impact of campaign training.

The team divided campaign training into 20 segments, and candidates who attended the sessions evaluated each segment, such as public speaking, separately from the others. That made it possible to compare IRI’s perceptions of which segments should matter most with the candidates’ own valuations, and to see what factors, if any, affected their delivery.
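To make that comparison concrete, here is a toy sketch in Python with invented numbers (the segment names, the 1-5 scale and the ratings are illustrative assumptions, not IRI’s data):

```python
# Toy comparison of expected importance of each training segment
# with the mean rating given by candidates. All numbers are invented.
segment_ratings = {
    # segment: (expected importance, candidate ratings), both on a 1-5 scale
    "public_speaking":     (5, [4.5, 5.0, 4.0]),
    "message_development": (4, [3.0, 3.5, 2.5]),
    "voter_targeting":     (3, [4.0, 4.5, 4.0]),
}

for segment, (expected, ratings) in segment_ratings.items():
    mean_rating = sum(ratings) / len(ratings)
    gap = mean_rating - expected  # positive: candidates valued it more than expected
    print(f"{segment:22s} expected={expected}  rated={mean_rating:.1f}  gap={gap:+.1f}")
```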

But even then it is not always certain that a specific training produced the behavioral change. Many exogenous factors can affect the behavior of a participant in a training program. What if the participant also attended another organization’s campaign training, for example? How can you isolate the effect of one organization and one specific training program from everything else?

Enter impact evaluation. It’s a form of evaluation that assesses the change that can be attributed to a particular activity or intervention. That means it isolates the target variable from other exogenous factors. Ideally, it does that by comparing randomly selected groups – one that receives the intervention and one that does not. The use of control groups gives impact evaluation its distinguishing characteristic — the use of a counterfactual: “what would the situation have been if the intervention had not taken place?”
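To see the counterfactual logic in code, here is a minimal sketch of a randomized comparison on simulated data. Everything below is an illustrative assumption (the outcome scale, the group sizes, the +0.5-point effect); the point is only that the control group’s average stands in for what would have happened without the intervention:

```python
# Minimal sketch of a randomized impact evaluation on simulated data.
# The control group's mean outcome approximates the counterfactual.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Hypothetical outcome: confidence in democracy on a 1-10 scale.
# Assume (for illustration) the intervention shifts the mean by +0.5.
control = rng.normal(loc=5.0, scale=1.5, size=500)  # no intervention
treated = rng.normal(loc=5.5, scale=1.5, size=500)  # received intervention

estimated_impact = treated.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treated, control)  # distinguishable from chance?

print(f"Estimated impact: {estimated_impact:+.2f} points (p = {p_value:.4f})")
```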

If you think that sounds like a medical trial, you are right. The trouble with D&G work is that the programming doesn’t always lend itself to randomization. Imagine telling a political party in country X that it can’t participate in a training because it was selected for the control group.

In Colombia, IRI identified a democratic governance program that meets the criteria for an impact evaluation. It has a large enough sample size to detect impact, and it lends itself to random selection of participants. After securing funding from USAID for this impact evaluation initiative, IRI undertook a competitive bidding process to select an impact evaluator (a key criterion is that the evaluation be done independently).
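Whether a sample is “large enough to detect impact” is a standard statistical power question. A minimal sketch, assuming a small standardized effect size and conventional thresholds (the numbers are generic defaults, not figures from the IRI evaluation):

```python
# Rough power calculation: how many participants per group are needed
# to detect a small effect? Values below are conventional defaults.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.2,  # assumed small standardized effect (Cohen's d)
    alpha=0.05,       # 5% false-positive rate
    power=0.8,        # 80% chance of detecting a real effect
)
print(f"Participants needed per group: {n_per_group:.0f}")  # roughly 394
```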

IRI staff then worked closely with the evaluator to design the evaluation around two new facilities: an Office of Transparency in Cucuta, which will facilitate public access to city government documents, and a One-Stop Shop in Cartagena, which will provide multiple municipal services to citizens in one place. The evaluation will measure each facility’s impact on citizens’ perceptions. Baseline surveys, against which final results will be compared, began in the two cities on March 1.

In the case of these two services, the target variable is increased citizen confidence in democracy as a system of government. That will be determined through surveys measuring two things in each city: a “feel good” effect and a “good service” effect. The “feel good” effect will capture the extent to which citizens’ confidence in democracy has increased simply because each office was established, whether or not they have used the new facilities. Since it may be difficult to detect with confidence the impact of a single intervention over one year in a city of hundreds of thousands of residents, the “good service” effect will capture the extent to which citizens who actually use the services experience an increase in confidence in democracy.
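As a rough illustration of how those two effects might be computed once the endline surveys come in, here is a sketch on made-up numbers (the column names, the 1-10 scale, and the values are assumptions, not the actual survey instrument):

```python
# Illustrative computation of the "feel good" and "good service" effects
# from baseline and endline surveys. All data below are invented.
import pandas as pd

surveys = pd.DataFrame({
    "wave":        ["baseline"] * 4 + ["endline"] * 4,
    "used_office": [False, True, False, True] * 2,
    "confidence":  [5.1, 5.0, 5.2, 5.1,   # baseline: groups look similar
                    5.3, 6.0, 5.2, 6.1],  # endline: users shift more
})

means = surveys.groupby(["wave", "used_office"])["confidence"].mean()

# "Feel good" effect: city-wide change in confidence, users or not.
feel_good = (surveys.loc[surveys.wave == "endline", "confidence"].mean()
             - surveys.loc[surveys.wave == "baseline", "confidence"].mean())

# "Good service" effect: the change among citizens who used the office,
# net of the change among those who did not.
good_service = ((means["endline", True] - means["baseline", True])
                - (means["endline", False] - means["baseline", False]))

print(f"Feel-good effect: {feel_good:+.2f} points")
print(f"Good-service effect: {good_service:+.2f} points")
```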

Thus far, IRI has learned much from this groundbreaking attempt to isolate the impact of democracy assistance programming. First, you need to identify the right kind of program for impact evaluation. Second, the process can be expensive and highly technical, so you must plan far in advance. But the real learning is ahead. The Office of Transparency and the One-Stop Shop are just getting underway; they will operate for a year or so before we measure their effects. Stay tuned.

This post is part of a series of guest posts by the International Republican Institute (IRI).

Published Date: April 15, 2010