Counterfeit Goods and Election Observation
The European Stability Initiative (@ESI_eu) recently published a provocative report on election observation, focusing on Azerbaijan's presidential election and its implications for the field. The report details disputes between the OSCE Office for Democratic Institutions and Human Rights (ODIHR) and other European election observers, describes the activities of many accredited monitoring groups with dubious methodologies, and concludes that professional election observation is at great risk of losing its value.
The report notes that fifty groups observed the election, but only one (ODIHR) issued a critical assessment.* ODIHR was openly challenged at its press conference, in part because it could be portrayed as an outlier among international observer assessments. But ODIHR was also the only organization to deploy well-trained long-term and short-term observers and to rely on clearly articulated principles to guide its analysis and assessment.
ODIHR's problem is analogous to that of manufacturers whose products are counterfeited and distributed around the world. In markets across Eurasia and elsewhere, consumers can find "brand name" products that are quickly exposed as cheap knock-offs: the materials and manufacturing lack quality and detail, and the goods deteriorate faster than the genuine articles. Scholars in economics, business, and marketing have assessed this problem, emphasizing strategies that can counteract counterfeiting: strong branding, advertising that helps consumers identify fake products, reliable customer service that allows consumers to resolve problems with authentic goods, and the regular introduction of new, innovative products to undermine cheap copies (see, for example, Yang and Fryxell 2009 and Qian 2012).** These lessons are also valuable for the election observation community.
While ODIHR produces extensive reports (although it could do more - see below), its contribution is too easily summarized as a simple judgment: free and fair/not free and fair. The media may prefer this concise sound bite, but the oversimplification of an extensive process allows other organizations to offer competing judgments that can be perceived as equivalent when they are not. It may be useful to present a more nuanced summary of compliance, or lack of compliance, with international standards across multiple categories. Long-term observation permits assessments of ballot access and competition, voter registries and suffrage, media access, and dispute adjudication. Short-term observation permits assessments of the process in polling stations and the compilation of votes at district/territorial commissions. The OSCE could easily disaggregate its assessment, indicating which parts of the process meet or do not meet international standards. While the text of its reports often makes these connections, it could also develop a summary table or figure that addresses each component of the electoral process. Most other organizations are not well-equipped to provide these assessments, distinguishing the depth and scope of ODIHR's efforts.
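A disaggregated summary of this kind could be as simple as one rating per category. The sketch below is purely illustrative: the category names follow the paragraph above, but the ratings scale and values are invented, not ODIHR's actual rubric.

```python
# Hypothetical disaggregated compliance summary. Categories follow the
# long-term/short-term observation areas discussed above; the ratings
# are invented placeholders, not any real mission's findings.
ASSESSMENTS = {
    "Ballot access and competition": "does not meet standards",
    "Voter registries and suffrage": "partially meets standards",
    "Media access": "does not meet standards",
    "Dispute adjudication": "partially meets standards",
    "Polling station procedures": "meets standards",
    "Vote compilation (district/territorial)": "does not meet standards",
}

def summary_table(assessments):
    """Render a plain-text compliance table, one row per category."""
    width = max(len(category) for category in assessments)
    return "\n".join(
        f"{category.ljust(width)} | {rating}"
        for category, rating in assessments.items()
    )

print(summary_table(ASSESSMENTS))
```

Even a plain table like this would make it harder to compress a mission's findings into a single free/not-free sound bite.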
On a related note, ODIHR should consider comparing and contrasting its methods and conclusions with those of other organizations more aggressively. While it is a diplomatic organization that tends to avoid direct confrontation, its ability to conduct election observation will be undermined by the proliferation of lower-quality observation reports, especially if it does not challenge them directly.
ODIHR's election observation model has not changed much over the years, and many aspects function well. But perpetrators of fraud have been far more nimble and flexible, altering their on-the-ground tactics to be less visible to the short-term observers (STOs) deployed to polling stations. Opportunities exist to adapt methods and innovate in the face of this changing landscape.
- Make data available to the public. ODIHR must protect the identity of poll workers who could be at risk of punishment, but it could strip identifiable information from its reports and publish the raw data on polling station assessments. Not only would public release of the data permit others to evaluate how polling-station-level observations are translated into final reports, but it would also set a new standard for every other organization. The scrutiny may be uncomfortable at first, but it could yield more precise measurements as data-gathering instruments are improved.
- More explicitly take advantage of random assignment to produce a representative sample of polling stations. I have served as an STO on several observation missions, and my assignments were never randomized. To be fair, logistical considerations (terrain, distance), concerns about deterring improper behavior, and other factors influenced the visitation schedule on election day. Based on my interactions, STOs also seem to prefer polling stations with fewer ballots that are close to the territorial commission for the overnight vote count. Whether or not these anecdotal observations reflect a wider tendency in site selection, managing STO itineraries to approximate random selection would improve the quality of reports designed to present an accurate picture of the election process across the whole territory of the country.
- Better exploit data on the back end to develop more rigorous assessments. ODIHR employs talented statisticians on its observation missions; their capabilities should be more fully utilized. Data visualization and more sophisticated analysis could support ODIHR's assessments. For example, data from past elections could be compiled and scaled so that each election could be placed on a range of outcomes for observable items, such as the proportion of observed polling stations reporting problems in the vote count.*** While more extensive analysis and visualization could not be completed in time for preliminary reports issued the day after the election, they could be incorporated into final reports.
- Add technology for data gathering to complement the work of official LTOs and STOs. I have co-authored a working paper that uses data from crowdsourced reports**** collected and vetted by Ukrainian NGOs. The Ushahidi platform is becoming more widespread, and adding this layer of information to the arsenal (with all of the attendant caveats)***** could be another useful innovation. ODIHR, or other organizations, could recruit and train citizens who would document and report findings via mobile phones, websites, or other means of communication.
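On the first point, stripping identifying fields before publication is mechanically simple. The sketch below assumes station-level reports stored as records with named fields; the field names are hypothetical, since real STO report forms differ.

```python
# Minimal sketch of anonymizing a station-level report before release.
# Field names are hypothetical, not ODIHR's actual report schema.
SENSITIVE_FIELDS = {
    "observer_name",
    "poll_worker_names",
    "station_chair",
    "contact_phone",
}

def anonymize(record):
    """Return a copy of a report with identifying fields removed."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

report = {
    "station_id": "PS-017",
    "count_problems": True,
    "observer_name": "Jane Example",      # dropped before publication
    "poll_worker_names": ["A. Example"],  # dropped before publication
}
public_record = anonymize(report)
```

The substantive observations (station identifier, reported problems) survive, while every field that could expose an individual is removed.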
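On randomized assignment: one way to approximate a simple random sample is to shuffle the full list of polling stations and partition it among STO teams. This is a sketch of the idea, not ODIHR's procedure; station IDs and team names are invented, and real itineraries would still need logistical adjustments.

```python
import random

def assign_stations(stations, teams, seed=None):
    """Randomly partition polling stations among STO teams so the
    visited set approximates a simple random sample of the territory."""
    rng = random.Random(seed)   # seeded for a reproducible assignment
    shuffled = stations[:]      # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    # Round-robin over the shuffled list: team i gets every len(teams)-th
    # station starting at position i, so team sizes differ by at most one.
    return {team: shuffled[i::len(teams)] for i, team in enumerate(teams)}

stations = [f"PS-{n:03d}" for n in range(1, 101)]  # hypothetical IDs
itineraries = assign_stations(
    stations, ["team-A", "team-B", "team-C"], seed=42
)
```

In practice terrain and distance would force deviations, but starting from a randomized partition makes those deviations visible and documentable rather than baked into observers' preferences.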
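On back-end analysis: the cross-election scaling suggested above could look like the following sketch. The election labels and observation flags are entirely invented; each election's problem rate is computed from its observed stations and then min-max scaled so every election sits on a common 0-1 range.

```python
# Invented example data: for each election, one boolean per observed
# polling station (True = problems reported in the vote count).
observations = {
    "Country X 2008": [True, False, False, True, False, False, False, False],
    "Country X 2013": [True, True, False, True, False, True, False, False],
    "Country Y 2012": [False, False, False, True, False, False, False, False],
}

def problem_rate(flags):
    """Share of observed stations reporting problems."""
    return sum(flags) / len(flags)

rates = {election: problem_rate(flags)
         for election, flags in observations.items()}

def minmax_scale(values):
    """Place each election on a 0-1 range relative to the others."""
    lo, hi = min(values.values()), max(values.values())
    span = (hi - lo) or 1.0   # avoid division by zero if all rates match
    return {k: (v - lo) / span for k, v in values.items()}

scaled = minmax_scale(rates)
```

A figure built from such scaled rates would let readers see at a glance whether a given election is an outlier among comparable contests, subject to the comparability caveats in the footnote.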
===========================================================
* The US State Department was also critical, but refers to the ODIHR mission as the primary source material for its assessment.
** My summary of the literature is cursory as it is not my specialty. But, I found many articles related to the issue of counterfeits in various journals and other publications. The issues I note above were raised in the pieces cited and in other papers. I suspect that my limited assessment of the literature has missed some nuance and additional findings.
*** While it is true that observer teams may vary in their assessments of "problems", and these observations could vary across time and space, it would be valuable to present these types of visualizations and openly discuss any problems of cross-national and temporal comparisons.
**** Please see notes in the paper about the definition of "crowdsourcing" and its implications.
***** We also address issues with data reporting in the paper.