Monday, January 16, 2017

Developing the “culture of polling” in Georgia (Part 1): Survey criticism in Georgia

[Note: This is a guest post from Natia Mestvirishvili, a Researcher at the International Centre for Migration Policy Development (ICMPD) and former Senior Researcher at CRRC-Georgia. This post was co-published with the Clarion.]

Intense public debate usually accompanies the publication of survey findings in Georgia, especially when the findings are about politics. The discussions are often extremely critical or even call for the rejection of the results.

Ideally, criticism of surveys would focus on the shortcomings of the research process and guide researchers towards better practices, making surveys a better tool for understanding society. In Georgia, unfortunately, most of the current criticism of surveys is counterproductive, driven mainly by an unwillingness to accept findings the critics do not like. This blog post outlines some features of survey criticism in Georgia and highlights the need for constructive criticism aimed at improving research practice – criticism that is essential for developing the “culture of polling” in Georgia.

Criticism of surveys in Georgia often stems from discrepancies between the findings and the critics’ own beliefs about public opinion. Hence, survey critics claim that the findings do not correspond to ‘reality’. Or rather, to their reality.

But are surveys meant to measure ‘reality’? For the most part, no. Rather, public opinion polls measure and report public opinion, which is shaped not only by perceptions but also by misperceptions, i.e. the views and opinions that people hold. There is no ‘right’ or ‘wrong’ opinion. Equally important, these are the opinions people feel comfortable sharing during interviews – while talking to complete strangers. Consequently, and leaving aside deeply philosophical discussions about what reality is and whether it exists at all, public opinion surveys measure perceptions, not reality.

Among the many assumptions that may underlie criticism of surveys in Georgia, critics often suggest that:

  1. They know best what people around them think;
  2. What people around them think represents the opinions of the country’s entire population. 

However, both of these assumptions are wrong, because, in fact:

  1. Although people in general believe that they know others well, they don’t. Extensive psychological research shows that there are common illusions which make us think we know and understand other people better than we actually do – even when it comes to our partners and close friends;
  2. Not only does everyone encounter a limited range of opinions and points of view in their immediate surroundings compared to the ‘entire’ society, but it has also been shown that people are attracted to similarity. As a result, primary social groups are composed of people who are alike. Thus, people tend to be exposed to the opinions of their peers – people who think alike. There are many points of view in other social groups that a person may never come across, let alone understand or hold;
  3. Even if a person has contact with a wide diversity of people, these contacts will never be enough to be representative of the entire society. And even if they were, individuals lack the ability to judge how opinions are distributed within a society.


To make an analogy, assuming the opinions we hear around us can be generalized to the entire society is very similar to zooming in on a particularly large country, like Canada, on a map of a global freedom index, and assuming that since Canada is green, i.e. rated as “Free”, the same is true for the rest of the world. In fact, if we zoom out, we will see that the world is anything but uniformly green. Rather, it is very colorful, with most countries shaded in colors other than green, and “big” Canada is no indication of the state of the rest of the world.



Source: www.freedomhouse.org

People who think that what people around them think (or, more precisely, what they think people around them think) can be generalized to the whole country make a similar mistake.

Instead of objective and constructive criticism based on unbiased and informed opinions and professional knowledge, public opinion polls in Georgia are mostly discussed based on emotions and personal preferences. Professional expertise is almost entirely lacking in those discussions.

Politicians citing questions from the same survey in either a negative or positive context, depending on whether they like the results, is a good illustration of this claim. For example, positive evaluations of a policy or development by the public are often proudly cited by political actors without any doubts about the quality of the survey. At the same time, low and/or decreasing public support for a particular party according to the findings of the same survey is “explained away” by the same actors as poor data quality. Subsequently, politicians may express their distrust in the research institution that conducted the survey.

In Georgia and elsewhere, survey criticism should focus on the research process and aim at its improvement, rather than at rejecting the role and importance of polling. It is the duty of journalists, researchers and policymakers to foster healthy public debate on survey research. Instead of emotional messages aimed at demolishing trust in public opinion polls and pollsters in general, what is needed is rational and careful discussion of the research process and its limitations, of research findings and their meaning and significance, and, where possible, of ways to improve survey practice.

Criticism focused on “unclear” or “incorrect” methodology should be further elaborated by professionally specifying the aspects that are unclear or problematic. Research organizations in Georgia will highly appreciate criticism that asks specific questions aimed at improving the survey process. For example, does the sample design allow for the generalization of the survey results to the entire population? How were misleading questions avoided? How have the interviewers been trained and monitored to minimize bias and maximize the quality of the interviews?

This blog post argued that survey criticism in Georgia is often based on inaccurate assumptions and conveys messages that do not help research organizations improve their practice. These messages are also often dangerous, as they encourage uninformed skepticism towards survey research in general. Rather than spreading such messages, I call on all actors to engage in constructive criticism, which will contribute to improving the quality of surveys in Georgia and, in turn, allow people’s voices to be brought to policymakers and their decisions to be informed by objective data.

The second part of this blog post, to be published on January 23, continues the topic, focusing on examples of misinterpretation and misuse of survey data in Georgia.

Tuesday, January 10, 2017

Sex selective abortion is likely less common in Georgia than previously thought

[This blog post was co-published with Eurasianet. The views presented in this article do not necessarily reflect the views of CRRC-Georgia.]

Sex-selective abortion in Georgia is a topic that has caught international attention. From an Economist article published in September 2013 to a 2015 UN report, Georgia tends to be portrayed as having one of the worst sex-selective abortion problems in the world. Closer inspection of the data, however, suggests the issue may be blown out of proportion.

The first study to draw attention to the sex-selective abortion issue in Georgia was published in 2013 in the journal International Perspectives on Sexual and Reproductive Health, and relied on statistics compiled by the World Health Organization. The authors found a sex-at-birth ratio of 121 boys for every 100 girls born in Georgia from 2005-2009. That number suggested there was a problem: one of the most common estimates of the natural sex-at-birth ratio is 105 boys for every 100 girls, or 95.2 girls for every 100 boys. Any difference between the natural and observed ratios in favor of boys is generally taken as a proxy for sex-selective abortion.

The study suggested that Georgia had one of the largest sex-selective abortion problems in the world.

However, a missing data issue, a rounding error, and an anomalous sex-at-birth ratio in 2008 drove up the ratio reported in the original study. In the article, the sex-at-birth ratio for 2005 to 2009 is actually the average of the ratios in 2005 and 2008 alone. Martin McKee, one of the co-authors of the study, stated, "The figure of 121 boys to 100 girls in 2005-2009 was calculated on the basis of the data submitted to the WHO at the time, from which several years were missing."

The missing data had a very large effect on the results of the study. In 2008, the ratio of boys to girls born in Georgia was exceptionally high at 128 boys born for every 100 girls. In 2005, 113 boys were born for every 100 girls, another high year for Georgia. Using these two years leads to an average of 120 boys born for every 100 girls between 2005 and 2009.

Notably, when asked about the discrepancy between the 121 boys to 100 girls ratio reported in the article and the 120 to 100 ratio in the data, McKee acknowledged, “A very small rounding error crept in.”

With the full data between 2005 and 2009, however, the average sex at birth ratio drops to 113 boys for every 100 girls, rather than 120 – about half the reported deviation from the natural rate.
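To make the effect of the missing years concrete, here is a minimal sketch in Python. The 2005 and 2008 ratios are those cited above; the 2006, 2007 and 2009 values are placeholders chosen only so that the full-period mean matches the reported 113 – the actual annual figures are in the data linked at the end of this post.

```python
# How averaging only the years available to the WHO inflates the figure.
# Values are boys born per 100 girls; 2006, 2007 and 2009 are
# illustrative placeholders, not the actual annual ratios.

available = {2005: 113, 2008: 128}  # years in the data submitted to WHO
full = {2005: 113, 2006: 108, 2007: 107, 2008: 128, 2009: 109}

def mean(ratios):
    return sum(ratios.values()) / len(ratios)

print(round(mean(available)))  # -> 120 (reported as 121 via the rounding error)
print(round(mean(full)))       # -> 113
```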


On top of the missing data, the fact that 2008’s sex-at-birth ratio is an outlier further exaggerates the reported magnitude of sex-selective abortion in Georgia. While the average ratio between 2005 and 2009 was 113 boys for every 100 girls, the average for the same period excluding 2008 is 110 boys for every 100 girls. That is to say, excluding 2008, there were 5 excess boys born for every 100 girls rather than 8.

To flip the statistic around by looking at the ratio of girls born for every 100 boys, the average between 2005 and 2009 was 88 including 2008 and 91 when excluding it. Translating this into the number of missing girls – by subtracting the number of girls born according to official data from the number expected – suggests 6.74 missing girls for every 100 boys born when including the 2008 data. Without 2008, this drops to 4.20.
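A minimal sketch of this arithmetic, using the rounded ratios quoted above (the official data carries more decimal places, so the second figure comes out near 4.3 rather than exactly 4.20):

```python
# Missing girls per 100 boys born: girls expected at the natural ratio
# minus girls observed, both expressed per 100 boys.

NATURAL_RATIO = 105  # natural sex-at-birth ratio: boys per 100 girls

def girls_per_100_boys(boys_per_100_girls):
    """Flip a boys-per-100-girls ratio into girls per 100 boys."""
    return 100 / boys_per_100_girls * 100

def missing_girls_per_100_boys(observed_ratio):
    expected = girls_per_100_boys(NATURAL_RATIO)  # ~95.2
    observed = girls_per_100_boys(observed_ratio)
    return expected - observed

print(round(missing_girls_per_100_boys(113), 2))  # incl. 2008 -> 6.74
print(round(missing_girls_per_100_boys(110), 2))  # excl. 2008 -> 4.33
```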


The exact causes of the situation recorded in 2008 are unknown. Although a higher-than-natural sex-at-birth ratio favoring boys is often explained by sex-selective abortion and infanticide, comparing an estimate of the number of missing girls to the number of abortions over time suggests that some other factor may be at work.

Dividing the number of missing girls by the number of abortions in a year provides an estimate of the share of abortions that would need to be sex-selective to explain the sex-at-birth imbalance. These calculations would suggest that sex-selective abortion increased from 6% of all registered abortions in 2007 to 24% in 2008.
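As a sketch of the full chain of this estimate – from recorded births to the share of registered abortions – using hypothetical placeholder counts rather than the official statistics:

```python
# All input counts below are hypothetical placeholders chosen only to
# illustrate the arithmetic, not actual Georgian birth/abortion data.

NATURAL_GIRLS_PER_100_BOYS = 95.2

def missing_girls(boys_born, girls_born):
    """Girls expected at the natural ratio minus girls actually recorded."""
    expected = boys_born * NATURAL_GIRLS_PER_100_BOYS / 100
    return max(expected - girls_born, 0)

def share_sex_selective(boys_born, girls_born, registered_abortions):
    """Share of registered abortions that would need to be sex-selective
    to fully account for the missing girls."""
    return missing_girls(boys_born, girls_born) / registered_abortions

# Hypothetical year: 30,000 boys born, 26,500 girls, 8,000 abortions
print(f"{share_sex_selective(30_000, 26_500, 8_000):.0%}")  # -> 26%
```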

The calculations suggest one of three things: there was a dramatic increase in sex-selective abortions in 2008, the number of unregistered abortions dramatically increased and these were also predominantly sex-selective, or something else was driving the anomalous sex-at-birth ratio.

Many possible explanations fall into the last category. Notably, given the often poor state of data collection at the municipal level in Georgia, where births are recorded, recording error could explain the discrepancy.

The data alone cannot tell us whether 2008 saw a dramatic increase in the number of sex-selective abortions or something else drove the anomalous sex-at-birth ratio. What is clear is that Georgia’s problem with sex-selective abortion is smaller than often portrayed.

That isn’t to say there is no problem: in 2015, there were still about 4 missing girls for every 100 boys born.

Understanding the magnitude of the problem, though, is a first step towards addressing it.

Dustin Gilbreath is a Policy Analyst at CRRC-Georgia. He co-edits the organization’s blog, Social Science in the Caucasus.

To view the data used to calculate the figures used in this article, click here.

Monday, January 02, 2017

Three months before the 2016 Parliamentary elections: Trust in the Central Election Commission and election observers in Georgia


The June 2016 CRRC/NDI Public attitudes in Georgia survey, conducted three months before the Parliamentary elections, provides interesting information about trust in the Central Election Commission (CEC) and election observers, both local and international.

The CEC’s role in conducting elections in Georgia has been subject to contentious political debates about the organization’s impartiality. The survey data demonstrates the public’s lack of trust in the institution. In June, only 29% of the population of Georgia believed that the CEC would conduct parliamentary elections “well” or “very well”. In contrast to this general opinion, a majority (60%) of likely voters for the incumbent Georgian Dream party believed the same, while less than a third of likely voters for the two other parties that won seats in parliament (the United National Movement and Alliance of Patriots of Georgia) believed that the CEC would conduct the elections “well” or “very well”.


Note: The shares of those reporting they would vote for either Movement State for People or Alliance of Patriots of Georgia were very small (respectively, 4% and 3%), and the results for the supporters of these two parties are only indicative.

Unsurprisingly, trust in Georgian and international observers also differs. Overall, the population of Georgia tends to trust international observers more than Georgian observers. Forty-eight percent report either “fully trusting” or “trusting” international observers, compared to 34% who report trust in Georgian observers. The gaps in trust between party supporters are even wider: while 63% of United National Movement supporters report either “fully trusting” or “trusting” international observers, only 29% “fully trust” or “trust” Georgian observers.


Note: The shares of those reporting they would vote for either Movement State for People or Alliance of Patriots of Georgia were very small (respectively, 4% and 3%), and the results for the supporters of these two parties are only indicative.

To explore the CRRC/NDI June 2016 survey findings, visit CRRC’s Online Data Analysis portal. On the topic of anomalies in the voting process, CRRC-Georgia recently conducted the Detecting Election Fraud through Data Analysis (DEFDA) project regarding the 2016 parliamentary elections. Preliminary findings can be found here. CRRC-Georgia has also previously published blog posts on the electoral process in Georgia, including on government spending before elections and public opinion shifts before and after elections.


Thursday, December 29, 2016

New Year’s twice, even if you don’t believe in Santa

[This piece originally appeared in Georgian on Liberali, here]

December. Cold. Christmas decorations in the streets. New Year. Champagne. Satsivi and Gozinaki. Presents. Santa Claus. December 25. Or January 6? Then New Year’s once again, but the old one. 2017 resolutions and the wish on New Year’s Eve that is bound to come true.

What are the New Year’s plans of Georgia’s population? CRRC-Georgia asked in a phone survey of adults in Georgia conducted on December 1-13. Unsurprisingly, people in Georgia follow established traditions. A large majority (73%) plan to ring in the New Year at home. Another 9% will celebrate in a friend’s or relative’s home. Celebrating the New Year outside – in the street, or in a restaurant or café – is not yet common: only 1% of Georgians plan to do so. Another 15% had not yet decided in the first half of December where they would celebrate the New Year.



Since a large majority of people celebrate New Year’s at home, holiday decorations are important. Only 4% of the population does not plan to have a Christmas tree. A large majority (76%) will have an artificial tree and about one tenth (13%) a natural tree. Meanwhile, 59% also plan to have a Chichilaki at home.

Traditionally, one of the main components of New Year’s celebrations is the New Year’s feast. Of the dishes from the New Year’s table, over one third (38%) of Georgia’s adult population singles out Satsivi, about a fourth (24%) Gozinaki and about one tenth (9%) fried suckling pig as their favorite.



For many, New Year’s is associated with presents. About two thirds of Georgia’s population (62%) plan to buy presents for family members. Some people used to believe or still believe that presents come from Santa (Tovlis Babu). It appears that about one third of Georgia’s adult population believed in Santa through the age of 10. However, almost one fourth (24%) never believed in Santa. Despite this, the magic of New Year’s Eve is not lost on the majority. Two thirds of the Georgian population (66%) have made a wish on New Year’s.

New Year’s, with its feast, Christmas tree and fireworks, is celebrated twice in Georgia. An absolute majority of people (88%) say they celebrate the Old New Year as well as the new one. People also seem to be interested in the Chinese calendar and closely follow which animal is the symbol of the coming year. A majority of Georgians (68%) plan to buy or have already bought a rooster souvenir for 2017, the year of the rooster.

Just as New Year’s has two days, so too does Christmas – the latter is celebrated on different dates by different Christian churches. About two thirds of the Georgian population (64%) believes Christmas should be celebrated on January 7. However, about one tenth of people (12%) say Christmas in Georgia should be celebrated on December 25. At the same time, not so small a share of the Georgian population (18%) reports that Christmas, like New Year’s, should be celebrated on both days.



Having holidays twice is not so uncommon in Georgia after all.

Thursday, December 22, 2016

Electoral forensics on the 2016 parliamentary elections

In order to help monitor the fidelity of the October 2016 parliamentary election results, CRRC-Georgia carried out quantitative analysis of election-related statistics under the auspices of the Detecting Election Fraud through Data Analysis (DEFDA) project. Within the project we used methods from the field of election forensics, a field of political science that attempts to identify election day issues by looking at statistical patterns in election returns. This blog post reports the results of our analysis of the 2016 proportional election results. The full report of the analysis is available here.

Our analysis suggests that the results of the 2016 elections were roughly equivalent to those of the 2012 proportional list elections.

Before going further into the results, two caveats and a note on methods are needed. To start with the two caveats:

  • Results are probabilistic. A test may return a statistically anomalous result, which suggests that the result is highly unlikely – though not impossible – to have occurred by chance alone. The way we calculate the test statistics is expected to produce about 1 false positive for every 100 tests performed.
  • If a test does suggest a statistical anomaly, it does not necessarily mean that election-related malfeasance caused the result, but that it may have. Statistical anomalies can be caused by benign activities such as strategic voting or divergent voting patterns within a region. Electoral malfeasance does often cause a positive test result, however. Hence, substantive knowledge and judgment of each positive test are required to determine whether malfeasance actually did occur.

When it comes to methods, to be frank, they are relatively complex. Rather than dive into the details here, we recommend that interested readers see Hicken and Mebane, 2015, here. Below we present the results of the following election forensics tests (a simplified sketch of the first test appears after the list):

  • Mean of second digit in turnout;
  • Skew of turnout;
  • Kurtosis of turnout;
  • Mean of the final digit in turnout;
  • Frequency of zeros and fives in the final digit in turnout;
  • Unimodality test of turnout distribution.
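
As an illustration, here is a minimal sketch in Python of a simplified version of the first test in the list: compare the mean of the second significant digit of precinct turnout counts to the value expected under Benford’s law (roughly 4.187), using a bootstrapped confidence interval. This is a bare-bones approximation of the approach described in Hicken and Mebane, not the exact procedure we used, and the turnout counts below are randomly generated placeholders rather than actual election returns.

```python
import random

# Expected mean of the second significant digit under Benford's law.
BENFORD_2ND_DIGIT_MEAN = 4.187

def second_digit(n):
    """Second significant digit of n, or None for single-digit counts."""
    s = str(abs(n))
    return int(s[1]) if len(s) > 1 else None

def bootstrap_ci(values, reps=10_000, alpha=0.01):
    """Percentile bootstrap CI for the mean, resampling with replacement.
    alpha=0.01 mirrors the ~1-in-100 false positive rate noted above."""
    means = sorted(
        sum(random.choices(values, k=len(values))) / len(values)
        for _ in range(reps)
    )
    return means[int(reps * alpha / 2)], means[int(reps * (1 - alpha / 2)) - 1]

def second_digit_test(turnout_counts):
    """True if the test is 'set off': the Benford mean falls outside
    the bootstrapped confidence interval of the observed mean."""
    digits = [d for d in map(second_digit, turnout_counts) if d is not None]
    lo, hi = bootstrap_ci(digits)
    return not (lo <= BENFORD_2ND_DIGIT_MEAN <= hi)

# Hypothetical usage with placeholder precinct-level turnout counts:
counts = [random.randint(100, 1500) for _ in range(3500)]
print(second_digit_test(counts))
```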

Results

In 2016, three of the six tests were set off:



By comparison, in 2012 two of the six tests were set off. However, a third test – of the second digit mean – was exceptionally close to being set off. Due to the nature of the method – bootstrapping uses resampling with replacement, so results vary slightly from run to run – this test could just as well have been set off if run again.


Given the borderline nature of the 2012 tests, providing a conclusive comparison of the two elections is somewhat difficult. However, the test results are roughly equivalent, the tests are indicative rather than definitive, and both the 2012 and 2016 elections have by most accounts been considered broadly free and fair despite clear issues. We therefore consider the 2016 election results to be broadly free and fair as well.

For more on the subject, take a look at our final report for the DEFDA project, available here.

Note: The DEFDA project is funded by the Embassy of the United States of America in Georgia, however, none of the views expressed in the above blog post represent the views of the US Embassy in Georgia or any related US Government entity.

Monday, December 19, 2016

Number of logical inconsistencies in 2016 election protocols declines

Following the 2016 parliamentary elections, a number of politicians questioned the results based on logical inconsistencies in election protocols. Some of the election protocols, which summarize election results for individual voting stations, reported that more voters had come to the polls than had actually cast ballots, while others reported that more votes had been cast than voters had come to the polling station. While both did happen, the Central Election Commission made dramatic improvements compared to Georgia’s 2012 parliamentary elections.

In the 2012 parliamentary elections, according to an analysis of data the Central Election Commission provided, in the proportional list elections alone there were over 30,000 more voters who came to the polls than ballots cast. In 2016, there were fewer than 3,000 such voters – a clear improvement.

Not only were there more voters than votes in many precincts – in others, there were more votes cast than voters who came to the polls, again according to the official record. In the 2012 parliamentary elections, there were 696 more votes than signatures for those votes. By comparison, in 2016 there were 76 – again a clear improvement.

A third logical inconsistency present in the data is declining turnout. In the 2012 elections, 8 precincts recorded more votes at 12 PM than at 5 PM – that is to say, declining turnout. In 2016, by contrast, only one precinct reported declining turnout – again, a clear improvement.
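To make these checks concrete, here is a minimal sketch in Python that runs all three over a per-precinct protocol record. The field names are assumptions for illustration only, not the CEC’s actual data schema.

```python
from dataclasses import dataclass

@dataclass
class Protocol:
    precinct_id: str
    signatures: int    # voters who signed the voter list
    ballots: int       # ballots recorded as cast
    turnout_12pm: int  # cumulative turnout reported at 12 PM
    turnout_5pm: int   # cumulative turnout reported at 5 PM

def inconsistencies(p):
    """Flag the three logical inconsistencies discussed above."""
    issues = []
    if p.signatures > p.ballots:
        issues.append(f"{p.signatures - p.ballots} more voters than ballots")
    elif p.ballots > p.signatures:
        issues.append(f"{p.ballots - p.signatures} more ballots than voters")
    if p.turnout_5pm < p.turnout_12pm:
        issues.append("declining turnout between 12 PM and 5 PM")
    return issues

# Hypothetical example record:
record = Protocol("01.02.03", signatures=512, ballots=509,
                  turnout_12pm=210, turnout_5pm=450)
print(inconsistencies(record))  # -> ['3 more voters than ballots']
```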



While the CEC has clearly improved its recording of the vote in 2016, and small mismatches are bound to happen, any voter may reasonably ask themselves – if the CEC cannot make election protocols add up, how do I know my vote counted? Thus, we strongly recommend that the CEC make efforts to minimize the number of logical inconsistencies in future elections. Some recommendations on how the CEC might do so are available in our report on the 2016 elections.

Note: The DEFDA project is funded by the Embassy of the United States of America in Georgia, however, none of the views expressed in the above blog post represent the views of the US Embassy in Georgia or any related US Government entity.


Thursday, December 08, 2016

Georgians and other ethnic groups: understanding (in)tolerance (Part 3)


As the first blog post in this series highlighted, approval among Georgians of doing business with members of other ethnic groups is, overall, declining. When it comes to Georgian women marrying men of other ethnicities, Georgians are even less approving. These attitudes vary by settlement type, age, and level of education. As in the previous blog posts in this series, only the answers of ethnic Georgians are presented here.

Georgians living in the capital report the highest approval of Georgian women marrying men of other ethnicities. On average, there is a 14 percentage point difference between the population of the capital and rural settlements. The biggest gap is with Americans (19 percentage points): 54% of Georgians in Tbilisi approve of Georgian women marrying Americans, while only 35% of the rural population say the same. The gap is smallest with Russians: 55% in the capital and 45% in rural settlements approve of Georgian women marrying Russians.



Note: Only shares of those answering “Approve” are presented on the charts in this blog post. 

Differences between the answers of people of different ages are also noteworthy, though the gaps are smaller. Overall, younger people show greater approval of Georgian women marrying foreigners. The biggest gap is observed in respect to marrying Americans (14 percentage points): while 50% of young people aged 18 to 35 approve of Georgian women marrying Americans, only 36% of older people (56+) say the same.


Differences by education level are also informative. The higher a person’s level of education, the more s/he tends to approve of Georgian women marrying men of other ethnicities. On average, there is a ten percentage point gap between people with secondary or lower education and those with tertiary education. As with settlement types and age groups, the largest gap is observed in relation to Americans. Only 37% of people with secondary or lower education approve of Georgian women marrying Americans, while 51% of people with tertiary education report the same.



Approval by Georgians of Georgian women marrying men of other ethnicities varies by settlement type, education, and, to a lesser extent, by age. Interestingly, the gaps between the groups are consistently greatest when it comes to (dis)approval of Georgian women marrying Americans.

To take a deeper look at the data used in this blog post, try out our online data analysis tool.