(Mis)Measuring Sensitive Attitudes with the List Experiment: Solutions to List Experiment Breakdown in Kenya

Abstract:
List experiments (LEs) are an increasingly popular survey research tool for measuring sensitive attitudes and behaviors. However, there is evidence that list experiments sometimes produce unreasonable estimates. Why do list experiments “fail,” and how can the performance of the list experiment be improved? Using evidence from Kenya, we hypothesize that the length and complexity of the LE format make it costlier for respondents to complete and thus prone to comprehension and reporting errors. First, we show that list experiments encounter difficulties with simple, nonsensitive lists about food consumption and daily activities: over 40 percent of respondents provide inconsistent responses between list experiment and direct question formats. These errors are concentrated among less numerate and less educated respondents, evidence that they are driven by the complexity and difficulty of list experiments. Second, we examine list experiments measuring attitudes about political violence. The standard list experiment reveals lower rates of support for political violence than simply asking directly about this sensitive attitude, which we interpret as list experiment breakdown. We evaluate two modifications to the list experiment designed to reduce its complexity: private tabulation and cartoon visual aids. Both modifications greatly enhance list experiment performance, especially among respondent subgroups for whom the standard procedure is most problematic. The paper makes two key contributions: (1) showing that techniques such as the list experiment, which have promise for reducing response bias, can introduce different forms of error associated with question complexity and difficulty; and (2) demonstrating the effectiveness of easy-to-implement solutions to the problem.

Survey researchers are often concerned with measuring sensitive attitudes and behaviors, including support for political violence, experience with corruption, and racial attitudes. A major challenge for studying such topics with surveys is social desirability bias: many individuals do not want to reveal socially unacceptable or potentially illegal attitudes and behaviors. Scholars have developed a number of strategies for reducing sensitivity-driven measurement error. The list experiment—or “item count technique”—is one approach that is increasingly popular in political science and related disciplines. In this paper, we evaluate two modifications to standard list experiment procedures. The first allows respondents to privately tabulate the number of items in the list that apply to them, aiding accurate response while creating additional assurance of privacy. The second adds visual aids intended to reduce respondent error, particularly among respondents who find the instructions and demands of a list experiment challenging.

List experiments (LEs) reduce survey error by asking respondents about sensitive issues indirectly: sensitive items are embedded in a list with several nonsensitive items, and participants are asked how many of the items they agree with or that apply to them, but not which ones (see the examples in tables 3 and 4 later in this paper). This approach reduces the perceived costs and risks of answering honestly.
However, enthusiasm surrounding the list experiment has drawn attention away from its potential limitations. The length and complexity of the question format make LEs prone to comprehension and reporting errors. Importantly, such errors may be concentrated among certain population subgroups: those without experience answering complex survey questions, or those who most prevalently hold the sensitive attitude of interest. Unfortunately, identifying the extent to which these issues bias list experimental data is challenging because survey respondents’ “true” answers to sensitive questions are usually unknown (Simpser 2017). Nonetheless, LEs often break down in obvious ways, producing estimates that are lower than the direct question, or even nonsensical ones, such as negative estimates (Holbrook and Krosnick 2010). In that light, we are motivated by two questions: Why do list experiments sometimes “fail” or break down? How can the performance of the list experiment be improved?

In this paper, we examine the LE and its ability to reduce survey error in Kenya, where we sought to measure public support for political violence. First, we investigate the performance of the LE using lists of simple, nonsensitive items about food consumption and daily activities. We show that the LE encounters difficulties with these simple and nonsensitive lists: over 40 percent of respondents provide inconsistent responses in LE versus direct question formats. These “failures” are concentrated among less numerate and less educated respondents, evidence that errors are driven by LE question complexity and difficulty. Second, we turn to list experiments designed to measure attitudes about political violence. We find that the standard LE estimates lower rates of support for political violence than those obtained by asking directly. These underestimates are most pronounced among less educated participants and those who provided inconsistent responses in the nonsensitive LEs described above, evidence that technique difficulty is driving list experiment breakdown. Finally, we evaluate two low-cost, context-appropriate modifications to the list experiment designed to reduce the complexity of the technique. The first allows for private tabulation, and the second combines private tabulation with cartoon visual aids. We find that both modifications improve list experiment performance, including among the subgroups that had difficulty with the nonsensitive LE.

This paper contributes to the literature on survey response bias in two ways. First, we show that indirect techniques such as the list experiment, which have promise for reducing response bias, can introduce different forms of error that are associated with question complexity and difficulty. This is important because the survey literature is populated with list experiments that perform well; we highlight limitations that might not be obvious from reading this published literature because of publication bias and the “file drawer problem.” Our aim is not to suggest that all LEs are problematic, but rather to draw attention to these limitations. Our second contribution is demonstrating that relatively easy-to-implement and low-cost modifications can greatly enhance the performance of the technique, especially among populations where the standard procedure is most problematic.
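To make the inconsistency measure described above concrete, here is a minimal sketch in Python. The simulated data, variable names, and error rates are our own illustrative assumptions, not the paper's replication code or its actual error process; the point is only how a consistency check between the two formats can be computed when the same respondents answer a nonsensitive list both ways.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative simulated data (not the paper's data): 500 respondents answer a
# 4-item nonsensitive list (e.g., foods eaten yesterday) twice: item by item
# directly, and as a single count in list-experiment format.
n_respondents, n_items = 500, 4
direct = rng.integers(0, 2, size=(n_respondents, n_items))  # 0/1 direct answers
count_from_direct = direct.sum(axis=1)                      # count implied by direct answers

# Assumed reporting-error process for the LE format: some respondents miscount.
error = rng.choice([-1, 0, 1], size=n_respondents, p=[0.2, 0.6, 0.2])
reported_count = np.clip(count_from_direct + error, 0, n_items)

# A response is "inconsistent" when the LE-format count does not match the
# count implied by the direct answers to the same items.
inconsistent = reported_count != count_from_direct
print(f"share inconsistent: {inconsistent.mean():.1%}")
```

Under this definition, a respondent is inconsistent whenever the count reported in LE format differs from the count implied by their item-by-item direct answers, which is the kind of discrepancy the nonsensitive-list validation exercise is designed to surface.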
Modifications designed to reduce item complexity and difficulty can be adapted by applied survey researchers working in a range of contexts.

Measuring Sensitive Attitudes with the List Experiment

Attitudes toward violence are emblematic of the challenges of studying sensitive topics. Support for political violence is subject to under-reporting biases because such violence is illegal and its approval is generally socially undesirable. Past research on violence has addressed sensitivity-driven measurement error by alleviating the perceived costs and risks of answering truthfully. Strategies include asking about violent behavior indirectly (Humphreys and Weinstein 2006), administering sensitive survey modules separately from a larger survey (Scacco 2016), anticipating or controlling for enumerator ethnicity effects (Kasara 2013; Carlson 2014; Adida et al. 2016), or one of several experimental approaches: endorsement experiments (Blair et al. 2013; Lyall, Blair, and Imai 2013), the randomized response technique (Blair, Imai, and Zhou 2015), or the list experiment.

The list experiment is a promising alternative to direct questions, offering respondents greater secrecy for sensitive responses (e.g., Kuklinski, Cobb, and Gilens 1997; Corstange 2018; Gonzalez-Ocantos et al. 2011; Blair and Imai 2012; Glynn 2013). The LE presents a sensitive statement as one of several items in a list and asks respondents to report how many of the list items apply to them. Participants are randomly assigned to either a treatment list that includes the sensitive item or a control list that does not. Because the lists are otherwise identical and assignment is randomized, the difference in means between the treatment and control lists can be attributed to the sensitive item. If successfully implemented, the technique yields an estimate of the prevalence of the sensitive attitude.

Two assumptions must be satisfied for LE estimates to be valid: “no liars” and “no design effects” (Blair and Imai 2012). The first states that respondents “do not lie about the sensitive item” (Rosenfeld, Imai, and Shapiro 2016, 795). The second requires that adding the sensitive item to a list does not change the way respondents engage with the control items. Lists are generally designed to avoid “floor” and “ceiling” effects, which undermine the technique’s ability to keep the sensitive response undetectable (Glynn 2013).

For single LEs, the estimated prevalence of the sensitive item is the difference in means between the treatment and control groups (e.g., Blair and Imai 2012; Streb et al. 2008). For example, if the control group mean is 2 and the treatment group mean is 2.2, the estimated prevalence in the sample would be 20 percent. In the double list experiment design (DLE), which uses two sets of lists such that all respondents receive one control list and one treatment list, the estimated prevalence is the average of the two single-list difference-in-means estimates.
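The arithmetic of the estimator is simple enough to sketch in code. The following Python example is our own illustration of the standard difference-in-means estimator, not the authors' analysis code; the function name, the simulated data, and the conventional two-sample standard error are all our assumptions.

```python
import numpy as np

def le_prevalence(treatment_counts, control_counts):
    """Difference-in-means estimate of the sensitive item's prevalence,
    with a conventional two-sample standard error."""
    t = np.asarray(treatment_counts, dtype=float)
    c = np.asarray(control_counts, dtype=float)
    est = t.mean() - c.mean()
    se = np.sqrt(t.var(ddof=1) / len(t) + c.var(ddof=1) / len(c))
    return est, se

rng = np.random.default_rng(1)

# Simulated single LE: four control items each held with probability 0.5, and
# a sensitive item with true prevalence 0.2. The control mean is about 2.0 and
# the treatment mean about 2.2, matching the worked example in the text.
control = rng.binomial(4, 0.5, size=1000)
treatment = rng.binomial(4, 0.5, size=1000) + rng.binomial(1, 0.2, size=1000)
est, se = le_prevalence(treatment, control)
print(f"single-LE estimate: {est:.3f} (SE {se:.3f})")  # approximately 0.2

# Double list experiment (DLE): with lists A and B, each respondent answers one
# treatment and one control list, and the two single-list estimates are averaged:
#   est_dle = 0.5 * (est_A + est_B)
```

Averaging the two lists in the DLE uses every respondent in both a treatment and a control comparison, which is why the design improves statistical efficiency relative to a single list.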
Authors: Eric Kramon; Keith Weghorst
Volume: 83
Pages: 236-263
DOI: 10.1093/poq/nfz009
Language: English
Journal: Public Opinion Quarterly

Journal Abbreviation: PUBLIC OPIN QUART

Impact Factor: 2.7 · Review journal: Yes · Open Access: No · Warning status: not on the warning list · ISSN: 0033-362X · Indexed in: Scopus · Publisher: Oxford University Press

Journal Overview

Annual article output: 49
Share of articles from China: 0%
Self-citation rate: 10.3%
Research focus: Multiple
Journal website: https://academic.oup.com/poq

Quality Indicators
Research article share: 96.49%
OA citation share: 19.75%
Retraction share: 0.00%
Post-publication correction share: 0.00%


A journal warning is not an evaluation of individual papers, much less a rejection of every result published in a warned journal. The Early Warning List of International Journals (Trial) is intended to remind researchers to choose publication venues carefully and to prompt publishers to strengthen journal quality management.

Warned journals are identified through a combination of qualitative and quantitative methods. Analytical dimensions and evaluation indicators are established through expert consultation, and the list is then generated from objective data on those indicators.

Specifically, journals with risk characteristics and potential quality problems are identified by jointly assessing article volume, author internationalization, rejection rate, article processing charges (APC), the journal superiority index (期刊超越指数), self-citation rate, and retraction records. Warning levels are then assigned as high, medium, or low according to each journal's data, in decreasing order of risk.

The Early Warning List of International Journals (Trial) is compiled on the principles of objectivity, prudence, and openness. The journal ranking (分区表) team looks forward to working with the research community and academic publishers to strengthen the scientific spirit and foster a healthy environment of academic integrity, and sincerely welcomes suggestions on the list's analytical dimensions, usage, and journals of concern.

Warning History

Edition / Status
2024 edition (released February 2024): not on the warning list
2023 edition (released January 2023): not on the warning list
2021 edition (released December 2021): not on the warning list
2020 edition (released December 2020): not on the warning list

JCR Quartiles (WOS quartile: Q1)

WOS SCI quartiles are assigned by Web of Science: within each subject category, journals are ranked by impact factor and divided into four equal groups, Q1 through Q4, with Q1 denoting the highest tier (the commonly cited "Q1 journals").

(Latest 2021-2022 edition)
COMMUNICATION: Q1
POLITICAL SCIENCE: Q1
SOCIAL SCIENCES, INTERDISCIPLINARY: Q1

About the 2019 CAS Journal Ranking, Upgraded Edition (Trial)

The upgraded edition (trial) of the ranking table addresses the mismatch between the existing disciplinary classification of journals and the development and convergence of disciplines. Because interdisciplinarity is increasingly prominent in contemporary research, discipline-based classifications are prone to controversy. To break these constraints on journal evaluation, the upgraded scheme first builds a paper-level topic system, then computes each paper's influence within its topic, and finally aggregates the per-paper scores for each journal into a "journal superiority index" that serves as the basis for ranking.

The upgraded edition (trial) has two advantages. First, the paper-level topic system both reflects interdisciplinarity and precisely reveals the multidisciplinary character of a journal's output. Second, replacing the impact factor with the journal superiority index removes distortions caused by the impact factor's mathematical flaws. Overall, the upgraded edition overcomes bottlenecks in journal evaluation such as disciplinary classification and indicator selection, reveals journal influence more comprehensively, and offers a path toward research evaluation beyond the "four onlys." The underlying research has been recognized by international peers and published in leading scientometrics journals.

The 2019 upgraded edition (trial) of the CAS National Science Library journal ranking table included Social Sciences Citation Index (SSCI) journals in the ranking for the first time. It defines 18 major disciplinary categories spanning the natural and social sciences. The basic and upgraded (trial) editions will coexist for a three-year transition, during which universities and research institutes are expected to continue using the basic edition as their assessment reference. Note: the official CAS ranking WeChat account "fenqubiao" provides only basic-edition data; upgraded-edition data are not yet available there.

CAS Partition Table

December 2022 (latest upgraded edition)
Major category: 社会学 (Sociology), Q1
Minor categories: COMMUNICATION (传播学) Q2; POLITICAL SCIENCE (政治学) Q2; SOCIAL SCIENCES, INTERDISCIPLINARY (社会科学:跨领域) Q1

December 2021 (upgraded edition)
Major category: 法学 (Law), Q2
Minor categories: COMMUNICATION Q2; POLITICAL SCIENCE Q2; SOCIAL SCIENCES, INTERDISCIPLINARY Q1

December 2020 (older upgraded edition)
Major category: 法学 (Law), Q2
Minor categories: COMMUNICATION Q2; POLITICAL SCIENCE Q2; SOCIAL SCIENCES, INTERDISCIPLINARY Q1