What Do We Measure When We Measure Affective Polarization?

Abstract:
Affective polarization—the tendency of Democrats and Republicans to dislike and distrust one another—has become an important phenomenon in American politics. Yet, despite scholarly attention to this topic, two measurement lacunae remain. First, how do the different measures of this concept relate to one another—are they interchangeable? Second, these items all ask respondents about the parties. When individuals answer them, do they think of voters, elites, or both? First, we demonstrate differences across items, so scholars should think carefully about which items best match their particular research question. Second, we show that when answering questions about the other party, individuals think about elites more than voters. More generally, individuals dislike voters from the other party, but they harbor even more animus toward the other party’s elites. The research note concludes by discussing the consequences for both measuring this concept and understanding its ramifications.

Author Note: James N. Druckman is the Payson S. Wilder Professor of Political Science and a faculty fellow in the Institute for Policy Research at Northwestern University, Evanston, IL, USA. Matthew S. Levendusky is a professor in the Department of Political Science and, by courtesy, in the Annenberg School for Communication, as well as a distinguished fellow in the Institutions of Democracy at the Annenberg Public Policy Center, University of Pennsylvania, Philadelphia, PA, USA.

Acknowledgments: The authors thank the Annenberg Public Policy Center for funding this project (M.S.L., Principal Investigator); Sam Gubitz and Natalie Sands for research assistance; and Joe Biaggio, Shanto Iyengar, Yanna Krupnikov, Yphtach Lelkes, the anonymous referees, and the editors for helpful comments. The study was preregistered with AsPredicted.org as study #7041. *Address correspondence to James N. Druckman, Northwestern University, Scott Hall, 601 University Place, Evanston, IL 60208, USA; email: druckman@northwestern.edu.

For nearly two decades, scholars have analyzed voters’ issue positions to determine whether the mass public is, in fact, polarized (Fiorina 2017). In recent years, however, there is a growing awareness that this does not fully capture partisan conflict in the contemporary United States. Regardless of where they stand on the issues, Americans increasingly dislike, distrust, and do not want to interact with those from the other party, a tendency known as affective polarization (Iyengar, Sood, and Lelkes 2012). This divisiveness vitiates political trust (Hetherington and Rudolph 2015), hampers interpersonal relations (Huber and Malhotra 2017), and hinders economic exchanges (McConnell et al. 2018).

Yet, two significant measurement lacunae remain. First, scholars use a wide-ranging assortment of items to measure affective polarization, but there is little sense of how these items relate to one another: Are they interchangeable? Second, these measures ask respondents to evaluate “the Democratic Party” or “the Republican Party.” But whom do voters imagine when they answer such questions: ordinary voters or elected officials? Addressing these questions with an original survey experiment, we document how the different measures relate to one another, finding that nearly all of them are strongly interrelated. The exception is the social-distance measures, which we argue tap a distinctive aspect of affective polarization. Further, the results show that when people think about the other party, they think primarily about political elites rather than voters.
While they dislike both elites and ordinary voters from the other party, they especially dislike the other party’s elites. These findings have important implications for how scholars measure affective polarization, and for our understanding of its underlying dynamic.

What Is Affective Polarization, and How Do We Measure It?

Affective polarization stems from an individual’s identification with a political party. Identifying with a party divides the world into a liked ingroup (one’s own party) and a disliked outgroup (the opposing party; Tajfel and Turner 1979). This identification gives rise to ingroup favoritism and bias, which is at the heart of affective polarization: the tendency of people identifying as Republicans or Democrats to view opposing partisans negatively and copartisans positively (Iyengar, Sood, and Lelkes 2012, 406; Iyengar and Westwood 2015, 691). Scholars typically measure affective polarization via survey instruments (Iyengar et al. 2019). The most common is a feeling-thermometer rating that asks respondents to rate how cold or warm they feel toward the Democratic Party and the Republican Party (Lelkes and Westwood 2017, 489). A second instrument asks respondents to rate how well various traits describe the parties. Positive traits include patriotism, intelligence, honesty, open-mindedness, and generosity; negative traits include hypocrisy, selfishness, and meanness (Iyengar, Sood, and Lelkes 2012; Garrett et al. 2014). A third approach is to ask citizens to rate the extent to which they trust the parties to do what is right (Levendusky 2013).
A final set of questions gauges how comfortable people are having close friends from the other party, having neighbors from the other party, and having their children marry someone from the other party (Iyengar, Sood, and Lelkes 2012; Levendusky and Malhotra 2016). These items are known as social-distance measures, as they gauge the level of intimacy (distance) individuals are comfortable having with those from the other party.

How do these various measures of affective polarization relate to one another? Prior studies provide little insight into this question; most include only one or two measures and do not explicitly compare them. These measures fall into two general types: thermometers, trait ratings, and trust measures capture general attitudes about broad objects (i.e., the parties), whereas social-distance items capture attitudes about particular behavioral outcomes (e.g., your child marrying someone from the other party). The two types should be only marginally related, given that “[e]mpirical research has shown repeatedly that the relation between general attitudes and specific behaviors [and related measures] tends to be very low” (Fishbein and Ajzen 2010, 278).

A distinct question concerns the targets of all of these measures: When someone rates “the Democratic Party” on a feeling thermometer, or rates whether “Democrats” are selfish, whom are they considering? Is it Democratic voters, or elected officials like Nancy Pelosi and Chuck Schumer?
As Iyengar and his colleagues (2012, 411) acknowledge, the existing measures are ambiguous on this point: “We will not be able to clarify whether respondents were thinking of partisan voters or party leaders when providing their thermometer scores.” The same is true of the other items: if someone says Republicans are untrustworthy, is that a judgment about their Republican neighbor, or an assessment of President Trump? This distinction is not only crucial to understanding whom people envision when asked about the “party”; it also underscores that people might feel differently toward ordinary voters than they do toward elites.
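The thermometer measure described above is typically operationalized in this literature as the in-party rating minus the out-party rating, with each rating on a 0–100 scale. A minimal sketch of that arithmetic (the function name and its validation are illustrative, not from the article):

```python
def affective_polarization(in_party_therm: float, out_party_therm: float) -> float:
    """Thermometer-difference score: in-party minus out-party rating.

    Both ratings are feeling-thermometer values on a 0-100 scale;
    larger (more positive) scores indicate greater affective polarization.
    """
    for rating in (in_party_therm, out_party_therm):
        if not 0 <= rating <= 100:
            raise ValueError("thermometer ratings must lie in [0, 100]")
    return in_party_therm - out_party_therm


# A respondent who rates their own party at 85 and the other party at 20
# scores 65 on the thermometer-difference measure.
print(affective_polarization(85, 20))  # 65
```

Note that this difference score conflates warmth toward one's own party with coldness toward the other; studies focused purely on out-party animus often analyze the out-party rating on its own instead.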
Author Listing: James N. Druckman; Matthew S. Levendusky
Volume: 83
Pages: 114-122
DOI: 10.1093/poq/nfz003
Language: English
Journal: Public Opinion Quarterly


Impact factor: 2.7 · Review journal: yes · Open access: no · Warning status: not on the warning list · First published: - · ISSN: 0033-362X · Publication frequency: - · Indexed in: Scopus · Country/region of publication: - · Publisher: Oxford University Press

Journal Profile

Annual article output: 49
Articles by Chinese authors: -
Share of Chinese-authored articles: 0%
Self-citation rate: 10.3%
Average acceptance rate: -
Average review time: -
Page/processing charges: -
Main research areas: Multiple
Journal website: https://academic.oup.com/poq
Submission link: -

Quality Indicators

Research articles: 96.49% · Citations to OA articles: 19.75% · Retractions: 0.00% · Post-publication corrections: 0.00%



Warning Status (International Journal Early Warning List)

2024 edition (released February 2024): not on the warning list
2023 edition (released January 2023): not on the warning list
2021 edition (released December 2021): not on the warning list
2020 edition (released December 2020): not on the warning list

JCR Quartile (Web of Science): Q1

Web of Science quartiles rank the journals in each subject category by impact factor and divide them into four equal tiers, Q1 (highest) through Q4. In the 2021–2022 edition, this journal is ranked:

COMMUNICATION: Q1
POLITICAL SCIENCE: Q1
SOCIAL SCIENCES, INTERDISCIPLINARY: Q1


CAS (Chinese Academy of Sciences) Journal Partition Table

December 2022 edition (latest upgraded version):
Major category: Sociology — Tier 1
Subcategories: COMMUNICATION — Tier 2; POLITICAL SCIENCE — Tier 2; SOCIAL SCIENCES, INTERDISCIPLINARY — Tier 1

December 2021 edition (upgraded version):
Major category: Law — Tier 2
Subcategories: COMMUNICATION — Tier 2; POLITICAL SCIENCE — Tier 2; SOCIAL SCIENCES, INTERDISCIPLINARY — Tier 1

December 2020 edition (older upgraded version):
Major category: Law — Tier 2
Subcategories: COMMUNICATION — Tier 2; POLITICAL SCIENCE — Tier 2; SOCIAL SCIENCES, INTERDISCIPLINARY — Tier 1