Leadership Research Summary:
• Power in experimental research has commonly been induced by methods that raise concerns about demand effects. In this paper, the researchers investigate the empirical relevance of these concerns. In an incentivized online study (N = 1632), they manipulated the method of power induction (power priming vs. resource allocation), the level of power (high-power vs. control), and the presence of a manipulation check after the power manipulation.
• Researchers then assessed risk-taking as an outcome variable in two ways: once with a non-consequential measure (self-report) and twice with consequential measures (incentivized behavioral choices). The results show that both using power priming (vs. resource allocation) and implementing a manipulation check substantially increased the potential for demand effects, as measured by the proportion of participants who were aware of the study hypothesis.
• In addition, the researchers were able to replicate the positive effect of power on risk-taking previously reported in the literature. However, the study only found a significant (and small) effect for the non-consequential measure of risk-taking; when risk-taking was measured with either of the two consequential measures, power had no significant impact.
• The study’s pattern of results shows that concerns about demand effects in priming studies cannot be dismissed. The authors advise researchers, especially those studying power, to steer away from demand-prone manipulations of power and to measure outcome variables (e.g., behavior) through consequential choices.
Leadership Research Findings:
• In the study, researchers used a broad array of risk-taking measures, yet there was no effect of power on consequential risk-taking. There are several possible explanations for why the effect of power on risk-taking was not replicated using the resource-allocation (RA) manipulation, despite the large number of observations in this study and the correlations among the risk-taking measures.
• Based on a post-hoc power analysis (Faul, Erdfelder, Lang, & Buchner, 2007), assuming an alpha level of 0.05 and a small effect size of 0.2, the 400 observations per cell gave the study a statistical power of 0.88 to detect the effect of power on risk-taking in this setting. Yet the study still could not replicate the effect. Scrutinizing the studies supporting the power/risk-taking effect, the researchers noticed that many used self-report measures or hypothetical designs to capture risk-taking (e.g., see Studies 1 to 5, Anderson & Galinsky, 2006).
• Moreover, the main effect of power on risk-taking was not replicated in the context of consequential designs in several studies conducted by other researchers (see Studies 3 and 5, Jordan et al., 2011; Hiemer & Abele, 2012; Maner et al., 2007; Ronay & Von Hippel, 2010). A possible explanation, therefore, is that there is no effect of power on risk-taking after all. The researchers also acknowledge that their measures could not cleanly disentangle the pure effect of using a non-consequential measure.
• Future replication studies on power and risk-taking may want to clarify this state of affairs and consider the effect size in the presence and absence of consequential designs. For instance, one could add a condition to the design in which participants are only asked, in a hypothetical scenario, how they would choose in the lottery risk-taking task (a purely hypothetical design with no consequences). This set-up would allow a direct comparison of the different outcomes within a single study.
• The research addresses concerns raised about potential demand characteristics in power research and the pervasive use of the power-priming (PP) method (Lonati et al., 2018; Sturm & Antonakis, 2015; Schaerer et al., 2018), as well as about the problems of using manipulation checks before measuring the dependent variable (Ejelöv & Luke, 2020; Fayant et al., 2017; Hauser et al., 2018).
• The study also contributes to the literature on experimenter demand effects (de Quidt et al., 2018; de Quidt et al., 2019; Nichols & Maner, 2008; Zizzo, 2010). The results highlight the importance of implementing rigorous research designs in order to derive ecologically valid conclusions valuable to both leadership scholars and practitioners.
• Researchers are advised to be prudent about revealing possible clues, either through the manipulation method or via manipulation checks; otherwise, with a little help from demand characteristics, they may find support for their hypotheses even when the effect is not really there.
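The post-hoc power figure cited above (0.88, for α = 0.05, d = 0.2, and 400 observations per cell) can be checked with a short calculation. The paper reports using G*Power (Faul et al., 2007); the sketch below instead uses a large-sample normal approximation for a two-sample comparison, and the one-sided (directional) test is an assumption on my part, chosen because it reproduces the reported value:

```python
# Post-hoc power for a two-sample comparison, large-sample normal
# approximation. Values from the study: alpha = 0.05, d = 0.2,
# n = 400 per cell. The one-sided test is an assumption (not stated
# in the summary); it reproduces the reported power of 0.88.
import math
from scipy.stats import norm

n_per_cell = 400   # observations per condition
d = 0.2            # assumed standardized effect size (Cohen's d)
alpha = 0.05       # significance level

# Noncentrality of the two-sample z statistic: d * sqrt(n / 2)
ncp = d * math.sqrt(n_per_cell / 2)
z_crit = norm.ppf(1 - alpha)      # one-sided critical value
power = norm.cdf(ncp - z_crit)    # probability of detecting the effect

print(round(power, 2))  # 0.88
```

With a two-sided test the same inputs give a power of roughly 0.81, so the directional assumption matters for matching the reported 0.88.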