
Essay: The Replication Crisis and Clinical Psychology: Jennifer Tackett Investigates





1. PRIMARY AUTHOR (250 words)

Jennifer Tackett is currently an associate professor and the director of Clinical Psychology at Northwestern University. She received her bachelor's degree from Texas A&M University, where her honors thesis examined individual differences in helping behavior. She received her master's degree from the Texas Academy of Math and Science, with a thesis on subfactors of DSM-IV conduct disorder and their connection with syndromes from the Child Behavior Checklist. She received her PhD in Clinical Psychology from the University of Minnesota, with minors in Statistics, Behavioral Genetics, and Personality. Much of her work focuses on personality disorders in relation to development and health. Her dissertation investigated personality-psychopathology relationships in childhood, highlighting aggressive versus non-aggressive antisocial behavior.

She has been a contributing editor for major journals such as Collabra: Clinical Psychology, the Journal of Abnormal Psychology, and Advances in Methods and Practices in Psychological Science. Tackett currently runs her own research program at Northwestern University. Her lab, the Personality Across Development (PAD) Lab, studies child and adolescent personality. She and her team examine how these personalities relate to behavior and affect relationships with friends and family. Focusing primarily on whether early personality traits may be indicators of behavioral problems later in life, Tackett's research program also examines adolescent decision making and how personality informs academic outcomes.

Source: https://www.psychology.northwestern.edu/people/faculty/core/profiles/jennifer-tackett.html

2. SYNOPSIS

Theme: (250 words)

The main theme of Tackett's research article is exploring the replication crisis within the field of psychology. Tackett and her colleagues work toward engaging other researchers in discussing this crisis. The team begins by summarizing how the crisis came to be, recognizing three main contributing factors: first, a large number of publicized replication failures, especially in social and cognitive psychology; second, growing attention to questionable research practices (QRPs); and third, falsification and fabrication of data. Tackett and her colleagues also discuss why clinical psychology has been missing from the replication crisis conversation. They note that broad-themed journals such as Psychological Science have focused on bringing the replication conversation to social psychology but have failed to examine meta-scientific work on replicability within clinical psychology. Tackett attributes part of this problem to "detection bias," in which certain domains appear to show less replicable findings than others because their findings have been subjected to greater scrutiny. Relevant factors include sample size, the degree of flexibility in interpreting what counts as "marginally significant" or "trend level," and what constitutes a "successful" replication. This contrast is especially visible between social psychology, which focuses on relatively general conditions, and clinical psychology, which focuses on relatively rare conditions. Another example of detection bias is visible in replication rates between correlational and experimental designs. Failing to provide adequate replications will result in publishing more Type I errors in the field of psychology.

Author’s Agenda: (250 words)

The author's main agenda is broadening the conversation about replicability in research. Tackett raises questions about how professionals should move forward with their empirically supported assessment and intervention techniques. She focuses on three main areas of concern: first, examining why clinical psychology and related fields have not been more involved in the replication conversation; second, reviewing concerns and recommendations that are more directly applicable to research in clinical psychology; and third, compiling take-home messages that psychologists and related fields can use to think critically and develop interventions of their own. Tackett and her colleagues suggest five areas of focus to address these concerns in clinical science. First, reducing the prevalence of QRPs, which can be accomplished by keeping up to date on research about p-hacking, HARKing, and other QRPs, as well as reporting all dependent variables in a study. Second, encouraging preregistration and open data, which make data more accessible and adaptable for both experimental and correlational research designs. Third, conducting independent replications, which can demand fewer resources than producing entirely new studies, saving time and funding. Fourth, engaging in self-examination and operationalizing replicability, which are important for improving replicability in clinical psychology; creating clear parameters for self-examination and for operationalizing variables will reduce overreaching claims. Finally, improving a study's statistical power by decreasing error in variable quantification and by increasing sample size, the number of measurements, and the use of within-subject research designs. Tackling these challenges and areas of concern will improve the replication conversation within the field of psychology.
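The relationship between sample size and statistical power mentioned in the fifth recommendation can be illustrated with a small simulation-free sketch. This is not from Tackett's article; it is a minimal Python illustration using a normal approximation for a two-group comparison, with the effect size d = 0.3 chosen purely as a hypothetical example:

```python
import math

# Illustrative sketch (normal approximation): how statistical power for a
# two-group mean comparison grows with per-group sample size n, for a fixed
# standardized effect d and two-sided alpha = .05. The effect size and the
# sample sizes below are hypothetical, not values from Tackett's article.

def power(n, d, z_crit=1.959963984540054):
    """Approximate power of a two-sided z-test with n subjects per group."""
    se = math.sqrt(2.0 / n)          # SE of the mean difference (unit SDs)
    shift = d / se                   # noncentrality of the test statistic
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    # P(reject) = P(Z > z_crit - shift) + P(Z < -z_crit - shift)
    return (1.0 - phi(z_crit - shift)) + phi(-z_crit - shift)

for n in (20, 50, 100, 200):
    print(f"n = {n:3d} per group -> power ~ {power(n, d=0.3):.2f}")
```

The pattern the sketch shows is the one the recommendation relies on: for a modest effect, small groups detect it only a minority of the time, so increasing the sample size (or reducing measurement error, which effectively raises d) is what makes findings replicable.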

3. FAIL TO CONSIDER (250 words)

In her article, Tackett fails to consider how often intervention treatments lack grounding in empirically supported therapies (ESTs). This raises further concerns: a) how successful are these treatments, b) should we be using them in clinical settings, c) how much of the researchers' work can be relied on, and d) are ESTs still relevant? Regardless of how many negative results come out of a line of research, it can take as few as two statistically significant results before a treatment is declared empirically supported. This matters in clinical psychology when diagnosing and treating specific disorders. As Tackett mentions, a one-size-fits-all approach is no longer practical. The lack of ESTs used in clinical psychology contributes substantially to the replication crisis. Publishing a study with little to no EST grounding is a failure of care on the researcher's part. Failing to consider how often ESTs are used in therapies creates two problems within the replication crisis. First, the absence of ESTs encourages overreaching assumptions about how successful a treatment is. Second, a lack of ESTs makes a study much more difficult to replicate. Considering the frequency of ESTs in treatment studies would provide better guidance for navigating the replication crisis, in two ways: 1) ESTs can help dictate whether a research study can even be replicated, and 2) ESTs provide a closer estimate of the resources (i.e., time and money) needed to replicate the original study.

4. RESEARCH (500 words)

Tackett and her colleagues raise many important concerns and areas for future development regarding the replication crisis. I was most interested in better understanding the prevalence of failed replications within the field of psychology.

Article: Estimating the Reproducibility of Psychological Science

Summary of the Article:

Purpose: The Open Science Collaboration (OSC) conducted a large collaborative replication study to estimate the reproducibility of psychological science. The reasoning is that scientific claims gain much of their recognition from novelty rather than from replicable supporting evidence. The OSC wants to change this dominant norm by testing the reliability of published research through replication. Successfully replicating a study increases confidence in, and helps establish the validity of, the original findings. A failure to replicate suggests the original result was a false positive, because either a) the methodology of the original study was flawed, or b) the findings were due to random error or chance. Studies that failed to replicate were mainly attributed to the methodology of the original study.

Hypothesis: The OSC collaborators hypothesize that the present scientific culture may be negatively affecting the reproducibility of findings from original studies.

Methodology:

Subjects: The OSC conducted 100 replications of experimental and correlational studies published in three psychology journals.

Procedures and Measurement: The OSC constructed a protocol for selecting and conducting their replications. A project coordinator facilitated and maintained the steps of the replication protocol: a) selecting a study from a sample of available studies; b) contacting the original authors to collect study materials; c) preparing a study protocol and analysis plan; d) having both the original authors and current team members review the project; e) publicly registering the protocol; f) conducting the replication and writing the final report; and g) auditing the process and analysis for quality control.

The OSC measured seven characteristics of each original study, including effect size, sample size, and p-value, as well as the journal in which the study was published. They also assessed the experience and expertise of the original research teams. The OSC likewise measured characteristics of the replication studies, such as statistical power and sample size.

Findings:

Results indicate that replication effects were, on average, only half the magnitude of the original effects. Whereas 97% of original findings were statistically significant, only 36% of replication results were, a major decline. In 47% of cases, the original effect size fell within the 95% confidence interval (CI) of the replication effect size, which by that criterion suggests a 47% replication success rate. Assuming there was no bias in the original studies, 39% of effects were subjectively rated as replicating the original results. When original and replication results were combined, 68% of effects were statistically significant.
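The gap between 97% significant originals and far fewer significant replications can be reproduced in miniature with a simulation. This is not the OSC's actual analysis; it is an illustrative sketch of the "winner's curse" under publication bias, with the sample size, true effect, and significance filter all chosen as assumptions for demonstration:

```python
import math
import random
import statistics

# Illustrative simulation (NOT the OSC's analysis): if original studies are
# "published" only when p < .05, the published effects are inflated, and
# replications run at the same sample size are significant far less often.
# n, true_effect, and sims are hypothetical values chosen for illustration.

random.seed(42)

def run_study(n, true_effect):
    """One two-group study, n per group; returns (observed effect, p-value)."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(true_effect, 1.0) for _ in range(n)]
    diff = statistics.mean(b) - statistics.mean(a)
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    z = diff / se
    # normal approximation to the two-sided p-value
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return diff, p

n, true_effect, sims = 30, 0.3, 4000

# Publication filter: only significant originals make it into the literature.
published = [d for d, p in (run_study(n, true_effect) for _ in range(sims))
             if p < 0.05]

# Replications rerun the same design with no filter applied.
replications = [run_study(n, true_effect) for _ in range(len(published))]
rep_sig = sum(1 for _, p in replications if p < 0.05) / len(replications)

print(f"mean published effect:     {statistics.mean(published):.2f} "
      f"(true effect = {true_effect})")
print(f"replications with p < .05: {rep_sig:.0%}")
```

The published effects come out larger than the true effect, and the replication significance rate falls well below the 100% rate of the filtered originals, which is the same qualitative pattern the OSC reported (inflated original effects, a much lower replication rate).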

Conclusions:

These findings suggest that the replication success rate was higher for correlational tests than for experimental studies. Replication success was better predicted by the strength of the original evidence than by characteristics of the original or replication teams. However, even when using the same materials as the original studies, a large portion of replications produced weaker evidence than the original findings. This suggests there is still no single answer or method for solving the replication crisis. What is evident from this study is that replication increases the reliability of original results but does not establish their validity.

Strengths:

• First study to provide a systematic way to measure the reproducibility of psychological studies.

• Replication increases reliability of an original study.

Limitations:

• Individual scientists prioritize incentives for novelty over replication.

• Replication does not establish the validity of original studies.

5. TIE-IN (500 words)

In Thinking, Fast and Slow, Kahneman discusses the law of small numbers and how it contributes to overconfidence. Kahneman explains that the risk of error exists with any sample size, but small samples carry far greater risk: a small sample does not properly represent the population and is far more likely to produce an extreme result. Small numbers are also easier for our cognitive systems to process, which leads us to identify causal connections in random data rather than evidence-based connections. An important threat to consider is how much an estimated probability can swing when the sample is small. The law of small numbers can be tied to Tackett's research article. According to Tackett, publishing studies that are statistically significant but never replicated may yield extreme results in one direction. This is inadvisable, especially in clinical psychology, which deals with specific disorders, because small numbers create a bias toward confidence over doubt. This overconfidence feeds our availability heuristic, pushing us to base judgments on intuition rather than statistical facts. This can also be tied to Gawande's essay "On Washing Hands." Gawande discusses how, more often than not, people forget to wash their hands, setting his example in a hospital. Hospitals are prime examples of people overestimating how clean the environment is and underestimating the probability of contracting a disease. Childbed fever is a prime exhibit of this issue: more mothers died of the disease when delivering at a hospital (a 20% death rate) than when delivering at home (a 1% death rate), in part because of a lack of hand washing. According to Gawande, doctors and nurses wash their hands only one-third to one-half of the times they are supposed to, because they underestimate the probability that someone could contract a disease from them. One is more likely to be overconfident in one's decision making when basing judgments on results from small numbers.
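The law of small numbers is easy to demonstrate directly. The following sketch is my own illustration, not an example from Kahneman or Tackett; the sample sizes and the "extreme" threshold of 70% heads are arbitrary choices made for the demonstration:

```python
import random

# Illustrative sketch of the law of small numbers: for a fair 50/50 process,
# small samples produce "extreme" proportions (>= 70% heads) fairly often,
# while large samples almost never do. Sample sizes and the 70% threshold
# are arbitrary choices for this demonstration.

random.seed(0)

def extreme_rate(n, trials=5000, threshold=0.7):
    """Fraction of simulated samples of size n with >= threshold heads."""
    hits = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(n))
        if heads / n >= threshold:
            hits += 1
    return hits / trials

small = extreme_rate(10)    # small samples: extreme results are common
large = extreme_rate(200)   # large samples: extreme results nearly vanish
print(f"n = 10:  {small:.1%} of samples show >= 70% heads")
print(f"n = 200: {large:.1%} of samples show >= 70% heads")
```

A reader who sees only the small-sample result can easily mistake ordinary sampling noise for a real effect, which is exactly the overconfidence Kahneman warns about and the mechanism by which unreplicated significant findings mislead.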

Bibliography

Gawande, A. (2007). Better: A surgeon's notes on performance. New York: Metropolitan.

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.

Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus, & Giroux.

Tackett, J. L., Lilienfeld, S. O., Patrick, C. J., Johnson, S. L., Krueger, R. F., Miller, J. D., … Shrout, P. E. (2017). It's time to broaden the replicability conversation: Thoughts for and from clinical psychological science. Perspectives on Psychological Science, 12(5), 742–756. https://doi.org/10.1177/1745691617690042
