
Essay: Students’ Plausibility Gaps: Critical Thinking and Scientific Learning

Published: 1 April 2019; last modified: 23 July 2024. Words: 1,457 (approx.)



Evaluation, Plausibility, and Knowledge

All scientific practices emerge from “processes of perpetual evaluation and critique that support progress in explaining nature” (Ford, 2015, p. 1043). Science education reform efforts call for students to engage in the scientific practices to help them achieve college- and career-readiness (NRC, 2012). In particular, students should engage in scientific practices to generate “plausible explanation[s] for an observed phenomenon that can predict what will happen in a given situation” (NRC, 2012, p. 67). The scientific community also compares the plausibility of alternative explanations when constructing scientific models and theories. However, with certain Earth science topics (e.g., climate change and hydraulic fracturing, or “fracking”), scientists may generate explanations that seem implausible to students. In contrast, non-scientific explanations of such phenomena may seem more plausible than the scientific ones. This results in what Lombardi et al. (2013) call a “plausibility gap.”

Plausibility judgments may be associated with critical and scientific thinking. For example, Beyer (1995) notes that questioning the plausibility of explanations is one characteristic of skepticism, a disposition of good critical thinkers. Differentiating between evidence that supports the truth of a claim and theory that supports the plausibility of a claim is also a characteristic of those who are developing scientific thinking skills (Kuhn, 1999). By examining a theory’s potential truthfulness, plausibility judgments used in a critical mode are inherently evaluative. Critical evaluations about the plausibility of explanations are also fundamentally linked to an individual’s knowledge (Willingham, 2008), based on the presupposition that plausibility judgments are tentative in nature and may contribute to knowledge construction (Authors, 2016a). Plausibility judgments have also long been theoretically implicated as an important factor in the process of science learning (see, for example, Chinn & Brewer, 1993; Dole & Sinatra, 1998; Kapon & diSessa, 2012; Posner et al., 1982), but until recently, almost no empirical research had validated the importance of plausibility in knowledge construction and reconstruction (see Authors, 2016a, for a detailed philosophical, empirical, and theoretical review).

Helping students construct conceptions that are consistent with scientific understanding is notoriously difficult (Chi, 2005). Authors (2016a) recently proposed a theoretical model that posits initial plausibility judgments might be reappraised through the process of critical evaluation (i.e., plausibility reappraisal may elevate initial plausibility judgments from regimes of low/implicit evaluation to high/explicit evaluation). Reappraisal, in turn, may be a component of constructing scientifically accurate knowledge, but only if the reconsidered plausibility judgment is now greater than the plausibility of preexisting conceptions.

Earth science involves many complex topics that require fundamental knowledge in multiple domains. These scientific ideas may run counter to students’ existing mental representations. To construct scientifically accurate knowledge, students must actively engage in the scientific practices promoting evidence-based evaluation that weighs the validity of alternative explanations.

Scientific Thinking through Critical Evaluation

Students may be curious about scientific topics, but they are not necessarily evaluative as they consider hypotheses and theories. Critical evaluation in science learning situations can involve judgments about the relationship between evidence and alternative explanations of a particular phenomenon (McNeill et al., 2006). Through critical evaluation, an individual seeks to weigh the strengths and weaknesses in the connection between evidence and explanations. Mere critique is not sufficient, because critical evaluation involves gauging how well evidence potentially supports both an explanation (e.g., an argument, a scientific model) and its plausible alternatives (e.g., a counterargument, a contrary hypothesis). In this way, critical evaluation embraces the criterion of falsifiability, where evidence may invalidate one explanation in favor of an alternative (Popper, 1963; Stanovich, 2007). Whereas individual scientists might not adhere to the falsifiability criterion, the scientific community ultimately eliminates hypotheses and theories with demonstrated evidentiary failures (Lakatos, 1970).

Critical evaluation demands that students be reflective about the process of knowledge construction (Mason et al., 2011). When students model practices used by scientific experts, they may reflect and evaluate in ways similar to scientists (Duschl et al., 2007). Students who engage in critical evaluation understand that scientific knowledge emerges from collaborative argumentation, a constructive and social process in which individuals compare, critique, and revise ideas (Nussbaum, 2008). Chin and Osborne (2010) suggest that argumentative discourse activities can stimulate critical evaluation when students challenge each other’s thinking through questions about the strength of evidence and explanation connections. Collaborative argumentation is different from adversarial argumentation, in which opponents attempt to dismantle one another’s viewpoint entirely. Individual scientists may engage in adversarial argumentation; after all, scientists are human too. But as a community, science thrives due to collaborative argumentation, which is an inherently constructive process (Osborne, 2010).

Argument construction, however, does not necessarily promote greater critical evaluation. Nussbaum and Kardash (2005) showed that, when given a persuasion goal, students were less critical. Trying to persuade led to the generation of fewer counterarguments and one-sided thinking. The researchers also found a connection between the intensity of students’ beliefs about a topic and their ability to generate counterarguments, where more extreme beliefs led to fewer counterarguments. Because students may not be critically reflective when engaging in collaborative argument, they may need instructional scaffolds to evaluate the quality of explanations (Nussbaum & Edwards, 2011). The model-evidence link (MEL) diagram, which assists students in effectively coordinating evidence with scientific explanations (Chinn & Buckland, 2012), is a scaffold that—as shown by the results of our current project—prompts high school students to be more critical in their evaluations, engage in plausibility reappraisal, and construct scientifically accurate knowledge (Lombardi et al., 2016c). MELs specifically facilitate evaluation by helping students differentiate between evidence and scientific explanations—a scientific reasoning skill with which students often have difficulty (Duschl & Grandy, 2011; Kuhn & Pearsall, 2000).

The Dynamic Relation between Evaluation and Plausibility

Plausibility is a tentative epistemic judgment conducive to knowledge construction and reconstruction, both in science and in science classrooms. For example, researchers have implicated plausibility in facilitating co-construction of knowledge in discourse associated with collaborative argumentation (Duschl et al., 2007; Nussbaum, 2011). Researchers have also proposed that plausibility may be an important judgment involved in construction of scientifically accurate knowledge (Dole & Sinatra, 1998; Pintrich et al., 1993; Posner et al., 1982). Authors’ (2016a) theoretical model describes how plausibility judgments may most often be formed through automatic cognitive processes, but explicit instruction may facilitate reappraisal of these implicit plausibility judgments toward a more scientific stance. Such instruction may be particularly relevant for complex and abstract scientific topics (e.g., climate change), where a gap exists between what students and scientists find plausible.

Empirical research has revealed that a plausibility gap exists for the topic of global climate change among middle school students (Authors, 2013a), undergraduate students (Authors, 2012), and elementary and secondary science teachers (Authors, 2013b). To address this gap, Authors (2013a) developed a model-evidence link (MEL) diagram for climate change and used it in grade 7 Earth science classrooms. Participants in the treatment group (using the climate change MEL diagram) experienced significant shifts in both plausibility and knowledge toward the scientifically accepted model of human-induced climate change; furthermore, these students retained knowledge gains six months after instruction. In comparison, grade 7 participants at the same school, taught by the same teachers, did not show plausibility or knowledge shifts after completing a curricular activity designed to promote scientific inquiry and deeper understanding of climate change (Smith et al., 2002). Although the comparison activity asked students to link evidence to explanations, the primary difference from the treatment activity (i.e., the MEL diagram) was that students did not weigh evidence between two competing models of climate change. Authors (2013a) speculated that the students’ plausibility reappraisal—a skill that is important for understanding the development of scientific knowledge (Hogan & Maglienti, 2001; Duschl et al., 2007)—was related to the MEL’s ability to facilitate students’ critical evaluation. Plausibility reappraisal, in turn, may have promoted the students’ enduring knowledge gains (Erduran & Dagher, 2014).

Linking Scientific Practices to Evaluation and Plausibility Reappraisal

Recent empirical research closely examining student work on the MEL activities shows that students engage in various levels of evaluation when considering alternative explanations about Earth and space science phenomena, and that these evaluation levels were significantly related to plausibility appraisals and knowledge about the phenomena (Lombardi et al., 2016b, 2017). Specifically, high school students shifted plausibility toward scientifically accepted explanations and increased their knowledge about relevant Earth science topics after participating in MEL activities. Greater levels of evaluation mediated the plausibility shifts and knowledge increases, as shown by structural equation modeling. Effect sizes were small to large, depending upon topic and instructional context. These findings support the idea that MEL activities prompt students to engage cognitively in scientific practices that give them the ability to think about scientific topics, knowing “that alternative interpretations of scientific evidence can occur, that such interpretations must be carefully scrutinized, and that the plausibility of the supporting evidence must be considered,” and ultimately “that predictions or explanations can be revised on the basis of seeing new evidence or of developing a new model that accounts for the existing evidence better than previous models did” (NRC, 2012, p. 251).
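To illustrate the mediation logic behind such a finding, the sketch below runs a simple regression-based mediation analysis on simulated (entirely hypothetical) data: a treatment variable standing in for MEL use, a mediator standing in for evaluation level, and an outcome standing in for plausibility shift. This is only a minimal sketch of the general technique, not the authors’ actual structural equation models, which would be estimated with dedicated SEM software.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical simulated data:
# x = treatment (MEL activity: 0/1), m = mediator (evaluation level),
# y = outcome (plausibility shift). True paths: a = 0.8, b = 0.6, direct = 0.2.
x = rng.integers(0, 2, n).astype(float)
m = 0.8 * x + rng.normal(0.0, 1.0, n)
y = 0.6 * m + 0.2 * x + rng.normal(0.0, 1.0, n)

def ols(X, y):
    """Least-squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(x.reshape(-1, 1), m)[1]                 # treatment -> mediator
b = ols(np.column_stack([m, x]), y)[1]          # mediator -> outcome, controlling treatment
c_total = ols(x.reshape(-1, 1), y)[1]           # total treatment effect
c_direct = ols(np.column_stack([m, x]), y)[2]   # direct treatment effect

indirect = a * b  # the mediated (indirect) effect
print(f"indirect = {indirect:.3f}, direct = {c_direct:.3f}, total = {c_total:.3f}")
```

For linear models fit by ordinary least squares on the same sample, the total effect decomposes exactly into the direct effect plus the indirect effect (a × b); evidence of mediation here is an indirect effect reliably different from zero, which in practice would be tested with bootstrap confidence intervals.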

Source: Essay Sauce, Students’ Plausibility Gaps: Critical Thinking and Scientific Learning. Available from: <https://www.essaysauce.com/sample-essays/2017-7-14-1500058248/>