Motivation for and Development of a Standardized Introductory Meteorology Assessment Exam

Casey E. Davenport, Christian S. Wohlwend, and Thomas L. Koehler

U.S. Air Force Academy, Colorado Springs, Colorado

Abstract

Education research has shown that there is often a disconnect between what instructors teach and what students actually comprehend. Much of this disconnect stems from students’ previous conceptions of the subject that often remain steadfast despite instruction. The field of meteorology is particularly susceptible to misconceptions as a result of the years of personal experience students have with weather before instruction. Consequently, it is often challenging for students to accurately integrate course material with their observations and personal explanations. A longitudinal assessment exam of the meteorology program at the U.S. Air Force Academy revealed that misconceptions of fundamental, introductory content can propagate through years of instruction, potentially impeding deeper understanding of advanced topics and hindering attainment of professional certifications. Thus, it is clear that such misconceptions must be identified and corrected early. This manuscript describes the development of the Fundamentals in Meteorology Inventory (FMI), a multiple-choice assessment exam designed to identify the common misconceptions of fundamental topics covered in introductory meteorology courses. In developing the FMI, care was taken to avoid complex vocabulary and to include plausible distractors identified by meteorology faculty members. Question topics include clouds and precipitation, wind, fronts and air masses, temperature, stability, severe weather, and climate. Applications of the exam for the meteorology community are discussed, including identifying common meteorology misconceptions, assessing student understanding, measuring teaching effectiveness, and diagnosing areas for improvement in introductory meteorology courses. Future work needed to ensure the efficacy of the FMI is also outlined.

CURRENT AFFILIATION: University of North Carolina at Charlotte, Charlotte, North Carolina

CORRESPONDING AUTHOR: Casey E. Davenport, Department of Geography & Earth Sciences, University of North Carolina at Charlotte, 9201 University City Blvd., Charlotte, NC 28223, E-mail: Casey.Davenport@uncc.edu

The Fundamentals in Meteorology Inventory assessment exam is being developed to identify common meteorology misconceptions, assess student understanding, measure teaching effectiveness, and diagnose areas for improvement in introductory meteorology courses.

Assessment of student learning is a crucial component of science courses, providing feedback for both instructors and students as a measure of understanding of course content. These assessments are often limited to a small handful of exams throughout a semester, and do not necessarily measure deep understanding, as many exams tend to emphasize lower-order cognition (Crooks 1988). The Force Concept Inventory (FCI; Hestenes et al. 1992), developed in the early 1990s, revealed the superficial nature of conceptual understanding of introductory physics topics by a significant proportion of college students. This result dramatically shifted perceptions of the teaching and learning of physics, and subsequently radically transformed conventional college-level physics instruction (Gonzales-Espada 2003). Recognizing the successes of the physics community, numerous other disciplines have also developed similar assessment exams, including astronomy (Zeilik et al. 1999), biology (Anderson et al. 2002), statistics (Allen et al. 2004), and the geosciences (Libarkin and Anderson 2005). Currently, the field of meteorology lacks an assessment exam comparable to the FCI, a gap the authors are seeking to fill through the development of the Fundamentals in Meteorology Inventory (FMI). The FMI is designed to measure the conceptual understanding of fundamental meteorology concepts presented in most introductory courses. This article will explore the motivation behind the creation of this assessment tool, describe the current progress and implementation of the prototype exam, and suggest appropriate applications of the FMI in meteorology education.

BACKGROUND AND MOTIVATION.

It is important for instructors to recognize that learning does not happen in a vacuum. Unfortunately, many assume that once material has been presented, it is completely understood by students (Fisher and Moody 2002). Furthermore, teachers are likely to misjudge the extent and depth of student understanding (Driver 1985; Schneps 1997). Consequently, both the student and the teacher are likely to be frustrated with the results of formal assessments administered throughout a course. Conceptions a student already holds about a subject may significantly impair his or her conceptual understanding of that subject, thus undermining performance on course assessments (e.g., Hestenes et al. 1992). Indeed, these misconceptions impede students from attaining a more scientific viewpoint and are extremely resistant to traditional classroom instruction, particularly when instructors are ignorant of their existence and do nothing to directly address or correct them (e.g., Halloun and Hestenes 1985; Hestenes et al. 1992; Wandersee et al. 1994).

In order for students to be convinced to let go of their previous conceptions and achieve learning gains, several conditions must be met. Posner et al. (1982) describe the process of conceptual change in a series of steps:

  1. dissatisfaction with the existing conception,

  2. existence of an intelligible new conception,

  3. plausibility of the new conception, and

  4. clear potential to apply the new conception to other areas.

In other words, students must be confronted with the gaps in their current understanding or conceptual model, and subsequently be presented with a logical and understandable new model. Once it is clear to the student that this new idea has the power to explain real-world scenarios and phenomena, and that it can be applied even further to other situations, then the new concept will be incorporated into his or her current scientific framework [e.g., as in the cognitive assimilation theory of Piaget (1929, 1930)]. However, this process is difficult to complete without an educated awareness of the typical misconceptions.

Meteorology in particular can be susceptible to misconceptions as a result of years of personal experience with the day-to-day weather before starting formal instruction; it is often difficult for students to reconcile the presented material with their own observations (Rappaport 2009). To date, limited research has been conducted to systematically identify common weather misconceptions, unlike other science disciplines (e.g., Hestenes et al. 1992; Zeilik et al. 1999; Anderson et al. 2002; Allen et al. 2004; Libarkin and Anderson 2005). A well-established challenge for students is the water cycle and associated phase changes (e.g., Osborne and Cosgrove 1983; Bar and Travis 1991; Ewing and Mills 1994; Chang 1999; Gopal et al. 2004). The abstract nature of some processes, such as evaporation or condensation, along with intangible vocabulary, such as saturation vapor pressure, make the topic particularly challenging (e.g., Bar and Travis 1991; Ewing and Mills 1994; Chang 1999). A few studies have examined the misconceptions of the general public (Aron et al. 1994; Dove 1998), but the majority of studies have focused on children’s conceptions of weather. These studies found that many of the misconceptions existed simply because of an incomplete knowledge and understanding of the material, which is perhaps expected given the focus on younger students (e.g., Stepans and Kuehn 1995; Tytler 1998; Henriques 2002; Saçkes et al. 2010). At the undergraduate level, the focus of previous research has typically been on misconceptions associated with specific phenomena, such as tornadoes or fog (Lewis 2006; Rappaport 2009; Polito 2010). While a lack of knowledge contributed to some of the identified misconceptions, Polito (2010) noted that misconceptions of select phenomena existed at numerous cognitive levels (i.e., freshmen through seniors), and were found with both majors and nonmajors.

Kahl (2008) represents the only study that has focused on measuring learning gains (and thus identifying gaps in understanding) over the course of a semester in introductory meteorology at the undergraduate level. A survey was devised at one institution with three types of questions asked within each topical subject area: content, application, and deeper application. Additionally, the answer choices allowed students to indicate their degree of confidence in their answer (i.e., their perceived understanding). Questions related to content learning scored quite well (more than 75% of students saw improvement over the semester in each topical area), while questions related to applications and deep applications of topics saw much less improvement over the semester (8%–43% of students). A separate category of students answered the application questions correctly at both the beginning and end of the semester (i.e., they showed no improvement because their responses were consistently correct); combining the two groups yields an overall percentage of students with correct responses and demonstrated learning by the end of the semester. The overall proportion of students demonstrating a correct understanding of meteorology varied significantly in the application questions, between 9% and 78%, with the deeper application questions on the lower end of that range. These results indicate that students tend to excel at memorizing meteorology content but struggle to truly understand concepts and apply them correctly to given situations.

Additional evidence of students struggling with meteorology concepts has been found within the meteorology program at the U.S. Air Force Academy (USAFA). The program evaluates its academic success using a homegrown longitudinal assessment exam known as the Meteorology Program Assessment Test (MPAT), measuring the academic evolution of meteorology majors as they progress through the curriculum. The MPAT was designed explicitly as an internal assessment tool to provide feedback to faculty members on the efficacy of USAFA’s specific curriculum and to indicate where improvement may be needed. The exam is a 39-item test that covers topics across the entire USAFA meteorology curriculum. Approximately half of the questions are related to one or two courses, while the others involve topics from several courses, allowing faculty members to assess how successfully cadets are able to integrate ideas from different parts of the curriculum. The first version of the exam (a second version was recently developed and administered starting in 2012) contained 15 questions related to fundamental concepts taught in the first-semester course. The graduating class of 2010 was the first set of meteorology majors to take the MPAT before, during, and after instruction of their 11 required meteorology courses (referred to as the precurriculum, midcurriculum, and postcurriculum periods, respectively).

The postcurriculum assessment taken by the class of 2009 revealed that 3 of the 15 questions related to fundamental concepts had an average score below 60%, demonstrating poor understanding and suggesting the persistence of misconceptions even after extensive instruction. Furthermore, these three questions had notably lower averages than on the midcurriculum assessment. One additional question related to introductory material (with a score above 60% on the postcurriculum assessment) also had a lower average than the previous year, resulting in a total of four questions with a decreasing average. In other words, performance on nearly one quarter of the questions related to fundamental concepts decreased from the midcurriculum to the postcurriculum MPAT assessment. Similar results exist for the other graduating classes that took the MPAT their junior and senior years (classes of 2010–2012; Table 1). Three questions in particular had low scores (average < 60%) for nearly every graduating class on the postcurriculum assessment, and each nearly always scored higher on the midcurriculum assessment than on the postcurriculum one. The topics included hydrostatic balance, temperature controls, and ocean currents. While these questions do not represent the only assessment of the above-mentioned topics on the MPAT, they do highlight a few areas that our cadets consistently struggle with and where changes need to be made. Additional research is necessary to determine the extent to which these topics are universally difficult for undergraduate meteorology students. Further research is particularly important, since the implicit assumption in our analysis is that the senior meteorology majors put sincere effort into answering all of the questions.

Table 1. Breakdown by class year of the number of MPAT questions related to introductory material with an average score less than 60% on the postcurriculum assessment, as well as the number of introductory questions whose average decreased from the midcurriculum to the postcurriculum assessment.

One of many potential explanations for the lower scores on the postcurriculum MPAT could be related to students exceeding their cognitive load. Research has shown that learners can simultaneously process only a limited number of ideas in their working memory, a concept known as cognitive load theory (e.g., Sweller 1988; Paas et al. 2003; Sweller et al. 2011). Exceeding cognitive capacity can lead to poorer understanding of previous or new concepts (e.g., Heckler and Sayre 2010). Given the large number of required meteorology courses (five) that USAFA cadets take during their senior year, it is reasonable to expect some confusion. However, since more advanced topics build upon fundamental concepts, the expectation is that student understanding of the basics would remain steady or increase over time, which does not appear to always be the case. Therefore, these concepts need to be further reinforced early in the curriculum.

Another possible contributor to the documented struggles with conceptual understanding is the mismatch in learning styles often exhibited by faculty members versus students. Roebber (2005) conducted a survey of meteorology students and faculty members on satisfaction within his specific program. The survey indicated that students tend to be goal oriented, meaning that they prefer concrete experiences, deriving theory from repeated practice. On the other hand, faculty members prefer abstract thinking, first considering theory and then applying it in practice. This mismatch becomes evident during instruction, since curricular design tends to align with faculty member learning styles rather than student learning styles (Schroeder 1993; Roebber 2005). Consequently, students may not be learning at their full potential.

To effectively teach toward long-lasting conceptual change, we must identify the common stumbling blocks. A standardized assessment exam known as the Fundamentals in Meteorology Inventory has been developed to assess student understanding of basic concepts addressed in introductory meteorology courses. Notably, the field of meteorology draws from concepts in other disciplines such as geoscience and physics, which have their own conceptual inventory assessments. However, in order to fully understand meteorology, students must be able to apply those concepts correctly as they relate to the atmosphere. Thus, while not all of the meteorological concepts tested in the FMI are completely independent of other scientific disciplines, the exam will be able to highlight where understanding and appropriate application are lacking. Accordingly, the main goal of the exam is to assist instructors in identifying concepts that may cause the most difficulty for their students. The exam also provides a means with which learning and teaching effectiveness can be evaluated based on a consistent measure. For example, the results of the MPAT have been a continuous source of self-reflection and reevaluation for the USAFA meteorology curriculum; it is the hope of the authors that the FMI will promote similar considerations and positive changes in introductory meteorology course offerings across the country. The utility and value of conceptual inventories developed in other science disciplines, as well as the efficacy of the MPAT at USAFA in identifying problem areas, inspired the development of the FMI. The authors have used the pedagogical foundation of the MPAT and other scientific concept inventories, such as the FCI, as a means to ensure success for the FMI. The details of the formation of the FMI questions will be discussed next.

QUESTION DEVELOPMENT.

For ease of grading and to ensure rapid attainment of results for instructors, the format of the FMI was chosen to be multiple choice, similar to the assessment exams in other science disciplines. To ensure that the FMI was assessing higher-order student understanding (instead of rote memorization) of meteorological concepts in the desired format, the authors followed the guidelines of Haladyna et al. (2002) in formatting each question. Following an analysis of numerous studies on multiple-choice item-writing guidelines, Haladyna et al. (2002) provide a synthesis of 31 recommendations for writing test items. The authors sought to follow these guidelines as much as possible, as appropriate for the intended goals of the FMI. For example, particular attention was paid to the complexity of the vocabulary of the test items, striving to keep it simple wherever possible in the interest of testing scientific understanding instead of vocabulary aptitude. When specific vocabulary was necessary, a short description was added to the question to define the expression. Additionally, to ensure that the FMI would be able to identify common misconceptions, the distractors (i.e., incorrect responses) were written to be plausible and typical of errors that instructors observe in the course.

The specific content of the FMI was largely driven by the broad topics covered in many introductory meteorology courses, split into seven categories, each consisting of 5–7 questions: clouds and precipitation, wind, fronts and air masses, temperature, stability, severe weather, and climate. Example questions can be found in Fig. 1. Each question was written by the first author of this paper and subsequently reviewed by other meteorology faculty members at USAFA, resulting in revisions as needed to ensure appropriateness of content, language, and answer choices. Ongoing adjustments are being made based on student and nonmeteorology faculty member feedback (see “Future work and validation” section). Further adjustments will be made to ensure the reliability and efficacy of each question.

Fig. 1. Sample questions from each of the FMI question categories. Check marks denote the correct answers.

APPLICATIONS.

The FMI has a number of potential applications for introductory meteorology courses. The primary use of many concept inventories (such as the FCI) is to serve as a way to quantify student learning gains over the course of a semester. Learning gains represent the ratio of actual gain (change in performance) to potential gain (maximum improvement possible based on initial performance); this quantification is often achieved by administering the assessment at the beginning and end of the course. Trends in learning gains can thus reveal remaining misconceptions and pinpoint areas that instructors should consider retooling.
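Written out explicitly (a standard normalized-gain formulation consistent with the ratio described above, with scores expressed as percentages; the exact metric to be used with the FMI is an assumption here):

$$g = \frac{\text{post} - \text{pre}}{100 - \text{pre}}$$

For example, a student who improves from 40% on the pretest to 70% on the posttest achieves a normalized gain of g = 0.5, having realized half of the improvement that was possible given the pretest score.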

FMI results can also be a method to assess the strength and effectiveness of meteorology programs. Departments using the FMI could ensure the production of quality graduates with a solid understanding of the fundamentals of meteorology. The degree of success or failure could have significant implications, not only for department reputation, but also for the ability of graduates to earn professional certifications and licenses. For example, the U.S. Air Force and the Federal Aviation Administration in particular are interested in ensuring accurate understanding of the weather, as evidenced by the weather knowledge tests pilots are required to pass following training. It is thus worthwhile for meteorology programs to assess the efficacy of their curricula.

The FMI would serve well as a way to assess the effectiveness of any modifications made to the introductory course, such as new teaching methods, a new textbook, or different course prerequisites. If changes can be made without deleterious impacts on FMI performance, then those modifications could have more staying power and justification to be applied elsewhere in the curriculum.

An additional application of the FMI exam is to use the questions as part of an in-class assessment of student understanding. Instructors unable to devote two full class periods in a semester to pretesting and posttesting could instead intersperse the questions throughout the semester, presenting relevant questions during class as a way to measure current comprehension. The benefit of this approach is that the instructor can immediately identify an issue and correct any misconceptions head-on (e.g., the Just-in-Time Teaching pedagogy; Novak et al. 1999).

The FMI could also be used to personalize instruction for each class. For example, administering the FMI at the beginning of a semester could reveal that one section had more misconceptions related to fronts and air masses, while another section struggled with climate-related content. This information could then be used to tailor the course in such a way to spend more time addressing each class’s specific misconceptions.

FUTURE WORK AND VALIDATION.

The FMI exam underwent a preliminary evaluation phase during the fall 2013 semester, whereby relevant questions were introduced after classroom instruction. Students responded to each question via an anonymous electronic polling system, followed by classroom discussion. After the class identified the correct answer and provided justification, the instructor probed the students to ensure that the question wording was clear, to determine the extent to which each distractor represented a logical choice, and to learn whether students had considered an answer not offered among the choices. Additional feedback was also obtained from several other nonmeteorology faculty members. After compiling all of the comments and suggestions, the authors revisited each question and applied changes as needed. Nearly all questions were altered in some way to improve clarity, with some requiring significant editing to ensure they were aligned with the goals of the exam, and some being removed altogether. Following this initial iterative editing process, local testing of the FMI is currently underway. The exam was administered in USAFA's introductory meteorology course at the beginning and end of the spring 2014 semester to facilitate a preliminary assessment of the ability of the FMI to identify consistent areas of struggle for students. Additional testing occurred during the fall 2014 semester, and analysis of the results is ongoing.

It is important to note that the development of a mature concept inventory in meteorology with sufficient power to identify misconceptions will be an iterative process that can be expected to take many years. Care must be taken to ensure that each test item truly assesses the desired concept. In particular, high-quality concept inventories must fulfill reliability and validity criteria (Engelhardt 2009). Toward this end, several assessment techniques will be employed as we continue to develop the FMI.

Internal validity refers to the ability of the exam to establish appropriate and consistent correlations that are evident among several samples. One method that has been used to quantify internal validity is Rasch analysis, where consistency is determined based on the combination of student ability and item difficulty. The probability of a specific response is modeled as a function of person and item parameters (e.g., Libarkin and Anderson 2006). External validity refers to the ability of exam results to be generalized to other populations. Although the FMI was authored at USAFA, its intent is to be universally applicable. Significantly different results (e.g., as determined by a Student’s t test or Z test; Engelhardt 2009) between USAFA and other institutions for specific questions or the exam as a whole will signal the need for further editing. Biases will also need to be measured for other types of populations, such as different genders, which can be measured using differential item functioning, whereby the degree of difficulty is matched to demographic data (e.g., Libarkin and Anderson 2006). To aid this pursuit, USAFA will be collaborating with the atmospheric sciences program at the South Dakota School of Mines and Technology (SDSMT) and the University of North Carolina at Charlotte (UNCC) over the next few years, allowing for valuable comparisons to be made and preliminary evaluation of any biases.
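For reference, the dichotomous Rasch model mentioned above is conventionally written as follows, where θ_i is the ability of person i and b_j is the difficulty of item j (standard notation for the general model, not parameters specific to the FMI analysis):

$$P(X_{ij} = 1 \mid \theta_i, b_j) = \frac{e^{\theta_i - b_j}}{1 + e^{\theta_i - b_j}}$$

A student whose ability exactly matches an item's difficulty is modeled as having a 50% chance of answering correctly; systematic disagreement between observed responses and these modeled probabilities flags items that warrant further revision.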

Establishing the reliability of the FMI involves ensuring that the test will produce the same results from one set of circumstances to another. Several statistical measures are useful toward this end, including computing the coefficient alpha, which quantifies the interrelatedness of test items; the Kuder–Richardson correlation coefficient, which measures the covariance of test items; and the split-halves method, which determines the correlation between different halves of test scores (e.g., Libarkin and Anderson 2006; Engelhardt 2009). Additionally, the FMI will ideally be able to produce a wide range of scores, indicating a strong discriminatory power for the exam (i.e., the ability to distinguish between individual students), where Ferguson’s delta is the recommended parameter (e.g., Allen et al. 2004; Engelhardt 2009).
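To illustrate how these statistics could be computed from scored responses, the sketch below (in Python, with hypothetical data; the authors' actual analysis tools are not specified) implements KR-20, which equals coefficient alpha for dichotomous items, and Ferguson's delta for a students-by-items matrix of 0/1 scores:

```python
import numpy as np

def kr20(scores: np.ndarray) -> float:
    """Kuder-Richardson formula 20 (equivalent to coefficient alpha
    for dichotomous items): interrelatedness of the test items."""
    n_items = scores.shape[1]
    p = scores.mean(axis=0)                     # proportion correct per item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (n_items / (n_items - 1)) * (1 - (p * (1 - p)).sum() / total_var)

def fergusons_delta(scores: np.ndarray) -> float:
    """Ferguson's delta: discriminatory power, based on how widely
    total scores are spread across their possible range (0..n_items)."""
    n_students, n_items = scores.shape
    freqs = np.bincount(scores.sum(axis=1), minlength=n_items + 1)
    return (n_students**2 - (freqs**2).sum()) / (
        n_students**2 - n_students**2 / (n_items + 1))

# Toy data: 5 students by 4 items (a real analysis would use full FMI results).
scores = np.array([[1, 1, 1, 0],
                   [1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [1, 1, 1, 1],
                   [0, 0, 1, 0]])
print(f"KR-20: {kr20(scores):.2f}, Ferguson's delta: {fergusons_delta(scores):.2f}")
```

Values of Ferguson's delta above roughly 0.9 are conventionally taken to indicate good discrimination; the split-halves estimate mentioned above can be obtained similarly by correlating odd-item and even-item subtotals.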

The development of other scientific conceptual inventories (e.g., Hestenes et al. 1992; Zeilik et al. 1999; Anderson et al. 2002; Allen et al. 2004; Libarkin and Anderson 2005) provides a significant resource for recommended best practices that will ensure the FMI is a high-quality product. The authors are committed to developing the FMI as a valid and reliable tool for meteorology educators across the country. Following successful testing at USAFA, SDSMT, and UNCC, additional universities will be recruited to expand the sample size and to test additional populations by administering the FMI to their introductory meteorology students, thus further facilitating the validation process of the exam. While this process may take several years, we anticipate that it will be a valuable tool for educators to allow them to pinpoint common areas of difficulty and then work to develop effective instructional approaches that improve student learning.

ACKNOWLEDGMENTS

The authors would like to acknowledge the feedback and input from other USAFA meteorology faculty members, including Capt. Kristin Dowd, Capt. Matt Ellis, and Lt. Col. David Vollmer. Valuable discussions of the project were also provided by the members of USAFA’s Center for Physics Education Research, including Dr. Kimm de la Harpe, Dr. Fred Kontur, Rebecca Lickiss, Capt. Carolyn Tewksbury, 2nd Lt. Ed Schroder, and Maj. Nate Terry. Distribution A, approved for public release, distribution unlimited.

REFERENCES

  • Allen, K., A. Stone, T. R. Rhoads, and T. J. Murphy, 2004: The Statistics Concept Inventory: Developing a valid and reliable instrument. Proc. 2004 American Society for Engineering Education Annual Conf. and Exposition, Salt Lake City, UT, ASEE, 958. [Available online at http://search.asee.org/search/fetch?url=file%3A%2F%2Flocalhost%2FE%3A%2Fsearch%2Fconference%2F28%2FAC%25202004Paper958.pdf&index=conference_papers&space=129746797203605791716676178&type=application%2Fpdf&charset=.]

  • Anderson, D. L., K. M. Fisher, and G. J. Norman, 2002: Development and validation of the Conceptual Inventory of Natural Selection. J. Res. Sci. Teach., 39, 952–978, doi:10.1002/tea.10053.

  • Aron, R. H., M. A. Francke, B. D. Nelson, and W. J. Bisard, 1994: Atmospheric misconceptions. Sci. Teach., 61, 30–33.

  • Bar, V., and A. S. Travis, 1991: Children’s views concerning phase changes. J. Res. Sci. Teach., 28, 363–382, doi:10.1002/tea.3660280409.

  • Chang, J.-Y., 1999: Teachers college students’ conceptions about evaporation, condensation, and boiling. Sci. Educ., 83, 511–526.

  • Crooks, T. J., 1988: The impact of classroom evaluation practices on students. Rev. Educ. Res., 58, 438–481, doi:10.3102/00346543058004438.

  • Dove, J., 1998: Alternative conceptions about the weather. Sch. Sci. Rev., 79, 65–69.

  • Driver, R., 1985: Children’s Ideas in Science. Open University Press, 208 pp.

  • Engelhardt, P. V., 2009: An introduction to classical test theory as applied to conceptual multiple-choice tests. Getting Started Phys. Educ. Res., 2 (1). [Available online at http://www.per-central.org/items/detail.cfm?ID=8807.]

  • Ewing, M. S., and T. J. Mills, 1994: Water literacy in college freshmen: Could a cognitive imagery strategy improve understanding? J. Environ. Educ., 25, 36–40, doi:10.1080/00958964.1994.9941963.

  • Fisher, K. M., and D. E. Moody, 2002: Students’ misconceptions in biology. Mapping Biology Knowledge, K. M. Fisher, J. H. Wandersee, and D. E. Moody, Eds., Science and Technology Education Library, Vol. 11, Kluwer Academic, 55–76.

  • Gonzales-Espada, W. J., 2003: Physics education research in the United States: A summary of its rationale and main findings. Rev. Educ. Cienc., 4, 5–7.

  • Gopal, H., J. Kleinsmidt, J. Case, and P. Musonge, 2004: An investigation of tertiary students’ understanding of evaporation, condensation, and vapor pressure. Int. J. Sci. Educ., 26, 1597–1620, doi:10.1080/09500690410001673829.

  • Haladyna, T. M., S. M. Downing, and M. C. Rodriguez, 2002: A review of multiple-choice item-writing guidelines for classroom assessment. Appl. Meas. Educ., 15, 309–334, doi:10.1207/S15324818AME1503_5.

  • Halloun, I., and D. Hestenes, 1985: The initial knowledge state of college physics students. Amer. J. Phys., 53, 1043–1055, doi:10.1119/1.14030.

  • Heckler, A. F., and E. C. Sayre, 2010: What happens between pre- and post-tests: Multiple measurements of student understanding during an introductory physics course. Amer. J. Phys., 78, 768–777, doi:10.1119/1.3384261.

  • Henriques, L., 2002: Children’s ideas about weather: A review of the literature. Sch. Sci. Math., 102, 202–215, doi:10.1111/j.1949-8594.2002.tb18143.x.

  • Hestenes, D., M. Wells, and G. Swackhamer, 1992: Force Concept Inventory. Phys. Teach., 30, 141–158, doi:10.1119/1.2343497.

  • Kahl, J. D. W., 2008: Reflections on a large-lecture, introductory meteorology course: Goals, assessment, and opportunities for improvement. Bull. Amer. Meteor. Soc., 89, 1029–1034, doi:10.1175/2008BAMS2473.1.

  • Lewis, T. R., 2006: The tornado hazard in southern New England: History, characteristics, student and teacher perceptions. J. Geogr., 105, 258–266, doi:10.1080/00221340608978695.

  • Libarkin, J. C., and S. W. Anderson, 2005: Assessment of learning in entry-level geoscience courses: Results from the Geoscience Concept Inventory. J. Geosci. Educ., 53, 394–401.

  • Libarkin, J. C., and S. W. Anderson, 2006: Development of the Geoscience Concept Inventory. Proceedings of the National STEM Assessment Conference, D. Deeds and B. Callen, Eds., Drury University Doc., 148–158. [Available online at www.openwatermedia.com/downloads/STEM(for-posting).pdf.]

  • Novak, G. M., E. T. Patterson, A. D. Gavrin, and W. Christian, 1999: Just-in-Time Teaching: Blending Active Learning with Web Technology. Prentice Hall, 188 pp.

  • Osborne, R. J., and M. M. Cosgrove, 1983: Children’s conceptions of the changes of state of water. J. Res. Sci. Teach., 20, 825–838, doi:10.1002/tea.3660200905.

  • Paas, F., A. Renkl, and J. Sweller, 2003: Cognitive load theory and instructional design: Recent developments. Educ. Psychol., 38, 1–4, doi:10.1207/S15326985EP3801_1.

  • Piaget, J., 1929: The Child’s Conception of the World. Routledge and Kegan Paul, 399 pp.

  • Piaget, J., 1930: The Child’s Conception of Physical Causality. Kegan Paul, Trench, Trubner & Co., 332 pp.

  • Polito, E. J., 2010: Student conceptions of weather phenomenon across multiple cognitive levels. M.A. dissertation, Dept. of Geosciences, San Francisco State University, 268 pp.

  • Posner, G. J., K. A. Strike, P. W. Hewson, and W. A. Gertzog, 1982: Accommodation of a scientific conception: Toward a theory of conceptual change. Sci. Educ., 66, 211–227, doi:10.1002/sce.3730660207.

  • Rappaport, E. D., 2009: What undergraduates think about clouds and fog. J. Geosci. Educ., 57, 145–151, doi:10.5408/1.3544249.

  • Roebber, P. J., 2005: Bridging the gap between theory and applications: An inquiry into atmospheric science teaching. Bull. Amer. Meteor. Soc., 86, 507–517, doi:10.1175/BAMS-86-4-507.

  • Saçkes, M., L. M. Flevares, and K. C. Trundle, 2010: Four- to six-year-old children’s conceptions of the mechanism of rainfall. Early Child. Res. Quart., 25, 536–546, doi:10.1016/j.ecresq.2010.01.001.

  • Schneps, M., 1997: Lessons from thin air. Minds of Our Own, Harvard-Smithsonian Center for Astrophysics, DVD.

  • Schroeder, C. C., 1993: New students—New learning styles. Change, 25, 21–26, doi:10.1080/00091383.1993.9939900.

  • Stepans, J., and C. Kuehn, 1995: What research says: Children’s conceptions of weather. Sci. Child., 23, 44–47.

  • Sweller, J., 1988: Cognitive load during problem solving: Effects on learning. Cognit. Sci., 12, 257–285, doi:10.1207/s15516709cog1202_4.

  • Sweller, J., P. Ayres, and S. Kalyuga, 2011: Cognitive Load Theory. Springer, 274 pp.

  • Tytler, R., 1998: Children’s concepts of air pressure: Exploring the nature of conceptual change. Int. J. Sci. Educ., 20, 929–958, doi:10.1080/0950069980200803.

  • Wandersee, J. H., J. J. Mintzes, and J. D. Novak, 1994: Research on alternative conceptions in science. Handbook of Research on Science Teaching and Learning, D. L. Gabel, Ed., Macmillan, 177–210.

  • Zeilik, M., C. Schau, and N. Mattern, 1999: Conceptual astronomy. II. Replicating conceptual gains, probing attitude changes across three semesters. Amer. J. Phys., 67, 923–927, doi:10.1119/1.19151.