Designing Contrasting Cases for Inductive Learning

Designing Contrasting Cases for Inductive Learning is a Goal 1 (Exploration) project under the Cognition and Student Learning program. The purpose was to develop and test a theory for how to select sets of problems that help students understand the quantitative structure of empirical phenomena. Prior research indicates that problem sets incorporating contrasting cases can foster an appreciation of deep structure, flexibility, transfer, and preparation for future learning. Contrasting cases exhibit systematic variation, which is useful for inducing similarities and differences. Crucially, a vetted theory for how to pick the right contrasting cases is missing, which means the selection of examples is often based on intuition rather than science. In this work, the malleable factor is the set of instructional examples and how those examples interact with fundamental cognitive processes of induction.

The research took place in the laboratory and in regular classrooms. One-hour laboratory studies with community college students permitted rapid experimentation on basic hypotheses, while classroom studies enabled more sustained interventions that examined cumulative effects and robustness. The laboratory sample came from Foothill Community College (CA), whose students have more variability in educational background than Stanford students. The classroom sample came from secondary schools in the semi-urban Milpitas and San Carlos (CA) school districts, both of which have high ethnic and SES diversity. We targeted science topics relevant to 8th-10th grade students and consistent with the new Common Core standards.

The research interventions involved multiple laboratory studies plus yearly classroom studies that tested laboratory results in situ. In general, all participants received similar task demands (e.g., invent a way to characterize the variation among the cases). The experimental comparisons involved parametric variations in the cases that students received. Afterwards, participants completed post-tests to determine what they learned from the cases. Factorial designs isolated key parameters of contrasting cases (e.g., cases that exemplify main effects, interaction effects, both, or neither). We used random assignment for laboratory studies and stratified random assignment for classroom studies, using science and math achievement as the stratification variable, as sketched below.
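
To make the assignment procedure concrete, here is a minimal sketch of stratified random assignment in Python. It is our illustration, not the project's actual script: the roster, the achievement field, and the two-condition design are hypothetical.

```python
import random

def stratified_assign(students, score_key="achievement", n_strata=4, seed=42):
    """Assign students to two conditions, balancing on an achievement score.

    `students` is a list of dicts; `score_key` names a hypothetical field
    holding each student's science/math achievement composite.
    """
    rng = random.Random(seed)
    ranked = sorted(students, key=lambda s: s[score_key])
    stratum_size = max(1, len(ranked) // n_strata)

    assignments = {}
    for i in range(0, len(ranked), stratum_size):
        stratum = ranked[i:i + stratum_size]
        rng.shuffle(stratum)
        # Alternate conditions within each achievement stratum so
        # achievement is balanced across treatment and control.
        for j, student in enumerate(stratum):
            assignments[student["id"]] = "treatment" if j % 2 == 0 else "control"
    return assignments

roster = [{"id": i, "achievement": s}
          for i, s in enumerate([72, 85, 91, 60, 78, 88, 95, 67])]
print(stratified_assign(roster))
```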

The research method is experimental. In most studies, instructional treatments were organized as 2×2 between-subjects factorials. Learning outcomes involved two classes of measures that are of theoretical interest and have been validated in prior research. One class of measures evaluated deep understanding through students’ abilities to explain a formula (e.g., why does the density formula divide by volume?), to recreate the instructional examples (e.g., do students redraw the deep structure?), to transfer (e.g., do students use the basic notion of a ratio to describe a new set of cases?), and to demonstrate preparation for future learning (e.g., do students exhibit superior subsequent learning on new topics once they are no longer in treatment and are receiving regular instruction?). The second class of measures emphasized fluency, including abilities to apply the formula and solve word problems. The two classes of measures serve as a within-subjects factor (deep understanding versus fluency) that is crossed with the between-subjects factors. In general, successful results depended on an interaction such that fluency performance looks the same across conditions (indicating that all treatments successfully taught the basic content) while deep understanding differs by treatment. Some measures appeared at both pre- and post-test; transfer measures, which are more delicate because they can be reactive, appeared only at post-test.
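
As a sketch of the underlying statistical model (our notation, not the proposal's), the design can be written as a mixed-design ANOVA with two between-subjects case factors A and B and a within-subjects measure-class factor M:

```latex
% A_i, B_j: 2x2 between-subjects case manipulation
% M_m: within-subjects measure class (deep understanding vs. fluency)
% pi_{k(ij)}: random effect of subject k nested in between-subjects cell (i, j)
\[
Y_{ijkm} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} + \pi_{k(ij)}
         + \gamma_m + (\alpha\gamma)_{im} + (\beta\gamma)_{jm}
         + (\alpha\beta\gamma)_{ijm} + \varepsilon_{ijkm}
\]
```

In this notation, the predicted pattern is a treatment-by-measure-class interaction: treatment terms near zero at the fluency level of M, but reliable at the deep-understanding level.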

Data analysis varied by specific experimental design, data type, and classroom realities. In general, we used MANOVAs with experimental treatments as between-subjects factors and dependent measures and time of test (when appropriate) as within-subjects factors.
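
A hedged sketch of such an analysis using statsmodels' MANOVA follows; the data frame and its column names are invented for illustration, and a within-subjects time factor would additionally require repeated-measures handling not shown here.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical post-test scores: two dependent measures per student and
# the two between-subjects case factors (all column names illustrative).
df = pd.DataFrame({
    "deep":     [3.1, 2.9, 2.8, 3.0, 4.0, 4.1, 4.2, 4.4],
    "fluency":  [7.0, 7.3, 7.2, 7.0, 7.1, 6.8, 6.9, 7.1],
    "main_fx":  ["no", "no", "no", "no", "yes", "yes", "yes", "yes"],
    "inter_fx": ["no", "no", "yes", "yes", "no", "no", "yes", "yes"],
})

# Multivariate test of the 2x2 between-subjects design on both measures.
fit = MANOVA.from_formula("deep + fluency ~ main_fx * inter_fx", data=df)
print(fit.mv_test())
```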

Subordinate MANCOVAs examined correlations between performance during learning and post-test performance, as well as effects of pre-existing student differences. Given the practical reality of schools, we anticipated that some studies would employ intact classes, in which case we used mixed models with class as a random effect.
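
For the intact-class case, here is a minimal sketch with statsmodels' mixedlm, treating class as a random intercept. The simulated data, column names, and effect sizes are all hypothetical, and the sketch captures only the class-level random effect (a fuller model would also account for repeated measures within students).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate long-format data: two intact classes per condition, five
# students each, one row per student per measure class.
rows = []
for class_id, treatment in [("c1", "cases"), ("c2", "cases"),
                            ("c3", "control"), ("c4", "control")]:
    for _ in range(5):
        for measure, base in [("deep", 3.0), ("fluency", 7.0)]:
            # Hypothetical effect: contrasting cases boost deep scores only.
            bump = 1.0 if (treatment == "cases" and measure == "deep") else 0.0
            rows.append({"class_id": class_id, "treatment": treatment,
                         "measure_class": measure,
                         "score": base + bump + rng.normal(0, 0.3)})
df = pd.DataFrame(rows)

# Intact class as a random intercept; treatment, measure class, and
# their interaction as fixed effects.
model = smf.mixedlm("score ~ treatment * measure_class",
                    data=df, groups=df["class_id"])
print(model.fit().summary())
```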


Principal Investigator: Daniel Schwartz (Stanford Graduate School of Education)
Sponsored by: United States Department of Education
Dates: 07/01/14 – 06/30/18