Educational Research10 (LearnLab wiki, revision of 2008-12-08 by 128.2.177.115, edit summary: /* Grading: */)
<hr />
<div>==Research Methods for the Learning Sciences 85-748==<br />
Spring 2009 Syllabus, Carnegie Mellon University<br />
<br />
====Class times:====<br />
4:30 to 5:50 Tuesday & Thursday<br />
<br />
====Location:====<br />
3501 NSH<br />
<br />
====Instructors:==== <br />
Professor Kenneth R. Koedinger<br />
Office hours by appointment<br />
Location: 3601 Newell-Simon Hall<br />
Phone: 8-7667<br />
Email: Koedinger@cmu.edu<br />
<br />
Dr. Philip I. Pavlik Jr.<br />
Office hours by appointment<br />
Location: 300S Craig St, 224<br />
Phone: 8-1618<br />
Email: ppavlik@andrew.cmu.edu<br />
<br />
====Class URL:==== <br />
[http://www.learnlab.org/research/wiki/index.php/Educational_Research_Methods_09 Main page]<br />
<br />
====Goals:====<br />
The goals of this course are to learn data collection, design, and analysis methodologies that are particularly useful for scientific research in education. The course will be organized in modules addressing particular topics including overview of methods, cognitive task analysis, qualitative methods, protocol and discourse analysis, and educational data mining and log analysis. A key goal is to help students think about and learn how to apply these methods to their own research programs.<br />
<br />
====Course Prerequisites:====<br />
To enroll you must have taken 85-738, "Educational Goals, Instruction, and Assessment," or have the permission of the instructor. <br />
<br />
====Readings Textbook:==== <br />
"The Research Methods Knowledge Base: 3rd edition" by William M.K. Trochim and James P. Donnelly<br />
Other readings will be assigned in class. <br />
<br />
====Reading Reports:====<br />
Students are required to submit at least two posts per week to the course discussion group/blog before class: on Sunday, Monday, or Tuesday (for readings due Tuesday) or on Tuesday, Wednesday, or Thursday (for readings due Thursday). Each post can be about <br />
#a question you had about the reading, something important you did not understand<br />
#an idea inspired by the reading<br />
#an interesting connection with something you learned or did previously in this or another course, or in other professional work or research<br />
#an on-topic, relevant response, clarification, or further comment on another student’s post<br />
<br />
====Grading:==== <br />
<br />
There will be assignments associated with each section of the course. Grades will be determined by your performance on these assignments, by your participation in Reading Reports, and by your participation in class.<br />
<br />
* Course work<br />
** 30% Reading reports <br />
** 70% Homework assignment for each of the 7 main sections<br />
* Project & final paper?<br />
** Design a new study based on one (or more) of these methods that pushes your own research in a new direction.<br />
<br />
====Class Schedule: ==== <br />
(Blank dates continue the preceding topic.)<br />
<br />
*[[1-13-09 ERM]] Basic Research & Experimental Methods (Koedinger, Pavlik)<br />
*[[1-15-09 ERM]]<br />
*[[1-20-09 ERM]] <br />
*[[1-22-09 ERM]] <br />
*[[1-27-09 ERM]] Cognitive Task Analysis (Koedinger, Pavlik)<br />
*[[1-29-09 ERM]] <br />
*[[2-3-09 ERM]] <br />
*[[2-5-09 ERM]] Video and Verbal Protocol Analysis (Lovett, Rosé)<br />
*[[2-10-09 ERM]] <br />
*[[2-12-09 ERM]]<br />
*[[2-17-09 ERM]]<br />
*[[2-19-09 ERM]]<br />
*[[2-24-09 ERM]]<br />
*[[2-26-09 ERM]] Ethnography & Design Experiments?<br />
*[[3-3-09 ERM]] Surveys, Questionnaires, Interviews (Kiesler)<br />
*[[3-5-09 ERM]] <br />
*[[3-10-09 ERM]] NO CLASS – Spring break<br />
*[[3-12-09 ERM]] NO CLASS – Spring break<br />
*[[3-17-09 ERM]] Psychometrics, reliability, Item Response Theory (Junker, Koedinger)<br />
*[[3-19-09 ERM]] <br />
*[[3-24-09 ERM]] <br />
*[[3-26-09 ERM]] Educational data mining (Scheines, Pavlik)<br />
*[[3-31-09 ERM]]<br />
*[[4-2-09 ERM]]<br />
*[[4-7-09 ERM]]<br />
*[[4-9-09 ERM]] <br />
*[[4-14-09 ERM]] <br />
*[[4-16-09 ERM]] NO CLASS – Spring Carnival<br />
*[[4-21-09 ERM]] Cognitive Task Analysis - Revisited (Koedinger, Pavlik)<br />
*[[4-23-09 ERM]]<br />
*[[4-28-09 ERM]] Wrap-up<br />
*[[4-30-09 ERM]] Wrap-up</div>

Composition Effect Kao Roll (LearnLab wiki, revision of 2008-11-13 by 128.2.177.115, edit summary: /* Meta-data */)
<hr />
<div>== The Composition Effect - What is the Source of Difficulty in Problems which Require Application of Several Skills? ==<br />
Ido Roll, Yvonne Kao, Kenneth R. Koedinger<br />
<br />
=== Meta-data ===<br />
<br />
PIs: Ido Roll, Yvonne Kao, Kenneth R. Koedinger<br />
<br />
Other Contributors: <br />
{| border="1"<br />
! Study # !! Start Date !! End Date !! LearnLab Site !! # of Students !! Total Participant Hours !! DataShop? <br />
|-<br />
| '''1''' || 9/2006 || 10/2006 || CWCTC (Geometry) || 98 || 98 || yes<br />
|}<br />
<br />
=== Abstract ===<br />
<br />
<br />
This study found that the presence of distracters creates significant difficulty for students solving geometry area problems, but that practice on composite area problems improves students’ ability to ignore distracters. In addition, this study found some support for the hypothesis that the increased spatial processing demands of a complex diagram can negatively impact performance and could be a source of a composition effect in geometry. <br />
<br />
=== Glossary ===<br />
<br />
- Composite problems: Problems which require the application of several skills, such as solving 3x+6=0 for x.<br />
<br />
- Single-step problems: Problems which require the application of a single skill, such as y+6=0 or 3x=-6.<br />
<br />
- DFA (Difficulty Factor Analysis): A test composed of pairs of items that vary along a single dimension. It makes it possible to evaluate the difficulty contributed by each dimension separately.<br />
<br />
- The Composition Effect: The finding that composite problems are harder than a set of single-step problems using the same skills.<br />
<br />
<br />
=== Research question ===<br />
<br />
What is the main source of difficulty in composite problems?<br />
<br />
<br />
=== Background and Significance ===<br />
<br />
Although much work has been done to improve students’ math achievement in the United States, geometry achievement appears to be stagnant. While the 2003 TIMSS found significant gains in U.S. eighth-graders’ algebra performance between 1999 and 2003, it did not find a significant improvement on geometry items over the same period (Gonzales et al., 2004). Furthermore, of the five mathematics content areas assessed by TIMSS, geometry was the weakest for U.S. eighth-graders (Mullis, Martin, Gonzales, & Chrostowski, 2004). While students have often demonstrated reasonable skill in “basic, one-step problems” (Wilson & Blank, 1999, p. 41), they often struggle with extended, multi-step problems in which they have to construct a free response rather than select a multiple-choice item. Thus it is our goal to examine the sources of difficulty in multi-step geometry problems and to determine how to address these difficulties during instruction.<br />
<br />
Heffernan and Koedinger (1997) found a composition effect in multi-step algebraic story problems: the probability of correctly completing the multi-step problem was less than the product of the probabilities of correctly completing each of the subproblems, P(Composite) < P(Subproblem A) × P(Subproblem B). They suggested that this difference could be due to an exponential increase in the number of possible problem-solving actions as a problem becomes more complex, or to a missing or over-specialized knowledge component, such as students not understanding that whole subexpressions can be manipulated like single numbers or variables. Our research questions are: is there a composition effect in multi-step geometry area problems, e.g., a problem in which the student must subtract the area of an inner shape from the area of an outer shape to find the area of a shaded region, and if so, what might be the source of the composition effect? Would it be a missing or over-specialized knowledge component, as concluded by Heffernan and Koedinger, or would it be a combinatorial search?<br />
<br />
In order to answer these questions, we first needed to assess the difficulty of a single-step area problem. Koedinger and Cross (2000) found that the presence of distracter numbers on parallelogram problems (the length of a side was given in addition to the lengths of the height and base) significantly increased the difficulty of the problems due to students’ shallow application of area knowledge. In particular, students seemed to have over-generalized a procedure for finding the area of rectangles (multiplying the lengths of adjacent sides) to parallelograms. In addition, Koedinger and Cross conjectured that non-standard orientations for shapes (non-horizontal bases and non-vertical heights) would also expose students’ shallow knowledge. Given that a multi-step area problem inherently contains distracters and often features shapes that are rotated from their standard orientations, it is important to follow Koedinger and Cross’s lead and get a baseline measure of how distracters and orientation affect performance on single-step area problems. Then we will study how combining single-step area problems into a typical “find the shaded area” composite area problem affects performance. In these types of problems, students are required to perform three steps: calculate the area of the outer shape, calculate the area of the inner shape, and then subtract the two areas.<br />
<br />
We believe that we will find a composition effect in these types of geometry area problems. One possible source of the effect is the additional spatial-processing demand placed by a more complex diagram. Koedinger and Anderson (1990) found that a hallmark of geometry expertise was the ability to parse a complex diagram into perceptual chunks that could be used to guide a search of problem-solving schemas. Geometry novices most likely cannot parse complex diagrams into meaningful perceptual chunks quickly or efficiently, and thus increasing diagram complexity could result in increased problem difficulty. This explanation is more consistent with the combinatorial-search explanation for the composition effect than with the missing-skill explanation favored by Heffernan and Koedinger. This conjecture leads to an interesting prediction: in contrast to the composition effect found by Heffernan and Koedinger, the probability of correctly completing a composite problem should be greater than the product of the probabilities of correctly completing each of its three subproblems. This is because in completing a single composite problem, the act of parsing the complex diagram need only be performed once, whereas it needs to be performed at least twice when completing the three subproblems separately. This prediction has two corollaries: performance on the Outer Shape subproblem should be lower than performance on a mathematically equivalent problem using a simple diagram, and the probability of correctly completing a composite problem should be equal to the product of the probabilities of correctly completing the Subtraction subproblem, the Inner Shape subproblem, and a simple-diagram equivalent of the Outer Shape subproblem.<br />
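The independence assumption behind these predictions can be made concrete with a small numeric sketch. The accuracy values below are made up for illustration; only the inequality logic comes from the text above:

```python
# If the three subskills of a composite area problem combined
# independently, composite accuracy would equal the product of the
# subproblem accuracies.  All values here are hypothetical.
p_outer = 0.70     # P(correct) on the Outer-shape subproblem
p_inner = 0.80     # P(correct) on the Inner-shape subproblem
p_subtract = 0.90  # P(correct) on the Subtraction subproblem

predicted_composite = p_outer * p_inner * p_subtract  # 0.504

observed_composite = 0.40  # hypothetical observed composite accuracy

# Heffernan & Koedinger's composition effect: observed below predicted.
composition_effect = observed_composite < predicted_composite
print(round(predicted_composite, 3), composition_effect)
```

Under the spatial-parsing account sketched above, the observed composite accuracy would instead exceed `predicted_composite`.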
<br />
=== Independent Variables ===<br />
<br />
Instruction in the form of a solved example, targeting a common misconception: identifying the base and height in a cluttered diagram. <br />
<br />
=== Dependent variables ===<br />
<br />
Three tests are used in the study:<br />
- Pre-test: given before all instruction<br />
- Mid-test: given after students have learned about single-step problems but before composite problems<br />
- Post-test: given after students have learned and practiced all material. <br />
<br />
The tests include the following items. Some are [[transfer]] items that evaluate robust learning, since they require an adaptive application of the knowledge learned and practiced in class.<br />
<br />
* Simple diagram:<br />
*# no distractors, canonical orientation<br />
*# distractors, canonical orientation<br />
*# no distractors, tilted orientation<br />
*# distractors, tilted orientation<br />
* Complex diagram:<br />
*# Given complex diagram, ask for skill A<br />
*# Given complex diagram, ask for skill B<br />
*# Given steps A and B, ask for skills C (which requires A and B)<br />
*# Given complex diagram, ask for C (which requires A and B)<br />
<br />
=== Hypothesis ===<br />
<br />
# Adding distracters to a basic area problem and rotating the figure from its standard orientation will make the problem more difficult. <br />
# We will find a composition effect in area problems, in that the probability of correctly completing a composite problem is not equal to the product of the probabilities of correctly completing the three subskills: P(Composite) ≠ P(Outer) × P(Inner) × P(Subtract). <br />
# P(Composite) > P(Outer) × P(Inner) × P(Subtract), P(Outer) < P(Simple Outer Eq.), and P(Composite) = P(Simple Outer Eq.) × P(Inner) × P(Subtract) due to the demands of spatially parsing the diagram.<br />
<br />
=== Findings ===<br />
<br />
<br />
An alpha value of .05 was used for all statistical tests. <br />
<br />
==== Comparison of Mid-test and Post-test Performance ====<br />
Scores on the pre-test were at floor, ranging from 0 to 50% correct (M = 14.94%, SD = 13.61%). Pre-test scores did not correlate significantly with either mid-test or post-test scores, so we did not analyze the pre-test further. Mid-test and post-test performance were significantly correlated (r<sup>2</sup> = 0.239, p < .001). A paired t-test found significant gains in overall performance from mid-test to post-test, t(65) = 3.115, p = 0.003, 2-tailed, with highly significant gains on Simple problems, t(65) = 3.104, p = 0.003, 2-tailed, and significant gains on Complex problems, t(65) = 2.308, p = 0.024. Participants performed better on the Simple problems than on the Complex problems; this difference was significant at mid-test, t(65) = 2.214, p = .030, 2-tailed, and at post-test, t(65) = 2.355, p = .022, 2-tailed. These results are presented in Table 1. <br />
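The paired t-tests above compare each student's mid-test and post-test scores. A minimal sketch of that computation using only the standard library, with hypothetical scores (the study itself had N = 66 students):

```python
import math
import statistics

# Paired t-test on per-student mid-test vs. post-test scores.
# The scores below are illustrative, not the study's data.
mid_test  = [0.50, 0.60, 0.55, 0.70, 0.65, 0.40, 0.75, 0.60]
post_test = [0.60, 0.75, 0.65, 0.80, 0.70, 0.55, 0.85, 0.70]

diffs = [post - mid for mid, post in zip(mid_test, post_test)]
mean_gain = statistics.mean(diffs)
sd_gain = statistics.stdev(diffs)
t = mean_gain / (sd_gain / math.sqrt(len(diffs)))

# Two-tailed critical value for alpha = .05 with df = 7 is about 2.365,
# so t above that indicates a significant mid-to-post gain.
print(round(mean_gain, 4), round(t, 2))
```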
<br />
==== Effects of Distracters, Orientation, and Shape on Simple Problem Performance ====<br />
<br />
We used a binary logistic regression analysis to predict the probability that students would answer a Simple problem correctly on the mid-test and the post-test. Distracters, Rotation, and three dummy variables coding diagram shape (parallelogram, pentagon, trapezoid, or triangle) were entered into the equation as predictors. <br />
<br />
Table 1: Mean performance on mid-test and post-test by diagram type <br />
<br />
{| border="1"<br />
! rowspan="2" | Type !! colspan="2" | Mid-test (%) !! colspan="2" | Post-test (%) !! colspan="2" | Gain (%)<br />
|- <br />
! M !! SD !! M !! SD !! M !! SD <br />
|- <br />
| Overall || 65.34 || 27.79 || 75.41 || 23.66 || 10.07** || 26.26 <br />
|-<br />
| Simple || 68.56 || 30.79 || 79.17 || 23.03 || 10.61** || 27.76 <br />
|-<br />
| Complex || 61.74 || 30.13 || 71.21 || 31.08 || 9.47* || 33.33 <br />
|-<br />
| Simple-Complex || 6.82* || 25.03 || 7.96* || 27.44 || || <br />
|}<br />
<br />
At mid-test, the full model differed significantly from a model with intercept only, χ<sup>2</sup>(5, N = 264) = 17.884, Nagelkerke R<sup>2</sup> = .092, p = .003. Overall, this model had a success rate of 69.3%, correctly classifying 90.1% of correct responses and 24.1% of incorrect responses. The presence of distracters was a significant predictor of problem accuracy at mid-test: students were only .541 times as likely to respond correctly when the problem contained distracters. Shape was also a significant predictor: students were only .484 times as likely to respond correctly when the problem involved a pentagon rather than a triangle. Rotation was not a significant predictor of problem accuracy.<br />
<br />
At post-test, the full model differed significantly from a model with intercept only, χ<sup>2</sup>(5, N = 264) = 15.533, Nagelkerke R<sup>2</sup> = .089, p = .008. Overall, this model had a success rate of 79.2%, correctly classifying 100% of the correct responses but 0% of incorrect responses. Distracters were no longer a significant predictor at post-test, and Rotation remained non-significant. Shape remained a significant predictor of problem accuracy: students were only .335 times as likely to respond correctly when the problem involved a pentagon rather than a triangle. <br />
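A binary logistic regression of this kind can be sketched in outline. The code below fits the model by gradient ascent on synthetic data, so the design and the `distracters`/`rotation` effect sizes are assumptions for illustration, not the study's estimates:

```python
import numpy as np

# Sketch of a binary logistic regression with binary predictors,
# fit by gradient ascent on the log-likelihood.
rng = np.random.default_rng(0)
n = 2000

distracters = rng.integers(0, 2, n)  # 1 if distracter numbers present
rotation = rng.integers(0, 2, n)     # 1 if the figure is rotated
X = np.column_stack([np.ones(n), distracters, rotation])

# Assumed true effects: distracters lower the log-odds of a correct
# response; rotation has no effect (values are made up).
true_beta = np.array([1.0, -0.6, 0.0])
p_correct = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = (rng.random(n) < p_correct).astype(float)

beta = np.zeros(3)
for _ in range(3000):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += X.T @ (y - mu) / n  # average-gradient ascent step

odds_ratios = np.exp(beta)
# An odds ratio below 1 for distracters mirrors the reported
# ".541 times as likely" pattern; rotation's stays near 1.
print(odds_ratios.round(3))
```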
<br />
==== Effects of Skill Composition and Shape on Complex Problem Performance ====<br />
<br />
We used a binary logistic regression analysis to predict the probability that students would answer a Complex problem correctly on the mid-test and the post-test. Whether the problem required an Outer calculation, an Inner calculation, or Subtraction was entered into the equation as predictors. As before, three dummy variables coding diagram shape were entered as well. At mid-test, the full model differed significantly from a model with intercept only, χ<sup>2</sup>(6, N = 264) = 12.862, Nagelkerke R<sup>2</sup> = .065, p = .045. Overall, this model had a success rate of 64.0%, correctly classifying 85.9% of correct responses and 28.7% of incorrect responses. At post-test, the full model also differed significantly from a model with intercept only, χ<sup>2</sup>(6, N = 264) = 25.019, Nagelkerke R<sup>2</sup> = .129, p < .001. Overall, this model had a success rate of 70.5%, correctly classifying 95.7% of correct responses and 7.9% of incorrect responses. In both models, the Outer calculation was the only significant predictor of a correct response: when the problem required an Outer calculation (in the Outer and Composite conditions), students were only .439 and .258 times as likely to respond correctly at mid-test and post-test, respectively. <br />
<br />
==== Predictors of Performance on Composite Problems ====<br />
<br />
We used a binary logistic regression to predict the probability that a student would answer a Composite Complex problem correctly on the mid-test and the post-test, given his/her success on Inner, Outer, and Subtract; on the Distracters+Rotation (DR) Simple problem that is mathematically equivalent to Outer; and on the shape of the diagram (parallelogram, pentagon, trapezoid, or triangle), coded using three dummy variables. This model differed significantly from a model with intercept only for both mid-test, χ<sup>2</sup>(7, N = 66) = 26.567, Nagelkerke R<sup>2</sup> = .442, p < .001, and post-test, χ<sup>2</sup>(7, N = 66) = 30.466, Nagelkerke R<sup>2</sup> = .495, p < .001. The mid-test model had an overall success rate of 75.8%, correctly classifying 75.0% of correct responses and 76.5% of incorrect responses. The post-test model had an overall success rate of 77.3%, correctly classifying 83.8% of correct responses and 69.0% of incorrect responses.<br />
<br />
The significant predictor variables in the model changed from mid-test to post-test. At mid-test, Subtract and DR success were significant predictors of a correct response on Composite problems: students who answered Subtract correctly were 9.168 times more likely to answer Composite correctly, and students who answered DR correctly were 5.891 times more likely. At post-test, Subtract and DR success remained significant predictors, with odds ratios of 20.532 and 9.277, respectively. In addition, Outer success became a significant predictor: students who answered Outer correctly were 6.366 times more likely to answer Composite correctly. Shape was not a significant predictor at either mid-test or post-test. These results are presented in Table 1. <br />
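The "times more likely" figures in these models are odds ratios, obtained by exponentiating the fitted logistic-regression coefficients (OR = exp(β)). A small sketch of the conversion, using the reported 9.168 odds ratio for Subtract success:

```python
import math

# A logistic-regression coefficient beta maps to an odds ratio exp(beta),
# so the reported odds ratio can be mapped back to the coefficient.
or_subtract = 9.168
beta_subtract = math.log(or_subtract)     # coefficient on the log-odds scale

print(round(beta_subtract, 3))            # roughly 2.216
print(round(math.exp(beta_subtract), 3))  # back to the odds ratio
```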
<br />
==== Assessing the Difficulty of Spatial Parsing ====<br />
<br />
We took Accuracy(Outer) × Accuracy(Inner) × Accuracy(Subtract) and compared it to Accuracy(Composite) for each student using a paired, 2-tailed t-test. This difference was significant at mid-test, t(65) = 2.193, p = .032: students performed better on Composite (M = 48.48%, SD = 50.36%) than was predicted by the product (M = 33.33%, SD = 47.52%). This difference was no longer significant at post-test. In contrast, Accuracy(DR) × Accuracy(Inner) × Accuracy(Subtract) did not differ significantly from Accuracy(Composite) at either mid-test or post-test. Paired, 2-tailed t-tests found that performance did not differ significantly between Outer and DR at either mid-test, t(65) = -.851, p = .398, or post-test, t(65) = -1.356, p = .180. <br />
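This analysis multiplies each student's 0/1 subproblem scores and compares the product to the Composite score with a paired t-test. A sketch with synthetic scores (not the study's data):

```python
import math
import statistics

# Per-student 0/1 accuracy on each subproblem and on the Composite item.
# Synthetic values for illustration only.
outer     = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
inner     = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]
subtract  = [1, 1, 0, 1, 1, 1, 1, 1, 1, 1]
composite = [1, 0, 1, 1, 1, 1, 1, 0, 1, 1]

# Predicted composite success if the three skills combine independently.
predicted = [o * i * s for o, i, s in zip(outer, inner, subtract)]

diffs = [c - p for c, p in zip(composite, predicted)]
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)
t = mean_d / (sd_d / math.sqrt(len(diffs)))
# t > 0 means students did better on Composite than the product predicts,
# the direction reported at mid-test.
print(mean_d, round(t, 3))
```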
<br />
<br />
<br />
=== Explanation ===<br />
<br />
<br />
We will return to our original hypotheses to begin the discussion. It was clear that distracters had a negative impact on Simple performance at mid-test, although this effect had largely disappeared by post-test. Although we did not find significant effects of Rotation on Simple performance, we did find evidence that many students simply rotated the paper until they were viewing the figure in a standard orientation, effectively negating our Rotation manipulation. Thus, our first hypothesis is partially supported.<br />
<br />
We did find a composition effect in area problems: the probability of success on Composite problems could not be predicted by simply multiplying the probabilities of success for the three subproblems. Thus, our second hypothesis is supported.<br />
<br />
We found only partial support for our hypothesis that the source of the composition effect is the diagram-parsing load. We did find that the probability of success on Composite problems was greater than the product of probabilities for the three subproblems, but only at mid-test. In addition, there were no significant differences between Outer and DR at either mid-test or post-test, although we feel it is worth noting that the data trend in the predicted direction, with Outer being more difficult than DR on both mid-test and post-test. Finally, our P(Composite) = P(Simple Outer Eq.) × P(Inner) × P(Subtract) model did a good job of predicting actual performance on Composite problems.<br />
<br />
To conclude our discussion, we would like to address the differences between performance at mid-test and at post-test and the implications for designing instructional interventions. First, we would like to note that instruction in the Area Composition unit of the Geometry Cognitive Tutor was able to improve performance on all skills, not just skills new to composite problems. This suggests that students may not have fully mastered the single-step area skills prior to beginning Area Composition, but that Area Composition continues to provide practice on these skills. Furthermore, the single-step skill practice in the Area Composition unit seems particularly effective at removing the effects of distracters on performance. This makes a great deal of intuitive sense if you consider composite area problems to inherently contain distracters. Second, although we did not find strong support for our contention that spatial parsing is difficult for students, we feel that training students to quickly identify important perceptual chunks can still have a positive impact on performance. If, for example, students were trained to look for a “T” or “L” structure and map the segments onto the base and height of a shape, students might be less prone to developing shallow knowledge about area. <br />
<br />
<br />
=== Descendants ===<br />
<br />
<br />
<br />
=== Annotated bibliography ===<br />
<br />
# Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.) (2000). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.<br />
# Gonzales, P., Guzman, J. C., Partelow, L., Pahlke, E., Jocelyn, L., Kastberg, D., & Williams, T. (2004). Highlights From the Trends in International Mathematics and Science Study: TIMSS 2003. Washington, DC: National Center for Education Statistics. <br />
# Heffernan, N. T. & Koedinger, K. R. (1997). The composition effect in symbolizing: The role of symbol production vs. text comprehension. In Proceedings of the Nineteenth Annual Conference of the Cognitive Science Society,307-312. Hillsdale, NJ: Erlbaum. <br />
# Koedinger, K. R., & Anderson, J. R. (1990). Abstract planning and perceptual chunks: Elements of expertise in geometry. Cognitive Science, 14, 511-550. <br />
# Koedinger, K. R., Anderson, J. R., Hadley, W. H., & Mark, M. (1997). Intelligent tutoring goes to school in the big city. International Journal of Artificial Intelligence in Education, 8, 30-43. <br />
# Koedinger, K. R., & Cross, K. (2000). Making informed decisions in educational technology design: Toward meta-cognitive support in a cognitive tutor for geometry. In Proceedings of the Annual Meeting of the American Educational Research Association (AERA), New Orleans, LA. <br />
# Mullis, I. V. S., Martin, M. O., Gonzalez, E. J., & Chrostowski, S. J. (2004). Findings from IEA’s Trends in International Mathematics and Science Study at the fourth and eighth grades. Chestnut Hill, MA: TIMSS & PIRLS International Study Center, Boston College. <br />
# Owen, E., & Sweller, J. (1985). What do students learn while solving mathematics problems? Journal of Educational Psychology, 77, 272-284.<br />
# Simon, H. A., & Lea, G. (1974). Problem solving and rule induction: A unified view. In L. W. Gregg (Ed.), Knowledge and cognition. Hillsdale, NJ: Erlbaum.<br />
# Wilson, L. D. & Blank, R. K. (1999). Improving mathematics education using results from NAEP and TIMSS. Washington, DC: Council of Chief State School Officers, State Education Assessment Center. <br />
<br />
<br />
[[Category:Empirical Study]]</div>