The full conversation--facilitated through Google Hangouts--can be viewed on Raymond Johnson's blog here.
It must first be said that I loved the article; it would be a great starting point for any math educator to begin to wrestle with the distinctions obnoxiously present in the field of math education. It also holds timeless value for those more experienced in the digestion of such literature. Full reference can be found at the end of this post.
In a nutshell, Skemp proposes that there are two types of understanding when it comes to the field of mathematics. The first--relational understanding--describes the process of knowing what to do and why you are doing it. The second--instrumental understanding--describes the process of applying rules to arrive at answers. It must be said right away that these two understandings are not mutually exclusive. In fact, most of our conversation revolved around their classification, necessity, and interrelation.
The group--following Skemp's lead--addressed the issue of assessment in light of relational understanding. How can we begin to assess deep understanding of mathematics, especially understanding that is not our own? The methods of portfolios and interviews were suggested, but dismissed as regular assessment pieces due to the extensive time burden.
Here is my thought on the difficulty of assessing relational understanding:
Relational understanding is difficult to assess because teachers use an instrumental approach in all assessment.
Allow me to briefly explain...
Many teachers match instrumental--or algorithmic--assessments to their instrumental teaching. That seems like the correct thing to do. Recent work in mathematics education suggests that including lessons that build relational understanding has certain benefits for student learning. The only problem is that teachers need to be able to spot and reward that learning.
Let's say you are working through a task where a student has generalized a problem, worked effectively with others, deduced possible pathways toward a solution, checked the reasonableness of their solution, and even posed further problems or abstracted universal truths. In order to judge their mathematical progress, you naturally start looking for key indicators of success.
- Did they use a diagram?
- Did they consider all possible cases?
- Did they link the problem to a previous one?
- Did they check their answer?
- Did they persevere throughout the process?
Teachers are using an algorithm to assess work that is designed to empower students to see the larger picture. I believe that, over the years, teachers develop their own set of rules that govern assessment and then apply them instrumentally to assess students. (Now this is a very young belief, and I am open to counter-beliefs.)
I also think that within this relational-instrumental struggle exists the birth of the infallible (**cough**) assessment tool known as the almighty rubric. All rubrics do is attempt to instrumentally describe relational skills, yet they are upheld as the ideal way to achieve relational assessment. As teachers, our entire assessment framework has been rubricized; everything must fit into a nice box in a nice grid.
This is the portion where I should suggest a solution, but I am far from one. I have only just begun to realize that even my observations during otherwise "relational" tasks are of an instrumental nature. Algorithms are a part of mathematics, and assessing them is--by its very nature--algorithmic. I am now wondering what a non-algorithmic assessment looks like.
Skemp, R. R. (1976/2006). Relational understanding and instrumental understanding. Mathematics Teaching in the Middle School, 12(2), 88–95. Originally published in Mathematics Teaching. Retrieved from http://www.jstor.org/stable/41182357