The full conversation--facilitated through Google Hangouts--can be viewed on Raymond Johnson's blog here.

It must first be said that I loved the article; it would be a great starting point for any math educator to begin to wrestle with the distinctions obnoxiously present in the field of math education. It also holds timeless value for those more experienced in the digestion of such literature. Full reference can be found at the end of this post.

Moving forward...

In a nutshell, Skemp proposes that there are two types of understanding when it comes to the field of mathematics. The first--relational understanding--describes the process of knowing what to do and why you are doing it. The second--instrumental understanding--describes the process of applying rules to arrive at answers. It must be said right away that these two understandings are not mutually exclusive. In fact, most of our conversation revolved around their classification, necessity, and interrelation.

The group--following Skemp's lead--addressed the issue of assessment in light of relational understanding. How can we begin to assess deep understanding of mathematics, especially understanding that is not our own? The methods of portfolios and interviews were suggested, but dismissed as regular assessment pieces due to the extensive time burden.

Here is my thought on the difficulty of assessing relational understanding:

*Relational understanding is difficult to assess because teachers use an instrumental approach in **all** assessment.*

Allow me to briefly explain...

A lot of teachers match instrumental--or algorithmic--assessments to their instrumental teaching. That seems like the correct thing to do. Recent work in mathematics education suggests that including lessons that build relational understanding has certain benefits for student learning. The only problem is that teachers need to be able to spot and reward that learning.

Let's say you are working through a task where a student has generalized a problem, worked effectively with others, deduced possible pathways toward a solution, checked the reasonableness of their solution, and even posed further problems or abstracted universal truths. In order to judge their mathematical progress, you naturally start looking for key indicators of success.

Things like:

- Did they use a diagram?
- Did they consider all possible cases?
- Did they link the problem to a previous one?
- Did they check their answer?
- Did they persevere throughout the process?

Teachers are using an algorithm to assess work that is designed to empower students to see the larger picture. I believe that, over the years, teachers develop their own set of rules that govern assessment and then apply them *instrumentally* to assess students. (Now this is a very young belief, and I am open to counter-beliefs.)
I also think that within this relational-instrumental struggle exists the birth of the infallible (**cough**) assessment tool known as the almighty rubric. All rubrics do is attempt to instrumentally describe relational skills, yet they are upheld as the ideal way to achieve relational assessment. As teachers, our entire assessment framework has been **rubricized**; everything must fit into a nice box in a nice grid.
This is the portion where I should suggest a solution, but I am far from one. I have only just begun to realize that even my observations during otherwise "relational" tasks are of an instrumental nature. Algorithms are a part of mathematics, and assessing them is--by its very nature--algorithmic. I am now wondering what a non-algorithmic assessment looks like.

NatBanting

Reference:

Skemp, R. R. (1976/2006). Relational understanding and instrumental understanding. *Mathematics Teaching in the Middle School, 12*(2), 88–95. (Originally published in *Mathematics Teaching*.) Retrieved from http://www.jstor.org/stable/41182357

I try something very different in the assessment of my statistics students. First, there's only one "formal" assessment -- the final exam. Other than that, I tell students that it is my job to understand their thinking no matter where I see it, whether it's in a class discussion, group work, office hours, homework, or something else. My personal record-keeping is a simple marking of "doesn't know," "is getting to know," and "gets it," and beyond that my focus is on feedback, which often comes in the form of one-to-one feedback in class, or screencasting a response if I'm grading their homework outside of class. The feedback is paramount, not the scoring and ranking. Am I still looking for instrumental understanding? Sometimes. But often I just want to sit and listen to a student talk about how they approached a problem, and in that I think I'm looking for the relational.

I should mention that my class was small and only met once a week. I also recognize that I get almost complete autonomy in choosing how I teach the course, and deviating from instrumental traditions is easier when the building you work in is filled with assessment experts who can describe what you think you're trying to do. We should all be so supported.

If you get a chance, I suggest you try reading Lorrie Shepard's "Assessment in a Learning Culture." It's not long and I apologize for the diagrams not being clear, but it's a really important article in this area. It gives me hope that we can keep rethinking and refining our assessment practices and reach Shepard's goal of having good assessment tasks that would be interchangeable with good instructional tasks.