My Math Education Blog

"There is no one way"

Thursday, June 16, 2016

New(ish) on my Web site

I will probably blog much less during the summer months, given that I am leading many workshops, one after another, with little breathing time in between. I will also probably do little updating of my Web site, so this is a good time to let you know of some recent additions and tweaks.

New: An excellent Shrinky Dinks activity by Rachel Chou, a teacher and department chair at Menlo School in Atherton, CA. I link to it in my Geometry Labs page, as it is very compatible with those activities both in subject matter (Section 10 of the book) and in the hands-on approach.

Tweak: I improved the worksheet on deriving a proof of the quadratic formula based on translating a parabola so its vertex is at the origin. Prerequisites are on my Parabolas and Quadratics page.
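For the curious, here is the gist of that derivation, condensed into a few lines (my summary here, taking a > 0 to keep the signs simple; the worksheet itself builds it up gradually):

```latex
% Translate y = ax^2 + bx + c so its vertex lands at the origin.
% The vertex sits at x = -b/(2a), y = c - b^2/(4a).
\[
y = ax^2 + bx + c, \qquad
\text{vertex at } \left(-\frac{b}{2a},\; c - \frac{b^2}{4a}\right)
\]
% In the translated coordinates the parabola is simply Y = aX^2.
% The x-intercepts of the original correspond to the height
% Y = b^2/(4a) - c above the translated vertex:
\[
aX^2 = \frac{b^2}{4a} - c = \frac{b^2 - 4ac}{4a}
\quad\Longrightarrow\quad
X = \pm\frac{\sqrt{b^2 - 4ac}}{2a}
\]
% Translating back (x = X - b/(2a)) yields the quadratic formula:
\[
x = -\frac{b}{2a} \pm \frac{\sqrt{b^2 - 4ac}}{2a}
  = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
\]
```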

Tweak: I improved the worksheet on Graphing Square Roots, which among other things leads to the formal definition of absolute value as a piecewise function.
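For reference, the definition the worksheet leads to is the standard one:

```latex
\[
|x| =
\begin{cases}
x & \text{if } x \ge 0 \\
-x & \text{if } x < 0
\end{cases}
\qquad\text{and in particular}\qquad
\sqrt{x^2} = |x|
\]
```

Graphing y = √(x²) and noticing that it is not the same as y = x is one way to see why the piecewise definition is needed.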

New-ish: An introduction to the glide reflection, a rigid motion every teacher should know about. (It is an edited concatenation of two blog posts, so it would not be new to regular readers of this blog.)

New-ish: I improved the worksheet on Perspective, and added teacher notes and a GeoGebra applet. It is a good way to introduce inverse variation by way of a real-world lab.
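The underlying mathematics, in brief (a general fact about perspective, not a description of the specific lab):

```latex
% By similar triangles, the apparent (projected) size of an object
% is inversely proportional to its distance from the viewer:
\[
s_{\text{apparent}} = \frac{k}{d},
\]
% where d is the distance and k is proportional to the object's
% actual size: a textbook inverse variation, y = k/x.
```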


Maybe you can find time this summer to incorporate some of these materials into next year's classes?


PS: some of these updates are part of preparing for my Visual Algebra and No Limits (Algebra 2 / Precalc) workshops, late July in Saint Louis. More info.

Thursday, June 9, 2016

Forward Design

  Earlier posts in this series:
     Legitimate Uses of Assessment
     Problematic Uses of Assessment
     The Meaning of Grades
     De-emphasizing Grades
     Grades: the Research
     The Perils of Backward Design
     Assessment Tools and Strategies

The Assessment Trap, Part 8: Forward Design

It's time to wrap up this series. Here is a summary of the main points:

- From the point of view of teaching and learning, the most useful assessments are formative, not summative. They allow both teacher and student to make the best use of upcoming instruction.

- No matter how much people deny it, grades have a single purpose: comparing students to each other and ranking them.

- Trying to use grades to manipulate student motivation and behavior is counterproductive, because grades can reinforce a fixed mindset, because measurable progress as a learner may take longer than any one grading period, and because one cannot improve beyond an A+.

- Overemphasis on grades not only sabotages individual student learning, but also undermines curriculum and pedagogy by pressuring teachers to quantify everything in search of an illusory objectivity, and making us lose sight of important but hard-to-measure goals.

Therefore, it is incumbent upon us to

- Prioritize formative assessments

- Lower the stakes on summative assessments by any means available: awarding points for corrections, varying the types of assessments, de-emphasizing grades as much as possible, etc.

Finally, in planning a course, a unit, or a lesson, practice forward design. (I am indebted to Carlos Cabana for this concept. See this blog post.) Start your planning by asking these questions, preferably in conversation with colleagues:

- What are the big ideas? If you can only come up with specific microskills, think some more: what are the underlying concepts that connect these skills? Can different representations throw light on this? (For example, looking at an algebraic topic geometrically, or graphically, or in a "real world" context.)

- What tools are available to provide a way for students to engage in thinking and exploring? This can include manipulatives, technology, and/or pencil-paper tools. The right tool can make it possible to formulate a question that all students can engage in; it can support reflection and discussion; and it can add variety to your course. (I have written a lot about tools. See "For a Tool-Rich Pedagogy".)

- What contexts (themes) are there, whether "real world" or not, that can provide useful problems?

For examples of these first three ingredients of forward design ("themes, tools, concepts"), see this blog post on proportional relationships (centered on a concept), or this piece on area (centered on a theme — scroll down to page T23.)

- What curricular resources can complement or replace the textbook? Look on your shelves, search the Web, ask your colleagues. This step is crucial, as you most likely do not have time to create everything from scratch, and moreover, freshly-minted activities usually require some classroom testing and tweaking.

- How will the students be working at different stages? individually? in pairs? in groups? in whole-class discussions? Different modes are appropriate to different activities, and doing it all in a single one of those is a costly mistake if you aim to avoid lethal boredom and want to reach the full range of students.

After you've done this preliminary work, you can resort to some backward design strategies such as designing your assessments in advance, making lists of specific learning goals, and so on. Starting with forward design will help you keep those practices under control, save you from losing perspective, and prioritize what is most important.

Admittedly, forward design involves a lot of work. Collaboration is key: who can help you? Colleagues at your school are in the best position to work with you, but alas many teachers have told me that such collaboration is not possible at their school, as their colleagues are not interested, or (in small schools) they have no colleagues. In that situation, you'll need to develop offsite collaborations, perhaps through the #MTBoS. But remember: you do not have to do all this at once. Get started now, do what you can, and do a little more each year. As soon as you start this forward motion, you'll start to see the signs of improvement in your classroom. One step at a time.

Societal pressures often push in the opposite direction, relentlessly. By resisting the bean-counting culture and by trusting that your students can enjoy learning, you can help them gradually move from extrinsic to intrinsic motivation. Few things are more depressing to me than educators who have given up on that, and choose instead to treat their students as if they were programmable entities, or customers haggling over grades at a flea market.

The beauty of the subject matter, the power of the ideas, the thrill of problem solving: the same things that motivate you to learn can work for your students, but only if you avoid falling in the assessment trap.


Monday, June 6, 2016

Assessment Tools and Strategies

 Earlier posts in this series:
     Legitimate Uses of Assessment
     Problematic Uses of Assessment
     The Meaning of Grades
     De-emphasizing Grades
     Grades: the Research
     The Perils of Backward Design

The Assessment Trap, Part 7: Assessment Tools and Strategies

In previous posts, I discussed the ways in which an overemphasis on assessment undermines curriculum, pedagogy, and student learning. Of course, it is politically impossible to avoid assessment, as it is a major preoccupation of students, parents, and administrators. Moreover, it is actually impossible to teach without assessing student understanding. As I mentioned in the first post of this series, there are legitimate, even essential uses of assessment: fine-tuning the course; helping both teacher and student know what the student understands and can do; and offering learning opportunities. All these are best served by decreasing the stakes. Lower stress translates into more accurate assessment. Ungraded formative assessments can serve the most important goals of assessment, and should play a bigger role than they do in many classes.

Still, much can be said in defense of traditional tests and quizzes: they provide a lot of information, they are easy to grade, and they are expected by all constituencies. Given that they are here to stay, how can we make them more effective? Here are some suggestions:

- Within reason, give students as much time as they want. If a student is not fast, or does not do well under time pressure, so what? It does not mean they don't understand the material. Racing belongs in PE, not in the classroom.

- Do not over-penalize students for small computational errors that could be eliminated by the use of technology such as calculators and computer algebra systems. Prioritize evidence of understanding, not nit-picking accuracy.

- Of course getting the right answer matters, which is why you might give less than full credit in the case of such errors. But if accuracy really matters to you, allow any and all technology during most tests. (Yes, there is a place for no-technology tests, but they should not be the default.)

- Offer points for test corrections. This lowers the stakes in a good way. Everyone knows that doing well the first time is better, but if learning is the goal, what difference does it make if the learning occurs a week later? In my version of this, I told students they could get halfway to a perfect score by turning in high-quality corrections. I allowed them to get help from anyone, but all the writing had to be their own. This assumes a standard of explanation that is higher than on the test itself.

- Lag the quizzes: give new topics a chance to settle into your students' consciousness before testing them. (See this post on extending exposure.)

- Periodically, administer cumulative tests, which include topics from earlier in the course. This way, you communicate the message that students are learning concepts for the long haul. Especially in combination with test corrections, this helps to reduce the stakes: students get more than one chance to show their understanding of a given topic, and midterms and finals are not so exceptional and intimidating.

- Include "bonus" or "extra credit" questions, which are important to challenge your strongest students, and which can be used to deepen or extend understanding. I usually required those of all students in the test corrections. This gives the message that getting 100% is not easily achievable, and keeps everyone from getting complacent. It also helps to communicate that a test is a learning opportunity. There will be pushback on this ("This is not fair!") but in fact, what would not be fair would be to limit tests to questions everyone can answer, as it would lower course expectations. Working hard on those items as part of the test corrections makes everything else more accessible. Of course, such problems should not carry much weight, points-wise.

In addition to better tests and quizzes, it is important to have significant at-home assignments, including especially the test corrections mentioned above. Other possibilities:

- Reports. Ask students to summarize a unit in their own words and with illustrations. Keep those to a reasonable length: one or two pages, or a poster. I have found this works well with 9th and 10th graders.

- Projects. For example, write a (very) short science-fiction story involving exponential growth, with an appendix explaining the underlying calculations. Or use GeoGebra or Cabri 3D to construct an Archimedean solid. Projects are harder to think of, but I have used them successfully, especially with 11th and 12th graders.

- I have also used take-home tests. Those can and should be more difficult than in-class tests, and require more time. My policy was the same as for test corrections: students could get help, but they had to write everything in their own words. But, you say, in that case, how can you use them as part of a student's grade? If your view is that sorting students is more important than teaching them, you have a point. But I have found that on balance, test corrections and take-home tests do help student learning. A somewhat less differentiated set of grades is a price I'm willing to pay.

My experience is that at-home assignments reveal that some of the students who do exceedingly well in a classroom test do poorly when the assignment requires a more thoughtful approach. This is important information for the teacher, and moreover, as long as we're trying to be fair, it levels the playing field some. In practice, test corrections are quick to grade, but reports and projects are not. One of those per grading period suffices.

Those were the approaches I used. Depending on your school and department culture, not all of them may work for you. Perhaps totally different strategies are in order at your school. (For example, group tests, or participation quizzes.) In any case, you should do what you can to reduce the stakes, to vary the assessments, and to make sure assessment does not dominate your teaching. More on this in the next and final post in this series.


Wednesday, June 1, 2016

The Perils of Backward Design

Earlier posts in this series:
     Legitimate Uses of Assessment
     Problematic Uses of Assessment
     The Meaning of Grades
     De-emphasizing Grades
     Grades: the Research

The Assessment Trap, Part 6: The Perils of Backward Design

In previous posts, I addressed the true nature of grades and their corrosive impact on learning. Alas, over-emphasis on grades and summative assessment also corrupts curriculum and pedagogy. In other words, it not only affects students individually, it also affects all students as a group. This is what I will try to argue in this post.

Backward design, an idea pioneered by Grant Wiggins and others, holds that curriculum should be designed by first deciding what you want the student to be able to do at the end of the course or unit, and then building a curriculum that provides a path to that destination. In principle, that makes a lot of sense, and I am sure it can be done well. However, there are many ways this approach can backfire.

First of all, some of the most important destinations are hard to specify. Say that your goal is deep understanding of a given topic. What is that? How do you know whether a student has achieved it? Likewise, if you aim for an appreciation of the beauty of math, or student self-confidence, or increased curiosity, or social responsibility, or an ethical stance. None of those things lend themselves to straightforward measurement. The most important goals of education are hard to pin down, and this makes it difficult to design backward from there.

Even within a narrow definition of the discipline and even if one accepts basic goals within that framework, backward design can lead to a demand for easy-to-measure outcomes. Tell us what you want the students to be able to do, and we can see whether you were successful. We need data! And thus starts the descent into lists of micro-skills, items that can readily be checked off, or not checked off. Such lists are powerful, because they are easy to communicate to students, to parents, and to administrators. Pretty soon, the essential goals fade away, and the pressure is on to produce results in the form of check marks on a rubric.

This in turn affects how the subject is taught. Some time ago on this blog, I used equation-solving as an example of this. (Read the whole post on "How To".) Instead of empowering the student with the essential concepts they might use in solving an equation, we ask them to memorize many cases (one-step equations, two-step equations, etc.). The advantage of atomizing the subject like that is that those micro-skills are easy to assess, and the resulting assessments yield "data". Simultaneously, it relieves the students from having to think, which is something they appreciate if that is how they have been taught math in the past. The disadvantage of this approach is that it does not work. It is not effective in helping students develop understanding, self-confidence, or an appreciation for mathematics.

This atomization of a subject can result from many different assessment policies, ranging from the standardized test mania which has done so much damage, to well-intentioned standards-based rubrics and grading schemes. Overemphasis on assessment inexorably pushes curriculum towards "how to do" things. There's a place for that, of course, but too much of it and you're treating the student as a programmable device, and preventing them from engaging with the subject matter intellectually. The message you are communicating is "Since I've already given up on your ability to think, I will have you memorize these easy-to-remember steps..." and your de facto low expectations will be self-fulfilling, as they reinforce students' fixed mindset about their abilities.

Teaching for understanding is hard to assess, it is hard to capture in a checklist, and it is hard to define in a few words. In spite of all that, it is the most important part of our job. How can assessment help us rather than hinder us as we strive to do it? How can we resist the temptation to demean our students with low expectations, and our discipline by reducing it to simple recipes? I will make some suggestions in the next post.


Sunday, May 29, 2016

Grades: what does research say?

Earlier posts in this series:
     Legitimate Uses of Assessment
     Problematic Uses of Assessment
     The Meaning of Grades
     De-emphasizing Grades

The Assessment Trap, Part 5: What Does Research Say About Grades?

This is a guest post by Sarah Clowes, a science teacher at the Urban School of San Francisco, where I used to work. Her research-based comments support some of the points I made in my previous posts.


(To be continued! Part 6: The Perils of Backward Design)

I did some work on assessment while I was in graduate school, so that is what informs a lot of my thinking outlined below.

•    Inevitably, assessment is an imperfect process of reasoning from evidence to make judgments about what students know and can do (see the excellent paper by Pellegrino, Chudowsky, and Glaser, 2001, listed below). Grades are arbitrary and should therefore carry as little meaning as possible.

•    Grading is not essential to the instructional process: the primary purpose of grading is not facilitation of teaching or learning. Grades arose in response to the compulsory education mandated in the 1800s; as schools grew larger, there was a need to rank and categorize students. Teachers do not need grades (just formative assessments) to teach well (Black & Wiliam, 1998; Pellegrino et al., 2001).

•    Letter grades offer parents and others a brief description of students' achievement. But letter grades reduce a lot of information to a single "bucket" or symbol. In addition, the distinctions between grades are always arbitrary and difficult to justify (even when they are standards-based), because they always rely on a teacher's judgment. Letter grades lack the richness of more detailed reporting methods, such as narrative reports.

•    Narrative evaluations offer specific information that is useful in documenting student achievement and provide more valuable and nuanced feedback to students. But good narratives take time to prepare and are often difficult for parents to understand. Parents often wonder if their child's achievement is comparable with that of other students (even when grades are standards based).

•    Because no single grading method adequately serves all purposes, schools must identify their purpose for grading and develop an approach that matches the school's mission and values.

•    Grades do not provide comprehensive assessment, nor do they promote thorough self-evaluation. Unfortunately, educational research shows that the impact and meaning of narrative evaluations are diminished when grades accompany them (Black & Wiliam, 1998; Butler & Nisan, 1986; Salili et al., 1976): students and parents attend considerably less to the details of a narrative evaluation when there is a grade to refer to. As educators, we have the opposite goal: it is essential that students receive and attend to nuanced feedback about their learning.

•    Research also shows that grades alter student motivation; there is considerable support for this claim in the field of motivational psychology (Beck et al., 1991; all the Butler articles listed below; Salili et al., 1976). There is evidence that grades reduce students' willingness to try challenging tasks (Harter, 1978; Hughes et al., 1985), and growing evidence that relying less on grades and more on intrinsic motivation better serves students of diverse backgrounds (Wlodkowski & Ginsberg, 1995).

•    Self-regulated learning promotes cognitive strategies, meta-cognition, motivation, task engagement, and social support (Paris & Paris, 2001). Self-regulated learners are aware of their strengths and weaknesses, possess a repertoire of cognitive strategies that they utilize appropriately, set challenging but achievable goals, and are capable of assessing and overseeing implementation of those strategies as they work to achieve their goals (Alexander, 2006; Zimmerman, 2000).

-- Sarah Clowes


Alexander, P. A. (2006). Psychology in Learning and Instruction. Upper Saddle River, NJ: Pearson Merrill Prentice Hall.

Beck, H. P., Rorrer-Woody, S., & Pierce, L. G. (1991). "The Relations of Learning and Grade Orientations to Academic Performance." Teaching of Psychology, 18, 35-37.

Black, P., & Wiliam, D. (1998). "Inside the Black Box: Raising Standards Through Classroom Assessment." Phi Delta Kappan, October 1998, 139-148.

Butler, R., & Nisan, M. (1986). "Effects of No Feedback, Task-Related Comments, and Grades on Intrinsic Motivation and Performance." Journal of Educational Psychology, 78, 210-216.

Butler, R. (1987). "Task-Involving and Ego-Involving Properties of Evaluation: Effects of Different Feedback Conditions on Motivational Perceptions, Interest, and Performance." Journal of Educational Psychology, 79, 474-482.

Butler, R. (1988). "Enhancing and Undermining Intrinsic Motivation: The Effects of Task-Involving and Ego-Involving Evaluation on Interest and Performance." British Journal of Educational Psychology, 58, 1-14.

Cameron, J., & Pierce, W. D. (1994). "Reinforcement, Reward, and Intrinsic Motivation: A Meta-Analysis." Review of Educational Research, 64(3), 363-423.

Cameron, J., & Pierce, W. D. (1996). "The Debate About Rewards and Intrinsic Motivation: Protests and Accusations Do Not Alter the Results." Review of Educational Research, 66(1), 39-51.

Harter, S. (1978). "Pleasure Derived from Challenge and the Effects of Receiving Grades on Children's Difficulty Level Choices." Child Development, 49, 788-799.

Hughes, B., Sullivan, H. J., & Mosley, M. L. (1985). "External Evaluation, Task Difficulty, and Continuing Motivation." Journal of Educational Research, 78, 210-215.

Koretz, D. (2008). Measuring Up. Cambridge, MA: Harvard University Press.

Krumboltz, J. D., & Yeh, C. J. (1996). "Competitive Grading Sabotages Good Teaching." Phi Delta Kappan, December 1996, 324-326.

Moeller, A. J., & Reschke, C. (1993). "A Second Look at Grading and Classroom Performance: Report of a Research Study." Modern Language Journal, 77, 163-169.

National Research Council. (1999). How People Learn: Brain, Mind, Experience, and School. Washington, DC: National Academy Press.

Paris, S., & Paris, A. (2001). "Classroom Applications of Research on Self-Regulated Learning." Educational Psychologist, 36, 89-101.

Pellegrino, J. W., Chudowsky, N., & Glaser, R. (2001). Knowing What Students Know: The Science and Design of Educational Assessment. Washington, DC: National Academy Press.

Salili, F., Maehr, M. L., Sorensen, R. L., & Fyans, L. J., Jr. (1976). "A Further Consideration of the Effects of Evaluation on Motivation." American Educational Research Journal, 13, 85-102.

Wlodkowski, R. J., & Ginsberg, M. B. (1995). "A Framework for Culturally Responsive Teaching." Educational Leadership, 53(1).

Zimmerman, B. J. (2000). "Attaining Self-Regulation: A Social Cognitive Perspective." In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of Self-Regulation (pp. 13-39). San Diego, CA: Academic Press.