The Problem with “Formative Assessment Tools” (part 2 of 2)

In the previous post (part 1 of 2), we explored the fact that student response apps (Socrative, Kahoot!, Plickers, etc.) are often mislabeled as “formative assessment tools.” What makes them formative depends on the context in which they are used. Formative assessment is a process, and in order for a tool to play a part in this process the results/data it produces must be leveraged to differentiate instruction or learning.

The Problem

Now, let’s explore a second problem with these apps: their use is not generally associated with higher-order thinking.

From what I have experienced, we are largely stuck in this rut when it comes to using student response tools, and there are two main reasons why:

  • For the most part, old school “student clickers” included only multiple-choice questions (and maybe a little something else), which is a format that tends to result in lower level questioning. It has been easy to copy and paste these inadequate practices (or questions) onto our newer technologies, even though these apps are capable of a whole lot more.
  • When it comes to classroom instruction, I also think it is easy to view student response tools as an all-or-nothing decision: either the entire lesson is centered around their use, or they are not used at all. From what I have experienced (and have been guilty of as well), if these tools are the focal point of a class, chances are the students are simply answering one multiple-choice question after another (which aligns with the education world’s current fascination with hard, quantitative data). This means more lower level questions that travel in only one direction, from teacher to students. There is no encouragement of dialogue, collaboration, inquiry, etc. Everything is black and white, when we all know that higher-order thinking and inquiry-based learning are all about shades of grey.

The Solution

I do feel that multiple-choice and lower level questions have their place in the classroom, as higher-order thinking and inquiry are built on top of solid foundations and basic understandings. After all, you can’t think critically about nothing.

At the same time, I firmly believe that the majority of the questions asked in school, at the very least, should promote thought, curiosity and some level of exploration.

Here are two ideas as to how to encourage higher-order thinking with student response tools:

  • Flipped Clickers: Tony Wagner defines critical thinking as “the ability to ask the right question, ask really good questions.” In Charlotte Danielson’s Framework for Teaching, the Distinguished level for Domain 3b (Using Questioning and Discussion Techniques) says, “Students formulate many questions, initiate topics, challenge one another’s thinking…” So, let’s flip the way student response tools are used by having students ask the questions. Overall, this shift can be accomplished by (1) the teacher promoting activities in which students have to respond with questions that they formulate (questions that can then be used creatively by the teacher to extend these activities), and/or (2) providing students with both student and teacher/administrative rights for as many tools as possible (for example, think small group literature circles in which students take turns leading the discussion). Just when you think you have all the answers, the students ask the questions.
  • Fewer Questions for a Deeper Understanding: One of the components of Danielson’s Domain 3b reads, “When teachers ask questions of high quality, they ask only a few of them and provide students with sufficient time to think about their responses, to reflect on the comments of their classmates, and to deepen their understanding.” This quote addresses head-on what needs to be done in order to promote cultures of thinking in our classrooms. Many lower level questions (such as those that accompany stories in basal readers), should be converted to only a few higher-order questions (with the help of something like Webb’s Depth of Knowledge), and around these questions thinking routines should be formed (in which student response tools do not serve as the focal point, but are used to assist in facilitating discussion and increase opportunities to respond). Additionally, teachers need explicit professional development on how to shift from lower level questions to rigorous thinking routines, rather than just focusing on converting questions from lower level to higher-order. A bunch of higher-order questions asked in the same exact way (with or without technology) as an equal number of lower level questions will do very little to deepen students’ understanding of what they are learning.

For both options, the question is not whether or not the response tools are used, but rather how to find the appropriate level of technology integration to enhance or redefine student learning experiences.

In the End 

Once again, we need to emphasize pedagogy over technology by starting with the end in mind – higher-order questions and thinking routines – and then leveraging the tools that we have available to us in order for our students to arrive at the appropriate destination.

At the same time, we should keep in mind that although all educators are at different points on the learning curve when it comes to effectively integrating technology, the last thing we want is for instruction to be consistently inferior because technology just has to be included. Don’t try to cram a square peg into a round hole.

What are your thoughts on these apps? What are some unique ways in which you have seen them used to promote higher-order thinking? Do you think there is a place for “flipped clickers” in the classroom?

Connect with Ross on Twitter.


The Problem with “Formative Assessment Tools” (part 1 of 2)

The Problem

It started with generally clunky and overpriced “student clickers” by such brands as SMART Technologies and Einstruction, and over the past few years it has transitioned into slick apps like Socrative, Kahoot!, and Plickers. Time and time again we have seen these apps demoed during professional development sessions and written about on websites and blogs. Nevertheless, we need to be careful that we do not prioritize technology over pedagogy by referring to these apps as “formative assessment tools” when they are anything but.

When James Popham defines formative assessment, he states:

Formative assessment is a planned process in which teachers or students use assessment-based evidence to adjust what they’re currently doing.

In other words, if teachers or students are not leveraging results/data (from Socrative, Kahoot!, Plickers, etc.) to then differentiate instruction or learning, the app-inspired dog and pony show does not qualify as a formative assessment.

Formative assessment is a process…not an event, questions on a piece of paper, or even an app. What makes an assessment formative depends on the context in which it is used.

The Solution

I do feel that professional development that includes these apps can start with the apps themselves, as “cool tools” are an easy way to grab an audience’s attention, but they should be presented within the context of formative assessment or something like Total Participation Techniques. In other words, “Why are we learning what we’re learning, and how can it benefit our students?”

Since what takes place after the apps are used – the differentiated instruction – is what matters most, the majority of professional development time should then be dedicated to this stage of instruction and learning. In other words, “We have our results/data, now what do we do with it?”

Here are some ideas as to what this could look like:

  • The presenter shares with teachers authentic student results from a lesson in which one of the apps was used in her classroom. Some context is provided, and then the teachers are asked, “How would you modify your planning based on what you now know?” (In this instance, I would be particularly interested in any teacher questions that arise, as they could supply additional context.)
  • The presenter shares with teachers authentic student results from a lesson in which one of the apps was used in her classroom. Some context is provided, and then the teachers are asked, “Based on the questions and answers, how would you revise the questions to learn more about what the students know/don’t know?” (This activity can go hand in hand with the first bullet point. In Part 2 of this post we will dive deeper into quality questioning.)
  • The presenter uses an app to pre-assess teacher knowledge of a specific topic (possibly formative assessment). The results are immediately shared out and the teachers are asked, “Where should the professional development go from here?”

The idea is that the increased emphasis on “the after” during professional development will correlate with teachers thinking more deeply about “the after” during classroom instruction.

In the End

In the end, there is obviously nothing wrong with the tools themselves, but what matters most is the context in which they are presented during professional development, and ultimately the differentiated instruction that follows their classroom use.

None of these tools are that complicated, and any teacher can learn how to use them. However, what is complicated (and oftentimes messy) is what to do after the students’ results show up on our devices. This is where our focus needs to be.

In Part 2, we will look at leveraging these apps to promote higher-order questioning and thinking.

What are your thoughts on these apps? What are some unique ways in which you have seen them used in the classroom and/or during professional development? How do you think they relate to the formative assessment process?

Connect with Ross on Twitter.

Total Participation Techniques

For my latest district professional development day I conducted a one-hour presentation on the topic of student opportunities to respond, which focuses on how long each student has to be actively engaged in order to “make it through” the current lesson.

Featured is the slide deck that I created for the presentation, and I used Total Participation Techniques by Persida and William Himmele as the primary resource for my work. According to the book, “Total Participation Techniques (TPTs) are teaching techniques that allow for all students to demonstrate, at the same time, active participation and cognitive engagement in the topic being studied.” Here is a quick overview of some of the slides that are not entirely self-explanatory:

    • Slide 1: Contains a link to the session’s resources, which include a PDF version of the slide deck and two handouts that clearly and concisely outline many TPTs for teachers to use.
    • Slide 6: To model best practice, the presentation is wrapped in an essential question, which is “How can I create more beach balls?”
    • Slide 10: Mentions Webb’s Depth of Knowledge in order to connect past professional development to new learning.
    • Slide 12: A quote that emphasizes that regularly incorporating TPTs will not only increase student learning, but is also an effective instructional shift made by simply working smarter, not harder.
    • Slide 14: Connects the contents of the presentation to the Charlotte Danielson rubric, which is used for Pennsylvania’s models for Formal Observation and Differentiated Supervision.
    • Slide 17: To pique participant interest, four technologies that promote active participation are described: Socrative, Plickers, Nearpod, and Kahoot!.
    • Slide 18: To model best practice, everything is wrapped up by tying it back into the essential question.

Throughout the presentation the participants engage in four different TPTs: Quick-Writes (Slide 2), Ranking (Slide 8), Think-Pair-Share (Slide 13), and The 3-Sentence Wrap-Up (Slide 15). Every professional development session is an opportunity to model best practice, and in this instance the teachers are able to literally experience TPTs while learning about them.  

Overall, the goals of the presentation are to (1) identify a problem that can exist (the beach ball scenario described on Slide 4), and then (2) reveal explicit strategies (TPTs) to help solve this problem. Follow-up sessions could include (1) teachers sharing and reflecting upon the use of the TPTs in their classrooms, (2) ways to establish a classroom culture of risk-taking and active engagement (which lends itself to the use of TPTs), and (3) an in-depth look at the technologies featured on Slide 17.

If you would like to modify the slide deck for your own use, feel free to contact me and I will be more than happy to send you the original version, which was created in Apple Keynote (version 6.5.2).

Standards-Based Report Cards: Four Ideas

My current school district is about to begin the process of examining our standards-based report cards, particularly at the elementary level. When I was made aware of this initiative, I had just finished reading Fair Isn’t Always Equal by Rick Wormeli and How to Grade for Learning by Ken O’Connor, and I was in the process of making my way through Developing Standards-Based Report Cards by Thomas Guskey and Jane Bailey. Without hesitation, I highly recommend all three. While one of my previous posts is based on Wormeli’s book, this post is based mostly on the work of Guskey and Bailey. (Inspiration from O’Connor is sprinkled throughout both, and I will dedicate more time to his book in the future.) I should also mention that the contents of each book are not mutually exclusive, as there is definitely a great deal of overlap when discussing assessing and grading in the standards-based classroom. However, Wormeli tends to focus more on daily instruction, while Guskey and Bailey provide more research for standards-based report cards.

Based on my reading and highlighting, here are four points to consider when creating or revising standards-based report cards:

A Clear Purpose: Guskey and Bailey (2010) announce, “Nearly every failed effort to revise the report card that we know of can be traced to the lack of a well-defined and commonly understood purpose” (p. 25). Before anything else is decided, the purpose of the report card must be determined. Furthermore, upon its completion, it should be printed on the form. Guskey and Bailey (2010) provide a few purpose statements that have been used by different districts. One such example reads, “The purpose of this report card is to describe students’ learning progress to their parents and others, based on our school’s learning expectations for each grade level. It is intended to inform parents and guardians about learning successes and to guide improvements when needed” (p. 35). Keep in mind that the specific purpose of the report card is subject to change as students progress through each level of schooling.

Parent and Student Involvement: “The report card should communicate clear and unambiguous information about students’ performance to parents, students, and others” (Guskey & Bailey, 2010, p. 7). Therefore, consider involving these parties in the report card creation process, in one way or another, or at least provide them with opportunities to offer feedback after implementation. At the elementary level, the primary audience for report cards is usually the parents, but this changes as students get older. “In the upper elementary grades, and especially in middle school and high school, however, educators increasingly see students as another important recipient of the information in the report card” (Guskey & Bailey, 2010, p. 135). Ultimately, we want students and parents to utilize their report cards as part of the formative assessment process, so we must make sure that each report card functions as a “feedback device” that helps in driving learning and instruction. If parents and/or students have a say in what type of feedback is provided and in what format it is presented, there is a stronger chance that they will be able to make use of the information.

Reporting Standards: “Unfortunately, in many jurisdictions, there are too many standards and/or teachers have too many students to manage tracking of every standard for every student, so they must find a compromise” (O’Connor, 2009, p. 49). Here are five recommendations for addressing this issue on report cards:

    • Develop reporting standards by combining multiple, related standards into broader groups, which are then labeled accordingly. These labels are what appear on the report cards.
    • The reporting standards should be “specific enough to communicate the knowledge and skills students are expected to acquire but not so detailed that they lose their utility when shared with parents” (Guskey & Bailey, 2010, p. 22). Also, without question, parents and students must be able to understand and make sense of what they are reading.
    • For each subject with the exception of Language Arts, four to six reporting standards are recommended. For Language Arts, try for four to six reporting standards per subcategory. This range “helps clarify precisely enough what students are expected to learn and be able to do but does not overwhelm parents and students with unnecessary detail” (Guskey & Bailey, 2010, p. 42).
    • Create a report card supplementary document that more thoroughly breaks down each reporting standard. This document can be grade-level specific, and it can also contain other pertinent report card related information.
    • To avoid confusion amongst educators, parents, and students, strive for consistency by creating report cards that contain the same reporting standards across multiple grade levels. Meanwhile, the aforementioned supplementary document can change as necessary. Keep in mind that “The one exception to this general trend of using common reporting forms across grade levels is a standards-based report card for the kindergarten level” (Guskey & Bailey, 2010, p. 92).

Emphasis on Classroom Instruction: According to Guskey and Bailey (2010), “Success in improvement efforts based on standards will always hinge on what happens at the classroom level” (p. 19). In other words, a committee can spend countless hours locked in a room, slaving away on the most research-based report cards known to mankind, but the initiative will not reach its fullest potential if teachers (and students and parents) are not effectively informed about (1) the research behind the report cards and their creation, and (2) how the forms can be leveraged in order to maximize student potential. Much of this communication can take the form of ongoing professional development, parent nights, and keeping the lines of communication open amongst all parties involved. In a previous post I discussed five changes that I would make to my assessing and grading procedures if I were to return to the classroom as a teacher. These determinations were made after reading Rick Wormeli’s Fair Isn’t Always Equal.

Along with the books that I have already read, I also hope to take a look at A Repair Kit for Grading by Ken O’Connor, On Your Mark by Thomas Guskey, and Grading Smarter, Not Harder by Myron Dueck.

On social media, make sure to join the Facebook groups Standards Based Learning and Grading and Teachers Throwing Out Grades. For Twitter, Standards-Based Learning Chat (#sblchat) takes place every Wednesday at 9 pm, and Teachers Throwing Out Grades Chat (#ttog) happens every other Monday at 7 pm.


Guskey, Thomas R., and Jane M. Bailey. Developing Standards-Based Report Cards. Thousand Oaks, CA: Corwin, 2010.

O’Connor, Ken. How to Grade for Learning, K-12. Thousand Oaks, CA: Corwin, 2009.

Wormeli, Rick. Fair Isn’t Always Equal: Assessing & Grading in the Differentiated Classroom. Portland, ME: Stenhouse, 2006.

Rethinking Assessing & Grading

I have always thought that assessing and grading is the one area in which there is the widest gap between research and what is actually taking place in classrooms (with my classroom being no exception). Over the past few days I finally decided to read through Rick Wormeli’s Fair Isn’t Always Equal. This book does a tremendous job of touching upon all of the topic’s key points without getting too technical. I could definitely see this resource being used for a teacher and/or administrator book study. You can read it cover to cover, or you can easily just dive into certain chapters to target areas of interest.

After reading the book, here are five changes that I would make to my assessing and grading procedures if I were to return to the classroom as a teacher.

    1. Revamping the Grade Book
      Before: Grades were categorized by assessment. For example, at the top I listed “Lesson 1 Language Arts Test.” Then next to each student’s name I listed his percentage score on the entire test.

      After: The names of the assessments will still be in the grade book, but the format of the grade book will subdivide each assessment into learning goals. For every assessment students will receive multiple grades, one for each goal. (Oftentimes, multiple assessments will contribute to the same goal.) Although I cannot say this with complete certainty, each grade will be on a scale of 1-4.
    2. Grading on a Smaller Scale
      Before: For the majority of their grades, with the exception of project-based learning experiences, students were provided with either a percentage grade or a raw score.
      After: Whenever possible, each learning goal will be assessed on a scale of 1-4, with the small scale promoting grading consistency with “distortions less likely.” Rubrics with clear descriptors will be used to determine grades. A “universal” rubric can be developed to encompass most goals, while different situations will undoubtedly call for more specialized rubrics. I could also consider using a different number scale so students and parents do not equate “the highest numerical value (4.0) with an A, the next highest value with the next highest letter grade, B, and so on.”
    3. Availability of Student Reports
      Before: I recorded scores in my grade book and in our learning management system, Moodle. Grades were categorized by subject and then assessment.
      After: My grade book will still be kept online, but it will be in the format described in my previous two points. When looking at the grade book – for students, parents, and teachers – student proficiency for each standard will be crystal clear.
    4. Retakes, Retakes, and Retakes
      Before: Math was the only subject in which I heavily emphasized retakes. Other times, I worked with certain students in specific areas, but regrading did not generally occur.
      After: “We don’t want to admonish students for not learning at the same pace as their classmates.” So, for all assessments or learning goals students will be provided with multiple opportunities for retakes. Students can use the online grade book to track their own progress and request a retake at specified points in time. Certain requirements might be put into place, such as a student having to attend a study session prior to being retested. Both students and parents will be made aware of all rules at the beginning of the school year, and they might even be provided with chances to contribute to their creation.
    5. Fewer Group Grades, More Individual Accountability
      Before: In Language Arts and science, group grades and individual grades each made up roughly an equal portion of a student’s report card grade.
      After: With my revised system in place there will still be a place for group grades, but there will be built-in accountability for each student in regards to his learning goals. As a result, students will hopefully be more invested in their learning, and they will be able to make explicit connections between their actions and their goals. In the end, this process will help in ensuring “that no student receives a lower grade for another student’s lack of achievement.”

The five points that have been listed all deal with teachers, students, and parents being able to more effectively identify areas in which students are strong or need additional support. Oftentimes, the manner in which multiple goals are scattered across the same assessment is entirely arbitrary, as is the setting of specific dates and times for when students must meet these goals. What matters most is that we treat each goal as its own separate entity, provide students with multiple opportunities to show mastery, and then ultimately issue report cards that are indicative of each student’s present level of performance for each goal.

What tips can you share for grading in the classroom?