Aligned to My Instruction
Pros: Aligned to the standards I teach
Cons: Does not assess higher-order thinking • Misses growth of some students, such as those far below and/or far above grade level • Takes students too long to complete
I was initially excited about my district's adoption of an individualized math assessment; however, after using mclass math, I was incredibly disappointed. Unfortunately, my feelings have only grown worse since. The reasons for my distaste are perhaps equally shared between the weaknesses of the test itself and the nonsensical way my district forces me to use it (I must individually test all students three or more times per year, at approximately 10-12 minutes per student, even when a student clearly demonstrated mastery of the tested skills at the beginning of the year, which is the case for many of my students and an obvious waste of instructional time). District issues aside, what follows is a sampling of my concerns with the tests themselves.
Perhaps the most illustrative joke of mclass math is the counting test. First graders must count to 118 in one minute in order to be branded established counters at the end of the year; that works out to nearly two numbers per second, sustained for the full minute. Try it yourself and see how you do, but keep in mind there is no prompt to inform students that they should be counting as quickly as they can. In three years of using this test, I've had only two students who made it to 118. As for the rest, their inability to do so had nothing to do with their ability to count fluently. Most 6- to 7-year-olds I have worked with who are fluent counters get to between 70 and 85 in a minute; this allows them a natural speaking rate and a few pauses to breathe, swallow, or clear their throat. Those who score better tend to keep counting on the inhale. As with the DIBELS assessments, it seems that scores better reflect rate of speaking than counting fluency.
Another disappointment I immediately recognized is that the tests' scores do not tell me specific information about a student's errors, what he or she needs to work on within a particular skill, or even whether he or she needs to work on that skill at all. For example, on the counting test, a student who counts without error for a minute but at a reasonable speaking rate (as opposed to super-speed counting) will score similarly to a student who counted faster but made multiple errors. Likewise, students who make errors on the other tests will often score nearly the same as students who make no errors at all. The fact that the score is based on the number of correct responses in a minute rather than the percentage of correct responses is puzzling to me. While I understand that automaticity for these skills is important, it shouldn't be weighted over accuracy. The fact that it is confuses the data and makes it difficult to use. It is also frustrating that the iPod we use offers no way to record actual errors so that the teacher can analyze error patterns later. This is an important missing piece if I am to form efficiently targeted intervention groups.
My final concern involves the number facts test. This test is not appropriate at the first grade level and is contrary to sound curriculum, which emphasizes proper development of part-part-whole relationships before memorization of facts. It also fails to provide important information the teacher needs about the development of the part-part-whole relationship, which is arguably the single most important understanding in early mathematics. I would love the opportunity to observe how students solve basic addition and subtraction facts with a well-crafted individual assessment. With the number facts test, however, students are unable to show what they might know because each problem is given to them only verbally and without materials such as counters and number lines. With more tools at their disposal, I would be able to observe what understandings they actually have and don't have so that I can plan instruction to push them to the next level, which will eventually lead to automatic fact recognition built on solid conceptual understanding. I find more value in using this test at the end of the year; however, another fault of the program is the randomization of the facts presented at the beginning of the test. Scores fluctuate regardless of actual student skill because sometimes a higher percentage of easy facts appears at the beginning (which students can reach within a minute's time) and other times the more difficult facts dominate the test's opening. Ironically, the easier lead facts appear on the end-of-year assessment, which gives the appearance of great progress but doesn't tell me anything valid about a student's actual growth. And, again, there is no way to record a student's error patterns, the types of facts missed, or the strategies used, in order to target instruction.
I am told that these tests are only meant to identify students who potentially need more support and that a follow-up test provides the information I am seeking. That's certainly true. The trouble is that the screener flags nearly everyone (mostly in error) and takes so much time that I don't know how I'd ever find time for the longer diagnostics. Furthermore, though I've never administered the diagnostics, I have looked at them and am not impressed. I have my own more efficient and targeted ways to assess students and guide my instruction. I only wish I could be trusted as a professional to make my own assessment choices and thus save my students' valuable learning time. If that day should ever come, mclass math will be the first to go.