by Dr. Bryan Drost & Tricia Ebner
As Ohio’s statewide testing window comes to a close, it’s a good time to consider the purposes and uses of Ohio’s assessments. While we realize there is plenty of frustration surrounding testing, we are also aware of a number of misconceptions about these assessments. We hope this entry eases that frustration by giving you accurate information about our accountability system, so you can continue to focus on the true purpose of the assessments: documenting students’ learning progress.
State assessments are diagnostic in nature. Honestly, the state assessments are summative. They assess students’ skills and understanding of the standards for the school year. This is one reason they are administered in April and May: teachers need as much time as can reasonably be provided to work with their students on the year’s learning.
That said, the assessments can certainly be used in a diagnostic way by next year’s teachers. As a teacher, I use the assessment results in two ways. First, I reflect on my practice over the previous school year and look for gaps in students’ performance. If those gaps seem widespread, I know I may need to examine how well I addressed those particular standards. This becomes a revision point for my instruction in the coming school year. Second, I look at my incoming students’ results to get a snapshot of their skills and understanding as of April and May of the previous school year. I use this as one of many data points to help determine my starting points for instruction in the coming year. Those are certainly diagnostic uses I have in the fall, looking back at the previous spring’s assessments. For the current school year, however, the assessments are summative.
Kids do better on paper-pencil testing. Over the past year, there has been considerable discussion about paper-pencil versus electronic testing and any perceived differences between the formats. Misconception alert: research on Ohio’s testing system (not New York’s, California’s, etc.) has shown that although there are some small differences at various grade levels, overall kids do just as well on paper as they do on the computer. http://oh.portal.airast.org/ocba/wp-content/uploads/OST_Spring_2016_Mode_Comparability_Report.pdf
In some cases, research has actually found the reverse to be true: “Mode constants identified in the lower grade math assessments indicated that math tests administered online were somewhat easier than when administered on paper.”
The data collected on students is being sold. The answer to this one is simple: no, it isn’t. Selling information on students is against Ohio law. In fact, the Ohio Department of Education doesn’t even see students’ names when testing information is collected. This is why students must log into the assessment portals with their SSID, a numerical code. Administrators are not permitted to share student names with ODE; doing so is also a violation of state law. For example, when I have had data appeals, I am permitted to share only a student’s last name or their SSID in communications, never both.
The test changes significantly when the vendor changes. This misconception was at its height when Ohio moved from the PARCC assessment to AIR, and it still seems to circulate as value-added data comes back into play next year with the expiration of Safe Harbor. To see the problem with this misconception, it’s important to understand how an assessment is constructed. A blueprint is developed based on the standards and given to the assessment vendor. The blueprint specifically identifies the skills and understanding to be assessed in relation to the standards. This means the vendor crafting the assessment isn’t going to make a significant difference in the kinds of items, skills, or understanding assessed, unless the standards change. Think of it this way: cities have standards for the types of houses that may be built; when a future homeowner purchases a blueprint from an architect, the blueprint meets those standards. Change the contractor, and the house will still look extremely similar to any other home built from the same blueprint. In other words, if Ohio were to drop AIR this year, we would still have a similar blueprint because the standards have not changed.
The writing of Ohio’s tests is secret, done by people in back rooms with trenchcoats and fedoras. Nothing could be further from the truth! Each year, ODE, in conjunction with AIR, assembles a team of teachers, administrators, and other educators who write draft assessment questions. These questions are then scrutinized many times. After a set of questions is approved by the Content Advisory Committee, they are sent to the Sensitivity and Fairness group, where discussion ensues regarding bias, appropriateness for testing, and accessibility for all students. Any questions that do not meet this group’s strict criteria are thrown out. From there, questions must be field tested; after data is collected on the questions, the committees meet again to ensure the questions don’t carry unintended bias. Because of all these steps, it can take upwards of two years for a question to appear on an exam. In other words, the questions the team is writing this year won’t appear for at least another two years. All of Ohio’s questions must meet high psychometric standards as well as Ohio’s guidelines for sensitivity and fairness.
Sometimes it’s easy to fall into these misconceptions, especially while we anxiously await this year’s results. Keeping these five facts in mind can help us focus more clearly on the real goal of Ohio’s assessment system: documenting our students’ learning progress.