Interim Assessments Predict Student Performance on State Tests
- Grades: 3–5, 6–8, 9–12
If you are like me, as soon as you get your state test results, the first thing you want to know is whether your students made adequate yearly progress (AYP). Then you look for who made the cutoff and who did not. This is all significant and interesting information, but it has limited value. By the time I get my state test data, my students have moved on. I can use the data to identify gaps in my curriculum, but the following year I will have a new cohort with different needs and interests, so this, too, is limiting. If you want data in time to assess student weaknesses, provide intervention, and improve performance on upcoming state assessments, interim assessments should do the trick.
What Are Interim Assessments?
According to the education policy organization Achieve, interim assessments serve an instructive, evaluative, and predictive role in education. They are district-level assessments that teachers can use to assess students’ mastery of skills and drive instruction, but they are also used at the administrative level to make district decisions about curriculum and policies. In contrast to state assessments, interim assessments are not used to evaluate teacher performance. If designed correctly, they can serve as a tool to predict student performance on state assessments. Achieve has published a beginner's guide to interim assessments, “The Role of Interim Assessments: A Policy Brief.” I suggest anyone interested in interim assessments read this document.
How Many Interim Assessments per Year?
In his book, Driven by Data, Paul Bambrick-Santoyo suggests that interim assessments be implemented quarterly or, better yet, every six to eight weeks to provide time to address student needs and instructional gaps.
Where Do I Start?
You can purchase predesigned interim assessments, but in my opinion, the most effective assessments are the ones teachers design themselves: you know exactly what is on the assessment, which eliminates any guessing games. Ideally, you should design the assessments before the school year begins. Each question should align to a learning standard and to a skill or understanding taught during that interim period. In addition, the assessments should be cumulative, meaning you should also add spiral-review questions to assess retention of content taught earlier in the year.
According to Bambrick-Santoyo, you should start the year with the state assessment or final exam goals in mind. First identify the knowledge and skills the students must master, and list them in the sequence you plan to teach them. Then decide how many interim assessments you will implement, and divide the sequenced, standards-based skills and understandings among them. For example, I have a 40-week school year. If I decide to implement one interim assessment every 8 weeks, I would sequence the skills and divide them into five assessments. My learning experiences would be designed to focus on the skills my students must learn in order to succeed on the first interim assessment.
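The planning step above can be sketched in a few lines of code. This is a minimal illustration, not part of Bambrick-Santoyo's method; the skill names and counts are hypothetical placeholders.

```python
# A sketch of the planning arithmetic described above: sequence the
# skills for the year, then split them evenly across the planned number
# of interim assessments, preserving the teaching order.

def plan_interims(skills, num_assessments):
    """Split an ordered list of skills into roughly equal blocks,
    one block per interim assessment."""
    blocks = [[] for _ in range(num_assessments)]
    for i, skill in enumerate(skills):
        # Earlier skills map to earlier assessments.
        blocks[i * num_assessments // len(skills)].append(skill)
    return blocks

# 40-week year, one assessment every 8 weeks -> 5 assessments.
skills = [f"Skill {n}" for n in range(1, 11)]  # 10 sequenced skills
for week, block in zip(range(8, 41, 8), plan_interims(skills, 40 // 8)):
    print(f"Week {week}: {block}")
```

Each printed block lists the skills the learning experiences of that interim period would target.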
If you want to get your feet wet with the data process now to help you better prepare for next year, try what I am doing. It is too late for me to design my interim assessments for this school year, so I decided to use a modified state assessment targeting the skills that I have taught up to this point. I know the questions are standardized, and I can use the NYSED item analysis charts to quickly identify the state learning standard for each question and analyze the data. This will reveal areas of weakness and instructional gaps in time to intervene before the state assessments. However, this is not the ideal approach, because NYSED has changed my 6th grade test's structure, time allotment, and complexity of tasks three times in the last three years. It is my understanding that the 2013 assessments will be changed to align with the Common Core. Currently there is access to NYSED test archives up to 2010, but these are outdated. In this regard, building interim assessments to predict student performance on state tests is a challenge.
Designing Interim Assessment Questions
Most interim assessment questions reflect those of the state test: multiple choice, short answer, and extended response. However, if your district goal is to align curriculum across departments and grade levels, the questions must be standardized. Some multiple-choice questions require higher-level thinking than others, so there must be consistency among all teachers who design questions to ensure the questions are of equal rigor. It is best to replicate the question types, format, and time frames of the state assessments. Michigan State University has a library of resources for designing rigorous, standardized multiple-choice questions. When you design the questions, take the time to link each question to the learning standard and the skill or understanding you are assessing. This is the information you need to drive your instruction and identify students’ strengths and weaknesses.
How Do I Analyze the Data?
Evaluation and analysis should be done within 48 hours. I am working with my colleagues to see if we can use response-system clickers for the multiple-choice sections to save time grading the assessments. Multiple-choice questions are easy to assess, but what about written responses?
I have put a lot of thought into written responses. If my student scores a 20 out of 30 on an essay, the score alone doesn’t tell me the student’s area of weakness in writing. The easiest and least time-consuming approach I have found is to assign points to each skill or criterion. It's similar to a rubric, except the skills are very specific. For example, if the paragraph requires three details from two sources, I assign five points for each text-based, relevant detail from each source. Depending on the assignment, I may also require students to cite the source for five points. This way, when I analyze the data, I can identify each student’s areas of strength and weakness in writing.
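The skill-specific scoring idea above can be sketched as a small function. The criteria names and point values here are illustrative assumptions loosely based on the example task (details drawn from two sources, plus a citation requirement), not a fixed rubric.

```python
# A sketch of skill-specific essay scoring: each criterion has a maximum
# point value, and any criterion where points were lost is flagged as a
# weakness. Criteria names and point values are hypothetical.

ESSAY_CRITERIA = {
    "detail_source_1": 5,  # relevant text-based detail from source 1
    "detail_source_2": 5,  # relevant text-based detail from source 2
    "cites_source": 5,     # cites the source of the details
}

def score_essay(earned):
    """earned: criterion -> points awarded.
    Returns (total score, list of criteria where points were lost)."""
    total = sum(earned.get(c, 0) for c in ESSAY_CRITERIA)
    weaknesses = [c for c, max_pts in ESSAY_CRITERIA.items()
                  if earned.get(c, 0) < max_pts]
    return total, weaknesses

total, weak = score_essay({"detail_source_1": 5,
                           "detail_source_2": 3,
                           "cites_source": 0})
print(total, weak)  # 8 ['detail_source_2', 'cites_source']
```

Unlike a single holistic score, the flagged list tells you which writing skill to reteach.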
I use Microsoft Excel spreadsheets to analyze the data. Feel free to download a formatted spreadsheet. Plug in your student names and save it before entering data so you have a time-saving template for future use. I record all my students’ responses to each question. Each question is linked to a CCSS and to the skill or understanding it assesses. This information is necessary when analyzing student strengths and weaknesses.
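The spreadsheet analysis described above amounts to an item analysis: tag each question with a standard, record each response as right or wrong, and compute the class percent correct per standard. Here is a minimal sketch of that calculation; the standard codes and student data are hypothetical.

```python
# A sketch of item analysis: aggregate right/wrong responses by the
# standard each question is tagged with, and report percent correct
# per standard so weak standards stand out.

def item_analysis(question_standards, responses):
    """question_standards: question -> standard code.
    responses: student -> {question: 1 (correct) or 0 (incorrect)}.
    Returns standard -> percent of responses correct."""
    correct, total = {}, {}
    for answers in responses.values():
        for question, score in answers.items():
            std = question_standards[question]
            correct[std] = correct.get(std, 0) + score
            total[std] = total.get(std, 0) + 1
    return {std: 100 * correct[std] / total[std] for std in total}

question_standards = {"Q1": "RL.6.1", "Q2": "RL.6.1", "Q3": "RL.6.4"}
responses = {
    "Student A": {"Q1": 1, "Q2": 1, "Q3": 0},
    "Student B": {"Q1": 1, "Q2": 0, "Q3": 0},
}
print(item_analysis(question_standards, responses))
# {'RL.6.1': 75.0, 'RL.6.4': 0.0} -> RL.6.4 needs reteaching
```

The same grouping can be done in Excel with a pivot table over the question-to-standard column.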
Using Data to Drive Instruction
All the work thus far is for nothing if we don’t use the information to improve instruction. Ten-minute mini-lessons can address significant gaps. Daily reviews or bell ringers target skills that need reinforcement. Scholastic’s Daily Starter offers daily reviews for language and math by grade range. If only a few students are confused, you can work with them in a conference or share the information with the teacher who provides Academic Intervention Services (AIS), which gives them the opportunity to get support in small groups. If a student has an Individualized Education Program (IEP), suggest the skill as an IEP goal.
It is easy to get caught up in data and state assessments. Successful teachers find a balance between assessments and rich classroom learning experiences, providing opportunities for students to express their understandings using their unique gifts. In the Scholastic Administrator article “Secrets of Success: What Top-Performing Schools Have in Common,” Karin Chenoweth discusses how successful schools balance rich curriculum, state testing, and data.
If you are using interim assessments, please share your experiences below.