When I look at tests I have written, I know that I reviewed what was discussed in the unit (which was based on objectives aligned with the learning standards) and wrote questions accordingly. I attempted to have question frequency represent the importance of the topic in relation to the unit and the entire year, as well as the time spent in class discussing and exploring the topic. (One noted exception was Hardy-Weinberg equilibrium, but I told students about that ahead of time. We spent over a week learning how to solve these problems, but there were only 2 questions on the unit exam. This was mainly due to my desire to prep the students for Advanced Placement Biology, and I received positive feedback from both the students and the teacher about this practice...but I digress.)
Recently, I was looking at a rating system for an intervention model in a grant application that our district is going to pursue. The committee members would look at particular characteristics and then, based on the characteristics selected, determine which intervention model is the best fit. Because I have been on various committees like this and know that some people, including myself, are a little mouthy and push their ideas onto others, I wanted a way to allow everyone to express their own ideas and then look at the group data to protect the integrity of everyone's voice. As I did this, I wrote some equations into the Excel spreadsheet that would, based on the characteristics selected, determine the percent alignment with each model.
While this is an isolated event, the concept can be expanded to looking at standards and objective-based assessments as opposed to nugget-based. Previously, I wrote a post about providing students with more immediate feedback using Google forms and included some "coding" instructions. I want to expand on that here...
Before, I was writing about grading the entire assessment based on total points. If the questions are written to reflect one specific objective/topic, then the questions can be coded as such (overtly done in my example below), and the scoring can then be adjusted to reflect topic/standard-specific questions and their level of mastery. Take a look at the sample below:
You can see that in the 2nd line of each question, I have included an Objective number. Using these, we can select questions to score by objective and get a mastery level based on those particular questions.
What you will need to do is develop your quiz in Google Docs and then enter the answer key as the first entry in the spreadsheet. When you look at the spreadsheet, you will see that the questions/column headers are in the first row. After the questions, I added the column headers Objective 1 and Objective 2.
The grading formula for Objective 1 is seen below.
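In case the screenshot doesn't come through, here is a reconstruction of what the Objective 1 formula in cell H2 looks like. It assumes the layout described in this post: the answer key in row 2, and the Objective 1 items in columns B, E, and F.

```
=(IF(B2=$B$2,1,0)+IF(E2=$E$2,1,0)+IF(F2=$F$2,1,0))/3
```

Each IF compares a response against the answer key and scores it 1 or 0; dividing the sum by 3 turns the total into a mastery fraction for that objective.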
Essentially, the formula is telling the spreadsheet: if the entry in cell B2 matches the answer key in $B$2, give it a score of 1; if not, a score of 0. Now, the difference between B2 and $B$2 is that when you drag this formula down the spreadsheet for every entry, the B2 will change to B3, B4, B5, etc. for each subsequent entry; the $ in front of the cell letter and number makes the reference static, so it will not change when the formula is dragged. (The same is true for the other answer columns.)
Since we are looking for mastery of Objective 1, I made the scoring formula reflect only the assessment items that were coded for that objective. On the Google form, they are items 1, 4, & 5, which correspond to columns B, E, and F in the spreadsheet. In order to develop a mastery level, the spreadsheet will take the score for each of those items and then divide by 3 because that is the total number of items for this objective. The color coding occurs with some simple conditional formatting. If the numerical value in the Objective 1 column (H) is greater than .5, the background will become green. Likewise, if less than .5, it will become red. Because there are three items, it is impossible to get an answer of .5, which is why I made it the scoring differentiator. For this instance, we will define mastery as 67% or 2 out of 3 questions per objective. In order to get the conditional formatting, you can either right-click in the particular cell or look under the "Format" menu.
The same was done for the Objective 2 column (I), but the scoring formula was adjusted to reflect only the objective 2 questions (2, 3, & 6 or columns C, D, and G, respectively).
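Following the same pattern, the Objective 2 formula in cell I2 would look something like this (again assuming the answer key is in row 2, with the Objective 2 items in columns C, D, and G):

```
=(IF(C2=$C$2,1,0)+IF(D2=$D$2,1,0)+IF(G2=$G$2,1,0))/3
```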
Once students have taken the assessment, you can drag the contents of cells H2 and I2 down for each entry on the spreadsheet. Because of the "$" in the scoring formula, only the references without the "$" will change to match the subsequent entry lines. The conditional formatting will be dragged down to the subsequent cells, too! Unfortunately, you cannot drag the contents before students have made their entries on the form. If you try the assessment, try dragging it yourself by clicking on the spreadsheet link!
This can be extended to more questions per objective and more objectives. This can also be expanded to include question types of "Choose all that apply" (using the Check Box option on Google forms), but this will require a little more work.
Explore and play.
If you have any questions, feel free to contact me via the blog comments or on Twitter (@misterabrams).