
Thursday, February 07, 2013

DRIPs can lead to good conversation

This post is a commentary on the shifts that are needed in schools in order to make the role of a data coach more effective.

The shift that I will comment on is this:
  • Optional data discussions need to become essential parts of teaching and learning
If you aren't familiar with the acronym, DRIP stands for Data Rich, Information Poor. This tends to be a common problem in schools. Schools collect so much data (student achievement, attendance, tardies, discipline, etc.) that before you get results from one data collection, you are already looking to collect a new set. There seems to be no time to disaggregate the data and get to the meaning of it. On the rare occasions that there is a chance to look at the meaning of the numbers, it is often an autopsy with little chance to make mid-course corrections. If this is our only experience looking at data for meaning, we may make the mistake of treating the data as an ending point and just moving on.

I had the chance to attend and present at the ESEA/NCLB conference in Chicago yesterday. At the luncheon, we listened to the keynote address of Dr. Robert Meyer from UW-Madison. He has worked with multiple large urban school districts to develop the value-added model for student achievement. He stated that educator effectiveness can be measured using multiple data sources, including student growth, instructional observations, and student surveys (would this include data from the 5Essentials survey?). He also recommended checking out the Measures of Effective Teaching project from the Gates Foundation.

I think one of the best things stated in this keynote address was his recognition that NCLB fails to recognize that when a student enters your classroom 4 grade levels below expectations and leaves only 2 grade levels below, the teacher has been effective. NCLB only looks at the benchmark score of readiness and, regardless of any growth, if you didn't hit that benchmark, you didn't do anything. The value-added model can correct this error in reporting.

Meyer stated that the model is meant to be teacher-friendly and supportive of new teachers. Essentially, value-added is a statistical model that can separate out external factors that might lead to student growth and narrow in on the effect of the teacher. It takes the results of the previous year's assessment and predicts what the student will score in the current year. The value added comes in when the student scores higher than predicted, thus indicating the effectiveness of the teacher.
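
To make the idea concrete, here is a toy sketch of that predict-then-compare step. This is not Dr. Meyer's actual model (which controls for many more external factors); it just fits a simple line predicting this year's score from last year's and treats the amount a student lands above or below that prediction as "value added". All the scores are invented.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit: returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Hypothetical prior-year and current-year scores for one class
prior = [410, 425, 440, 455, 470, 485]
current = [420, 445, 450, 470, 480, 505]

slope, intercept = fit_line(prior, current)

for p, c in zip(prior, current):
    predicted = slope * p + intercept
    value_added = c - predicted  # positive = grew more than predicted
    print(f"prior={p} actual={c} predicted={predicted:.1f} "
          f"value-added={value_added:+.1f}")
```

A real value-added model would also adjust for attendance, demographics, and other factors outside the teacher's control, but the core logic is this residual: actual score minus predicted score.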

If it will work that way, great...

The other thing that Dr. Meyer stated that was beneficial is that the data report is the beginning of inquiry and not the answer to a question.

This fed right into my presentation about digital classroom walkthroughs. Walkthroughs are a great method of collecting instructional data and providing good, timely information to teachers and buildings, allowing mid-course corrections based on the difference between our perceptions of what goes on in the classroom and what is really happening.

We have used walkthroughs for over 4 years now, and when the information is presented to the staff, they begin asking questions...this is the key! They take what is now known through observations and attempt to make instructional changes in order to get more desirable results. The examination of the data leads to great questions in a non-threatening manner because individual teachers are not named. When we share the data, it is aggregated for the school or department.

Monday, December 24, 2012

Convert from consumer to producer

This post is a commentary on the shifts that are needed in schools in order to make the role of a data coach more effective.

The shift that I will comment on is this:
  • Adults need to focus less on the teaching and more on the learning
In earlier days, I was a teacher of biology. One of my favorite units to teach was on Ecology. I loved to see the surprise in my students' faces when they learned that the study of ecology and the environment was so much more than a recycling program that gets discussed at the middle school level. One of the most dynamic parts of ecology is the study of energy transfer.


(Image: food web diagram, via http://www.flickr.com/photos/naturewise/5060452732/sizes/m/in/photostream/)

My reason for bringing up the food web is to discuss the concept of producers and consumers. A producer, or autotroph, is an organism that can create its own food within its cells via a chemical process (i.e. photosynthesis or chemosynthesis), whereas a consumer, or heterotroph, is an organism that must eat another organism in order to sustain its existence. In the classroom, I would discuss the current carrying capacity of the Earth and how that could be increased if we all acted as lower-order consumers instead of the tertiary or higher consumers in the food web (i.e. top consumers).

Students would then ask if they could be a producer, instead of a consumer. This would indicate to me that we needed to discuss some cellular concepts a little more...

But when we move from an ecological discussion to one of educational preparation, the classroom environment takes on a new look in terms of producers and consumers. Anyone who has been through a teacher preparation program, up through and including the past few years, was taught to have their students be consumers of knowledge in the class.

Teachers, classically, have been taught and prepared to be the smartest person in the room and to provide all of the information to students. Even older teacher evaluation models focus on how well the teacher can impart knowledge upon the class and create a sense of order and control over the young people in their room. The focus has been completely on the teaching in the room and has had very little to do with the learning that is happening by the students. (Take a look at how student grades are entered -- mentioned in my last post.)

Recently, there has been a (needed) change in education: teachers need to guide and facilitate the learning of the students instead of directing it. (The biggest challenge to this is the federal laws and the tests that are required.) If this change can be realized in the classroom, then the students will be able to move from being consumers of knowledge to producers of their own learning. This would allow the focus to shift from the teaching to the learning.

Friday, December 21, 2012

Coaching needs to reduce blame

This post is a commentary on the shifts that are needed in schools in order to make the role of a data coach more effective.

The shift that I will comment on is this:
  • Placing blame needs to shift to inquiries in examining potential solutions
I feel that a lot of this feeling of blame and accusation when looking at data comes from the issue of when we look at the data. Most times, we look at autopsy data. Autopsy data is information that becomes available only after the time to affect any change on the issue has passed (e.g. parent-teacher conferences held after the grading quarter has ended). If we examine data when there is no chance for change, the questions that arise are all about "why did this happen" and "what didn't you do".

Another issue is that while we are bombarded with numbers and data, we don't know where to begin. This causes us to suffer from the DRIPs (Data Rich, Information Poor). Teachers can look no further than their own grade books to begin finding a lot of valuable information.

When I was in the classroom, I know that I would fall into the grading trap of either doing a mad rush of grading at progress report and report card times or, when I was on top of it, just entering the grades in the grade book without examining what the data was telling us. Think of a time when you were entering grades into your grade book...how often did you look at the aggregate? Most times, teachers get so caught up in the individual cell at the intersection of the assignment and the student that they do not look at the entire row (to see if any patterns are developing for that student) or the entire column (to determine if there are patterns developing for the class on that assignment).
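
Stepping back from the cell to the row and the column can be this simple. The sketch below uses a made-up grade book (student names and scores are invented) to compute both views at once:

```python
# Rows are students, columns are assignments (all numbers invented).
gradebook = {
    "Alice":  [88, 92, 40, 95],
    "Brian":  [75, 70, 38, 72],
    "Carmen": [90, 85, 45, 91],
}

def mean(values):
    return sum(values) / len(values)

# The entire row: is a pattern developing for each student?
for student, scores in gradebook.items():
    print(f"{student}: average {mean(scores):.1f}")

# The entire column: is a pattern developing for the class
# on each assignment?
num_assignments = len(next(iter(gradebook.values())))
for i in range(num_assignments):
    column = [scores[i] for scores in gradebook.values()]
    print(f"Assignment {i + 1}: class average {mean(column):.1f}")
```

In this invented data, assignment 3 averages far below the others for every student -- a cue to ask whether the assignment, not the students, was the problem.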

(Looking for some alternatives to traditional grades? You can look at this article on Motivating $tudent$ or De-grading your classroom)

Simple measures of central tendency can illustrate volumes about a particular assignment. If you are asking what "measures of central tendency" means, that is the fancy way of saying average. There are multiple ways to measure this, though. We can take a look at the mean, median, and mode.

The arithmetic mean is commonly known simply as the mean and is what we think of when we discuss average. Simply put, add up all of the numbers and divide the sum by the number of terms. This can give you an idea of how most students performed.

The mode is simply the most repeated term in a set. The mode can help explain a low or high mean and also provide another insight into how students performed.

The median is simply the middle number of a set when the terms are arranged from lowest to highest. If you have a data set that is skewed, this can provide more insight than the mean alone.
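
All three measures can be computed directly with Python's standard library; no spreadsheet formulas required. The scores below are invented for illustration:

```python
import statistics

# Hypothetical scores for one assignment
scores = [62, 70, 70, 75, 78, 81, 85, 70, 90, 95]

print("mean:  ", statistics.mean(scores))    # arithmetic average
print("median:", statistics.median(scores))  # middle value when sorted
print("mode:  ", statistics.mode(scores))    # most repeated score
```

Here the mode (70) sits well below the mean (77.6), which is exactly the kind of gap that should prompt a question about how the class really did on the assignment.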

Even if you hated sadistics (or statistics), these are simple things that can be calculated and provide insight into student achievement. Importantly, these measures can get you or your curricular team asking questions about the assignments and level of understanding of the students. More importantly, these calculations can be done quickly, help provide immediate feedback to students and the class, and allow for change before the autopsy of the report card.

By examining the data as it is entered, blame is reduced because it is live information and the information from the data can help raise questions about how to improve the practices within the classroom.

Avoid the blame game by doing these calculations in your own class. Get comfortable with your own information and then begin working with colleagues. When we can move to a space that is safe and supportive, we can then seek out the help of a teacher who has better or improving student achievement to determine how to improve one's own practices.

Wednesday, December 19, 2012

Being the data coach

I have had the opportunity to attend a 6-day coaching institute from Learning Forward. For those who may not know, Learning Forward is the new name of what was formerly the National Staff Development Council (NSDC). Learningforward.org has some wonderful resources surrounding the latest research in providing professional development and professional learning. (e.g. the newest version of the professional learning standards can be found here.) Each standard begins with the stem "Professional learning that increases educator effectiveness and results for all students...". Within that stem exists the emphasis on professionalism, continual learning, increasing effectiveness, and equity for all students...and that is just the stem!

I digress...

The coaching institute has provided me with the opportunity to reflect on how I have worked with teachers in professional development, evaluations, and even day-to-day interactions. One of the great things I have learned is that in the coaching role, as opposed to the administrative/evaluator role, there is no preconceived answer or solution. The coach is a support and an equal partner in the learning that will occur between the two professionals. While there are times that the administrator is needed, I think leaders should emphasize the coaching aspect more often.

My current position involves looking at all of our data and making it meaningful to those who need it. There are a lot of spreadsheets, equations, and bar graphs. Luckily, I like exploring Excel.

One of the things discussed today in the workshop was the critical shifts that need to happen within an organization in order to make the role of a data coach more effective and successful. They are as follows:
  • Optional data discussions need to become essential parts of teaching and learning
  • Adults need to focus less on the teaching and more on the learning
  • Placing blame needs to shift to inquiries in examining potential solutions
In the next few days, I will elaborate more on each of the bullet points above. How do you accomplish these in your school? Has the shift happened? Has there been a realization that the shift needs to happen?

Monday, November 26, 2012

What is the right number to measure?

Over the long Thanksgiving weekend, I had the chance to watch the movie Moneyball again. If you haven't seen it, the trailer is below:


The movie is based on a book by Michael Lewis, which was based on the real events of the Oakland Athletics, who took an obscure 1964 book about baseball statistics (Percentage Baseball by Earnshaw Cook) to "change the way the game is played". Essentially, the A's wanted to look at statistics and data that would directly result in winning more games by scoring more runs by getting more players on base...it didn't matter how it happened.

Then, this morning, I received a tweet from Brenda Colby to check out an article written by Michael Brick, titled "When 'Grading' Is Degrading". As usual, something from my tweeps got me thinking. :)

NCLB, for better or worse, has radically altered the way that the public views its schools and the ways that schools are being measured as successful (or not). Schools now take a look at their results from previous years and with their practice testing try to accurately predict how students will perform and what 'grade' the school will receive. I read the article, which mirrors multiple conversations I have had online and face to face about the meaning of the test scores, and thought to myself, "What is the right number to measure?"

If you watched the trailer, you heard one of the scouts actually equate the attractiveness of a baseball player's girlfriend with the player's self-confidence and whether that player will make a positive addition to the organization. This seems like a bit of a stretch of logic to me, but the scout, who brings 150 years of past practice, states that this way of doing things is valid and works. It also seems to me that measuring a school's success on a set of test scores is just as illogical. So again, "What is the right number to measure?"

As I was exploring the Brick article, I came across an older article on www.schoolleadership20.com with a powerful statement right at the beginning of it:

"Data itself has no meaning, until it is organized and displayed in charts or graphs that can be interpreted, usually in multiple ways. These interpretations may usefully inform our dialogue, decisions and subsequent actions so data definitely can be valuable, but it often seems to be granted undue reverence simply because it is numerical. Although insight can derive from analysis of data, equally it can arise out of intuition and, in fact, I wonder if some analyses are not actually rationalizations subconsciously imposed on data to justify intuitive speculations." (Beairsto, 2010) (Taken from http://www.schoolleadership20.com/profiles/blogs/don-t-let-data-drive-your-dialogue-by-dr-bruce-beairsto)

As someone who is trying to move from autopsy data to predictive data, I have to ask: are we just seeing what we want to see, or what past experience has told us? I completely agree with Beairsto that we need to triangulate data points to get more meaning and that we must value a qualitative research paradigm to get at the underlying meaning of what the numbers say. Especially as teacher evaluations, under Race to the Top, bring elements of student achievement into a teacher's rating, we need to be able to triangulate our numbers to derive meaning. But, just as FOX and MSNBC can look at the same data and come to completely different conclusions, what will happen when teacher unions and administrators differ on the conclusions from the data?

As we look globally at the success of schools, school districts, and the American education system, what are the right numbers to measure?