Data-driven instruction has become increasingly critical to the educational process for accountability purposes, as the federal accountability standard established through No Child Left Behind, known as Adequate Yearly Progress (AYP), has reached percentages that more than 50% of districts and campuses in some states currently cannot meet. For example, a press release from the Florida Department of Education dated June 30, 2011 states that only 325 of 3,063 schools made AYP (http://www.fldoe.org/news/2011/2011_06_30.asp). Texas public schools had similar results: only 27% of districts and 44% of all schools met AYP in 2012, according to a preliminary report posted on the Texas Education Agency web page (http://ritter.tea.state.tx.us/ayp/2012/summaries12.pdf). It is gratifying that data-driven instruction is finally receiving the attention and focus it has long deserved; it was an important component of successful schools and districts long before this national attention. However, the current focus on accountability and on a single summative evaluative criterion, known in many circles as state-mandated, high-stakes testing, has created a system in which many valuable data sources and structures are virtually ignored. The purpose of this paper is to explore available data sources that are often ignored by schools and to describe a fundamental plan that schools can use to inform classroom instruction, by examining the processes currently used by a public school in Florida.
School Demographic Information
The school I explored focuses on fine arts and serves students in grades six through twelve. The student population of 753 is predominantly Caucasian by race, with 571 students so identified. It is interesting to note, however, that 454 of the 753 students are identified as Hispanic/Latino by ethnicity, a category reported separately from race. Other basic demographic information of interest: the school has approximately twice as many girls as boys; 56% of the students are identified as economically disadvantaged because they qualify for free or reduced-price meals; and despite the large Hispanic/Latino population, only 23 students are identified as having limited English proficiency. Finally, the American Indian, African American, and Asian races are evenly represented, each accounting for approximately 10 to 11% of the total school population.
Current School Data Analysis by Campus
This school exhibits the tendency of many schools in that its data analysis focuses on the summative data generated by the Florida Comprehensive Assessment Test (FCAT). The campus results in Reading indicate that the three greatest areas of need are the African American group, the American Indian group, and the Economically Disadvantaged group. As mentioned in the demographic summary, the largest of these three groups is the Economically Disadvantaged group, as over 50% of the campus population falls into this category. The campus does analyze the results by subtest: one of the art teachers told me that the campus targeted writing as the greatest need this year, so her art curriculum now includes more written work, such as reports about famous artists.
Current Art Teacher Data Analysis
The evaluative techniques utilized by the art teacher consist of rubrics, critiques, and self-evaluations. The rubrics always have seven categories, each scored from one to four. Typical rubric criteria are, “The overall picture is taken artistically” and “Artist’s critique offered insight to the meaning and purpose behind their photograph.” The critique has 12 questions, including items such as, “What stands out the most when you see the picture for the first time?” and “What leads your eye around from place to place?” The self-evaluation has seven questions, including items such as, “Execution: Does the piece reflect the artist’s original intent and meaning with their final work?” and “Development of idea: Did the artist use adequate time to sketch, plan and develop the concept and theme of their piece?”
A Comprehensive Data Analysis Plan
A comprehensive data analysis plan includes far more than analysis of the state assessment data. The state assessment data may identify potential areas of concern, but that data represents only one piece of the puzzle. Frequently the test results reflect a symptom without addressing the causes of the problem. For example, we know from the basic analysis of the testing data that the campus has an issue with Economically Disadvantaged students. However, we have not established the causes underlying this issue. A true data analysis plan goes beyond the symptoms and identifies the true causes of the issue, because without identifying those causes the instructional approaches cannot be appropriately adjusted.
Five pieces of information that can be collected to help find the true causes of an issue include: (1) student attendance and discipline patterns; (2) the student’s social, emotional, and physical needs; (3) the educational expectations of the student’s parents/guardians; (4) the student’s interests, strengths, and weaknesses; and (5) current school interventions intended to improve student performance. Much of this information, or information related to it, can easily be gathered by the art teacher, and art offers a reasonably safe atmosphere in which students can share personal feelings.
Five art lessons that can help teachers gather this information include: (1) asking students to create a project that portrays both their likes and dislikes; (2) asking students to create a project that best represents them as a person; (3) asking students to create a project that portrays what their parents/guardians view as important; (4) asking students to create a project that portrays their favorite class; and (5) asking students to create a project that portrays what they like least about school. Gathering information of this nature helps teachers develop logical hypotheses as to what instructional adjustments are most likely to create the intended outcomes.
Intended outcomes are produced through a systematic approach. Many systematic approaches have been developed, but all of them contain some combination of data gathering, data analysis, development of hypotheses, testing of the hypotheses, and revision of the hypotheses based on testing results. The keys to any quality approach are: (1) making sure that the data gathered are both valid and reliable; (2) making sure that any hypothesis includes observable behaviors and/or expectations that can be used to determine whether the hypothesis is supported or rejected; and (3) making sure that data are collected and reviewed on a consistent schedule.
The most common mistake in any data plan is that the data collected do not truly measure what they are intended to measure. This happens when questions and/or projects are not responded to as expected. For example, if I assign a project that is supposed to help me determine whether a student will be successful on the state reading test, and the student receives an “A” on the project but fails the state reading test, then the data I gathered are not a good predictor of performance on the state exam. If I modify my instruction to help students who are unsuccessful on my project achieve success, with the intention that this will improve their performance on the reading test, then my instruction will be completely off target, because there is only a low positive correlation between performance on my project and performance on the test.
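The correlation check described above can be sketched numerically. The function and all scores below are invented for illustration; nothing here comes from the school's actual data, and in practice a teacher would use a spreadsheet or statistics package rather than hand-written code.

```python
def pearson_correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented sample data: classroom project grades (0-100) paired with
# each student's state reading scale score.
project_scores = [95, 88, 92, 70, 85, 60, 78, 90]
state_scores = [210, 305, 240, 280, 225, 290, 260, 230]

r = pearson_correlation(project_scores, state_scores)
print(round(r, 2))  # a value near zero (or negative) means the project
                    # is a poor predictor of state test performance
```

A coefficient near +1 would justify using the project as an early-warning indicator; a coefficient near zero, as in the mistaken scenario described above, signals that instruction adjusted around the project will likely miss the test.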
The next most common mistake in any data plan is that the expected outcomes of a project are not easily measurable, or require too large a change in behavior before the project can be judged successful. For example, suppose I write as my expected outcome that a student will improve his or her reading skills, but I do not have a measurable item, such as reading speed or vocabulary development, with both a baseline number and a specific, consistent method for measuring reading speed or the number of vocabulary words mastered. Then my data are based on subjective feelings. My objectives should also include both short-term and long-term goals. My long-term goal could be that the student will double the number of currently known vocabulary words within a subject. My short-term goal could be that every three weeks the student will increase his or her currently known vocabulary words within that subject by ten words.
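The baseline-and-increment goal above can be made concrete with a small sketch. The function, the three-week schedule, and the student measurements are all invented for illustration under the assumptions stated in the text (baseline of known words, ten new words per three-week cycle, long-term goal of doubling the baseline).

```python
def on_track(baseline, long_term_goal, words_per_cycle, measurements):
    """For each periodic measurement, check whether the student's known
    vocabulary count meets the short-term target for that cycle.
    Targets never exceed the long-term goal."""
    results = []
    for cycle, words_known in enumerate(measurements, start=1):
        target = min(baseline + words_per_cycle * cycle, long_term_goal)
        results.append(words_known >= target)
    return results

# Invented example: a student starts knowing 40 subject-area words; the
# long-term goal is to double that (80), adding 10 words every three weeks.
checks = on_track(baseline=40, long_term_goal=80, words_per_cycle=10,
                  measurements=[52, 58, 71, 82])
print(checks)  # [True, False, True, True]
```

The False in the second cycle is exactly the kind of early, objective signal the paper argues for: it prompts an instructional adjustment months before the annual state test would reveal a problem.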
Finally, a third common mistake in data plans is that data are not collected with sufficient frequency. For example, if I collect data only once each year, when the state test is administered, then I am not giving myself the opportunity to modify and adjust my approach, because I have not gathered formative data to measure progress on a regular basis. Data collection needs to be frequent enough to ensure ample opportunities to monitor and adjust, without over-evaluating. The actual number of formative data events depends on the variables of the item being measured, the perceived gap between the student’s current level and the desired level, and the amount of time required for the intervention to have been implemented enough times to have an impact. The typical interval between formative data events ranges from one week to a maximum of three weeks. If this frequency appears excessive to the reader, it is usually because short-term objectives have not been identified.